How to do a lazy create and set with AtomicReference in a safe and efficient manner?











I'm looking to lazily create something and cache the result as an optimization. Is the code below safe and efficient, or is there a better way to do this? Is a compare-and-set loop needed here?



...
AtomicReference<V> fCachedValue = new AtomicReference<>();

public V getLazy() {
    V result = fCachedValue.get();
    if (result == null) {
        result = costlyIdempotentOperation();
        fCachedValue.set(result);
    }
    return result;
}


Edit: the value set here by costlyIdempotentOperation() would always be the same, no matter which thread called it.










      java concurrency java.util.concurrent






asked Nov 20 '13 at 4:11 by marathon, edited Nov 20 '13 at 4:24
























          6 Answers

















          Accepted answer (score 10), by rolfl, answered Nov 20 '13 at 4:17, edited Nov 20 '13 at 4:23
          That is not a great approach. The problem is that two threads may both find result == null, and both will then set fCachedValue to their own newly created value.



          You want to use the compareAndSet(...) method:



          AtomicReference<V> fCachedValue = new AtomicReference<>();

          public V getLazy() {
              V result = fCachedValue.get();
              if (result == null) {
                  result = costlyIdempotentOperation();
                  if (!fCachedValue.compareAndSet(null, result)) {
                      return fCachedValue.get();
                  }
              }
              return result;
          }


          If multiple threads get into the method before it has been initialized, they may all try to create the large result instance. Each will create its own version, but the first one to complete the process is the one that gets to store its result in the AtomicReference. The other threads finish their work, discard their own result, and instead use the instance created by the 'winner'.






          • If the costlyIdempotentOperation() always returns the exact same instance value (must be synchronized or something) then I would still recommend my suggested approach. It has the same end result, but guarantees that if, in the future, your costlyIdempotentOperation method changes, you will still only ever get exactly one, and only one instance of the result back.
            – rolfl
            Nov 20 '13 at 4:27










          • good point. thanks.
            – marathon
            Nov 20 '13 at 5:39










          • this is correct, but it is overly complex, there is no need for the multiple checks and conditionals.
            – Jarrod Roberson
            Nov 20 at 1:11


















          Answer (score 3), by Andrey Chaschev, answered Nov 20 '13 at 9:01
          For a similar purpose I implemented an OnceEnteredCallable, which returns a ListenableFuture for the result. The advantage is that the other threads are not blocked and the costly operation is called only once.



          Usage (requires Guava):



          Callable<V> costlyIdempotentOperation = new Callable<V>() {...};

          // this blocks only the thread that executes the callable
          ListenableFuture<V> future = new OnceEnteredCallable<V>().runOnce(costlyIdempotentOperation);

          // this blocks all the threads and sets the reference
          fCachedValue.set(future.get());

          // this sets the reference once the computation completes (Java 8 syntax);
          // Guava's Futures.getUnchecked avoids handling the checked exceptions of future.get() inside the Runnable
          future.addListener(() -> fCachedValue.set(Futures.getUnchecked(future)), executorService);

































            Answer (score 3), by Bass, answered Nov 17 '17 at 15:12
            This expands on @TwoThe's answer about how AtomicReference<Future<V>> may be used.



            Basically, if you don't mind having (a little bit more expensive) synchronized sections in your code, the easiest (and the most readable) solution would be to use the Double-checked Locking idiom (with volatile).
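            For reference, a minimal sketch of that double-checked locking idiom (the standard pattern, with names mirroring the question rather than any particular answer's code) might look like this:

            import java.util.concurrent.Callable;

            class DoubleCheckedLazy<V> {
                private final Callable<V> costlyIdempotentOperation;
                private volatile V cachedValue;            // volatile is what makes the idiom safe

                DoubleCheckedLazy(Callable<V> costlyIdempotentOperation) {
                    this.costlyIdempotentOperation = costlyIdempotentOperation;
                }

                V get() throws Exception {
                    V result = cachedValue;                // first, unsynchronized check
                    if (result == null) {
                        synchronized (this) {
                            result = cachedValue;          // second check, under the lock
                            if (result == null) {
                                result = costlyIdempotentOperation.call();
                                cachedValue = result;
                            }
                        }
                    }
                    return result;
                }
            }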



            If you still want to utilize the CAS (this is what the whole family of Atomic* types is about), you have to use AtomicReference<Future<V>>, not AtomicReference<V> (or you may end up having multiple threads computing the same expensive value).



            But here's another catch: you may obtain a valid Future<V> instance and share it between multiple threads, but the instance itself may be unusable because your costly computation may have failed. This leads us to the need to re-set the atomic reference we have (fCachedValue.set(null)) in some or all exceptional situations.



            The above implies that it's no longer sufficient to call fCachedValue.compareAndSet(null, new FutureTask(...)) once -- you'll have to atomically test whether the reference contains a non-null value and re-initialize it if necessary (on each invocation). Luckily, the AtomicReference class has the getAndUpdate(...) method which merely invokes compareAndSet(...) in a loop. So the resulting code might look like this:



            class ConcurrentLazy<V> implements Callable<V> {
                private final AtomicReference<Future<V>> fCachedValue = new AtomicReference<>();

                private final Callable<V> callable;

                public ConcurrentLazy(final Callable<V> callable) {
                    this.callable = callable;
                }

                /**
                 * {@inheritDoc}
                 *
                 * @throws Error if thrown by the underlying callable task.
                 * @throws RuntimeException if thrown by the underlying callable task,
                 *         or the task throws a checked exception,
                 *         or the task is interrupted (in this last case, it's the
                 *         client's responsibility to process the cause of the
                 *         exception).
                 * @see Callable#call()
                 */
                @Override
                public V call() {
                    final RunnableFuture<V> newTask = new FutureTask<>(this.callable);
                    final Future<V> oldTask = this.fCachedValue.getAndUpdate(f -> {
                        /*
                         * If the atomic reference is un-initialised or reset,
                         * set it to the new task. Otherwise, return the
                         * previous (running or completed) task.
                         */
                        return f == null ? newTask : f;
                    });

                    if (oldTask == null) {
                        /*
                         * Compute the new value on the current thread.
                         */
                        newTask.run();
                    }

                    try {
                        return (oldTask == null ? newTask : oldTask).get();
                    } catch (final ExecutionException ee) {
                        /*
                         * Re-set the reference.
                         */
                        this.fCachedValue.set(null);

                        final Throwable cause = ee.getCause();
                        if (cause instanceof Error) {
                            throw (Error) cause;
                        }
                        throw toUnchecked(cause);
                    } catch (final InterruptedException ie) {
                        /*
                         * Re-set the reference.
                         */
                        this.fCachedValue.set(null);

                        /*
                         * It's the client's responsibility to check the cause.
                         */
                        throw new RuntimeException(ie);
                    }
                }

                private static RuntimeException toUnchecked(final Throwable t) {
                    return t instanceof RuntimeException ? (RuntimeException) t : new RuntimeException(t);
                }
            }


            P. S. You might also want to take a look at the CompletableFuture class.
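            As a rough sketch of that suggestion (assumed usage, not part of the original answer), a CompletableFuture can play the same role as the FutureTask above. For brevity this version does not reset the reference on failure, unlike ConcurrentLazy:

            import java.util.concurrent.CompletableFuture;
            import java.util.concurrent.atomic.AtomicReference;
            import java.util.function.Supplier;

            class CompletableLazy<V> {
                private final AtomicReference<CompletableFuture<V>> ref = new AtomicReference<>();
                private final Supplier<V> supplier;

                CompletableLazy(Supplier<V> supplier) {
                    this.supplier = supplier;
                }

                V get() {
                    CompletableFuture<V> existing = ref.get();
                    if (existing == null) {
                        CompletableFuture<V> created = new CompletableFuture<>();
                        if (ref.compareAndSet(null, created)) {
                            try {
                                created.complete(supplier.get());   // the winning thread computes the value
                            } catch (RuntimeException e) {
                                created.completeExceptionally(e);   // failure is sticky in this sketch
                            }
                        }
                        existing = ref.get();                       // winner or loser, read the published future
                    }
                    return existing.join();                         // join() wraps failures in CompletionException
                }
            }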



























            • There is no reason for the synchronized blocks and all this added useless complexity. See my answer for why.
              – Jarrod Roberson
              Nov 20 at 1:10


















            Answer (score 2)
            You can properly double-check before you run the costly operation by using a secondary AtomicBoolean, like this:



            AtomicReference<V> fCachedValue = new AtomicReference<>();
            AtomicBoolean inProgress = new AtomicBoolean(false);

            public V getLazy() {
                V result = fCachedValue.get();
                if (result == null) {
                    if (inProgress.compareAndSet(false, true)) {
                        result = costlyIdempotentOperation();
                        fCachedValue.set(result);
                        notifyAllSleepers();
                    } else {
                        while ((result = fCachedValue.get()) == null) {
                            awaitResultOfSet(); // block and sleep until the above is done
                        }
                    }
                }
                return result;
            }


            Even though this won't stop threads from blocking while the value is not yet set, it at least guarantees that the calculation is done only once, and blocking means the CPU stays available for other tasks. Note, however, that with plain wait/notify this can leave a thread stuck if the first thread notifies before the other one starts waiting. You can either use wait(T_MS) with a timeout or a more sophisticated tool such as AtomicReference<Future<V>>.
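            The notifyAllSleepers() and awaitResultOfSet() helpers are not shown in the answer. One assumed way to implement them without the missed-notification race mentioned in the comments is a CountDownLatch, because await() returns immediately once the latch has already been counted down:

            // Hypothetical helper implementations (not part of the original answer),
            // sketched with java.util.concurrent.CountDownLatch.
            private final CountDownLatch resultReady = new CountDownLatch(1);

            private void notifyAllSleepers() {
                resultReady.countDown();              // wakes every current and future waiter
            }

            private void awaitResultOfSet() {
                try {
                    resultReady.await();              // returns at once if countDown() already happened
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // restore the flag; the caller's loop re-checks the value
                }
            }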



























            • I've found this to be the most complete answer around about this topic. I've created a small library made up of three classes you can grab here.
              – Francesco Menzani
              Jul 7 at 17:56












            • There is no reason for all this added complexity. See my answer for why.
              – Jarrod Roberson
              Nov 20 at 1:11












            • Doesn't even show the implementation of the notify and await methods, which of course are key here. Has a race condition which may result in some threads awaiting forever unless timeouts are used...
              – BeeOnRope
              Nov 20 at 4:01


















            Answer (score 1)
            As @rolfl points out himself, under a CAS-based approach multiple threads might create their own instances of result, which is supposedly costly.



            A well-known solution is to use the lock-based lazy initialization pattern. It uses a single lock and handles exceptions thrown while holding the lock cleanly, so, correctly applied, this approach is free of most of the complexities associated with locking.
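            A minimal sketch of that pattern (a plain synchronized getter with illustrative names, not code from the answer) could look like this:

            import java.util.concurrent.Callable;

            class LockedLazy<V> {
                private final Callable<V> supplier;
                private V value;                          // guarded by the intrinsic lock on "this"

                LockedLazy(Callable<V> supplier) {
                    this.supplier = supplier;
                }

                synchronized V get() throws Exception {
                    if (value == null) {
                        value = supplier.call();          // if this throws, value stays null and a later call retries
                    }
                    return value;
                }
            }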


































              Answer (score 1)
              You just need a synchronized block and a second null check inside it.



              AtomicReference<V> fCachedValue = new AtomicReference<>();
              private final Object forSettingCachedVal = new Object();

              public V getLazy() {
                  V result = fCachedValue.get();
                  if (result == null) {

                      // synchronizing inside the null check avoids thread blockage
                      // where unnecessary, and only before initialization.
                      synchronized (forSettingCachedVal) {
                          // because the thread may have waited for another thread
                          // when attempting to enter the synchronized block:
                          result = fCachedValue.get();
                          // check that this was the first thread to enter the
                          // synchronized block. if not, the op is done, so we're done.
                          if (result != null) return result;

                          // the first thread can now generate that V
                          result = costlyIdempotentOperation();
                          // compareAndSet isn't strictly necessary, but it allows a
                          // subsequent assertion that the code executed as expected,
                          // for documentation purposes.
                          boolean successfulSet = fCachedValue.compareAndSet(null, result);
                          // assertions are good for documenting things you're pretty damn sure about
                          assert successfulSet : "something fishy is afoot";
                      }
                  }
                  return result;
              }


              This solution, though slightly more complicated than rolfl's, will avoid executing the costly operation more than once. Hence:




              1. that costly operation doesn't have to be idempotent,

              2. thread contention during lazy initialization is out of the picture, and

              3. despite introduction of synchronization, your code may actually execute faster.





























              • There is no reason for the synchronized blocks and all this added complexity. See my answer for why.
                – Jarrod Roberson
                Nov 20 at 1:09










              • This is the right answer for an "expensive" initialization. Just use double checked locking! As a small optimization, one might simply lock on the AtomicReference object itself, to avoid creating the second Object (and this is a bit cache friendlier).
                – BeeOnRope
                Nov 20 at 4:05










              • @BeeOnRope you don't want to synchronize on the AtomicReference because it is common practice to synchronize on this (what the method modifier synchronized actually does), and you can't be sure if and how AtomicReference does so, so you could accidentally write a deadlock.
                – Travis Wellman
                Nov 23 at 7:49












              6 Answers
              6






              active

              oldest

              votes








              6 Answers
              6






              active

              oldest

              votes









              active

              oldest

              votes






              active

              oldest

              votes








              up vote
              10
              down vote



              accepted










              That is not a great system. The problem is that two threads may find that the result == null, and both will set the fCachedValue to their new result value.



              You want to use the compareAndSet(...) method:



              AtomicReference<V> fCachedValue = new AtomicReference<>();

              public V getLazy() {
              V result = fCachedValue.get();
              if (result == null) {
              result = costlyIdempotentOperation();
              if (!fCachedValue.compareAndSet(null, result)) {
              return fCachedValue.get();
              }
              }
              return result;
              }


              If multiple threads get in to the method before it has been initialized, they may all try to create the large result instance. They will all create their own version of it, but the first one to complete the process will be the one who gets to store their result in the AtomicReference. The other threads will complete their work, then dispose of their result and instead use the result instance created by the 'winner'.






              share|improve this answer



















              • 1




                If the costlyIdempotentOperation() always returns the exact same instance value (must be synchronized or something) then I would still recommend my suggested approach. It has the same end result, but guarantees that if, in the future, your costlyIdempotentOperation method changes, you will still only ever get exactly one, and only one instance of the result back.
                – rolfl
                Nov 20 '13 at 4:27










              • good point. thanks.
                – marathon
                Nov 20 '13 at 5:39










              • this is correct, but it is overly complex, there is no need for the multiple checks and conditionals.
                – Jarrod Roberson
                Nov 20 at 1:11















              up vote
              10
              down vote



              accepted










              That is not a great system. The problem is that two threads may find that the result == null, and both will set the fCachedValue to their new result value.



              You want to use the compareAndSet(...) method:



              AtomicReference<V> fCachedValue = new AtomicReference<>();

              public V getLazy() {
              V result = fCachedValue.get();
              if (result == null) {
              result = costlyIdempotentOperation();
              if (!fCachedValue.compareAndSet(null, result)) {
              return fCachedValue.get();
              }
              }
              return result;
              }


              If multiple threads get in to the method before it has been initialized, they may all try to create the large result instance. They will all create their own version of it, but the first one to complete the process will be the one who gets to store their result in the AtomicReference. The other threads will complete their work, then dispose of their result and instead use the result instance created by the 'winner'.






              share|improve this answer



















              • 1




                If the costlyIdempotentOperation() always returns the exact same instance value (must be synchronized or something) then I would still recommend my suggested approach. It has the same end result, but guarantees that if, in the future, your costlyIdempotentOperation method changes, you will still only ever get exactly one, and only one instance of the result back.
                – rolfl
                Nov 20 '13 at 4:27










              • good point. thanks.
                – marathon
                Nov 20 '13 at 5:39










              • this is correct, but it is overly complex, there is no need for the multiple checks and conditionals.
                – Jarrod Roberson
                Nov 20 at 1:11













              up vote
              10
              down vote



              accepted







              up vote
              10
              down vote



              accepted






              That is not a great system. The problem is that two threads may find that the result == null, and both will set the fCachedValue to their new result value.



              You want to use the compareAndSet(...) method:



              AtomicReference<V> fCachedValue = new AtomicReference<>();

              public V getLazy() {
              V result = fCachedValue.get();
              if (result == null) {
              result = costlyIdempotentOperation();
              if (!fCachedValue.compareAndSet(null, result)) {
              return fCachedValue.get();
              }
              }
              return result;
              }


              If multiple threads get in to the method before it has been initialized, they may all try to create the large result instance. They will all create their own version of it, but the first one to complete the process will be the one who gets to store their result in the AtomicReference. The other threads will complete their work, then dispose of their result and instead use the result instance created by the 'winner'.






              share|improve this answer














              That is not a great system. The problem is that two threads may find that the result == null, and both will set the fCachedValue to their new result value.



              You want to use the compareAndSet(...) method:



              AtomicReference<V> fCachedValue = new AtomicReference<>();

              public V getLazy() {
              V result = fCachedValue.get();
              if (result == null) {
              result = costlyIdempotentOperation();
              if (!fCachedValue.compareAndSet(null, result)) {
              return fCachedValue.get();
              }
              }
              return result;
              }


              If multiple threads get in to the method before it has been initialized, they may all try to create the large result instance. They will all create their own version of it, but the first one to complete the process will be the one who gets to store their result in the AtomicReference. The other threads will complete their work, then dispose of their result and instead use the result instance created by the 'winner'.







              share|improve this answer














              share|improve this answer



              share|improve this answer








              edited Nov 20 '13 at 4:23

























              answered Nov 20 '13 at 4:17









              rolfl

              15.6k63166




              15.6k63166








              • 1




                If the costlyIdempotentOperation() always returns the exact same instance value (must be synchronized or something) then I would still recommend my suggested approach. It has the same end result, but guarantees that if, in the future, your costlyIdempotentOperation method changes, you will still only ever get exactly one, and only one instance of the result back.
                – rolfl
                Nov 20 '13 at 4:27










              • good point. thanks.
                – marathon
                Nov 20 '13 at 5:39










              • this is correct, but it is overly complex, there is no need for the multiple checks and conditionals.
                – Jarrod Roberson
                Nov 20 at 1:11














              • 1




                If the costlyIdempotentOperation() always returns the exact same instance value (must be synchronized or something) then I would still recommend my suggested approach. It has the same end result, but guarantees that if, in the future, your costlyIdempotentOperation method changes, you will still only ever get exactly one, and only one instance of the result back.
                – rolfl
                Nov 20 '13 at 4:27










              • good point. thanks.
                – marathon
                Nov 20 '13 at 5:39










              • this is correct, but it is overly complex, there is no need for the multiple checks and conditionals.
                – Jarrod Roberson
                Nov 20 at 1:11








              1




              1




              If the costlyIdempotentOperation() always returns the exact same instance value (must be synchronized or something) then I would still recommend my suggested approach. It has the same end result, but guarantees that if, in the future, your costlyIdempotentOperation method changes, you will still only ever get exactly one, and only one instance of the result back.
              – rolfl
              Nov 20 '13 at 4:27




              If the costlyIdempotentOperation() always returns the exact same instance value (must be synchronized or something) then I would still recommend my suggested approach. It has the same end result, but guarantees that if, in the future, your costlyIdempotentOperation method changes, you will still only ever get exactly one, and only one instance of the result back.
              – rolfl
              Nov 20 '13 at 4:27












              good point. thanks.
              – marathon
              Nov 20 '13 at 5:39




              good point. thanks.
              – marathon
              Nov 20 '13 at 5:39












              this is correct, but it is overly complex, there is no need for the multiple checks and conditionals.
              – Jarrod Roberson
              Nov 20 at 1:11




              this is correct, but it is overly complex, there is no need for the multiple checks and conditionals.
              – Jarrod Roberson
              Nov 20 at 1:11












              up vote
              3
              down vote













              For a similar purpose I implemented OnceEnteredCallable which returns a ListenableFuture for a result. The advantage is that the other threads are not being blocked and this costly operation is being called once.



              Usage (requires Guava):



              Callable<V> costlyIdempotentOperation = new Callable<>() {...};

              // this would block only the thread to execute the callable
              ListenableFuture<V> future = new OnceEnteredCallable<>().runOnce(costlyIdempotentOperation);

              // this would block all the threads and set the reference
              fCachedValue.set(future.get());

              // this would set the reference upon computation, Java 8 syntax
              future.addListener(() -> {fCachedValue.set(future.get())}, executorService);





              share|improve this answer

























                up vote
                3
                down vote













                For a similar purpose I implemented OnceEnteredCallable which returns a ListenableFuture for a result. The advantage is that the other threads are not being blocked and this costly operation is being called once.



                Usage (requires Guava):



                Callable<V> costlyIdempotentOperation = new Callable<>() {...};

                // this would block only the thread to execute the callable
                ListenableFuture<V> future = new OnceEnteredCallable<>().runOnce(costlyIdempotentOperation);

                // this would block all the threads and set the reference
                fCachedValue.set(future.get());

                // this would set the reference upon computation, Java 8 syntax
                future.addListener(() -> {fCachedValue.set(future.get())}, executorService);





                share|improve this answer























                  up vote
                  3
                  down vote










                  up vote
                  3
                  down vote









                  For a similar purpose I implemented OnceEnteredCallable which returns a ListenableFuture for a result. The advantage is that the other threads are not being blocked and this costly operation is being called once.



                  Usage (requires Guava):



                  Callable<V> costlyIdempotentOperation = new Callable<>() {...};

                  // this would block only the thread to execute the callable
                  ListenableFuture<V> future = new OnceEnteredCallable<>().runOnce(costlyIdempotentOperation);

                  // this would block all the threads and set the reference
                  fCachedValue.set(future.get());

                  // this would set the reference upon computation, Java 8 syntax
                  future.addListener(() -> {fCachedValue.set(future.get())}, executorService);





                  share|improve this answer












                  For a similar purpose I implemented OnceEnteredCallable which returns a ListenableFuture for a result. The advantage is that the other threads are not being blocked and this costly operation is being called once.



                  Usage (requires Guava):



                  Callable<V> costlyIdempotentOperation = new Callable<>() {...};

                  // this would block only the thread to execute the callable
                  ListenableFuture<V> future = new OnceEnteredCallable<>().runOnce(costlyIdempotentOperation);

                  // this would block all the threads and set the reference
                  fCachedValue.set(future.get());

                  // this would set the reference upon computation, Java 8 syntax
                  future.addListener(() -> {fCachedValue.set(future.get())}, executorService);






                  share|improve this answer












                  share|improve this answer



                  share|improve this answer










                  answered Nov 20 '13 at 9:01









                  Andrey Chaschev

                  12.5k33658




                  12.5k33658






















                      up vote
                      3
                      down vote













                      This expands the answer by @TwoThe on how AtomicReference<Future<V>> may be used.



                      Basically, if you don't mind having (a little bit more expensive) synchronized sections in your code, the easiest (and the most readable) solution would be to use the Double-checked Locking idiom (with volatile).



                      If you still want to utilize the CAS (this is what the whole family of Atomic* types is about), you have to use AtomicReference<Future<V>>, not AtomicReference<V> (or you may end up having multiple threads computing the same expensive value).



                      But here's another catch: you may obtain a valid Future<V> instance and share it between multiple threads, but the instance itself may be unusable because your costly computation may have failed. This leads us to the need to re-set the atomic reference we have (fCachedValue.set(null)) in some or all exceptional situations.



                      The above implies that it's no longer sufficient to call fCachedValue.compareAndSet(null, new FutureTask(...)) once -- you'll have to atomically test whether the reference contains a non-null value and re-initialize it if necessary (on each invocation). Luckily, the AtomicReference class has the getAndUpdate(...) method which merely invokes compareAndSet(...) in a loop. So the resulting code might look like this:



                      class ConcurrentLazy<V> implements Callable<V> {
                      private final AtomicReference<Future<V>> fCachedValue = new AtomicReference<>();

                      private final Callable<V> callable;

                      public ConcurrentLazy(final Callable<V> callable) {
                      this.callable = callable;
                      }

                      /**
                      * {@inheritDoc}
                      *
                      * @throws Error if thrown by the underlying callable task.
                      * @throws RuntimeException if thrown by the underlying callable task,
                      * or the task throws a checked exception,
                      * or the task is interrupted (in this last case, it's the
                      * client's responsibility to process the cause of the
                      * exception).
                      * @see Callable#call()
                      */
                      @Override
                      public V call() {
                      final RunnableFuture<V> newTask = new FutureTask<>(this.callable);
                      final Future<V> oldTask = this.fCachedValue.getAndUpdate(f -> {
                      /*
                      * If the atomic reference is un-initialised or reset,
                      * set it to the new task. Otherwise, return the
                      * previous (running or completed) task.
                      */
                      return f == null ? newTask : f;
                      });

                      if (oldTask == null) {
                      /*
                      * Compute the new value on the current thread.
                      */
                      newTask.run();
                      }

                      try {
                      return (oldTask == null ? newTask : oldTask).get();
                      } catch (final ExecutionException ee) {
                      /*
                      * Re-set the reference.
                      */
                      this.fCachedValue.set(null);

                      final Throwable cause = ee.getCause();
                      if (cause instanceof Error) {
                      throw (Error) cause;
                      }
                      throw toUnchecked(cause);
                      } catch (final InterruptedException ie) {
                      /*
                      * Re-set the reference.
                      */
                      this.fCachedValue.set(null);

                      /*
                      * It's the client's responsibility to check the cause.
                      */
                      throw new RuntimeException(ie);
                      }
                      }

                      private static RuntimeException toUnchecked(final Throwable t) {
                      return t instanceof RuntimeException ? (RuntimeException) t : new RuntimeException(t);
                      }
                      }


                      P. S. You might also want to take a look at the CompletableFuture class.






                      share|improve this answer





















                      • There is no reason for the synchronized blocks and all this added useless complexity. See my answer for why.
                        – Jarrod Roberson
                        Nov 20 at 1:10















                      up vote
                      3
                      down vote













                      This expands the answer by @TwoThe on how AtomicReference<Future<V>> may be used.



                      Basically, if you don't mind having (a little bit more expensive) synchronized sections in your code, the easiest (and the most readable) solution would be to use the Double-checked Locking idiom (with volatile).



                      If you still want to utilize the CAS (this is what the whole family of Atomic* types is about), you have to use AtomicReference<Future<V>>, not AtomicReference<V> (or you may end up having multiple threads computing the same expensive value).



                      But here's another catch: you may obtain a valid Future<V> instance and share it between multiple threads, but the instance itself may be unusable because your costly computation may have failed. This leads us to the need to re-set the atomic reference we have (fCachedValue.set(null)) in some or all exceptional situations.



                      The above implies that it's no longer sufficient to call fCachedValue.compareAndSet(null, new FutureTask(...)) once -- you'll have to atomically test whether the reference contains a non-null value and re-initialize it if necessary (on each invocation). Luckily, the AtomicReference class has the getAndUpdate(...) method which merely invokes compareAndSet(...) in a loop. So the resulting code might look like this:



                      class ConcurrentLazy<V> implements Callable<V> {
                      private final AtomicReference<Future<V>> fCachedValue = new AtomicReference<>();

                      private final Callable<V> callable;

                      public ConcurrentLazy(final Callable<V> callable) {
                      this.callable = callable;
                      }

                      /**
                      * {@inheritDoc}
                      *
                      * @throws Error if thrown by the underlying callable task.
                      * @throws RuntimeException if thrown by the underlying callable task,
                      * or the task throws a checked exception,
                      * or the task is interrupted (in this last case, it's the
                      * client's responsibility to process the cause of the
                      * exception).
                      * @see Callable#call()
                      */
                      @Override
                      public V call() {
                      final RunnableFuture<V> newTask = new FutureTask<>(this.callable);
                      final Future<V> oldTask = this.fCachedValue.getAndUpdate(f -> {
                      /*
                      * If the atomic reference is un-initialised or reset,
                      * set it to the new task. Otherwise, return the
                      * previous (running or completed) task.
                      */
                      return f == null ? newTask : f;
                      });

                      if (oldTask == null) {
                      /*
                      * Compute the new value on the current thread.
                      */
                      newTask.run();
                      }

                      try {
                      return (oldTask == null ? newTask : oldTask).get();
                      } catch (final ExecutionException ee) {
                      /*
                      * Re-set the reference.
                      */
                      this.fCachedValue.set(null);

                      final Throwable cause = ee.getCause();
                      if (cause instanceof Error) {
                      throw (Error) cause;
                      }
                      throw toUnchecked(cause);
                      } catch (final InterruptedException ie) {
                      /*
                      * Re-set the reference.
                      */
                      this.fCachedValue.set(null);

                      /*
                      * It's the client's responsibility to check the cause.
                      */
                      throw new RuntimeException(ie);
                      }
                      }

                      private static RuntimeException toUnchecked(final Throwable t) {
                      return t instanceof RuntimeException ? (RuntimeException) t : new RuntimeException(t);
                      }
                      }


                      P. S. You might also want to take a look at the CompletableFuture class.






                      share|improve this answer





















                      • There is no reason for the synchronized blocks and all this added useless complexity. See my answer for why.
                        – Jarrod Roberson
                        Nov 20 at 1:10













                      up vote
                      3
                      down vote










                      up vote
                      3
                      down vote









                      This expands the answer by @TwoThe on how AtomicReference<Future<V>> may be used.



                      Basically, if you don't mind having (a little bit more expensive) synchronized sections in your code, the easiest (and the most readable) solution would be to use the Double-checked Locking idiom (with volatile).



                      If you still want to utilize the CAS (this is what the whole family of Atomic* types is about), you have to use AtomicReference<Future<V>>, not AtomicReference<V> (or you may end up having multiple threads computing the same expensive value).



                      But here's another catch: you may obtain a valid Future<V> instance and share it between multiple threads, but the instance itself may be unusable because your costly computation may have failed. This leads us to the need to re-set the atomic reference we have (fCachedValue.set(null)) in some or all exceptional situations.



                      The above implies that it's no longer sufficient to call fCachedValue.compareAndSet(null, new FutureTask(...)) once -- you'll have to atomically test whether the reference contains a non-null value and re-initialize it if necessary (on each invocation). Luckily, the AtomicReference class has the getAndUpdate(...) method which merely invokes compareAndSet(...) in a loop. So the resulting code might look like this:



                      class ConcurrentLazy<V> implements Callable<V> {
                      private final AtomicReference<Future<V>> fCachedValue = new AtomicReference<>();

                      private final Callable<V> callable;

                      public ConcurrentLazy(final Callable<V> callable) {
                      this.callable = callable;
                      }

                      /**
                      * {@inheritDoc}
                      *
                      * @throws Error if thrown by the underlying callable task.
                      * @throws RuntimeException if thrown by the underlying callable task,
                      * or the task throws a checked exception,
                      * or the task is interrupted (in this last case, it's the
                      * client's responsibility to process the cause of the
                      * exception).
                      * @see Callable#call()
                      */
                      @Override
                      public V call() {
                      final RunnableFuture<V> newTask = new FutureTask<>(this.callable);
                      final Future<V> oldTask = this.fCachedValue.getAndUpdate(f -> {
                      /*
                      * If the atomic reference is un-initialised or reset,
                      * set it to the new task. Otherwise, return the
                      * previous (running or completed) task.
                      */
                      return f == null ? newTask : f;
                      });

                      if (oldTask == null) {
                      /*
                      * Compute the new value on the current thread.
                      */
                      newTask.run();
                      }

                      try {
                      return (oldTask == null ? newTask : oldTask).get();
                      } catch (final ExecutionException ee) {
                      /*
                      * Re-set the reference.
                      */
                      this.fCachedValue.set(null);

                      final Throwable cause = ee.getCause();
                      if (cause instanceof Error) {
                      throw (Error) cause;
                      }
                      throw toUnchecked(cause);
                      } catch (final InterruptedException ie) {
                      /*
                      * Re-set the reference.
                      */
                      this.fCachedValue.set(null);

                      /*
                      * It's the client's responsibility to check the cause.
                      */
                      throw new RuntimeException(ie);
                      }
                      }

                      private static RuntimeException toUnchecked(final Throwable t) {
                      return t instanceof RuntimeException ? (RuntimeException) t : new RuntimeException(t);
                      }
                      }


                      P. S. You might also want to take a look at the CompletableFuture class.






                      share|improve this answer












                      This expands the answer by @TwoThe on how AtomicReference<Future<V>> may be used.



                      Basically, if you don't mind having (a little bit more expensive) synchronized sections in your code, the easiest (and the most readable) solution would be to use the Double-checked Locking idiom (with volatile).



                      If you still want to utilize the CAS (this is what the whole family of Atomic* types is about), you have to use AtomicReference<Future<V>>, not AtomicReference<V> (or you may end up having multiple threads computing the same expensive value).



                      But here's another catch: you may obtain a valid Future<V> instance and share it between multiple threads, but the instance itself may be unusable because your costly computation may have failed. This leads us to the need to re-set the atomic reference we have (fCachedValue.set(null)) in some or all exceptional situations.



                      The above implies that it's no longer sufficient to call fCachedValue.compareAndSet(null, new FutureTask(...)) once -- you'll have to atomically test whether the reference contains a non-null value and re-initialize it if necessary (on each invocation). Luckily, the AtomicReference class has the getAndUpdate(...) method which merely invokes compareAndSet(...) in a loop. So the resulting code might look like this:



                      class ConcurrentLazy<V> implements Callable<V> {
                      private final AtomicReference<Future<V>> fCachedValue = new AtomicReference<>();

                      private final Callable<V> callable;

                      public ConcurrentLazy(final Callable<V> callable) {
                      this.callable = callable;
                      }

                      /**
                      * {@inheritDoc}
                      *
                      * @throws Error if thrown by the underlying callable task.
                      * @throws RuntimeException if thrown by the underlying callable task,
                      * or the task throws a checked exception,
                      * or the task is interrupted (in this last case, it's the
                      * client's responsibility to process the cause of the
                      * exception).
                      * @see Callable#call()
                      */
                      @Override
                      public V call() {
                      final RunnableFuture<V> newTask = new FutureTask<>(this.callable);
                      final Future<V> oldTask = this.fCachedValue.getAndUpdate(f -> {
                      /*
                      * If the atomic reference is un-initialised or reset,
                      * set it to the new task. Otherwise, return the
                      * previous (running or completed) task.
                      */
                      return f == null ? newTask : f;
                      });

                      if (oldTask == null) {
                      /*
                      * Compute the new value on the current thread.
                      */
                      newTask.run();
                      }

                      try {
                      return (oldTask == null ? newTask : oldTask).get();
                      } catch (final ExecutionException ee) {
                      /*
                      * Re-set the reference.
                      */
                      this.fCachedValue.set(null);

                      final Throwable cause = ee.getCause();
                      if (cause instanceof Error) {
                      throw (Error) cause;
                      }
                      throw toUnchecked(cause);
                      } catch (final InterruptedException ie) {
                      /*
                      * Re-set the reference.
                      */
                      this.fCachedValue.set(null);

                      /*
                      * It's the client's responsibility to check the cause.
                      */
                      throw new RuntimeException(ie);
                      }
                      }

                      private static RuntimeException toUnchecked(final Throwable t) {
                      return t instanceof RuntimeException ? (RuntimeException) t : new RuntimeException(t);
                      }
                      }


                      P. S. You might also want to take a look at the CompletableFuture class.







                      share|improve this answer












                      share|improve this answer



                      share|improve this answer










                      answered Nov 17 '17 at 15:12









                      Bass

                      1,65721646




                      1,65721646












                      • There is no reason for the synchronized blocks and all this added useless complexity. See my answer for why.
                        – Jarrod Roberson
                        Nov 20 at 1:10


















                      • There is no reason for the synchronized blocks and all this added useless complexity. See my answer for why.
                        – Jarrod Roberson
                        Nov 20 at 1:10
















                      There is no reason for the synchronized blocks and all this added useless complexity. See my answer for why.
                      – Jarrod Roberson
                      Nov 20 at 1:10




                      There is no reason for the synchronized blocks and all this added useless complexity. See my answer for why.
                      – Jarrod Roberson
                      Nov 20 at 1:10










                      up vote
                      2
                      down vote













                      You can properly double-check before you do the costly operation (tm) by using a secondary atomic boolean, like this:



                      AtomicReference<V> fCachedValue = new AtomicReference<>();
                      AtomicBoolean inProgress = new AtomicBoolean(false);

                      public V getLazy() {
                      V result = fCachedValue.get();
                      if (result == null) {
                      if (inProgress.compareAndSet(false, true)) {
                      result = costlyIdempotentOperation();
                      fCachedValue.set(result);
                      notifyAllSleepers();
                      } else {
                      while ((result = fCachedValue.get()) == null) {
                      awaitResultOfSet(); // block and sleep until above is done
                      }
                      }
                      }
                      return result;
                      }


                      Even though this won't stop threads from blocking if the value is not set yet, it will at least guarantee that the calculation is only done once. And blocking as well means that the CPU is available for other tasks. But note that if you use standard wait/notify, this might cause a thread-lock, if the first notifies and after that the other one waits. You can either do wait(T_MS) or use a more sophisticated tool like AtomicReference<Future<V>>.






                      share|improve this answer





















                      • I've found this to be the most complete answer around about this topic. I've created a small library made up of three classes you can grab here.
                        – Francesco Menzani
                        Jul 7 at 17:56












                      • There is no reason for all this added complexity. See my answer for why.
                        – Jarrod Roberson
                        Nov 20 at 1:11












                      • Doesn't even show the implementation of the notify and await methods, which of course are key here. Has a race condition which may result in some threading awaiting forever unless timeouts are used...
                        – BeeOnRope
                        Nov 20 at 4:01















                      answered Nov 22 '13 at 14:43









                      TwoThe

                      9,99612040




                      up vote
                      1
                      down vote













                      As @rolfl points out himself, under a CAS-based approach multiple threads might create their own instances of result, which is supposedly costly.



A well-known solution is the lock-based lazy initialization pattern. It uses a single lock, and it handles exceptions thrown while holding the lock gracefully, so when applied correctly this approach is free of most of the complexity usually associated with locking.
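A minimal sketch of that pattern, assuming a single cached field guarded by one lock (class and method names here are illustrative, not from the answer):

class LazyHolder<V> {

    private final Object lock = new Object();
    private V cachedValue; // guarded by lock

    public V getLazy() {
        synchronized (lock) {
            if (cachedValue == null) {
                // If this throws, nothing is cached and the lock is released,
                // so a later call simply retries the initialization.
                cachedValue = costlyIdempotentOperation();
            }
            return cachedValue;
        }
    }

    private V costlyIdempotentOperation() {
        // placeholder for the caller's expensive computation
        throw new UnsupportedOperationException("supply the real computation here");
    }
}

The trade-off is that every call takes the lock, even after the value has been cached; the double-checked variant in the answer below avoids that at the cost of a little more code.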






                      share|improve this answer

























                          answered Nov 20 '13 at 7:40









                          vemv

                          2,85433151


























                              up vote
                              1
                              down vote













                              You just need a synchronized block and a second null check inside it.



AtomicReference<V> fCachedValue = new AtomicReference<>();
private final Object forSettingCachedVal = new Object();

public V getLazy() {
    V result = fCachedValue.get();
    if (result == null) {

        // synchronizing inside the null check avoids thread blockage
        // where unnecessary, and only before initialization.
        synchronized (forSettingCachedVal) {
            // because the thread may have waited for another thread
            // when attempting to enter the synchronized block:
            result = fCachedValue.get();
            // check that this was the first thread to enter the
            // synchronized block. if not, the op is done, so we're done.
            if (result != null) return result;

            // the first thread can now generate that V
            result = costlyIdempotentOperation();
            // compareAndSet isn't strictly necessary, but it allows a
            // subsequent assertion that the code executed as expected,
            // for documentation purposes.
            boolean successfulSet = fCachedValue.compareAndSet(null, result);
            // assertions are good for documenting things you're pretty damn sure about
            assert successfulSet : "something fishy is afoot";
        }
    }
    return result;
}


                              This solution, though slightly more complicated than rolfl's, will avoid executing the costly operation more than once. Hence:




                              1. that costly operation doesn't have to be idempotent,

                              2. thread contention during lazy initialization is out of the picture, and

3. despite the introduction of synchronization, your code may actually execute faster.






                              share|improve this answer























                              • There is no reason for the synchronized blocks and all this added complexity. See my answer for why.
                                – Jarrod Roberson
                                Nov 20 at 1:09










                              • This is the right answer for an "expensive" initialization. Just use double checked locking! As a small optimization, one might simply lock on the AtomicReference object itself, to avoid creating the second Object (and this is a bit cache friendlier).
                                – BeeOnRope
                                Nov 20 at 4:05










                              • @BeeOnRope you don't want to synchronize on the AtomicReference because it is common practice to synchronize on this (what the method modifier synchronized actually does), and you can't be sure if and how AtomicReference does so, so you could accidentally write a deadlock.
                                – Travis Wellman
                                Nov 23 at 7:49















                              edited Nov 23 at 7:53

























                              answered Nov 14 at 2:56









                              Travis Wellman

                              373116



