Thursday, August 30, 2007

Atomicity, Visibility and Ordering

(Note: I've cribbed this from my doctoral dissertation. I tried to edit it heavily to ease up on the mangled academic syntax required by thesis committees, but I may have missed some / badly edited in places. Let me know if there is something confusingly written or just plain confusing, and I'll try to untangle it.)

There are these three concepts, you see. And they are fundamental to correct concurrent programming. When a concurrent program is not correctly written, the errors tend to fall into one of the three categories: atomicity, visibility, or ordering.

Atomicity deals with which actions and sets of actions have indivisible effects. This is the aspect of concurrency most familiar to programmers: it is usually thought of in terms of mutual exclusion. Visibility determines when the effects of one thread can be seen by another. Ordering determines when actions in one thread can be seen to occur out of order with respect to another. Let's talk about them.

Atomicity

Everyone doing any serious concurrent programming knows what atomicity is (or will when I describe it) — I'm just putting it in for completeness's sake. If an action is (or a set of actions are) atomic, its result must be seen to happen "all at once", or indivisibly. Atomicity is the traditional bugbear of concurrent programming. Enforcing it usually means using locking to enforce mutual exclusion. To see atomicity in action (or in inaction, perhaps), consider this code:

class BrokenBankAccount {
  private int balance;

  synchronized int getBalance() {
    return balance;
  }

  synchronized void setBalance(int x) 
    throws IllegalStateException {
    balance = x;
    if (balance < 0) {
      throw new IllegalStateException("Negative Balance");
    }
  }

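  // Note: deposit() and withdraw() are not synchronized, so the
  // read-modify-write sequence each one performs is not atomic.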
  void deposit(int x) {
    int b = getBalance();
    setBalance(b + x);
  }

  void withdraw(int x) {
    int b = getBalance();
    setBalance(b - x);
  }
}

Since all accesses to the shared variable balance are guarded by locks, this code is free of what are called data races. A data race occurs when a variable is accessed by multiple threads concurrently without some use of synchronization or volatile or the like, and at least one of those accesses is a write. When code is free from data races, we say it is correctly synchronized. So the code is correct, right?

No, of course not. This code is not at all correct, in the sense that it doesn't necessarily do what we want it to do. Think about what happens if one thread calls deposit(5) and another calls withdraw(5), starting from an initial balance of 10. Ideally, at the end of these two calls, there would still be a balance of 10. However, consider what would happen if:
  1. The deposit() method sees a value of 10 for the balance, then
  2. The withdraw() method sees a value of 10 for the balance and withdraws 5, leaving a balance of 5, and finally
  3. The deposit() method uses the balance it originally saw (10) to calculate a new balance of 15.
As a result of this lack of "atomicity", the balance is 15 instead of 10. This effect is often referred to as a lost update, because the withdrawal is lost. A programmer writing multi-threaded code must use synchronization carefully to avoid this sort of error. In Java, if the deposit() and withdraw() methods are declared synchronized, it will ensure that locks are held for their duration: the actions of those methods will be seen to take place atomically.
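
Here is a sketch of that fix (I've renamed the class to distinguish it from the broken version above); Java locks are reentrant, so deposit() and withdraw() can call the other synchronized methods while holding the lock:

class FixedBankAccount {
  private int balance;

  synchronized int getBalance() {
    return balance;
  }

  synchronized void setBalance(int x) 
    throws IllegalStateException {
    balance = x;
    if (balance < 0) {
      throw new IllegalStateException("Negative Balance");
    }
  }

  // The lock is now held for the entire read-modify-write sequence,
  // so no other thread can slip in between the read and the write.
  synchronized void deposit(int x) {
    int b = getBalance();
    setBalance(b + x);
  }

  synchronized void withdraw(int x) {
    int b = getBalance();
    setBalance(b - x);
  }
}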

Atomicity, of course, is only guaranteed when all the threads use synchronization correctly. If someone comes along and decides to read the balance without acquiring the lock, that thread can see all sorts of confusing results.

Atomicity is the problem most people think of first when they think about synchronization. It is a common mistake to think that it is the only problem; it is not. Here's another one:

Visibility

What's visibility, you ask? Well, if an action in one thread is visible to another thread, then the result of that action can be observed by the second thread. To guarantee that the results of one action are observable to a second action, you have to use some form of synchronization to make sure that the second thread sees what the first thread did.

(Note: when I say synchronization in this post, I don't actually mean locking. I mean anything that guarantees visibility or ordering in Java. This can include final and volatile fields, as well as class initialization and thread starts and joins and all sorts of other good stuff.)
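
For example, here is a small sketch (the class and field names are mine, not from the post) of how a thread start and a join provide visibility with no locks and no volatile fields:

class StartJoinVisibility {
  int data = 0; // deliberately neither volatile nor guarded by a lock

  void example() throws InterruptedException {
    data = 42; // written before the second thread starts
    Thread t = new Thread(new Runnable() {
      public void run() {
        // Guaranteed to print 42: everything the parent thread did
        // before calling start() is visible to the new thread.
        System.out.println(data);
        data = 43;
      }
    });
    t.start();
    t.join();
    // Guaranteed to print 43: everything the thread did before it
    // terminated is visible to us once join() returns.
    System.out.println(data);
  }
}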

Here's an example of code with visibility problems:

class LoopMayNeverEnd { 
  boolean done = false; 

  void work() { 
    while (!done) { 
      // do work 
    } 
  } 
 
  void stopWork() { 
    done = true; 
  } 
} 


In this code, imagine that two threads are created; one thread calls work, and at some point, the other thread calls stopWork on the same object. Because there is no synchronization between the two, the thread in the loop may never see the update to done performed by the other thread. In practice, this may happen if the compiler detects that no writes are performed to done in the first thread; the compiler may decide that the program only has to read done once, transforming the loop into an infinite loop.

(By the way, "compiler" in this context doesn't mean javac — it actually means the JVM itself, which plays lots of games with your code to get it to run more quickly. In this case, if the compiler decides that you are reading a variable that you don't have to read, it can eliminate the read quite nicely. As described above.)

To ensure that this does not happen, you have to use a mechanism that provides synchronization between the two threads. In LoopMayNeverEnd, if you want to do this, you can declare done to be volatile. Conceptually, all actions on volatiles happen in a single order, where each read sees the last write in that order. In other words, the compiler can't prevent a read of a volatile from seeing a write performed by another thread.
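
Concretely, the fix to the example above is a one-word change (I've renamed the class here for contrast):

class LoopWillEnd { 
  volatile boolean done = false; // volatile: the read can't be hoisted out of the loop

  void work() { 
    while (!done) { 
      // do work 
    } 
  } 

  void stopWork() { 
    done = true; // the looping thread is guaranteed to see this write
  } 
}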

There is a side issue here; some architectures and virtual machines may execute this program without providing a guarantee that the thread that executes work will ever give up the CPU and allow other threads to execute. In that case, the loop would never terminate because of a lack of scheduling guarantees, not because of a lack of visibility guarantees. This is typically called cooperative multithreading.

There's one more problem that crops up:

Ordering

Ordering constraints describe the order in which things are seen to occur. You only get intuitive ordering constraints by synchronizing correctly. Here's an example of where ordering problems can bite you:

class BadlyOrdered {
  boolean a = false;
  boolean b = false;

  void threadOne() {
    a = true;
    b = true;
  }

  boolean threadTwo() {
    boolean r1 = b; // sees true
    boolean r2 = a; // sees false
    boolean r3 = a; // sees true
    return (r1 && !r2) && r3; // returns true
  }

}

Consider what happens if threadOne() gets invoked in one thread and threadTwo() gets invoked on the same object in another. Would it be possible for threadTwo() to return the value true? If threadTwo() returns true, it means that the thread saw both updates by threadOne, but that it saw the change to b before the change to a.

Well, this code fragment does not use synchronization correctly, so surprising things can happen! It turns out that Java allows this result, contrary to what a programmer might have expected.

The assignments to a and b in threadOne() can be seen to be performed out of order. Compilers have a lot of freedom to reorder code in the absence of synchronization; they are free to reorder either the writes in threadOne or the reads in threadTwo.

How do you fix it? Synchronize your code carefully! In this case, you can throw a lock around threadOne or threadTwo, or you can declare both a and b to be volatile, and get the ordering you want.
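
Here is a sketch of the volatile version (again, the class name is mine); with both fields volatile, the writes cannot be seen out of order:

class WellOrdered {
  volatile boolean a = false;
  volatile boolean b = false;

  void threadOne() {
    a = true; // volatile writes are seen in program order...
    b = true;
  }

  boolean threadTwo() {
    boolean r1 = b; // ...so if this read sees true,
    boolean r2 = a; // this read must also see true
    return r1 && !r2; // can never return true
  }
}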




So keep an eye peeled for these things, and don't assume that your intuition about the way your code is executed will be correct. Everyone knows about atomicity concerns, but many fewer people know about visibility and ordering. Keep your nose clean, and you may get out of this thing alive...

52 comments:

Anonymous said...

I have been wrestling with some of these issues myself - I have posted a little about them but your post is better..

Anonymous said...

When is the follow up coming?

truecube said...

nice article

Francis Stephens said...

Hey Jeremy,

I am trying to school myself on Java concurrency. My project at the moment is to try to rewrite the Java examples from "The Art of Multiprocessor Programming" so that they actually work (as most of them don't).

However, I am struggling to successfully test my different implementations of the two-thread Peterson lock. The race/memory-visibility issues that I think should plague the version in the book do not manifest themselves on my i7 processor.

Are you able to point out some good resources approaching this issue? I am having a look at 'FindBugs' right now.

It seems to me that it would be valuable to have a java virtual machine which, to use the model in your post on volatiles, does not allow memory writes to leak between threads unless it is forced by the JMM. Anyway, if you have any tips or any resources that I should dig into that would be much appreciated.

If you have read 'The Art of Multiprocessor Programming' I would also be very interested in what you thought of it.

Ciao
Francis

Jeremy Manson said...

@Francis - the book is pretty good. My feeling is that it should spend a bit more time and care on memory models, but I'm biased.

The Peterson issue is a tricky one. Not every potential memory model issue is going to show up in practice. Is your machine a multiprocessor, or just a multicore? Really, the only way that the x86 processor can break Peterson is by performing the loads earlier than the stores (IIRC). Even then, you would have to get that AND the right interleaving. It probably isn't performing the loads early (I'm not an expert on how out-of-order execution is done on an i7).

Another potential memory model issue is that the JIT compiler could turn the loop into an infinite loop. By and large, they don't do this (it is a quality of service issue, really).

Some folks (including me) have looked into building a JVM / JVM simulator that could expose these sorts of problems easily, but I don't think anyone has followed through.

Francis Stephens said...

Jeremy,

Could you clarify the issues that you identified? Looking at the code in the book, the first issue I see is that writes to the boolean array may not become visible - the JIT would be within its rights to remove 'flag[j]' from the loop, but could it remove 'victim == i'? I thought that since that is a volatile int, it must necessarily remain.

The other issue is providing visibility to all of the writes done inside the body of the lock, i.e. the work done after lock() and before unlock() is called. If we provide a volatile write (by making the boolean array into an AtomicReference[]) this seems like it would be sufficient to guarantee that the next call to lock would read from that Atomic Reference and provide the memory visibility that we were after.

Or am I barking up the wrong tree?

I wonder how much work would be involved in developing such a JVM simulator. It seems like it would have enormous value.

Jeremy Manson said...

@francis - I'm not sure what issues you mean. I wasn't going by the book's definition of Peterson, which I strongly suspect is correct (the book is in the other room, and I'm being lazy).

I developed a JMM simulator as part of my dissertation. It only worked on small examples, but that would have been okay for locking issues. The major problem with it was that there were bugs I didn't have time to address. My alma mater took down my website, so I have nothing to point you to (I'm sure I have the code around somewhere, but nowhere convenient and public).

Francis Stephens said...

I was looking specifically at the Peterson code taken from the second chapter of the book.

I don't think it is correct. But I have been through a bit of the book (chapters 2 and 7) and written out the locks which appear there. I have found a number of ordinary typos but also, I think, that many volatile declarations were missing from these code examples.

Anyway, drilling into typos in code snippets from books in the comment section of a blog isn't really an efficient way to get anything done - so I won't use up more of your time :)

I will put the code that I have written up from the book onto a blog. I think it would be useful to have a better set of example locks for the book as well as hopefully generating some discussion about testing such fiddly bits of concurrent code.

Have you thought about opening up the VM from your dissertation?

Unknown said...

Hello,

I was asked why Math.random is stated to be thread-safe. My view is the following.
Suppose Thread 1 executes line 04 and sees that randomNumberGenerator == null. Then the thread acquires the lock on the class (static synchronized),
creates a new Random and stores a reference to it in randomNumberGenerator.
The reference and the object can't escape from Thread 1 before it exits the synchronized block, because synchronized establishes a "happens-before" relationship.

So Thread 2 can see only
* randomNumberGenerator == null
or
* randomNumberGenerator, and the new Random itself.


00: private static Random randomNumberGenerator;

01: private static synchronized void initRNG() {
02:   if (randomNumberGenerator == null)
03:     randomNumberGenerator = new Random();
    }

    public static double random() {
04:   if (randomNumberGenerator == null)
05:     initRNG();
06:   return randomNumberGenerator.nextDouble();
    }


Can you clarify this point?

Thank you,
Ivan

Jeremy Manson said...

@Ivan - Looks to me like the reference should be volatile. I'll chat with the maintainers.

Jeremy Manson said...

@Ivan - this bug has recently been fixed in JDK7 (Randoms are now immutable (not quite, but enough to avoid this issue)).

Kir said...

Nice post. Something about visibility: I have used Java for about five years, and I have seen tons of code with no visibility guarantees that always works fine on the Sun JVM. It looks like the guys from Sun built a trick into the JVM to protect it from careless developers... and the one trick they did not pull off: try running an empty forever loop.

Jeremy Manson said...

@kir - there are plenty of things in Sun VMs that create weaker visibility guarantees, and there are more every release. They tend to manifest as sporadic failures, and you might not realize that's what's happening.

Sharath said...

Hello Jeremy,
Nice post. I have a question regarding visibility. I just started reading JCiP, and the visibility chapter left me in a state of (kind of) shock! (It makes me wonder whether some of the code I developed until now has visibility issues.) Could you please answer the following question? I am confused by the following excerpt from JCiP (section 3.5)
*********
A properly constructed object can be safely published by:

* Initializing an object reference from a static initializer;
* Storing a reference to it into a volatile field or AtomicReference;
* Storing a reference to it into a final field of a properly constructed object
* Storing a reference to it into a field that is properly guarded by a lock.
[1]
*********
1. Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes, Doug Lea: "Java Concurrency in Practice", section 3.5. Addison-Wesley Professional, 2006

Does that excerpt mean that the following simple Spring(MVC)-based setup has visibility issues:

Controller:
public class SampleController extends AbstractController
{
  private Dao dao;

  public ModelAndView handleRequestInternal(HttpServletRequest request, HttpServletResponse response) throws Exception
  {
    ModelAndView mav = new ModelAndView("someJsp");
    List result = dao.fetchSomeList();
    mav.addObject("result", result);
    return mav;
  }

  public void setDao(Dao dao)
  {
    this.dao = dao;
  }
}

where Dao is a thread-safe mutable class (i.e., it has non-synchronized setters for setting some private properties such as lookup maps and jdbctemplates that would be used only in Dao) and the 'dao' reference wouldn't be modified after initialization by Spring. The Dao and Controller would be configured in Spring, and I am assuming that Spring (MVC) via a servlet would initialize the Controller and inject its dao at application start-up in a "main" thread, and then the Controller would be ready to serve requests in separate/other threads... correct? This appears to be NOT a safe publication of dao (and also SampleController?) as none of the safe publication idioms are followed, am I correct? And do I have to declare 'dao' as volatile or final to avoid 'visibility' issues for the separate/other threads which serve the users' requests? (Since the Controller and Dao are initialized by a "main" thread and NOT the separate/other threads which serve the users' requests.)

Thanks in advance.

- Sharath

Jeremy Manson said...

@sharath - If a thread creates an object and then spawns other threads that use that object, then the object is visible to those other threads. If the other threads are already executing, then the object is not guaranteed to be visible to them.

Having said that, I'm not sure I'm reading the description of its thread-safety properties correctly. If you change any of the fields of the object concurrently, you have to use synchronization.

Jeremy Manson said...

@Francis - sorry about the delay in approving your post; it fell through the cracks. At some point, I had a page with the simulator on it. I realized, however, that it was a little too buggy for public consumption.

Sharath said...

Hello Jeremy,
Thanks for your prompt response. Based on the behavior you described, looks like that particular spring setup wouldn't have visibility issues (yippie!!).

And regarding the thread-safety properties you mentioned in 2nd para, that setup is not meant to change any object fields after the initial "spring" setup (i.e., spring MVC via a servlet would only set the properties initially at app start-up and then those fields would be always read-only and no other thread would change them ever.. so, i think there is no need to synchronize the access to those fields)

Also could you please let me know where I can find some documentation/specs regarding the behavior you mentioned "If a thread creates an object and then spawns other threads that use that object, then the object is visible to those other threads. If the other threads are already executing, then the object will not be visible". [Just wondering if I can find more hidden "gems" like the one you specified in that documentation/spec:) ..]

Thanks once again for answering my question.

Best regards,
Sharath

Jeremy Manson said...

@sharath - The reference is the third edition of the Java Language Spec. Visibility from synchronization actions is defined by the synchronization order, which is documented in Section 17.4.4.

Jeremy Manson said...

Sorry - that should be "synchronized-with" order, not "synchronization order" (two different things).

Sharath said...

Hello Jeremy,

Thanks for the link and info.

Best regards,
Sharath

vac said...

The following code uses a non-volatile variable as a flag. The variable can make the loop stop. (It does not have to be volatile.)

public class Volatile extends Thread {
  boolean done = false;

  @Override
  public void run() {
    while (!done) {
    }
    System.out.println("stop");
  }

  public static void main(String[] args) throws Exception {
      Volatile e = new Volatile();
      e.start();
      Thread.sleep(1000);
      e.done = true;
  }
}



How can I demonstrate the difference between a volatile and a non-volatile variable?

Jeremy Manson said...

Sometimes it works. The problem is that there is no guarantee that it will work.

One thought - if your JIT compiler kicks in, then it is likely to optimize the guard away. One thing you could try is to invoke the run() method, say, 20000 times.

suni said...

Hi Jeremy,
It is a very nice article.
I have a doubt about the ordering example.
Assume I have made a and b volatile.
Now one thread enters the threadOne() method and has updated only the value of a (a = true); b is still false.
Meanwhile the other thread reads b = false in the threadTwo() method, because the last updated value of b is false.
So how does the volatile keyword help with ordering? Can you explain it using the same example?

Jeremy Manson said...

Hi Suni,

What you are describing has nothing to do with code being reordered by a compiler or underlying architecture. It's just what happens when you have multiple threads reading from and writing to multiple variables without atomicity (which, in this case, would take the form of locking). Making a variable volatile doesn't fix it.

suni said...

Hi Jeremy,
Thanks for your reply.
Your answer cleared up my confusion between atomicity and reordering.
I have one more doubt about the volatile keyword -
the example code for visibility can be fixed by making the variable "done" volatile.

My doubt 1 - Is it necessary to use volatile inside a synchronized block (does it depend on synchronized, like wait and notify)?

The example you gave for ordering - does it reflect an ordering problem or a visibility problem? Because the wrong output we are getting is not due to the reordering of statements, but due to the visibility of a and b when accessed by the other thread.

My doubt 2 - assume we make b volatile. Does it force other reading threads to see the change to a before b? If yes, then it clears my doubt, and it is a reordering example.

Jeremy Manson said...

Hi Suni

- Volatile does not need to be used under a synchronized block, but you have to be very, very careful about its use.

- You are correct about making b volatile.

Anonymous said...

Hi, very nice article, hope had it read before!
In the BrokenBankAccount, I have a question though:
You placed the gets and the sets "synchronized".
The read in the line int b = getBalance()
is NOT synchronized though?
I mean, the reference of b in the calling function isn't synchronized?
So would I need an extra synchronized keyword? Or the volatile keyword?

Regards,
Dimitris

Jeremy Manson said...

@Dimitris - yes, that's right - you need to synchronize the whole function, or it isn't atomic.

Unknown said...

Hi Jeremy,

Your posts are just awesome. I do not know why I found your blog web site so late.

Kindly rectify my understanding, if I am wrong somewhere.


Q 1> Can I conclude that "atomic" operations can never be concurrent? I meant to say that whenever one "atomic" operation is taking place by a "particular" thread, all the other threads can not access the variables being accessed/involved (read from or written to) in that "atomic" operation until the operation is complete?



Q 2> Also if you could let me know which all operations are atomic (without synchronization)in detail.


Q 3> Also exactly in which cases, if a primitive variable/object is accessed by multiple threads there may be garbage values/half created objects.

I guess Q 3 is unnecessary as there is hardly any chance of getting garbage values / half-created objects.
I saw in your other post that any custom object which is referred to by a volatile reference is only seen by other threads after being fully created.

But still I need to have correct answers for Q 3.

Please give your answers with respect to Java 1.5 or higher


Best Regards

Unknown said...

Also another question.
Say there are two processors, and as per the Java 7 threading model (I am yet to explore it though) two different threads are running on the two processors.
Both of them have invoked the same method (which is not synchronized) of the same object.

Say the method is as below

class TwoThreads {

  int i = 0;

  public void access(int a, int write) {

    i = a;     // ..... Line 1

    write = a; // ..... Line 2

  }
}

If I have understood your post, Line 1 can never happen at the same instance of time for both the threads. One has to follow other.
I guess write operation is "atomic".

But what about Line 2.
Here, a is first read by both threads. Next, each thread assigns a to its own local copy of the "write" variable.

Here is the question: as the threads are running on different processors, can Line 2 execute (or start executing) at the same instant in both threads?

Will wait for your answer...
Thanks in advance!

Jeremy Manson said...

Q 1> Can I conclude that "atomic" operations can never be concurrent? I meant to say that whenever one "atomic" operation is taking place by a "particular" thread, all the other threads can not access the variables being accessed/involved (read from or written to) in that "atomic" operation until the operation is complete?

Not exactly. The other threads will either see all of or none of the atomic operation.

Q 2> Also if you could let me know which all operations are atomic (without synchronization)in detail.

Writes to volatiles, writes to references, and writes to 32 bit or smaller scalar values.


Q 3> Also exactly in which cases, if a primitive variable/object is accessed by multiple threads there may be garbage values/half created objects.

Writes to non-volatile 64-bit values (i.e., longs and doubles).

Jeremy Manson said...


If I have understood your post, Line 1 can never happen at the same instance of time for both the threads. One has to follow other.
I guess write operation is "atomic".


No, they can happen at the same time. It is just that other threads won't see them half-done.

Unknown said...

Thanks Jeremy for replying my queries .

Then can I suppose that reads and writes to a "non-volatile" variable are mostly local to the thread,
and that only when the thread's local cache is empty does it read and write the variable in memory?

And that another thread sees this only if "this other thread" also has an empty local cache and hence reads and writes the same variable from memory?

But for volatile, the reads and writes are always from memory?

Kindly let me know.
Thanks again for your prompt reply

Jeremy Manson said...

@Sankar - I urge you not to think about these things in terms of caches and main memories.

The first reason is that you would have to think about the individual processor and the guarantees it makes; those models are very, very complicated, and the number of different processor features (caches, write buffers, out-of-order execution...) that come into this is very large.

The second reason is that the compiler is allowed to move your code around, and write or not write to main memory as the specification allows.

If you are looking for something simple to wrap your head around, one way of looking at it is that non-volatile writes may or may not be seen by other threads without additional synchronization (and they might show up in a weird order), and that volatile writes will be seen by other threads in the order they happen. This isn't 100% accurate, but it is pretty close.

Unknown said...

Thanks Jeremy.

Unknown said...

Jeremy, there is one more question regarding "ordering", i.e. the third example. I just added a third variable c and made only this variable volatile.

Question 1: I guess threadTwo() will return true. Then (in this scenario) can we assume that we could make only c volatile, rather than all three?

class BadlyOrdered {
  boolean a = false;
  boolean b = false;
  volatile boolean c = false;

  void threadOne() {
    a = true;
    b = true;
    c = true;
  }

  boolean threadTwo() {
    boolean r3 = c; // sees true
    boolean r1 = b; // sees true
    boolean r2 = a; // sees true

    return (r1 && r2 && r3); // returns true
  }

}

Question 2: Do all the instructions in a method in the same thread become ordered if only the last instruction writes to a volatile, like in threadOne()?

Question 3: Are all the statements in a synchronized block ordered?

Unknown said...

Sorry, I am taking your precious time Jeremy :-)

But where else to go?

One question: which independent statements may not be reordered? I mean, other than writes to volatiles, are there any other such statements? What about Thread.start() or Thread.join()?

Thanks in advance..

Jeremy Manson said...

@sankar - Doug Lea runs a concurrency-interest mailing list, and they often deal with memory model questions.

For question 1, if r3 is true, then r1 and r2 will also be true.

For question 2, if there is a volatile write, and another thread sees the result of that write, then it will see all of the actions that happened before that write. If you are asking whether the actions that happened before that write are ordered with respect to each other, the answer is no.

boolean a = false;
boolean b = false;
volatile boolean c = false;

void threadOne() {
  a = true;
  b = true;
  c = true;
}

boolean threadTwo() {
  boolean r3 = c; // sees false
  boolean r1 = b; // sees true
  boolean r2 = a; // may see false, because not ordered WRT b.
}

For question 3, statements in a synchronized block are ordered with respect to other blocks that synchronize on that lock, but not necessarily other threads that don't synchronize on that lock:

boolean a = false;
boolean b = false;

synchronized void threadOne() {
  a = true;
  b = true;
}

synchronized boolean threadTwo() {
  boolean r3 = b; // sees true
  boolean r1 = a; // sees true
}

/*unsynchronized*/ boolean threadThree() {
  boolean r3 = b; // sees true
  boolean r1 = a; // may see false
}

For question 4, Thread.start() acts like a volatile write, and the start of the thread acts like a volatile read. Thread.join() acts like a volatile read, and the end of the thread acts like a volatile write. We say that these things come in synchronization order, and the list of actions that form the synchronization order is here:

http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.4.4

Unknown said...

Thanks a ton Jeremy

Wood Pecker said...

Awesome Post Jeremy !!! Thanks,

Would this code fix the atomicity problem you described?

class BrokenBankAccount {
  private int balance = 10;

  synchronized int getBalance() {
    return balance;
  }

  synchronized void setBalance(int x)
      throws IllegalStateException {
    balance = x;
    if (balance < 0) {
      throw new IllegalStateException("Negative Balance");
    }
  }

  void deposit(int x) {
    setBalance(this.balance + x);
  }

  void withdraw(int x) {
    setBalance(this.balance - x);
  }
}

Jeremy Manson said...

@TRANSFORMATION: No, that would not work. In that case, you are just accessing balance without holding the lock at all, which is a no-no.

Anonymous said...

Hi Jeremy, I have a question about the code below.

class MyClass {

  private int val;

  int update(int upd) {
    this.val = upd;
    return val;
  }
}

If I then create obj, an object of type MyClass and let it be shared by two threads t1 and t2, whose codes are as follows:

t1:
a = 10;
System.out.println((a == obj.update(a)));

t2:
b = 20;
System.out.println((b == obj.update(b)));

Is it possible that t1 and t2 print false? Based on what I know, update() is not atomic and I am thinking that it is possible to have the following trace:

1. t1 calls update without returning yet
2. t2 calls update without returning yet
3. the call of update by t1 returns
4. the call of update by t2 returns

If that happened, I would expect t1 to print false, assuming that the change is visible. However, I have tried running this code many times, and it has always been true for both threads. Of course I know that just running the code is not enough, but I cannot seem to find a good explanation for this.

What do you think?

Jeremy Manson said...

@anonymous: it is possible, but not very likely in practice. The more likely possibility is that one of them will return false and the other will return true, because it will be interrupted by the other thread between setting the value and checking the assertion - that would almost certainly happen in a real application.

Anonymous said...

Hey Jeremy, this is a great post.... I was wondering if we could discuss why VOLATILE is needed for the double-checked singleton pattern? When Thread2 acquires the lock after Thread1 has created the instance (for example: instance = new SingletonClass()) and exited the synchronized block, won't that sync all the data with main memory for Thread2 to see when it enters the synchronized block? I am not deeply understanding why volatile helps here.

Jeremy Manson said...

@anonymous - Because you need synchronization on both the reader and writer sides. More information about it is available in my post about DCL

http://jeremymanson.blogspot.com/2008/05/double-checked-locking.html

Prem said...

Hi Jeremy, thanks for a great post...
I often wonder why I would add synchronization or a locking mechanism inside the run() implementation, which makes other threads wait. Is there any other way I can address these "atomicity" and "visibility" issues, such as by using ThreadLocal or the thread context?

Jeremy Manson said...

If you don't want to share data with another thread, then a ThreadLocal is a perfectly reasonable thing to use. If you do want to share data with another thread, then you have to use some form of explicit communication between threads, whether by locking, or use of volatile variables, or another well-documented mechanism.

Angel said...

Thanks for writing such a good and informative article.

Anonymous said...

"check the box for details"

Which box?

Jeremy Manson said...

Nine years later, I have no idea. I've deleted the offending sentence.

Anonymous said...

In the Ordering section you said "In this case, you can throw a lock around threadOne or threadTwo, or you can declare them both to be volatile, and get the ordering you want."
What do you mean? Declare the methods volatile? Why do that?

Jeremy Manson said...

@Anonymous - Poor choice of words on my part. By "them" I meant "a" and "b". Fixed this in the text.