
Roach Motels and The Java Memory Model

I've been asked by a couple of people lately about what a compiler can do with code motion around synchronized blocks. The first person asked me about the following example, so let's run with it:

x = 1;
synchronized(o) {
    z = z + 1;
}
y = 1;

I may have answered this already (at some point), but I feel free to repeat myself.

The JMM has what we call "roach motel" ordering. "Roach Motel" is Black Flag's name for their cockroach trap; in grad school, my apartment had lots and lots and lots of cockroaches, so I find it particularly salient (although I think it was probably Bill Pugh who came up with this association).

In the 1980s, the slogan for Black Flag Roach Motels was "Roaches check in, but they don't check out". The memory model is the same: accesses can be moved into synchronized blocks, but they can't be moved out. So, in this case, you can get:

x = 1;
synchronized(o) {
    z = z + 1;
}
y = 1;

If we want to move the write to y earlier, this code transforms to:

x = 1;
synchronized(o) {
    z = z + 1;
    y = 1; // code motion
}

and transforms to:

x = 1;
synchronized(o) {
    y = 1; // code motion
    z = z + 1;
}

and can't be transformed any more, because the write to y has been moved into the
synchronized block, but can't move out.

Similarly, if we want to move the write to x down:

x = 1;
synchronized(o) {
    z = z + 1;
}
y = 1;

transforms to:

synchronized(o) {
    x = 1; // code motion
    z = z + 1;
}
y = 1;

transforms to:

synchronized(o) {
    z = z + 1;
    x = 1; // code motion
}
y = 1;

... but the write to x can't move any further.

At the end of these sorts of transformations, you can end up with something that looks like this:

synchronized(o) {
    y = 1;
    z = z + 1;
    x = 1;
}

... so the accesses to x, y and z can happen in any order, but once you move them into the synchronized block, you can't move them out again.

This is a tiny bit misleading, of course, because they can go out the direction they came in. But there is no difference between that and not having entered the synchronized block in the first place.

Now, if the compiler can remove the synchronized block altogether, then all the accesses can be moved about freely. This can happen, for example, if o is an object only accessed by a single thread -- the compiler will know that no other thread can lock on it, so it can feel free to remove the synchronization. That's a tale for another day, of course.
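To make that concrete, here is a hypothetical sketch (the class and method names are mine, not from the post) of the kind of code where a JIT's escape analysis can prove that the lock is thread-confined, and therefore elide the synchronization entirely:

```java
public class LockElision {
    static int counter(int n) {
        // 'lock' is allocated here and never escapes this method, so no
        // other thread can ever synchronize on it. A JIT that proves this
        // (via escape analysis) may legally remove the synchronization,
        // at which point the surrounding accesses can be reordered freely.
        Object lock = new Object();
        int z = 0;
        for (int i = 0; i < n; i++) {
            synchronized (lock) {
                z = z + 1;
            }
        }
        return z;
    }

    public static void main(String[] args) {
        System.out.println(counter(10));
    }
}
```

The observable result is identical with or without the lock, which is exactly why the compiler is allowed to remove it.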

Comments

Anonymous said…
Ok, this seems to be the right post to reply to about this. This Wikipedia article (see the last Java example) says that in Java 5 this is a working implementation of double-checked locking:

class Foo {
    private Helper helper = null;
    private boolean initialized = false;

    public Helper getHelper() {
        if (!initialized) {
            synchronized(this) {
                if (!initialized) {
                    helper = new Helper();
                    initialized = true;
                }
            }
        }
        return helper;
    }

    // other functions and members...
}

But can't the writes to 'helper' and 'initialized' be reordered in the synchronized block since 'initialized' is not volatile?

And how the hell can I keep the code formatting in my replies?! :)
Jeremy Manson said…
Gak! Thanks for pointing that out. I'll fix their wagons, but good. Grr..

(You can keep the formatting by using HTML. I tend to put &amp;nbsp; every place I want an indent)
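For completeness, the fix that made double-checked locking work as of Java 5 is to declare the field volatile. The sketch below (the Helper class and its field are mine, added to make it self-contained) uses the local-variable variant popularized by Effective Java:

```java
public class DclDemo {
    static class Helper {
        final int value = 42;
    }

    // 'volatile' is what makes this correct on Java 5+: the volatile write
    // to 'helper' happens-before any later volatile read that sees it, so a
    // reader that observes a non-null reference also observes the fully
    // constructed Helper.
    private volatile Helper helper = null;

    public Helper getHelper() {
        Helper result = helper;        // one volatile read on the fast path
        if (result == null) {
            synchronized (this) {
                result = helper;       // re-check under the lock
                if (result == null) {
                    helper = result = new Helper();
                }
            }
        }
        return result;
    }
}
```

The local variable 'result' exists only to avoid a second volatile read on the common path where the helper is already initialized.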
Tushar Goel said…
Sorry to reply on an old post.

But I think the question asked by Merriti is right, and I didn't find the answer. Jeremy, what do you think about it?
Jeremy Manson said…
I don't understand the comment, Tushar. The Wikipedia article was fixed years ago. It still seems okay.
Tushar Goel said…
Sorry, I was confused. I didn't check the Wikipedia page carefully. My bad. :(
Pushparaj said…
Does this apply even if x, y, and z are declared volatile?
Another question: when is memory flushed from the processor cache to main memory?
1. At each synchronized(object) statement entry and exit?
2. At each volatile read and write?
3. Does it flush the full processor cache (data about all objects), or just the object the volatile variable belongs to or the object the synchronized statement was applied to?
Jeremy Manson said…
@pushparaj

It doesn't apply broadly if x, y, and z are declared volatile.

All of your questions assume that there is some defined point where the memory has to be flushed. The semantics don't specify such a point. The trick is to understand the semantics.

The key thing to understand is the happens-before relationship. Reads generally have to return the last write that happened before them (with lots of caveats). Every action in a thread happens-before actions that happen later in the thread. Synchronization can also imply happens-before ordering. For example, a write to a volatile variable happens-before a read of the same volatile variable.

One way of implementing this is to flush everything to memory when there is a volatile write. Another way would be to figure out what you wrote since the last time you did a flush, and just flush that.

There are even ways in which there might be *no* flushes. If a volatile variable is confined to a single thread, it won't set up any happens-before relationships with other threads, so you just don't need the flushes at all.

That said, if you are looking for implementation strategies, then the JSR-133 cookbook document is useful.
Pushparaj said…
@Jeremy

But shouldn't it flush all writes that are related (transitively, via happens-before) to the write of the volatile variable, assuming some other thread reads that volatile variable?

For example, y is volatile but x is not:

x = 10;
y = 30;

Another thread that reads y should see x as 10.
Jeremy Manson said…
Again, don't mix up happens-before with the notion of a flush. Consider this code:

Thread 1:
x = 10;
y = 30;

Thread 2:
r1 = y;
// thread termination

The compiler could decide not even to execute that read. And then figure out that there are no reads after it for which you need to guarantee happens-before. And then decide that the write to y is thread-local, and not perform any flushing.

As long as the semantics are faithfully observed, there may be a flush, or there may not be.

That said, a volatile write followed by a volatile read of the same variable will produce a happens-before relationship, and provide the kinds of visibility guarantees you are asking about.
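Pushparaj's example can be written out as a sketch (the class and field names are mine). The point is conditional: if Thread 2's volatile read actually sees Thread 1's volatile write, then the earlier plain write to x is guaranteed to be visible too; if it doesn't, there is no guarantee about x at all.

```java
public class VisibilityDemo {
    static int x;              // plain (non-volatile) field
    static volatile int y;     // volatile field

    static void writer() {     // Thread 1
        x = 10;                // (1) plain write
        y = 30;                // (2) volatile write
    }

    static void reader() {     // Thread 2
        if (y == 30) {         // (3) volatile read
            // If (3) sees the write at (2), then (1) happens-before (3),
            // so this read must see x == 10. If (3) does not see (2),
            // nothing at all is guaranteed about x.
            assert x == 10;
        }
    }
}
```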
Pushparaj said…
Is sequential consistency achieved by happens-before semantics alone, or along with some other constructs in the JMM?
Jeremy Manson said…
@pushparaj

Sequential consistency is only guaranteed for volatile field accesses. It is not guaranteed by happens-before semantics.
Pushparaj said…
@Jeremy

I have a question about the final field semantics mentioned in the article http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#finalRight

"Now, having said all of this, if, after a thread constructs an immutable object (that is, an object that only contains final fields), you want to ensure that it is seen correctly by all of the other thread, you still typically need to use synchronization. There is no other way to ensure, for example, that the reference to the immutable object will be seen by the second thread."

I am unable to understand any scenario in which the reference to the immutable object will not be seen by the second thread, as mentioned above. Is it that the reference is not published to the other thread (via main memory)? If the reference is declared volatile, then the other thread will see it. If not, it should eventually see it. Could you provide a small example to help me understand?


The same question is asked here
https://stackoverflow.com/questions/6599058/synchronize-to-ensure-that-reference-to-immutable-object-will-be-seen-by-another

Jeremy Manson said…
@pushparaj - You are assuming there is a main memory that writes are eventually flushed to, and that other threads will reconcile from that main memory. The JMM makes no such assumption. Writes that don't involve synchronization (via locks or volatile or any other mechanism) aren't guaranteed to be seen by other threads.

Depending on how your code is written, your implementation may guarantee that the writes are eventually written. Or, the compiler might be smart enough to elide the writes, so they won't get written. Or, you might be programming to a distributed shared memory machine with no coherence. The JMM is not there to be prescriptive.

If you decide that your implementation will always provide you a certain guarantee, then that's your prerogative, of course, but it is very hard to reason about such things.
Pushparaj said…
Now I understand the reason.
"The compiler might be smart enough to elide the writes": could you explain how this can happen? How can the compiler decide not to write when other threads or the same thread could be reading that value?
Jeremy Manson said…
@Pushparaj:

The compiler doesn't have to care that another thread might read the value unless there is subsequent synchronization. That's, in part, what this posting was about.

The compiler does have to care that a subsequent load in the same thread might read that value, but it can often be clever about that. For example, if it knows where all of the loads in the current thread will happen, it might just keep the value in a register, and let the loads read out of that register.

In practice, it is hard to get *too* clever about not performing writes to static variables. For example, it might not perform a write if it knew that write was immediately overwritten, but it's a bit tricky to create a situation where reasonable code depends on that behavior (unless it is memory mapped IO or something, in which case, you really should be using special-purpose APIs / volatile).

However, it's not that hard to create situations where one thread never sees the write to a static variable. In this case, for example:

Thread 1:
while (static_x != false);

Thread 2:
static_x = true;

The compiler can convert Thread 1 to:
if (static_x) {
  while (true);
}

Because there is no synchronization in Thread 1.
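Conversely, declaring the flag volatile forbids that hoisting. A sketch (the class and names are mine): every test of the flag must be a fresh volatile read, so the loop is guaranteed to terminate once another thread performs the volatile write.

```java
public class SpinFlag {
    // A volatile read cannot be hoisted out of the loop: each iteration
    // must observe the latest value, so the waiting thread is guaranteed
    // to terminate once some other thread performs 'done = true'.
    static volatile boolean done;

    static void await() {
        while (!done) {
            // busy-wait; each test of 'done' is a fresh volatile read
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(SpinFlag::await);
        waiter.start();
        done = true;           // volatile write releases the waiter
        waiter.join();
    }
}
```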
