I've been asked by a couple of people lately about what a compiler can do with code motion around synchronized blocks. The first person asked me about the following example, so let's run with it:
x = 1;
synchronized(o) {
  z = z + 1;
}
y = 1;
I may have answered this already (at some point), but I feel free to repeat myself.
The JMM has what we call "roach motel" ordering. "Roach Motel" is Black Flag's name for their cockroach trap; in grad school, my apartment had lots and lots and lots of cockroaches, so I find it particularly salient (although I think it was probably Bill Pugh who came up with this association).
In the 1980s, the slogan for Black Flag Roach Motels was "Roaches check in, but they don't check out". The memory model is the same: accesses can be moved into synchronized blocks, but they can't be moved out. So, in this case, starting with:
x = 1;
synchronized(o) {
  z = z + 1;
}
y = 1;
if we want to move the write to y early, this code transforms to:
x = 1;
synchronized(o) {
  z = z + 1;
  y = 1; // code motion
}
and transforms to:
x = 1;
synchronized(o) {
  y = 1; // code motion
  z = z + 1;
}
and can't be transformed any more, because the write to y has been moved into the synchronized block, but can't move out.
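To see why the write to y can't leave the block once it is inside, consider another thread that acquires the same lock afterwards. Here is a minimal sketch (the class and method names are mine, not from the post): a later acquire of o must see everything done before the matching release, so hoisting y = 1 past the unlock could expose a state the original program never produces.

```java
// Sketch: once "y = 1" is inside the synchronized block, it is released
// with the lock; a later acquire of the same lock must see it.
class RoachMotel {
    static int y = 0;
    static int z = 0;
    static final Object o = new Object();

    static void writer() {
        synchronized (o) {
            y = 1;       // moved in by the compiler; published at unlock
            z = z + 1;
        }
        // If "y = 1" were moved down here, past the unlock, a reader
        // holding the lock could observe the new z but the old y --
        // a state the original program could never produce.
    }

    public static void main(String[] args) {
        writer();
        synchronized (o) { // a later acquire of the same lock...
            System.out.println(y + " " + z); // ...must see y == 1, z == 1
        }
    }
}
```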
Similarly, if we want to move the write to x down:
x = 1;
synchronized(o) {
  z = z + 1;
}
y = 1;
transforms to:
synchronized(o) {
  x = 1; // code motion
  z = z + 1;
}
y = 1;
transforms to:
synchronized(o) {
  z = z + 1;
  x = 1; // code motion
}
y = 1;
.. but the write to x can't move any further.
At the end of these sorts of transformations, you can end up with something that looks like this:
synchronized(o) {
  y = 1;
  z = z + 1;
  x = 1;
}
... so the accesses to x, y and z can happen in any order, but once you move them into the synchronized block, you can't move them out again.
This is a tiny bit misleading, of course, because they can go out the direction they came in. But there is no difference between that and not having entered the synchronized block in the first place.
Now, if the compiler can remove the synchronized block altogether, then all the accesses can be moved about freely. This can happen, for example, if o is an object only accessed by a single thread -- the compiler will know that no other thread can lock on it, so it can feel free to remove the synchronization. That's a tale for another day, of course.
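As a sketch of when that removal is possible (the class and method names here are mine): if the lock object provably never escapes the method, no other thread can ever synchronize on it, so the JIT's escape analysis may elide the lock entirely, and the accesses inside it are then free to move.

```java
// Sketch: a lock that escape analysis can prove is thread-local,
// so the synchronization can be removed without changing behavior.
class Elision {
    static int compute() {
        Object local = new Object(); // never escapes this method
        int z = 0;
        synchronized (local) {       // no other thread can ever lock 'local'
            z = z + 1;               // so this block imposes no cross-thread
        }                            // ordering, and the lock can be elided
        return z;
    }

    public static void main(String[] args) {
        System.out.println(compute()); // same result with or without the lock
    }
}
```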
Comments
class Foo {
  private Helper helper = null;
  private boolean initialized = false;
  public Helper getHelper() {
    if (!initialized) {
      synchronized(this) {
        if (!initialized) {
          helper = new Helper();
          initialized = true;
        }
      }
    }
    return helper;
  }
  // other functions and members...
}
But can't the writes to 'helper' and 'initialized' be reordered in the synchronized block since 'initialized' is not volatile?
And how the hell can I keep the code formatting in my replies?! :)
(You can keep the formatting by using HTML. I tend to put every place I want an indent)
But I think the question asked by Merriti is right, and I didn't find the answer. Jeremy, what do you think about it?
Another question: when is memory flushed from the processor cache to main memory?
1. At each synchronized(object) statement entry and exit?
2. At each volatile read and write?
3. Does it flush the full processor cache, which includes data about all objects, or just the object that the volatile variable belongs to or that the synchronized statement was applied to?
It doesn't apply broadly if x, y, and z are declared volatile.
All of your questions assume that there is some defined point where the memory has to be flushed. The semantics don't specify such a point. The trick is to understand the semantics.
The key thing to understand is the happens-before relationship. Reads generally have to return the last write that happened before them (with lots of caveats). Every action in a thread happens-before actions that happen later in the thread. Synchronization can also imply happens-before ordering. For example, a write to a volatile variable happens-before a read of the same volatile variable.
One way of implementing this is to flush everything to memory when there is a volatile write. Another way would be to figure out what you wrote since the last time you did a flush, and just flush that.
There are even ways in which there might be *no* flushes. If a volatile variable is confined to a single thread, it won't set up any happens-before relationships with other threads, so you just don't need the flushes at all.
That said, if you are looking for implementation strategies, then the JSR-133 cookbook document is useful.
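The happens-before point above can be made concrete with a small sketch (the class and field names are mine): a plain write, followed by a volatile write in one thread and a volatile read of the same variable in another, is enough to make the plain write visible, with no "flush point" appearing anywhere in the program.

```java
// Sketch: the volatile write to 'ready' happens-before the volatile read
// that sees it, so the earlier plain write to 'x' is also visible.
class Publication {
    static int x;                   // plain (non-volatile) field
    static volatile boolean ready;  // volatile flag

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            x = 42;        // happens-before the volatile write below
            ready = true;  // volatile write
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }       // spin on the volatile read
            System.out.println(x);   // guaranteed to print 42
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```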
But it should flush all writes that are transitively related (by happens-before) to the write of the volatile variable, assuming some other thread reads that volatile variable.
For example, where y is volatile but x is not:
x = 10;
y = 30;
Another thread that reads y should then see x as 10.
Thread 1:
  x = 10;
  y = 30;
Thread 2:
  r1 = y;
  // thread termination
The compiler could decide not even to execute that read. And then figure out that there are no reads after it for which you need to guarantee happens-before. And then decide that the write to y is thread-local, and not perform any flushing.
As long as the semantics are faithfully observed, there may be a flush, or there may not be a flush.
That said, a volatile write followed by a volatile read of the same variable will produce a happens-before relationship, and provide the kinds of visibility guarantees you are asking about.
Sequential consistency is only guaranteed for volatile field accesses. It is not guaranteed by happens-before semantics.
I have a question about exactly the final field semantics mentioned in this article: http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#finalRight
"Now, having said all of this, if, after a thread constructs an immutable object (that is, an object that only contains final fields), you want to ensure that it is seen correctly by all of the other thread, you still typically need to use synchronization. There is no other way to ensure, for example, that the reference to the immutable object will be seen by the second thread."
I am unable to understand a scenario in which the reference to the immutable object would not be seen by the second thread, as mentioned above. Is it that the reference is not published to the other thread (via main memory)? If the reference is declared as volatile, then the other thread will see it; if not, it should eventually see it. Could you provide a small example to help me understand?
The same question is asked here
https://stackoverflow.com/questions/6599058/synchronize-to-ensure-that-reference-to-immutable-object-will-be-seen-by-another
Depending on how your code is written, your implementation may guarantee that the writes are eventually written. Or, the compiler might be smart enough to elide the writes, so they won't get written. Or, you might be programming to a distributed shared memory machine with no coherence. The JMM is not there to be prescriptive.
If you decide that your implementation will always provide you a certain guarantee, then that's your prerogative, of course, but it is very hard to reason about such things.
" the compiler might be smart enough to elide the writes," could you explain me how ca n this happen? How can compiler decide not to write when other threads or same thread could be reading that value.
The compiler doesn't have to care that another thread might read the value unless there is subsequent synchronization. That's, in part, what this posting was about.
The compiler does have to care that a subsequent load in the same thread might read that value, but it can often be clever about that. For example, if it knows where all of the loads in the current thread will happen, it might just keep the value in a register, and let the loads read out of that register.
In practice, it is hard to get *too* clever about not performing writes to static variables. For example, it might not perform a write if it knew that write was immediately overwritten, but it's a bit tricky to create a situation where reasonable code depends on that behavior (unless it is memory mapped IO or something, in which case, you really should be using special-purpose APIs / volatile).
However, it's not that hard to create situations where one thread never sees the write to a static variable. In this case, for example:
Thread 1:
  while (static_x == false);
Thread 2:
  static_x = true;
The compiler can convert Thread 1 to:
if (static_x == false) {
  while (true);
}
Because there is no synchronization in Thread 1.