A question from a viewer. In my talk, I had a class that looked like this:
class Future {
    private volatile boolean ready;
    private Obj data;
    ...
    public synchronized void setOnce(Obj o) {
        if (ready) throw ...;
        data = o;
        ready = true;
    }

    public Object get() {
        if (!ready)
            return null;
        return data;
    }
}
The setOnce method is executed by one thread, and get is executed by another. The point of this example is to examine what difference the volatile modifier on ready makes. In this case, if ready is not marked volatile, the compiler can reorder the writes to data and ready. This has the result that the thread that invokes get can see the value true when it reads ready, even if data has not been set yet. Messy.
The questioner asked why setOnce is synchronized. This is somewhat orthogonal to the example, which is why I didn't mention it in the talk. However, I thought it was important enough to include it on the slide. If this method is not synchronized, and multiple threads call it, then those threads can interfere with each other. The takeaway message here is that volatile is not a magic wand that can take away your concurrency issues with no additional cost.
Why is it in there, if I specified a single writer when I talked about it? What happened here was that I wrote this, said, "Do people really want to write this code without the synchronized block?", answered "No," and put in the synchronized block.
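To make the interference concrete, here is a hypothetical sketch (the class name UnsafeFuture and the exception are mine, not the code from the talk) of what can happen when the synchronized keyword is dropped and two threads race into setOnce:

class UnsafeFuture {
    private volatile boolean ready;
    private Object data;

    // NOT synchronized: two threads can both pass the if (ready) check
    // before either one sets ready, so the "set once" guarantee is broken
    // and one value silently overwrites the other.
    public void setOnce(Object o) {
        if (ready) throw new IllegalStateException("already set");
        data = o;
        ready = true;
    }

    public Object get() {
        return ready ? data : null;
    }
}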
Comments
Usually, when people talk about "read barriers", they mean something that is performed every time a read from the heap is performed. This is usually in the context of garbage collection, and has nothing to do with memory consistency issues.
I just wanted to know if the code you posted does have a potential problem, or if the fact that 'ready' is volatile somehow guarantees that 'data' will be read from main memory and not from the processor cache, even though it's not in a synchronized block and is not volatile. Because that's what your slide set suggests (slide 28): that visibility of 'data' is guaranteed by the fact that 'ready' is volatile. And I don't get that.
Thanks! Google Tech Talks Rule! :)
In general, volatile is implemented using memory barriers to ensure that writes will go out to main memory.
I have a quick question (I just found your blog and it's awesome).
The synchronized block is used not only as a gateway but also for visibility. Since 'get()' is not synchronized, is it possible for 'data' to be null while 'ready' is true?
I understand that this article is to show 'volatile' and its effect on 'reordering', but it still does not achieve the desired effect because 'data' can be null.
Thanks for the kind words. volatile can also be used to provide visibility and ordering guarantees. In this case, that's exactly what it is doing. All of the writes that happen before the volatile write to ready are going to be ordered before and visible to anything that happens after a read of that write.
In this case, the write is the value true. The get() method sees the value true. Therefore, the write to data is going to be ordered before and visible to the read of data that happens after the read of ready.
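To spell that chain out, here is a minimal, hypothetical harness (my code, not from the post; run with -ea to enable the assert) showing the guarantee: once the reader's volatile read of ready observes true, the earlier ordinary write to data must be visible.

public class HappensBeforeDemo {
    private volatile boolean ready;
    private int data;                  // deliberately NOT volatile

    void writer() {
        data = 42;                     // (1) ordinary write
        ready = true;                  // (2) volatile write publishes (1)
    }

    void reader() {
        if (ready) {                   // (3) volatile read
            // If (3) saw the write in (2), then (1) is visible here.
            assert data == 42;         // (4) cannot fail under the JMM
        }
    }

    public static void main(String[] args) throws InterruptedException {
        HappensBeforeDemo d = new HappensBeforeDemo();
        Thread w = new Thread(d::writer);
        Thread r = new Thread(d::reader);
        w.start();
        r.start();
        w.join();
        r.join();
    }
}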
Can you please let me know if the following code is bug-free?
// Implements a looped wait() under a condition,
// as recommended by Doug Lea / Joshua Bloch.
class MyMonitor
{
    private boolean flag = false;

    void myWait() throws InterruptedException
    {
        synchronized (this)
        {
            while (flag)
            {
                this.wait();
            }
            flag = true;
        }
    }

    void myNotify()
    {
        synchronized (this)
        {
            flag = false;
            this.notify();
        }
    }
}
Thread A calls myWait();
Thread B calls myWait();
Thread A calls myNotify();
Is it possible that Thread B will wake up and still see flag=true, since 'flag' is not volatile?
And by the way, is there any way to check whether code like the above is bug-free? :-)
Thanks.
Krishna
I would like to know how synchronizing on a monitor influences the ordering. Will synchronization on any monitor invalidate 'all' local variables and force loading from main memory?
Nope. Thinking about this in terms of invalidations is not quite right. You have to think about it in terms of the happens-before relationship. All updates that are made to heap variables that are ordered before an unlock on a particular monitor are also ordered before any subsequent lock of that monitor. So random synchronization -- for example, "synchronized(new Object())" -- is not guaranteed to do anything.
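A small sketch of that point, assuming a shared lock field (the class and field names here are mine): the happens-before edge exists only between an unlock and a later lock of the same monitor.

class MonitorVisibility {
    private final Object lock = new Object();
    private int shared;

    void writer() {
        synchronized (lock) {              // the unlock at the end of this block...
            shared = 1;
        }
    }

    int reader() {
        synchronized (lock) {              // ...happens-before this lock of the same monitor,
            return shared;                 // so if the unlock came first, this read sees 1
        }
    }

    int brokenReader() {
        synchronized (new Object()) {      // a fresh monitor nobody else ever locks:
            return shared;                 // no happens-before edge, no visibility guarantee
        }
    }
}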
Can you please let me know if the following code is bug-free?
I'm not sure what you are trying to do in the code, but, the way you phrased it, it seems to me as if both Thread A and Thread B will call wait and never wake up. Thread A won't be able to call notify(). Perhaps I'm confused?
Thread A doesn't go into the loop the first time, so it enters the "critical section" and sets flag = true.
Thread B is blocked in the loop (waiting on the critical section).
Thread A releases the flag (flag = false), so Thread B is allowed to enter the critical section.
It works without the volatile keyword.
----
Why is it that Thread A and Thread B both wait indefinitely when calling myWait?
Isn't there a check that prevents the call to wait?
Say Thread A calls myWait, acquires the lock, and sets flag = true, since flag wasn't modified by any thread before.
Then Thread B will go on to wait and will be notified by Thread A using myNotify().
Is the problem you are describing caused by the "ordering" of the "execution" of the different statements?
I mean that flag is set to true before the while loop is executed by any other thread.
Is that the problem you are pointing to?
@Jeremy: when Thread A calls myWait, it will NOT wait, because flag is initially false, so it won't go into the loop and this.wait() won't be called. That is, if Thread A calls myWait before Thread B does. What you are saying is correct only if Thread B enters the synchronized block before Thread A does, but the ordering here is given as A.myWait, B.myWait, A.myNotify (A enters the block first).
I guess I am being picky, but your comment is somewhat confusing without this detail.
I was wondering about the visibility guarantees of volatile accesses. The JMM clearly says that
Sorry for the previous, incomplete comment. I accidentally hit Enter. I wanted to ask something related to Chapter 8 of the JVMS, 2nd edition, but I have since observed that Sun explicitly says it has been replaced by JSR 133. So my question is not relevant anymore. But thanks anyway!
Best regards,
Zoltan Majo
I just saw your great Google Tech talk on the Java Memory model - very enlightening, thank you.
To help me understand further, I've been writing a few Java tests that demonstrate the various concurrency issues you mentioned, but I've become a bit stuck.
I was wondering if you'd mind casting your eye over the following issue....
Any help appreciated,
James Siddle
The issue is that I have two threads incrementing a volatile, unsynchronized int; each thread increments it 1000 times. This was to see the effect of multiple threads writing with non-atomic operations. The final result is often 2000, sometimes around 1500, but occasionally less than 1000.
The cases where the number is less than 1000 surprised me, because I would have expected at worst to be losing 1 in every two writes, because the other thread is working from an earlier read.
Any thoughts as to why I'm seeing less than 1000?
for (int i = 0; i < 1000; i++) {
    v = v + 1;
}
This actually means this:
for (int i = 0; i < 1000; i++) {
    r1 = v + 1;
    v = r1;
}
Imagine the following:
Thread 1:
    r1 = v + 1;    // r1 == 1
Thread 2:
    r1 = v + 1;
    v = r1;
    // repeat 100 times; now v == 100
Thread 1:
    v = r1;        // v is now 1
That's called a lost update.
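Here is a hypothetical, runnable version of the experiment being discussed (the class name and structure are mine, not the questioner's code); the final value is frequently below 2000 and can drop below 1000 when one thread's stale write clobbers a long run of the other thread's updates:

public class LostUpdateDemo {
    private static volatile int v = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable incrementer = () -> {
            for (int i = 0; i < 1000; i++) {
                v = v + 1;          // read-modify-write: not atomic, even though v is volatile
            }
        };
        Thread t1 = new Thread(incrementer);
        Thread t2 = new Thread(incrementer);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("final value: " + v);   // often less than 2000, occasionally less than 1000
    }
}

Replacing the increment with java.util.concurrent.atomic.AtomicInteger's incrementAndGet(), or putting it in a synchronized block, makes the update atomic and always yields 2000.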
The volatile keyword should not be used by beginners. Also, please stop sending messages containing irrelevant links to your blog post on volatile (or anything else). I don't believe in increasing people's Google rankings that way, and will be deleting your post soon.
If I synchronize get(), I can omit volatile on ready, am I right?
Thanks.
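For reference, a sketch of the variant that question describes (my code, not from the post): if both methods synchronize on the same monitor, the lock already provides the visibility and ordering, so ready no longer needs to be volatile.

class SynchronizedFuture {
    private boolean ready;             // no volatile needed: the monitor provides visibility
    private Object data;

    public synchronized void setOnce(Object o) {
        if (ready) throw new IllegalStateException("already set");
        data = o;
        ready = true;
    }

    public synchronized Object get() {
        return ready ? data : null;
    }
}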