I just gave a talk in which I said that people could post questions on the Java memory model here. So feel free to post JMM questions below!
I still get a lot of questions about whether double-checked locking works in Java, and I should probably post something to clear it up. And I'll plug Josh Bloch's new book, too. Double-checked locking is this idiom:

// Broken -- Do Not Use!
class Foo {
  private Helper helper = null;
  public Helper getHelper() {
    if (helper == null) {
      synchronized (this) {
        if (helper == null) {
          helper = new Helper();
        }
      }
    }
    return helper;
  }
}

The point of this code is to avoid synchronization when the object has already been constructed. This code doesn't work in Java. The basic principle is that compiler transformations (this includes the JIT, which is the optimizer that the JVM uses) can change the code around so that the code in the Helper constructor occurs after the write to the helper variable. If it does this, then after the constructing thread writes to helper, but before it actually finishes constructing the object, another thread can read helper, see that it is not null, and start using an object whose fields have not yet been initialized.
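For completeness, one commonly used fix (sketched here, not a full treatment) is to declare the helper field volatile, which works under the new (Java 5) memory model. The extra local variable just avoids a second volatile read on the common path:

class Foo {
  private volatile Helper helper = null;
  public Helper getHelper() {
    Helper result = helper;            // one volatile read on the fast path
    if (result == null) {
      synchronized (this) {
        result = helper;
        if (result == null) {
          helper = result = new Helper();  // the volatile write publishes the fully constructed object
        }
      }
    }
    return result;
  }
}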
Comments
First, it was a very nice talk. My question is about the code shown below:
class Future {
  private volatile boolean ready;
  private Object data;
  ...
  public synchronized void setOnce(Object o) {
    if (ready) throw ...;
    data = o;
    ready = true;
  }
}
Do we need the synchronized keyword on setOnce even though we have already guaranteed no reordering around the ready variable?
Thanks, and I'll wait for the follow-up talk.
Feel free not to wait for the follow-up talk to ask questions, because it might not happen for a good while yet...
I just watched your presentation over Google Video, and it was a good refresher for me about how the JMM works. But I wanted to ask a question that I haven't seen addressed in any of the JMM or Java concurrency discussions, which are very good at explaining why the BAD coding patterns don't work, but not so good at explaining why the GOOD ones do. In particular, why does a properly synchronized mutable wrapper class like the one below work:
class C {
private int x;
public C(int v) {
this.x = v;
}
public synchronized int get() {
return this.x;
}
public synchronized void set(int v) {
this.x = v;
}
}
In particular, does get() return the value assigned to x in the constructor? Especially in the case where the object is initialized in one thread, and get() is called from a different thread before any other thread ever calls the (synchronized) set(int) method.
The answer I have come up with is that it works correctly only if the transfer of the reference to the C object between the threads is done correctly:
// thread1
synchronized (Global.lock) {
Global.cRef = new C(5);
}
// thread2
C myCref;
synchronized (Global.lock) {
myCref = Global.cRef;
}
int v = myCref.get();
(Or alternatively, cRef can be a volatile field.)
In this case, the unlock of Global.lock in thread1 makes all previous heap actions of thread1 visible to the next thread that locks Global.lock. So when thread2 locks Global.lock the write of 5 to the x field of the C object is made visible to it. In particular, the synchronization in the get() method, in this case, plays no role in making the value of x visible or not to thread2. Subsequent synchronization on the C object, however, makes changes to x visible between the two threads.
So the punchline is that the initial visibility of the field x is piggybacking on the synchronization used to make the value of Global.cRef visible between the two threads.
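To make the volatile alternative concrete, here is a sketch of what I have in mind (Global is the same hypothetical holder class as above):

class Global {
  static volatile C cRef;    // volatile publication instead of synchronizing on Global.lock
}
// thread1
Global.cRef = new C(5);      // the constructor's write of x happens-before this volatile write
// thread2
C myCref = Global.cRef;      // volatile read
if (myCref != null) {
  int v = myCref.get();      // sees x == 5, piggybacking on the volatile write/read pair
}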
Am I correct in my analysis here?
I did mention in my talk that overuse of locks can lead to contention problems. I also mentioned that there exist a number of good, high performance algorithms, especially in packages like java.util.concurrent. I strongly advise everyone to take a look at these, and did so in my talk, as well.
However, I would not advise anyone to go out and write their own lock-free algorithms for production unless they have a lot of experience writing non-blocking, concurrent data structures. They are a lot more difficult to get right than locks.
I saw your talk on Google Video. It was really very informative. But I didn't understand the problem with double-checked locking without making the variable volatile, in the code below:
Helper helper;
Helper getHelper() {
  if (helper == null) {
    synchronized (this) {
      if (helper == null) {
        helper = new Helper();
      }
    }
  }
  return helper;
}
The Helper object will be created when the first thread enters the synchronized block. Until then, if any other thread reads helper, the value would be null. Only after helper is "created" will the reference be assigned to the helper variable, and this assignment will be an atomic operation. So how does some other thread get a garbage value of helper?
If you still have questions after reading the explanation above, come back and post again, and I'll explain it in more detail.
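(As an aside: for lazily initialized static singletons, there is also the initialization-on-demand holder idiom, which is correct without volatile because it piggybacks on the guarantees the JLS gives for class initialization. A quick sketch, not tied to the code above:)

class HelperSingleton {
  private static class HelperHolder {
    // Class initialization is properly synchronized by the JVM, so INSTANCE
    // is safely published to every thread that triggers the initialization.
    static final Helper INSTANCE = new Helper();
  }
  public static Helper getHelper() {
    return HelperHolder.INSTANCE;  // first call here triggers HelperHolder's initialization
  }
}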
One question about a usage example in JDK 1.5 that is puzzling me.
In java.util.concurrent.locks, the documentation for the LockSupport class gives an example of a first-in-first-out non-reentrant lock class.
It looks like a user of this class must declare the FIFOMutex reference as final, or else the program may not behave consistently. Or is there something I am missing?
Thanks.
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

class FIFOMutex {
  private AtomicBoolean locked = new AtomicBoolean(false);
  private Queue<Thread> waiters = new ConcurrentLinkedQueue<Thread>();

  public void lock() {
    boolean wasInterrupted = false;
    Thread current = Thread.currentThread();
    waiters.add(current);
    // Block while not first in queue or cannot acquire lock
    while (waiters.peek() != current ||
           !locked.compareAndSet(false, true)) {
      LockSupport.park();
      if (Thread.interrupted()) // ignore interrupts while waiting
        wasInterrupted = true;
    }
    waiters.remove();
    if (wasInterrupted) // reassert interrupt status on exit
      current.interrupt();
  }

  public void unlock() {
    locked.set(false);
    LockSupport.unpark(waiters.peek());
  }
}
After a FIFOMutex instance is created and assigned to a volatile variable (say fifoMutex), threads trying to use fifoMutex as a lock will have to read the volatile reference fifoMutex before they can use it, which makes the two fields locked and waiters (in FIFOMutex) properly published to those threads. This is because, under the new semantics of volatile, writes to regular variables cannot be reordered with the subsequent volatile write of fifoMutex.
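To make the publication I am describing concrete, here is a small sketch (LockHolder and the field name are made up):

class LockHolder {
  // The volatile write below cannot be reordered with the earlier writes that
  // initialize locked and waiters, so any thread that later reads fifoMutex and
  // sees this value also sees those fields properly initialized.
  static volatile FIFOMutex fifoMutex = new FIFOMutex();
}

// a thread using the lock:
// LockHolder.fifoMutex.lock();
// try { ... } finally { LockHolder.fifoMutex.unlock(); }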
Thanks and will wait for your comments.
I think you must have meant to say "Other thread reads helper Equal to Null", but you said "... helper Not Equal to null".
Considering the context, I think it is not a trivial mistake!
Overall, it was a nice presentation.
Thanks for the kind words!
Please help me out with one example from your thesis here:
Figure 4.1 (from your thesis):
Initially, x == y == 0
===T1====||====T2=====
r2 = x   ||  r1 = y
y = 1    ||  x = 2
may return: r2 == 2, r1 == 1
To get r2 == 2, r1 == 1, we want the compiler to reorder the instructions as follows:
Initially, x == y == 0
===T1====||====T2=====
y = 1    ||  x = 2
r2 = x   ||  r1 = y
Then, do the following executions allow the above example to be accepted according to the JMM:
E1: C1: x = 0, y = 0, y = 1
E2: C2: x = 0, y = 0, x = 2
E3: here, r2 = x sees 0, according to rule 6
E4: C3: r2 = 2, by rule 7
E5: by rule 6, r1 = y sees 0
E6: C4: r1 = 1, by rule 7
Can you please clarify the meaning of "set" in the following definition:
"A set of actions A is happens-before consistent if for all reads r in A, where W(r) is the write action seen by r, it is not the case that either hb(r, W(r)) or that there exists a write w in A such that w.v = r.v and hb(W(r), w) and hb(w, r)."?
Is it a mathematical set that has no ordering?
Thanks
Should it instead say that W on A, where W(r) is the write action (in A) seen by r, is happens-before consistent if, for all reads r in A, ...?
Having said that, the justification you are giving is longer than it has to be: you can commit the reads performed by r1 and r2 in a single step (step 3). No reason to do them one at a time. (Also, you are missing lots of descriptions of the contents of sets, but I can infer the ones you left out).
http://stackoverflow.com/questions/5066866/testing-initialization-safety-of-final-fields
I have the following code:
class A {
  private volatile int v = 2;

  public void thread1() {
    int myV = v;
    // do lots of calculations using myV
  }

  public void thread2() {
    while (true) {
      // keep changing v
    }
  }
}
There are two threads, each one running one of the methods.
Does the local copy of v in the thread1 method guarantee that, while "doing lots of calculations using myV", myV will always have the same value?
If it does, how can I justify this behavior using the JMM's rules?
For example:
class C {
int x = 1;
static volatile C c = new C();
}
Thread 1:
C myC = C.c;
int myX = myC.x;
Thread 2:
while (true) {
C.c.x++;
}
In the above case, you've changed the value of c.x, but not the value of c. The changes to c.x can be seen by Thread 1, and you can't say anything useful about the value of myX. However, if Thread 2 looks like this:
Thread 2:
int val = 0;
while (true) {
C.c = new C();
C.c.x = val++;
}
Then you are changing the value of c, and thread 1 will read the single x value associated with the appropriate c. Does that make sense?
The most difficult point for me is to understand how (and by which rules) the JMM guarantees such behavior, i.e., how does the JMM disallow (in this case) a code optimization that reads the value of "v" again instead of reading from "myV"?
In this case, I imagine it would be pretty easy to find a counterexample for any particular transformation of the kind you mention: that is, it would be pretty easy to construct a transformed program of the kind you describe that has behavior that the original wouldn't have had under the model. I leave constructing such an example and walking through its possible behaviors as an exercise for the reader.
I was reading an old email about the JMM and String#hashCode() (http://www.cs.umd.edu/~pugh/java/memoryModel/archive/2179.html) and I could not understand restriction number 7.
As far as I understand the JMM, if we use something like "if (hash != 0) return hash;" and the test does not fail, we are allowed to see only a non-zero value that represents the computed hashCode. That is, if a thread reads a value once and nobody else updated that memory location (or updated it by writing the same value), the thread will necessarily read the same value again.
Could you please explain it better?
Thanks,
Luis
Your understanding is not quite correct. If you have:
if (hash != 0) {
return hash;
}
The model posits two reads of the value for hash. The model also allows the compiler / runtime system to perform these reads in the opposite order. So, if you have compiled to this:
Thread 1:
r1 = hash;
r2 = hash;
if (r2 != 0)
return r1;
Thread 2:
hash = 1;
Then you could easily imagine thread 1 returning 0. It isn't terribly likely for the compiler to do it this way in this case, but you could imagine similar cases where it is likely. I'll leave those as an exercise for the reader.
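(For what it's worth, the usual way to write this kind of racy caching so that the double read can't bite you is to read the field into a local exactly once. A sketch, with made-up names; it relies on the cached value being deterministic:)

class CachedHash {
  private final String value;
  private int hash;                  // cached hash; 0 means "not computed yet"

  CachedHash(String value) {
    this.value = value;
  }

  public int hashCode() {
    int h = hash;                    // read the field exactly once
    if (h == 0) {
      h = value.hashCode();          // deterministic, so every thread computes the same value
      hash = h;                      // benign race: worst case, the value is recomputed
    }
    return h;
  }
}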
But based on your compiled code, supposing Thread 1 reads a non-zero value on the first read: does the JMM posit that it will read the same non-zero value on the second read (supposing nobody else updated that memory location, or updated it by writing the same value that was already there)?
I was studying the C++ memory model and comparing it with the Java memory model when a doubt arose. The JMM definition states that an execution has a "synchronization order, which is a total order over all synchronization actions in A". That is, as far as I understood, there is an order over all synchronization actions even if they refer to different memory locations.
The C++ memory model, however, states that there is a total order only among actions that refer to the same memory location.
My question is: how does an implementation guarantee this total order? Does it employ memory barriers for every synchronization action in order to achieve it?
Thanks for your time!
http://gee.cs.oswego.edu/dl/jmm/cookbook.html
Thanks a lot for your talk. It is very informative and I learned a lot from it.
But I still have a doubt.
In the following scenario:
initial: a = 0; b = 0;
T1:
a = 1;
lock;
b = 1;
unlock;

T2:
lock;
read b;
unlock;
read a;
Can I say the value of a read by T2 must be 1?
To my understanding, the unlock in T1 flushes both the values of a and b to main memory, and the lock in T2 makes sure both read a and read b can get the latest values.
Am I right?
I've just watched your talk and read 'The Java Memory Model' paper. I understand most of it, but my understanding of actions is failing a bit. I understand the definition of an action, but not how the source code actually translates to actions. From the paper, I've deduced that actions aren't atomic (e.g. r2 = y is a single action, despite it being a read of y and a write to r2). How does this translate to an if statement, for example? Would if(r1 == y && r2 == x) be a single action?