
JMM questions go here

I just gave a talk in which I said that people could post questions on the Java memory model here. So feel free to post JMM questions below!

Comments

Marcell Manfrin said…
Hi Jeremy,

First of all, it was a very nice talk. My question is about the code shown below:

class Future {
  private volatile boolean ready;
  private Obj data;
  ...
  public sync void setOnce(Obj o) {
    if (ready) throw ...;
    data = o;
    ready = true;
  }
}

Do we need the sync keyword on setOnce even though we are guaranteed no reordering of the ready variable?

Thanks, and I'll wait for the follow-up talk.
Jeremy Manson said…
Answered here.

Feel free not to wait for the follow-up talk to ask questions, because it might not happen for a good while yet...
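As an aside, separate from the answer linked above: one thing the volatile ready flag alone does not give this code is atomicity of the check-then-act. A minimal sketch of that failure mode, with the exception type chosen purely for illustration, assuming setOnce were left unsynchronized:

class Future {
  private volatile boolean ready;
  private Object data;

  public void setOnce(Object o) {   // note: no synchronized here
    if (ready) throw new IllegalStateException("already set");
    // Two threads can both reach this point while ready is still false,
    // so both writes "win" and the set-once contract is broken.
    data = o;
    ready = true;
  }
}

The volatile field still orders the write to data before the write to ready, so a reader that sees ready == true also sees the data; synchronization is what would make the check and the assignment a single atomic step.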
Jeremy Manson said…
And thanks for the kind words!
Hi Jeremy,

I just watched your presentation on Google Video, and it was a good refresher for me on how the JMM works. But I wanted to ask a question that I haven't seen addressed in any of the JMM or Java concurrency discussions, which are very good at explaining why the BAD coding patterns don't work, but not so good at explaining why the GOOD ones do. In particular, why does a properly synchronized mutable wrapper class like the one below work:

class C {
  private int x;

  public C(int v) {
    this.x = v;
  }

  public synchronized int get() {
    return this.x;
  }

  public synchronized void set(int v) {
    this.x = v;
  }
}

Specifically, does get() return the value assigned to x in the constructor, particularly in the case where the object is initialized in one thread, and get() is called from a different thread before any thread ever calls the (synchronized) set(int) method?

The answer I have come up with is it works correctly only if the transfer of the reference to the C object between the threads is done correctly:

// thread1
synchronized (Global.lock) {
  Global.cRef = new C(5);
}

// thread2
C myCref;
synchronized (Global.lock) {
  myCref = Global.cRef;
}
int v = myCref.get();

(Or alternatively, cRef can be a volatile field.)

In this case, the unlock of Global.lock in thread1 makes all previous heap actions of thread1 visible to the next thread that locks Global.lock. So when thread2 locks Global.lock, the write of 5 to the x field of the C object becomes visible to it. The synchronization in the get() method plays no role here in making the value of x visible to thread2. Subsequent synchronization on the C object, however, makes changes to x visible between the two threads.

So the punchline is that the initial visibility of the field x is piggybacking on the synchronization used to make the value of Global.cRef visible between the two threads.

Am I correct in my analysis here?
Eugene Kuleshov said…
You've been pushing synchronization in your talk, and it can be a really dangerous route if it is done without thinking about the consequences. The problem with any kind of synchronization is that it introduces contention, and as more threads try to acquire a lock, contention grows and performance goes down. Generally there is no universal solution, but in some cases we can use lock-free algorithms (Cliff Click gave a great example) or try to reduce contention (e.g. the tricks used in ReentrantReadWriteLock, ConcurrentHashMap and other classes from java.util.concurrent).
Jeremy Manson said…
Hi Eugene,

I did mention in my talk that overuse of locks can lead to contention problems. I also mentioned that there exist a number of good, high performance algorithms, especially in packages like java.util.concurrent. I strongly advise everyone to take a look at these, and did so in my talk, as well.

However, I would not advise anyone to go out and write their own lock-free algorithms for production unless they have a lot of experience writing non-blocking, concurrent data structures. They are a lot more difficult to get right than locks.
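For readers, a minimal sketch (my example, not from the talk) of what leaning on java.util.concurrent instead of hand-rolling lock-free code can look like; the class below uses AtomicLong, which is already correct under the JMM:

import java.util.concurrent.atomic.AtomicLong;

// A shared counter backed by AtomicLong instead of a hand-written CAS loop
// or a synchronized block; the hard concurrency work lives in the library.
class HitCounter {
  private final AtomicLong hits = new AtomicLong();

  long record() {
    return hits.incrementAndGet(); // atomic, lock-free increment
  }

  long current() {
    return hits.get();
  }
}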
Abhinav said…
Hi Jeremy,

I saw your talk on Google Video. It was really very informative. But I didn't understand the problem with double-checked locking without making the variable volatile.

In the code below:

Helper helper;

Helper getHelper() {
  if (helper == null) {
    synchronized (this) {
      if (helper == null) {
        helper = new Helper();
      }
    }
  }
  return helper;
}

The Helper object will be created when the first thread enters the synchronized block. Until then, if any other thread reads helper, the value would be null. Only after helper is "created" will the reference be assigned to the helper variable, and this assignment will be an atomic operation. So how does some other thread return a garbage value of helper?
Jeremy Manson said…
Abhi: The (main) problem is that the object isn't necessarily constructed before the assignment to helper is made. Have you read this page?

If you still have questions after reading that, come back and post again, and I'll explain it in more detail.
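For readers who want the failure mode spelled out, here is a minimal sketch (my illustration, using a hypothetical one-field Helper) of what Jeremy is describing:

// Hypothetical Helper with a single field initialized in the constructor.
class Helper {
  int value;
  Helper() { value = 42; }
}

class Holder {
  private Helper helper; // not volatile: this is the bug

  Helper getHelper() {
    if (helper == null) {                // unsynchronized read
      synchronized (this) {
        if (helper == null) {
          helper = new Helper();         // the compiler/JIT may publish the reference
        }                                // before the constructor's write to value
      }
    }
    return helper;
  }
}

// A second thread calling getHelper() can see a non-null helper whose fields are
// not yet initialized from its point of view, so helper.value may read as 0, not 42.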
Anonymous said…
Hi Jeremy,

One question about a usage example in JDK 1.5 that is puzzling me.

in java.util.concurrent.locks
Class LockSupport

There is an example of a first-in, first-out non-reentrant lock class.

It looks like a user of this class must declare the FIFOMutex reference as final, or else this program may not behave consistently. Or is there something that I am missing?

Thanks.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

class FIFOMutex {
  private AtomicBoolean locked = new AtomicBoolean(false);
  private Queue<Thread> waiters = new ConcurrentLinkedQueue<Thread>();

  public void lock() {
    boolean wasInterrupted = false;
    Thread current = Thread.currentThread();
    waiters.add(current);

    // Block while not first in queue or cannot acquire lock
    while (waiters.peek() != current ||
           !locked.compareAndSet(false, true)) {
      LockSupport.park();
      if (Thread.interrupted()) // ignore interrupts while waiting
        wasInterrupted = true;
    }

    waiters.remove();
    if (wasInterrupted) // reassert interrupt status on exit
      current.interrupt();
  }

  public void unlock() {
    locked.set(false);
    LockSupport.unpark(waiters.peek());
  }
}
Jeremy Manson said…
sanjay: Sorry to take so long to get back on this. If I am understanding your question correctly: The instantiated FIFOMutex would have to be correctly published to the various threads that see it. There are several ways to do this. One would be to declare it final. Another would be to construct it and assign it at class initialization time. Another would be to use volatile fields.
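A minimal sketch (my own, with illustrative field names) of the three publication options Jeremy lists for sharing a FIFOMutex instance:

// Three ways to publish a FIFOMutex safely to other threads.
class MutexHolder {
  // 1. A final field: final-field semantics guarantee that any thread which sees
  //    a reference to the MutexHolder (not leaked from its constructor) also sees
  //    a fully constructed FIFOMutex.
  final FIFOMutex byFinalField = new FIFOMutex();

  // 2. Assignment during class initialization: class loading supplies the
  //    necessary happens-before edge for work done in static initializers.
  static final FIFOMutex byClassInit = new FIFOMutex();

  // 3. A volatile field: the volatile write happens-before any subsequent
  //    volatile read of the field, so readers see the constructed object.
  volatile FIFOMutex byVolatile;

  void init() {
    byVolatile = new FIFOMutex();
  }
}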
Anonymous said…
Jeremy, I am guilty of not putting the question properly, but you answered it anyway. I wanted to ask about the visibility of the variables locked (AtomicBoolean) and waiters (Queue) declared in FIFOMutex. My initial thought was that declaring the instantiated FIFOMutex reference volatile might not guarantee the visibility of these two references. Since you mentioned that declaring the FIFOMutex instance volatile will make the instance correctly published to threads, and you did not foresee any problem in general, I looked at the code again. And yes, declaring the FIFOMutex instance volatile will make the program behave correctly because of the new semantics of volatile under the new JMM. Jeremy, please validate my thoughts here.

After the FIFOMutex instance is created and assigned to a volatile variable (say fifoMutex), threads trying to use fifoMutex as a lock will have to read the volatile reference fifoMutex (before they can use it), which will make the two variables locked and waiters (in FIFOMutex) properly publish their values to these threads. This is because, under the new semantics of volatile, writes to regular variables cannot be reordered past the volatile (fifoMutex) write.

Thanks and will wait for your comments.
Freedom said…
Watch this video carefully at the 43rd minute:
I think you meant to say "the other thread reads helper equal to null", but you said "... helper not equal to null".

Considering the context, I think it is not a trivial mistake!

Overall, it was a nice presentation.
Jeremy Manson said…
Thanks for the kind words!
Anonymous said…
Hi Jeremy,
please help me out with one example from your thesis:

Figure 4.1 (from your thesis):
Initially, x == y == 0
===T1====||====T2=====
 r2 = x  ||  r1 = y
 y = 1   ||  x = 2
May result in: r2 == 2, r1 == 1

To get r2 == 2, r1 == 1, we want the compiler to reorder the instructions as follows:

Initially, x == y == 0
===T1====||====T2=====
 y = 1   ||  x = 2
 r2 = x  ||  r1 = y

Then, do the following executions allow the JMM to accept the above example:

E1:
C1:
x = 0
y = 0
y = 1

E2:
C2:
x = 0
y = 0
x = 2


E3:
here, the read r2 = x, according to rule 6, sees 0

E4:
C3:
r2 = 2, by rule 7

E5:
by rule 6, r1 = y sees 0

E6:
C4:
r1 = 1, by rule 7
JSparrow said…
Hi.
Can you, please, clarify the meaning of "set" in the following definition:
"A set of actions A is happens-before consistent if for all reads r in A, where W(r) is the write action seen by r, it is not the case that either hb(r, W(r)) or that there exists a write w in A such that w.v = r.v and hb(W(r), w) and hb(w, r)."?

Is it a mathematical set that has no ordering?

Thanks
JSparrow said…
"A set of actions A is happens-before consistent if for all reads r in A, where W(r) is the write action seen by r, it is not the case that either hb(r, W(r)) or that there exists a write w in A such that w.v = r.v and hb(W(r), w) and hb(w, r)."

Should it say that W on A, where W(r) is the write action (in A) seen by r, is happens-before consistent if for all reads r in A, .... ?
Jeremy Manson said…
jsparrow: I'm not sure if I'm parsing that correctly, but if you are asking if the write is also in A, then the answer is yes.
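For readers, a small worked example (mine, not from the spec) of how the definition plays out, using the same shape as the Figure 4.1 example discussed above:

// x and y are non-volatile ints, initially 0; the default writes of 0 are also in A.
class HBExample {
  static int x, y;

  static void thread1() {
    int r2 = x;   // read r2
    y = 1;        // write to y
  }

  static void thread2() {
    int r1 = y;   // read r1
    x = 2;        // write to x
  }
}
// There are no happens-before edges between the two threads, so neither read
// happens-before the write it might see, and no other write to the same variable
// is ordered between that write and the read by happens-before. The execution in
// which r2 == 2 and r1 == 1 is therefore happens-before consistent; every action
// here, reads and writes alike, is in the set A.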
Jeremy Manson said…
@anonymous - Sorry about the latency; that came a week or two before my wedding; I put it off, and forgot about it.

Having said that, the justification you are giving is longer than it has to be: you can commit the reads performed by r1 and r2 in a single step (step 3). No reason to do them one at a time. (Also, you are missing lots of descriptions of the contents of sets, but I can infer the ones you left out).
Anonymous said…
It would be great if you could check this Stack Overflow question about chapter 17 of the JLS:
http://stackoverflow.com/questions/5066866/testing-initialization-safety-of-final-fields
Jeremy Manson said…
@Anonymous: Done, although I think a couple of people ahead of me answered it well enough.
Luis Henrique said…
Hi Jeremy,
I have the following code:

class A {
  private volatile int v = 2;

  public void thread1() {
    int myV = v;

    // do lots of calculations using myV;
  }

  public void thread2() {
    while (true) {
      // keep changing v;
    }
  }
}

There are two threads each one running a method.

Does the local copy of v in the thread1 method guarantee that, while "doing lots of calculations using myV", myV will always have the same value?

If it does, how can I justify this behavior using the JMM's rules?
Jeremy Manson said…
@Luis - When you say "keep changing v", do you mean keep changing the object pointed to by v, or do you mean keep changing the value of v itself? If you mean the former, then those changes can be seen by the thread, and if you mean the latter, then you are fine.

For example:

class C {
  int x = 1;
  static volatile C c = new C();
}

Thread 1:
C myC = C.c;
int myX = myC.x;

Thread 2:
while (true) {
  c.x++;
}

In the above case, you've changed the value of c.x, but not the value of c. The changes to c.x can be seen by Thread 1, and you can't say anything useful about the value of myX. However, if Thread 2 looks like this:

Thread 2:
int val = 0;
while (true) {
  C.c = new C();
  C.c.x = val++;
}

Then you are changing the value of c, and thread 1 will read the single x value associated with the appropriate c. Does that make sense?
Luis Henrique said…
Thanks Jeremy. I was talking about the second case (the one that just changes the value). Indeed, that was the behavior I was expecting.

The most difficult point for me is understanding how the JMM (and which of its rules) guarantees such behavior, i.e. how does the JMM disallow (in this case) a code optimization that reads the value of "v" directly instead of reading from "myV"?
Jeremy Manson said…
@Luis - The model doesn't explicitly allow or prohibit particular program transformations. For any transformation, if you want to demonstrate that it is legal, you have to demonstrate that the execution of such a program after transformation is equivalent to a legal execution of the original program as described by the model.

In this case, I imagine it would be pretty easy to find a counterexample for any particular transformation of the kind you mention: that is, it would be pretty easy to construct a transformed program of the kind you describe that has behavior that the original wouldn't have had under the model. I leave constructing such an example and walking through its possible behaviors as an exercise for the reader.
Luis Henrique said…
Hi Jeremy,
I was reading an old email about JMM and String#hashCode() (http://www.cs.umd.edu/~pugh/java/memoryModel/archive/2179.html) and I could not understand the restriction number 7.

As far as I understand the JMM, if we use something like "if (hash != 0) return hash;" and the test does not fail, we are allowed to see only a non-zero value that represents the computed hashCode. That is, if a thread reads a value once, and nobody else updated that memory location (or updated it with the same value), the thread will necessarily read the same value again.

Could you please explain it better?

Thanks,
Luis
Jeremy Manson said…
Hi Luis,

Your understanding is not quite correct. If you have:

if (hash != 0) {
  return hash;
}

The model posits two reads of the value for hash. The model also allows the compiler / runtime system to perform these reads in the opposite order. So, if you have compiled to this:

Thread 1:
r1 = hash;
r2 = hash;
if (r2 != 0)
  return r1;

Thread 2:
hash = 1;

Then you could easily imagine thread 1 returning 0. It isn't terribly likely for the compiler to do it this way in this case, but you could imagine similar cases where it is likely. I'll leave those as an exercise for the reader.
Luis Henrique said…
Thanks for clarifying things. I forgot about reordering. =/

But based on your compiled code, suppose Thread 1 reads a non-zero value on the first read. Does the JMM posit that it will read the same non-zero value on the second read (supposing nobody else updated that memory location, or updated it with the same value that was already there)?
Jeremy Manson said…
@luis - It cannot read a write that never occurred. The write that it sees either has to be the last write that occurred to that variable in happens-before order, or a write that is performed in a data race with the read. In this case, if you implement lazy racy hash codes, it is very easy to have a write that is performed in a data race with the read.
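A minimal sketch (my code, not from this thread) of the lazy, racy hash-code idiom Jeremy refers to, where the racy write is benign because the shared field is read into a local exactly once and every writer stores the same deterministically computed value:

class Name {
  private final String value;   // final, so it is safely published with the object
  private int hash;             // defaults to 0; written in a benign data race

  Name(String value) {
    this.value = value;
  }

  @Override
  public int hashCode() {
    int h = hash;               // single read of the shared field
    if (h == 0) {
      h = value.hashCode();     // deterministic recomputation
      hash = h;                 // racy write; at worst, the work is repeated
    }
    return h;
  }
}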
Luis Henrique said…
Hi Jeremy!

I was studying the C++ memory model and comparing it with the Java memory model when a question arose. The JMM states that an execution has a "synchronization order, which is a total order over all synchronization actions in A". That is, as far as I understood, there is an order over all synchronization actions even if they refer to different memory locations.

The C++ memory model, however, states there is a total order only if the actions refer to the same memory location.

My question is: how does an implementation guarantee this total order? Does it employ memory barriers for every synchronization action in order to achieve it?

Thanks for your time!
Jeremy Manson said…
@luis - yes, that's right. See Doug Lea's JSR-133 cookbook for more implementation details.

http://gee.cs.oswego.edu/dl/jmm/cookbook.html
Jing's blogger said…
Hi Jeremy,

Thanks a lot for your talk. It is very informative and I learned a lot from it.

But I still have a question.

In the following scenario:
initial: a = 0; b = 0;

T1:
a = 1;
lock;
b = 1;
unlock;

T2:
lock;
read b;
unlock;
read a;
Can I say the value of a read by T2 must be 1?
To my understanding, the unlock in T1 flushes the values of both a and b to main memory, and the lock in T2 makes sure both read a and read b get the latest values.
Am I right?
Jeremy Manson said…
@jing - Not necessarily. All of T2 could take place first. However, you can say that if the read of b returns the value 1, the read of a will return the value 1.
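A minimal sketch (my code, with illustrative names) of the guarantee Jeremy states: if T2's read of b sees 1, then T1's unlock happens-before T2's lock, and the later read of a must see 1 as well.

class Publish {
  static int a = 0;
  static int b = 0;
  static final Object lock = new Object();

  static void t1() {
    a = 1;
    synchronized (lock) {
      b = 1;
    }
  }

  static void t2() {
    int rb;
    synchronized (lock) {
      rb = b;
    }
    int ra = a;
    // If rb == 1, then ra must be 1: T1's unlock happens-before T2's lock,
    // and a = 1 precedes that unlock in program order.
    // If rb == 0, T2 may simply have run first, and ra can be 0 or 1.
  }
}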
Unknown said…
Hi Jeremy,

I've just watched your talk and read 'The Java Memory Model' paper. I understand most of it, but my understanding of actions is failing a bit. I understand the definition of an action, but not how the source code actually translates to actions. From the paper, I've deduced that actions aren't atomic (e.g. r2 = y is a single action, despite it being a read of y and a write to r2). How does this translate to an if statement, for example? Would if(r1 == y && r2 == x) be a single action?
Jeremy Manson said…
@james (better late than never, right?) For simplicity, consider one action to be one possibly-interthread-action. So, r2 = y is just a read, because r2 is a local and y is a heap variable. if (r1 == y && r2 == x) is two actions - a read of y and a read of x.
