
Does StoreStore memory barrier in Java forbid the read-write reordering?

Now we have

Load A
StoreStore
Store B

Is it possible that the actual execution order is as follows

StoreStore
Store B
Load A

If it is possible, how can we explain a situation which seems to violate The Java volatile Happens-Before Guarantee?

As far as I know, the volatile semantics are implemented using the following JMM memory-barrier insertion strategy (sketched in code after the list)

insert a StoreStore barrier before each volatile write
insert a StoreLoad barrier after each volatile write
insert a LoadLoad barrier after each volatile read
insert a LoadStore barrier after each volatile read
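
To make that concrete, here is roughly how I understand the placement in source terms. The class and field names are just for illustration, and the barriers are of course conceptual; they cannot be written in Java source:

class VolatileBarrierSketch {
    int x;
    volatile int v;

    void writer() {
        x = 1;
        // StoreStore barrier conceptually inserted here (before the volatile write)
        v = 2;            // volatile write
        // StoreLoad barrier conceptually inserted here (after the volatile write)
    }

    void reader() {
        int r1 = v;       // volatile read
        // LoadLoad barrier conceptually inserted here (after the volatile read)
        // LoadStore barrier conceptually inserted here (after the volatile read)
        int r2 = x;
    }
}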

Now suppose we have two Java threads as follows

thread 1

Load A
StoreStore
Store volatile B

thread 2

Load volatile B
Load C

According to "The Java volatile Happens-Before Guarantee", Load A should happen-before Load C when Load volatile B comes after Store volatile B. But if Load A can be reordered to after Store volatile B, how is it guaranteed that Load A happens before Load C?

Technically speaking, the Java language doesn't have memory barriers. Rather, the Java Memory Model is specified in terms of happens-before relations; the details are in the Java Language Specification (JLS §17.4).

The terminology you are discussing comes from The JSR-133 Cookbook for Compiler Writers. As the document says, it is a guide for people who are writing compilers that implement the Java Memory Model. It interprets the implications of the JMM and is clearly not intended to be an official specification. The JLS is the specification.

The section in the JSR-133 Cookbook on memory barriers classifies them in terms of the way that they constrain specific sequences of loads and stores. For StoreStore barriers it says:

The sequence Store1; StoreStore; Store2 ensures that Store1's data are visible to other processors (ie, flushed to memory) before the data associated with Store2 and all subsequent store instructions. In general, StoreStore barriers are needed on processors that do not otherwise guarantee strict ordering of flushes from write buffers and/or caches to other processors or main memory.

As you can see, a StoreStore barrier only constrains the behavior of store operations.

In your example, you have a load followed by a store. The semantics of a StoreStore barrier say nothing about load operations. Therefore, the reordering that you propose is allowed.
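
If, in real Java code, you actually needed to stop a preceding load from being reordered with a subsequent store, a StoreStore-style fence would not be enough. As a rough sketch, using the explicit fence methods added in Java 9 (the class and field names here are purely illustrative):

import java.lang.invoke.VarHandle;

class FenceSketch {
    int a;
    int b;

    void loadThenStore() {
        int r = a;                 // "Load A"
        // VarHandle.storeStoreFence() only orders stores against later stores,
        // so it could not keep this load above the following store.
        VarHandle.releaseFence();  // LoadStore + StoreStore: keeps the load before the store
        b = 1;                     // "Store B"
    }
}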

This is answering just the updated part of your Question.

First of all, the example you have presented is not Java code. Therefore we cannot apply JMM reasoning to it. (Just so that we are clear about this.)

If you want to understand how Java code behaves, forget about memory barriers. The Java Memory Model tells you everything that you need to do in order for memory reads and writes to have guaranteed behavior, and everything you need to know in order to reason about (correct) behavior. So:

  • Write your Java code
  • Analyze the code to ensure that there are proper happens-before chains in all cases where one thread needs to read a value written by another thread.
  • Leave the problem of compiling your (correct) Java code to machine instructions to the compiler.

Looking at the sequences of pseudo-instructions in your example, they don't make much sense. I don't think that a real Java compiler would (internally) use barriers like that when compiling real Java code. Rather, I think there would be a StoreLoad memory barrier after each volatile write and before each volatile read.

Let's consider some real Java code snippets:

class Example {
    public int a;
    public volatile int b;

    // thread "one"
    void one() {
        a = 1;                       // plain write
        b = 2;                       // volatile write
    }

    // thread "two"
    void two() {
        if (b == 2) {                // volatile read
            System.out.println(a);   // plain read
        }
    }
}
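
One way to actually run the two snippets concurrently (the Example class name, the one()/two() method names, and this main method are just scaffolding for the sketch):

class Demo {
    public static void main(String[] args) {
        Example e = new Example();
        new Thread(e::one).start();   // thread "one"
        new Thread(e::two).start();   // thread "two"
        // If thread "two" reads b == 2, the happens-before chain discussed
        // below guarantees that it prints 1.
    }
}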

Now assuming that the code in thread "two" is executed after thread "one", there will be a happens-before chain like this:

  • a = 1 happens-before b = 2
  • b = 2 happens-before b == 2
  • b == 2 happens-before print(a)

Unless there is some other code involved, the happens-before chain means that thread "two" will print "1".
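
For contrast, if b were a plain (non-volatile) field, there would be no happens-before edge between the two threads, and thread "two" could legally print 0 even after seeing b == 2. A sketch of that broken variant:

class BrokenExample {
    public int a;
    public int b;        // NOT volatile: accesses to b are now part of a data race

    // thread "one"
    void one() {
        a = 1;
        b = 2;           // may become visible to other threads before a = 1 does
    }

    // thread "two"
    void two() {
        if (b == 2) {
            System.out.println(a);   // may print 0 or 1
        }
    }
}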

Note:

  1. It is not necessary to consider the memory barriers that the compiler uses when compiling the code.
  2. The barriers are implementation specific and internal to the compiler.
  3. If you look at the native code you won't see memory barriers per se. You will see native instructions that have the required semantics to ensure that the (hidden) memory barrier is present.
