
If there is no mutual exclusion, how does volatile ensure a happens-before relationship?

I read that `volatile` doesn't entail mutual-exclusion locking. How, then, does volatile ensure a happens-before relationship, or how does it ensure that another thread reads the updated value?

I hope to provide some insight into how a lock and a volatile variable fit into the happens-before relation.

int a=0
int b=0

CPU1:
    a=1 (1)
    b=1 (2)

CPU2:
    while(b==0); (3)
    print(a) (4)
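The pseudocode above can be turned into a runnable Java sketch. Assuming `b` is declared `volatile` (the class and method names here are my own), the write at (2) happens-before the read at (3) that exits the loop, so the reader is guaranteed to observe `a == 1` at (4):

```java
public class VolatileHB {
    static int a = 0;           // plain field: its visibility piggybacks on `b`
    static volatile int b = 0;  // volatile flag that establishes happens-before

    // Runs the two "CPUs" from the example and returns what the reader saw in `a`.
    static int run() {
        final int[] observed = new int[1];
        Thread cpu2 = new Thread(() -> {
            while (b == 0) { }  // (3) spin until the volatile write becomes visible
            observed[0] = a;    // (4) guaranteed to see 1
        });
        cpu2.start();
        a = 1;                  // (1) plain write
        b = 1;                  // (2) volatile write publishes (1) as well
        try {
            cpu2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return observed[0];
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 1
    }
}
```

If `b` were not volatile, the JIT would be free to hoist the `b == 0` check out of the loop and spin forever, and even a terminating reader could legally observe `a == 0`.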

With `b` declared volatile, the write at (2) happens-before the read at (3) that sees it, so (4) must print 1. Volatile ensures the happens-before relationship by two mechanisms:

  1. A write to a volatile variable must be made visible to other threads: it cannot linger indefinitely in a CPU cache or register but is flushed so that a subsequent read of the variable sees the new value. This is specified in the Java Language Specification, §17.4.5 (Happens-before Order).
  2. Reads and writes of a volatile variable cannot be reordered with the memory operations around them, so plain writes that precede the volatile write, like (1), are also visible to a thread after it performs the volatile read.

Actually, on most multi-CPU architectures, it's kind of the other way around. Locking and unlocking a mutex ensures "happens-before" because a mutex treats unlock operations the same as writes to a volatile variable, and it treats lock operations the same as reads.

The real magic happens in hardware: Most modern processors have special memory barrier instructions that user-mode programs can use to ensure coherence between different CPU caches when it is important.

Forcing coherence is expensive. If the caches had to always be coherent, programs would run much more slowly. The purpose of the memory barrier instructions is to mark the parts of the program where coherence really matters, and outside of those code regions, the CPUs are free to cache data independently of each other.

Reading and writing any volatile variable causes your program to execute memory barrier instructions that force the hardware to obey the "happens before" requirements of the Java Language Spec. And, locking and unlocking any mutex also causes your program to execute the same or similar instructions.
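The equivalence described above can be sketched with a mutex in place of the volatile flag. In this hedged example (class and field names are my own), the writer's unlock happens-before the reader's later lock that observes the flag, so the plain write to `a` is visible, exactly as with a volatile `b`:

```java
public class MutexHB {
    static final Object lock = new Object();
    static int a = 0;            // plain data, published under the lock
    static boolean done = false; // plain flag, guarded by the lock

    // Same publish/observe pattern as with a volatile flag, but via a mutex.
    static int run() {
        final int[] observed = new int[1];
        Thread reader = new Thread(() -> {
            while (true) {
                synchronized (lock) {      // lock acts like a volatile read
                    if (done) {
                        observed[0] = a;   // visible thanks to the writer's unlock
                        return;
                    }
                }
            }
        });
        reader.start();
        synchronized (lock) {              // unlock acts like a volatile write
            a = 1;
            done = true;
        }
        try {
            reader.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return observed[0];
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 1
    }
}
```

Note that no mutual exclusion is actually needed for visibility here; the happens-before edge comes from the unlock/lock pair, which is why a volatile variable can provide the same guarantee without any locking.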
