
Two processes accessing shared memory without a lock

There are two variables in shared memory (shared by two processes): one is a plain int variable (call it intVar) with an initial value of, say, 5, and the other is a variable of type pid_t (call it pidVar) with an initial value of 0.

The code to be developed is this: both processes try to add a number (which is fixed and different for each process) to the intVar variable in the shared memory (there is never any lock on the memory), and if a process finds the pidVar variable in the shared memory to have the value 0, only then does it update it with its own pid.

So my points of confusion are:

1: Since both processes are trying to add a number, will the final value of the intVar variable always be the same once both processes have finished adding their numbers?

2: The pidVar in the shared memory is going to be totally unpredictable, right?

  1. Not necessarily. Adding to intVar involves reading from memory, calculating a new value, then writing back to memory. The two processes could read at the same time, which would mean two writes, so whichever write came last would win (the sketch after these two points demonstrates this).

  2. pidVar is defined to be initially zero. Either or both of the processes may find it that way, causing either or both to update it. So you will end up with one of the two pids, but which one depends on timing.
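
To make both races concrete, here is a minimal, self-contained sketch (not part of the original question or answer): the variable names follow the question, but the loop count and the added values 2 and 3 are invented purely to make the lost updates observable. Compile it without aggressive optimization; the volatile qualifier is only there so the compiler keeps the per-iteration loads and stores.

    #define _DEFAULT_SOURCE                       /* for MAP_ANONYMOUS */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared {
        int   intVar;   /* starts at 5; both processes add to it        */
        pid_t pidVar;   /* starts at 0; set by whichever process sees 0 */
    };

    int main(void)
    {
        /* Anonymous shared mapping, visible to both parent and child. */
        volatile struct shared *s = mmap(NULL, sizeof *s,
                                         PROT_READ | PROT_WRITE,
                                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (s == MAP_FAILED) { perror("mmap"); return 1; }
        s->intVar = 5;
        s->pidVar = 0;

        pid_t child = fork();
        if (child < 0) { perror("fork"); return 1; }

        int add = (child == 0) ? 2 : 3;   /* fixed, different per process */
        for (int i = 0; i < 100000; i++) {
            /* Unsynchronized read-modify-write: the other process can
             * write between our read and our store, and its update is
             * then lost.                                                 */
            s->intVar = s->intVar + add;
        }
        if (s->pidVar == 0)               /* check-then-set: also a race  */
            s->pidVar = getpid();

        if (child == 0)
            return 0;                     /* child is done */

        waitpid(child, NULL, 0);
        /* A fully serialized run would end at 5 + 100000*2 + 100000*3 =
         * 500005, but lost updates usually make the result smaller, and
         * it changes from run to run.  pidVar holds whichever pid won
         * the second race.                                               */
        printf("intVar = %d, pidVar = %ld\n", s->intVar, (long)s->pidVar);
        return 0;
    }

Running it a few times typically prints a different intVar each time, and pidVar is sometimes the parent's pid and sometimes the child's.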

Your assignment says no locking, which is a loaded and imprecise term. Locking can refer to anything from physical-layer operations such as bus protocols right up through high-level operations such as inter-process communication. I prefer to say that synchronizing is locking; thus everything from a bus protocol to message passing is locking.

Some processors create the illusion of atomic memory, for example by providing opcodes like "lock addl $2, (%sp)"; but in reality that is a form of synchronization implemented in the processor via busy-waiting. Synchronization is locking; thus that would be in violation of your assignment.
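
For illustration only (a sketch, not code from the original answer): reaching that kind of instruction from C usually goes through C11's <stdatomic.h>, and on x86-64 the fetch-adds below typically compile to lock-prefixed adds.

    #include <stdatomic.h>
    #include <stdio.h>

    int main(void)
    {
        _Atomic int intVar = 5;

        /* Atomic read-modify-write: the processor guarantees nothing can
         * interleave between the read and the write of each addition.   */
        atomic_fetch_add(&intVar, 2);
        atomic_fetch_add(&intVar, 3);

        printf("intVar = %d\n", atomic_load(&intVar));   /* always 10 */
        return 0;
    }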

Some processors slice the memory protocol down so that you can control interference. Typical implementations use a load-linked/store-conditional pattern, where the store fails if anything has interfered with the cache line identified by the load-linked instruction. This permits composing atomic operations out of retry loops; but that is again synchronization, and thus locking.
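
As a sketch of that retry pattern (again not from the original answer), here is the same idea expressed with a C11 compare-and-swap, which plays the role of load-linked/store-conditional on processors that expose CAS instead: the store "fails" if another process changed the value in between, and the loop simply retries.

    #include <stdatomic.h>
    #include <stdio.h>

    static void add_with_retry(_Atomic int *var, int amount)
    {
        int observed = atomic_load(var);      /* like load-linked */
        /* Try to publish observed + amount; if *var no longer equals
         * observed, the compare-exchange fails (like a failed
         * store-conditional), refreshes observed, and we retry.          */
        while (!atomic_compare_exchange_weak(var, &observed,
                                             observed + amount))
            ;
    }

    int main(void)
    {
        _Atomic int intVar = 5;
        add_with_retry(&intVar, 2);
        add_with_retry(&intVar, 3);
        printf("intVar = %d\n", atomic_load(&intVar));   /* 10 */
        return 0;
    }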
