I read material on distributed devices in TensorFlow which says that training work can be assigned to specific devices or tasks. Can we assign a task/thread to a specific CPU core to achieve concurrent or parallel processing? For example:
with tf.device("/job:ps/task:0"):
weights_1 = tf.Variable(...)
biases_1 = tf.Variable(...)
with tf.device("/job:ps/task:1"):
weights_2 = tf.Variable(...)
biases_2 = tf.Variable(...)
You can get the current PID of the Python process and use a third-party utility like taskset to assign it to a CPU core.
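For example, a minimal sketch assuming Linux with taskset installed (pinning to core 0 is an arbitrary choice):

import os
import subprocess

pid = os.getpid()  # PID of the current Python process
# Hand the PID to the external taskset utility to pin the whole process to core 0
subprocess.run(["taskset", "-cp", "0", str(pid)], check=True)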
Don't know much about TensorFlow, but I think the GIL will come into play here. You will have to use multiprocessing and assign each process to a different core.
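As a sketch of that approach (assuming Linux, where os.sched_setaffinity is available), each worker process can pin itself to its own core before doing CPU-bound work:

import multiprocessing as mp
import os

def worker(core):
    # Restrict this worker process to a single CPU core (Linux-only API)
    os.sched_setaffinity(0, {core})
    total = sum(i * i for i in range(10**7))  # some CPU-bound work
    print(f"core {core}: pid={os.getpid()} result={total}")

if __name__ == "__main__":
    procs = [mp.Process(target=worker, args=(c,)) for c in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

Because each worker is a separate process with its own interpreter, the GIL no longer serializes the CPU-bound loops.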
You can bind a particular thread of a process to an arbitrary core (assuming you are using Linux). This works not only for Python but for any process. I made a Python script to show how you can do that.
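The original script isn't reproduced here; a minimal stand-in that spawns a couple of busy threads (assuming Python 3.8+ for the native_id attribute) could look like this:

import os
import threading
import time

def spin():
    # Busy-loop so the thread shows nonzero %CPU in ps
    while True:
        pass

threads = [threading.Thread(target=spin, daemon=True) for _ in range(2)]
for t in threads:
    t.start()

time.sleep(0.1)  # give the threads a moment to start so native_id is set
print("PID:", os.getpid())
print("thread LWPs:", [t.native_id for t in threads])
time.sleep(600)  # keep the process alive while you inspect and pin its threads

Because of the GIL, only one thread will show significant %CPU at a time, which matches the output below where a single LWP does almost all the work.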
You can get thread ids via the ps command:
[user@dev ~]$ ps -Lo pid,%cpu,lwp -p {pid}
Output for me:
PID %CPU LWP
28216 98.0 28216
28216 0.0 28217
28216 0.0 28218
Here 28216 is the PID of the process, and you can see that even a simple Python script runs additional threads.
Now you can assign a thread to a particular core via taskset:
taskset -cp 0-5 28218
It will show the following output:
pid 28218's current affinity list: 0-11
pid 28218's new affinity list: 0-5
You can then observe that the threads are bound to different sets of CPUs:
[user@host ~]$ taskset -cp 28218
pid 28218's current affinity list: 0-5
[user@host ~]$ taskset -cp 28217
pid 28217's current affinity list: 0-11
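If you'd rather stay in Python than shell out to taskset, here is a sketch using os.sched_setaffinity (Linux-only; on Linux it accepts a native thread id as well as a PID), reusing the LWP from the ps output above:

import os

tid = 28218  # the LWP (native thread id) taken from the ps output above
os.sched_setaffinity(tid, {0, 1, 2, 3, 4, 5})  # equivalent to: taskset -cp 0-5 28218
print(os.sched_getaffinity(tid))  # verify: {0, 1, 2, 3, 4, 5}

Like taskset, this requires that you own the target process (or run as root).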