
How to set CPU affinity for a process from C or C++ in Linux?

Is there a programmatic way to set the CPU affinity for a process from C/C++ on the Linux operating system?

You need to use sched_setaffinity(2).

For example, to run on CPUs 0 and 2 only:

#define _GNU_SOURCE
#include <sched.h>

cpu_set_t  mask;
CPU_ZERO(&mask);
CPU_SET(0, &mask);
CPU_SET(2, &mask);
int result = sched_setaffinity(0, sizeof(mask), &mask);

(0 for the first parameter means the current process; supply a PID if it's some other process you want to control, as sketched below).
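For example, here is a minimal sketch of restricting another process, given its PID, to CPUs 0 and 2 (this standalone program and its command-line PID argument are just an illustration, not part of the original answer):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return EXIT_FAILURE;
    }
    pid_t pid = (pid_t) atoi(argv[1]);  /* PID of the process to restrict */

    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);
    CPU_SET(2, &mask);                  /* allow CPUs 0 and 2 only */

    if (sched_setaffinity(pid, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}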

See also sched_getcpu(3).

Use sched_setaffinity at the process level, or pthread_attr_setaffinity_np for an individual thread.
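For the thread-level variant, here is a minimal sketch using pthread_attr_setaffinity_np(3) (it assumes CPU 1 exists and must be compiled with gcc -pthread; the worker function is just an illustration):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg)
{
    (void) arg;
    printf("worker running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t thread;
    pthread_attr_t attr;
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(1, &mask);  /* allow the new thread to run on CPU 1 only (assumed to exist) */

    pthread_attr_init(&attr);
    int err = pthread_attr_setaffinity_np(&attr, sizeof(mask), &mask);
    if (err != 0)
        fprintf(stderr, "pthread_attr_setaffinity_np: %s\n", strerror(err));

    pthread_create(&thread, &attr, worker, NULL);  /* thread starts with the restricted mask */
    pthread_join(thread, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}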

It took me a lot of effort to work out what was happening, so I am adding this answer to help people like me (I use the gcc compiler on Linux Mint).

#include <sched.h>

cpu_set_t mask;

static inline void assignToThisCore(int core_id)
{
    CPU_ZERO(&mask);
    CPU_SET(core_id, &mask);
    sched_setaffinity(0, sizeof(mask), &mask);
}

int main()
{
    // call this to pin the process to core 2 (cores are numbered 0, 1, 2, ...)
    assignToThisCore(2);

    return 0;
}

But don't forget to add the option -D _GNU_SOURCE to the compiler command. Because the operating system might still schedule other processes onto that particular core, you can add GRUB_CMDLINE_LINUX_DEFAULT="quiet splash isolcpus=2,3" to the grub file located in /etc/default and then run sudo update-grub in a terminal to reserve the cores you want.

UPDATE: If you want to assign more than one core, you can use this piece of code:

cpu_set_t mask1;

static inline void assignToThisCores(int core_id1, int core_id2)
{
    CPU_ZERO(&mask1);
    CPU_SET(core_id1, &mask1);
    CPU_SET(core_id2, &mask1);
    sched_setaffinity(0, sizeof(mask1), &mask1);
    //__asm__ __volatile__ ( "vzeroupper" : : : ); // It is here because of a bug that dirtied the AVX registers; if you rely on AVX, uncomment it.
}
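If the set of cores is not fixed at two, a more general sketch could look like the following (the array-based helper is my illustration, not from the original answer; it still needs -D _GNU_SOURCE):

#include <sched.h>
#include <stddef.h>

/* Restrict the calling process to every core listed in core_ids[0..n-1]. */
static inline int assignToTheseCores(const int *core_ids, size_t n)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (size_t i = 0; i < n; i++)
        CPU_SET(core_ids[i], &set);
    return sched_setaffinity(0, sizeof(set), &set);
}

Usage would be, for example: int cores[] = {1, 3}; assignToTheseCores(cores, 2);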

sched_setaffinity + sched_getaffinity minimal C runnable example

This example was extracted from my answer at: How to use sched_getaffinity and sched_setaffinity in Linux from C? I believe the questions are not duplicates since that one is a subset of this one, as it asks about sched_getaffinity only, and does not mention C++.

In this example, we get the affinity, modify it, and check whether it has taken effect with sched_getcpu().

main.c

#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void print_affinity() {
    cpu_set_t mask;
    long nproc, i;

    if (sched_getaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
        perror("sched_getaffinity");
        assert(false);
    }
    nproc = sysconf(_SC_NPROCESSORS_ONLN);
    printf("sched_getaffinity = ");
    for (i = 0; i < nproc; i++) {
        printf("%d ", CPU_ISSET(i, &mask));
    }
    printf("\n");
}

int main(void) {
    cpu_set_t mask;

    print_affinity();
    printf("sched_getcpu = %d\n", sched_getcpu());
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);
    if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
        perror("sched_setaffinity");
        assert(false);
    }
    print_affinity();
    /* TODO is it guaranteed to have taken effect already? Always worked on my tests. */
    printf("sched_getcpu = %d\n", sched_getcpu());
    return EXIT_SUCCESS;
}

GitHub upstream.

Compile and run:

gcc -ggdb3 -O0 -std=c99 -Wall -Wextra -pedantic -o main.out main.c
./main.out

Sample output:

sched_getaffinity = 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 
sched_getcpu = 9
sched_getaffinity = 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
sched_getcpu = 0

Which means that:

  • initially, all of my 16 cores were enabled, and the process was randomly running on core 9 (the 10th one)
  • after we set the affinity to only the first core, the process was necessarily moved to core 0 (the first one)

It is also fun to run this program through taskset:

taskset -c 1,3 ./main.out

Which gives output of the form:

sched_getaffinity = 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 
sched_getcpu = 2
sched_getaffinity = 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
sched_getcpu = 0

and so we see that it limited the affinity from the start.

This works because the affinity is inherited by child processes, which taskset is forking: How to prevent inheriting CPU affinity by child forked process?
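To see that inheritance directly from C, here is a small sketch (assuming a machine with at least two CPUs) that restricts the parent to CPU 1 and then forks; the child starts with the same mask:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(1, &mask);                      /* parent restricts itself to CPU 1 (assumed to exist) */
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1)
        perror("sched_setaffinity");

    pid_t pid = fork();
    if (pid == 0) {
        /* The child inherits the affinity mask, so it also reports CPU 1. */
        printf("child  on CPU %d\n", sched_getcpu());
        _exit(0);
    }
    printf("parent on CPU %d\n", sched_getcpu());
    wait(NULL);
    return 0;
}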

Python: os.sched_getaffinity and os.sched_setaffinity

See: How to find out the number of CPUs using python

Tested in Ubuntu 16.04.

In short

unsigned long mask = 7; /* processors 0, 1, and 2 */
unsigned int len = sizeof(mask);
/* Note: the glibc wrapper declares the third argument as cpu_set_t *, so this
 * raw-bitmask form needs a cast to compile cleanly. */
if (sched_setaffinity(0, len, (cpu_set_t *) &mask) < 0) {
    perror("sched_setaffinity");
}

See CPU Affinity for more details.

It is also possible to do this from the shell, without any modification to the programs, using cgroups and the cpuset sub-system. Cgroups (v1 at least) are typically mounted on /sys/fs/cgroup, under which the cpuset sub-system resides. For example:

$ ls -l /sys/fs/cgroup/
total 0
drwxr-xr-x 15 root root 380 nov.   22 20:00 ./
drwxr-xr-x  8 root root   0 nov.   22 20:00 ../
dr-xr-xr-x  2 root root   0 nov.   22 20:00 blkio/
[...]
lrwxrwxrwx  1 root root  11 nov.   22 20:00 cpuacct -> cpu,cpuacct/
dr-xr-xr-x  2 root root   0 nov.   22 20:00 cpuset/
dr-xr-xr-x  5 root root   0 nov.   22 20:00 devices/
dr-xr-xr-x  3 root root   0 nov.   22 20:00 freezer/
[...]

Under cpuset, the cpuset.cpus file defines the range of CPUs on which the processes belonging to this cgroup are allowed to run. At the top level, all the CPUs are available to all the processes of the system. In this example, the system has 8 CPUs:

$ cd /sys/fs/cgroup/cpuset
$ cat cpuset.cpus
0-7

The list of processes belonging to this cgroup is listed in the cgroup.procs file:

$ cat cgroup.procs
1
2
3
[...]
12364
12423
12424
12425
[...]

It is possible to create a child cgroup in which only a subset of the CPUs is allowed. For example, let's define a sub-cgroup with CPU cores 1 and 3:

$ pwd
/sys/fs/cgroup/cpuset
$ sudo mkdir subset1
$ cd subset1
$ pwd
/sys/fs/cgroup/cpuset/subset1
$ ls -l 
total 0
-rw-r--r-- 1 root root 0 nov.   22 23:28 cgroup.clone_children
-rw-r--r-- 1 root root 0 nov.   22 23:28 cgroup.procs
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.cpu_exclusive
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.cpus
-r--r--r-- 1 root root 0 nov.   22 23:28 cpuset.effective_cpus
-r--r--r-- 1 root root 0 nov.   22 23:28 cpuset.effective_mems
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.mem_exclusive
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.mem_hardwall
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.memory_migrate
-r--r--r-- 1 root root 0 nov.   22 23:28 cpuset.memory_pressure
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.memory_spread_page
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.memory_spread_slab
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.mems
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.sched_load_balance
-rw-r--r-- 1 root root 0 nov.   22 23:28 cpuset.sched_relax_domain_level
-rw-r--r-- 1 root root 0 nov.   22 23:28 notify_on_release
-rw-r--r-- 1 root root 0 nov.   22 23:28 tasks
$ cat cpuset.cpus

$ sudo sh -c "echo 1,3 > cpuset.cpus"
$ cat cpuset.cpus 
1,3

The cpuset.mems file must be filled in before moving any process into this cgroup. Here we move the current shell into the new cgroup (we merely write the pid of the process we want to move into the cgroup.procs file):

$ cat cgroup.procs

$ echo $$
4753
$ sudo sh -c "echo 4753 > cgroup.procs"
sh: 1: echo: echo: I/O error
$ cat cpuset.mems

$ sudo sh -c "echo 0 > cpuset.mems"
$ cat cpuset.mems
0
$ sudo sh -c "echo 4753 > cgroup.procs"
$ cat cgroup.procs
4753
12569

The latter shows that the current shell (pid 4753) is now located in the newly created cgroup (the second pid, 12569, belongs to the cat command: being a child of the current shell, it inherits its cgroups). With a formatted ps command, it is possible to verify which CPU the processes are running on (PSR column):

$ ps -o pid,ppid,psr,command
    PID    PPID PSR COMMAND
   4753    2372   3 bash
  12672    4753   1 ps -o pid,ppid,psr,command

We can see that the current shell is running on CPU#3 and that its child (the ps command), which inherits its cgroups, is running on CPU#1.

In conclusion, instead of using sched_setaffinity() or any pthread service, it is possible to create a cpuset hierarchy in the cgroups tree and move processes into it by writing their pids into the corresponding cgroup.procs files.
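As a programmatic counterpart, here is a sketch of the same move done from C (the /sys/fs/cgroup/cpuset/subset1 path is the one from the walkthrough above, and sufficient privileges are assumed); it simply writes the PID into cgroup.procs:

#include <stdio.h>
#include <unistd.h>

/* Move the calling process into a cgroup by writing its PID into cgroup.procs.
 * Returns 0 on success, -1 on failure (for example, insufficient privileges). */
static int join_cgroup(const char *procs_path)
{
    FILE *f = fopen(procs_path, "w");
    if (f == NULL) {
        perror("fopen");
        return -1;
    }
    fprintf(f, "%d\n", (int) getpid());
    if (fclose(f) != 0) {   /* the kernel may only report the error when the write is flushed */
        perror("fclose");
        return -1;
    }
    return 0;
}

int main(void)
{
    /* Path taken from the cgroup v1 cpuset walkthrough above. */
    return join_cgroup("/sys/fs/cgroup/cpuset/subset1/cgroup.procs") == 0 ? 0 : 1;
}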
