
Multi-threaded C++ code slower with more physical cores? (threaded C++ mex function)

I am running multi-threaded C++ code on different machines right now. I am using it within a MATLAB mex function, so the overall program is run from MATLAB. I used the code from this link here and only changed what is done in "main_loop" to fit my task. The code runs perfectly fine on two of my computers and is many times faster than running the same C++ code as a single thread. So I think the program itself is fine.

However, when I run the same thing on a third machine, it is suddenly extremely slow. The single-threaded version is fine, but the multi-threaded one takes 10-15 times longer. Since everything seems fine on the other computers, my guess is that it has something to do with the specs of the third machine (see details below). My guess: the third computer has two physical processors. Does this require copying everything physically to both processors? (The original code is intentionally written such that no hard copy of any involved variable is required.) If so, is there a way to control on which processor the threads are created? (It would already help if I could just limit myself to one CPU and avoid copying everything.) I already tried setting the number of threads down to 2, which did not help.

Specs of 2-CPU computer:

  • Intel Xeon Silver 4210R, 2.40 GHz (2 processors), 128 GB RAM, 64-bit Windows 10 Pro

Specs of other computers:

  • Intel Core i7-8700, 3.2 GHz, 64 GB RAM, 64-bit Windows 10 Pro
  • Intel Core i7-10750H, 2.6 GHz, 16 GB RAM, 64-bit Windows 10 Pro, laptop

TL;DR: NUMA effects combined with false sharing are very likely to produce the observed slowdown only on the 2-socket system. Use low-level profiling information to confirm or disprove this hypothesis.


Multi-processor systems are subject to NUMA effects. Non-uniform memory access platforms are composed of NUMA nodes, each with its own local memory. Accessing the memory of another node is more expensive (higher latency and/or lower throughput). Multiple threads/processes located on multiple NUMA nodes accessing the memory of the same NUMA node can saturate it.

Allocated memory is split into pages that are mapped to NUMA nodes. The exact mapping policy depends heavily on the operating system (OS), its configuration and that of the target processes. The first-touch policy is quite common: a page is allocated on the NUMA node of the core performing the first access to that page. Depending on the chosen policy, the OS can also migrate pages from one NUMA node to another based on the amount of remote NUMA accesses. Controlling the policy is critical on NUMA platforms, especially if the application is not NUMA-aware.
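To illustrate the first-touch idea, here is a minimal sketch (not taken from the original mex code; `first_touch_init`, the chunking scheme and the thread count are assumptions): each worker writes the chunk it will later process, so under a first-touch policy the backing pages end up on that worker's NUMA node. This only pays off if the same threads later process the same chunks and stay on the same cores.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical helper: initialize `data` in parallel so that, under a
// first-touch policy, each chunk's pages are allocated on the NUMA node
// of the thread that will later work on that chunk.
void first_touch_init(double* data, std::size_t n, unsigned num_threads) {
    std::vector<std::thread> workers;
    const std::size_t chunk = (n + num_threads - 1) / num_threads;
    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([=] {
            const std::size_t begin = t * chunk;
            const std::size_t end   = std::min(n, begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                data[i] = 0.0;  // the first write maps the page near this thread
        });
    }
    for (auto& w : workers) w.join();
}
```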

The memory of multiple NUMA nodes is kept coherent thanks to a cache-coherence protocol and a high-performance inter-processor interconnect (Ultra Path Interconnect in your case). Cache coherence also applies between cores of the same processor. The point is that moving a cache line from (the L2 cache of) one core to another (L2 cache) of the same processor is much faster than moving it from (the L3 cache of) one processor to another (L3 cache). Here is an analogy for human communication: neurons in different cortical areas of the same brain communicate faster than two separate humans do.
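As a rough illustration, here is a self-contained microbenchmark sketch (not part of the original code; the iteration count is arbitrary): two threads repeatedly increment the same atomic counter, which forces the cache line to bounce between their caches. Pinned to two cores of the same socket versus cores of different sockets (the thread-pinning sketch further below can be used for that), the per-increment cost is noticeably higher in the cross-socket case on a 2-socket machine.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    std::atomic<long> counter{0};
    constexpr long iterations = 10'000'000;

    // Each fetch_add needs the cache line in exclusive state, so the line
    // keeps ping-ponging between the two threads' caches.
    auto bump = [&] {
        for (long i = 0; i < iterations; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);
    };

    const auto start = std::chrono::steady_clock::now();
    std::thread a(bump), b(bump);
    a.join();
    b.join();
    const auto end = std::chrono::steady_clock::now();

    std::printf("%.1f ns per increment\n",
                std::chrono::duration<double, std::nano>(end - start).count() /
                    (2.0 * iterations));
}
```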

If your application operates in parallel on the same cache line, false sharing can cause a cache-line bouncing effect which is much more visible between threads spread across different processors.
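A minimal sketch of the padding remedy (the struct names are made up): per-thread counters packed next to each other share a cache line, while aligning each one to the cache-line size gives every thread its own line.

```cpp
#include <cstdint>

// Several of these fit into one 64-byte cache line, so threads updating
// neighbouring elements keep invalidating each other's copy (false sharing).
struct Unpadded {
    std::uint64_t value;
};

// Aligning (and thereby padding) each element to the cache-line size gives
// every thread a private line. 64 bytes is typical for x86; C++17 offers
// std::hardware_destructive_interference_size where it is implemented.
struct alignas(64) Padded {
    std::uint64_t value;
};

Unpadded per_thread_unpadded[8];  // prone to cache-line bouncing
Padded   per_thread_padded[8];    // one cache line per thread
```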

This is a very complex topic. That being said, you can analyse these effects using low-level profilers like VTune (or perf on Linux). The idea is to analyse low-level hardware performance counters like L2/L3 cache misses/hits, RFOs, remote NUMA accesses, etc. This can be complex and tedious for someone not familiar with how processors and operating systems work, but VTune helps a bit. Note that Intel provides some more specific tools to analyse (more easily) such effects, which usually show up in parallel applications; AFAIK they are part of the Intel XE set of applications (which is not free). The best thing to do is to avoid false sharing using padding, to design your application so that each thread operates on its own memory locations as much as possible (i.e. good locality), to control the NUMA allocation policy, and finally to bind threads/processes to cores (to avoid unexpected migrations).
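For thread binding on Windows, a minimal sketch could look like the following (assumptions: a single processor group, i.e. at most 64 logical CPUs, and a hypothetical `main_loop` standing in for the per-thread work of the mex function; on larger machines SetThreadGroupAffinity would be needed instead).

```cpp
#include <thread>
#include <vector>
#include <windows.h>

// Pin each worker to one logical processor so the OS cannot migrate it
// across cores or sockets during the computation.
void run_pinned(unsigned num_threads) {
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([t] {
            SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << t);
            // main_loop(t);  // hypothetical: the per-thread work of the mex function
        });
    }
    for (auto& w : workers) w.join();
}
```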

Experimental benchmarks can also be used to quickly check whether NUMA effects and false sharing occur. For example, you can bind all the threads/processes to the same NUMA node and tell the OS to allocate pages on this NUMA node; this enables you to find issues related to NUMA effects. Another example is to bind two threads/processes to two different logical cores (i.e. hardware threads) of the same physical core, and then to different physical cores, and see whether performance is affected; this helps you locate false-sharing issues. That being said, such experiments can be impacted by many other effects that add noise and make the analysis pretty complex in practice for large applications. Thus, a low-level analysis based on hardware performance counters is better.
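As a quick experiment on Windows, the whole process can be confined to one NUMA node; if the multi-threaded run then becomes fast again, NUMA effects are implicated. A minimal sketch (assuming a single processor group; `confine_to_numa_node` is a made-up helper):

```cpp
#include <cstdio>
#include <windows.h>

// Restrict all threads of the current process to the logical CPUs of one
// NUMA node. Memory placement still follows the OS policy (e.g. first touch),
// but with all threads on one node, first-touched pages stay local to it.
bool confine_to_numa_node(UCHAR node) {
    ULONGLONG mask = 0;
    if (!GetNumaNodeProcessorMask(node, &mask))
        return false;
    return SetProcessAffinityMask(GetCurrentProcess(), (DWORD_PTR)mask) != 0;
}

int main() {
    if (!confine_to_numa_node(0))
        std::fprintf(stderr, "could not confine the process to NUMA node 0\n");
    // ... run the threaded benchmark here ...
    return 0;
}
```

Since the mex function runs inside MATLAB, the same experiment can also be done without code changes by restricting the MATLAB process to the cores of one node (e.g. via Task Manager's "Set affinity") before launching the benchmark.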

Note that some processors, like AMD Zen ones, are composed of multiple sub-parts (called CCDs/CCXs) that can be seen as multiple NUMA nodes even though there is only one processor and one socket. Such architectures will certainly become more widespread in the future. In fact, Intel has also started to go in this direction with Sub-NUMA Clustering.
