I have some code and would like to optimize its L1 cache hit/miss ratio. Is there a way to see cache hits/misses when memory-profiling Python code?
There are tools for C++ that can do this, for example: Measuring Cache Latencies
EDIT: Compiled variants of Python, such as Cython or Numba (JIT), are also of interest.
Although no Python-specific tool seems to exist yet, some third-party tools may be helpful for investigating this:
Cachegrind: a cache and branch-prediction profiler http://valgrind.org/docs/manual/cg-manual.html
pycachesim (simulation only): https://github.com/RRZE-HPC/pycachesim
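To illustrate what a cache simulator like pycachesim models, here is a minimal sketch of a direct-mapped L1 simulator written from scratch (standard library only, so it is runnable as-is). The cache size and line size are illustrative assumptions, not a model of any real CPU, and a real L1 is usually set-associative rather than direct-mapped:

```python
# Minimal direct-mapped cache simulator: counts hits/misses for a
# trace of byte addresses. Parameters are illustrative assumptions.
CACHE_SIZE = 32 * 1024   # assumed 32 KiB L1
LINE_SIZE = 64           # 64-byte cache lines (common on x86)
NUM_SETS = CACHE_SIZE // LINE_SIZE

def simulate(addresses):
    """Return (hits, misses) for a sequence of byte addresses."""
    tags = {}  # set index -> tag of the line currently cached there
    hits = misses = 0
    for addr in addresses:
        line = addr // LINE_SIZE      # which cache line the byte is in
        index = line % NUM_SETS       # which set that line maps to
        tag = line // NUM_SETS        # tag distinguishing lines in a set
        if tags.get(index) == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag         # evict whatever was there
    return hits, misses

# Sequential 8-byte accesses are cache-friendly: one miss per 64-byte
# line, then 7 hits on the same line.
print(simulate(range(0, 64 * 1024, 8)))          # -> (7168, 1024)
# A 512-byte stride touches a new line on every access: all misses.
print(simulate(range(0, 64 * 1024 * 64, 512)))   # -> (0, 8192)
```

The two traces show why access pattern dominates the hit ratio: both touch the same amount of address space per step, but the strided loop never reuses a line. For real (non-simulated) counts, Cachegrind can be run on the interpreter itself, e.g. `valgrind --tool=cachegrind python3 script.py`, though the numbers then include CPython interpreter overhead.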