I have a Python script with a normal runtime of ~90 seconds. However, when I change only minor things in it (like alternating the colors in my final pyplot figure) and then execute it several times in quick succession, its runtime increases to close to 10 minutes.
Some bullet points of what I'm doing:

- I load .dat-files using numpy.genfromtxt and crunch some numbers with them. I use array.columnname access extensively.
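For illustration, this is the kind of loading and named-column access I mean; the file name and column names here are only placeholders, not my actual data:

    import numpy as np

    # Placeholder file and column names, just to show the access pattern:
    # genfromtxt with names=... gives a structured array, and viewing it as a
    # recarray allows the array.columnname style of access.
    data = np.genfromtxt("Measurement_A.dat",
                         names=["time", "voltage", "current"]).view(np.recarray)

    power = data.voltage * data.current   # columns addressed by name
    print(power.mean())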
- Some if's here and there, but nothing fancy, really.
- I use the multiprocessing module as follows:
    import multiprocessing

    npro = multiprocessing.cpu_count()  # Count the number of processors
    pool = multiprocessing.Pool(processes=npro)
    bigdata = list(pool.map(analyze, range(len(FileEndings))))
    pool.close()
with analyze being my main function and FileEndings its input, a string, used to build the right name of the file I want to load and then evaluate. Afterwards, I use it a second time with
    pool2 = multiprocessing.Pool(processes=npro)
    listofaverages = list(pool2.map(averaging, range(8)))
    pool2.close()
with averaging being another function of mine.
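Since the wording above is a bit compressed: as written, pool.map hands analyze an index, and analyze uses that index to pick the matching entry of FileEndings and build the file name. A stripped-down sketch of that structure (the naming scheme, column names, and function body are placeholders, not my actual code):

    import multiprocessing

    import numpy as np

    # Placeholder endings; the real list is much longer.
    FileEndings = ["A", "B", "C"]

    def analyze(i):
        # pool.map passes an index; use it to build the name of the file to load.
        fname = "Measurement_{}.dat".format(FileEndings[i])   # placeholder naming scheme
        data = np.genfromtxt(fname,
                             names=["time", "voltage", "current"]).view(np.recarray)
        # ... the actual number crunching on the named columns goes here ...
        return data.voltage.mean()

    if __name__ == "__main__":
        npro = multiprocessing.cpu_count()
        pool = multiprocessing.Pool(processes=npro)
        bigdata = list(pool.map(analyze, range(len(FileEndings))))
        pool.close()

The second pool is set up the same way, just mapping averaging over range(8).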
- I use the @jit decorator (from numba) to speed up the basic calculations I do in my inner loops, with nogil, nopython, and cache all set to True. Commenting these decorators out doesn't resolve the issue.
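For reference, this is what the decorator setup looks like; the function body is only a placeholder for the kind of inner-loop arithmetic I mean:

    import numpy as np
    from numba import jit

    @jit(nopython=True, nogil=True, cache=True)
    def running_sum(values):
        # Stand-in for the inner-loop number crunching
        out = np.empty_like(values)
        total = 0.0
        for k in range(values.shape[0]):
            total += values[k]
            out[k] = total
        return out

    result = running_sum(np.random.rand(1000))   # compiled on first call, cached afterwards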
- Running the script from bash doesn't help either.
- htop reveals that all processors are at full capacity when running. I am also seeing a lot of processes stemming from PyCharm (50 or so), each at an equal MEM% of 7.9. The CPU% is at 0 for most of them; a few exceptions are in the range of several percent.

Has anyone experienced such an issue before? And if so, any suggestions what might help? Or are any of the things I use simply prone to causing these problems?
This can probably be closed: the problem was caused by a faulty fan in my machine.
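In case someone else hits the same symptom: a broken fan presumably lets the CPU overheat and throttle its clock speed, which would explain why htop still shows full load while the wall-clock time explodes. One quick way to check for this (assuming psutil is installed, which is not part of my script) is to watch the reported CPU frequency while the script runs:

    import time

    import psutil

    # Poll the CPU frequency; a value far below the nominal maximum while the
    # machine is under full load is a strong hint of thermal throttling.
    for _ in range(10):
        freq = psutil.cpu_freq()            # may be None on some platforms
        if freq is not None:
            print("current: {:.0f} MHz (max: {:.0f} MHz)".format(freq.current, freq.max))
        time.sleep(1)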