Python multiprocessing with single worker faster than sequential operation
Brief overview – I wrote some test files containing lots of random numbers to compare the disk performance of Python multiprocessing against sequential operation.

Function descriptions:

putfiles: writes the test files to the drive
readFile: reads the given file path, sums the numbers in the file, and writes the result to an answer file
getSequential: reads the files one at a time in a for loop
getParallel: reads the files with multiple processes

Performance results (reading and processing 100 files, sequential vs. a process pool):

timeit getSequential(numFiles=100) – around 2.85 s best
timeit getParallel(numFiles=100, numProcesses=4) – around 960 ms best
timeit getParallel(numFiles=100, numProcesses=1) – around 980 ms best
import os
import random
from multiprocessing import Pool

os.chdir('/Users/test/Desktop/filewritetest')

def putfiles(numFiles=5, numCount=100):
    #numFiles = int(input("how many files?: "))
    #numCount = int(input('How many random numbers?: '))
    for num in range(numFiles):
        with open('r' + str(num) + '.txt', 'w') as f:
            f.write("\n".join([str(random.randint(1, 100)) for i in range(numCount)]))

def readFile(fileurl):
    with open(fileurl, 'r') as f, open("ans_" + fileurl, 'w') as fw:
        fw.write(str(sum([int(i) for i in f.read().split()])))

def getSequential(numFiles=5):
    #in1 = int(input("how many files?: "))
    for num in range(numFiles):
        readFile('r' + str(num) + '.txt')

def getParallel(numFiles=5, numProcesses=2):
    #numFiles = int(input("how many files?: "))
    #numProcesses = int(input('How many processes?: '))
    with Pool(numProcesses) as p:
        p.map(readFile, ['r' + str(num) + '.txt' for num in range(numFiles)])

#putfiles()
putfiles(numFiles=1000, numCount=100000)

timeit getSequential(numFiles=100)
##around 2.85s best
timeit getParallel(numFiles=100, numProcesses=1)
##around 980ms best
timeit getParallel(numFiles=100, numProcesses=4)
##around 960ms best
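The bare `timeit` calls above only work as the `%timeit` magic in IPython/Spyder; in a plain Python script the same measurement can be sketched with `timeit.timeit`. A minimal self-contained version (the file names mirror the snippet above, but the temp directory and small sizes are illustrative, not from the original post):

```python
import os
import random
import tempfile
import timeit

# Work in a throwaway directory so the sketch doesn't touch real files.
workdir = tempfile.mkdtemp()
os.chdir(workdir)

def putfiles(numFiles=5, numCount=100):
    # Write numFiles files, each holding numCount random integers.
    for num in range(numFiles):
        with open('r%d.txt' % num, 'w') as f:
            f.write("\n".join(str(random.randint(1, 100)) for _ in range(numCount)))

def readFile(fileurl):
    # Sum the numbers in one file and write the result to an answer file.
    with open(fileurl) as f, open("ans_" + fileurl, 'w') as fw:
        fw.write(str(sum(int(i) for i in f.read().split())))

def getSequential(numFiles=5):
    for num in range(numFiles):
        readFile('r%d.txt' % num)

putfiles(numFiles=5, numCount=100)
# timeit.timeit accepts a callable directly; number=3 repeats the whole pass.
elapsed = timeit.timeit(lambda: getSequential(numFiles=5), number=3)
print("sequential, 3 runs: %.4fs" % elapsed)
```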
Update: in a new Spyder session I no longer see this problem. Updated runtimes below:
##100 files
#around 2.97s best
timeit getSequential(numFiles=100)
#around 2.99s best
timeit getParallel(numFiles=100, numProcesses=1)
#around 1.57s best
timeit getParallel(numFiles=100, numProcesses=2)
#around 942ms best
timeit getParallel(numFiles=100, numProcesses=4)
##1000 files
#around 29.3s best
timeit getSequential(numFiles=1000)
#around 11.8s best
timeit getParallel(numFiles=1000, numProcesses=4)
#around 9.6s best
timeit getParallel(numFiles=1000, numProcesses=16)
#around 9.65s best #let pool choose best default value
timeit getParallel(numFiles=1000)
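On the "let pool choose best default value" run above: when `Pool()` is created with no argument, it sizes itself to `os.cpu_count()` workers. A small sketch to confirm this (the `_processes` attribute is a CPython implementation detail, used here only for illustration):

```python
import os
from multiprocessing import Pool

# Pool() with no argument defaults to os.cpu_count() worker processes.
workers = os.cpu_count()
print("os.cpu_count():", workers)

if __name__ == '__main__':
    with Pool() as p:
        # _processes is internal to CPython's Pool; shown only to
        # illustrate the default pool size.
        print("default pool size:", p._processes)
```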
Please don't consider this an answer – it is meant to show you my code for running these things in Python 3.x (your timeit usage doesn't work for me at all; I assume it is 2.x). Sorry, but I don't have time to dig into it right now.
[EDIT] On a spinning drive, keep disk caching in mind: don't access the same files across different tests, or simply swap the test order, to see whether disk caching is involved.
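One way to keep the page cache out of the comparison – a sketch under my own assumptions, not code from the post – is to give every timed run a brand-new batch of files, so no test re-reads data a previous test already pulled into the cache:

```python
import os
import random
import tempfile

# Sketch: write a fresh batch of files for each timed run so that no
# measurement benefits from files cached by an earlier one. The directory
# prefix, file names, and sizes are illustrative.
def make_fresh_batch(numFiles=5, numCount=10):
    batch_dir = tempfile.mkdtemp(prefix='bench_')
    for num in range(numFiles):
        path = os.path.join(batch_dir, 'r%d.txt' % num)
        with open(path, 'w') as f:
            f.write("\n".join(str(random.randint(1, 100)) for _ in range(numCount)))
    return batch_dir

# Each call returns a different directory of never-before-read files.
batch = make_fresh_batch()
print(batch, len(os.listdir(batch)))
```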
Using the code below, changing the numProcesses=X parameter by hand, I got the following results:
On an SSD with 1000 files: around 0.31 s sequential, 0.37 s parallel with 1 process, and 0.23 s parallel with 4 processes.
import os
import random
import timeit
from multiprocessing import Pool
from contextlib import closing

os.chdir('c:\\temp\\')

def putfiles(numFiles=5, numCount=1):
    #numFiles = int(input("how many files?: "))
    #numCount = int(input('How many random numbers?: '))
    for num in range(numFiles):
        #print("num: " + str(num))
        with open('r' + str(num) + '.txt', 'w') as f:
            f.write("\n".join([str(random.randint(1, 100)) for i in range(numCount)]))
    #print("putfiles done")

def readFile(fileurl):
    with open(fileurl, 'r') as f, open("ans_" + fileurl, 'w') as fw:
        fw.write(str(sum([int(i) for i in f.read().split()])))

def getSequential(numFiles=10000):
    #print("getSequential, numFiles: " + str(numFiles))
    #in1 = int(input("how many files?: "))
    for num in range(numFiles):
        #print("getseq for")
        readFile('r' + str(num) + '.txt')
    #print("getSequential done")

def getParallel(numFiles=10000, numProcesses=1):
    #numFiles = int(input("how many files?: "))
    #numProcesses = int(input('How many processes?: '))
    #with Pool(10) as p:
    with closing(Pool(processes=numProcesses)) as p:  # pool size comes from the numProcesses parameter
        p.map(readFile, ['r' + str(num) + '.txt' for num in range(numFiles)])

if __name__ == '__main__':
    #putfiles(numFiles=10000, numCount=1)
    print(timeit.timeit("getSequential()", "from __main__ import getSequential", number=1))
    print(timeit.timeit("getParallel()", "from __main__ import getParallel", number=1))
    #timeit (getParallel(numFiles=100, numProcesses=4)) #-around 960ms best
    #timeit (getParallel(numFiles=100, numProcesses=1)) #-around 980ms best