
Python Multiprocessing not making it faster

Hello, I have a program that takes its input from stdin. I thought it would be faster if I used multiprocessing, but it actually takes longer:

Normal:

import sys
import hashlib
import base58
from progress.bar import ShadyBar


bar = ShadyBar('Fighting', max=100000, suffix='%(percent)d%% - %(index)d / %(max)d - %(elapsed)d')

listagood = []
for cc in sys.stdin:
    try:
        bar.next()
        hexwif = cc[0:51]                           # first 51 characters: the Base58Check (WIF) string
        enco = base58.b58decode_check(hexwif)       # decode and strip/verify the built-in checksum
        Fhash = hashlib.sha256(enco)                # first SHA-256
        d2 = hashlib.sha256()
        d2.update(Fhash.digest())
        Shash = d2.hexdigest()                      # second SHA-256 (double hash), as hex
        Conf1 = Shash[0:8]                          # first 4 bytes of the double hash
        encooo = base58.b58decode(hexwif)           # decode again, this time keeping the checksum
        Conf2 = encooo.encode("hex")                # hex string (Python 2 str only)
        Conf2 = Conf2[len(Conf2) - 8:len(Conf2)]    # last 4 bytes = the stored checksum
        if Conf1 == Conf2:
            listagood.append(cc)                    # keep lines whose checksum matches
    except Exception:
        pass                                        # skip lines that fail to decode



bar.finish()
print("\nChecksum: " )
print(listagood)
print("\n")

Multiprocessing:

import sys
import hashlib
import base58
import multiprocessing
from progress.bar import ShadyBar


def worker(line):
    try:
        hexwif = line[0:51]                         # first 51 characters: the Base58Check (WIF) string
        enco = base58.b58decode_check(hexwif)       # decode and strip/verify the built-in checksum
        Fhash = hashlib.sha256(enco)                # first SHA-256
        d2 = hashlib.sha256()
        d2.update(Fhash.digest())
        Shash = d2.hexdigest()                      # second SHA-256 (double hash), as hex
        Conf1 = Shash[0:8]
        encooo = base58.b58decode(hexwif)           # decode again, this time keeping the checksum
        Conf2 = encooo.encode("hex")                # hex string (Python 2 str only)
        Conf2 = Conf2[len(Conf2) - 8:len(Conf2)]
        if Conf1 == Conf2:
            return line                             # good line: hand it back to the parent process
    except Exception:
        # e = sys.exc_info()
        # print(str(e))
        pass                                        # skip lines that fail to decode



listagood = []
pool = multiprocessing.Pool(processes=4)
bar = ShadyBar('Fighting', max=100000, suffix='%(percent)d%% - %(index)d / %(max)d - %(elapsed)d')
for result in pool.imap(worker, sys.stdin):         # each line is shipped to a worker process
    if result is not None:
        listagood.append(result)
    # print("Result: %r" % (result))
    bar.next()


bar.finish()
print("\nChecksum: " )
print(listagood)
print("\n")

Unfortunately, when I check the elapsed time, the multiprocessing version takes almost three times as long.

I have one processor with two physical cores, and two virtual (hyper-threaded) cores per physical core.

How can I tell whether this is caused by multiprocessing overhead, or whether I did something wrong?
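
To isolate the pool overhead, I suppose I could time both paths against the same pre-read input; a minimal sketch (assuming Python 3, with a stub worker standing in for the real checksum code):

import multiprocessing
import sys
import time

def worker(line):
    # stand-in for the real checksum work
    return line

if __name__ == "__main__":
    lines = sys.stdin.readlines()            # read everything once, up front

    t0 = time.perf_counter()
    serial = [worker(l) for l in lines]      # plain loop, no extra processes
    t1 = time.perf_counter()

    with multiprocessing.Pool(processes=4) as pool:
        parallel = list(pool.imap(worker, lines))
    t2 = time.perf_counter()

    print("serial: %.3f s" % (t1 - t0))
    print("pool:   %.3f s" % (t2 - t1))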

Any help would be much appreciated.

Pool divides the input among multiple worker processes, 4 in your case. But feeding it sys.stdin passes one line at a time, which does not actually keep all 4 workers busy. Use the following and see how the timing changes:

 pool.imap(worker, sys.stdin.readlines())
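
Put together, the driver would look something like the sketch below (assuming Python 3 and the worker function from the question; the chunksize argument is an extra suggestion on top of the above: it makes imap send lines to each worker in batches rather than one at a time, which cuts the per-line inter-process overhead):

import multiprocessing
import sys

# worker(line) is the same function defined in the question

if __name__ == "__main__":
    lines = sys.stdin.readlines()                 # read the whole input up front

    listagood = []
    with multiprocessing.Pool(processes=4) as pool:
        # chunksize batches the input so each task sent to a worker
        # carries many lines instead of a single one
        for result in pool.imap(worker, lines, chunksize=1000):
            if result is not None:
                listagood.append(result)

    print("\nChecksum: ")
    print(listagood)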

Hope this helps.
