I have a chunk of code that takes a series of variables and passes them to N number of modules. To simplify readability of code rather than passing the variables over and over again I created a dictionary and unpack that to modules as follows:
message_package = {
    'v1': v1,
    'v2': v2,
    'v3': v3,
}
for mod in mods:
    mod.f1(**message_package)
    [...]
    if condition:
        mod.f2(**message_package)
Each module then grabs the variables it needs and ignores the rest:

# in mod1.py
def f1(v1=None, **kwargs):
    do_something()
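A minimal runnable sketch of the whole pattern (class and variable names are illustrative, standing in for the separate modules):

```python
# Each "module" exposes a handler that names only the variables it
# needs and swallows everything else via **kwargs.
class ModA:
    def f1(self, v1=None, **kwargs):
        return f"ModA got v1={v1}"

class ModB:
    def f1(self, v2=None, v3=None, **kwargs):
        return f"ModB got v2={v2}, v3={v3}"

v1, v2, v3 = 1, 2, 3
message_package = {'v1': v1, 'v2': v2, 'v3': v3}

mods = [ModA(), ModB()]
results = [mod.f1(**message_package) for mod in mods]
# results == ["ModA got v1=1", "ModB got v2=2, v3=3"]
```

Adding a new variable only means adding one entry to `message_package`; handlers that don't care about it never have to change, because it lands in their `**kwargs`.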
From a readability/usability standpoint I find this quite nice -- variables are immediately available without having to pull them out of **kwargs, and if I add a variable to the message package it's a one-line change and I don't have to update every module.
As I'm somewhat new to Python I'm wondering... is this very unpythonic? Is there a big performance impact from constantly unpacking these dictionaries and/or is there a better way to do this?
Thanks for all the comments above. I followed Martijn's suggestion and ran a simple test using timeit.
The results using my data are as follows:
>>> timeit.timeit('passdict()',setup=setup,number=1000000)
0.1841774140484631
>>> timeit.timeit('unpack()',setup=setup,number=1000000)
0.43643336702371016
>>>
Looks like Cyphase was correct that there would only be a performance issue if I were doing this a "whoooole lot" -- unpacking is about twice as slow as passing the dictionary directly, but the difference is only ~250 ms over 1M iterations. For me this is negligible, as I'm only dealing with 5-10 calls in one function.
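For reference, the exact benchmark setup wasn't shown above; a reconstruction along these lines (function bodies and names are assumptions) compares passing the dict as one argument against unpacking it into keyword arguments:

```python
import timeit

# Shared setup string: a message package plus two no-op callees,
# one taking the dict whole, one taking unpacked keyword arguments.
setup = """
message_package = {'v1': 1, 'v2': 2, 'v3': 3}

def f(pkg):
    pass

def g(v1=None, v2=None, v3=None, **kwargs):
    pass

def passdict():
    f(message_package)       # pass the dict as a single argument

def unpack():
    g(**message_package)     # unpack into keyword arguments
"""

t_pass = timeit.timeit('passdict()', setup=setup, number=1_000_000)
t_unpack = timeit.timeit('unpack()', setup=setup, number=1_000_000)
print(t_pass, t_unpack)
```

Absolute numbers vary by machine and Python version; the point is only the relative cost of the `**` unpacking at each call site.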