
Is it a good idea to import packages from other packages in Python?

Suppose there is some well-known (third-party) Python package a, which depends on another well-known package b (e.g. imageio and numpy, respectively). Both are available through pip.

Now suppose my own code explicitly uses both a and b. Package versions are pinned in my requirements.txt.

I see a few options for importing and using these packages, as described below. To me, Options 2 and 3 look the cleanest, as they appear to reduce the number of dependencies that I need to manage explicitly.

Is there a preferred way of importing these packages from the dependency-management point of view? Or is it just a matter of style?

Option 1:

import a
import b
...
a.something()
b.something_else()
...

Option 2:

import a  # which imports b
...
a.something()
a.b.something_else()
...

Option 3:

import a
from a import b
...
a.something()
b.something_else()
...

P.S. The following questions seem related but do not provide an answer: 1, 2, 3, 4

You should always manage your direct dependencies explicitly and never depend on the internal implementation details of third-party packages. Those details may change in a future release (unless the package explicitly documents that it re-exports a particular dependency), and your code will break. If you depend on package b, put it into your requirements.txt and import b explicitly.
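Concretely, with the imageio/numpy example from the question, both direct dependencies would be declared even though one pulls in the other transitively (version numbers here are placeholders):

```
# requirements.txt -- list every package your code imports directly,
# even though imageio already depends on numpy transitively
imageio==2.34.0
numpy==1.26.4
```

That way, if a future imageio release drops or changes its numpy dependency, your pin still guarantees that numpy is installed at a version your code was tested against.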

A maxim in the Python community (from the Zen of Python) is: "Flat is better than nested." For this reason, I dislike Option 2: it is less readable, and if package a should ever change the way it organizes its namespace, Option 2 could break.
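That fragility is easy to demonstrate with two hypothetical in-memory modules (pkg_a and pkg_b are invented names standing in for a and b):

```python
import sys
import types

# Build pkg_b: the "transitive" dependency.
pkg_b = types.ModuleType("pkg_b")
pkg_b.something_else = lambda: "from b"
sys.modules["pkg_b"] = pkg_b

# Build pkg_a: today it happens to expose pkg_b as an attribute,
# the way a package that does `import b` at top level would.
pkg_a = types.ModuleType("pkg_a")
pkg_a.something = lambda: "from a"
pkg_a.b = pkg_b
sys.modules["pkg_a"] = pkg_a

import pkg_a  # resolved from sys.modules, as with any installed package

# Option 2 style works only while pkg_a keeps this internal import:
assert pkg_a.b.something_else() == "from b"

# A future release of pkg_a might stop importing b at top level...
del pkg_a.b

# ...and pkg_a.b now raises AttributeError, while an explicit
# `import pkg_b` is unaffected:
import pkg_b
assert pkg_b.something_else() == "from b"
```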

Options 1 and 3 read identically within the main body of the code, so I have trouble choosing between them.
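One thing worth noting: Option 3 carries the same hidden coupling as Option 2, because `from a import b` is simply attribute access on a and succeeds only while a imports b at module scope. The standard library's os/os.path pair is a convenient stand-in to show that the two spellings resolve to the same object:

```python
# `from os import path` relies on os importing its platform path module
# at top level -- exactly like `from a import b` relies on a's internals.
from os import path as path_via_attr  # attribute lookup on os
import os.path as path_explicit       # explicit submodule import

# Both names are bound to the very same module object:
print(path_via_attr is path_explicit)  # True
```

So from a dependency-management standpoint, only Option 1 makes your code independent of how a organizes its imports.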
