
Validity check of function arguments in Python: should every function do it?

Should every Python function (or method) check for the validity of its arguments?

Not just the types, but also whether the values are within valid ranges, whether the dimensions are as expected, and so on?

Or can functions simply expect "adult-like", well-behaved callers (other functions, the programmer, etc.), state the expectations in the docstring, and only validate user-supplied arguments in the user-facing functions?

In the case of 'private' methods, it appears to me that we don't need to check for validity, but what about other methods/functions?
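
For concreteness, the pattern I have in mind is roughly this (the names are made up):

def resize(image, width, height):
    """User-facing function: arguments get validated here."""
    if width <= 0 or height <= 0:
        raise ValueError('width and height must be positive')
    return _resize(image, width, height)

def _resize(image, width, height):
    """'Private' helper: trusts its callers, as promised in the docstring."""
    return [row[:width] for row in image[:height]]  # placeholder logic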

If we are too strict about this and check everything, won't the code fill up with tedious, decorator-style validation code? Is that acceptable?

Python encourages duck typing and the "easier to ask for forgiveness than permission" (EAFP) style: assume your arguments are correct, and if they are not, handle the situation appropriately.

We assume that "if it walks like a duck and quacks like a duck, then it must be a duck". For example:

class Duck:
    def quack(self):
        return 'quack!'

class Person:
    def quack(self):
        return 'Hey there!'

d = Duck()
p = Person()

def make_it_quack(duck):
    # No type check: anything with a quack() method will do.
    return duck.quack()

print(make_it_quack(d))  # quack!
print(make_it_quack(p))  # Hey there!

As you can see, both types work. This is intentional behaviour. If you pass in something that doesn't define that method, you'll get an AttributeError, as expected. The way to handle this is exception handling:

try:
    print(make_it_quack(d))
    print(make_it_quack(p))
    print(make_it_quack('hello world'))  # str has no quack() -> raises
except AttributeError:
    print('object must have a "quack" method')

Having said all of that, I personally don't stick to it all of the time. For instance, if I can't guarantee the types of my objects, I'll use checks like isinstance(x, Y) to direct the code properly. When that isn't practical, it's back to try/except.
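
As a sketch of what I mean, reusing the Duck class from above (the fallback strings are made up):

def describe(x):
    # Type check up front to direct the code path ("look before you leap").
    if isinstance(x, Duck):
        return 'a real duck says ' + x.quack()
    # Otherwise fall back to EAFP: just try it and handle the failure.
    try:
        return 'a duck-like thing says ' + x.quack()
    except AttributeError:
        return 'not duck-like at all'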

It's up to your discretion which one makes your code cleaner and fits the situation. There are recommendations about this, such as "always use a try/except for IOErrors" (and there is a good reason behind that one).
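
The IOError case illustrates why: checking whether a file exists before opening it is racy, since the file can disappear between the check and the open. A minimal sketch (the function name is made up; note that IOError is an alias of OSError in Python 3):

def read_config(path):
    # EAFP: attempt the operation and handle the failure, rather than
    # testing os.path.exists(path) first.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None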

I think this applies in general, not just to Python, but all the people I know have followed the philosophy that if the worst that can happen is an error message, assume the input is going to be correct. The exception is code that performs a task which, even written as safely as you can manage, could still destroy something; in that case, always check.

I've seen a lot of different approaches. The most extreme one was that the docstrings contained formal declarations of the parameter types and ranges (and of the return values). The application checked the parameters at runtime against these docstrings and raised an exception if anything was wrong with them. This was implemented using metaprogramming and could be disabled for production runs.

It wasn't really Python any more, and duck typing wasn't possible either. It was more like a statically typed language, but without a compiler to help you: you still had to wait for things to break at runtime.
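
I don't have that code to hand, but its effect was roughly that of a checking decorator like this simplified stand-in (the names and the (type, predicate) spec format are invented for illustration; the real system parsed docstrings instead):

import inspect

CHECKS_ENABLED = True  # flipped to False for production runs

def checked(**specs):
    """Validate named arguments against (type, predicate) specs."""
    def decorate(func):
        if not CHECKS_ENABLED:
            return func  # no overhead when checking is disabled
        sig = inspect.signature(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            for name, (typ, pred) in specs.items():
                value = bound.arguments[name]
                if not isinstance(value, typ) or not pred(value):
                    raise TypeError(f'{name}={value!r} violates its declared spec')
            return func(*args, **kwargs)
        return wrapper
    return decorate

@checked(angle=(float, lambda a: 0.0 <= a < 360.0))
def rotate(image, angle):
    ...  # rotate(img, 400.0) raises TypeError while checks are enabled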

I've also seen applications that don't do any checks and instead rely on comprehensive unit tests. If the tests pass, the parameters are assumed to be OK as well.

Personally, I also cover most things with unit tests, but in critical sections I use asserts. I also take extra care in methods that receive input from the user.
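
In code, that split looks roughly like this sketch (all names are hypothetical):

def _interpolate(samples, factor):
    # Critical internal section: asserts document and enforce the
    # invariants during development; they are stripped under python -O.
    assert len(samples) >= 2, 'need at least two samples'
    assert factor > 0, 'factor must be positive'
    return [s / factor for s in samples]  # placeholder for the real work

def handle_user_request(raw_factor):
    # User-facing boundary: validate explicitly and raise a clear error.
    # Don't use assert here, since it vanishes in optimized runs.
    try:
        factor = int(raw_factor)
    except (TypeError, ValueError):
        raise ValueError(f'factor must be an integer, got {raw_factor!r}')
    if factor <= 0:
        raise ValueError('factor must be positive')
    return _interpolate([0.0, 1.0], factor)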

Hope that helps
