
How to force NumPy to always use a precision (float32, float64 …)?

I am trying to study a little FIR example written in Python. See https://scipy-cookbook.readthedocs.io/items/FIRFilter.html

My goal is to study how the output precision varies for each of float16, float32 and float64 (all available in NumPy). So for the first case I need to keep all my computations in float16 only. The thing is, I would have to cast the data at every step to ensure I'm using the right format. Is there a way to consistently use a unified context for the whole computation, i.e. to perform all operations (additions, subtractions, cos, sin, etc.) in float16, for example, without rewriting the code with casts?

From the NumPy basics:

When operating with arrays of different types, the type of the resulting array corresponds to the more general or precise one (a behavior known as upcasting).

You can define the data type at array creation. A sum, multiplication or subtraction involving arrays of different types will upcast the result to the "larger" type, while operations on a single array keep its dtype, e.g.:

import numpy as np

x = np.ones(10, dtype=np.float16)
y = np.ones(10, dtype=np.float32)
print((x + y).dtype, (x - y).dtype, (x * y).dtype)
print(np.sin(x).dtype, np.sin(y).dtype)
>> float32 float32 float32
   float16 float32
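To my knowledge NumPy has no global "precision context", so if an array of a wider type enters the computation, the practical option is to cast it down once at the boundary; a small sketch (not part of the original answer):

import numpy as np

x = np.ones(10, dtype=np.float16)
y = np.ones(10, dtype=np.float32)

# Casting the wider operand down once keeps the result in float16.
z = x + y.astype(np.float16)
print(z.dtype)
>> float16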

An exception is when passing an integer array, in which case NumPy upcasts to float64 by default:

print(np.sin(np.ones(10, dtype=int)).dtype)
>> float64
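Putting this together for the linked FIR study, a minimal sketch (the sample times and frequencies below are made-up placeholders, not the cookbook's values) could parameterize the dtype at array creation and let the ufuncs preserve it:

import numpy as np

def make_signal(dtype):
    # Every array is created directly in the target dtype; Python float
    # scalars such as np.pi do not upcast it, so the result stays in dtype.
    t = np.linspace(0.0, 1.0, 100, dtype=dtype)
    return np.cos(2 * np.pi * 0.75 * t) + 0.2 * np.sin(2 * np.pi * 1.25 * t)

for dtype in (np.float16, np.float32, np.float64):
    x = make_signal(dtype)
    print(dtype.__name__, x.dtype, x.sum().dtype)
>> float16 float16 float16
   float32 float32 float32
   float64 float64 float64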
