
The dot product of int32 and float32 produces float64 in Theano

When I compile the function

import theano
import theano.tensor

x = theano.tensor.imatrix('x')   # int32 matrix
y = theano.tensor.fmatrix('y')   # float32 matrix
z = x.dot(y)
f = theano.function([x, y], z)

the resulting output is float64, even though x is of type int32 and y is of type float32. When I compute the same operation with x as a Theano fmatrix (float32), the result is float32. Why is the smaller bit size not preserved in the former case? In other words, why does the dot product of an int32 and a float32 give a float64 instead of a float32 in Theano?
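For example, the inferred dtypes can be checked on the symbolic variables before compiling anything (a minimal sketch; the variable names xi, xf and yf are just for illustration):

import theano.tensor as T

xi = T.imatrix('xi')        # int32
xf = T.fmatrix('xf')        # float32
yf = T.fmatrix('yf')        # float32

print(xi.dot(yf).dtype)     # 'float64' -- the mixed int32/float32 case is upcast
print(xf.dot(yf).dtype)     # 'float32' -- the all-float32 case is not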

I'm using Theano version 0.9.0

In general, Theano assumes that you want to keep whatever precision you start out with. int32 has 31 significand bits, while float32 has only 24 significand bits plus 8 exponent bits, so a large int32 value cannot be represented exactly as a float32, and casting the float to int32 would lose its fractional part. Staying at either 32-bit type could therefore reduce precision, so float64, whose 53 significand bits can represent both input types exactly, is chosen.
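If you are happy to accept the possible precision loss, you can cast explicitly before the dot product so that the result stays float32. A minimal sketch using theano.tensor.cast:

import theano
import theano.tensor as T

x = T.imatrix('x')
y = T.fmatrix('y')

# Cast the int32 input to float32 up front (large integers may lose
# precision), so the dot product is computed and returned in float32.
z = T.cast(x, 'float32').dot(y)
f = theano.function([x, y], z)
print(z.dtype)  # 'float32'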

You can also configure Theano not to upcast to float64 by default; see the sketch below.
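One way to do that, as I understand the configuration system in Theano 0.9 (the flag names floatX and cast_policy are assumptions on my part, so check theano.config for your version), is to set the flags before theano is first imported, for example via the THEANO_FLAGS environment variable:

import os
# Assumed behaviour: with floatX=float32 and cast_policy='numpy+floatX',
# results that numpy would make float64 are downcast to floatX as long as
# no float64 input is involved. Must be set before importing theano.
os.environ['THEANO_FLAGS'] = 'floatX=float32,cast_policy=numpy+floatX'

import theano
import theano.tensor as T

x = T.imatrix('x')
y = T.fmatrix('y')
print(x.dot(y).dtype)  # expected 'float32' under this configuration

The same flags can also be placed in a .theanorc file instead of the environment variable.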
