
How does the decimal accuracy of Python compare to that of C?

I was looking at the Golden Ratio formula for finding the nth Fibonacci number, and it made me curious.

I know Python handles arbitrarily large integers, but what sort of precision do you get with decimals? Is it just straight on top of a C double or something, or does it use a more accurate modified implementation too? (Obviously not with arbitrary accuracy. ;D)
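As a concrete illustration of the question, here is a sketch comparing Binet's (Golden Ratio) formula, computed with ordinary Python floats, against an exact integer computation. The function names are my own; the point is to find where double-precision rounding first gives a wrong Fibonacci number.

```python
from math import sqrt

def fib_exact(n):
    """Exact nth Fibonacci number via iteration (arbitrary-precision ints)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    """Binet's formula using Python floats (C doubles under the hood)."""
    phi = (1 + sqrt(5)) / 2
    return round(phi ** n / sqrt(5))

# Find the first n where double precision is no longer enough.
n = 0
while fib_binet(n) == fib_exact(n):
    n += 1
print(f"Binet's formula first disagrees at n = {n}")
```

On a typical IEEE-754 platform the formula breaks down once the result exceeds the ~15-16 significant decimal digits a double can hold.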

Almost all platforms map Python floats to IEEE-754 "double precision".

http://docs.python.org/tutorial/floatingpoint.html#representation-error
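You can check this on your own interpreter: `sys.float_info` exposes the parameters of the underlying C double, and the classic `0.1 + 0.2` case shows the representation error described in that tutorial page.

```python
import sys

# On IEEE-754 platforms a Python float is a C double:
# 53-bit mantissa, ~15 reliable decimal digits.
print(sys.float_info.mant_dig)   # 53
print(sys.float_info.dig)        # 15

# Classic representation error: 0.1 has no exact binary form.
print(0.1 + 0.2 == 0.3)          # False
print(f"{0.1:.20f}")             # 0.10000000000000000555
```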

There's also the decimal module for decimal floating-point math with user-settable precision.

Python floats use the double type of the underlying C compiler. As Bwmat says, this is generally IEEE-754 double precision.

However, if you need more precision than that, you can use the Python decimal module, which was added in Python 2.4.
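A minimal sketch of the decimal module: you set the working precision on the context, and arithmetic then carries that many significant digits instead of the double's ~15.

```python
from decimal import Decimal, getcontext

# Ask for 50 significant digits instead of a double's ~15.
getcontext().prec = 50

print(Decimal(1) / Decimal(7))   # 50 digits of 1/7
print(1 / 7)                     # plain float for comparison
```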

Python 2.6 also added the fractions module, which may be a better fit for some problems.

Both of these are going to be slower than using the float type, but that is the price for more precision.
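For completeness, a quick sketch of the fractions module: it represents numbers as exact integer ratios, so there is no rounding at all.

```python
from fractions import Fraction

# Exact rational arithmetic: no representation error.
x = Fraction(1, 10) + Fraction(2, 10)
print(x)                      # 3/10
print(x == Fraction(3, 10))   # True
print(float(x))               # 0.3
```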
