How is Python's decimal (and other precise decimal libraries) implemented, and why are they slower than built-in floating-point calculations?

I've been reading the Floating-Point Guide to try to clarify some points about floating-point numbers, and I assume Python's decimal library is an implementation of the "Limited-Precision Decimal" format mentioned on the linked page.

It mentions that "Limited-Precision Decimal" is "Basically the same as IEEE 754 binary floating-point, except that the exponent is interpreted as base 10. As a result, there are no unexpected rounding errors. Also, this kind of format is relatively compact and fast, but usually slower than binary formats."

Is Python decimal implemented the same way? If all else is equal in the representation besides the exponent being interpreted differently, why is it slower and why isn't this representation always preferred over the IEEE 754 implementation? Finally, why does using the exponent as base 10 prevent unexpected rounding errors?

Thanks!

It mentions that "Limited-Precision Decimal" [...] Is Python decimal implemented the same way?

No. Internally, Python's Decimal uses a base-10 exponent along with an arbitrarily large integer coefficient. Since the size of the coefficient is unlimited, the potential precision is unlimited too.
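
You can see this representation directly: Decimal.as_tuple() exposes the sign, the coefficient digits, and the base-10 exponent.

>>> from decimal import Decimal
>>> Decimal('123.45').as_tuple()
DecimalTuple(sign=0, digits=(1, 2, 3, 4, 5), exponent=-2)

That is, 123.45 is stored as the integer coefficient 12345 scaled by 10^-2, and the coefficient can grow to as many digits as the calculation needs.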

why is [Python's Decimal] slower

There are a few reasons for this. First, adding two Decimal values with different exponents requires rescaling one coefficient by a power of ten, and multiplying by ten is more expensive than shifting by a power of two on binary hardware. Second, an exact calculation requires more digits of precision than an approximate one, so there is simply more work to do per operation. Third, IEEE 754 binary floating point has dedicated hardware support because it is such a common operation, while Decimal arithmetic runs in software. You can measure the difference yourself, as in the sketch below.
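
A quick timeit comparison shows the gap. The absolute numbers depend on your machine and Python version (CPython has shipped the C-accelerated libmpdec implementation of decimal since 3.3), but float addition reliably comes out faster:

import timeit

# Time one million additions of each kind; the setup strings build the
# operands in advance so that only the addition itself is measured.
float_time = timeit.timeit('a + b', setup='a, b = 1.1, 2.2')
decimal_time = timeit.timeit(
    'a + b',
    setup="from decimal import Decimal; a, b = Decimal('1.1'), Decimal('2.2')",
)
print(f'float:   {float_time:.3f} s')
print(f'Decimal: {decimal_time:.3f} s')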

why isn't this representation always preferred over the IEEE 754 implementation?

Speed is a feature, and not all calculations benefit from being done exactly. The use of inexact calculation is more widespread than you might think: Excel, for example, uses floating-point numbers internally, yet it has hundreds of millions of users, so evidently you can get pretty far with floating point alone.

Finally, why does using the exponent as base 10 prevent unexpected rounding errors?

The key word in that sentence is "unexpected". You wouldn't be surprised to learn that a base-10 number system can't represent the number 1/3 without rounding it. We understand, and are okay with, not being able to write 1/3, 1/7, and 1/9 perfectly accurately. But people are much less accepting of a computer system that can't represent 1/5 accurately.

If you tried to represent 0.2 in binary, you'd get 0.0011(0011)..., with the 0011 group repeating forever. A floating-point number doesn't have an infinite number of bits, so it rounds off everything after 53 significant bits (assuming double precision) and stores an approximation.
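
You can watch both behaviours in the interpreter. Constructing a Decimal from the float 0.2 captures the rounded binary value exactly, while constructing from strings keeps the arithmetic exact:

>>> 0.1 + 0.2
0.30000000000000004
>>> from decimal import Decimal
>>> Decimal(0.2)
Decimal('0.200000000000000011102230246251565404236316680908203125')
>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')

The long string is the exact value of the double closest to 0.2, which is what the hardware is really working with.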

This is not to say that Decimal is perfectly accurate; there are lots of situations that force rounding. For example, the square root of two is irrational and can't be represented as an exact decimal of any finite length.

Example:

>>> Decimal(2).sqrt()
Decimal('1.414213562373095048801688724')
>>> Decimal(2).sqrt() ** 2
Decimal('1.999999999999999999999999999')

Decimal is a way of doing math that agrees with the answer you'd get by doing it with pencil and paper. In exchange, it gives up speed and uses more memory.
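
That trade-off is adjustable: the module's context controls how many significant digits are carried, so you can buy accuracy with more time and memory. For example, raising the precision from the default 28 digits to 50:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50
>>> Decimal(2).sqrt()
Decimal('1.4142135623730950488016887242096980785696718753769')
>>> Decimal(2).sqrt() ** 2
Decimal('1.9999999999999999999999999999999999999999999999999')

The square of the rounded root still isn't exactly 2, and it never will be, no matter how many digits you ask for.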
