
What's the benefit of accepting floating-point inaccuracy in C#?

I've had this problem on my mind for the last few days, and I've been struggling to phrase my question. However, I think I've nailed down what I want to know.

Why does C# accept the inaccuracy of using floating-point numbers to store data? And what's the benefit of using them over other methods?

For example, Math.Pow(Math.Sqrt(2), 2) is not exact in C#. There are programming languages that can calculate it exactly (for example, Mathematica).
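
A minimal C# demonstration of the effect (the exact digits printed depend on the runtime's default formatting; recent .NET versions print round-trippable output):

    using System;

    class FloatDemo
    {
        static void Main()
        {
            double result = Math.Pow(Math.Sqrt(2), 2);
            Console.WriteLine(result);        // typically 2.0000000000000004, not 2
            Console.WriteLine(result == 2.0); // False
        }
    }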

One argument I could think of is that calculating it exactly is a lot slower than just coping with the inaccuracy, but Mathematica and MATLAB are used to solve gigantic scientific problems, so I find it hard to believe those languages are really significantly slower than C#.

So why is it then?

PS: I'm sorry for spamming you with these questions; you've all been really helpful.

Why does C# accept the inaccuracy of using floating-point numbers to store data?

"C#" doesn't accept the tradeoff of performance over accuracy; users do, or do not, accept that.

C# has three floating point types - float, double and decimal - because those three types meet the vast majority of the needs of real-world programmers.

float and double are good for "scientific" calculations where an answer that is correct to three or four decimal places is always close enough, because that's the precision that the original measurement came in with. Suppose you divide 10.00 by 3 and get 3.333333333333. Since the original measurement was probably accurate to only 0.01, the fact that the computed result is off by less than 0.0000000000004 is irrelevant. In scientific calculations, you're not representing known-to-be-exact quantities. Imprecision in the fifteenth decimal place is irrelevant if the original measurement value was only precise to the second decimal place.
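
A rough sketch of that point in C# (the low-order digits shown are simply whatever the nearest representable double happens to be):

    // Valid as a top-level-statements program (.NET 6+).
    double measured = 10.00;        // the measurement is only good to about ±0.01
    double share = measured / 3.0;  // stored as roughly 3.3333333333333335
    System.Console.WriteLine(share);
    // The representation error (~4e-16) is dwarfed by the measurement
    // error (~1e-2), so the floating-point inaccuracy is irrelevant here.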

This is of course not true of financial calculations. The operands to a financial calculation are usually precise to two decimal places and represent exact quantities. Decimal is good for "financial" calculations because decimal operation results are exact provided that all of the inputs and outputs can be represented exactly as decimals (and they are all in a reasonable range). Decimals still have rounding errors, of course, but the operations which are exact are precisely those that you are likely to want to be exact when doing financial calculations.
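
A small illustration of the difference (the m suffix marks C# decimal literals; the specific values are just for demonstration):

    using System;

    class DecimalDemo
    {
        static void Main()
        {
            double d = 0.1 + 0.2;
            Console.WriteLine(d == 0.3);  // False: 0.1 and 0.2 have no exact binary form

            decimal m = 0.1m + 0.2m;
            Console.WriteLine(m == 0.3m); // True: decimal stores base-10 fractions exactly
        }
    }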

And what's the benefit of using them over other methods?

You should state what other methods you'd like to compare against. There are a great many different techniques for performing calculations on computers.

For example, Math.Pow(Math.Sqrt(2), 2) is not exact in C#. There are programming languages that can calculate it exactly (for example, Mathematica).

Let's be clear on this point: Mathematica does not "calculate" root 2 exactly; the number is irrational, so it cannot be calculated exactly in any finite amount of storage. Instead, what Mathematica does is represent numbers as objects that describe how the number was produced. If you say "give me the square root of two", then Mathematica essentially allocates an object that means "the application of the square root operator to the exact number 2". If you then square that, it has special-purpose logic that says "if you square something that was the square root of something else, give back the original value". Mathematica has objects that represent various special numbers as well, like pi or e, and a huge body of rules for how various manipulations of those numbers combine together.

Basically, it is a symbolic system; it manipulates numbers the same way people do when they do pencil-and-paper math. Most computer programs manipulate numbers like a calculator: perform the calculation immediately and round it off. If that is not acceptable, then you should stick to a symbolic system.
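
To make the idea concrete, here is a toy C# sketch of that symbolic approach, using entirely hypothetical types; Mathematica's real engine is vastly more sophisticated:

    using System;

    abstract record Expr;
    record Const(long Value) : Expr;
    record Sqrt(Expr Inner) : Expr;
    record Square(Expr Inner) : Expr;

    static class Rules
    {
        public static Expr Simplify(Expr e) => e switch
        {
            // The rule described above: squaring a square root undoes it exactly.
            Square(Sqrt(var x)) => Simplify(x),
            Square(var x)       => new Square(Simplify(x)),
            Sqrt(var x)         => new Sqrt(Simplify(x)),
            _                   => e,
        };
    }

    class Demo
    {
        static void Main()
        {
            Expr rootTwoSquared = new Square(new Sqrt(new Const(2)));
            // Prints "Const { Value = 2 }": no floating-point arithmetic at all.
            Console.WriteLine(Rules.Simplify(rootTwoSquared));
        }
    }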

One argument I could think of is that calculating it exactly is a lot slower than just coping with the inaccuracy, but Mathematica and MATLAB are used to solve gigantic scientific problems, so I find it hard to believe those languages are really significantly slower than C#.

It's not that they're slower, though floating-point multiplication really is incredibly fast on modern hardware. It's that the symbolic calculation engine is immensely complex. It encodes all the rules of basic mathematics, and there are a lot of those rules! C# is not intended to be a professional-grade symbolic computation engine; it's intended to be a general-purpose programming language.

One word: performance. Floating-point arithmetic is typically implemented in hardware and is many orders of magnitude faster than other approaches.

What's more, your example of MATLAB is bogus: MATLAB uses double-precision floating-point arithmetic just like C#.
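
One way to see the hardware-versus-software gap without leaving C#: decimal arithmetic is implemented in software, while double maps to hardware instructions. A rough micro-benchmark sketch (timings are machine-dependent and only indicative):

    using System;
    using System.Diagnostics;

    class Bench
    {
        static void Main()
        {
            const int N = 10_000_000;

            var sw = Stopwatch.StartNew();
            double dSum = 0;
            for (int i = 0; i < N; i++) dSum += i * 0.5;   // hardware floating point
            Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms (sum = {dSum})");

            sw.Restart();
            decimal mSum = 0;
            for (int i = 0; i < N; i++) mSum += i * 0.5m;  // software-implemented decimal
            Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (sum = {mSum})");
        }
    }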

Why does C# accept the inaccuracy of using floating-point numbers to store data?

This way, the language's floating-point support maps onto the hardware's: it's more or less the only way of taking advantage of hardware floating-point operations, which are much faster than a software solution. The drawback is that the hardware represents floating-point values with a finite number of bits, which leads to inaccuracy (note that the inaccuracy is well defined).

Other ways of representing numbers need a software solution, which is significantly slower and requires more space. "Anyone" can implement such a representation with what's available in C#, whereas implementing native floating-point support for the available hardware would be quite hard for "anyone" if it weren't already supported in the language/CLR.
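
That "well defined" inaccuracy can be observed directly; a small sketch (the bit pattern shown assumes the standard IEEE 754 double format):

    using System;

    class Bits
    {
        static void Main()
        {
            // 0.1 has no exact binary representation; the hardware stores
            // the nearest representable double in a fixed 64-bit field.
            double x = 0.1;
            Console.WriteLine(x.ToString("G17"));                             // 0.10000000000000001
            Console.WriteLine(BitConverter.DoubleToInt64Bits(x).ToString("X16")); // 3FB999999999999A
        }
    }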

For most programming problems the inaccuracy is not a problem, and the float (or double) data types are good enough. Many years ago there was no such thing as a "floating-point value", and software had to store such values as two integers. Performance was an issue (not to mention the programming errors, and WTF scenarios, that came from custom-made floating-point calculation functions). Eventually a convention was designed, and soon after, computers were equipped with FPUs.
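
A sketch of that older integer-based style in modern C# terms (an illustrative fixed-point scheme, not any particular historical format):

    // Valid as a top-level-statements program (.NET 6+).
    long cents = 1050;                   // 10.50 stored as an integer count of cents
    long taxedCents = cents * 108 / 100; // 8% tax in pure integer math: 1134, i.e. 11.34
    System.Console.WriteLine(taxedCents);
    // No binary rounding occurs, but the programmer must manage the
    // scaling and truncation by hand.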

Now, whether to use the FPU for calculations or to use other mathematical libraries/programs (such as Mathematica) depends on the problem. For example, calculating vertices in a 3D environment favors performance over precision, but accounting software is different. In that respect the two problems differ; accounting software will not need to calculate complex numbers millions of times per second :) (edit: or if it does, some very expensive hardware will probably be part of the equation too!)

If you know that you'll be doing Math.Pow(Math.Sqrt(2), 2), then you should rethink the way you store both values (rather than recalculating them every time). This is not a problem with the programming language, but rather a conceptual one.

C# and most other languages (except for special-purpose languages such as Matlab) store floating-point numbers in fixed-size fields (4 or 8 bytes), which leads to inaccuracy.

I don't think this is a C# problem. C# is a general-purpose language and gives you basic data types to play with. If you are not happy with them, you are always free to create your own.

Moreover, it isn't C# that accepts the inaccuracy; the programmer does. For a large set of problems, inaccuracy is acceptable. Float shouldn't be used when an exact result is expected, but that is a decision for the programmer, not for the language designer.

One reason is that numbers and number formats are unambiguous and universal. Yes, there are rounding errors, but they are consistent and predictable. Trying to design a general format that suits every algorithmic problem is not trivial.

There's a bit of an explanation here for Mathematica.

The short version is that for regular, day-to-day floating-point math, the hardware can do it quickly with a known amount of inaccuracy. So if your calculation doesn't rely on more precision than that, do it the quick way.

If you do need the precision, then the programmer has to write the algorithm to the degree of precision required, which will be slower.
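
A common middle ground in everyday C# is to compare within a tolerance rather than demand exactness; a sketch, with the tolerance 1e-9 chosen arbitrarily for illustration:

    using System;

    class Tolerance
    {
        // Relative-plus-absolute tolerance comparison (an illustrative convention,
        // not a universal standard; pick eps to suit your problem's error budget).
        static bool NearlyEqual(double a, double b, double eps = 1e-9) =>
            Math.Abs(a - b) <= eps * Math.Max(1.0, Math.Max(Math.Abs(a), Math.Abs(b)));

        static void Main()
        {
            double r = Math.Pow(Math.Sqrt(2), 2);
            Console.WriteLine(r == 2.0);            // False
            Console.WriteLine(NearlyEqual(r, 2.0)); // True
        }
    }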
