
Best practices for dealing with monetary amounts in C#

I have read a couple of articles about the perils of certain data types being used to store monetary amounts. Unfortunately, some of the concepts are not in my comfort zone.

Having read these articles, what are the best practices and recommendations for working with money in C#? Should I use a certain data type for small amounts and a different one for larger amounts? Also, I am based in the UK, which means we use a comma as the thousands separator (e.g. £4,000), whereas other cultures represent the same amount differently.

Decimal is the most sensible type for monetary amounts.

Decimal is a base-10 floating point numeric type with 28-29 significant decimal digits of precision. Using Decimal, you will have fewer surprises than you will using the base-2 Double type.

Double uses half as much memory as Decimal, and Double is much faster because many common floating point operations are implemented directly in CPU hardware, but it cannot represent most base-10 fractions (such as 1.05) exactly and offers only about 15-17 significant decimal digits of precision. Double does have the advantage of greater range (it can represent much larger and much smaller numbers), which can come in handy for some computations, particularly statistical ones.
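For example, repeatedly adding 0.1 shows the difference. This is a minimal illustrative snippet, not code from the question:

    // Illustrative only: adding 0.1 ten times. Base-2 doubles accumulate a tiny
    // error because 0.1 has no exact binary representation; base-10 decimals do not.
    using System;

    class FractionExample
    {
        static void Main()
        {
            double dSum = 0.0;
            decimal mSum = 0.0m;

            for (int i = 0; i < 10; i++)
            {
                dSum += 0.1;
                mSum += 0.1m;
            }

            Console.WriteLine(dSum == 1.0);  // False (dSum is 0.9999999999999999)
            Console.WriteLine(mSum == 1.0m); // True
        }
    }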

One answer to your question states that Decimal is fixed point with 4 decimal digits. This is not the case. If you doubt this, notice that the following line of code yields 0.0000000001:

Console.WriteLine("number={0}", 1m / 10000000000m);

Having said all of that, it is interesting to note that the most widely used software in the world for working with monetary amounts, Microsoft Excel, uses doubles. Of course, they have to jump through a lot of hoops to make it work well, and it still leaves something to be desired. Try these two formulas in Excel:

  • =1-0.9-0.1
  • =(1-0.9-0.1)

The first yields 0, the second yields ~-2.77e-17. Excel actually massages the results of addition and subtraction in some cases, but not in all of them.
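The same formula is easy to reproduce in C# (an illustrative snippet): with doubles you get the tiny non-zero residue, with decimals you get exactly zero.

    // Illustrative only: the same arithmetic as the Excel formulas above.
    using System;

    class ExcelStyleExample
    {
        static void Main()
        {
            Console.WriteLine(1.0 - 0.9 - 0.1); // roughly -2.776E-17 (exact text depends on runtime)
            Console.WriteLine(1m - 0.9m - 0.1m); // 0.0
        }
    }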

You should not use binary floating point types (float or double) because of rounding errors. The decimal type should suit you.

Martin Fowler recommends using a Money class. See the link for the rationale. There are a number of implementations of his idea out there, or you could write your own. Fowler's own implementation is in Java, so he uses a class; the C# versions I have seen use a struct, which seems sensible.
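A minimal sketch of what such a value type might look like in C# is below; the Money name and its members are illustrative, not Fowler's implementation or any particular library's API.

    // Minimal sketch of a Money value type (illustrative only): a decimal
    // amount paired with a currency code, refusing to silently mix currencies.
    using System;

    public readonly struct Money : IEquatable<Money>
    {
        public decimal Amount { get; }
        public string Currency { get; }

        public Money(decimal amount, string currency)
        {
            Amount = amount;
            Currency = currency ?? throw new ArgumentNullException(nameof(currency));
        }

        public Money Add(Money other)
        {
            // Adding two different currencies is almost certainly a bug.
            if (other.Currency != Currency)
                throw new InvalidOperationException(
                    $"Cannot add {other.Currency} to {Currency}.");
            return new Money(Amount + other.Amount, Currency);
        }

        public bool Equals(Money other) =>
            Amount == other.Amount && Currency == other.Currency;

        public override string ToString() => $"{Amount} {Currency}";
    }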

I use a value object to hold both the amount (as a decimal) and the currency. This allows you to work with different currencies simultaneously. decimal is the recommended data type for money in .NET.

My recommendation would be to use Decimal, as others have recommended, if division is required. For a simple tally application, I would recommend an integer type. For both types, I would always work in the lowest monetary denomination (i.e. cents in Canada/the US).
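A sketch of the integer approach, assuming a hypothetical tally of prices held in pence (the names and amounts are made up for illustration):

    // Illustrative only: keep amounts as a count of the lowest denomination
    // (pence/cents) and convert only when formatting for display.
    using System;
    using System.Globalization;

    class TallyExample
    {
        static void Main()
        {
            long totalPence = 0;

            // Integer addition of prices in pence is exact.
            totalPence += 399;   // £3.99
            totalPence += 1250;  // £12.50
            totalPence += 75;    // £0.75

            // Convert to pounds only at the display boundary.
            decimal totalPounds = totalPence / 100m;
            Console.WriteLine(totalPounds.ToString("C", CultureInfo.GetCultureInfo("en-GB")));
            // Prints "£17.24"
        }
    }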

I do like the theory behind Fowler's Money class mentioned by @dangph.

Whatever you do, make sure that you understand how currency amounts are handled in each tier of your application.

I once spent a week tracking down a 1¢ error because SQL Server and .NET round currency values differently by default, and the application wasn't consistent about where certain calculations were done - sometimes they were in SQL, sometimes in .NET. Look into "bankers' rounding" if you're interested.
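.NET's Math.Round defaults to banker's rounding (MidpointRounding.ToEven), while SQL Server's ROUND sends midpoints away from zero, so it pays to make the rounding mode explicit on the .NET side. A small illustrative snippet:

    // Illustrative only: the default .NET rounding mode versus an explicit one.
    using System;

    class RoundingExample
    {
        static void Main()
        {
            decimal price = 2.345m;

            Console.WriteLine(Math.Round(price, 2));                                // 2.34 (half to even)
            Console.WriteLine(Math.Round(price, 2, MidpointRounding.AwayFromZero)); // 2.35
        }
    }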

There are also issues related to formatting currencies - not sure if you have to deal with non-UK amounts, other languages/cultures, etc., but that will add another level of complexity.

As you indicated in your question, aside from using an appropriate data type, how well your program handles currency conversions (parsing and formatting across cultures) is important. This issue, of course, is not exclusive to currency. Jeff Atwood wrote a great post summarizing the virtues of performing The Turkey Test.
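A short illustration of why the culture matters when formatting and parsing monetary strings (the cultures below are just examples):

    // Illustrative only: the same decimal amount formats and parses very
    // differently depending on the culture you use.
    using System;
    using System.Globalization;

    class CultureExample
    {
        static void Main()
        {
            decimal amount = 4000m;

            Console.WriteLine(amount.ToString("C", CultureInfo.GetCultureInfo("en-GB"))); // £4,000.00
            Console.WriteLine(amount.ToString("C", CultureInfo.GetCultureInfo("de-DE"))); // e.g. 4.000,00 €

            // Under de-DE the comma is the decimal separator, so "4,000" means four,
            // not four thousand - always be explicit about the culture you parse with.
            Console.WriteLine(decimal.Parse("4,000", CultureInfo.GetCultureInfo("en-GB"))); // 4000
            decimal deValue = decimal.Parse("4,000", CultureInfo.GetCultureInfo("de-DE"));
            Console.WriteLine(deValue == 4m); // True
        }
    }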
