
What data type should I use to represent money in C#?

In C#, what data type should I use to represent monetary amounts? Decimal? Float? Double? I want to take into consideration precision, rounding, etc.

Use System.Decimal:

The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.

Neither System.Single (float) nor System.Double (double) is precise enough for this: both are binary floating-point types and cannot represent most decimal fractions exactly, so they introduce rounding errors.
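To illustrate both points, here is a minimal sketch (a standalone console program, assumed for illustration) showing that decimal handles base-10 fractions exactly where double does not, and that you still have to pick a rounding rule: Math.Round on a decimal uses banker's rounding (round half to even) by default.

using System;

class DecimalRoundingDemo
{
    static void Main()
    {
        // decimal stores 0.1 and 0.2 exactly; double cannot.
        Console.WriteLine(0.1m + 0.2m == 0.3m);   // True
        Console.WriteLine(0.1 + 0.2 == 0.3);      // False with double

        // decimal still needs an explicit rounding rule.
        Console.WriteLine(Math.Round(2.5m));      // 2 (round half to even)
        Console.WriteLine(Math.Round(2.5m, 0, MidpointRounding.AwayFromZero)); // 3
    }
}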

Use decimal in C# and the money (or decimal) type in the database if you're using SQL Server.
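As a hedged sketch of that mapping (the connection string, table, and column names here are made up for illustration), a C# decimal can be written to a SQL Server money column through an ADO.NET parameter typed as SqlDbType.Money:

using System.Data;
using System.Data.SqlClient; // or Microsoft.Data.SqlClient on newer stacks

class SaveTotal
{
    static void Main()
    {
        decimal total = 119.97m;

        using (var conn = new SqlConnection("Server=.;Database=Shop;Integrated Security=true"))
        using (var cmd = new SqlCommand("INSERT INTO Orders (Total) VALUES (@Total)", conn))
        {
            // SqlDbType.Money maps the decimal to a money column;
            // use SqlDbType.Decimal (with Precision/Scale) for a decimal(p,s) column.
            cmd.Parameters.Add("@Total", SqlDbType.Money).Value = total;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}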

In C#, the Decimal type is actually a struct with overloaded operators for all math and comparison operations that work in base 10, so it suffers far less from significant rounding errors. A float (and a double), on the other hand, is akin to scientific notation in binary. As a result, Decimal is more accurate when you know the precision you need.

Run this to see the difference in accuracy between the two:

using System;

namespace FloatVsDecimal
{
    class Program
    {
        static void Main(string[] args)
        {
            // Start both at 1.0 and repeatedly add 0.1.
            // decimal stays exact in base 10; float accumulates binary rounding error.
            decimal dec = 1.0m;
            float flt = 1.0f;
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine("float: {0}, decimal: {1}",
                                  flt.ToString("e10"),
                                  dec.ToString("e10"));
                dec += 0.1m;
                flt += 0.1f;
            }
            Console.ReadKey();
        }
    }
}

Decimal is the one you want.

Consider using the Money Type for the CLR. It is a custom value type (struct) that also supports currencies and handles rounding issues.

In C#, you should use decimal to represent monetary amounts.

For something quick and dirty, any of the floating point primitive types will do.

The problem with float and double is that neither of them can represent 1/10 accurately, occasionally resulting in surprising trillionths of a cent. You've probably heard of the infamous 10¢ + 20¢ example (0.1 + 0.2 is not exactly 0.3 in binary floating point). More realistically, try calculating a 6% sales tax on three items valued at $39.99 each pre-tax, as sketched below.
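Here is a rough sketch of that sales-tax calculation (a standalone console program, not from the original answer). The double result typically carries a small binary representation error, while the decimal result is exact in base 10:

using System;

class SalesTaxDemo
{
    static void Main()
    {
        // 6% sales tax on three items at $39.99 each.
        double taxDouble = 39.99 * 3 * 0.06;
        decimal taxDecimal = 39.99m * 3 * 0.06m;

        Console.WriteLine(taxDouble.ToString("R")); // often slightly off from 7.1982
        Console.WriteLine(taxDecimal);              // 7.1982 exactly
        Console.WriteLine(Math.Round(taxDecimal, 2, MidpointRounding.AwayFromZero)); // 7.20
    }
}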

Also, float and double have values like negative infinity and NaN that are of no use whatsoever for representing money. So decimal, which can represent 1/10 precisely, would seem to be the best choice.

However, decimal doesn't carry any information about what currency we're dealing with. Does the amount $29.89, for example, equal €29.89? Is $29.89 > €29.89? How do I make sure these amounts are displayed with the correct currency symbols?

If these sorts of details matter for your program, then you should either use a third-party library or create your own CurrencyAmount class (or whatever you want to call it).
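For what that might look like, here is a minimal sketch of such a type (the name CurrencyAmount and its members are illustrative, not taken from any particular library): it pairs a decimal amount with a currency code and refuses to add amounts in different currencies.

using System;

public readonly struct CurrencyAmount
{
    public decimal Amount { get; }
    public string Currency { get; }   // e.g. "USD", "EUR"

    public CurrencyAmount(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency ?? throw new ArgumentNullException(nameof(currency));
    }

    public static CurrencyAmount operator +(CurrencyAmount a, CurrencyAmount b)
    {
        // Adding dollars to euros is a bug, so fail loudly instead of guessing.
        if (a.Currency != b.Currency)
            throw new InvalidOperationException("Cannot add amounts in different currencies.");
        return new CurrencyAmount(a.Amount + b.Amount, a.Currency);
    }

    public override string ToString() => $"{Amount:N2} {Currency}";
}

class CurrencyDemo
{
    static void Main()
    {
        var usd = new CurrencyAmount(29.89m, "USD");
        var eur = new CurrencyAmount(29.89m, "EUR");
        Console.WriteLine(usd);                          // 29.89 USD
        Console.WriteLine(usd.Currency == eur.Currency); // False: different currencies
    }
}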

But if that sort of thing doesn't matter to your program, you can just use a floating-point type, or maybe even integers (e.g., my blackjack implementation in Java asks the player to enter a wager in whole dollars). A small sketch of the integer approach follows.
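This sketch (variable names made up for illustration) stores money as a whole number of the smallest unit, cents in a long, and only converts to decimal when formatting for display:

using System;

class IntegerCentsDemo
{
    static void Main()
    {
        long balanceCents = 10000;   // $100.00 bankroll
        long wagerCents = 2500;      // a $25 whole-dollar wager

        balanceCents -= wagerCents;

        // Convert to decimal only when formatting for display.
        Console.WriteLine((balanceCents / 100m).ToString("C")); // e.g. $75.00
    }
}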
