
Data Annotations in .NET fail to validate a range specified for a decimal value

I'm attempting to perform some data validation using System.ComponentModel.DataAnnotations to validate that the offer price for an item falls within a range of $1 to $1,000,000. I've created a class called ItemPrice and decorated it with the following attributes:

    public class ItemPrice
    {
        [Required(ErrorMessage = "Name is required")]
        public string Name { get; set; }

        [Range(1.00, 1000000.00)]
        public decimal Price { get; set; }
    }

Later I try to validate an instance of this class where ItemPrice.Price is set to 0.0. The following code correctly detects when the Name value has been omitted, but never detects that a price of less than 1 has been entered. Can anyone tell me why the following code would fail to detect a price outside the range of 1 to 1,000,000?

    private void validateMessage(object message)
    {
        if (message == null)
        {
            // ArgumentNullException's first argument is the parameter name
            throw new ArgumentNullException(nameof(message));
        }

        var context = new ValidationContext(message, serviceProvider: null, items: null);
        var results = new List<ValidationResult>();

        var isValid = Validator.TryValidateObject(message, context, results);
        var sb = new StringBuilder();
        if (!isValid)
        {
            foreach (var validationResult in results)
            {
                Trace.WriteLine(validationResult.ErrorMessage);
                if (sb.Length > 0)
                    sb.Append("\n");
                sb.Append(validationResult.ErrorMessage);
            }
            Exception innerException = new Exception(sb.ToString());
            throw new ArgumentException("Invalid argument(s) in message", innerException);
        }
    }

System.Decimal is the CLR's ugly step-child. The CLR doesn't consider it a primary type, like Int32 et al. The most severe problem is that the CLI spec, the golden standard for the way the CLR needs to work, does not nail down the internal format of Decimal. It was left as an implementation detail.

There was considerable debate, at the time the CLI spec was written, about what Decimal should look like. The format we use today was defined long before .NET ever came around. But there was considerable background noise from the IEEE-754 standard, which also wanted to nail down a standard decimal format. That standard suffered from the classic problem of anybody trying to set a standard: it just added another one. And one that, after 15 years, everybody still ignores. Including the chip manufacturers, who need to go first to give everybody a good reason to adopt standard N+1.

This will not do; you cannot build a standard CLI on quicksand. So the CLR does not support attribute constructors that take Decimal arguments. Attribute argument values are encoded directly into the metadata, and having a binary standard for metadata is super duper hyper important.
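The restriction is visible directly in the compiler. As a hypothetical sketch (the attribute name here is made up for illustration), declaring an attribute constructor with a decimal parameter is rejected outright, while a double parameter is fine:

```csharp
using System;

// Hypothetical attribute, for illustration only: a constructor parameter of
// type decimal cannot be encoded in metadata, so the compiler rejects it.
public class DecimalBoundAttribute : Attribute
{
    // public DecimalBoundAttribute(decimal bound) { }  // error CS0181: not a valid attribute parameter type
    public DecimalBoundAttribute(double bound)          // OK: double can be stored in metadata
    {
        Bound = bound;
    }

    public double Bound { get; }
}
```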

The workaround is otherwise simple. Use an integral type instead: the decimal value multiplied by 100 works for reasonable money values. Use System.Double if you really, really have to, to get the range or the fractional digits. Be sure to be lenient; nobody likes to be off by 1E-15 and be reminded about it. You can simply convert to Decimal in the attribute constructor.
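One way to apply that conversion without writing a custom attribute is the built-in RangeAttribute(Type, string, string) constructor: the bounds are stored in metadata as strings (which are legal attribute arguments) and converted to decimal at validation time. A sketch of the ItemPrice class rewritten this way:

```csharp
using System.ComponentModel.DataAnnotations;

public class ItemPrice
{
    [Required(ErrorMessage = "Name is required")]
    public string Name { get; set; }

    // The bounds are passed as strings and converted to decimal
    // when the validator runs, sidestepping the metadata restriction.
    [Range(typeof(decimal), "1.00", "1000000.00")]
    public decimal Price { get; set; }
}
```

Note also that the Validator.TryValidateObject overload used in the question only evaluates [Required] on properties; pass validateAllProperties: true to the four-argument overload if you want [Range] and other property-level attributes checked as well.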
