
How can Color.FromArgb take Int32 as parameter?

The Color.FromArgb method takes an Int32 as a parameter. The value of Color.White is #FFFFFFFF as ARGB, which is 4,294,967,295 in decimal (way over int.MaxValue). What am I not understanding here? How can the method take an int as a parameter if valid ARGB values are above the maximum value of an int?

Unfortunately, since Color.FromArgb takes an int rather than a uint, you need the unchecked keyword for colors whose values are greater than int.MaxValue.

var white = Color.FromArgb(unchecked((int)0xFFFFFFFF));

Your confusion lies in signedness. Int32.MaxValue is equal to 2,147,483,647, but that is the signed maximum.

If you look at UInt32.MaxValue, that is unsigned, and as you can see, its maximum value is 4,294,967,295.

You see, signed numbers, in binary, use the leftmost bit to indicate whether the number is positive or negative. Unsigned numbers, in binary, don't have a sign bit and use that bit for the value instead, essentially doubling the positive range.
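
As a quick illustration (a minimal sketch; the variable names are just for this example), the very same 32 bits read as 4,294,967,295 when treated as a uint and as -1 when reinterpreted as an int:

uint allBitsSet = 0xFFFFFFFF;                          // 4,294,967,295 - all 32 bits set
int reinterpreted = unchecked((int)allBitsSet);        // -1, because the leftmost bit is the sign bit

Console.WriteLine(allBitsSet);                         // 4294967295
Console.WriteLine(reinterpreted);                      // -1
Console.WriteLine(Convert.ToString(reinterpreted, 2)); // 11111111111111111111111111111111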

I think part of the reason the Color class uses Int32 instead of an unsigned type is that unsigned ints aren't CLS-compliant, as stated in this SO question.

The practical problem is that you want to enter an eight-digit hexadecimal number, but because the single-parameter version uses an int rather than a uint, it is difficult to represent colours with an alpha value above 0x7F. This is because an int uses one bit to represent the sign.

The easiest solution is to use the four-parameter version:

var whiteColor = Color.FromArgb(0xFF, 0xFF, 0xFF, 0xFF);

The byte-ordering of the 32-bit ARGB value is AARRGGBB. The most significant byte (MSB), represented by AA, is the alpha component value. The second, third, and fourth bytes, represented by RR, GG, and BB, respectively, are the color components red, green, and blue, respectively.

http://msdn.microsoft.com/en-us/library/2zys7833(v=vs.110).aspx

It appears that the method interprets the Int32 as 32 bits laid out as AARRGGBB, which is two hexadecimal digits (one byte) for each of the parameters A, R, G, and B.

This works because each digit of FFFFFFFF in hexadecimal converts to a single nibble; each digit is exactly 4 bits. So the eight digits convert directly to 32 bits, which can be represented as a single Int32.

To give just a little more detail:

The maximum value of a hexadecimal digit is F (or 15 in decimal).

The maximum value of 4 bits (1 nibble) is 8 + 4 + 2 + 1, which is also 15.

So, FFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111 which is then represented as an int32 .

As @icemanind pointed out, the first bit is reserved for the sign (+ or -), which limits the maximum positive value of an Int32 to 2,147,483,647.

It's not the numeric value, but the bit values that are important for this method.
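
To make the AARRGGBB layout concrete, here is a minimal sketch (assuming using System.Drawing; the variable names are illustrative) that packs the four byte channels into an Int32 with bit shifts and lets Color.FromArgb slice them back out:

// Pack A, R, G, B (one byte each) into the AARRGGBB layout by shifting.
byte a = 0xFF, r = 0xFF, g = 0xFF, b = 0xFF;
int argb = (a << 24) | (r << 16) | (g << 8) | b;    // for white this is the bit pattern 0xFFFFFFFF, i.e. -1

Color white = Color.FromArgb(argb);                 // slices the same 32 bits back into channels
Console.WriteLine($"{white.A} {white.R} {white.G} {white.B}");  // 255 255 255 255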

According to the MSDN page for Color.FromArgb Method (Int32), you don't pass in the color's decimal value; you pass the packed ARGB bytes, usually written in hexadecimal. For example, to get a partially transparent red (alpha 0x78) you would call Color.FromArgb(0x78FF0000). So, for white, the value is Color.FromArgb(0xFFFFFFFF), though in C# that literal exceeds int.MaxValue and needs to be cast to int (for example with unchecked, as shown above).

A Color is made up of four important fields: A (alpha), R (red), G (green), and B (blue). Each of these is eight bits, and four eight-bit values fit exactly into an Int32. Although the MSB may be the sign bit, that is ignored here.

0xFFFFFFFF may be a negative number when expressed as an int , but it's white as a color.
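
For example (a small sketch based on the values mentioned above), 0x78FF0000 is below int.MaxValue so it compiles directly, while the white value needs the cast shown in the first answer:

Color translucentRed = Color.FromArgb(0x78FF0000);          // fits in an int, compiles as-is
Console.WriteLine($"{translucentRed.A:X2} {translucentRed.R:X2}"); // 78 FF

Color white = Color.FromArgb(unchecked((int)0xFFFFFFFF));   // 0xFFFFFFFF exceeds int.MaxValue, so the cast is needed
Console.WriteLine(white.ToArgb() == Color.White.ToArgb());  // True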

It doesn't matter.

#FFFFFFFF is 11111111111111111111111111111111 in binary.

In decimal it is 4,294,967,295 if you're using unsigned ints. If you're using signed ints, it is interpreted as -1.

But the actual decimal value doesn't matter; what matters is the value of the individual bits.

A signed int can still represent 4,294,967,296 distinct values (2^32), just half of them are negative. The bits are the same.
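
A tiny sketch of that point: -1 as an int and uint.MaxValue produce exactly the same bytes, only the interpretation differs.

byte[] fromSigned = BitConverter.GetBytes(-1);           // int overload
byte[] fromUnsigned = BitConverter.GetBytes(uint.MaxValue); // uint overload

Console.WriteLine(BitConverter.ToString(fromSigned));    // FF-FF-FF-FF
Console.WriteLine(BitConverter.ToString(fromUnsigned));  // FF-FF-FF-FF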

You can also write 0x00FFFFFF - 0x01000000 and the compiler will handle it fine.
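
A one-line sketch of that trick: both literals fit in an int, and their difference is -1, whose bit pattern is 0xFFFFFFFF.

var white = Color.FromArgb(0x00FFFFFF - 0x01000000);     // 16,777,215 - 16,777,216 = -1, i.e. bits 0xFFFFFFFF
Console.WriteLine(white.ToArgb() == Color.White.ToArgb()); // True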
