
Floating point representation

What determines the representation of floating point numbers in memory? Is it the compiler or the FPU? If the representation depends on the FPU, how does the compiler store constants such as 1.337f in a binary file? Is there perhaps some unpacking of floating point values when the application starts? I have long been interested in this question because I do network programming.

The C and C++ standards do not require any particular floating point representation, although recent standards have included some specific support for IEEE (i.e. facilities that are available if the implementation uses IEEE floating point). For a particular implementation (a.k.a. toolchain), the representation of floating point depends on the host system and, to some extent, on decisions by the compiler vendor.
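For example, a program can ask whether its toolchain uses the IEEE format via std::numeric_limits; this is a minimal C++ sketch of that facility, and on most desktop and server toolchains it reports true (C has the analogous __STDC_IEC_559__ predefined macro):

    #include <iostream>
    #include <limits>

    int main() {
        // is_iec559 is true when the type uses the IEC 559 (IEEE 754) format.
        std::cout << std::boolalpha
                  << "float  is IEEE 754: " << std::numeric_limits<float>::is_iec559  << '\n'
                  << "double is IEEE 754: " << std::numeric_limits<double>::is_iec559 << '\n';
    }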

For older microprocessors (and other processing hardware such as microcontrollers, digital signal processors [DSPs], etc.), the implementation is often in hardware - for example, a set of specialised electronic circuits that implement registers representing floating point values, and circuits that perform operations on such registers.

In modern processing hardware (microcontrollers, DSPs, graphics processing units, floating point units, etc.) the implementation is in microcode - over-simplistically, a layer of hardware instructions that implement machine code instructions and a state machine (a basis for how the processor appears to work, as far as programs and operating systems are concerned). So higher-level instruction sets (x86, etc.) are used by executables and operating systems, and microcode is the intermediary between the operating system and the hardware (which often implements a very simple set of instructions). The term "modern" in this description is relative - the first microcode-based processors date from the 1970s.

Historically, processing hardware has implemented floating point in a wide variety of ways - some proprietary, and some standardised. A number of processors have supported multiple distinct representations. In some cases, software layers have emulated floating point on top of hardware that does not support floating point at all. Most compilers will use hardware-supplied floating point if available (and some compilers have options to select different floating point representations, reflecting their target platforms), but compilers targeting hardware with no floating point support literally emulate the representations and operations in software.

The IEEE floating point specification (first version released in 1985, most recent version IEEE 754-2008), which has been adopted as the international standard ISO/IEC/IEEE 60559:2011, defines a number of things, including arithmetic formats (how values, infinities, NaNs, etc. are represented in floating point variables), interchange formats (encodings for exchange of floating point values between systems), operations (for arithmetic, etc.), rounding rules during operations, and exception handling (dealing with things like division by zero). The IEEE specification has evolved over time, and is becoming increasingly common in modern hardware and software.

What determines the representation of floating point numbers in memory?

The compiler determines which representation it uses. But if it targets an FPU, then it must use the representation used by that FPU.

How does the compiler store constants such as 1.337f in a binary file?

Typically, in the same binary representation as it uses in memory.
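You can observe this yourself: reinterpreting the bytes of the constant shows the same bit pattern the compiler emitted into the binary. A small C++ sketch, assuming IEEE 754 single precision:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        float f = 1.337f;                     // constant encoded by the compiler into the binary
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);  // well-defined way to view the object representation
        std::printf("1.337f as raw bits: 0x%08X\n", (unsigned)bits);  // 0x3FAB22D1 on IEEE 754 systems
    }

Disassembling the resulting executable typically shows that same 32-bit pattern stored as a literal; nothing needs to be unpacked when the program starts.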

The representation is determined by the FPU, which adheres to a standard that the compiler supports.

The current standard is IEEE 754. It describes how floating-point data should be represented and how computations on it should behave (see this article for a detailed description).

The data is always represented by a fixed number of bits, such as 32-bit, 64-bit, 128-bit, or 80-bit (a.k.a. x86 extended precision). In memory they are all just bits. Within that fixed width, each group of bits represents a component of the floating-point value: the most significant bit (depending on the endianness) is the sign, another group of bits is the exponent, and another the significand.
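As an illustration (a minimal sketch, assuming 32-bit IEEE 754 single precision), the three fields can be pulled apart with plain bit operations:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Split an IEEE 754 single-precision value into its three bit fields.
    void dump_fields(float f) {
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);
        unsigned sign     = bits >> 31;            //  1 bit
        unsigned exponent = (bits >> 23) & 0xFF;   //  8 bits, stored with a bias of 127
        unsigned fraction = bits & 0x7FFFFF;       // 23 bits of the significand (leading 1 is implicit)
        std::printf("%g -> sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
                    f, sign, exponent, (int)exponent - 127, fraction);
    }

    int main() {
        dump_fields(1.337f);
        dump_fields(-2.0f);
        dump_fields(0.5f);
    }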

Then, the compiler's support for the standard (IEEE 754) generates code specific to that representation.

So, user2079303's answer is right: what determines the representation used by your code is the compiler, which targets the standard; but it would not work if the standard were not in charge.

EDIT: Peter's answer is quite detailed and covers many other cases.
