
Why is mpfr_printf different from printf for hex floats (%a conversion specifier)?

I'm comparing values from regular floating-point arithmetic against a high-precision MPFR number used as a baseline. When printing, I'm confused about why the following code produces different output.

The MPFR documentation says the output specifiers are:

'a' 'A' hex float, C99 style

So I would assume it prints the same as printf. However, this is not the case:

#include <boost/multiprecision/mpfr.hpp>
#include <mpfr.h>

using namespace boost::multiprecision;

int main(int argc, char* argv[])
{
    double               a = 254.5;
    mpfr_float_1000 mpfr_a = 254.5;

    mpfr_t t;
    mpfr_init2(t, 3324); // 3324 bits, roughly 1000 decimal digits of precision
    mpfr_set_d(t, a, MPFR_RNDN);

    printf("double:\t%a\n", a);
    mpfr_printf("mpfr_t:\t%RNa\n", t);
    mpfr_printf("boost:\t%RNa\n", mpfr_a);

    mpfr_clear(t);
    return 0;
}

Gives output:

double: 0x1.fdp+7
mpfr_t: 0xf.e8p+4
boost:  0xf.e8p+4

It's not a huge deal, because sscanf parses both of them as the same value, but I couldn't locate any documentation explaining why they differ.

Which one is canonical?

There is no canonical form. The C standard does not regulate the first digit except that it must be non-zero for normal numbers. C 2018 7.21.6.1 8 says, of the a and A conversion specifiers:

A double argument representing a floating-point number is converted in the style [-]0xh.hhhhp±d , where there is one hexadecimal digit (which is nonzero if the argument is a normalized floating-point number and is otherwise unspecified) before the decimal-point character and the number of hexadecimal digits after it is equal to the precision;…

To complement the answer about the absence of a canonical form, here are the various choices implementations have made.

For GNU libc, stdio-common/printf_fphex.c contains:

  /* We have 52 bits of mantissa plus one implicit digit.  Since
     52 bits are representable without rest using hexadecimal
     digits we use only the implicit digits for the number before
     the decimal point.  */

But note that not all formats use an implicit bit: the x87 extended-precision format (long double on x86) stores its integer bit explicitly. So this rule can yield inconsistent leading digits between double and long double.

For the printf POSIX utility, tests of printf "%a\n" 256 that I did in 2007 showed three different results:

  • 0x1p+8 with the printf builtin of bash 3.1.17 under Linux/x86 and bash 2.05b.0 under Mac OS X (PPC) 10.4.11;
  • 0x2p+7 with the printf from coreutils 6.9 under Mac OS X (PPC) 10.4.11;
  • 0x8p+5 with the printf from coreutils 5.97 under Linux/x86.

Concerning GNU MPFR, the initial choice was to make the first digit as large as possible, i.e., between 8 and 15 (= 0xF), which yields the shortest possible significand string. But it seems that this was changed so that the exponent is a multiple of 4, except when the precision field is 0. I don't know why...
