
How do I convert a Python float to a hexadecimal string in Python 2.5? Nonworking solution attached

What I really need to do is to export a floating point number to C with no precision loss.

I did this in Python:

import math
import struct
x = math.sqrt(2)
print struct.unpack('ii', struct.pack('d', x))
# prints (1719614413, 1073127582)
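For reference, those two ints are the low and high 32-bit halves of the double's IEEE-754 bit pattern on a little-endian machine. The sketch below, which assumes little-endian and uses explicit `'<'` byte-order markers, checks that reassembling the halves reproduces the value exactly:

```python
import math
import struct

x = math.sqrt(2)
# explicit little-endian unpack: the FIRST unsigned int is the LOW half
lo, hi = struct.unpack('<II', struct.pack('<d', x))
assert (lo, hi) == (1719614413, 1073127582)

# the high half must go in the high 32 bits of the 64-bit pattern
bits = (hi << 32) | lo
assert struct.unpack('<d', struct.pack('<Q', bits))[0] == x
```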

And in C I try this:

#include <math.h>
#include <stdio.h>

int main(void)
{
  unsigned long long x[2] = {1719614413, 1073127582};
  long long lx;
  double xf;

  lx = (x[0] << 32) | x[1];
  xf = (double)lx;
  printf("%lf\n", xf);
  return 0;
}

But in C I get:

7385687666638364672.000000 and not sqrt(2).

What am I missing?

Thanks.

The Python code appears to work, but there are two problems in the C code. First, the word order: on a little-endian machine the first int from `struct.unpack` is the *low* 32 bits, so `(x[0] << 32) | x[1]` assembles the halves backwards. Second, and more fundamentally, you convert the integer *value* directly to floating point rather than reinterpreting its *bytes* as a double. If you throw some pointers/addressing at it (which picks up the memory order directly), it works:

jkugelman$ cat float.c
#include <stdio.h>

int main(void)
{
    unsigned long x[2] = {1719614413, 1073127582};
    double d = *(double *) x;

    printf("%f\n", d);
    return 0;
}
jkugelman$ gcc -o float float.c 
jkugelman$ ./float 
1.414214

Notice also that the printf format specifier for double (and for float, which is promoted in variadic calls) is %f, not %lf; the specifier for long double is %Lf. (C99 does accept %lf in printf as a synonym for %f, but the length modifier is only actually needed in scanf.)

If you're targeting a little-endian architecture,

>>> s = struct.pack('<d', x)
>>> ''.join('%.2x' % ord(c) for c in s)
'cd3b7f669ea0f63f'

if big-endian, use '>d' instead of '<d'. In either case, this gives you a hex string as asked for in the question title, and of course C code can interpret it; I'm not sure what those two ints have to do with a "hex string".
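For completeness, a sketch of the full round trip through the hex string, using binascii (available in Python 2.5; the syntax below also happens to run under Python 3):

```python
import binascii
import math
import struct

x = math.sqrt(2)
s = struct.pack('<d', x)                      # 8 bytes, little-endian
hexstr = binascii.hexlify(s).decode('ascii')  # 'cd3b7f669ea0f63f'
# round-trip back to the original float with no precision loss
y = struct.unpack('<d', binascii.unhexlify(hexstr))[0]
assert y == x
```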

repr() is your friend.

C:\junk\es2>type es2.c
#include <stdio.h>
#include <math.h>
#include <assert.h>

int main(int argc, char** argv) {
    double expected, actual;
    int nconv;
    expected = sqrt(2.0);
    printf("expected: %20.17g\n", expected);
    actual = -666.666;
    nconv = scanf("%lf", &actual);
    assert(nconv == 1);
    printf("actual:   %20.17g\n", actual);
    assert(actual == expected);
    return 0;
    }


C:\junk\es2>gcc es2.c

C:\junk\es2>\python26\python -c "import math; print repr(math.sqrt(2.0))" | a
expected:   1.4142135623730951
actual:     1.4142135623730951

C:\junk\es2>
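The reason this works: repr() on a float emits enough significant digits that converting the string back yields the exact same double, so decimal text is a lossless transport format here. A quick check of that round-trip property:

```python
import math

x = math.sqrt(2.0)
# repr() guarantees enough digits that float() recovers the identical value
assert float(repr(x)) == x
```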
