
gcc rounding difference between versions

I'm looking into why a test case is failing.

The problematic test can be reduced to computing (4.0/9.0) ** (1.0/2.6), rounding this to 6 digits and checking against a known value (as a string):

#include <stdio.h>
#include <math.h>

int main() {
    printf("%.06f\n", powf(4.0/9.0, (1.0/2.6)));
}

If I compile and run this with gcc 4.1.2 on Linux, I get:

0.732057

Python agrees, as does Wolfram|Alpha:

$ python2.7 -c 'print "%.06f" % (4.0/9.0)**(1/2.6)'
0.732057

However, I get the following result with gcc 4.4.0 on Linux, and 4.2.1 on OS X:

0.732058

A double behaves identically (although I didn't test this extensively).

I'm not sure how to narrow this down any further. Is this a gcc regression? A change in rounding algorithm? Me doing something silly?

Edit: Printing the result to 12 digits, the digit in the 7th place is 4 vs 5, which explains the rounding difference but not the difference in value:

gcc 4.1.2:

0.732057452202

gcc 4.4.0:

0.732057511806
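
For reference, both results can be reproduced in one program; a minimal sketch (the exact digits depend on the compiler and libm):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Print both results to 12 digits to see where they diverge. */
    printf("float : %.12f\n", powf(4.0f/9.0f, 1.0f/2.6f));
    printf("double: %.12f\n", pow(4.0/9.0, 1.0/2.6));
    return 0;
}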

Here's the gcc -S output from both versions: https://gist.github.com/1588729

Recent gcc versions are able to use MPFR to do compile-time floating-point computation. My guess is that your recent gcc does that and uses a higher precision for the compile-time version. This is allowed by at least the C99 standard (I haven't checked whether other standards modified it):

6.3.1.8/2 in C99:

The values of floating operands and of the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby.
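
One way to test that hypothesis is to hide the operands behind volatile, which stops gcc from folding the call at compile time; a minimal sketch (not from the original answer):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Reads of volatile objects cannot be constant-folded, so this
       powf call has to happen at run time, in the math library. */
    volatile float x = 4.0f/9.0f;
    volatile float y = 1.0f/2.6f;
    printf("run time    : %.12f\n", powf(x, y));
    /* This call may be evaluated at compile time (possibly via MPFR). */
    printf("compile time: %.12f\n", powf(4.0f/9.0f, 1.0f/2.6f));
    return 0;
}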

Edit: your gcc -S results confirm that. I haven't checked the computations, but the old one has (after substituting the constants' contents for the memory references):

movss 1053092943, %xmm1   # bit pattern of 1.0/2.6 as float
movss 1055100473, %xmm0   # bit pattern of 4.0/9.0 as float
call powf

i.e. calling powf with the precomputed values of 4/9.0 and 1/2.6 and then printing the result after promotion to double, while the new one just prints the float 0x3f3b681f promoted to double.
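
Those integer constants can be decoded by copying them into a float; a quick sketch, assuming 32-bit IEEE-754 floats:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static float bits_to_float(uint32_t u) {
    float f;
    memcpy(&f, &u, sizeof f); /* reinterpret the bit pattern as a float */
    return f;
}

int main(void) {
    printf("%.9g\n", bits_to_float(1055100473)); /* 4.0/9.0  (0x3EE38E39) */
    printf("%.9g\n", bits_to_float(1053092943)); /* 1.0/2.6  (0x3EC4EC4F) */
    printf("%.9g\n", bits_to_float(0x3f3b681f)); /* the folded powf result */
    return 0;
}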

I think the old gcc used double under the hood. Doing the calculation in Haskell and printing the results to full precision, I get:

Prelude Text.FShow.RealFloat> FD ((4/9) ** (1/2.6))
0.73205748476369969512944635425810702145099639892578125
Prelude Text.FShow.RealFloat> FF ((4/9) ** (1/2.6))
0.732057511806488037109375

So the double result agrees with what gcc-4.1.2 produced, and the float result with what gcc-4.4.0 does. The results gcc-4.5.1 produces here for float and double respectively agree with the Haskell results.

As A Programmer cited, the compiler is allowed to use higher precision; the old gcc did so, the new one apparently doesn't.

There are many players here. Gcc is most probably just forwarding the calculation to your floating point processor; you can check the disassembly to verify that.

You can check the binary result against a reference bit pattern (from the same Wolfram|Alpha computation):

float q = powf(4.0/9.0, (1.0/2.6));
uint32_t hex;                           // a float has 32 bits; reading it through an unsigned long long would overrun it
memcpy(&hex, &q, sizeof hex);           // type-pun without violating strict aliasing
const uint32_t reference = 0x3f3b681f;  // the original 0x1f683b3f was this pattern with its bytes reversed
assert(hex == reference);

But printf is also a possible culprit: the decimal representation of that number may be the problem, too. You could try writing printf("%0.06f", 0.73205748); to test that.
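
A runnable version of that check (the literal is just the result rounded to 8 digits):

#include <stdio.h>

int main(void) {
    /* If this prints 0.732057 under both compilers, printf's decimal
       rounding is not what changed. */
    printf("%0.06f\n", 0.73205748);
    return 0;
}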

You should be able to distinguish between the format rounding differently and the math giving a different answer, just by printing more (all) significant digits.

If the values look the same when no rounding takes place, printf("%0.6f") is just rounding differently.
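
A sketch of that full-precision check: 9 significant digits are enough to round-trip any IEEE-754 float, and 17 are enough for any double.

#include <stdio.h>
#include <math.h>

int main(void) {
    /* With this many digits, any remaining difference is a real value
       difference, not a formatting artifact. */
    printf("float : %.9g\n",  powf(4.0f/9.0f, 1.0f/2.6f));
    printf("double: %.17g\n", pow(4.0/9.0, 1.0/2.6));
    return 0;
}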


OK, with the old Linux + Python environment I have to hand, I get:

Python 2.4.3 (#1, Jun 11 2009, 14:09:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> (4.0/9.0)**(1.0/2.6)
0.7320574847636997

which is different again.

Maybe it would be simpler to ask instead: how many significant figures are actually significant for this unit test?
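
If only 6 figures matter, one alternative to comparing rounded strings is a small numeric tolerance; a hypothetical sketch:

#include <stdio.h>
#include <math.h>

int main(void) {
    double got = pow(4.0/9.0, 1.0/2.6);
    double expected = 0.732057; /* the known value from the test */
    /* Allow one unit in the 6th decimal place, so both the float-based
       and the double-based results pass. */
    printf("%s\n", fabs(got - expected) <= 1e-6 ? "pass" : "fail");
    return 0;
}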
