Simpson's Composite Rule giving too large values when n is very large

Using Simpson's Composite Rule to calculate the integral of 1/ln(x) from 2 to 1,000; however, when using a large n (usually around 500,000), I start to get results that differ from the value my calculator and other sources give me (176.5644). For example, when n = 10,000,000, it gives me a value of 184.1495. I'm wondering why this is, since as n gets larger, the accuracy is supposed to increase, not decrease.

#include <iostream>
#include <cmath>

// the function f(x)
float f(float x)
{
    return (float) 1 / std::log(x);
}




float my_simpson(float a, float b, long int n)
{
  if (n % 2 == 1) n += 1; // since n has to be even
  float area, h = (b-a)/n;


  float x, y, z;
  for (int i = 1; i <= n/2; i++)
  {
    x = a + (2*i - 2)*h;
    y = a + (2*i - 1)*h;
    z = a + 2*i*h;
    area += f(x) + 4*f(y) + f(z);
  }

  return area*h/3;
}



int main()
{
    std::cout.precision(20);
    int upperBound = 1'000;
    int subsplits = 1'000'000;

    float approx = my_simpson(2, upperBound, subsplits);

    std::cout << "Output: " << approx << std::endl;

    return 0;
}

Update: Switched from floats to doubles and it works much better now! Thank you!

Unlike a real (in the mathematical sense) number, a float has limited precision.

A typical IEEE 754 32-bit (single precision) floating-point binary representation dedicates only 24 bits (one of which is implicit) to the mantissa, which translates to roughly fewer than 8 significant decimal digits (please take this as a gross simplification).

A double, on the other hand, has 53 significand bits, making it more accurate and (usually) the first choice for numerical computations these days.

since as n gets larger, the accuracy is supposed to increase and not decrease.

Unfortunately, that's not how it works. There's a sweet spot, but after that the accumulation of rounding errors prevails and the results diverge from their expected values.

In OP's case, this calculation

area += f(x) + 4*f(y) + f(z);

introduces (and accumulates) rounding errors, due to the fact that area becomes much greater than f(x) + 4*f(y) + f(z) (e.g. 224678.937 vs. 0.3606823). The bigger n is, the sooner this becomes relevant, making the result diverge from the real one.

As mentioned in the comments, another issue (undefined behavior) is that area isn't initialized (to zero).
