
Can floating-point precision be thread-dependent?

I have a small struct-based 3D vector class in C# 3.0 that uses double as its basic unit.

An example: One vector's y-value is

-20.0 exactly

I subtract a vector with a y-value of

10.094999999999965

The value for y I would expect is

-30.094999999999963         (1)

Instead I get

-30.094999313354492         (2)

When I do the whole computation in a single thread, I get (1). The debugger and the VS quick watch also return (1). But when I run a few iterations on one thread and then call the function from a different thread, the result is (2). Now the debugger returns (2) as well!

We have to keep in mind that the .NET JIT may spill intermediate values back to memory (as Jon Skeet notes on his website), which reduces the precision from 80 bits (x87 FPU) to 64 bits (double). However, (2) is far less accurate than even that: it looks like 32-bit (float) precision.
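As a quick sanity check (a minimal sketch; the literals are copied from the values above), rounding the exact double difference through float and widening it back reproduces (2), which points at the FPU running in single-precision mode on that thread:

using System;

class PrecisionCheck
{
    static void Main()
    {
        double a = -20.0;
        double b = 10.094999999999965;

        double full = a - b;              // result (1), computed in 64-bit double
        double narrowed = (float)(a - b); // round to 32-bit float, widen back to double

        Console.WriteLine(full.ToString("R"));     // -30.094999999999963
        Console.WriteLine(narrowed.ToString("R")); // -30.094999313354492 -> result (2)
    }
}

If narrowed matches what the second thread produces, that thread's FPU precision has almost certainly been changed.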

The vector struct basically looks like this:

public struct Vector3d
{
  private readonly double _x, _y, _z;
  ...

  // Component-wise subtraction in plain double arithmetic; nothing here
  // should lose precision beyond normal 64-bit IEEE 754 rounding.
  public static Vector3d operator -(Vector3d v1, Vector3d v2)
  {
    return new Vector3d(v1._x - v2._x, v1._y - v2._y, v1._z - v2._z);
  }
}

The computation is as easy as this

Vector3d pos41 = pos4 - pos1;
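To rule out debugger display rounding, a hedged diagnostic (Y is an assumed accessor for _y; it isn't shown in the struct above) is to print the round-trip value and the raw bits of the result on each thread:

Vector3d pos41 = pos4 - pos1;
// "Y" is an assumed property exposing _y; adjust to your accessor.
Console.WriteLine(pos41.Y.ToString("R"));                                // exact round-trip value
Console.WriteLine("0x{0:X16}", BitConverter.DoubleToInt64Bits(pos41.Y)); // raw IEEE 754 bits

If the bit patterns differ between threads, the discrepancy is real and not a display artifact.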

Yes, I believe the result can be thread-dependent.

My guess is you're using DirectX at some point in your code, and that sets the precision of the FPU; I believe it sets it on a per-thread basis.

To fix this, use the D3DCREATE_FPU_PRESERVE flag when you call CreateDevice. Note that this will potentially have a performance impact. The managed equivalent is CreateFlags.FpuPreserve.
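For illustration, a hedged sketch of what the managed call site might look like (Managed DirectX-style names assumed; the exact Device constructor overload varies by wrapper, and renderControl stands for your form or control):

using Microsoft.DirectX.Direct3D;

// Sketch only: passing FpuPreserve asks D3D9 to leave the calling thread's
// FPU in double precision instead of switching it to single precision.
PresentParameters pp = new PresentParameters();
pp.Windowed = true;
pp.SwapEffect = SwapEffect.Discard;

Device device = new Device(0, DeviceType.Hardware, renderControl,
                           CreateFlags.HardwareVertexProcessing | CreateFlags.FpuPreserve,
                           pp);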

(See this related question. I haven't suggested that this one be closed as a duplicate, as they at least look a bit different on the surface. Having both should help the answer to be discoverable.)
