mbelew
I'm seeing a very strange behavior with double precision subtraction.
I'm using csUnit for testing. If I run the test by itself, the test
passes. When I run the batch of tests, the test fails. Here's the
test:
[Test]
public void GetAcuteAngleDifference_PI()
{
    double a = 0;
    double b = Math.PI;
    Assert.Equals(b, Euclid.GetAcuteAngleDifference(a, b));
}
/// <summary>
/// Get the smallest acute difference between 2 angles. A positive result
/// shows that b is clockwise of a, while a negative result shows that b is
/// counterclockwise of a.
/// </summary>
/// <param name="a">angle in radians</param>
/// <param name="b">angle in radians</param>
/// <returns>smallest acute angle in radians</returns>
public static double GetAcuteAngleDifference(double a, double b)
{
    a = SnapAngle(a); // place the angle between 0 and 2*PI
    b = SnapAngle(b);
    if (a == b) return 0;
    double result = b - a; ////////// This line gives me grief. //////////
    double absResult = Math.Abs(result);
    double direction = absResult / result;
    if (absResult > Math.PI) // the acute angle is in the opposite direction
        result = direction * (absResult - (2 * Math.PI));
    return result;
}
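SnapAngle itself isn't shown here; a minimal sketch, assuming all it does is normalize an angle into [0, 2*PI) as the comment above says (the class name AngleUtil is made up for illustration), might look like:

```csharp
using System;

static class AngleUtil
{
    // Normalize an angle in radians into the range [0, 2*PI).
    public static double SnapAngle(double angle)
    {
        double twoPi = 2 * Math.PI;
        angle = angle % twoPi; // remainder keeps the sign of the input
        if (angle < 0)
            angle += twoPi;    // shift negative remainders into [0, 2*PI)
        return angle;
    }
}
```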
The test is looking for the smallest acute angle between 0 and Pi. The
expected result is Pi.
Like I said, if I run the test by itself, it passes. The 'result' after
the (b - a) subtraction is essentially (Math.PI - 0), and it comes out
the same as Math.PI = 3.1415926535897931. However, in the batch run the
'result' variable is slightly bigger: 3.1415927410125732.
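For what it's worth, that stray value happens to match what Math.PI looks like after a round trip through a 32-bit float. I don't know if that's related, but it's easy to check:

```csharp
using System;

class PiFloatCheck
{
    static void Main()
    {
        // Math.PI pushed through single precision and back to double
        double singlePi = (double)(float)Math.PI;
        Console.WriteLine(singlePi.ToString("R")); // prints 3.1415927410125732
    }
}
```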
Has anyone else experienced this behavior? What can I do to correct it
or work around it?
BTW, the test is running as a Debug build on a 2.8 GHz Pentium 4.
Thanks!
Marshall