inconsistent double precision math

mbelew

I'm seeing a very strange behavior with double precision subtraction.
I'm using csUnit for testing. If I run the test by itself, the test
passes. When I run the batch of tests, the test fails. Here's the
test:

[Test]
public void GetAcuteAngleDifference_PI()
{
    double a = 0;
    double b = Math.PI;

    Assert.Equals(b, Euclid.GetAcuteAngleDifference(a, b));
}


/// <summary>
/// Get the smallest acute difference between 2 angles. A positive result
/// shows that b is clockwise of a, while a negative shows that b is
/// counter clockwise of a.
/// </summary>
/// <param name="a">angle in radians</param>
/// <param name="b">angle in radians</param>
/// <returns>smallest acute angle in radians</returns>
public static double GetAcuteAngleDifference(double a, double b)
{
    a = SnapAngle(a); // place the angle between 0 and 2*Pi
    b = SnapAngle(b);

    if (a == b) return 0;

    double result = b - a; ////////// This line gives me grief. //////////

    double absResult = Math.Abs(result);
    double direction = absResult / result;

    if (absResult > Math.PI) // the acute angle is in the opposite direction
        result = direction * (absResult - (2 * Math.PI));

    return result;
}

The test is looking for the smallest acute angle between 0 and Pi. The
expected result is Pi.

Like I said, if I run the test by itself, it passes. The 'result' after
the (b - a) subtraction is essentially (Math.PI - 0) and comes out the same
as Math.PI = 3.1415926535897931. However, the batch run shows the 'result'
variable is slightly larger, at 3.1415927410125732.

Has anyone else experienced this behavior? What can I do to correct it
or work around it?

BTW, the test is running in a Debug build on a 2.8 GHz Pentium 4.

Thanks!

Marshall
 
Guest

Hi Marshall,
I am not sure why your test returns different results when run
individually versus as part of a batch. As we know, floating point numbers
cannot always be exact, since they cannot represent every possible value.
I would pass an epsilon value (a small error margin) to Assert.Equals - one
of its overloads accepts it as an extra parameter - which allows your test
to pass even when the two numbers are not perfectly identical, as long as
they are within the accepted error margin.
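For example, the tolerance comparison boils down to something like this (a
generic sketch, not csUnit-specific - the helper name and the 1e-6 tolerance
are just illustrative choices):

using System;

static class ToleranceCompare
{
    // Returns true when the two values differ by no more than epsilon.
    public static bool NearlyEqual(double expected, double actual, double epsilon)
    {
        return Math.Abs(expected - actual) <= epsilon;
    }
}

// The batch-run value 3.1415927410125732 differs from Math.PI by about 8.7e-8,
// so ToleranceCompare.NearlyEqual(Math.PI, 3.1415927410125732, 1e-6) is true.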

Mark R Dawson
 
mbelew

Well, I guess I can see why PI - 0 might not come out as exactly PI. I just
wish it were consistent no matter how I run the program.

I've done two things that at least make my test pass. The first is that,
internally to the GetAcuteAngleDifference function, I tossed out the double
precision and work only in single (float) precision. The value is still not
exact, but it's more manageable - at least I'm not getting an unexpected
direction change in my return value. The second thing I have done is to add
the suggested epsilon value.

Here's my new test:

[Test]
public void GetAcuteAngleDifference_PI()
{
    double a = 0.0;
    double b = Math.PI;

    Assert.Equals( b, Euclid.GetAcuteAngleDifference(a, b), .000001);
    Assert.Equals(-b, Euclid.GetAcuteAngleDifference(b, a), .000001);
}
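
Roughly, the single-precision version of the function now looks like this
(just a sketch of the idea - it reuses the existing double-based SnapAngle
and narrows everything to float; my actual code may differ slightly):

public static float GetAcuteAngleDifference(float a, float b)
{
    // Reuse the double-based SnapAngle and narrow its result to float.
    a = (float)SnapAngle(a); // place the angle between 0 and 2*Pi
    b = (float)SnapAngle(b);

    if (a == b) return 0f;

    float result = b - a;

    float absResult = Math.Abs(result);
    float direction = absResult / result;

    if (absResult > (float)Math.PI) // the acute angle is in the opposite direction
        result = direction * (absResult - (float)(2 * Math.PI));

    return result;
}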

Thanks for the reply!

Marshall
 
