Math Error in the .NET Framework 1.1.4322 SP1

Guest

I posted this message to dotnet.general, but thought it would be applicable
here as well:

I have discovered a math error in the .NET framework's Log function. It
returns incorrect results for varying powers of 2 that depend on whether the
program is run from within the IDE or from the command line. The amount by
which the calculation is off is very small; even though the double data type
holds the errant value, it seems to round off when printed (via ToString())
and shows the correct one. The problem is that the errant value is still used
in further calculations, which can throw off some functions.

Specific example:

The function System.Math.Log(8,2) yields a value of 3 - 4x10^-16
(2.9999999999999996) instead of 3 when run from a command line (this will not
occur if run inside the IDE). You can store the result of this computation in
a double precision variable, but if you print the variable's value to the
console by calling its ToString() method, the output will be 3. You can
verify, however, that the value is indeed off by printing the result of some
calculations using it. To see this, use Visual Studio to create a new C#
console application and paste the following code sample in its "Main" method:

--- Begin Code Sample ---
// Try the Log function using the overload that specifies the base.
Console.WriteLine("Using System.Math.Log overload to specify desired base...");

// Compute the log of 8 to base 2. Should be 3, but will be 3 - 4x10^-16
// when run from a command prompt (note: it prints as 3 regardless).
double log = System.Math.Log(8, 2);

// Get the floor of the result. Should be 3, but will be 2 when run from
// a command prompt.
double floor = System.Math.Floor(log);

// Print the log of 8 to base 2 and the floor of the result.
Console.WriteLine("Log2 8 = " + log.ToString());
Console.WriteLine("Floor(Log2 8) = " + floor.ToString());

// Print whether the log of 8 to base 2 equals 3. Should be true, but
// will be false when run from a command prompt.
Console.WriteLine("Log2 8 == 3? : " + (log == 3).ToString());

Console.WriteLine();

// Try the Log function while doing things the "manual" way.
Console.WriteLine("Using Log x / Log y method of computing the same logarithm...");

// Compute the log of 8 to base 2 via division (will be 3).
log = System.Math.Log(8) / System.Math.Log(2);

// Get the floor of the result (will be 3).
floor = System.Math.Floor(log);

// Print the log computed via Log x / Log y and its floor.
Console.WriteLine("Log2 8 = " + log.ToString());
Console.WriteLine("Floor(Log2 8) = " + floor.ToString());

// Print whether the result equals 3 (will be true).
Console.WriteLine("Log2 8 == 3? : " + (log == 3).ToString());

Console.Read();
--- End Code Sample ---


Note that if you calculate via log x / log y, the calculation is correct.
Using ILDASM to examine mscorlib.dll reveals that this is what is actually
happening internally: the overloaded method System.Math.Log(double a, double
newBase) calls System.Math.Log(double d) on each argument and divides the
results. It is certainly interesting that the framework returns a different
result when it performs this division internally than when you perform it
yourself.
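
In other words, the overload appears to behave roughly like this sketch
(a reconstruction from the ILDASM observation above, not the verbatim
framework source; parameter names may differ):

// Approximate behavior of the Math.Log(a, newBase) overload as seen in
// ILDASM: it divides two calls to the single-argument Log.
public static double Log(double a, double newBase)
{
    return System.Math.Log(a) / System.Math.Log(newBase);
}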

You can see the actual value of the log function by putting a statement
such as "Console.Read()" as your first line so you have a way to get the
program to stop. Compile the program and run it from the command line. Open a
new instance of Visual Studio and click on Tools > Debug Processes and select
the application. Once you have attached the debugger, break the application
and start stepping through the code. Add the variable "log" to your watch
window and you will see its value is 3 - 4x10^-16, not 3 when computed using
System.Math.Log(8,2). These steps must be taken, as the problem will not
occur if you initially start the program within the IDE.

Additionally, the problem can be verified with one line of code. Create a
new C# console application and run the following code:

Console.WriteLine(System.Math.Log(8,2) == 3);

which will print "true" when run from within the IDE and "false" when run
from the command line.

The interesting thing here is that the issue occurs on some powers of 2 if
the program is run from the command line and different powers of two if run
from within the IDE. I have verified this by computing the first 64
logarithms of 2 from both IDE and command line. You can do so by running the
following code:

for (int pow = 1; pow <= 64; pow++)
{
    if (System.Math.Log(System.Math.Pow(2, pow), 2) != pow)
        Console.WriteLine(pow.ToString());
}

Additionally, you can compute the same logarithms in VB.NET and the answers
will still be off. This rules out the problem being isolated to a single
language and shows the issue is within the framework itself. As an
interesting side check, I computed these logarithms from VB 6.0 and all
passed the test, so the problem is isolated to .NET.

Lastly, computation using log x / log y is also off, but for different
powers of 2 than Log(power, newBase), and it is more consistent: it
computes the incorrect value for the same powers whether run from the
command line or from the IDE.
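
For comparison, the same 64-power sweep can be run against the division
form with a small variation on the loop above (this variation is
illustrative, not from the original post):

// Test the Log x / Log y form instead of the Log(value, base) overload,
// printing the powers of 2 for which the result is off.
for (int pow = 1; pow <= 64; pow++)
{
    if (System.Math.Log(System.Math.Pow(2, pow)) / System.Math.Log(2) != pow)
        Console.WriteLine(pow.ToString());
}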

I contacted Microsoft support regarding this and they verified that they
have reproduced the problem. The last correspondence with them said, "[I am]
creating a problem report to send to the dev team now".

The most concerning thing about all of this is that programs, at least in
this instance, behave differently when run from the command line than they do
when run inside the IDE. I find the fact that the value of calculations is
different depending on how you start your application to be alarming.
 
Thomas Scheidegger [MVP]

Hi,

> The most concerning thing about all of this is that programs, at least in
> this instance, behave differently when run from the command line than they do
> when run inside the IDE. I find the fact that the value of calculations is
> different depending on how you start your application to be alarming.


I don't know for sure, but debuggers often activate their own (software
emulation) math library (the CRT library) instead of using the hardware
FPU. But it could also be a ToString() difference under the debugger...

To verify, try BitConverter.GetBytes(Single or Double) to retrieve the
internal bits of the values.
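
For example, something along these lines (a minimal sketch) makes any
difference visible even when ToString() rounds both values to 3:

// Dump the raw IEEE 754 bytes of the two values so they can be compared
// independently of ToString() rounding.
double log = Math.Log(8, 2);
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(log)));
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(3.0)));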


> Math.Log(8,2) yields a value of 3 - 4x10^-16 (2.9999999999999996)

I don't think this is a bug.

> System.Math.Log(8,2) == 3

NEVER test floating point numbers for equality!
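
For example, compare within a tolerance instead (a minimal sketch; the
epsilon is illustrative and should be chosen to suit your problem):

// Compare doubles within a tolerance rather than with ==.
const double epsilon = 1e-10; // illustrative tolerance
double log = Math.Log(8, 2);
Console.WriteLine(Math.Abs(log - 3.0) < epsilon); // True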



Floating-Point Theory:
http://research.microsoft.com/~hollasch/cgindex/coding/ieeefloat.html

http://docs.sun.com/source/806-3568/ncg_goldberg.html

and MS KB
http://support.microsoft.com/?kbid=145889

http://support.microsoft.com/?kbid=125056

http://support.microsoft.com/?kbid=214118

http://support.microsoft.com/?kbid=42980

http://support.microsoft.com/?kbid=36068
 
Daniel Roth

Hi

Your problem is the way .NET processes the ==

Try this:

double x = 8;
double y = 2;
double log = System.Math.Log(x, y);

if (Convert.ToDecimal(log) == Convert.ToDecimal(3))
{
    Console.WriteLine("true");
}
else
{
    Console.WriteLine("false");
}
// Output is true
Daniel Roth
MCSD.NET

 
Jon Skeet [C# MVP]

Daniel Roth said:
> Your problem is the way .NET processes the ==
>
> Try this:
>
> double x = 8;
> double y = 2;
> double log = System.Math.Log(x, y);
>
> if (Convert.ToDecimal(log) == Convert.ToDecimal(3))
> {
>     Console.WriteLine("true");
> }
> else
> {
>     Console.WriteLine("false");
> }

No it's not. Converting the numbers to decimal is just masking the
error.

Try this code, using the DoubleConverter.cs available from
http://www.pobox.com/~skeet/csharp/floatingpoint.html

using System;

class Test
{
    static void Main()
    {
        double log = Math.Log(8, 2);
        Console.WriteLine(DoubleConverter.ToExactString(log));
    }
}

The result, when run from a console, is
2.999999999999999555910790149937383830547332763671875

Basically, as Thomas said, you should never test floating point results
for equality - because of the way things like logs are calculated,
there's every possibility of getting a slight error, as in this case.

It shouldn't be a problem if you're using floating point numbers
appropriately though.
 
Daniel Roth

Hi Jon

Either the result of the if statement is mathematically correct or
it's not.

And, with my code, as above, it is.

The only change I made to the previous post was to the if statement.
"Masking" or not, it now works for this case.

I would appreciate it if you could show cases where this "masking"
fails, that is, where "if( Convert.ToDecimal(x) == Convert.ToDecimal(y) )"
produces a mathematically incorrect result: where x = y, and x and y are
real numbers, but the above code produces a false result.

Daniel Roth
MCSD.NET
 
Dick Grier

This can fail in many complex calculations. Any elementary book on
numerical analysis will show actual cases. I don't have any example code
offhand.

Suffice it to say (I don't think this is opinion), Jon and Thomas are
exactly correct. At least, as "exactly" as we may assume when using the
limited precision that floating point numbers allow.

--
Richard Grier (Microsoft Visual Basic MVP)

See www.hardandsoftware.net for contact information.

Author of Visual Basic Programmer's Guide to Serial Communications, 4th
Edition ISBN 1-890422-28-2 (391 pages) published July 2004. See
www.mabry.com/vbpgser4 to order.
 
Guest

The code that I was running when I encountered this issue took the result
of the logarithm and passed it to the floor function. It was a problem when
the result came up as 2 instead of 3. I have changed the code to:

System.Math.Floor(System.Math.Round(System.Math.Log(8,2),10));

which rounds the result of Log to 10 decimal places and avoids the
imprecision at the end. Would there be a more appropriate way of handling
this, or would using Round to chop off the precision at the desired level
indeed be the best way to do it?
 
Jon Skeet [C# MVP]

Daniel Roth said:
> Either the result of the if statement is mathematically correct or
> it's not.

True - but if you convert both sides of a false statement, it can
become true. For instance:

(int)0.5 == (int)0.6

despite 0.5 not being 0.6.

> And, with my code, as above, it is.
>
> The only change I made to the previous post was to the if statement.
> "Masking" or not, it now works for this case.
>
> I would appreciate it if you could show cases where this "masking"
> fails, that is, where "if( Convert.ToDecimal(x) == Convert.ToDecimal(y) )"
> produces a mathematically incorrect result: where x = y, and x and y are
> real numbers, but the above code produces a false result.

It won't do that - but it *will* do what happened here, which is that

if (Convert.ToDecimal(x) == Convert.ToDecimal(y))

returns true despite x and y being real numbers which are different.

Here is a program to demonstrate exactly that:

using System;

class Test
{
    static void Main()
    {
        double x = 2.999999999999999555910790149937383830547332763671875;
        double y = 3;

        Console.WriteLine(x == y);
        Console.WriteLine(Convert.ToDecimal(x) == Convert.ToDecimal(y));
    }
}
 
Jon Skeet [C# MVP]


It really depends on what exactly you want the result to be - and what
you expect the inputs to be. If you know that you'll always be passed
something where Math.Log *should* be returning something which is
actually an integer, then you could add a small amount and then call
Floor. Otherwise, it's harder to say. Could you give more details of
what you need?
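
The "add a small amount and then call Floor" suggestion might look like
this (illustrative only; the fudge value is an assumption you would tune
to your inputs):

// Add a small fudge factor before flooring, assuming the true result is
// known to be an integer. The 1e-10 value is illustrative.
double bit = Math.Floor(Math.Log(8, 2) + 1e-10); // yields 3 rather than 2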
 
Guest

I have an array of 5 bytes that I am treating as one long array of bits
(40 bits total). Each bit represents a boolean flag. Some verification
checks are run on the bits and stored, and then the entire value is passed
between applications as a hex string, where the same checks are re-run and
verified.

In the code snippet below, I compute a "sum" and count check. The count
check is straightforward in that it counts how many bits are set. The sum
check works on the principle of numbering each bit and then summing the
numbers of the set bits. For example, if the first three bits are set then
the sum check is 6 (1 + 2 + 3). If all 40 bits are set then the sum check
is 820 (1 + 2 + 3 + 4 + 5 ... + 38 + 39 + 40). The log function is used to
determine which bits are set.

Perhaps there is a better way of doing this given the tools of the .NET
framework.


// count and sum check
int count = 0;
int sum = 0;
for (int i = 0; i < 5; i++)
{
    // used to continue the sum over bits in multiple bytes
    int modifier = (4 - i) * 8;
    int val = b[i]; // b is the 5-byte array (index apparently lost in posting)
    while (val > 0)
    {
        // find the highest set bit via the rounded log
        int bit = (int) Math.Floor(Math.Round(Math.Log(val, 2), 10));
        sum += modifier + bit + 1;
        count++;
        val = (int) (val - System.Math.Pow(2, bit));
    }
}
 
Jay B. Harlow [MVP - Outlook]

limelight,
PMFJI, it sounds like you simply need to use the bit shift operators.

Something like (minimally tested):

// count and sum check
int count = 0;
int sum = 0;
int digit = 40;
for (int i = 0; i < 5; i++)
{
    int val = b[i]; // b is the 5-byte array (index apparently lost in posting)
    for (int j = 0; j < 8; j++)
    {
        // test the high bit of the current byte
        if ((val & 0x80) != 0)
        {
            sum += digit;
            count += 1;
        }
        val <<= 1;  // shift the next bit into position
        digit -= 1; // move to the next bit number
    }
}
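
For example, with a hypothetical input (assuming b is the 5-byte array
from the original snippet), the low three bits of the last byte are bits
1 through 3, so running the loop above should give count = 3 and sum = 6:

// Hypothetical 5-byte input with bits 1, 2 and 3 set.
byte[] b = new byte[] { 0x00, 0x00, 0x00, 0x00, 0x07 };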

Hope this helps
Jay

 
Daniel Roth

Hi limelight

According to MSDN, a decimal has a "precision of 28-29 significant
digits".

So, as long as ALL your variables and constants are decimal, then all
your operations will be correct to 28-29 significant digits.

I have found no cases to counter the above statement.

Regards,

Daniel Roth
MCSD.NET


 
Jon Skeet [C# MVP]

Daniel Roth said:
> According to MSDN, a decimal has a "precision of 28-29 significant
> digits".
>
> So, as long as ALL your variables and constants are decimal, then all
> your operations will be correct to 28-29 significant digits.
>
> I have found no cases to counter the above statement.

Except the counterexample I gave, where Convert.ToDecimal is converting
a double which is definitely less than 3 (in the 16th decimal place) to 3.
 
Jon Skeet [C# MVP]


I agree with Jay - bit shifting is definitely a better way of doing
this. You don't fundamentally need any floating point arithmetic here,
so it's best to avoid it.
 
Guest

Thanks to everyone for all the input. The code sample is great and is
definitely a better way of accomplishing what I need. I will be more mindful
of floating point operations in the future and try to avoid them when they
are not necessary.


 
