TimeSpan.TotalDays versus DateTime.ToOADate difference

Guest

Given the example below, can someone explain why TimeSpan.TotalDays gives a
different result than subtracting two DateTime.ToOADate values?

I am completely stumped. Thanks in advance,
Dan

Example Output:

start: 1/1/2007 12:00:00 AM
end : 1/1/2007 1:00:00 AM

(end - start).TotalDays : 0.0416666666666667
(end.ToOADate() - start.ToOADate()) : 0.0416666666642413
((decimal)end.ToOADate() - (decimal)start.ToOADate()) : 0.0416666667

Code:

using System;
using System.Collections.Generic;
using System.Text;

namespace DurationTest
{
    class Program
    {
        static void Main(string[] args)
        {
            DateTime start = new DateTime(2007, 1, 1, 0, 0, 0);
            DateTime end = new DateTime(2007, 1, 1, 1, 0, 0);

            Console.WriteLine("start: " + start);
            Console.WriteLine("end : " + end);
            Console.WriteLine();

            TimeSpan duration = end - start;
            Console.WriteLine("(end - start).TotalDays : " + duration.TotalDays);

            double totalDays = end.ToOADate() - start.ToOADate();
            Console.WriteLine("(end.ToOADate() - start.ToOADate()) : " + totalDays);

            decimal totalDays2 = ((decimal)end.ToOADate()) - ((decimal)start.ToOADate());
            Console.WriteLine("((decimal)end.ToOADate() - (decimal)start.ToOADate()) : " + totalDays2);

            Console.ReadLine();
        }
    }
}
 
Guest

DateTime and TimeSpan are 64-bit values that represent time as a whole number
of 100-nanosecond ticks (1/10,000,000 of a second), so an exact one-hour
interval is stored exactly.
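
For illustration only (this sketch is not part of the original posts, and the class name TickSketch is made up), the tick constants can be inspected directly and TotalDays recomputed from the raw tick count:

using System;

class TickSketch
{
    static void Main()
    {
        DateTime start = new DateTime(2007, 1, 1, 0, 0, 0);
        DateTime end = new DateTime(2007, 1, 1, 1, 0, 0);
        TimeSpan duration = end - start;

        // One tick is 100 nanoseconds: 10,000,000 ticks per second and
        // 864,000,000,000 ticks per day.
        Console.WriteLine(TimeSpan.TicksPerSecond);   // 10000000
        Console.WriteLine(TimeSpan.TicksPerDay);      // 864000000000

        // One hour is stored exactly as 36,000,000,000 ticks.
        Console.WriteLine(duration.Ticks);            // 36000000000

        // TotalDays is derived from the exact tick count, so essentially
        // only a single rounding step is involved -- compare with a plain
        // division by the number of ticks in a day.
        Console.WriteLine(duration.Ticks / (double)TimeSpan.TicksPerDay);
        Console.WriteLine(duration.TotalDays);
    }
}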

When you convert this to a double value, you lose precision. A double is also
a 64-bit value, but only 52 bits (plus one implied bit) hold the mantissa,
which works out to roughly 15-16 significant decimal digits in total. An
OADate for a date in 2007 is around 39,083, so about five of those digits are
spent before the decimal point, leaving only ten or so significant digits
after it; the trailing digits you see in the second result are just rounding
noise.
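
Again only as a rough sketch (OADateSketch is an invented name), printing the round-trippable "R" form of the doubles makes the magnitude problem visible:

using System;

class OADateSketch
{
    static void Main()
    {
        DateTime start = new DateTime(2007, 1, 1, 0, 0, 0);
        DateTime end = new DateTime(2007, 1, 1, 1, 0, 0);

        // OADates count days since 30 December 1899, so a date in 2007 is
        // around 39,083 -- five significant digits are used up before the
        // decimal point even appears.
        Console.WriteLine(start.ToOADate().ToString("R"));   // 39083
        Console.WriteLine(end.ToOADate().ToString("R"));     // about 39083.0416666666...

        // Subtracting two large, nearly equal doubles cancels the leading
        // digits and leaves the rounding error exposed in the last places.
        double diff = end.ToOADate() - start.ToOADate();
        Console.WriteLine(diff.ToString("R"));               // about 0.04166666666424...
    }
}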

The conversion from double to decimal can't bring back anything that was lost
in the first conversion; the cast simply rounds each double to 15 significant
digits, which throws the noise digits away. The second and third results
therefore carry the same information and only look different because the
decimal result stops at the digits that are actually meaningful.
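
One last sketch (DecimalCastSketch is an invented name) shows where that rounding happens; it relies on the documented behaviour that converting a double to decimal keeps at most 15 significant digits:

using System;

class DecimalCastSketch
{
    static void Main()
    {
        DateTime start = new DateTime(2007, 1, 1, 0, 0, 0);
        DateTime end = new DateTime(2007, 1, 1, 1, 0, 0);

        // Each cast rounds the double to 15 significant digits, so the
        // noise digits disappear before the subtraction is done.
        decimal s = (decimal)start.ToOADate();   // 39083
        decimal e = (decimal)end.ToOADate();     // 39083.0416666667
        Console.WriteLine(e - s);                // 0.0416666667

        // Casting the double difference instead keeps the noise, because
        // here the 15 digits are spent on a value near 0.04, not 39,083.
        Console.WriteLine((decimal)(end.ToOADate() - start.ToOADate()));
    }
}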
 
