Time Accuracy

Alex Rauket

I'm trying to increase the accuracy with which Excel reports the time. I'm
using the NOW function to write the time that an entry is entered into a
spreadsheet. I would like to know the time to better than 1 second.

For example, instead of getting 13:45:45, I would like to see 13:45:24.56785.

Thanks
 
Bill Sharpe

Alex said:
I'm trying to increase the accuracy with which Excel reports the time. I'm
using the NOW function to write the time that an entry is entered into a
spreadsheet. I would like to know the time to better than 1 second.

For example, instead of getting 13:45:45, I would like to see 13:45:24.56785.

Thanks
You might be able to set up a custom time format that would show this
many digits, but I doubt that it would be accurate much beyond the
tenths of a second, i.e. the last four digits would be questionable.

Chip Pearson has an explanation of how Excel stores dates and times at
http://www.cpearson.com/excel/datetime.htm. There are six digits set
aside to store 24 hours of time. Do the math and you'll see that you
won't get any closer accuracy than tenths of a second.

Geez, I'm happy to know whether it's time for breakfast, lunch, or dinner.

Bill
 
joeu2004

Alex Rauket said:
I'm trying to increase the accuracy with which Excel reports the time.
I'm using the NOW function to write the time that an entry is entered
into a spreadsheet. I would like to know the time to better than 1 second.
For example, instead of getting 13:45:45, I would like to see
13:45:24.56785.

With a custom format, the best you can do is "h:mm:ss.000" -- seconds to 3
decimal places. Arguably, you could do the math yourself. But I don't think
it is worth it because Excel NOW() is accurate to only 10 milliseconds, at
least on my MS Win XP system with Excel 2003. And by the way, the VBA Now()
function is accurate to only 1 second.

Why do you want to determine the data entry time to less than a second, in
the first place?

If you are really trying to compute the time between cell changes, there are
more accurate methods for measuring time intervals using VBA.
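As an illustration of that last point: interval timing is usually done with a monotonic, high-resolution counter rather than wall-clock time. A minimal sketch, in Python rather than VBA (purely illustrative; none of these names come from the thread):

```python
import time

# A monotonic, high-resolution counter is suited to measuring intervals,
# unlike wall-clock time, which has coarse resolution and can jump.
start = time.perf_counter()
total = sum(range(100_000))        # some work to time
elapsed = time.perf_counter() - start
print(f"elapsed: {elapsed * 1000:.3f} ms")
```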
 
joeu2004

Chip Pearson has an explanation of how Excel stores
dates and times at
http://www.cpearson.com/excel/datetime.htm.
There are six digits set aside to store 24 hours of time.
Do the math and you'll see that you won't get any closer
accuracy than tenths of a second.

Exactly how did you "do the math"?

First, Chip's description of the internal format is over-simplified
and frankly incorrect to the degree that you interpreted it (literally
6 digits "set aside" for time).

Second, even if we assume there are 6 digits for time, I do not see
how to "do the math" to reach the conclusion that you did. (Hint:
Try entering =0.01/86400 and format it as Number. How many decimal
places do you need?)
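Working the hint through (Python here, purely to illustrate the arithmetic; this is not an Excel formula):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30

# 0.01 second expressed as a fraction of a 24-hour (86400-second) day
frac = Decimal("0.01") / Decimal(86400)
print(frac)   # ~1.1574E-7: the first significant digit sits in the 7th
              # decimal place, so 6 decimal digits of a day fraction
              # cannot even resolve hundredths of a second
```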

Nonetheless, your conclusion is correct.
 
joeu2004

Do the math and you'll see that you won't get any
closer accuracy than tenths of a second.
[....]
Nonetheless, your conclusion is correct.

Sorry, I misread. Your conclusion is incorrect. The Excel NOW()
function is accurate to 10 milliseconds. That is hundredths of a
second.


 
joeu2004

Errata....

Do the math and you'll see that you won't get any closer
accuracy than tenths of a second.
[....]
Second, even if we assume there are 6 digits for time,
I do not see how to "do the math" to reach the conclusion
that you did.  (Hint: Try entering =0.01/86400 and format
it as Number.  How many decimal places do you need?)

Okay, I see now. I kept misreading your "tenths of seconds" as
hundredths of seconds, which is the correct resolution of the Excel
NOW() function.

It is true that 0.1 is the smallest fraction of a second that can be
represented in 6 decimal fractions. (Actually, about 0.0432 is. But
let's not split hairs ;->.)

The important "take aways" are: (a) I believe Chip's description is
incorrect; and (b) the Excel NOW() function is accurate to 10
milliseconds -- hundredths of a second.
 
Bill Sharpe

joeu2004 said:
Exactly how did you "do the math"?

First, Chip's description of the internal format is over-simplified
and frankly incorrect to the degree that you interpreted it (literally
6 digits "set aside" for time).

Second, even if we assume there are 6 digits for time, I do not see
how to "do the math" to reach the conclusion that you did. (Hint:
Try entering =0.01/86400 and format it as Number. How many decimal
places do you need?)

Nonetheless, your conclusion is correct.
My math was based on Chip's information that date/time information is
stored as an integer and decimal combination with six digits set aside
for the time component. That means you can slice 24 hours into 1,000,000
equal pieces. There are 24x60x60x10 = 864,000 tenths of a second in a
day. Therefore Excel can potentially track only to the tenths of a
second. The hundredths of a second and beyond will result in unreliable
numbers.

I agree with your questioning as to why in the world the OP wants to be
that accurate.

Got to go now. It's time for breakfast...

Bill
 
joeu2004

My math was based on Chip's information that date/time information is
stored as an integer and decimal combination with six digits set aside
for the time component. That means you can slice 24 hours into 1,000,000
equal pieces. There are 24x60x60x10 = 864,000 tenths of a second in a
day. Therefore Excel can potentially track only to the tenths of a
second. The hundredths of a second and beyond will result in unreliable
numbers.

Your reasoning would be sound, if the information were correct. But
Chip's information is incorrect, at least in a literal sense.

First, I hasten to note that according to http://support.microsoft.com/kb/214094,
the format of the time component is just 5 decimal fractions. By your
reasoning, that would slice 24 hours into 100,000 equal pieces; since
there are 24x60x60 = 86400 seconds in a day, the smallest increment of
time would be about 1 second(!).
Who ya' gonna believe?

The answer is: neither is correct.

You can easily verify this by entering the time 0:0:0.001 and
formatting the cell as Number with at least 8 decimal places. (13 dp
is more representative.)

Although h:mm:ss.000 is the most precise custom format supported and
recognized for data entry, we can actually input any fractional time.
As always, the displayed value is subject to Excel's rounding rules.
For example, the largest number displayed as 23:59:59.999 is about
0.999499999999999/86400.

(FYI, KB 214094 asserts that 0.99999 is 23:59:59. It is true that
that value is displayed that way. But if we compare it with time
entered as 23:59:59, the comparison is false(!).)
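That failed comparison follows from ordinary arithmetic and can be reproduced outside Excel (a Python sketch, not Excel itself):

```python
t = (23*3600 + 59*60 + 59) / 86400   # 23:59:59 as a fraction of a day

print(t == 0.99999)   # False: the comparison fails, as described
# 0.99999 of a day is 86399.136 seconds, i.e. 23:59:59.136, not 23:59:59
```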

The hundredths of a second and beyond will result in
unreliable numbers.

I am not sure what you mean by that.

All decimal fractions and non-integer expressions are subject to the
oddities of IEEE floating point representation. But I would say that
the internal representation of 0.1 seconds is no less "reliable" than
the representations of 0.01, 0.001 and even 1 second expressed as
fractions of a day (i.e. divided by 86400), at least within 15
significant digits. In fact, I believe the internal representation of
0.01 seconds is closer to its infinite representation than 0.01
seconds, and both are closer than 1 second is(!) [1].

But all of that is about representation of time. The original posting
raised issues about the resolution of the Excel NOW() function.

I believe the following macro demonstrates that the smallest time
increment reflected by NOW() is 10 milliseconds. (Sometimes we see a
20-msec difference. There are various possible reasons.)

CAUTION: This macro overwrites A1:K3 of the current worksheet, and it
changes the column width of A:K.

Private Sub measXLNow()
    Dim n As Long
    Dim st As Double, et As Double
    Dim c As Integer
    Dim oldCalc

    Application.ScreenUpdating = False
    oldCalc = Application.Calculation
    Application.Calculation = xlCalculationManual
    Range("A1") = "start Now()"
    Range("A2") = "end Now()"
    Range("A3") = "#loops"

    For c = 1 To 10
        ' capture a starting NOW() value
        Range("B1").Cells(1, c).Formula = "=Now()"
        st = Range("B1").Cells(1, c)
        n = 0
        ' re-enter NOW() until the returned value changes
        Do
            n = n + 1
            Range("B2").Cells(1, c).Formula = "=Now()"
            et = Range("B2").Cells(1, c)
        Loop Until et <> st

        Range("B1").Cells(1, c) = st
        Range("B1").Cells(1, c).NumberFormat = "h:mm:ss.000"
        Range("B2").Cells(1, c) = et
        Range("B2").Cells(1, c).NumberFormat = "h:mm:ss.000"
        Range("B3").Cells(1, c) = n
        Range("B3").Cells(1, c).NumberFormat = "0"
    Next c

    Range("A1", Range("B3").Cells(1, 10)).Columns.AutoFit
    Application.Calculation = oldCalc
    Application.ScreenUpdating = True
End Sub


Endnotes:

[1] 0.1/86400 =
0.00000115740740740740,7392286300719386193947002539061941206455230712890625
0.01/86400 =
0.000000115740740740740,7418756080321083079542887617208180017769336700439453125
0.001/86400 =
0.0000000115740740740740,738566885581896197254803126952538150362670421600341796875
1/86400 =
0.0000115740740740740,73499346533566711769935864140279591083526611328125

(The comma demarcates 15 significant digits to the left.)

[2] All of my examples were derived on MS Win XP using Excel 2003 and
VB 6.3.
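As a footnote to endnote [1]: the exact decimal expansion of a binary double can be reproduced with Python's decimal module (an illustration; the digits depend only on IEEE 754 doubles, which Excel also uses; 1/86400 is a single correctly rounded division, so its value is environment-independent):

```python
from decimal import Decimal

# Decimal(float) captures a double's exact binary value in decimal digits,
# the same kind of expansion shown in endnote [1]
x = 1/86400
print(format(Decimal(x), 'f'))
```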
 
joeu2004

Typos....

I would say that the internal representation of 0.1 seconds
is no less "reliable" than the representations of 0.01

"No __more__ reliable".
I believe the internal representation of 0.01 seconds is
closer to its infinite representation than 0.01 seconds

"Than 0.1 seconds".



 
Bill Sharpe

See inline comments
Your reasoning would be sound, if the information were correct. But
Chip's information is incorrect, at least in a literal sense.

First, I hasten to note that according to http://support.microsoft.com/kb/214094,
the format of the time component is just 5 decimal fractions.

And the reason KB214094 uses a five-digit representation for an elapsed
time of 12 hours 45 minutes is because 0.53125 is the exact decimal
value for that time period. 12.75/24 = 0.53125.

By your
reasoning, that would slice 24 hours into 100,000 equal pieces; since
there are 24x60x60 = 86400 seconds in a day, the smallest increment of
time would be about 1 second(!).
Who ya' gonna believe?

My reasoning was based on 1,000,000 slices of time in each 24 hours, per
Chip's site. That means the 864,000 tenths of a second in one day can
theoretically be handled by the available decimals. Anything beyond that
(hundredths of a second, thousandths of a second) is unreliable. If Chip
is wrong then obviously my conclusion is wrong, but see below.
The answer is: neither is correct.

You can easily verify this by entering the time 0:0:0.001 and
formatting the cell as Number with at least 8 decimal places. (13 dp
is more representative.)

Although h:mm:ss.000 is the most precise custom format supported and
recognized for data entry, we can actually input any fractional time.
As always, the displayed value is subject to Excel's rounding rules.
For example, the largest number displayed as 23:59:59.999 is about
0.999499999999999/86400.

(FYI, KB 214094 asserts that 0.99999 is 23:59:59. It is true that
that value is displayed that way. But if we compare it with time
entered as 23:59:59, the comparison is false(!).)



I am not sure what you mean by that.

See my comment above. Perhaps I should have said unreliable digits
rather than unreliable numbers.
All decimal fractions and non-integer expressions are subject to the
oddities of IEEE floating point representation. But I would say that
the internal representation of 0.1 seconds is no less "reliable" than
the representations of 0.01, 0.001 and even 1 second expressed as
fractions of a day (i.e. divided by 86400), at least within 15
significant digits. In fact, I believe the internal representation of
0.01 seconds is closer to its infinite representation than 0.01
seconds, and both are closer than 1 second is(!) [1].

I'm not getting into the decimal system vs. the binary system and the
loss of accuracy that entails at some point in working with very small
numbers. Some fractions are exactly equal in decimal and binary. 1.1
binary exactly equals 1.5 decimal. 1.11 exactly equals 1.75 decimal.
Most such conversions are approximate, although close enough for most
practical purposes.
But all of that is about representation of time. The original posting
raised issues about the resolution of the Excel NOW() function.

I believe the following macro demonstrates that the smallest time
increment reflected by NOW() is 10 milliseconds. (Sometimes we see a
20-msec difference. There are various possible reasons.)

Ten milliseconds is 0.01 seconds. And I noted above that the numbers
beyond the tenths column are unreliable. That could explain the
occasional 20 millisecond difference.
CAUTION: This macro overwrites A1:K3 of the current worksheet, and it
changes the column width of A:K.

Private Sub measXLNow()
    Dim n As Long
    Dim st As Double, et As Double
    Dim c As Integer
    Dim oldCalc

    Application.ScreenUpdating = False
    oldCalc = Application.Calculation
    Application.Calculation = xlCalculationManual
    Range("A1") = "start Now()"
    Range("A2") = "end Now()"
    Range("A3") = "#loops"

    For c = 1 To 10
        Range("B1").Cells(1, c).Formula = "=Now()"
        st = Range("B1").Cells(1, c)
        n = 0
        Do
            n = n + 1
            Range("B2").Cells(1, c).Formula = "=Now()"
            et = Range("B2").Cells(1, c)
        Loop Until et <> st

        Range("B1").Cells(1, c) = st
        Range("B1").Cells(1, c).NumberFormat = "h:mm:ss.000"
        Range("B2").Cells(1, c) = et
        Range("B2").Cells(1, c).NumberFormat = "h:mm:ss.000"
        Range("B3").Cells(1, c) = n
        Range("B3").Cells(1, c).NumberFormat = "0"
    Next c

    Range("A1", Range("B3").Cells(1, 10)).Columns.AutoFit
    Application.Calculation = oldCalc
    Application.ScreenUpdating = True
End Sub


Endnotes:

[1] 0.1/86400 =
0.00000115740740740740,7392286300719386193947002539061941206455230712890625
0.01/86400 =
0.000000115740740740740,7418756080321083079542887617208180017769336700439453125
0.001/86400 =
0.0000000115740740740740,738566885581896197254803126952538150362670421600341796875
1/86400 =
0.0000115740740740740,73499346533566711769935864140279591083526611328125

(The comma demarcates 15 significant digits to the left.)

[2] All of my examples were derived on MS Win XP using Excel 2003 and
VB 6.3.

Neat macro!

The repetition of the digits "740" four times before the comma in the
examples above (and almost five times, as the next sequences after the
comma are 739, 741, 738, and 734) is suspect, although I guess no more
so than the endless threes in the decimal representation of 1/3. I'd be
less suspicious if the 740's kept on repeating.

I just had an "aha" moment. If I store just the time in a cell, e.g.
12:01:01 PM, Excel displays the time but stores the information as
0.500011574074074 -- fifteen digits, and there are those suspicious 740
digits repeating again in there. If I add today's date and store the
cell formatted as date and time, Excel stores the information as
39889.5000115741 -- five integer digits and ten decimal digits.
Apparently Excel does keep track of time to more than six decimal digits.
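The stored value is exactly 12 hours plus one second expressed as a fraction of a day, which plain arithmetic confirms (a Python sketch, not Excel):

```python
# 12 hours plus 1 second, as a fraction of an 86400-second day
t = (12*3600 + 1) / 86400

# to 15 significant digits this is the value Excel displays
print(f"{t:.15g}")   # 0.500011574074074
```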

I'm still curious to know why the OP wants to be so precise in Excel.

And it's not nearly close enough to dinner time -- corned beef tonight!

Bill
 
joeu2004

And the reason KB214094 uses a five-digit representation for an
elapsed time of 12 hours 45 minutes is because 0.53125 is the
exact decimal value for that time period. 12.75/24 = 0.53125.

I was referring to the general statement at the beginning of the KB
article, to wit: "Times are stored as decimal numbers between .0 and
.99999, where .0 is 00:00:00 and .99999 is 23:59:59".

I was trying to contrast that with statements on Chip's web page,
namely: "Excel stores dates and times as a number representing the
number of days since 1900-Jan-0, plus a fractional portion of a 24
hour day: ddddd.tttttt. [....] The integer portion of the number,
tttttt, represents the fractional portion of a 24 hour day".

(I presume Chip meant to say the __fractional__ portion, tttttt.)

Your conclusion that time is accurate to 0.1 seconds was based on the
fact that Chip showed only 6 decimal places for time. But if Chip
were wrong and MS were right (an interesting proposition ;->), you
would have to conclude that time is accurate to only 1 second because
the MS KB shows only 5 decimal places for time.

That "proof by contradiction" was intended to be an eye-opener for
you. Obviously, the strategy failed.

No matter, because ....

I just had an "aha" moment. If I store just the time in a cell,
e.g. 12:01:01 PM, Excel displays the time but stores the
information as 0.500011574074074 -- fifteen digits [...].
Apparently Excel does keep track of time to more than six decimal
digits.

Ding! And with that, the student becomes the master ;-). I'm sorry
that my explanations did not help you. But there is nothing like self-
discovery to advance your own education. I was hoping that you would
try to fill in the blanks with your own experiments anyway. (BTW,
apparently you entered 12:00:01, not 12:01:01.)

I'm not getting into the decimal system vs. the binary
system and the loss of accuracy that entails at some
point in working with very small numbers.

Small numbers per se are not the problem. For example, 2^-1022 is a
__very__ small number, yet it can be represented with no loss of
accuracy. The issue is whether or not a decimal value can be
represented exactly as a sum of a finite number (53) of consecutive
powers of 2. That is as much a problem with 12345678901234.1 as it is
with 0.0000000000123456789012341.

It's a little more involved than that. For example, small changes to
large numbers can sometimes be problematic. (Not in this case,
though.) But I'm not getting into the whole decimal v. binary system
thing ;-).

Perhaps I should have said unreliable digits rather than
unreliable numbers.
[....]
Some fractions are exactly equal in decimal and binary.
1.1 binary exactly equals 1.5 decimal. 1.11 exactly equals
1.75 decimal. Most such conversions are approximate,
although close enough for most practical purposes.

And my point was.... The approximation of 0.01 seconds is just as
"close enough for most practical purposes" as is the approximation for
0.1 seconds in Excel's time format. The representation of 0.01
seconds is no less "reliable" than the representation of 0.1 seconds,
by your definition. They are both accurate to +/- 8.64 microseconds,
less than 1/1000 of 0.01 seconds. (See below.)

(Here, I am referring to the "reliability" of the representation and
presentation of the time format, not the reliability of the Excel
NOW() function per se.)

Ten milliseconds is 0.01 seconds. And I noted above
that the numbers beyond the tenths column are
unreliable. That could explain the occasional 20
millisecond difference.

No, it is not due to the potential "loss of accuracy", which you
explained is what you mean by "unreliable". Today's date/time in
numeric format to 15 significant digits is 39890.5031526620. The
potential "error" (numerical aberration) in the last digit represents
+/- 8.64 microseconds (10^-10 * 86400 seconds).

The variability in the minimum non-zero difference between calls to
the Excel NOW() function (10 and 20 msec) is because: (a) the Excel
NOW() function truncates time to the lowest multiple of 10
milliseconds (hundredths of a second); and (b) system time is
incremented every 15.625 milliseconds, not a multiple of 10
milliseconds. Consider these two examples, recorded by a VB macro.

(1) Actual time: start= 0:12:21.062  end= 0:12:21.078  diff= 0.016 sec
    NOW:         start= 0:12:21.060  end= 0:12:21.070  diff= 0.010 sec

(2) Actual time: start= 0:12:31.484  end= 0:12:31.500  diff= 0.016 sec
    NOW:         start= 0:12:31.480  end= 0:12:31.500  diff= 0.020 sec

("Actual time" is returned by a kernel function.)
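The two examples follow mechanically from facts (a) and (b). A model of the behavior (a Python sketch; `now_model` is a hypothetical stand-in for Excel's NOW(), not a real API):

```python
import math

TICK = 0.015625   # system time advances in steps of 15.625 ms

def now_model(t):
    """Hypothetical model of Excel 2003's NOW(): truncate kernel
    time to the lowest multiple of 10 ms (hundredths of a second)."""
    return math.floor(t * 100) / 100

# Example (1): the 15.625-ms step crosses one hundredth boundary
t1 = 21.0625                       # a multiple of TICK
t2 = t1 + TICK                     # 21.078125
print(f"{now_model(t2) - now_model(t1):.3f}")   # 0.010

# Example (2): the step crosses two hundredth boundaries
t1 = 31.484375                     # also a multiple of TICK
t2 = t1 + TICK                     # 31.500000
print(f"{now_model(t2) - now_model(t1):.3f}")   # 0.020
```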

The repetition of the digits "740" four times before
the comma in the examples above [....] is suspect,
although I guess no more so than the endless threes
in the decimal representation of 1/3.

Exactly right!

I'd be less suspicious if the 740's kept on repeating.

But they do, at least in decimal. As you say, "do the math". Compute
1/86400 on paper. You should get 0.0000115(740)..., which is my
notation for the infinitely repeating pattern 740.

The only reason that the pattern stops repeating in the binary
representation is because of the finite number of bits in the binary
representation. 1/86400 is represented internally exactly by
0.0000115740740740740,73499346533566711769935864140279591083526611328125.
If we add 2^-69, the smallest possible one-bit increment [1], the
internal representation is exactly
0.0000115740740740740,7519341242807531244807250914163887500762939453125.
As you might note, the first is below ",740", and the second is above
",740" [2]. In other words, for this value, ",740" cannot be
represented within the limits of the IEEE 64-bit floating point
format.
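This can be checked numerically: adding one ulp (2^-69 at this magnitude) to 1/86400 jumps the exact decimal expansion from below ",740" to above it (a Python sketch; the digits depend only on IEEE 754 doubles, which Excel also uses):

```python
from decimal import Decimal

x = 1/86400
y = x + 2**-69   # the next representable double at this magnitude

# exact decimal expansions: x falls below ",740", y lands above it
print(format(Decimal(x), 'f')[:24])   # 0.0000115740740740740734
print(format(Decimal(y), 'f')[:24])   # 0.0000115740740740740751

# formatted to 15 significant digits (as Excel formats), the two
# values are indistinguishable
print(f"{x:.15g}" == f"{y:.15g}")     # True
```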

HTH.

Neat macro!

Thanks. I'm having a lot of fun with this :).


Endnotes:

[1] Actually 2^-70 is the smallest possible one-bit increment. But
adding either 2^-69 or 2^-70 has the same effect. This is because
binary floating-point arithmetic is actually performed in 80-bit FP
registers, then stored into a 64-bit result. The conversion might
result in rounding of the least-significant bit, which is represented
by 2^-69 in this case.

[2] You cannot see the difference using Excel formatting because Excel
formats only the first 15 significant digits.



 
joeu2004

PS....

My reasoning was based on 1,000,000 slices of time each 24 hours per
Chip's site. That means the 864,000 tenths of a second in one day can
theoretically be handled by the available decimals. Anything beyond that
(hundredths of a second, thousandths of a second) is unreliable. [....]
I'm not getting into the decimal system vs. the binary system and the
loss of accuracy that entails at some point in working with very small
numbers. [....]
Ten milliseconds is 0.01 seconds. And I noted above
that the numbers beyond the tenths column are
unreliable. That could explain the occasional 20
millisecond difference.

No, it is not due to the potential "loss of accuracy", which you
explained is what you mean by "unreliable".

I meant the "loss of accuracy" either due to the alleged quantization
of time into 1 million time slices (the only explanation of
"unreliable" that you offered "above", and one which you yourself
disproved "below"), or due to the limits of the binary representation
of seconds/86400, which you allude to, perhaps unwittingly, in your
"aha!" "below".

The variability in the minimum non-zero difference between successive
calls to NOW() is indeed due to __a__ "loss of accuracy", namely
truncation "error". But in that sense, there is "loss of accuracy"
whether you look at hundredths or tenths of a second.

Consider this example. Actual Time1 is xx:xx:xx.149875, as it is
maintained by the kernel. Time2 is xx:xx:xx.165500, 15.625 msec later,
the minimum system time difference.

If we look at NOW() to hundredths of a second (finest resolution of
the result from NOW), Time1 will be xx:xx:xx.14 and Time2 will be
xx:xx:xx.16, for a difference of 0.02 -- 20 msec, which is "less
accurate" than 15.625 msec

But if we look at NOW() to tenths of a second, Time1 will __appear__
to be xx:xx:xx.1, Time2 will __appear__ to be xx:xx:xx.2, and the
difference will __appear__ to be 0.0(!).

How is that more "reliable"?

First, an apparent difference of zero contradicts our expectation
based on the __displayed__ values. But we all remember ;-) that the
displayed value can be misleading because it might not reflect exactly
the underlying value.

Second, whether we consider the difference of 0 or 0.1 seconds (100
msec), that is significantly "less accurate" than the 20 msec
difference that we see when we look "beyond the tenths column", not to
mention the actual 15.625 msec difference.

So I don't see any way that looking at time to hundredths of a second
can be considered more "unreliable" than looking at time to tenths of
a second.
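For anyone who wants to check that arithmetic outside Excel, here is a quick Python sketch (Python floats are the same IEEE 754 doubles Excel uses; the truncation to hundredths models NOW()'s behavior described in this thread):

```python
import math

# Model the example: two kernel timestamps 15.625 msec apart.
t1 = 0.149875   # seconds within the current minute
t2 = 0.165500   # 15.625 msec later

# NOW() truncates to hundredths of a second (10 msec)
n1 = math.floor(t1 * 100) / 100   # 0.14
n2 = math.floor(t2 * 100) / 100   # 0.16
print(round(n2 - n1, 2))          # 0.02 -- the 20 msec apparent difference

# Displayed to tenths, the values round to .1 and .2 ...
print(round(n1, 1), round(n2, 1))  # 0.1 0.2
# ... yet the difference itself displays as 0.0
print(round(n2 - n1, 1))           # 0.0
```

So looking at tenths hides a real 15.625 msec difference entirely, while hundredths at least bounds it between 10 and 20 msec.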


----- original posting -----

And the reason KB214094 uses a five-digit representation for an
elapsed time of 12 hours 45 minutes is because 0.53125 is the
exact decimal value for that time period. 12.75/24 = 0.53125.

I was referring to the general statement at the beginning of the KB
article, to wit:  "Times are stored as decimal numbers between .0 and
.99999, where .0 is 00:00:00 and .99999 is 23:59:59".

I was trying to contrast that with statements on Chip's web page,
namely:  "Excel stores dates and times as a number representing the
number of days since 1900-Jan-0, plus a fractional portion of a 24
hour day:  ddddd.tttttt.  [....] The integer portion of the number,
tttttt, represents the fractional portion of a 24 hour day".

(I presume Chip meant to say the __fractional__ portion, tttttt.)

Your conclusion that time is accurate to 0.1 seconds was based on the
fact that Chip showed only 6 decimal places for time.  But if Chip
were wrong and MS were right (an interesting proposition ;->), you
would have to conclude that time is accurate to only 1 second because
the MS KB shows only 5 decimal places for time.

That "proof by contradition" was intended to be an eye-opener for
you.  Obviously, the strategy failed.

No matter, because ....
I just had an "aha" moment. If I store just the time in a cell,
e.g. 12:01:01 PM, Excel displays the time but stores the
information as 0.500011574074074 -- fifteen digits [...].
Apparently Excel does keep track of time to more than six decimal
digits.

Ding!  And with that, the student becomes the master ;-).  I'm sorry
that my explanations did not help you.  But there is nothing like self-
discovery to advance your own education.  I was hoping that you would
try to fill in the blanks with your own experiments anyway.  (BTW,
apparently you entered 12:00:01, not 12:01:01.)
I'm not getting into the decimal system vs. the binary
system and the loss of accuracy that entails at some
point in working with very small numbers.

Small numbers per se are not the problem.  For example, 2^-1022 is a
__very__ small number, yet it can be represented with no loss of
accuracy.  The issue is whether or not a decimal value can be
represented exactly as a sum of a finite number (53) of consecutive
powers of 2.  That is as much a problem with 12345678901234.1 as it is
with 0.0000000000123456789012341.

It's a little more involved than that.  For example, small changes to
large numbers can sometimes be problematic.  (Not in this case,
though.)  But I'm not getting into the whole decimal v. binary system
thing ;-).
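Those claims are easy to verify outside Excel. A short Python sketch (Python floats are the same IEEE 754 64-bit doubles), using Fraction to compare a stored double against the exact rational it is supposed to represent:

```python
from decimal import Decimal
from fractions import Fraction

# 2^-1022 is a __very__ small number, yet exactly representable
# (it is a pure power of 2, so one bit suffices)
assert Fraction(2.0 ** -1022) == Fraction(1, 2 ** 1022)

# A trailing decimal .1 spoils exactness regardless of magnitude
assert Fraction(12345678901234.1) != Fraction(123456789012341, 10)
assert Fraction(0.1) != Fraction(1, 10)

# Decimal(float) prints the exact value actually stored for 0.1
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```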
Perhaps I should have said unreliable digits rather than
unreliable numbers.
[....]
Some fractions are exactly equal in decimal and binary.
1.1 binary exactly equals 1.5 decimal. 1.11 exactly equals
1.75 decimal.  Most such conversions are approximate,
although close enough for most practical purposes.

And my point was....  The approximation of 0.01 seconds is just as
"close enough for most practical purposes" as is the approximation for
0.1 seconds in Excel's time format.  The representation of 0.01
seconds is no less "reliable" than the representation of 0.1 seconds,
by your definition.  They are both accurate to +/- 8.64 microseconds,
less than 1/1000 of 0.01 seconds.  (See below.)

(Here, I am referring to the "reliability" of the representation and
presentation of the time format, not the reliability of the Excel NOW
() function per se.)
Ten milliseconds is 0.01 seconds. And I noted above
that the numbers beyond the tenths column are
unreliable. That could explain the occasional 20
millisecond difference.

No, it is not due to the potential "loss of accuracy", which you
explained is what you mean by "unreliable".  Today's date/time in
numeric format to 15 significant digits is 39890.5031526620.  The
potential "error" (numerical abberation) in the last digit represents
+/- 8.64 microseconds (10^-10 * 86400 seconds).

The variability in the minimum non-zero difference between calls to
the Excel NOW() function (10 and 20 msec) is because:  (a) the Excel
NOW() function truncates time to the lowest multiple of 10
milliseconds (hundredths of a second); and (b) system time is
incremented every 15.625 milliseconds, not a multiple of 10
milliseconds.  Consider these two examples, recorded by a VB macro.

(1) Actual time: start= 0:12:21.062  end= 0:12:21.078  diff= 0.016 sec
    NOW:         start= 0:12:21.060  end= 0:12:21.070  diff= 0.010 sec

(2) Actual time: start= 0:12:31.484  end= 0:12:31.500  diff= 0.016 sec
    NOW:         start= 0:12:31.480  end= 0:12:31.500  diff= 0.020 sec

("Actual time" is returned by a kernel function.)
The repetition of the digits "740" four times before
the comma in the examples above [....] is suspect,
although I guess no more so than the endless threes
in the decimal representation of 1/3.

Exactly right!
I'd be less suspicious if the 740's kept on repeating.

But they do, at least in decimal.  As you say, "do the math".  Compute
1/86400 on paper.  You should get 0.0000115(740)..., which is my
notation for the infinitely repeating pattern 740.

The only reason that the pattern stops repeating in the binary
representation is because of the finite number of bits in the binary
representation.  1/86400 is represented internally exactly by
0.0000115740740740740,73499346533566711769935864140279591083526611328125.
If we add 2^-69, the smallest possible one-bit increment [1], the
internal representation is exactly
0.0000115740740740740,7519341242807531244807250914163887500762939453125.
As you might note, the first is below ",740", and the second is above
",740" [2].  In other words, for this value, ",740" cannot be
represented within the limits of the IEEE 64-bit floating point
format.
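Python's decimal module offers an independent check on those VB-derived digits, since Decimal(float) prints a double's exact stored value:

```python
from decimal import Decimal
from fractions import Fraction

# Exact decimal expansion of the double nearest 1/86400
print(Decimal(1 / 86400))
# begins 0.000011574074074074073499... per the value quoted above
# (sans the comma marking 15 significant digits)

# The true rational repeats "740" forever; the stored double cannot
assert Fraction(1 / 86400) != Fraction(1, 86400)
```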

HTH.
Neat macro!

Thanks.  I'm having a lot of fun with this :).

Endnotes:

[1] Actually 2^-70 is the smallest possible one-bit increment.  But
adding either 2^-69 or 2^-70 has the same effect.  This is because
binary floating-point arithmetic is actually performed in 80-bit FP
registers, then stored into a 64-bit result.  The conversion might
result in rounding of the least-significant bit, which is represented
by 2^-69 in this case.

[2] You cannot see the difference using Excel formatting because Excel
formats only the first 15 significant digits.

----- original posting -----

My math was based on Chip's information that date/time information is
stored as an integer and decimal combination with six digits set aside
for the time component. That means you can slice 24 hours into 1,000,000
equal pieces. There are 24x60x60x10 = 864,000 tenths of a second in a
day. Therefore Excel can potentially track only to the tenths of a
second. The hundredths of a second and beyond will result in unreliable
numbers.
Your reasoning would be sound, if the information were correct.  But
Chip's information is incorrect, at least in a literal sense.
First, I hasten to note that according to
http://support.microsoft.com/kb/214094, the format of the time
component is just 5 decimal fractions.
And the reason KB214094 uses a five-digit representation for an elapsed
time of 12 hours 45 minutes is because 0.53125 is the exact decimal
value for that time period. 12.75/24 = 0.53125.
My reasoning was based on 1,000,000 slices of time each 24 hours per
Chip's site. That means the 864,000 tenths of a second in one day can
theoretically be handled by the available decimals. Anything beyond that
(hundredths of a second, thousandths of a second) is unreliable. If Chip
is wrong then obviously my conclusion is wrong, but see below.
See my comment above. Perhaps I should have said unreliable digits
rather than unreliable numbers.
All decimal fractions and non-integer expressions are subject to the
oddities of IEEE floating point representation.  But I would say that
the internal representation of 0.1 seconds is no less "reliable" than
the representations of 0.01, 0.001 and even 1 second expressed as
fractions of a day (i.e. divided by 86400), at least within 15
significant digits.  In fact, I believe the internal representation of
0.01 seconds is closer to its infinite representation than that of 0.1
seconds, and both are closer than 1 second is(!) [1].
I'm not getting into the decimal system vs. the binary system and the
loss of accuracy that entails at some point in working with very small
numbers.  Some fractions are exactly equal in decimal and binary. 1.1
binary exactly equals 1.5 decimal. 1.11 exactly equals 1.75 decimal.
Most such conversions are approximate, although close enough for most
practical purposes.
Ten milliseconds is 0.01 seconds. And I noted above that the numbers
beyond the tenths column are unreliable. That could explain the
occasional 20 millisecond difference.
CAUTION: This macro overwrites A1:K3 of the current worksheet, and it
changes the column width of A:K.
Private Sub measXLNow()
Dim n As Long
Dim st As Double, et As Double
Dim c As Integer
Dim oldCalc
Application.ScreenUpdating = False
oldCalc = Application.Calculation
Application.Calculation = xlCalculationManual
Range("A1") = "start Now()"
Range("A2") = "end Now()"
Range("A3") = "#loops"
For c = 1 To 10
   ' Capture a starting NOW(), then recalculate until NOW() changes
   Range("B1").Cells(1, c).Formula = "=Now()"
   st = Range("B1").Cells(1, c)
   n = 0
   Do
      n = n + 1
      Range("B2").Cells(1, c).Formula = "=Now()"
      et = Range("B2").Cells(1, c)
   Loop Until et <> st
   ' Freeze the captured values and display them to milliseconds
   Range("B1").Cells(1, c) = st
   Range("B1").Cells(1, c).NumberFormat = "h:mm:ss.000"
   Range("B2").Cells(1, c) = et
   Range("B2").Cells(1, c).NumberFormat = "h:mm:ss.000"
   Range("B3").Cells(1, c) = n
   Range("B3").Cells(1, c).NumberFormat = "0"
Next c
Range("A1", Range("B3").Cells(1, c)).Columns.AutoFit
Application.Calculation = oldCalc
Application.ScreenUpdating = True
End Sub
Endnotes:
[1] 0.1/86400 =
0.00000115740740740740,7392286300719386193947002539061941206455230712890625
    0.01/86400 =
0.000000115740740740740,7418756080321083079542887617208180017769336700439453125
    0.001/86400 =
0.0000000115740740740740,738566885581896197254803126952538150362670421600341796875
    1/86400 =
0.0000115740740740740,73499346533566711769935864140279591083526611328125
    (The comma demarcates 15 significant digits to the left.)
[2] All of my examples were derived on MS Win XP using Excel 2003 and
VB 6.3.
Neat macro!
The repetition of the digits "740" four times before the comma in the
examples above (and almost five times, as the next sequence after the
comma is 739, 741, 738, and 734) is suspect, although I guess no more so
than the endless threes in the decimal representation of 1/3. I'd be
less suspicious if the 740's kept on repeating.
I just had an "aha" moment. If I store just the time in a cell, e.g.
12:01:01 PM, Excel displays the time but stores the information as
0.500011574074074 -- fifteen digits, and there are those suspicious 740
digits repeating again in there. If I add today's date and store the
cell formatted as date and time, Excel stores the information as
39889.5000115741 -- five integer digits and ten decimal digits.
Apparently Excel does keep track of time to more than six decimal digits.
I'm still curious to know why the OP wants to be so precise in Excel.
And it's not nearly close enough to dinner time -- corned beef tonight!
 
J

joeu2004

Clarification....

And my point was.... The approximation of 0.01 seconds is just as
"close enough for most practical purposes" as is the approximation for
0.1 seconds in Excel's time format. The representation of 0.01
seconds is no less "reliable" than the representation of 0.1 seconds,
by your definition. They are both accurate to +/- 8.64 microseconds,
less than 1/1000 of 0.01 seconds. (See below.)

I wish I had not thrown that last sentence ("accurate to +/- 8.64
microseconds") in at the last minute. It obscures my point, and it is
likely to be misinterpreted and misapplied by literal readers.

For that statement to be correct, I would have to add "in the worst
case" or "when today's serial number or later is included", a point I
made later in the posting. But if we just enter time in the form
h:mm:ss.fff, it is __much__ more accurate than +/- 8.64 microseconds
-- from about 1E-10 to 1E-17 seconds, depending on the magnitude of
the h:mm:ss component (up to 23:59:59). And if we enter time as
seconds/86400, the accuracy can be even better for "seconds" less than
0.001.
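To put a number on "much more accurate", here is a rough Python sketch (error_seconds is a hypothetical helper for illustration, not anything in Excel) that measures how far the stored double for a pure time-of-day serial value is from the exact rational, in seconds:

```python
from fractions import Fraction

def error_seconds(h, m, s_num, s_den=1):
    """Error, in seconds, of storing h:mm:ss (seconds = s_num/s_den)
    as a double-precision fraction of a day. Hypothetical helper."""
    exact = Fraction(h * 3600 + m * 60, 86400) + Fraction(s_num, s_den * 86400)
    stored = (h * 3600 + m * 60 + s_num / s_den) / 86400  # what the double holds
    return float(abs(Fraction(stored) - exact) * 86400)

# Pure times (no date serial) are far more accurate than 8.64 microseconds:
for args in ((12, 0, 1), (13, 45, 24568, 1000), (23, 59, 59999, 1000)):
    print(args, error_seconds(*args))   # on the order of 1E-11 sec or better
```

Add today's date (a serial number near 39890) and the half-ulp of the combined value grows by several orders of magnitude, which is why the worst-case figure only applies when the date is included.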


----- original posting -----

PS....

My reasoning was based on 1,000,000 slices of time each 24 hours per
Chip's site. That means the 864,000 tenths of a second in one day can
theoretically be handled by the available decimals. Anything beyond that
(hundredths of a second, thousandths of a second) is unreliable. [....]
I'm not getting into the decimal system vs. the binary system and the
loss of accuracy that entails at some point in working with very small
numbers. [....]
Ten milliseconds is 0.01 seconds. And I noted above
that the numbers beyond the tenths column are
unreliable. That could explain the occasional 20
millisecond difference.
No, it is not due to the potential "loss of accuracy", which you
explained is what you mean by "unreliable".

I meant:  the "loss of accuracy" either due to the alleged
quantization of time in 1 million time slices, which is the only
explanation of "unreliable" that you offered "above", but which you
yourself disproved "below", or due to the limits of binary
representation of seconds/86400, which you allude to, perhaps
unwittingly, in your "aha!" "below".

The variability in the minimum non-zero difference between successive
calls to NOW() is indeed due to __a__ "loss of accuracy", namely
truncation "error".  But in that sense, there is "loss of accuracy"
whether you look at hundredths or tenths of a second.

Consider this example.  Actual Time1 is xx:xx:xx.149875, as it is
maintained by the kernel.  Time2 is xx:xx:xx.165500, 15.625 msec later,
the minimum system time difference.

If we look at NOW() to hundredths of a second (finest resolution of
the result from NOW), Time1 will be xx:xx:xx.14 and Time2 will be
xx:xx:xx.16, for a difference of 0.02 -- 20 msec, which is "less
accurate" than 15.625 msec

But if we look at NOW() to tenths of a second, Time1 will __appear__
to be xx:xx:xx.1, Time2 will __appear__ to be xx:xx:xx.2, and the
difference will __appear__ to be 0.0(!).

How is that more "reliable"?

First, an apparent difference of zero contradicts our expectation
based on the __displayed__ values.  But we all remember ;-) that the
displayed value can be misleading because it might not reflect exactly
the underlying value.

Second, whether we consider the difference of 0 or 0.1 seconds (100
msec), that is significantly "less accurate" than the 20 msec
difference that we see when we look "beyond the tenths column", not to
mention the actual 15.625 msec difference.

So I don't see any way that looking at time to hundredths of a second
can be considered more "unreliable" than looking at time to tenths of
a second.

----- original posting -----

I was referring to the general statement at the beginning of the KB
article, to wit:  "Times are stored as decimal numbers between .0 and
.99999, where .0 is 00:00:00 and .99999 is 23:59:59".
 
