Windows Scheduler Latency after WinAPI "Sleep"-Command or Timers

John Lafrowda

Dear all,

I have detected a very strange behaviour of the Windows scheduler concerning
its latency. This can be illustrated by the code segment at the end of this
mail.

The code should simply read the system time in milliseconds (using WinAPI's
GetTickCount), then sleep for a specified number of milliseconds, and repeat
this loop for a given number of cycles. At the end, it presents the mean
time needed per loop - which should be more or less the time specified in
the Sleep() call, as the rest of the commands should not take much time.

On various PCs with different configurations (Intel and Athlon, different
boards, but all on WinXP) we find that the code is running as expected. When
setting SleepTime to 1 ms, 10 ms, and 100 ms, we receive messages that tell
us that the cycle time was 1 ms, 10 ms, and 100 ms, respectively. On other
PCs (also with XP), however, we get the following cycle times:

SleepTime = 1 ms --> CycleTime = 15 ms
SleepTime = 10 ms --> CycleTime = 15 ms
SleepTime = 100 ms --> CycleTime = 109 ms

The strange thing is that on all PCs that showed this strange behaviour, the
resulting times were exactly the same. Moreover, we found that some PCs
showed the wrong times but were fine after they were rebooted.

For your own tests, you can compile the code below as a plain WinAPI or
console programme. In our case, Visual C++ 6.0 and Visual .net (2003)
produced the same results. BTW: results were the same when using timers
instead of Sleep().

Could anyone give me some information on what's going wrong here?

Regards,

John

------------------------------

#include <windows.h>
#include <stdio.h>

const int SleepTime = 100;   /* sleep interval in ms: try 1, 10, and 100 */

int main(){
    DWORD NewTime;
    DWORD OldTime;
    DWORD CycleTime = 0;
    const int Cycles = 100;
    int i;
    char TextBuffer[256];

    OldTime = GetTickCount();
    Sleep(SleepTime);
    for(i = 0; i < Cycles; i++){
        NewTime = GetTickCount();
        CycleTime += (NewTime - OldTime);
        OldTime = NewTime;
        Sleep(SleepTime);
    }
    sprintf(TextBuffer, "CycleTime: %lu\n", CycleTime / Cycles);
    MessageBox(NULL, TextBuffer, "TickTest", MB_OK);
    return 0;
}
 
Guest

Sleep() and delay times have a granularity of the current timer resolution.
So if you specify a time < resolution, it is rounded up.

Some apps can increase the clock resolution. This can explain your results.
By default the resolution is one system tick, which typically is 10 or 15 ms
depending on hardware.

Below is your modified program with added GetSystemTimeAdjustment
and NtQueryTimerResolution calls - explained in
http://www.sysinternals.com/Information/HighResolutionTimers.html

Run this with and without Media Player playing in the background.

------- cut here ----
// timerres.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"

#include <windows.h>
#include <stdio.h>

typedef DWORD (__stdcall *tNtQueryTimerResolution)(PULONG, PULONG, PULONG);

void printres()
{
    HMODULE hl = LoadLibrary("ntdll.dll");
    tNtQueryTimerResolution NtQueryTimerResolution;

    if (!hl) return;
    NtQueryTimerResolution = (tNtQueryTimerResolution)GetProcAddress(hl,
        "NtQueryTimerResolution");
    if (!NtQueryTimerResolution) { FreeLibrary(hl); return; }

    ULONG minr, maxr, actr;
    NtQueryTimerResolution(&minr, &maxr, &actr);
    printf(" NtQueryTimerResolution (100 ns) min=%lu max=%lu current=%lu\n",
        minr, maxr, actr);
    printf(" NtQueryTimerResolution (ms) min=%lu max=%lu current=%lu\n",
        minr / 10000, maxr / 10000, actr / 10000);
    FreeLibrary(hl);
}

void printinterval()
{
    DWORD adjustment, clockInterval;
    BOOL adjustmentDisabled;

    GetSystemTimeAdjustment(&adjustment, &clockInterval,
        &adjustmentDisabled);
    printf("The system clock interval is %.06f ms\n",
        (float)clockInterval / 10000);
    printf("adjustment disabled?: %d Increment=%lu\n", adjustmentDisabled,
        adjustment);
}

int main(){
    DWORD NewTime;
    DWORD OldTime;
    DWORD CycleTime = 0;
    const int Cycles = 100;
    int i;
    const int SleepTime = 1;
    DWORD mint = 99999, maxt = 0;

    OldTime = GetTickCount();
    Sleep(SleepTime);
    for(i = 0; i < Cycles; i++){
        DWORD d;
        NewTime = GetTickCount();
        d = NewTime - OldTime;
        CycleTime += d;
        OldTime = NewTime;
        if (d > maxt) maxt = d;
        if (d < mint) mint = d;
        Sleep(SleepTime);
    }
    printf("CycleTime: %lu min=%lu max=%lu\n", CycleTime / Cycles, mint, maxt);
    printinterval();
    printres();
    MessageBox(NULL, "end", "TickTest", MB_OK);
    return 0;
}
------- cut here ----

Regards,
--PA
 
John Lafrowda

Dear Pavel,

thanks for your advice. In fact, when running 100 cycles, the loop was mostly
processed with "zero-delay cycles" and only a few at the minimum system
clock time - so only the mean value of the delay time was correct.

One more question: NtQueryTimerResolution tells me that the minimum timer
resolution is 156250 (--> 15 ms) and the maximum resolution is 10000 (-->
1 ms) on my system. It also states (as does NtSetTimerResolution) that the
currently used value is 9766 - which is "higher" than the maximum, but that's
not a problem for me. The only thing that worries me is the resolution I
receive from GetSystemTimeAdjustment. This call tells me that the system
clock is still running at 156250 - although the kernel functions say it's at
9766, which cannot be modified by NtSetTimerResolution since this value is
already "at the max". When running the test application, however, I can see
that 156250 is used. Could you give any explanation as to why I cannot run
at lower cycle times?

Regards,

John
 
news

: On other
: PCs (also with XP), however, we get the following cycle times:
:
: SleepTime = 1 ms --> CycleTime = 15 ms
: SleepTime = 10 ms --> CycleTime = 15 ms
: SleepTime = 100 ms --> CycleTime = 109 ms
:
: Could anyone give me some information on what's going wrong here?

Aren't you simply seeing the effect of the underlying hardware
tick rate? That varies between different PCs, is often 100 Hz
but can be significantly lower. When you request a Sleep the
time will be rounded up to the next integer number of ticks.
So if the hardware tick rate was 64 Hz a Sleep(1) and Sleep(10)
would both sleep for 1 tick period (15.6 ms) and a Sleep(100)
would sleep for 7 tick periods (109.4 ms).

You may find it instructive to monitor the values returned from
GetTickCount. You will typically find that, although increasing
monotonically at an average rate of 1000 counts per second, many
values are missing.

Richard.
http://www.rtrussell.co.uk/
To reply by email change 'news' to my forename.
 
John Lafrowda

Dear Richard,

your figures are correct, and I see that I am indeed running at a clock rate
of 15.625 ms.
Inspired by Pavel, however, I wonder why I cannot change this rate using the
NtSetTimerResolution call (see
http://www.sysinternals.com/Information/HighResolutionTimers.html), as it
should allow me to change the timer resolution between 15.625 and 1.000 ms.
Any comments are welcome.

Regards,

John
 
Guest

John Lafrowda said:
Dear Pavel,

thanks for your advice. In fact when running 100 cycles, the loop was always
processed with a lot of "zero-delay cycles" and only few at the minimum
system clock time - so only the mean value of the delay time was correct.

Well, actually it gets even worse, as you've seen: because there are two
different values - the increment and the tick - you can get a zero delta if
the tick < increment :)
But as you also noticed, the average over long runs tends to be correct.
So, for intervals <= 2 ticks you may want to use yet another pair
of functions, QueryPerformanceCounter & QueryPerformanceFrequency.
These give the highest accuracy one can get with the documented API.
One more question: NtQueryTimerResolution tells me that the minimum timer
resolution is 156250 (--> 15 ms) and the maximum resolution is 10000 (-->
1ms) on my system. It also states (equally to NtSetTimerResolution) that the
currently used value is 9766 - which is "higher" the maximum, but that's not
a problem to me. The only thing that worries me is the resolution I receive
by GetSystemTimeAdjustment. This command tells me that the system clock is
still running at 156250 - although kernel functions say it's at 9766 which
cannot be modified by NtSetTimerResolution since this value is already "at
the max". By processing the test application, however, I can see that 156250
is used. Could you give any explanations why I cannot run at lower cycle
times?

Btw, about documented API:
NtSetTimerResolution is undocumented and probably requires some magic
incantation...

Try timeBeginPeriod() and timeGetDevCaps(). These functions are documented
and should work.
(Internally, timeBeginPeriod is based on NtSetTimerResolution.)
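A minimal sketch of that documented route (Windows-only, link with
winmm.lib; a hedged illustration, not tested on John's machines): query the
supported range with timeGetDevCaps, then bracket the timing-sensitive code
with timeBeginPeriod/timeEndPeriod at the finest supported period.

```c
#include <windows.h>
#include <mmsystem.h>   /* TIMECAPS, timeGetDevCaps, timeBeginPeriod */
#include <stdio.h>

int main(void)
{
    TIMECAPS tc;

    /* Ask the multimedia timer which periods (in ms) it supports. */
    if (timeGetDevCaps(&tc, sizeof(tc)) != TIMERR_NOERROR)
        return 1;
    printf("supported period: %u..%u ms\n", tc.wPeriodMin, tc.wPeriodMax);

    /* Request the finest resolution; Sleep(1) should now wait roughly
     * 1 ms instead of a full 15.625 ms tick on most systems. */
    timeBeginPeriod(tc.wPeriodMin);
    Sleep(1);
    /* Every timeBeginPeriod must be paired with a timeEndPeriod. */
    timeEndPeriod(tc.wPeriodMin);
    return 0;
}
```

Note the request is system-wide and lasts until timeEndPeriod (or process
exit), which is also why a running media player can change the results of
the earlier test program.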

Regards,
--PA
 
