The real-time clock (RTC) built into most machines is far from reliable. Unless its battery
dies or it encounters a Y2K problem, it does a fairly good job of remembering the time
while the computer's power is turned off -- as long as you don't leave the computer off for more
than several hours, and don't care if the clock is wrong by a minute or two...or three...or more.
The resolution of most PC real-time clocks is one full second, and most RTCs drift considerably over time. It is not unusual for an
RTC to gain or lose several seconds or even minutes a day, and some of them -- while still considered
to be operating correctly by the manufacturer -- can be off by an hour or more after a week or two without correction.
The RTC is used at boot time to obtain an estimate of the current time. From then on,
Windows keeps track of the time internally using somewhat more accurate methods. When you
set the time on a Windows machine, the time is set both on the RTC (to the nearest second)
and internally in Windows. If no time service such as Domain Time II is running, Windows compares its
internal idea of the time with the RTC approximately once per hour. If the disparity is greater
than a pre-defined limit, Windows changes the time to match the RTC and resets its internal timer.
While this technique usually helps keep the RTC and Windows time more accurate than either alone,
it can also lead to sudden large clock corrections, either forward or backward, and can cause the
Windows time to inherit any errors present in the RTC.
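A minimal sketch in C using the Win32 API may help illustrate the setting mechanism. It reads the current system time and writes the same value back with SetSystemTime(); as described above, Windows applies the new value both to its internal clock and to the RTC (to the nearest second). This is only an illustration of the operating system call, not of Domain Time's behavior, and the call requires the system-time privilege to succeed.

    /* Sketch: setting the Windows time (updates both the internal clock and the RTC). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEMTIME st;
        GetSystemTime(&st);              /* read the current internal time (UTC) */

        if (!SetSystemTime(&st))         /* requires the SE_SYSTEMTIME_NAME privilege */
            printf("SetSystemTime failed: %lu\n", GetLastError());
        else
            printf("System time (and RTC) updated.\n");
        return 0;
    }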
The internal Windows time's accuracy varies greatly among operating systems. In all cases, however, Windows tracks the time
by incrementing an internal counter at regular intervals. Under Windows 3.1 and Win9x, the counter is
incremented approximately 18 times a second, while under Windows 2000 and later it may be incremented
several hundred thousand times per second. Accuracy is limited by the frequency of the
increments and the regularity of their application -- both of which are determined, ultimately, by
supporting chips on the motherboard and how reliably Windows responds to hardware interrupts.
The absolute resolution on a Windows machine is one ten-thousandth of a millisecond (0.0000001 seconds),
or one hectonanosecond. See Terms and Definitions for an explanation of these
terms.
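The 100-nanosecond unit is visible directly through the Win32 API. The sketch below, in C, reads the time as a FILETIME (a count of 100-nanosecond intervals since January 1, 1601 UTC) and asks the system how large each clock increment is, also in 100-nanosecond units; it is an illustration of the unit, not a measurement tool.

    /* Sketch: the hectonanosecond (100 ns) unit and the clock increment size. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        FILETIME ft;
        ULARGE_INTEGER t;
        DWORD adjustment, increment;
        BOOL disabled;

        GetSystemTimeAsFileTime(&ft);        /* time in 100-ns units since 1601 (UTC) */
        t.LowPart  = ft.dwLowDateTime;
        t.HighPart = ft.dwHighDateTime;
        printf("100-ns intervals since 1601: %llu\n", t.QuadPart);

        if (GetSystemTimeAdjustment(&adjustment, &increment, &disabled))
            printf("Clock increment: %lu x 100 ns (%.4f ms)\n",
                   increment, increment / 10000.0);
        return 0;
    }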
Different versions of Windows, with varying hardware, will report different resolutions,
but internally all times are based on the hectonanosecond. Each machine
uses a multiplier derived from its hardware capabilities to convert reported timings
to and from hectonanoseconds. A practical problem for time synchronization
software is that, although Windows can report time in hectonanoseconds, it only
allows setting the time to the nearest millisecond (0.001 seconds) on NT-based systems, and to the
next-lower 18th of a second on Win95 or Win98. This means that even if it were
possible to obtain UTC to the nearest hectonanosecond, the extra precision
is discarded by Windows when the time is set. For the purposes of time synchronization,
therefore, the absolute resolution of a machine running an NT-based operating system is one millisecond. After the
time is set, Windows then begins accumulating elapsed time according to its internal
multiplier -- which may be as small as a microsecond, or as large as
an eighteenth of a second. (To see the capabilities of a particular machine, you can
use DTCheck.exe /test from the command line. The DTCheck utility will
report the machine's hardware capabilities and internal Windows multiplier. If the
machine is NT-based, DTCheck will also test the accuracy and reliability of the
clock setting/reading mechanisms and report a best-guess for the machine's maximum
accuracy. Win95 and Win98 machines always round
down to the next-lower 18th of a second when setting the clock, so accuracy on these
machines is limited to approximately 55 milliseconds, and testing is skipped.)
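The read/set asymmetry on NT-based systems can be seen in the data structures involved. The sketch below, in C, reads the time as a FILETIME (100-nanosecond units), converts it to a SYSTEMTIME (whose finest field is wMilliseconds), and shows the sub-millisecond remainder that would be discarded if the converted value were handed back to SetSystemTime(); it is an illustration of the precision loss, not a recommended procedure.

    /* Sketch: time can be read in 100-ns units but set only to the millisecond. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        FILETIME ft;
        SYSTEMTIME st;
        ULARGE_INTEGER t;

        GetSystemTimeAsFileTime(&ft);     /* 100-ns resolution */
        FileTimeToSystemTime(&ft, &st);   /* truncated to millisecond precision */

        t.LowPart  = ft.dwLowDateTime;
        t.HighPart = ft.dwHighDateTime;
        printf("Sub-millisecond remainder lost on conversion: %llu x 100 ns\n",
               t.QuadPart % 10000);

        /* SetSystemTime(&st) would therefore set the clock to millisecond
           precision at best (and requires the system-time privilege). */
        return 0;
    }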
Because hardware events can take varying amounts of time to service (during which time
other hardware events, such as the clock update, are skipped or delayed), the regularity
of the clock increment also varies. Windows employs sophisticated algorithms to
compensate for this variation, with the result that over a period of a day, the total
internal clock drift forward or back should be less than a few seconds. At any one
time, however, the clock may be ahead or behind by dozens of milliseconds. This small
drift is not detectable from the machine doing the drifting. Successive time queries
will appear to show that the internal clock is accumulating elapsed time in a linear
and reliable manner. In particular, although the clock is drifting, it will never
appear to be going backward, since some amount of time elapses between checks, and that
amount is always positive (or zero).
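This local invisibility of drift is easy to demonstrate. The sketch below, in C, reads the system clock repeatedly and checks the difference between successive reads; from the machine's own point of view the elapsed time is always zero or positive, so local queries alone cannot reveal that the clock is drifting.

    /* Sketch: successive local reads never appear to go backward. */
    #include <windows.h>
    #include <stdio.h>

    static ULONGLONG now_100ns(void)
    {
        FILETIME ft;
        ULARGE_INTEGER t;
        GetSystemTimeAsFileTime(&ft);
        t.LowPart  = ft.dwLowDateTime;
        t.HighPart = ft.dwHighDateTime;
        return t.QuadPart;
    }

    int main(void)
    {
        ULONGLONG prev = now_100ns();
        for (int i = 0; i < 1000000; i++) {
            ULONGLONG cur = now_100ns();
            if (cur < prev)                       /* not expected to happen */
                printf("Clock appeared to run backward!\n");
            prev = cur;
        }
        printf("All successive reads were non-decreasing.\n");
        return 0;
    }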
When comparing times between two machines, however, the drift becomes immediately
apparent. Since machine A is drifting at one rate, and machine B at another, rapid successive
comparisons will show a delta between the two clocks based on both the amount of drift
and the granularity of each machine's internal elapsed time accumulation. Assuming that
the network latency and calculation time are excluded accurately, it is still
likely that successive rapid comparisons will show a delta of plus or minus 10 milliseconds.
The first comparison may show machine A ahead of machine B by 5 milliseconds, the second
that they match exactly, and the third that machine B has crept ahead by 5 milliseconds.
The absolute magnitude of the delta will vary based on the operating environment, but
is not generally considered significant because the clocks will drift back toward
congruency as predictably as they drifted away from it.
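One common way to exclude network latency from such a comparison is the standard four-timestamp calculation used by NTP-style protocols. The sketch below, in C, shows that calculation with hypothetical sample timestamps; it illustrates the general technique only and is not a description of Domain Time's exact algorithm.

    /* Sketch: four-timestamp offset and round-trip calculation.
       t1: client send, t2: server receive, t3: server send, t4: client receive
       (each in milliseconds on its own machine's clock). */
    #include <stdio.h>

    static double clock_offset_ms(double t1, double t2, double t3, double t4)
    {
        return ((t2 - t1) + (t3 - t4)) / 2.0;   /* offset of server vs. client */
    }

    static double round_trip_ms(double t1, double t2, double t3, double t4)
    {
        return (t4 - t1) - (t3 - t2);           /* network round-trip delay */
    }

    int main(void)
    {
        /* Hypothetical sample values: even with latency excluded, repeated
           comparisons may report offsets that wander by a few milliseconds. */
        printf("offset: %+.1f ms, delay: %.1f ms\n",
               clock_offset_ms(1000.0, 1012.0, 1012.5, 1020.0),
               round_trip_ms(1000.0, 1012.0, 1012.5, 1020.0));
        return 0;
    }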
Calculation of network latency and computation time is also limited by the resolution
of the machine on which the calculation is performed, as well as by how often, and for
how long, the calculating task is interrupted by other tasks. Domain
Time sets the process priority to real-time while performing calculations, which reduces
(but does not eliminate) interruptions by other tasks. Even so, measurement of latency
and computation time relies on the accuracy of the very thing being measured, and the same calculations
performed on different machines will provide different results.
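Raising process priority around a measurement is itself a standard Win32 technique. The sketch below, in C, shows the general approach of elevating to real-time priority for the duration of a timing calculation and then restoring the original priority; it is only an illustration of the mechanism, not Domain Time's actual code, and real-time priority should be used sparingly because it can starve other tasks.

    /* Sketch: raise process priority while a timing measurement is in progress. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE self = GetCurrentProcess();
        DWORD  old  = GetPriorityClass(self);

        if (!SetPriorityClass(self, REALTIME_PRIORITY_CLASS))
            SetPriorityClass(self, HIGH_PRIORITY_CLASS);   /* fallback if denied */

        /* ... perform latency / offset measurements here ... */

        SetPriorityClass(self, old);   /* restore the original priority */
        printf("Measurement complete.\n");
        return 0;
    }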
The practical implication of this discussion is that, even under idealized conditions
which never obtain in the real world, it is possible to set two machines to the "same" time,
then immediately query them and perceive a variance.