Why computers make terrible clocks
2024-08-11 00:43 by Ian
Like any physical system, a computer's operation is subject to its environmental conditions. Clocks, as physical systems go, have only gotten "better" as the state of the art has advanced. But at some point, we started going backwards.
I cite the excellent post "Falsehoods programmers believe about time". Reading this will give you an idea about some of the consequences of using a computer as a clock.
What is a clock, really?
Humanity has been using the sun to track time for as long as we've been a species. And we aren't the only species that does so. Our first acts of artifice for the sake of timekeeping were basically calibrated sticks that used a shadow to point to marks representing whatever system of time their builders used.
But I would argue that sundials aren't "clocks", because they cannot be stopped or calibrated, and their "cogwork" is bound to the pace of a day on Earth, literally (they can't work at night). At best, they are astronomical tools.
An hourglass, on the other hand, does meet my criteria for a clock. It can be made arbitrarily accurate (down to a certain precision), and it measures the passage of time by dead-reckoning of a more self-contained physical system (irrespective of the arrangement of the solar system). And for this reason alone, it forces our concept of time to be much more nuanced.
Now we can drift.
What do we do when we find that two hourglasses differ?
They almost certainly will differ.
A large enough hourglass will run faster or slower in certain parts of the world, due to the fact that Earth's surface gravity isn't the same everywhere.
Without intending to be one, an hourglass is also a micrometer-scale ball mill. An hourglass's run time changes with use, since the sand is (over time) beating itself into a finer, less irregular texture. Literal silica sand is not the optimal material for an hourglass for this reason.
If a ratio isn't a good enough answer, we can't even measure how much two hourglasses differ from one another without a third hourglass to mark the time delta. And that third hourglass will certainly differ from both of the hourglasses under measurement.
What all of this boils down to is that we need a "reference standard" for time itself that depends on as few factors as possible. After millennia of argument and improvement, we have settled (for now) on this:
The second is defined by taking the fixed numerical value of the caesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium 133 atom, to be 9,192,631,770 when expressed in the unit Hz, which is equal to s−1.
Essentially, our notion of time is now based on an electronic property of caesium and a specific frequency of (microwave) radiation. Objectively, this is the best standard for "1 second" that we've ever had. But a practical clock built on this principle is an expensive masterwork of artifice. And even if we had such a definitionally-pure clock, we'd still have to dead-reckon to track a period of time.
And this, then, brings us to the core argument of my post:
Unless you are this clock, tracking the time and date is an I/O concern.
32.768 kHz
This is a magic number in computer science and digital design. An oscillator at exactly this frequency (2^15 Hz) will overflow a 15-bit counter once per second. That overflow event is used to discipline a network of counters that track every second of every day, as long as there is enough power to keep the oscillator running and the counter states stored. That usually amounts to a few microamps (or less), because the current draw scales with the oscillator frequency.
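In code, the mechanism boils down to something like the following sketch: a simplified model of the divider chain, not any particular RTC's implementation.

    # Sketch of the divider chain inside a typical RTC (simplified model).
    # A 32,768 Hz oscillator feeds a 15-bit prescaler; every 2**15 ticks the
    # prescaler wraps and the calendar counters advance by one second.

    OSC_HZ = 32_768          # 2**15 ticks per second
    PRESCALER_BITS = 15

    def simulate(ticks: int) -> int:
        """Count oscillator ticks; return whole seconds elapsed."""
        prescaler = 0
        seconds = 0
        for _ in range(ticks):
            prescaler = (prescaler + 1) & ((1 << PRESCALER_BITS) - 1)
            if prescaler == 0:   # overflow: one full second has passed
                seconds += 1
        return seconds

    print(simulate(OSC_HZ * 3))  # -> 3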
Typically, the designers of clocks in computers and small digital devices intend them to run on a coin cell for years between replacements, in a context where setting the time from a network happens regularly while the device is running. Low-drift oscillators at 32.768 kHz are also cheap to produce. For all of these reasons, 32.768 kHz is the standard frequency for low-power clocks in computers.
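As a rough illustration of that power budget (the cell capacity and current draw below are assumed round numbers, not figures from any datasheet):

    # Back-of-the-envelope battery life for an RTC on a coin cell.
    # Capacity and current draw are illustrative assumptions, not
    # measurements from any particular part.

    CELL_CAPACITY_MAH = 225      # typical CR2032-class coin cell
    RTC_CURRENT_UA = 1.5         # assumed average draw of a low-power RTC

    hours = (CELL_CAPACITY_MAH * 1000) / RTC_CURRENT_UA   # uAh / uA
    years = hours / (24 * 365.25)
    print(f"~{years:.0f} years of timekeeping")           # roughly 17 years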
Fine assumptions. But each tick of a 32.768 kHz oscillator lasts about 30.5 microseconds, and cheap tuning-fork crystals are only accurate to a few tens of parts per million. So without periodic resyncing against some master clock, a computer's hardware clock can drift by more than 18 seconds every week and still be considered within acceptable limits.
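The arithmetic behind that figure, assuming a ±30 ppm crystal tolerance (a common spec for inexpensive 32.768 kHz crystals):

    # Worst-case drift for a crystal with a +/-30 ppm frequency tolerance.

    TOLERANCE_PPM = 30
    SECONDS_PER_WEEK = 7 * 24 * 3600     # 604,800

    drift = SECONDS_PER_WEEK * TOLERANCE_PPM / 1_000_000
    print(f"{drift:.1f} seconds per week")   # ~18.1 s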
If you are willing to throw cost and power concerns to the wind, the same methodology can be taken to an extreme by using a time base akin to this rubidium oscillator driving a 24-bit counter that rolls over at 10 million counts. Such a clock would cost (today) about $2,100, but it would allow you to dead-reckon time with a drift of less than one second in 633 years. Such clocks are used in radar systems, GPS satellites, cellular base stations, and other applications where high-resolution time with low drift and low jitter is the primary value.
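For a sense of scale, the 633-year figure corresponds to a fractional frequency error of roughly 5 × 10^-11:

    # The "one second in 633 years" figure as a fractional frequency error.

    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    fractional_error = 1 / (633 * SECONDS_PER_YEAR)
    print(f"{fractional_error:.1e}")      # ~5.0e-11

    # Equivalently, a 10 MHz time base that stays within about 0.5 mHz of nominal.
    print(f"{10e6 * fractional_error * 1000:.2f} mHz")   # ~0.50 mHz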
Coping
Because the clocks are so bad, there is a fair amount of daily traffic on the internet (NTP, for the most part) whose sole purpose is keeping everyone's clocks synced to one of several global time standards. This is good enough for basically any computer connected to an IP-based network, and that timing is good enough for IP-based applications and the variety of things stacked on top of them. But it does carry the rather steep technical requirement of IP connectivity: many things need to be working for your computer to find and talk to a computer running the rubidium time standard. There are also service-assurance and security concerns associated with this strategy, and the time you can get this way is of limited precision.
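For a concrete picture of what "asking the network for the time" looks like, here is a minimal one-shot SNTP query. This is a sketch only: pool.ntp.org is the public server pool, and a real deployment would use a proper NTP client (ntpd, chrony, systemd-timesyncd) that continuously disciplines the local clock rather than making a single request.

    # Minimal one-shot SNTP (RFC 4330) query -- a sketch of what "asking the
    # network for the time" looks like, not a replacement for a real NTP client.
    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"          # public NTP server pool
    NTP_TO_UNIX_EPOCH = 2_208_988_800    # seconds between the 1900 and 1970 epochs

    def sntp_time(server: str = NTP_SERVER) -> float:
        packet = b"\x1b" + 47 * b"\0"    # LI=0, version=3, mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(5)
            sock.sendto(packet, (server, 123))
            response, _ = sock.recvfrom(48)
        # Transmit Timestamp: the seconds field lives at bytes 40..43.
        ntp_seconds = struct.unpack("!I", response[40:44])[0]
        return ntp_seconds - NTP_TO_UNIX_EPOCH

    if __name__ == "__main__":
        server_time = sntp_time()
        print("server:", time.ctime(server_time))
        print("local :", time.ctime())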
For technical reasons I won't digress into, timing is crucial in TDMA radio applications (cellular telephony among them). Time on the spectrum must be tightly regimented, and that timing is controlled by the base station, which carries a top-shelf clock for the purpose. Base stations usually serve the current time and date at high accuracy as a side effect, and so computers with cellular radios typically have a time and date slaved to the network's clock (however a given carrier arranges it).
In a similar way, other computers have radios that listen to GPS satellites and take the time and date from the high-accuracy clocks in orbit. In that case, the high accuracy is needed for time-of-flight determination rather than for organizing many transmitters on a confined spectrum. This strategy works well and has far fewer external dependencies.
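A quick sense of why the orbital clocks have to be so good: receiver position comes from signal time-of-flight, so clock error converts directly into range error at the speed of light.

    # Clock error -> range error for time-of-flight positioning.

    C = 299_792_458              # speed of light, m/s

    for clock_error_s in (1e-6, 1e-7, 1e-9):
        print(f"{clock_error_s:.0e} s of clock error -> "
              f"{C * clock_error_s:,.1f} m of range error")
    # 1e-06 s -> ~300 m; 1e-09 s -> ~0.3 m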