Some of this is common knowledge among electronics enthusiasts, but other readers may find it interesting: besides the custom top-performance clocks in physics labs and the miniature chip-scale ones for embedded applications, it's worth pointing out that "regular" cesium atomic clocks are readily available as standard commercial products. Anyone with enough cash can purchase one today and mount it in a rack in their server room tomorrow.
The workhorse of the industry is the Hewlett-Packard^W Agilent^W Symmetricom^W Microsemi^W... oh, I meant the Microchip 5071A Cesium Clock Primary Frequency Standard. [1] TAI and UTC are literally powered by these clocks: more than half of the atomic clocks in national standards labs are Microchip 5071As.
Also, if much lower performance is acceptable, a rubidium frequency standard is extremely affordable: you can get second-hand modules (usually retired from cellular base stations) for around $200. All they do is output an extremely accurate 10 MHz reference frequency, but that opens up many interesting possibilities: use a frequency divider to derive a 1 PPS signal and drive a mechanical clock, feed it to your oscilloscope, spectrum analyzer, or frequency counter as an external timebase, or time a digital clock with a microcontroller.
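The divider idea is just a counter that rolls over every 10 million reference cycles. In hardware that's a counter chain or an MCU timer; here is a toy Python model of the counting logic (the divisor is the only real number here, everything else is illustrative):

```python
# Toy sketch: deriving a 1 PPS tick from a 10 MHz reference by counting cycles.
# In real hardware this would be a counter chain or a microcontroller timer;
# this just models the divide-by-N logic.
DIVISOR = 10_000_000  # 10 MHz / 10,000,000 = 1 Hz

def pps_ticks(n_cycles):
    """Yield the reference-cycle indices at which a 1 PPS pulse would fire."""
    count = 0
    for cycle in range(n_cycles):
        count += 1
        if count == DIVISOR:
            count = 0
            yield cycle

# Simulate 2.5 seconds of the 10 MHz reference: two pulses, one per second.
ticks = list(pps_ticks(25_000_000))
print(ticks)
```

On a microcontroller you'd do the same thing with a hardware timer's overflow interrupt rather than a software loop, since counting 10 MHz edges in software isn't feasible.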
And of course, if you can put up an outdoor radio antenna, you can outsource the job of generating an atomically accurate frequency to a government shortwave radio station or to GPS; your tax dollars do the rest.
The rise of GPS clocks has really killed the commercial market for cesium beam clocks-- the 5071 is a design over twenty-five years old. It's a fine piece of engineering (I have three, two of which I obtained broken and repaired), but its age is starting to show, including the fact that replacement tubes are phenomenally expensive (the tubes have a limited life).
The same is true for rubidium standards, though there are some somewhat more modern models-- but since they aren't primary standards, most places that use them still use GPS to keep them on frequency. A primary standard like the 5071 can internally compensate for every major systematic effect, so it can autonomously realize the second without external calibration. Telecom rubidiums can't self-calibrate for their gas-cell pressure.
All this has led to a worrisome dependence on GPS simply as a precise source of frequency.
Hopefully in the long run we'll see single-chip optical clocks with GPS-clock-beating performance at competitive prices-- if they got even close to GPS they would rapidly displace it, since the need to put up an antenna for GPS is a real nuisance, and GPS jammers are a sadly all-too-common source of trouble.
I know the NIST Time and Frequency division is based there, but what would you do in person once you got there? Do they have a service of some kind to help synchronize things in person for visitors?
There were even more portable units built specifically for this time transfer purpose.
I don't think NIST has a regular service for this anymore, but since they do offer a host of calibration and verification services, I'd be surprised if you couldn't road trip a clock to obtain the second... at least for a price.
I've seen some publications on transportable optical clocks in recent years, which I assume are motivated by validating newer higher precision TWSTT and fiber time transfer techniques.
Nice description. I didn't know atomic clocks just sampled Cesium atoms for a second at a time every once in a while -- I always thought they were somehow continuously amplifying that oscillation. So in reality, they are just recalibrating a more traditional circuit at regular intervals, which serves as the clock for some digital circuit?
Most atomic clocks work that way: the atoms serve only to periodically calibrate an oscillator, and that oscillator is the one that actually matters for measuring time over short intervals, e.g. under an hour.
Many atomic clocks use just quartz oscillators (high-quality Oven Controlled Crystal Oscillators, OCXOs).
The more expensive atomic clocks use dielectric resonators (e.g. sapphire), superconducting cavities (e.g. niobium or lead), or optical resonators (e.g. single-crystal silicon or germanium cavities for infrared light) cooled to very low temperatures. These cryogenic resonators achieve much higher quality factors than quartz, and oscillators built around them can even beat active hydrogen masers in short-term stability.
The atomic clocks that produce continuous oscillations on their own, and so don't need a separate oscillator, are active masers or lasers. While there are many experimental types, the only commercial type is the active hydrogen maser.
The cheaper hydrogen masers are passive ones, which are likewise used only to periodically calibrate another type of oscillator.
Usually something with good short-term stability, e.g. a hydrogen maser, generates a steady signal that can then be measured against a frequency reference such as a caesium fountain, and the output can be adjusted to correct for frequency drift.
For anyone with even a passing interest in frequency standards, I recommend watching this recent seminar by Bill Phillips on recent changes to the SI. Bill provides some interesting history on atomic clocks, context on why Cs was chosen, and perspective on the remaining problems with the definition of the second.
It seems like a challenge to tune anything to the exact resonant frequency of the caesium, since it's an attempt to find a maximum output. My first thought was to take multiple measurements at different frequencies and curve fit the response to get a maximum. But then you'd have to somehow compensate for differences among sources and maybe detectors.
The HP-5061 Cesium Beam clock (I've repaired a few) frequency modulates the sample frequency just a little bit, using a 137 Hz sine wave.
It then compares the phase of the output with the frequency shift.
If the output goes up in phase with the rise in frequency, the frequency is too low.
If the output goes down in phase with the rise in frequency, the frequency is too high.
This signal is used to very slowly pull the 5 MHz local oscillator into lock with the bandpass of the Cesium beam.
At that point, the output dips on both excursions from zero, a 274 Hz "second harmonic" is detected, and this turns on the "Lock" light, letting the user know the standard is now on frequency.
[Edit/More Info]
Here's a link to the Operating and Repair Manual.[1] (PDF)
There is a simplified block diagram on page 35 of the PDF. (Figure 4-1)
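The lock scheme above is just synchronous (lock-in) detection of the slope of the beam response. Here's a rough Python sketch of the idea-- the Gaussian response shape, linewidth, and FM deviation are made-up illustrative numbers, not the 5061's real ones:

```python
import numpy as np

# Toy model of a 5061-style FM lock: dither the probe frequency at f_mod,
# demodulate the detector output at f_mod, and use the sign of the result
# as the steering error. (Response curve and numbers are assumptions.)
f_cs  = 9_192_631_770.0   # caesium hyperfine transition, Hz
width = 500.0             # assumed linewidth of the beam response, Hz
f_mod = 137.0             # modulation frequency, Hz (per the 5061 manual)
dev   = 100.0             # assumed FM deviation, Hz

def beam_response(f):
    """Detector output vs. probe frequency, idealized as a Gaussian peak."""
    return np.exp(-((f - f_cs) / width) ** 2)

def error_signal(f_center, cycles=20, steps=4096):
    """Synchronous detection: average (output * modulation) over whole cycles."""
    t = np.linspace(0, cycles / f_mod, steps, endpoint=False)
    mod = np.sin(2 * np.pi * f_mod * t)
    out = beam_response(f_center + dev * mod)
    return np.mean(out * mod)

print(error_signal(f_cs - 200))  # positive: output rises with frequency, pull up
print(error_signal(f_cs + 200))  # negative: output falls with frequency, pull down
print(error_signal(f_cs))        # ~0 at lock; output is now at 2*f_mod (274 Hz)
```

At lock the fundamental component vanishes and the detector output wobbles at twice the modulation frequency, which is exactly the "second harmonic" that lights the Lock lamp.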
Cesium fountains were a great advance, but sometime during the next decade they will become completely obsolete.
Optical atomic clocks already have much better performance, but they are not yet mature enough to replace cesium fountain clocks, because they still cannot operate continuously for long periods and, for now, they are hard to transport.
As soon as optical atomic clocks become more rugged and reliable, cesium fountain clocks will be obsolete.
Only the miniature cesium atomic clocks, like those made by Microchip, will remain useful for longer, because it will take many more years for optical clocks to become as small and cheap (i.e. a few thousand dollars, like the miniature Cs clocks).
You know it's the future when collecting and tossing a ball of supercooled gas up a tube and catching it again using lasers and measuring time with it to a part per ten quadrillion is the old and busted method.
While fountain clocks use atoms falling slowly in a chamber, optical clocks use ions (a single ion or a few) or neutral atoms (usually a relatively large number, arranged regularly in a so-called optical lattice) trapped in fixed positions in vacuum by various combinations of lasers and electromagnetic fields.
So the control over the ions or atoms is even more advanced than in fountains: they stay fixed (except for small thermal vibrations), as in a solid, even though they are widely spaced in vacuum.
The reason a real solid is not used is that in a solid the atoms or ions are too close to one another, and their interactions shift the atomic resonance frequencies.
In this kind of artificial solid, where the atoms or ions sit at fixed positions in vacuum but much more widely separated than in a natural solid, the interactions between them can be minimized. And because the atoms or ions are cooled to very low temperatures, their motion is reduced to relatively slow vibrations, so there are no large Doppler frequency shifts as there would be if they were free to move in a gas.
> The reason a real solid is not used is that in a solid the atoms or ions are too close to one another, and their interactions shift the atomic resonance frequencies.
Why don't we use the frequency of a solid-state transition, then? I'm sure there are ways, like the Hartree-Fock method, to compute energy levels of atoms in a solid; maybe the accuracy is not good enough?
The quality factor of resonant frequencies in solids is too low to be competitive, i.e. the spectral lines of solids are not narrow lines but wide bands.
When two atoms come very close, each spectral line splits into two lines, one slightly lower and one slightly higher in frequency. With N atoms close together, each spectral line splits into N lines with slightly different frequencies, which can no longer be resolved by detectors, so they look like a continuous band.
So solids cannot be used to stabilize the frequency of an oscillator to a very precise value.
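The splitting picture can be illustrated with a toy tight-binding model: N identical atoms with a small nearest-neighbour coupling, where the single-atom transition frequency becomes N slightly different eigenfrequencies (the frequency and coupling values here are arbitrary, just for illustration):

```python
import numpy as np

# Toy model: N identical atoms, each with transition frequency f0, with a
# small nearest-neighbour coupling J. Diagonalizing the coupling matrix
# shows the single line f0 splitting into N distinct eigenfrequencies.
f0 = 1.0   # isolated-atom transition frequency (arbitrary units)
J  = 0.01  # coupling strength between neighbours (assumed, arbitrary)

for N in (1, 2, 5):
    H = f0 * np.eye(N) + J * (np.eye(N, k=1) + np.eye(N, k=-1))
    freqs = np.linalg.eigvalsh(H)
    print(N, np.round(freqs, 4))
```

For N = 2 you get the pair f0 - J and f0 + J described above; as N grows, the eigenfrequencies fill in a band of width ~4J around f0, which is why a macroscopic solid shows a broad band instead of a narrow line.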
The highest possible quality factor, i.e. the narrowest possible spectral line is achieved when using a single isolated ion or atom.
There are atomic clocks which use a single ion trapped in a fixed position in vacuum.
Unfortunately, while this gives the narrowest spectral line, it is also the worst from the point of view of sensitivity to noise: the signal obtainable from a single ion or atom is extremely weak, so it is drowned in ambient noise.
Atomic clocks with single ions need a very long integration time to average out the noise, e.g. half a day or even more, up to many days.
Therefore such clocks must be paired with oscillators that have exceptional long-term frequency stability (a day or more), because their frequency can be compared against the single-ion resonance at most a few times per day, so it must not change unpredictably between comparisons.
The way to overcome the noise problem is to use a large number of neutral atoms or ions, e.g. a few thousand, which gives a much larger signal amplitude and allows the integration time to drop to an hour or less.
Those many neutral atoms or ions must be kept far apart, so that the electrons keep moving around each of them as around an isolated atom or ion, uninfluenced by neighbors-- preserving the same differences between energy levels, and hence the same resonance frequencies, as in isolated atoms/ions.
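A crude way to see the many-atoms advantage: if each atom contributes the same signal but the detection noise is shot-noise-like (growing as sqrt(N)), the signal-to-noise ratio grows as sqrt(N). A toy numpy simulation of that assumption:

```python
import numpy as np

# Toy illustration: why interrogating N atoms beats a single atom.
# Assumption: signals from the atoms add coherently (~N) while the
# detection noise is shot-noise-like (~sqrt(N)), so SNR ~ sqrt(N).
rng = np.random.default_rng(0)

def snr(n_atoms, shots=20_000):
    signal = n_atoms * 1.0                            # coherent sum of signals
    noise = rng.normal(0, np.sqrt(n_atoms), shots)    # shot-noise-like samples
    return signal / np.std(noise)

for n in (1, 100, 10_000):
    print(n, round(snr(n), 1))   # SNR grows roughly as sqrt(n)
```

So going from one ion to a few thousand atoms buys you well over an order of magnitude in SNR, which is what lets the integration time shrink from days to an hour or less.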
The CSAC has found a lot of use in applications where a high-performance OCXO would be fine-- but large OCXOs draw a lot of power, and, particularly in low-power mode, the CSAC doesn't. I wouldn't be too shocked to see improved MEMS oscillators displacing CSACs in applications ahead of future miniature optical clocks.
If miniature Cs clocks weren't at least 10 times more expensive than OCXOs (several thousand dollars vs. several hundred), they could have replaced them in almost all their applications.
Much more accurate optical lattice clocks definitely are affected by gravity-- scientists have determined height by measuring the frequency difference between clocks at different elevations, linked by optical fibre [1], and that article also mentions atomic clocks transported by plane having recorded slightly different amounts of elapsed time than ones that stayed on the ground (as is expected).
I assume it is affected to some extent by the gravitational force of the moon and the sun, but I expect that would be harder to measure.
Which is what physicists call zero g. For the purposes of this application, that practical zero g (versus the red herring you constructed) is the phenomenon of value, as it permits suspending the cesium atoms in the fountain.
https://www.microsemi.com/product-directory/clocks-frequency...
10^-11 short-term stability; about 10^-9/month drift
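For a feel of what those numbers mean as accumulated time error, here's the back-of-envelope arithmetic (taking "month" as 30 days; illustrative only):

```python
# What 10^-11 stability and 10^-9/month drift mean as accumulated time error.
SEC_PER_DAY = 86_400
month = 30 * SEC_PER_DAY           # "month" taken as 30 days (assumption)

stability = 1e-11                  # short-term fractional frequency stability
drift = 1e-9 / month               # fractional frequency drift rate, per second

# A constant 1e-11 fractional frequency offset accumulates linearly:
err_day = stability * SEC_PER_DAY
print(f"offset error after 1 day:  {err_day * 1e6:.2f} us")

# A linear frequency drift accumulates quadratically: dt = 0.5 * a * T^2
err_month = 0.5 * drift * month ** 2
print(f"drift error after 1 month: {err_month * 1e3:.2f} ms")
```

That works out to under a microsecond per day from the offset and on the order of a millisecond per month from the drift-- which is why these modules get disciplined to GPS rather than left free-running.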