Hacker News

That is very interesting! How do they know that a difference in a satellite measurement is not due to slippage in its orbit?

The nice thing about satellite orbits is that they are extremely steady and predictable. Over long time scales, a satellite's orbit drifts due to many effects, such as non-uniformity of Earth's gravity. But over short timescales, its motion is very precisely determined by its orbital parameters.

In particular, there's a precise relationship between a satellite's orbital period and its orbital radius (technically, its semi-major axis). A one-centimeter variation in altitude would result in a timing error of several hundred microseconds per day, which is enough to be detected using precise clocks and Doppler effect measurements.

> A one-centimeter variation in altitude would result in a timing error of several hundred microseconds per day

Source or math for this? Because for any signal in the MHz range, I’m not sure I believe it necessarily.

Several hundred microseconds of a 150 MHz wave is tens of thousands of cycles. That seems... questionable.

I did a check on a decibel calculator with a 150 MHz signal, and a 1 meter change was approximately 0.01 dB... which is effectively undetectable in a real-world application. Signal strength isn't the same as propagation delay, I know. But yea...

I look forward to being corrected, but I can’t say that claim seems legitimate on its face.

EDIT: Nope. Did some probably-bad math on this on my own; the claim is nonsense. Especially because the extra distance is in space, where radio travels at the speed of light.

I don't understand what you think is nonsense about this claim. Can you elaborate?

The timing numbers I quoted are purely based on the orbital motion of a (hypothetical) satellite, and have nothing to do with radio signals. Kepler's third law states that a body's orbital period varies in proportion to the 1.5th power of its semi-major axis. A 1cm altitude difference for a satellite in LEO corresponds to a change of about 1.5 parts per billion, which translates to a 2.2 ppb change in orbital period. As I said, this amounts to a cumulative difference of a couple hundred microseconds per day.
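To make that arithmetic concrete, here's a back-of-the-envelope sketch. The ~400 km altitude is an assumption picked to match the ~1.5 ppb figure above; any LEO altitude gives a similar answer:

```python
import math

MU = 3.986004418e14  # Earth's standard gravitational parameter (m^3/s^2)

def orbital_period(a):
    """Kepler's third law: T = 2*pi*sqrt(a^3 / mu)."""
    return 2 * math.pi * math.sqrt(a ** 3 / MU)

a = 6.771e6           # semi-major axis of a ~400 km LEO (assumed), in meters
da = 0.01             # a 1 cm altitude difference

T = orbital_period(a)
dT = orbital_period(a + da) - T      # period change per orbit (seconds)

orbits_per_day = 86400.0 / T
drift_per_day = dT * orbits_per_day  # cumulative timing drift over one day
print(f"{drift_per_day * 1e6:.0f} microseconds/day")
```

The fractional period change comes out to 1.5 times the fractional altitude change, exactly as the third law predicts, and the daily drift lands in the couple-hundred-microsecond range.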

And it's actually much easier to precisely measure frequency differences than amplitude differences, if you have sufficiently accurate clocks. If you have a 150.000000 MHz reference signal and a 150.000001 MHz Doppler-shifted signal, you can simply multiply them together to get a 1 Hz beat frequency. Using this technique, you can measure phase differences that are considerably less than a single cycle of the original signal.
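Here's a toy version of that mixing trick, using the same hypothetical 1 Hz offset. Working with complex samples makes the sum-frequency term drop out, so a 1 kHz sample rate is enough to watch the beat:

```python
import numpy as np

f_ref = 150.000000e6   # local reference oscillator (Hz)
f_rx  = 150.000001e6   # received signal, Doppler-shifted by 1 Hz (hypothetical)

t = np.arange(0.0, 1.0, 1e-3)   # one second of samples at 1 kHz

# Mixing: multiply the received signal by the conjugate of the reference.
# The phases subtract, leaving only the beat (difference-frequency) tone.
beat = np.exp(2j * np.pi * f_rx * t) * np.conj(np.exp(2j * np.pi * f_ref * t))

# The beat frequency is the slope of the (unwrapped) phase.
phase = np.unwrap(np.angle(beat))
f_beat = (phase[-1] - phase[0]) / (2 * np.pi * (t[-1] - t[0]))
print(f"beat frequency: {f_beat:.6f} Hz")
```

The recovered beat is 1 Hz to within numerical noise, even though that's less than a hundredth of a part per million of the carrier, which is the whole point: the mixing moves a tiny relative frequency shift down to where it's trivially measurable.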

A major limiting factor, of course, is the stability and precision of your reference clocks. Apparently, the Jason-2 satellite that (until recently) was responsible for a lot of these measurements had a high-precision quartz oscillator that was stable to roughly one part per trillion: https://www.ncbi.nlm.nih.gov/pubmed/30004875

Measuring the absolute position and velocity of a satellite is comparatively a lot more difficult. But with sufficiently precise Doppler relative-velocity measurements from multiple points, you can solve for both the orbital parameters and the slowly-varying perturbations with a high degree of accuracy.
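A stripped-down sketch of that last step, with made-up geometry: if several stations at known positions each measure the range rate (the Doppler shift converted to line-of-sight velocity), the satellite's velocity vector falls out of a small linear system. All the positions and velocities here are invented for illustration, and real processing also has to handle station motion, clock error, atmospheric delay, and so on:

```python
import numpy as np

# Hypothetical geometry (meters): a satellite and three ground stations.
sat = np.array([0.0, 0.0, 7.0e6])
stations = np.array([
    [ 1.0e6, 0.0,   6.4e6],
    [ 0.0,   1.5e6, 6.3e6],
    [-2.0e6, 0.5e6, 6.0e6],
])

v_true = np.array([7500.0, 100.0, -20.0])  # velocity to be recovered (m/s)

# Each Doppler measurement gives a range rate: the projection of the
# satellite's velocity onto the line of sight from that station.
los = sat - stations
los /= np.linalg.norm(los, axis=1, keepdims=True)  # unit line-of-sight vectors
range_rates = los @ v_true                         # simulated measurements

# With three (or more) independent lines of sight, solve for the velocity.
v_est, *_ = np.linalg.lstsq(los, range_rates, rcond=None)
```

With more than three stations the same least-squares solve averages down the measurement noise, and repeating it along the orbit is what lets you fit the orbital parameters themselves.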

> extremely steady and predictable.

I don't agree with this claim, unless you quantify it. This has already been touched upon before, for example here:

> "It depends upon the orbit and what time scales you are talking about. Satellites are subjected to many perturbations in its orbit. There are effects due to atmospheric drag, which as you'd expect affect lower satellite orbits more than higher orbits, but the atmosphere swells up all the time depending upon the level of solar activity. Gravitationally, the Earth is not a point mass and it has regions where the gravity gradient changes, which causes the satellite to get pulled one way or another (very slightly) as it orbits around."


The link in the top-level comment addresses all of these concerns, among other considerations and carefully calibrated corrections. They clearly know what they are doing.

They have a high quality map of the variations of gravity across the surface of the Earth. They also have a model that accounts for atmospheric drag.


The problem is that every instrument involved in the chain of measurement has its own inaccuracies, and at the end of the day you need to make sure that, once you add all of those inaccuracies together, the total error doesn't compound to more than the effect you are actually measuring. This is a very complex subject and I'm not sure it's as "settled" as you seem to portray it.

Oh, sure, I don't mean to minimize the engineering challenges involved. I'm far from an expert in the details of how these particular satellites work; I'm just trying to describe the general principles, to make the point that this level of measurement accuracy shouldn't be viewed as intrinsically unattainable.
