Edit: The comment from jcr is correct--this was not in GPS, but in an earlier system. http://www.leapsecond.com/history/Ashby-Relativity.htm
> At the time of launch of the first NTS-2 satellite (June 1977), which contained the first Cesium clock to be placed in orbit, there were some who doubted that relativistic effects were real. A frequency synthesizer was built into the satellite clock system so that after launch, if in fact the rate of the clock in its final orbit was that predicted by GR, then the synthesizer could be turned on bringing the clock to the coordinate rate necessary for operation. The atomic clock was first operated for about 20 days to measure its clock rate before turning on the synthesizer. The frequency measured during that interval was +442.5 parts in 10^12 faster than clocks on the ground; if left uncorrected this would have resulted in timing errors of about 38,000 nanoseconds per day. The difference between predicted and measured values of the frequency shift was only 3.97 parts in 10^12, well within the accuracy capabilities of the orbiting clock. This then gave about a 1% validation of the combined motional and gravitational shifts for a clock at 4.2 earth radii.
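A quick check of the arithmetic in the quote, using only the numbers it gives:

```python
# Quick check of the numbers in the quote: the fractional frequency
# offset accumulated over one day, expressed in nanoseconds.
offset = 442.5e-12                # +442.5 parts in 10^12
seconds_per_day = 86400
drift_ns = offset * seconds_per_day * 1e9
print(drift_ns)                   # ~38,000 ns/day, as the quote says
```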
"The blast generated infrasonic waves that propagated all the way to the upper atmosphere causing small variations in the density of electrons there.
By analyzing the signals from GPS satellites collected at ground-based monitoring stations in South Korea and Japan, scientists [...] confirmed the ionospheric disturbance generated by the North Korean test."
2D maps of rainfall have also been derived by interpolating the signal loss between GSM antennas.
Disclaimer: GPS is one of those parts of science that gives me the nerd chills.
I worked a few years ago on precision guidance. I was supposed to assess the performance of automatically guided tractors. Sub-centimetre guidance on huge 4-ton tractors was pretty tricky to test. What exactly did you work on?
I am biased here, because I know the people who are developing this technology, so I'd welcome further information on this point! E.g., I took a look at the thesis abstract pointed to by @kubiii, but it wasn't specific enough to tell what their exact sensing method was.
There are some generally good-quality summaries of GPS radio occultation here: http://geooptics.com/?page_id=148
The main advantages of GPS occultation for H2O sensing are that it's relatively bias-free (compared to microwave radiometers), that it can see through clouds, and that it has better usable vertical resolution.
It mentions an earlier project also done by an individual:
So even a few individuals are able to wrap their heads around the problem (I guess commercial projects would also tend to have one or two people doing significant amounts of the work).
I hope he does some videos at some point.
If so, how is this accuracy achieved? (Obviously the satellites themselves cannot use GPS to determine their position...). Measurement from the ground plus manoeuvring?
Are there any events (micro debris strike, solar wind?) which cause drift in our estimate of their position?
The ephemeris is the precise orbital data used to calculate position. Each satellite broadcasts only its own ephemeris, which takes up to about 50s. This is where AGPS can help - if you can get that data from some other channel then you don't have to wait.
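The "up to about 50s" figure follows from the standard L1 C/A navigation message format (50 bit/s, 300-bit subframes, ephemeris in subframes 1-3 of each 30 s frame); a quick back-of-envelope:

```python
# Where the ~50 s worst case comes from, assuming the standard L1 C/A
# navigation message format: 50 bit/s, 300-bit subframes, with the
# ephemeris carried in subframes 1-3 of each 30 s frame.
bitrate = 50                      # bit/s
subframe_s = 300 / bitrate        # 6 s per subframe
frame_s = 5 * subframe_s          # 30 s per frame
read_s = 3 * subframe_s           # 18 s to read subframes 1-3
worst_case = frame_s + read_s     # just missed it: wait a frame, then read
print(worst_case)                 # 48 s, i.e. "up to about 50 s"
```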
Don't cell towers provide AGPS data to connected client devices (i.e. phones)? Those towers need the AGPS data for enhanced 911 cellular service.
This is often confused with a similar mechanism for triangulating a position using the relative powers of multiple cell towers. That provides a low-resolution position fix suitable for use as the initial seed of the Kalman filter (which narrows down the position iteratively).
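The "coarse seed, then iterative narrowing" idea can be sketched with a scalar Kalman update (toy numbers, not real GPS code):

```python
# A scalar sketch of "coarse seed, then iterative narrowing": a rough
# cell-tower fix (~1 km uncertainty) is refined by noisy ~5 m measurements.
# All numbers are made up for illustration.
def kalman_update(x, p, z, r):
    """One scalar Kalman update: prior mean x, prior variance p,
    measurement z with variance r; returns posterior mean and variance."""
    k = p / (p + r)                    # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 500.0, 1000.0 ** 2              # coarse seed, sigma ~1 km
for z in (12.0, 9.0, 11.0, 10.0):      # made-up measurements near 10
    x, p = kalman_update(x, p, z, 5.0 ** 2)
print(x, p ** 0.5)                     # estimate near 10, sigma a few metres
```

The first update essentially jumps to the first measurement (the prior is so uncertain that the gain is nearly 1); later updates tighten the variance step by step.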
So - there are ground-based observatories continuously determining the absolute position of the satellites?
This seems really hard. The satellites are ~20,000 km up. I think that means a 1 millimetre difference would be a 5 x 10^-11 radian difference (theta ~ tan theta), or about 0.00001 arc seconds - surely this is beyond any telescope technology?
And even if they could resolve at that level, we're trying for an absolute fix, so we're also trying to measure the alignment of a (moving?) telescope with 0.00001 arc second precision (hoping no mice cough nearby?)
That can't be right, so what am I missing?
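Checking the small-angle arithmetic with the same inputs:

```python
import math

# Checking the small-angle arithmetic above: what angle does 1 mm
# subtend at GPS altitude (~20,000 km)?
altitude_m = 20_000e3
offset_m = 1e-3
theta_rad = offset_m / altitude_m                 # theta ~ tan(theta)
theta_arcsec = math.degrees(theta_rad) * 3600
print(theta_rad, theta_arcsec)                    # 5e-11 rad, ~1e-5 arcsec
```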
Wikipedia says that there are lots (~13?) stations: http://en.wikipedia.org/wiki/Global_Positioning_System#Contr...
Are there potential issues with them being covered by weather etc?
[Ah - found the link to 'http://en.wikipedia.org/wiki/Kalman_filter' - this is probably the magic?]
1) You have a defined position of your antenna... make the ephemeris "work" such that your antenna gets the right signal from the satellite. In a philosophical sense, where is the satellite? Well... does it really matter? This ephemeris says your antenna is in the right place, so...
I'm not implying this is how it works or that it's a good idea, but it certainly is a good unit test if your "real" method, when run thru a test bench, implies you're on the moon instead of at the (note singular) base station...
2) Those numbers are no big deal with doppler / frequency ranging. If you transmit at 1500 MHz it's a little higher as it approaches and lower as it leaves. Ask a ham radio operator to demonstrate with their 144/440-ish MHz satellites; the doppler in low earth orbit is maybe 15 kHz or so. Anyway, sub-Hz accuracy measurement (no big deal) of a 1.5e9 Hz signal for a couple of seconds gives you the 10^-11 class of accuracy you're looking for. The absolute freq would be nice to know, but you can figure out the instant the satellite passed zenith (or any other elevation relative to your position) as long as the freq is "short term more or less constant". Of course giant and heavy earthbound clocks can give you that precise freq you're looking for, which is also cool.
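Those Doppler figures are easy to sanity-check with shift = f * v / c; the ~7.5 km/s line-of-sight velocity below is my assumption for a worst-case LEO pass, not a number from the comment:

```python
# Sanity check of the Doppler figures above via shift = f * v / c.
# The ~7.5 km/s line-of-sight velocity is an assumed worst case for LEO.
c = 3e8                                 # m/s, speed of light
v = 7.5e3                               # m/s, assumed LEO orbital speed
shifts = {f: f * v / c for f in (144e6, 440e6, 1.5e9)}
for f, s in shifts.items():
    print(f, s)                         # ~3.6 kHz, ~11 kHz, ~37.5 kHz
```

The 440 MHz number lands in the same ballpark as the quoted ~15 kHz (the full approach-to-recede swing is twice the one-way shift).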
The doppler of a satellite pass is pleasingly non-linear, and the satellites are high up enough to make for long passes; with proper data analysis you can fit maybe 10000 samples to find the theoretical best-RMS zenith instant across all of them, so oversampling and averaging gives you another couple of orders of magnitude.
Maybe another way to say it: if you have basically perfect-accuracy clocks, and you sample and literally count every incoming cycle of a 1.5e9 Hz RF signal for only 100 seconds, even if you ignore phase data (why would you? But for the sake of the argument...), that's 1.5e11 cycles in a given time; a bit of division and you have a freq accurate to one cycle, or one part in 10^11.
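The cycle-counting argument in numbers:

```python
# The cycle-counting argument above: one miscounted cycle out of
# everything counted in 100 s sets the fractional frequency resolution.
f = 1.5e9                   # Hz, carrier frequency
t = 100.0                   # s, counting interval
cycles = f * t              # 1.5e11 cycles counted
resolution = 1 / cycles     # fractional resolution: one part in 1.5e11
print(cycles, resolution)
```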
It's more complicated in reality because the GPS signal is not a simple RF carrier but a spread spectrum signal, so you need a reasonably low-noise and stable PLL to lock onto the SS signal, and then you actually measure the SS signal.
There is still a simple carrier; the modulation only affects the sidebands (which contain the broadcast data).
Also, don't forget that GPS is a dual-frequency system (civilian receivers don't tend to use the L2 band because they can't decode the data it broadcasts). Finally, the control segment is not limited to passively listening for signals from the satellite - it has the entire resources of the USAF available to it.
Maybe you're talking about the L3 signal? I find that part of GPS to be spooky. Or that experimental L5 stuff that I don't know anything about. Everything I do know about GPS is just BPSK and the "old" stuff like L1, L2, etc..
> Does the system rely on knowing the position of the satellites to equivalent accuracy? i.e. "millimetres" accuracy for differential GPS.

> If so, how is this accuracy achieved? (Obviously the satellites themselves cannot use GPS to determine their position...)
The most recent satellite position measurements are extrapolated forward to work out near future positions, which are then transmitted to the satellites.
> Are there any events (micro debris strike, solar wind?) which cause drift in our estimate of their position?
For more information: the satellite orbits are called 'ephemerides' (singular 'ephemeris') and you'll find lots of info on Google now that you know the right keyword to look for!
 http://www.colorado.edu/ASEN/asen6090/broadcast_vs_precise.p... http://igscb.jpl.nasa.gov/igscb/center/analysis/noaa.acn
> From the gravity of jupiter to the fact there are reflected photons on one side of the earth but not the other (the latter is admittedly only 30cm per day).
That we face these issues is quite beautiful in many ways. On the tech side, we're probably more used to seeing problems with scaling things down, so it's nice to find out that something as simple as finding your way to the shops must take into account the position of Jupiter. My complaints about cleaning messy data feel quite minor in comparison.
How automated are the control segment operations? After the zombie apocalypse, if the power grid is still up, will I still be able to use my GPS?
Even if the entire control segment is lost, the designers of GPS are way ahead of you - although I suspect rather than thinking of zombies they were thinking of nuclear war with the USSR.
You can predict orbits in advance - but the results become less accurate the further out you try to predict. According to  we can predict 7 days out and get a range error of 10m, and 28 days out with a range error of 100m.
Some of these longer-term predictions are uploaded to the satellites in advance; I've heard 14 days  and 60 days  quoted.
TLDR: Designed to be accurate enough that your submarine can pop up 14 days later and nuke Moscow. After a few months, all bets are off.
Yes, if you know where the satellites are to the same degree of precision. That's what I was wondering about.
I am always amazed how normally skeptical people suspend their skepticism and believe anything a physicist or a learned doctor writes. I would be grateful if you could supply real evidence that GPS would not work without GR.
"to counteract the General Relativistic effect once on orbit, they slowed down the ticking frequency of the atomic clocks before they were launched"
The mind boggles.
It makes me uneasy that we only have this single system though (the effects of a GPS failure would be quite severe I guess). Do the people behind GPS collaborate with the Galileo folks?
Thanks a lot for the submit, now I need to find a good book on the history behind the GPS!
Loran-C was a high-power, low-frequency signal as opposed to the very-low-power, high-frequency signal of GPS. This made Loran-C less susceptible to jamming. It was a nice backup.
If so, what happens when the two participants get closer? Do clocks start "getting in sync"?
If we had an atomic clock with a mechanism to self-destruct when not running at the original "right" time, does the atomic clock self-destruct? Can it "tell" it is running slower?
In more biological settings (say, a human being) what are the effects of "aging faster"? Is it possible that the body would "not work correctly" under certain gravitational forces because of the effects of time? (i.e. maybe the blood flows faster, just a terrible example, I hope you get the idea).
As acchow says, this question doesn't really make sense, but if you want a short answer, I agree with his: they are running differently.
> what happens when the two participants get closer? Do clocks start "getting in sync"?
I don't understand what you're asking here: can you be more specific about what "getting closer" means and what "getting in sync" means?
> Can it "tell" it is running slower?
If the clock can only use measurements internal to itself, then no, it can't tell. It needs to be able to compare itself to some external reference (such as another clock in a different state of motion) to "know" that it is running slower. And even then, all it "knows" is that it is running slower relative to whatever other clock it is comparing itself to.
> Is it possible that the body would "not work correctly" under certain gravitational forces because of the effects of time?
Not because of the effects of time alone, no; those just make the body age differently relative to a similar body that is in a different state of motion or at a different place in a gravity well. Gravity can cause a body not to "work correctly" in other ways, such as tidal gravity disrupting its structure, but that's not an "effect of time".
Short answer: they are running differently. Complicated answer: I don't think this question really makes sense in relativity.
> If so, what happens when the two participants get closer? Do clocks start "getting in sync"?
> If we had an atomic clock with a mechanism to self-destruct when not running at the original "right" time, does the atomic clock self-destruct? Can it "tell" it is running slower?
There is no "right" time in relativity, only frames of reference.
> In more biological settings (say, a human being) what are the effects of "aging faster"? Is it possible that the body would "not work correctly" under certain gravitational forces because of the effects of time? (i.e. maybe the blood flows faster, just a terrible example, I hope you get the idea).
Relativistic time dilation will have no effect on chemical and biological reactions. But I'm sure there are aspects to our biology that go beyond chemical reactions and actually rely on quantum effects - time dilation might have some effect here? I don't have the answer for this.
This is not correct. Relativistic time dilation affects all processes. If you flew a chemical reaction and a living organism on a GPS satellite, and then brought them back and compared them with a similar reaction and organism that stayed behind on Earth, the reaction on the satellite would have proceeded further and the organism on the satellite would have aged more. (In practice, the differences would be too small to detect with our current technology, but if we had accurate enough ways of measuring we could detect them.)
You can interpret things this way, I suppose, but that doesn't change the fact that it applies to all processes; from the viewpoint of someone who sees the processes as moving, all processes are time dilated--the "movement of time changing" applies to all of them. There's no special exception for certain quantum effects, or anything else. They're all the same: they all are unchanged to someone moving with them, and they all are time dilated to someone not moving with them.
In theory, assuming the EU/Chinese navigation systems have as many satellites as GPS/GLONASS, there should then always be 24 or more satellites in view of the user - which should be enough to provide millimetre-precision positioning in real time.
I have a friend who built an automatic lawnmower as his pet project at uni, and he used two GPS receivers to get <10cm accuracy, which allowed the lawnmower to make automatic turns around the garden.
Multi-system GPS receivers have been around for some time now, and they're quite cheap:
mm-level accuracy is more difficult, but even the accuracy specs of those cheap modules are usually rather conservative.
The effect is a lot smaller in Low Earth Orbit. ISS is a mere ~423km above surface. GPS satellites are ~20,200km above surface. Here is the effect for different distances: http://en.wikipedia.org/wiki/File:Daily_satellite_time_dilat...
Relativity affects people as well as clocks.
It's not really valid to say simply that one is aging faster than the other as if one is doing something ordinary and one is doing something extraordinary. In this case, it really is relative. The people flying around in space are wondering what the hell you and your clock's problem is, too.
The rub is, time is relative, so even though your personal clock always reads 1 second in your own frame of reference, the clocks of two individual people in two different frames of reference can disagree. So, it only really looks like the person in the space station "ages faster", but if the people in both frames lived 100 years, they would each experience 100 years in their personal reference; the space station guy might just die when the earth guy is, like, 99.9999 (or something, I didn't do the math here) years old.
This is due to gravitational time dilation, and it's not the only type. For instance, if I went off on a super space ship at a very high fraction of the speed of light, traveled around the galaxy and returned 10 years later, the situation would be reversed from the space station/earth gravitational scenario: while 10 years may have passed in ship time, many hundreds of years (once again, I didn't do the math; the effect just grows with the fraction of the speed of light you are going) may have passed on earth. In this case, it is the accelerated frame of reference in which time "slows down" relative to the non-accelerated frame.
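The ship/earth numbers above depend on the Lorentz factor gamma; the speeds below are illustrative assumptions, not figures from the comment:

```python
import math

# Lorentz factor for a few illustrative speeds (assumed, not from the
# comment): 10 ship-years correspond to 10 * gamma earth-years.
def gamma(beta):
    """Time dilation factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.9, 0.99, 0.999):
    print(beta, gamma(beta), 10 * gamma(beta))
```

Even at 0.999c, gamma is only ~22, so 10 ship-years is a couple of earth-centuries; "many hundreds of years" needs speeds even closer to c.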
If you're talking about an object in orbit compared to an object at rest on the Earth's surface, both gravitational time dilation and the other type you describe come into play, because the objects are in motion relative to each other as well as being at different altitudes in the Earth's gravitational field. Many other comments in this thread have addressed this.
> In this case, it is the accelerated frame of reference in which time "slows down" relative to the non-accelerated frame.
Not really. The key difference is not acceleration; it's the fact that the super space ship is in motion relative to the center of mass of the galaxy, while the Earth is not. (Strictly speaking, the Earth is too, but its motion with respect to the galaxy's center of mass is so slow that it can be ignored in this scenario.) Similarly, if I sit at rest on the Earth's equator and you move westward around the equator at the same speed as the Earth is rotating (about 450 meters per second), then when we meet up again, my clock will have less elapsed time than yours, because you will have been at rest with respect to the Earth's center of mass, not me (because I am rotating with the Earth, but you are not).
In the spaceship and the earth example, I can't see what your reference to the galactic center of mass has to do with anything. If you take the spaceship and earth out of the galaxy into empty space, the result would be very much the same. You could say there would be some slight differences because of the extremely minor gravitational effects of the galaxy and the acceleration due to galactic orbit, but given that relativistic speeds are required to see the big differences (we were talking about a spaceship capable of high fractions of c here), I can't see what you were getting at. It is the accelerated frame in which time is "slower" relative to the rest frame; it has nothing to do with gravity.
The galactic center of mass defines a reference frame that is special with respect to this problem, because that frame is the one that makes manifest the time translation symmetry of the spacetime. However, I do see that I left out an important piece of that: see below.
> If you take the space and earth out of the galaxy into empty space, the result would be very much the same
Yes, because empty space has a similar time translation symmetry, as long as the Earth is at rest in it. If the galaxy is included, the Earth has to be at rest relative to the galactic center of mass; I see now that I was implicitly assuming that it was, without saying so (I did hint at it when I commented about the Earth also rotating around the galactic center, but too slowly to make a difference). So you're right that the galaxy itself isn't really relevant; but the underlying time translation symmetry is.
The point is that what makes the Earth observer in these scenarios have the longest proper time is the fact that he is the one who is "at rest" with respect to the underlying time translation symmetry of the spacetime. Acceleration only comes into it because that is the particular mechanism you chose to make the space ship move relative to that time translation symmetry. In flat spacetime (which is essentially the idealization you're adopting here), accelerating is the only way to move relative to the underlying time translation symmetry. But this does not generalize: there are plenty of examples in curved spacetime where unaccelerated observers can be the ones with shorter elapsed proper time, because they are moving with respect to an underlying time translation symmetry.
The faster you move, the less "time" needs to be in order to be equal to c.
But you got it backwards, they age slower.
Edit: or maybe not, now I am confused.
Traveling fast slows time down, and being closer to a heavy object also slows time down, so if earth is heavy enough, time would speed up by being in orbit. You are right, if that is the case.
There are two effects here.
First, gravitational time dilation slows the on board tick rate deep in the gravity well. As you get out of the well your on board tick rate increases relative to the surface due to decreasing strength of the gravitational field.
Second, motional time dilation slows the on board tick rate down as your velocity increases. This effect always slows the on board tick rate down and can never increase it relative to the surface. However, its magnitude decreases with altitude: as you move to a higher orbit your orbital velocity decreases and so does the motional time dilation.
There are a few useful reference points which help to think about tick rate on board various satellites relative to the tick rate on the surface.
1. On the hypothetical orbit at altitude zero, gravitational time dilation relative to the surface has no effect and the tick rate slows solely due to motional time dilation. On board tick rate is lower than for a stationary observer on the surface.
2. On the geostationary orbit, motional time dilation has no effect (since relative velocity is zero) and the tick rate increases solely due to reduced gravitational time dilation (we're further out of the gravity well). On board tick rate here is higher than for a stationary observer on the surface.
3. Somewhere between these two extremes is an orbit where the two effects balance out and the on board tick rate is the same as the tick rate on the surface. This occurs roughly one half of Earth's radius (~3186 km) above the surface.
ISS is merely ~423km above the surface and so it's the tick-rate-slowing motional time dilation that matters and their clocks run slower from our point of view.
GPS is over 20,000km above the surface and it's the tick-rate-increasing effect of the reduced gravitational time dilation that matters, so the onboard tick rate is higher from our point of view.
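The two effects can be put into a few lines to first order (a sketch only: circular orbits, non-rotating Earth, rounded constants, no higher-order GR terms):

```python
# First-order sketch of both effects above: gravitational shift
# (GM/c^2)*(1/R - 1/r) and motional shift -v^2/(2 c^2), using the
# circular-orbit speed v^2 = GM/r. Constants are rounded textbook values.
G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24        # kg, Earth mass
c = 2.998e8         # m/s, speed of light
R = 6.371e6         # m, Earth radius

def rate_shift(alt_m):
    """Fractional clock rate of a circular-orbit satellite vs the surface."""
    r = R + alt_m
    grav = (G * M / c ** 2) * (1.0 / R - 1.0 / r)   # higher up runs faster
    motion = -(G * M / r) / (2 * c ** 2)            # orbital speed slows it
    return grav + motion

print(rate_shift(423e3))     # ISS: negative (clock slower)
print(rate_shift(3.186e6))   # ~half an Earth radius up: ~zero
print(rate_shift(20.2e6))    # GPS: positive, on the order of +4e-10
```

The balance point falls at r = 1.5 R (an altitude of half an Earth radius), and the GPS value lands near the +442.5 parts in 10^12 quoted elsewhere in the thread.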
I believe that this has been posted before
So you tune in and hear "Hi there I'm a rogue air national guard F-16 flying at mach 2.0 precisely over the sears tower at 2000 ft ASL and headed due North at precisely noon"
Dude on the ground in... Racine, let's say, hears that message and then a sonic boom somewhat after noon.
So you feed all that stuff into your math, and you draw a strangely shaped line on the map: given the instant on your clock when you heard the sonic boom, you must be standing on the ground somewhere along that peculiarly shaped line. If you were being constantly overflown by rogue supersonic ANG F-16s from all different directions, your map would rapidly have one intersection point with a whole lot of lines drawn thru it, and that's where you are standing, at least relative to the sears tower, plus or minus your math ability.
But what if the broadcasting F-16 was just kidding about mach 2.0 and was actually chugging along at mach 1.9? Your line on the map is going to be all screwed up compared to where you actually are. The worst case isn't being 5% off, where you laugh and discard the data, but every overflying rogue F-16 being a tiny little bit off, ruining your standard deviation and requiring a ton more samples to be averaged together for the same accuracy.
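The "lots of lines intersecting" picture can be sketched as a toy least-squares position fix (everything here is hypothetical: `fix_2d` is a made-up helper, the beacon positions and delays are invented, and real navigation solvers are far more sophisticated):

```python
import math

SOUND = 343.0  # m/s, rough speed of sound

def fix_2d(beacons, delays, guess, iters=500, step=0.1):
    """Toy gradient descent on squared range residuals
    r_i = |p - b_i| - SOUND * delay_i. Not real nav code."""
    x, y = guess
    for _ in range(iters):
        gx = gy = 0.0
        for (bx, by), d in zip(beacons, delays):
            dist = math.hypot(x - bx, y - by)
            resid = dist - SOUND * d
            gx += resid * (x - bx) / dist
            gy += resid * (y - by) / dist
        x -= step * gx
        y -= step * gy
    return x, y

# Three "overflights" at known spots; the listener is truly at (1000, 2000).
true = (1000.0, 2000.0)
beacons = [(0.0, 0.0), (5000.0, 0.0), (0.0, 5000.0)]
delays = [math.hypot(true[0] - bx, true[1] - by) / SOUND for bx, by in beacons]
print(fix_2d(beacons, delays, guess=(2000.0, 2000.0)))
```

With perfect delays the fix converges to the true spot; perturb each delay slightly (the "mach 1.9" case) and the minimum of the residuals drifts away from it, which is exactly the averaging problem described above.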
The clock in your GPS is in fact disciplined to the clocks in the satellites, although its noise level is pretty bad (well, relatively, compared to the fancy clocks). If you've ever heard one of us RF guys talking about GPS Disciplined Oscillators or GPSDOs, that is exactly what we're talking about: an oscillator on the lab bench that is continuously aligned to be in perfect tune with the average GPS satellite, and this works shockingly well. For a couple hundred bucks your RF lab or ham radio station can have a "near atomic clock accuracy" reference clock for its oscillators, spectrum analyzers, freq counters, etc. Ham radio guys can get "Hz"-level accuracy at 10 GHz microwave freqs without much effort using this tech, for only a couple hundred bucks, which is pretty impressive and handy. All GPS rx have one of these, although performance and convenience vary. So the one in your phone doesn't have a convenient SMA connector to feed a 10 MHz refclock into my frequency counter... oh well, the innards are exactly the same. The innards of precision surveyor instruments are also lower noise and "better" than the innards of a phone. But the innards are functionally the same, more or less.
Those stations have to know the satellite time, so the compensation has to happen somewhere.
I'll leave the rest to actual physicists. :)