Acquiring and tracking the signals is a lot of work, and it is often implemented in software. Then it takes more work again to calculate the navigation solution from those raw measurements.
This keeps a microprocessor busy, and that uses energy.
One trick people use for ultra-low-energy, non-realtime GPS tracking of wildlife is to store the “raw” radio samples and do as little processing on the tracking device as possible.[1] Then, once the tracker is recovered, they batch-process all the recorded samples and reconstruct where the animal has been. That is obviously a tradeoff: more storage used for less energy consumed. Obviously that trick does not work if you want to know where the animal is live; it can only tell you where it has been once you have processed the data.
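A minimal sketch of what such a snapshot logger's main loop might look like. All the names and numbers here are illustrative assumptions of mine, not from the actual trackers:

```python
import time

# Hypothetical snapshot logger: grab a short burst of raw IQ samples,
# append it to flash with a timestamp, and power everything back down.
SNAPSHOT_MS = 12       # a few ms of raw signal suffices for offline processing
WAKE_INTERVAL_S = 600  # one snapshot every 10 minutes

def capture_raw_iq(duration_ms):
    """Stand-in for the real RF frontend driver; returns quantized IQ samples."""
    raise NotImplementedError

def logger_loop(flash):
    while True:
        burst = capture_raw_iq(SNAPSHOT_MS)
        flash.append((time.time(), burst))  # timestamp is needed for the offline solve
        time.sleep(WAKE_INTERVAL_S)         # radio and CPU asleep in between
```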
> Obviously that trick does not work if you want to know where the animal is live; it can only tell you where it has been once you have processed the data.
Why can't you just forward those radio samples back and process them on the receiving end (your phone), instead of processing on device and sending back the location?
It should be a tradeoff of less battery consumption on device and no need for large storage, at the expense of a lot more bandwidth? Would having to send all the raw samples consume more or less battery than just processing them and sending the result?
The "chip rate" of GPS L1 C/A (the main one) is 1023 kchips/sec. So you end up with a signal that is over 1 MHz wide to encode 50 bits/sec. Nyquist-Shannon theorem says* you thus need over 1 Msamp/sec (using complex numbers), probably more like 2 Msps because of Doppler, to capture that. GPS is pretty forgiving and 4 bits/sample is plenty (2 bits is usable), but that would still be 1 MB/s of high entropy data. Note the system linked in the parent comment only records in 12 ms bursts. That captures enough info to find position offline, but only if you add in the historical orbit information that normally takes 30 sec to download off of the GPS signal. Streaming 1 MBps of data is doable, but I think would draw much more power than solving on device. Just recording to an SD card is far less.
* The Nyquist-Shannon theorem actually says the converse, but for anything you'll encounter outside the recesses of a math department, it's still the optimal solution.
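To put numbers on that (my arithmetic, using the figures above, minimum-rate case):

```python
# Back-of-the-envelope rates for raw GPS L1 C/A capture.
sample_rate = 1.0e6      # complex samples/s; closer to 2e6 with Doppler margin
bits_per_component = 4   # 4-bit I + 4-bit Q = 8 bits per complex sample

bytes_per_sec = sample_rate * 2 * bits_per_component / 8
print(f"{bytes_per_sec / 1e6:.1f} MB/s continuous")    # 1.0 MB/s

snapshot_bytes = 0.012 * bytes_per_sec                 # a 12 ms burst
print(f"{snapshot_bytes / 1e3:.0f} kB per snapshot")   # 12 kB
```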
> It should be a tradeoff of less battery consumption on device, and no need for large storage, at the expense of a lot more bandwidth?
Yes. I believe so.
> Would having to send all the raw samples consume more or less battery than just processing it and sending the result?
Good question. I don't know the answer to that. Maybe someone can chime in?
What would be interesting is if you could send a "ping" signal to the tracker and have it start sending samples back only then. After all, most of the time you don't need to know where the cat is, only when it is "overdue" and the owner starts worrying.
One of the things I found initially surprising (but less so when I thought about it), was that when I switched from wired headphones to bluetooth, I got longer battery life from my phone. Apparently, there’s less energy involved in sending a radio signal than in driving a speaker.
I'd guess that it would take more energy to send all of the samples to a central server over an IoT network (which is optimized for infrequent, very small messages) than to calculate a position fix locally.
It might be worth it for a complete "from scratch" GPS fix (where the receiver needs to figure out roughly where it is and re-download the complete GPS almanac and ephemerides), though – I have no idea how short a server-side-decodable sample actually is!
But since we're talking about a network-connected device anyway, I'd guess that downloading the ephemerides via the IoT connection (single-digit kilobytes) would be much more economical in that case too.
Maybe as an analogy, consider local vs. cloud QR code decoding: if you can manage to send a single, sharp photo of the code, the cloud might be better. But since you can't actually be sure when the camera is pointed at the code and not moving, you might need to take a minute-long video and look for the one non-blurry frame in the cloud.
Edit: Just saw the other comment – apparently we're talking about a short video more so than a small JPEG :)
"Ordinary" antennas are completely passive, just a bit of metal. It's the maths for data recovery and trilateration (not triangulation!) that is processor heavy.
You can buy chips which only do the RF tasks. But you can also buy chips which do “everything”, the RF, the maths, some even do sensor fusion with data from accelerometers and wheel odometry sensors. All in one chip.
You can read here[1] more about what the RF frontend does. This is the crux of it: “Its two-stage receiver amplifies the incident 1575.42MHz GPS signal, downconverts it to a first IF of 37.38MHz, further amplifies it, and then downconverts to a second IF of 3.78MHz. An internal 2- or 3-bit ADC (selectable as a 1-bit sign with a 1- or 2-bit magnitude) samples the second IF and outputs a digitized signal to the baseband processor.”
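To make the frequency plan concrete, here is the mixing arithmetic implied by that quote (assuming low-side LO injection at both stages, which is my guess):

```python
rf  = 1575.42e6  # GPS L1 carrier
if1 = 37.38e6    # first intermediate frequency
if2 = 3.78e6     # second intermediate frequency

lo1 = rf - if1   # first mixer LO:  1538.04 MHz
lo2 = if1 - if2  # second mixer LO:   33.60 MHz
print(f"LO1 = {lo1 / 1e6:.2f} MHz, LO2 = {lo2 / 1e6:.2f} MHz")

# Only after the second downconversion is the signal slow enough for a
# cheap 2- or 3-bit ADC to digitize and hand to the baseband processor.
```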
Thanks for this! It's interesting that the article says doing it in software can be more power efficient than having a dedicated chip for the intermediate frequencies. Does that mean there exist "dumber" RF chips that do less, offloading the math to the main CPU for power savings? It seems like the kinda thing that would be commonplace if so...?
I guess my fundamental confusion is why "listening" to a broadcast signal takes so much power, vs say a FM receiver or passive wifi snooping.
All of the ones I am aware of do hardware decoding of the signal and do the linear algebra to find their location in software. I'm speaking mostly from the cheap ublox chips and the partially open source navspark chips I've dealt with.
The problem is that the signal processing part of GPS is computationally difficult. I think it was only around ten years ago that it even became possible to do the full real-time decoding on a laptop. At startup you need to find the GPS signals, which means searching for all 32 possible satellite code patterns across the range of possible Doppler shifts. During testing, this was what took most of the startup (cold start) time. You need roughly 4-6 satellites to get a position, so this has to be done in parallel. And once you've found a satellite, it takes another 30 seconds to get the satellite's position: GPS signals are very slow at 50 bits/sec.
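To give a feel for why this is the expensive part: the usual trick is an FFT-based circular correlation per satellite per Doppler bin, something like this numpy sketch (not production code; `ca_code` is assumed to return one C/A code period resampled to `fs`, and is not shown):

```python
import numpy as np

def acquire(samples, prn, fs, doppler_range=5000, doppler_step=500):
    """Search one satellite's code across Doppler bins (about 1 ms of samples).

    ca_code(prn, fs) is assumed to return one C/A code period resampled
    to fs, the same length as `samples`.
    """
    n = len(samples)
    code_fft = np.conj(np.fft.fft(ca_code(prn, fs)))  # local replica, freq domain
    t = np.arange(n) / fs
    best = (0.0, 0.0, 0)                              # (power, doppler, code phase)
    for doppler in np.arange(-doppler_range, doppler_range + 1, doppler_step):
        wiped = samples * np.exp(-2j * np.pi * doppler * t)  # remove Doppler guess
        corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft)) ** 2
        peak = int(corr.argmax())
        if corr[peak] > best[0]:
            best = (corr[peak], doppler, peak)
    return best

# Cold start: run this for all 32 PRNs. At +-5 kHz in 500 Hz steps that is
# 32 satellites x 21 Doppler bins, each bin an FFT correlation, before
# tracking (and then 30 s of 50 bit/s ephemeris download) can even begin.
```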
By comparison, actually solving for location is a simple linear algebra problem with 4 unknowns (lat, long, alt, time; though in a more convenient coordinate system), and you only do it a few times per second. The hardware does the higher-rate signal phase estimation. For example, the navspark is a single-core SPARC microcontroller running at 100MHz with 200 kB of RAM. That's enough to do 50 solutions per second, though they reduced that to 10 Hz to leave room for a user program too.
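The solve itself really is tiny. A numpy sketch of the standard approach, iterating linearized least squares over the four unknowns (here ECEF position plus receiver clock bias; the variable names are mine):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=6):
    """Iterative least squares for (x, y, z, clock bias) from >= 4 satellites.

    sat_pos: (N, 3) satellite ECEF positions in meters.
    pseudoranges: (N,) measured pseudoranges in meters.
    """
    x = np.zeros(4)  # start at Earth's center with zero clock bias
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)      # predicted ranges
        residual = pseudoranges - (rho + x[3])
        H = np.hstack([-(sat_pos - x[:3]) / rho[:, None],  # line-of-sight rows
                       np.ones((len(rho), 1))])            # clock-bias column
        x += np.linalg.lstsq(H, residual, rcond=None)[0]
    return x[:3], x[3] / C  # ECEF position (m) and clock bias (s)
```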
A ton of work goes into caching strategies to narrow down that initial search space. Modern chips will let you load in exactly which satellites to expect overhead (e.g. based on position and orbit info from the cell network). There is a whole other caching strategy based on an approximate "almanac" in the GPS signal for offline devices. With all of that known before the receiver turns on, you can get a solution in a couple of seconds.
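Rough numbers of my own to show the size of the win:

```python
# Size of the acquisition search, cold start vs. assisted (illustrative numbers).
prns_cold, prns_assisted = 32, 8                   # all PRNs vs. "these are overhead"
bins_cold, bins_assisted = 21, 3                   # +-5 kHz sweep vs. predicted Doppler

print(prns_cold * bins_cold)        # 672 correlation searches
print(prns_assisted * bins_assisted)  # 24 -- roughly 30x fewer
```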
What do you mean by "hardware decoding"? There are custom chips which do the whole GNSS business. So they are doing "hardware decoding"; the chip is the hardware.
I'm not aware of how all of them are implemented internally, but my understanding is that the acquisition and tracking of signals usually happens in the digital domain. I could be wrong on that, of course.
GPS receivers are passive only in the sense that they don't transmit. But pulling a meaningful signal from a 25W transmitter 20,000 km away requires some seriously power-hungry signal processing. It's an engineering miracle that it works at all.
Weak signal, small antenna, high gain, lots of current to run the amplifiers. And you need to keep receiving the signal to get a position, not just enable it for a second and then disable it.