Now that Fountain Code patents have started to expire, hopefully they'll start to be incorporated in applications like these. Most recently this one [0] expired today, on June 11th!
With fountain codes [1], you can decode the original message after receiving a certain number of the randomly generated sub-messages, no matter which ones arrive. The number of required sub-messages depends on the original encoding parameters. This encoding is great for very lossy one-way channels.
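If it helps to make that concrete, here's a minimal sketch of the simplest variant, a random linear fountain code, in Python (not the patented LT/Raptor construction; the packet format, parameters, and function names are my own, purely for illustration). Each packet is the XOR of a random subset of the source blocks, and the receiver recovers the message by Gaussian elimination over GF(2) once enough linearly independent packets have arrived, regardless of which ones got lost:

```python
import random

def encode_packet(blocks, seed):
    """One fountain packet: the XOR of a random non-empty subset of the k
    source blocks. `seed` stands in for the shared PRNG state a real system
    would use so the receiver can reconstruct which blocks were combined."""
    k = len(blocks)
    rng = random.Random(seed)
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(k)           # bit i set => block i is included
    payload = bytearray(len(blocks[0]))
    for i in range(k):
        if mask >> i & 1:
            for j, b in enumerate(blocks[i]):
                payload[j] ^= b
    return mask, bytes(payload)

def decode(k, packets):
    """Gaussian elimination over GF(2). Returns the k source blocks once k
    linearly independent packets have arrived, otherwise None."""
    rows = {}                                # pivot bit -> (mask, payload)
    for mask, payload in packets:
        payload = bytearray(payload)
        while mask:                          # forward elimination
            pivot = mask.bit_length() - 1
            if pivot not in rows:
                rows[pivot] = (mask, payload)
                break
            pmask, ppayload = rows[pivot]
            mask ^= pmask
            for j in range(len(payload)):
                payload[j] ^= ppayload[j]
        if len(rows) == k:
            break
    if len(rows) < k:
        return None                          # not enough independent packets yet
    for pivot in sorted(rows):               # back-substitution, lowest pivot first
        mask, payload = rows[pivot]
        for bit in range(pivot):
            if mask >> bit & 1:
                bmask, bpayload = rows[bit]
                mask ^= bmask
                for j in range(len(payload)):
                    payload[j] ^= bpayload[j]
        rows[pivot] = (mask, payload)
    return [bytes(rows[i][1]) for i in range(k)]

# Split a 20-byte message into 5 blocks, generate 30 packets, drop ~40% at random.
message = b"fountain codes demo!"
blocks = [message[i:i + 4] for i in range(0, 20, 4)]
drop = random.Random(1)
received = [encode_packet(blocks, s) for s in range(30) if drop.random() > 0.4]
decoded = decode(5, received)
print(b"".join(decoded) if decoded else "need more packets")
```

The point is that the receiver doesn't care which packets made it through, only that enough of them did.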
The sibling post did a good job of outlining some techniques. I'm going to give you a simple example that might help with the "ahhh, you can get stuff out from under the noise" moment.
Let’s say you have a noise source made up of random numbers from -1 to 1 (mean 0). And a signal that represents a binary 1 as 0.1 and a binary 0 as -0.1. Our binary signal gets added to the noise.
With one bit and one noise sample, we don’t really get much out of it. 0.567 - 0.1 = 0.467 and 0.567 + 0.1 = 0.667. Looking at 0.467 and 0.667, we can’t really make any judgement of whether either of those samples is a 1 or a 0.
If you extend your bits out though so that, say, one bit gets transmitted 100 times, then you can take 100 samples on the receive end and take the mean of those. Because the noise source has mean zero, the noise component of the (noise+sample) mean should come out around zero. So you get a mean of maybe -0.075, or a mean of 0.083. At that point, it’s reasonable to say “it was likely a -0.1 or 0.1” that was transmitted.
All of the fancy techniques enhance this process, but at its core that’s fundamentally what’s happening. Some of the techniques spread things out over different frequencies, some spread out over time, but it’s all roughly the same idea.
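Here's a toy numpy sketch of that plain repeat-and-average idea (the numbers mirror the example above; nothing about it is tied to any particular mode). Each bit is sent 100 times at ±0.1, buried in uniform noise spanning -1 to 1, and the receiver just averages each group of 100 samples and takes the sign:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 1000)                # random message bits
amplitude, repeats = 0.1, 100

# transmit each bit 100 times as +/-0.1, buried in zero-mean uniform noise
symbols = np.repeat(2 * bits - 1, repeats) * amplitude
received = symbols + rng.uniform(-1, 1, symbols.size)

# receiver: average each group of 100 samples and decide by the sign of the mean
means = received.reshape(-1, repeats).mean(axis=1)
decoded = (means > 0).astype(int)

# compare against deciding from a single sample per bit
single = (received[::repeats] > 0).astype(int)
print("single-sample bit error rate:", np.mean(single != bits))    # roughly 0.45
print("100-sample-average error rate:", np.mean(decoded != bits))  # a few percent
```

The single-sample decision is barely better than a coin flip, while the averaged one is mostly right, which is the whole trick.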
I don't know why you're being downvoted; it's a truly fascinating concept.
One method takes your input data bit by bit and combines it with a pseudo-random code, which never changes and is pre-shared with all participants. Effectively each bit gets transmitted at multiple frequencies concurrently, and since the receiving side knows the pseudo-random code, it uses statistical inference to decide whether it's seeing enough evidence of a 0 or a 1 across the various frequencies dictated by the code.
You pull something out of the noise floor; without applying the statistical methods, the received signal is indistinguishable from noise. It's certainly not possible to look at one frequency and decode the transmission, because the transmission medium is lossy and constantly corrupts the transmission of individual bits at random.
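To see the statistical-inference part in miniature, here's a toy direct-sequence sketch in numpy (the chip count, noise level, and so on are made up, and the spreading here is done in the time domain, which spreads the energy across frequency; real systems also have to handle synchronization, fading, etc., which this ignores). Each bit is multiplied by a pre-shared ±1 pseudo-random chip sequence, the result is drowned in noise, and the receiver correlates each block against the same sequence and decides from the sign:

```python
import numpy as np

rng = np.random.default_rng(42)
chips_per_bit = 1000
code = rng.choice([-1.0, 1.0], chips_per_bit)   # the pre-shared pseudo-random code

bits = rng.integers(0, 2, 200)
tx = np.concatenate([(2 * b - 1) * code for b in bits])   # spread each bit

# channel: per-chip SNR is -20 dB (noise std 10x the chip amplitude),
# so the raw samples look like pure noise
rx = tx + rng.normal(0, 10.0, tx.size)

# receiver: correlate each chip block with the known code, decide by sign
corr = rx.reshape(-1, chips_per_bit) @ code
decoded = (corr > 0).astype(int)
print("bit error rate:", np.mean(decoded != bits))   # close to zero
```

Without knowing `code`, the best you can do with `rx` is stare at noise; with it, the correlation sum grows with the number of chips while the noise only grows with its square root, which is the "evidence" being accumulated.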
Probably the simplest explanation I could give would be: when you're communicating, you can always repeat your message to have a better chance of it being received. The example I like to use to remember this concept is people speaking in noisy places. When someone is having trouble understanding you, some options are to talk louder, talk slower, or repeat what you said. However, in the example given, the power is fixed, so talking louder isn't an option.
A more complicated explanation: the fundamental reason why this is possible is Shannon's channel capacity theorem [0]. This theorem tells us that the parameter that determines whether we can communicate reliably is not the signal-to-noise ratio (SNR) but the energy-per-bit to noise power spectral density ratio (Eb/N0). The difference is that Eb/N0 accounts for the total energy dedicated to sending a bit, whereas SNR only accounts for the rate at which you send that energy. The channel capacity theorem further tells us that the minimum Eb/N0 required to communicate reliably is about -1.6 dB [1]. In the context of Olivia MFSK, the article claims that this communication scheme can communicate at -10 dB SNR, which is possible as long as the waveform does something to increase its Eb/N0. The article says that Olivia MFSK uses error correction codes, which is one way to increase Eb/N0. Essentially, error correction codes add redundancy to the transmitted bit stream to correct errors. The simplest example of error correction is the repetition code, in which, for every bit that you want to send, you send an agreed-upon number of copies. The more copies you send, the less likely it is that over half of them will be wrong. As you might imagine, there are also much more complicated error correction codes. Another way to increase your Eb/N0 is through Direct Sequence Spread Spectrum (DSSS), which is the technique that Craig mentioned.
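To put rough numbers on the repetition-code case (a toy BPSK simulation with my own parameters, not how Olivia actually applies its FEC): at a fixed -10 dB SNR per transmission, repeating each bit n times multiplies Eb/N0 by n, and a simple majority vote recovers the data once Eb/N0 is comfortably above Shannon's roughly -1.6 dB floor:

```python
import numpy as np

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, 20000)
snr_db = -10.0                                   # per-transmission SNR, as in the article
noise_std = 10 ** (-snr_db / 20)                 # BPSK symbols have unit power

# Shannon's floor for reliable communication: Eb/N0 > ln(2), about -1.59 dB
for n in (1, 9, 101):                            # odd repetition counts, for clean voting
    tx = np.repeat(2 * bits - 1, n).astype(float)
    rx = tx + rng.normal(0, noise_std, tx.size)
    votes = (rx > 0).reshape(-1, n).sum(axis=1)  # hard-decision majority vote
    decoded = (votes > n / 2).astype(int)
    ebn0_db = snr_db + 10 * np.log10(n)          # repetition multiplies Eb/N0 by n
    print(f"n={n:3d}  Eb/N0 = {ebn0_db:5.1f} dB  BER = {np.mean(decoded != bits):.4f}")
```

The bit error rate falls from close to 40% at n=1 to around half a percent at n=101, even though the SNR of any individual transmission never changes; real codes just buy the same Eb/N0 far more efficiently than brute repetition.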
If you're interested, [2] is a good reference book on digital communications, and [3] is a detailed, but still very readable, text on information theory.
Well, that is sort of a misnomer, as it depends on your receiver bandwidth. People always say such-and-such modulation (say JT65) is "under the noise floor." Of course it is when your bandwidth definition is 2.5 kHz (an HF SSB channel). But the symbol rate for JT65 is maybe 10 Hz, so if you filter down to 10 Hz, it isn't under the noise floor.
Same with GPS: sure, it's way under the noise floor if your receiver BW is 2 MHz, but once it is de-spread to the information bandwidth, it is not under the noise floor.
You can get pretty close to Shannon's limit, which I suppose is under the noise floor at its -1.6 dB limit, but in practice you need extra margin, and then you can usually see the signal with the proper filtering.
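The arithmetic behind that is just that the noise power you see is proportional to the bandwidth you look through (N0·B), so narrowing the filter from the SSB channel width toward the signal's information bandwidth raises the apparent SNR by 10·log10(B_wide/B_narrow). A quick sketch with the numbers from above (the GPS C/A figures are approximate):

```python
import math

def processing_gain_db(wide_hz, narrow_hz):
    """SNR improvement from shrinking the noise bandwidth: 10*log10(B_wide/B_narrow)."""
    return 10 * math.log10(wide_hz / narrow_hz)

# JT65 judged in a 2.5 kHz SSB channel vs. its ~10 Hz signalling bandwidth
print(f"JT65 narrowband filtering gain: {processing_gain_db(2500, 10):.1f} dB")    # ~24 dB

# GPS C/A: ~2 MHz spread bandwidth vs. the 50 bit/s navigation data (rough figures)
print(f"GPS de-spreading gain:          {processing_gain_db(2.046e6, 50):.1f} dB")  # ~46 dB
```

So a signal quoted as 20-some dB "below the noise floor" in a 2.5 kHz channel can sit above the noise once you only admit the bandwidth the signal actually occupies.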
We're currently down in the lowest part of the solar cycle (i.e. very few sunspots), which has a great influence on skywave propagation.
When there are a lot of sunspots, you can make contacts across continents pretty much every day in modes like Olivia (and nowadays FT8), but also in classic Morse code, if you pick the right frequency on the shortwave bands. The throughput is relatively low, but in amateur radio the "journey is the reward", so it's more about making contacts, testing what is possible, etc., rather than exchanging a lot of information.
With the current low sun activity, the skywave propagation is a lot worse, but even now you will be able to make contacts across continents pretty much every day, especially on the 14 MHz band (20m), even with power as low as 1W and simple antennas (like a full size dipole). The "windows" when propagation between two places on the planet is possible are much shorter though.
You can experience this in real time by observing the beacon transmitters of the NCDXF International Beacon Project, which transmit in fixed timeslots on various frequencies, with beacon locations around the world.
There are web-sdr receivers where you can listen to the shortwave bands in real-time from your browser (http://websdr.org/ for an extensive list) and if you tune one of them to 14100 kHz (CW mode), you will - over the 5 minute cycle that it takes for all beacons to transmit - receive a number of the NCDXF beacons. To identify which one was transmitting without learning Morse code, refer to the chart at https://www.ncdxf.org/beacon/.
The beacons transmit their callsign followed by four long dashes at 100W, 10W, 1W and 100mW transmitter power.
Try this at different times to see how band conditions change; they do a lot over the course of the day!
I could go on and on, shortwave propagation is one of the most fascinating things, isn't it? :-)
For phone (aka voice audio) you can have some success with 100 watts and a simple dipole antenna. You'll be better off using an antenna with more gain and/or transmitting more power, though. The highest power ham radio operators can use on HF is 1500 watts, but this requires a significant investment in amplifiers etc.
Well... if FT8 is considered normal these days, then 1 watt is maybe 2-5x more power than you need for intercontinental short text messages, if you have a nice antenna and favorable conditions.
Absolutely, with a good antenna! It is super, super slow as far as transferring text data goes; I can type faster in many cases. At 10W with a good antenna I'm booming the airwaves with my signal! I've made contacts all over the place using Olivia and it's lots of fun. Get a General amateur radio license (US FCC) and get on the air! There are regular on-air meetings that do this if you want to listen in: http://idigit4u.com/ccara/digitalmodes.html
It is a shame that SMS and other low-MTU packets aren't transmitted over a low-bandwidth channel, or at least as a failover option. SMS coverage would be at least doubled in range.
When you consider fiber as just a very narrow RF waveguide (ever seen or held a 120 GHz-band radar waveguide?), frequencies in the terahertz range make perfect sense.
For folks that want to sniff around a little first, you can hit websdr.org and listen to receivers all around the world. Most digital mode demodulation software just listens to the audio pipeline on the PC, so you can actually decode Olivia, FT8, JT65, CW, etc from these sites as well.
[Meta] This is probably the one clickbait practice at HN I like: a no-context, you'll-find-out link to Wikipedia. Don't change it! A description would be a spoiler.
[0]: https://patents.google.com/patent/US6307487B1/en
[1]: https://en.wikipedia.org/wiki/Fountain_code