
Where did I say AM and FM are close to 100% efficiency?

I was only replying to an obviously incorrect statement that using more bandwidth decreases SNR. If that were the case, Shannon's theorem would not hold.

It doesn’t matter how close to the limit your encoding is, whether it is 20% or 99%: the relationship between bandwidth, noise floor and how much data you can send stays the same. By increasing bandwidth you can usually send considerably more information even if your encoding is poor, which translates to either a wider useful bandwidth, a lower noise floor, or some combination of both.

A trivial thought experiment to illustrate this: For any analog encoding, if I double the transmission bandwidth by encoding the same signal over 2 channels instead of one, I can average the output signal coming out the receivers and get better SNR than using one channel and one receiver. That works regardless of AM, FM or whatever fancy encoding you could use.
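To make the thought experiment concrete, here's a quick numpy sketch (all numbers are illustrative, not from any real receiver): two channels carry the same tone with independent, equal-power additive noise, and averaging the two received signals buys roughly 10·log10(N) dB of SNR, i.e. about 3 dB for N = 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "audio" signal: a 1 kHz tone sampled at 48 kHz.
t = np.arange(48000) / 48000.0
signal = np.sin(2 * np.pi * 1000 * t)

def snr_db(clean, noisy):
    """SNR of a noisy copy relative to the clean signal, in dB."""
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

# Two channels carrying the same signal, each with independent
# additive white noise of equal power.
ch1 = signal + rng.normal(0, 0.5, signal.size)
ch2 = signal + rng.normal(0, 0.5, signal.size)

avg = (ch1 + ch2) / 2  # the receiver averages the two demodulated outputs

# Averaging N equal-power, uncorrelated-noise copies improves SNR by
# 10*log10(N) dB: about 3 dB for N = 2.
gain = snr_db(signal, avg) - snr_db(signal, ch1)
print(round(gain, 1))  # roughly 3.0
```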

> A trivial thought experiment...

That's not how this works. That's not how any of this works. Averaging a high-SNR channel with a low-SNR channel is likely to produce something worse than the high-SNR channel alone. Could you get an improvement over the high-SNR channel? Yes, but the limit of that improvement depends on the SNR of each channel, and plain averaging of the signals won't get you anywhere near it.


Averaging two noisy signals increases SNR. That’s not even a thought experiment, that’s reality. This technique is used by probably all modern smartphone cameras for night photos, and it’s a common technique among astrophotographers: instead of taking one picture, you take a series of pictures, then align and average them. This improves SNR dramatically. A very long time ago we used this technique to get razor-sharp, low-noise pictures of the Moon at 3k x 3k resolution using… a cheap VGA internet camera: https://astronet.pl/wydarzenia/n2309/ Note that cameras at the time were barely capable of videoconferencing in artificial evening light - what you saw was mostly noise. Those sensors were really, really terrible.
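The stacking effect is easy to demonstrate with a toy numpy sketch (a synthetic scene standing in for the Moon shots, not the actual data): averaging N frames with independent sensor noise cuts the noise by √N, so 100 frames gives roughly a 10x reduction.

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic 64x64 "scene" standing in for the true image; each exposure
# adds heavy, independent sensor noise, mimicking a cheap webcam in low light.
scene = rng.uniform(0.0, 1.0, (64, 64))

def noisy_frame():
    return scene + rng.normal(0.0, 0.8, scene.shape)

single = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(100)], axis=0)

def rmse(img):
    """Root-mean-square error against the true scene."""
    return np.sqrt(np.mean((img - scene) ** 2))

# Stacking N frames cuts the noise std by sqrt(N): 100 frames -> ~10x.
ratio = rmse(single) / rmse(stacked)
print(round(ratio))  # roughly 10
```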

What you seem to be missing is that we’re talking here about transferring the same fixed-bandwidth signal over a wider channel, not transferring a wider-bandwidth signal over a wider channel.

// edit: just noticed someone else gave another nice application of this phenomenon: GPS


> This is a technique used by probably all modern smartphone cameras to do night photos, as well as a common technique used by astrophotographers...

I think this is a lot simpler because each of your pixels is assumed to have a single, correct DC value. This doesn't hold for a time varying signal like AM/FM.


If I send the same audio over 2 or more parallel radio channels, that’s essentially the same as taking multiple shots of the same subject - swap a pixel for an audio sample. The transmission noise, being uncorrelated between channels, will average out.

There are notable differences between radio and imagery that might explain why it might be a tricky analogy:

An image is quantized into pixels. A camera pixel is a receiver for a specific wavelength, subject primarily to internal wideband thermal noise during read-out. Each final output pixel is averaged both in time (exposure and stacking) and space (debayering and noise reduction), with the final signal being a single amplitude per location.

An AM audio signal is a single wideband receiver subjected to wideband noise. Or, viewed differently, a series of quantized frequency receivers, each subject to noise at its matching frequency. The sampling is in the frequency domain, but the final signal captured is the amplitude variation over time at each frequency, each responsible for a single audio frequency.

But yes, your underlying point stands: A theoretical AM receiver that demodulated repeated signals independently and correctly averaged their outputs might gain better wideband noise rejection. Better, but not good, and at a cost of complexity approaching that of better modulations.


Let's take it to the limit:

Signal0: infinite SNR. Signal1: anything less.

I just don't see how the output of averaging these would improve over Signal0. I don't think it can.


They're thinking of the case where you sample from the same noise distribution, so averaging gives an unbiased estimator of the mean. But when you know one SNR is higher than the other, maybe this doesn't hold? Or maybe it does if you transform the distributions to look the same, i.e. take a weighted average? I'm not sure.

It does hold, it is just weaker. You can improve the SNR of a better signal by adding a signal with worse SNR to it. But you need to normalize the signals so that their noise floors have the same amplitude before adding them.
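A small numpy sketch of this (illustrative noise floors; inverse-variance weighting is one standard way to do the normalization described above): the weighted combination beats the good channel alone, while a naive 50/50 average is worse than the good channel, which is exactly the failure mode pointed out upthread.

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(20000) / 20000.0
signal = np.sin(2 * np.pi * 5 * t)

sigma_good, sigma_bad = 0.2, 0.6  # illustrative noise floors of the two channels
good = signal + rng.normal(0, sigma_good, t.size)
bad = signal + rng.normal(0, sigma_bad, t.size)

def noise_power(x):
    return np.mean((x - signal) ** 2)

# Naive 50/50 average: noise variance (0.04 + 0.36) / 4 = 0.10,
# worse than the good channel's 0.04 on its own.
plain = (good + bad) / 2

# Inverse-variance weighting: weight each channel by 1/sigma^2 so the
# noisier channel contributes less; combined noise variance is
# 1 / (1/0.04 + 1/0.36) ~= 0.036, a modest improvement over 0.04.
w_good, w_bad = 1 / sigma_good**2, 1 / sigma_bad**2
weighted = (w_good * good + w_bad * bad) / (w_good + w_bad)

improvement = noise_power(good) / noise_power(weighted)
print(round(improvement, 2))  # modest gain, around 1.11
```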

We’re not talking about signals with different SNRs. Where did you get the assumption from?

No, that's one time varying signal.


