
> This is a technique used by probably all modern smartphone cameras to do night photos, as well as a common technique used by astrophotographers...

I think this is a lot simpler because each of your pixels is assumed to have a single, correct DC value. This doesn't hold for a time varying signal like AM/FM.

If I send the same audio over 2 or more parallel radio channels, that’s essentially the same as taking multiple shots of the same subject. Substitute a pixel with an audio sample. The transmission noise being uncorrelated between channels will average out.
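As a minimal sketch of that claim (hypothetical numbers, using numpy): the same tone is sent over N channels with independent noise, and averaging the channels shrinks the residual noise by roughly a factor of sqrt(N).

```python
import numpy as np

# Hypothetical sketch: one clean audio signal sent over N parallel channels,
# each corrupted by independent (uncorrelated) noise. Averaging the channels
# reduces the noise standard deviation by roughly sqrt(N).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
clean = np.sin(2 * np.pi * 440 * t)          # 440 Hz tone as the "audio"

N = 16
channels = clean + rng.normal(0, 0.5, size=(N, len(t)))  # N noisy copies

single_err = np.std(channels[0] - clean)              # noise on one channel
averaged_err = np.std(channels.mean(axis=0) - clean)  # ~ single_err / sqrt(N)

print(single_err, averaged_err)
```

With N = 16 channels the averaged error comes out close to a quarter of the single-channel error, as the sqrt(N) rule predicts.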

There are notable differences between radio and imagery that might explain why the analogy is tricky:

An image is quantized into pixels. A camera pixel is a receiver for a specific wavelength band, subjected primarily to internal wideband thermal noise during the read-out process. Each final output pixel is averaged both in time (exposure and stacking) and space (debayer and noise reduction), with the final signal being a single amplitude per location.
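The time-averaging part is what makes stacking work so well: each pixel is assumed to have a fixed true value, so averaging frames attacks only the noise. A small illustrative sketch (hypothetical scene and noise levels):

```python
import numpy as np

# Sketch of exposure stacking, assuming each pixel has a fixed true ("DC")
# value and the read-out noise is independent from frame to frame.
rng = np.random.default_rng(1)
true_image = rng.uniform(0, 255, size=(32, 32))   # hypothetical static scene

# 64 noisy exposures of the same scene, then a per-pixel mean over time.
frames = true_image + rng.normal(0, 20, size=(64, 32, 32))
stacked = frames.mean(axis=0)

one_frame_err = np.abs(frames[0] - true_image).mean()
stacked_err = np.abs(stacked - true_image).mean()   # ~ one_frame_err / 8
print(one_frame_err, stacked_err)
```

This is exactly the "single correct DC value per pixel" assumption from the parent comment; it is what a time-varying audio signal does not satisfy.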

An AM audio signal is a single wideband receiver subjected to wideband noise. Or, viewed differently, a series of quantized frequency receivers, each subject to noise at its matching frequency. The sampling is in the frequency domain, but the final signal captured is the amplitude variation over time at each frequency, each responsible for a single audio frequency.

But yes, your underlying point stands: A theoretical AM receiver that demodulated repeated signals independently and correctly averaged their outputs might gain better wideband noise rejection. Better, but not good, and at a cost of complexity approaching that of better modulations.
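That theoretical receiver can be sketched with a simple coherent demodulator (mix to baseband, then lowpass); everything here is an illustrative assumption, not a real receiver design:

```python
import numpy as np

# Hedged sketch of "demodulate each channel independently, then average".
# Each channel carries the same AM signal plus independent noise; each is
# coherently demodulated (mix down + crude lowpass) before averaging.
rng = np.random.default_rng(2)
fs, fc = 10_000, 2_000                     # sample rate and carrier (Hz)
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 100 * t)        # 100 Hz message
am = (1 + 0.5 * audio) * np.cos(2 * np.pi * fc * t)

def demodulate(x):
    mixed = x * 2 * np.cos(2 * np.pi * fc * t)           # mix to baseband
    kernel = np.ones(25) / 25                            # moving-average lowpass
    return np.convolve(mixed, kernel, mode="same") - 1   # drop the DC term

N = 8
outputs = [demodulate(am + rng.normal(0, 1, len(t))) for _ in range(N)]
single = outputs[0]
averaged = np.mean(outputs, axis=0)

err = lambda y: np.std(y - 0.5 * audio)
print(err(single), err(averaged))          # averaged error is smaller
```

The averaging reduces only the uncorrelated noise term; the distortion from the crude lowpass is common to every channel and stays, which hints at why this scheme improves things but never matches a better modulation.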



