I remember first seeing the image embedded in Aphex Twin's "Windowlicker" and thinking it couldn't be that hard to write software to recreate it. Not knowing much at the time, I went to a friend of mine who was all about signal processing, and he whipped up a binary that did it a few days later.
Skip forward a few months: a friend and I were discussing communicating with submarines, transmitting stuff like video data, and how impermeable salt water is to RF. One of us pitched the idea of transmitting images by embedding them in sound the same way, and we figured that with a high-enough audio frequency band and a low-res image (say 400x400px) we could transmit a roughly 4 fps video stream, reserving ~50px for control data.
Noise also wouldn't be much of an issue, since the signal is for human consumption and blips and distortion can be overlooked. I ended up buying a pair of really decent transducers removed from an old nuclear submarine (!), but never got around to implementing this as life got in the way.
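For the curious, the core "paint an image into a spectrogram" trick is just additive synthesis: each pixel row gets a sine carrier, each column becomes a short slice of audio, and pixel brightness sets the carrier amplitude. Here's a minimal sketch of that idea, assuming grayscale pixel values in [0, 1]; the frequency band, sample rate, and column duration are illustrative choices of mine, not what my friend's binary actually used.

```python
import numpy as np

def image_to_audio(img, sr=44100, f_lo=5000.0, f_hi=15000.0, col_dur=0.0625):
    """Render each image column as a slice of audio whose spectrum
    traces that column's pixel intensities (values assumed in [0, 1])."""
    h, w = img.shape
    freqs = np.linspace(f_lo, f_hi, h)      # one sine carrier per pixel row
    t = np.arange(int(sr * col_dur)) / sr   # sample times for one column
    slices = []
    for col in range(w):
        amps = img[::-1, col]               # flip so row 0 ends up at the top
        # sum of sinusoids, one per row, weighted by pixel brightness
        slices.append((amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0))
    audio = np.concatenate(slices)
    return audio / (np.abs(audio).max() + 1e-12)  # normalise to [-1, 1]

# tiny demo: a 4x4 diagonal "image" becomes a rising tone sweep
demo = np.eye(4)
wave = image_to_audio(demo, col_dur=0.01)
```

Feeding the output into any spectrogram viewer (or `scipy.signal.spectrogram`) should show the image traced out in the chosen frequency band. At 400x400px and 4 fps you'd be rendering 1600 columns per second, so the per-column duration gets very short and the frequency resolution correspondingly coarse, which is part of why a noise-tolerant, human-viewed signal makes the scheme plausible at all.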
If anyone needs a pair of decent 10W underwater transducers, let me know :P