The last two videos on the page are particularly impressive: they demonstrate video mixing using two channels of an audio mixer, and video blurring from multi-path audio reflections.
Being able to interface with external, off-the-shelf audio gear from an otherwise software-only solution would be the goal of all this.
So, yeah, I agree. Alexander Zolotov is brilliant and prolific. The demo videos for Sunvox are worth a watch.
I just finished adding video support (and stubbed out audio support).
Since I was already using ffmpeg for image manipulations, this turned out to be rather easy.
Give it a try using something like this:
$ ./mangle.sh in.mp4 out.mp4 --color-format=yuv444p --bits=8 --blend=0.4 overdrive 17 hilbert -n 5001
For folks wanting to play around with this on OSX:
brew install ffmpeg lame
brew install sox --with-lame
The --with-lame is important if you want to process mp4s.
You might want to consider adding some kind of progress indicator (e.g. simple polling of tmp_audio_in.u8 vs tmp_audio_out.u8 file sizes). I ended up watching the output directory with
watch -n .5 ls -la /tmp/audio_shop-DIRECTORY
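A minimal sketch of that size-based poll, assuming the tmp_audio_in.u8 / tmp_audio_out.u8 names mentioned above (the dummy file contents are only there so the snippet runs standalone):

```shell
# Crude progress estimate: compare the size of sox's output file to its
# input. File names come from the comment above; the dummy contents below
# stand in for the real intermediates so the sketch runs on its own.
cd "$(mktemp -d)"
printf '0123456789' > tmp_audio_in.u8   # stands in for the full raw stream
printf '01234'      > tmp_audio_out.u8  # stands in for the partial output
in_size=$(wc -c < tmp_audio_in.u8)
out_size=$(wc -c < tmp_audio_out.u8)
echo "progress: $(( 100 * out_size / in_size ))%"
```

Wrapping that in a `watch -n .5` (or a `while sleep 0.5` loop) would give a live percentage instead of eyeballing `ls -la`.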
I think the cleanup isn't working quite right; I'm getting an rm unlink failure.
Seriously, this is a neat hack.
I'll keep hacking at this stuff for at least a little bit.
I also tried different audio compression codecs on the picture. Ogg Vorbis even looked better than MP3, haha.
Right now I've mostly been exploring the YUV and RGB colorspaces, in either packed or planar formats.
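For anyone following along, a quick illustration of why packed vs planar matters here (the labels are invented for the sketch): packed formats interleave each pixel's channels, planar formats store each channel as a contiguous plane, so when the bytes are played as one audio stream an effect smears across channels very differently in the two layouts.

```shell
# Byte order for three pixels in packed vs planar layout (labels are
# illustrative, not real pixel data). Read left to right as an audio
# stream: packed alternates Y/U/V every "sample", planar plays all the
# luma first, then each chroma plane in turn.
packed="Y1 U1 V1 Y2 U2 V2 Y3 U3 V3"
planar="Y1 Y2 Y3 U1 U2 U3 V1 V2 V3"
echo "packed: $packed"
echo "planar: $planar"
```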
Would this be middle-out compression?
Nope! Less fancy than that. I just take the bitstream and 'choose' to interpret it as audio.
For it to work somewhat well, a raw video format like YUV444P is used.
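As a toy illustration of that reinterpretation, with no ffmpeg or sox required (the file name and the clipping "effect" are made up for the sketch): write a few raw bytes, "choose" to read each one as an unsigned 8-bit sample, and hard-clip it the way a crude overdrive would.

```shell
# Write three raw bytes (0x10, 0x80, 0xf0) and interpret them as unsigned
# 8-bit samples, then hard-clip them into a narrower range -- a crude
# stand-in for an audio effect. Everything here is invented for illustration.
printf '\020\200\360' > frame.raw
clipped=$(od -An -tu1 -v frame.raw | tr -s ' \n' ' ' \
  | awk '{ for (i = 1; i <= NF; i++) {
             s = $i; if (s > 200) s = 200; if (s < 55) s = 55
             printf "%s%s", s, (i < NF ? " " : "\n") } }')
echo "$clipped"
```

In the real pipeline the same idea applies at scale: the raw YUV444P bytes go to sox as unsigned 8-bit samples, and whatever the effect does to them comes straight back as pixel values.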
That might be a better way of doing things actually, since visual tools are 2-dimensional by nature.
I just added video support.
Working on audio support now.