
Making Music with Shaders: Practical Additive GPU Audio Synthesis [pdf] - based2
http://www.graffathon.fi/2016/presentations/additive_slides.pdf
======
sprash
On Shadertoy there are many examples of GPU shader audio synthesis. My
favorites are:

[https://www.shadertoy.com/view/4ts3z2](https://www.shadertoy.com/view/4ts3z2)
which is a perfect example for ambient sound

[https://www.shadertoy.com/view/ldfSW2](https://www.shadertoy.com/view/ldfSW2)
for an example of acid techno music

[https://www.shadertoy.com/view/lsSXzD](https://www.shadertoy.com/view/lsSXzD)
a doom "port" rendered in a single shader including music

~~~
abedef
Your links include the closing quotes, which breaks them for anyone clicking
in a browser. Fixed links:

Which is a perfect example for ambient sound [1]

For an example of acid techno music [2]

A doom "port" rendered in a single shader including music [3]

[1]
[https://www.shadertoy.com/view/4ts3z2](https://www.shadertoy.com/view/4ts3z2)

[2]
[https://www.shadertoy.com/view/ldfSW2](https://www.shadertoy.com/view/ldfSW2)

[3]
[https://www.shadertoy.com/view/lsSXzD](https://www.shadertoy.com/view/lsSXzD)

~~~
sprash
Sorry/Thanks... apparently one should not quote URLs on HN.

------
xattt
Slightly unrelated, but I find that slide decks posted on their own miss a
lot of the contextual information, discussion and audience interaction that
comes from watching a presentation on video or in person with commentary.

What makes a good slide deck for a presentation makes for terrible reading.
Likewise, what makes for good reading makes for a terrible PowerPoint
presentation.

------
TheOtherHobbes
GPUs have single-cycle sin/cos LUT implementations, so they should be good for
this kind of thing - competitive with an FFT for a relatively small number of
overtones.

But there are some technical issues. Older GPUs don't have full IEEE 754
floating point support, and compilers are optimised for graphics, not for
audio, so you won't necessarily get the same output as you would from DSP code
running on an x86.

If shader compilers included some audio-friendly tweaks and perhaps a standard
audio API for co-processing, they could see a lot more use in music editing
and synthesis.
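As a rough illustration of the workload being discussed, here is a minimal
CPU-side sketch of per-sample additive synthesis with a small number of
overtones. The function name, frequency, and overtone count are my own
placeholder choices, not anything from the slides; on a GPU each sample would
simply be evaluated by its own thread.

```python
import math

SAMPLE_RATE = 44100  # samples per second, the rate Shadertoy audio uses

def additive_sample(t, f0=440.0, num_overtones=8):
    """One output sample: a sum of harmonically related sines with 1/k
    amplitudes. Each sample depends only on t, which is what makes the
    computation embarrassingly parallel and shader-friendly."""
    s = 0.0
    for k in range(1, num_overtones + 1):
        s += math.sin(2.0 * math.pi * k * f0 * t) / k
    return s

# Render one second of audio sequentially; a GPU would do this in parallel.
samples = [additive_sample(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
```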

------
amirhirsch
It’s worth pointing out that there is a closed form for the sum of a finite
geometric series used for summing harmonic sine waves into bandlimited impulse
trains (BLIT) which are filtered to create alias-free classic analog
oscillators:

[https://ccrma.stanford.edu/~stilti/papers/blit.pdf](https://ccrma.stanford.edu/~stilti/papers/blit.pdf)
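To make the closed form concrete: the sum of N cosine harmonics collapses,
via the finite geometric series (the Dirichlet kernel), to a single
sine-over-sine expression that costs O(1) per sample instead of O(N). A small
sketch checking the identity numerically (the function names are mine, and
this is the underlying identity rather than the full BLIT algorithm from the
paper):

```python
import math

def harmonic_sum_direct(theta, N):
    """Sum of N cosine harmonics, one term per overtone: O(N) per sample."""
    return sum(math.cos(k * theta) for k in range(1, N + 1))

def harmonic_sum_closed(theta, N):
    """The same sum via the Dirichlet-kernel closed form, O(1) per sample.
    Valid wherever sin(theta/2) != 0."""
    return 0.5 * (math.sin((N + 0.5) * theta) / math.sin(0.5 * theta) - 1.0)

# Away from theta = 0 the two agree to floating-point precision.
theta, N = 0.3, 64
assert abs(harmonic_sum_direct(theta, N) - harmonic_sum_closed(theta, N)) < 1e-9
```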

------
wool_gather
It may just be a limitation of the slides rather than the full presentation,
but this is almost entirely about the audio synthesis side. There is nearly
nothing specific to GPUs or shaders that I can see.

So fair warning: if, like me, you already know the fundamentals of audio and
synthesis (it starts with "what's a sine?" and "how does 440Hz map to a
piano?") but were interested in the shader part, there's not much here for you.

~~~
Impossible
The reason is that there isn't much special about his use of shaders. The
first two slides cover the shader-specific details (write to a 1D render
target where each pixel is one sample at 44kHz). The audience is people who
are familiar with fragment shaders, and specifically with Shadertoy, but who
have limited exposure to additive synthesis.
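The model described there can be sketched on the CPU: one function maps a
sample's time to a stereo value, and the "render target" is just an array
indexed by sample number. This is a hedged paraphrase of how Shadertoy sound
shaders work (its GLSL entry point returns a left/right pair per sample); the
Python names below are placeholders of mine.

```python
import math

SAMPLE_RATE = 44100  # one "pixel" per sample at this rate

def main_sound(t):
    """Per-sample 'shader': time in seconds in, (left, right) sample out,
    mirroring the role of Shadertoy's per-sample sound entry point."""
    left = math.sin(2.0 * math.pi * 440.0 * t)
    right = math.sin(2.0 * math.pi * 660.0 * t)
    return (left, right)

def render_block(first_sample, num_samples):
    """Emulate filling a 1D render target: slot i holds the sample with
    index first_sample + i. On the GPU every slot is computed in parallel."""
    return [main_sound((first_sample + i) / SAMPLE_RATE)
            for i in range(num_samples)]

block = render_block(0, 512)
```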

------
0815test
Very nice work. This could be used to efficiently re-implement 8-bit- and
16-bit-era sound systems, which are mostly emulated on the CPU at present. I
wonder if the same techniques can also be used to enable DSP-like workloads of
a more general sort on GPU compute.

~~~
Lorkki
Most of those are either very straightforward subtractive synthesizers, or use
digital FM synthesis. It's much more efficient (and accurate) to start from
sampled basic waveforms or reverse-engineered data from the actual systems,
rather than building them using additive synthesis techniques.

