Natural Sounding Artificial Reverberation (1962) [pdf] (charlesames.net)
49 points by sctb on Feb 4, 2018 | 13 comments

While Schroeder's algorithm sounds like reverberation, these days it might not be mistaken for 'natural sounding'. To be fair, though, there is worse-sounding reverberation in real life (garages).

A better-sounding approach is to convolve the signal with decaying random noise - impractical in Schroeder's day - but that's essentially what his and later algorithms are approximating.

Edit: Improvements on Schroeder's algorithm include feedback delay networks (where the comb filters feed back not only to their own inputs but to the others' as well) and nesting allpass filters inside each other. Both of these increase the density of echoes faster.
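For reference, the basic Schroeder structure that these improvements build on is a bank of parallel feedback combs followed by series allpasses. A minimal, unoptimized NumPy sketch - the delay times and gains here are illustrative choices, not taken from the paper:

```python
import numpy as np

def comb(x, delay, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
    y = x.astype(float).copy()
    for n in range(delay, len(y)):
        y[n] += g * y[n - delay]
    return y

def allpass(x, delay, g):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(x, sr=44100):
    # Mutually incommensurate comb delays in the ~30-45 ms range,
    # so their echo patterns don't line up; gains < 1 for stability.
    wet = sum(comb(x, int(sr * t), 0.84)
              for t in (0.0297, 0.0371, 0.0411, 0.0437)) / 4
    # Short series allpasses smear each echo into many, raising density.
    for t, g in ((0.005, 0.7), (0.0017, 0.7)):
        wet = allpass(wet, int(sr * t), g)
    return wet
```

The Python loops are slow; a real implementation would use circular buffers in C or vectorized block processing, but the topology is the same.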

Does that mean convolving (e.g.) white noise with some decay model (or applying an ADSR envelope?), and then convolving your subject with that resultant decayed random noise?

Do my definitions for “random” and “decay” even fit your thought?

Yes, take a few seconds of white noise, apply an exponentially decaying envelope, and convolve with the input signal.
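That recipe can be sketched in a few lines of NumPy; the parameter names, defaults, and the -60 dB decay target below are my own illustrative choices:

```python
import numpy as np

def noise_reverb(signal, sr=44100, decay_time=2.0, wet=0.3, seed=0):
    """Reverb by convolving with exponentially decaying white noise."""
    rng = np.random.default_rng(seed)
    n = int(sr * decay_time)
    t = np.arange(n) / sr
    # Envelope reaches ~-60 dB at decay_time (ln 1000 ~= 6.91)
    envelope = np.exp(-6.91 * t / decay_time)
    ir = rng.standard_normal(n) * envelope
    ir /= np.sqrt(np.sum(ir ** 2))      # normalise tail energy
    tail = np.convolve(signal, ir)      # length: len(signal) + n - 1
    dry = np.pad(signal, (0, n - 1))    # pad dry to match
    return (1 - wet) * dry + wet * tail
```

Feeding a click through it gives a noise burst that dies away smoothly - about the simplest possible 'room'.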

Of course there is a lot more you can do: high frequencies decay faster in a real room, the envelope could be a more complex shape, or you could skip straight to near-perfect and convolve with an impulse response measured from a real room.

That’s exactly how I was interpreting your original post, but I had to confirm. The impulse response (real or constructed) was my first thought too... this all sounds intriguing, and it’s a synth day for me today ;)

Why does white noise work? Is it because the spectral density of both the Dirac impulse and white noise are flat, and our ears don't care about phase?

It doesn't have to be white - pink noise works the same, it just sounds more muffled/bassy. What matters is that there is no distinct pattern in the time domain for the ear to pick out, since such a pattern would add echoes or metallic ringing. It's the same reason a boxy room or shower cubicle will ring/resonate at certain frequencies, while a bigger, more complex-shaped room will have an effectively random pattern of reflections for a given source and listener position.
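If you want to try the pink variant, one simple way (among several) to make pink noise is to shape white noise toward a 1/f power spectrum in the frequency domain - a sketch, assuming NumPy:

```python
import numpy as np

def pink_noise(n, seed=0):
    """White noise shaped to ~1/f power (so 1/sqrt(f) amplitude)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                  # avoid division by zero at DC
    spectrum /= np.sqrt(f)       # 1/f power => 1/sqrt(f) amplitude
    return np.fft.irfft(spectrum, n)
```

Swap this in for the white noise before applying the decay envelope and the tail comes out darker, closer to how most real rooms absorb.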

It's simulating a randomly distributed set of acoustic reflectors. Well, ish. Makes it sound more like a room with stuff in it rather than being in a box.

Not an expert in the field, but it's possible to convolve a signal with the recorded impulse response of a room [1] or a 'Slinky' toy [2], or with that of a subjectively 'desirable' piece of audio equipment [3].

The result is as if your signal were played in that room (with all its reverb) or through that thing (with its 'desirable'-or-otherwise characteristics modeled in its impulse response).

Thinkable in 1962, but not computationally feasible then. Now it's part of an industry.


[1] https://en.wikipedia.org/wiki/Convolution_reverb

[2] http://www.openairlib.net/auralizationdb?page=1

[3] https://www.soundonsound.com/techniques/convolution-processi...

To add an aside to my other comment, as a grad student in a spatial audio class, I led a project to create a new method of simulating physical reverb by using mics and speakers pointed at each other in an anechoic space, letting the mutual feedback create reverb. You could apply delay to "push back" the walls and apply filters matching the absorption characteristics of different materials. It was a fun project, which won my group Grad Project of the Year at my school's demo day. A couple commercial products and systems use similar concepts to achieve active acoustic control.

The paper is unfortunately behind a paywall: http://www.aes.org/e-lib/browse.cfm?elib=16101

The downside of convolutional reverb is the lack of parameterization. You're kind of stuck with one fixed geometry of source, receiver and surfaces. It can also be expensive to apply in real-time processing.

A lot can be done by post-processing the impulse response (volume envelopes, timestretching, combining with other parts, etc.).

As for efficiency: a modern laptop can easily run ~100 channels of multi-second convolution reverb in real time at a 44.1/48 kHz sample rate with <10 ms latency on one core.
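That speed comes from doing the convolution in the frequency domain, where an N-point convolution costs O(N log N) instead of O(N*M). Real low-latency engines use partitioned convolution, which this minimal unpartitioned sketch omits:

```python
import numpy as np

def fft_convolve(x, h):
    """Fast linear convolution via FFT; equivalent to np.convolve(x, h)."""
    n = len(x) + len(h) - 1
    nfft = 1 << (n - 1).bit_length()   # next power of two >= n
    X = np.fft.rfft(x, nfft)           # zero-pads to nfft
    H = np.fft.rfft(h, nfft)
    return np.fft.irfft(X * H, nfft)[:n]

# agrees with direct convolution to floating-point precision
x = np.random.default_rng(1).standard_normal(1000)
h = np.random.default_rng(2).standard_normal(500)
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))
```

Partitioning splits the impulse response into short blocks so output can start flowing after one block rather than after the whole FFT, which is how the <10 ms latency figure is reached with multi-second tails.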

Yeah, but if you post-process the impulse response, it's difficult to end up with something that still "looks" like a real impulse response. If that's important to you.

Don't get me wrong, I'm not trying to argue against using convolutional reverb. I was just throwing out a couple reasons why people still use other approaches.

I think Abbey Road (and probably other studios of the era) had an ambiophonic system similar to the one described in the paper. They had an array of speakers built into the walls of their recording room so that they could feed signals back into it, artificially lengthening the room's reverb decay.



