A better-sounding algorithm is to convolve the signal with decaying random noise - impractical in Schroeder's day - but that's roughly what his and later algorithms are approximating.
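As a minimal sketch of that idea (all names and parameters here are my own, not from any particular implementation): generate white noise, shape it with an exponential envelope that falls 60 dB over an assumed RT60, and convolve the dry signal with it.

```python
import numpy as np

def decaying_noise_ir(sr=48000, length_s=2.0, rt60_s=1.5, seed=0):
    """Build a toy reverb impulse response: white noise under an
    exponential envelope that decays 60 dB over rt60_s seconds."""
    rng = np.random.default_rng(seed)
    n = int(sr * length_s)
    t = np.arange(n) / sr
    # amplitude falls by 10**(-3) (i.e. -60 dB) every rt60_s seconds
    return rng.standard_normal(n) * 10.0 ** (-3.0 * t / rt60_s)

def reverberate(dry, ir):
    """Apply the reverb by direct convolution (O(N*M) - the slow,
    obvious way; fine for a demo, not for real-time use)."""
    return np.convolve(dry, ir)
```

Feeding it a unit impulse just plays back the impulse response itself, which is a handy sanity check.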
Edit: Improvements on Schroeder's algorithm include feedback delay networks (where the comb filters don't just feed back to their own inputs, but to the others' too) and nesting allpass filters inside each other. Both of these increase the density of echoes faster.
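A bare-bones feedback delay network looks something like this (delay lengths and gain are made-up illustrative values): each delay line's output is fed back into every line through an orthogonal mixing matrix, here a Householder matrix, which is what makes the echo density build up faster than independent comb filters would.

```python
import numpy as np

def fdn_reverb(dry, delays=(1031, 1327, 1523, 1871), g=0.85):
    """Toy 4-line feedback delay network. `delays` are mutually
    prime delay lengths in samples; g < 1 keeps it stable because
    the feedback matrix A is orthogonal."""
    N = len(delays)
    # Householder matrix I - (2/N) * ones: orthogonal, fully mixes lines
    A = np.eye(N) - 2.0 / N
    bufs = [np.zeros(d) for d in delays]
    idx = [0] * N
    out = np.zeros(len(dry))
    for n, x in enumerate(dry):
        taps = np.array([bufs[i][idx[i]] for i in range(N)])
        out[n] = taps.sum()
        fb = g * (A @ taps)  # every line feeds back into every line
        for i in range(N):
            bufs[i][idx[i]] = x + fb[i]
            idx[i] = (idx[i] + 1) % len(bufs[i])
    return out
```

With an impulse as input you can watch the tail thicken and decay; swapping A for an identity matrix degenerates it back to independent comb filters.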
Of course there is a lot more you can do: high frequencies decay faster in a real room, the envelope could be a more complex shape, or just skip to near-perfect and convolve with an impulse response measured from a real room.
The result is as if your signal were played in that room (with all its reverb) or through that thing (with its 'desirable'-or-otherwise characteristics modeled in its impulse response).
Conceivable in 1962, but not considered feasible. Now it's part of an industry.
The paper is unfortunately behind a pay wall: http://www.aes.org/e-lib/browse.cfm?elib=16101
As for the efficiency: a modern laptop can easily run ~100 channels of multi-second convolution reverb in real time at 44.1/48kHz sample rates with <10ms latency on a single core.
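The reason that's cheap is that convolution is done in the frequency domain (real implementations use partitioned convolution to keep latency low; this sketch is the unpartitioned one-shot version, so it's illustrative only):

```python
import numpy as np

def fft_convolve(dry, ir):
    """Linear convolution via FFT: O(N log N) instead of O(N*M).
    One-shot (whole-signal) version - real low-latency engines
    partition the impulse response instead."""
    n = len(dry) + len(ir) - 1
    # round the FFT size up to the next power of two
    nfft = 1 << (n - 1).bit_length()
    spec = np.fft.rfft(dry, nfft) * np.fft.rfft(ir, nfft)
    return np.fft.irfft(spec, nfft)[:n]
```

It should produce the same result as direct convolution, just much faster for multi-second impulse responses.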
Don't get me wrong, I'm not trying to argue against using convolutional reverb. I was just throwing out a couple reasons why people still use other approaches.