

Raya – Real-time sound simulation library for games [video] - kgabis
https://www.youtube.com/watch?v=EWatzCC7rk0

======
highCs
Nice. It reminds me of what Carmack wanted to do in Quake 1: add a Potentially
Hearable Set (PHS) in addition to the Potentially Visible Set (PVS) used for
the game's hidden surface removal. After some investigation, it seems it was
actually used in QuakeWorld:

 _An interesting extension of the PVS is what John calls the potentially
hearable set (PHS)—all the leaves visible from a given leaf, plus all the
leaves visible from those leaves—in other words, both the directly visible
leaves and the one-bounce visible leaves. Of course, this is not exactly the
hearable space, because sounds could echo or carry further than that, but it
does serve quite nicely as a potentially relevant space—the set of leaves that
have any interest to the player. In Quake, all sounds that happen anywhere in
the world are sent to the client, and are heard, even through walls, if
they’re close enough; an explosion around the corner could be well within
hearing and very important to hear, so the PVS can’t be used to reject that
sound, but unfortunately an explosion on the other side of a solid wall will
sound exactly the same. Not only is it confusing hearing sounds through walls,
but in a modem game, the bandwidth required to send all the sounds in a level
can slow things down considerably. In a recent version of QuakeWorld, a
specifically multiplayer variant of Quake I’ll discuss later, John uses the
PHS to determine which sounds to bother sending, and the resulting bandwidth
improvement has made it possible to bump the maximum number of players from 16
to 32. Better yet, a sound on the other side of a solid wall won’t be heard
unless there’s an opening that permits the sound to come through. (In the
future, John will use the PVS to determine fully audible sounds, and the PHS
to determine muted sounds.) Also, the PHS can be used for events like
explosions that might not have their center in the PVS, but have portions that
reach into the PVS. In general, the PHS is useful as an approximation of the
space in which the client might need to be notified of events._

[http://www.phatcode.net/res/224/files/html/ch70/70-02.html](http://www.phatcode.net/res/224/files/html/ch70/70-02.html)
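
For concreteness, here's a minimal sketch of the one-bounce construction
Abrash describes. It assumes the PVS has already been decompressed into one
bitvector per BSP leaf; the names and layout are made up for illustration and
are not Quake's actual data structures:

    #include <stdint.h>
    #include <string.h>

    #define MAX_LEAVES 8192
    #define ROW_BYTES  (MAX_LEAVES / 8)

    /* PHS(i) = PVS(i) plus the union of PVS(j) for every leaf j
       visible from leaf i, i.e. direct plus one-bounce visibility. */
    void build_phs(const uint8_t pvs[][ROW_BYTES],
                   uint8_t phs[][ROW_BYTES], int num_leaves)
    {
        for (int i = 0; i < num_leaves; i++) {
            memcpy(phs[i], pvs[i], ROW_BYTES);  /* directly visible */
            for (int j = 0; j < num_leaves; j++) {
                if (pvs[i][j >> 3] & (1 << (j & 7))) {
                    /* leaf j is visible from i: add its PVS too */
                    for (int b = 0; b < ROW_BYTES; b++)
                        phs[i][b] |= pvs[j][b];
                }
            }
        }
    }

A sound event in leaf s then only needs to be sent to clients whose current
leaf has bit s set in its PHS row, which is exactly the bandwidth win the
quote describes.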

------
rasz_pl
It uses the same algorithms 3D engines use to compute lighting and shadows:

[http://youtu.be/IyUgHPs86XM](http://youtu.be/IyUgHPs86XM)

It should be easy to implement in hardware on a GPU.
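
To illustrate the overlap: the shadow-ray test from graphics doubles as a
crude occlusion test for sound. This is just a sketch of the shared idea, not
Raya's actual algorithm; scene_intersect is a hypothetical stand-in for a real
BVH/BSP query:

    #include <math.h>

    typedef struct { float x, y, z; } vec3;

    /* Stand-in for a real ray/scene intersection query; returns 1 if
       any geometry blocks the ray within max_dist. */
    static int scene_intersect(vec3 origin, vec3 dir, float max_dist)
    {
        (void)origin; (void)dir; (void)max_dist;
        return 0;  /* replace with a BVH or BSP traversal */
    }

    /* Direct-path gain from source to listener: 0 if a wall blocks
       the path (like a shadow ray), else inverse-distance falloff. */
    float direct_path_gain(vec3 src, vec3 lst)
    {
        vec3 d = { lst.x - src.x, lst.y - src.y, lst.z - src.z };
        float dist = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
        vec3 dir = { d.x / dist, d.y / dist, d.z / dist };

        if (scene_intersect(src, dir, dist))
            return 0.0f;
        return 1.0f / dist;  /* free-field distance attenuation */
    }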

BTW, I vaguely remember Carmack wanting to implement a fully software 3D
sound engine for Doom 3, but a patent dispute forced id to support Creative
EAX instead.

~~~
kgabis
It's quite different from what we do. However, we do also have working GPU
implementations in CUDA and OpenCL :)

~~~
jbarrow
I have to admit, I thought it was just pretty standard ray-tracing as well.
I'd be curious to find out how it's different. I followed the link through to
the website, but didn't see much additional information on it.

~~~
kgabis
It's different mostly due to diffraction, which is negligible in the case of
light and very important when simulating sound.

~~~
nitrogen
Diffusion is important for light too; see
[https://en.wikipedia.org/wiki/Radiosity](https://en.wikipedia.org/wiki/Radiosity)

~~~
kgabis
Diffusion is not diffraction; they're different phenomena.

~~~
nitrogen
Radiosity simulates diffusion, though. Are you saying you also simulate
diffraction? In visual rendering, that might fall under something called
"caustics" or "photons" (I use quote marks because renderer photons are not
quite the same as real photons).

~~~
kgabis
Caustics are caused by refraction or reflection, not diffraction. There are
many analogies between simulating light and sound, but you take different
shortcuts to make those simulations fast. E.g. you don't simulate diffraction
when simulating light (its effect is negligible), and you discard small
objects when simulating sound. Those differences exist mostly because the
visible light spectrum and the audible sound spectrum are so far apart.
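
A quick back-of-envelope check of that last point, just plugging
wavelength = speed / frequency into standard constants:

    #include <stdio.h>

    /* Diffraction is strong when obstacles are comparable in size to
       the wavelength, so compare typical sound and light wavelengths. */
    int main(void)
    {
        double c_sound = 343.0;   /* speed of sound in air, m/s */
        double c_light = 3.0e8;   /* speed of light, m/s */

        printf("1 kHz tone:   %g m\n", c_sound / 1e3);   /* ~0.34 m */
        printf("20 kHz tone:  %g m\n", c_sound / 2e4);   /* ~17 mm  */
        printf("green light:  %g m\n", c_light / 6e14);  /* ~500 nm */
        return 0;
    }

Audible wavelengths run from millimeters to meters, the same scale as doors
and furniture, so sound bends around them; visible light is about six orders
of magnitude shorter, so it casts sharp shadows instead.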

------
spyder
Looks very similar to GSound:
[http://www.carlschissler.com/gsound/index.php?page=home](http://www.carlschissler.com/gsound/index.php?page=home)
But it sounds like neither of them uses HRTF to produce realistic binaural
audio.

------
angry_octet
Please don't even bother trying to peddle your stuff on HN. No technical
content, nothing open source; why are you wasting our time? If readers are
interested in this area:
[http://google.com/#q=acoustic+ray+tracing](http://google.com/#q=acoustic+ray+tracing)

------
Arelius
It's actually pretty impressive, what do your licensing terms look like?

~~~
kgabis
Please contact bziolko [at] agh.edu.pl for information about licensing.

------
hcarvalhoalves
Does it simulate different delays for left / right ear?

~~~
kgabis
Yes, we have implemented HRTF ([http://en.wikipedia.org/wiki/Head-related_transfer_function](http://en.wikipedia.org/wiki/Head-related_transfer_function)).
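
The simplest cue an HRTF captures is the interaural time difference (ITD).
As an illustration only (not Raya's implementation), Woodworth's
spherical-head model approximates it as ITD = (r/c) * (theta + sin(theta)):

    #include <math.h>

    /* Woodworth's spherical-head approximation of the interaural time
       difference, where theta is the source azimuth in radians
       (0 = straight ahead). A real HRTF also encodes frequency-dependent
       level and spectral cues, which this simple model omits. */
    double itd_seconds(double azimuth_rad)
    {
        const double head_radius = 0.0875;  /* ~8.75 cm, average head */
        const double c = 343.0;             /* speed of sound, m/s */
        return (head_radius / c) * (azimuth_rad + sin(azimuth_rad));
    }

A source at 90 degrees gives roughly (0.0875 / 343) * (pi/2 + 1) ~ 0.66 ms,
in line with the commonly cited maximum ITD of about 0.6-0.7 ms.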

