Raya – Real-time sound simulation library for games [video] (youtube.com)
31 points by kgabis on Aug 7, 2014 | 18 comments



Nice. It reminds me of what Carmack wanted to do in Quake 1: add a Potentially Hearable Set (PHS) in addition to the Potentially Visible Set (PVS) used for the game's hidden surface removal. After some investigation, it seems it was actually used in QuakeWorld:

An interesting extension of the PVS is what John calls the potentially hearable set (PHS)—all the leaves visible from a given leaf, plus all the leaves visible from those leaves—in other words, both the directly visible leaves and the one-bounce visible leaves. Of course, this is not exactly the hearable space, because sounds could echo or carry further than that, but it does serve quite nicely as a potentially relevant space—the set of leaves that have any interest to the player. In Quake, all sounds that happen anywhere in the world are sent to the client, and are heard, even through walls, if they’re close enough; an explosion around the corner could be well within hearing and very important to hear, so the PVS can’t be used to reject that sound, but unfortunately an explosion on the other side of a solid wall will sound exactly the same. Not only is it confusing hearing sounds through walls, but in a modem game, the bandwidth required to send all the sounds in a level can slow things down considerably. In a recent version of QuakeWorld, a specifically multiplayer variant of Quake I’ll discuss later, John uses the PHS to determine which sounds to bother sending, and the resulting bandwidth improvement has made it possible to bump the maximum number of players from 16 to 32. Better yet, a sound on the other side of a solid wall won’t be heard unless there’s an opening that permits the sound to come through. (In the future, John will use the PVS to determine fully audible sounds, and the PHS to determine muted sounds.) Also, the PHS can be used for events like explosions that might not have their center in the PVS, but have portions that reach into the PVS. In general, the PHS is useful as an approximation of the space in which the client might need to be notified of events.

http://www.phatcode.net/res/224/files/html/ch70/70-02.html
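For illustration, here's a minimal sketch of how a PHS could be built from per-leaf PVS bitvectors, following Abrash's description above. The array layout and names are my assumptions, not Quake's actual code:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical sketch: the PHS of a leaf is its PVS plus the PVS of
     * every leaf visible from it (one-bounce visibility).
     * NUM_LEAVES, pvs[][] and phs[][] are assumed for illustration. */
    #define NUM_LEAVES 1024
    #define ROW_BYTES  ((NUM_LEAVES + 7) / 8)

    static uint8_t pvs[NUM_LEAVES][ROW_BYTES]; /* bit j of pvs[i]: leaf j visible from leaf i */
    static uint8_t phs[NUM_LEAVES][ROW_BYTES];

    void build_phs(void)
    {
        for (int i = 0; i < NUM_LEAVES; i++) {
            /* Start from the directly visible leaves. */
            memcpy(phs[i], pvs[i], ROW_BYTES);

            /* OR in the PVS of every leaf visible from leaf i. */
            for (int j = 0; j < NUM_LEAVES; j++) {
                if (pvs[i][j >> 3] & (1 << (j & 7))) {
                    for (int b = 0; b < ROW_BYTES; b++)
                        phs[i][b] |= pvs[j][b];
                }
            }
        }
    }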


It uses the same algorithms 3D engines use to compute lighting and shadows:

http://youtu.be/IyUgHPs86XM

It should be easy to implement in hardware on a GPU; a rough sketch of the idea is below.
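As a toy illustration of that shadow-ray analogy (a hypothetical occlusion test, not Raya's actual algorithm): trace a ray from the listener to a source and attenuate on occlusion, exactly like a shadow ray in a renderer.

    #include <stdbool.h>

    /* Hypothetical sketch of a shadow-ray-style audibility test; none of
     * these names come from Raya. */
    typedef struct { float x, y, z; } vec3;

    /* Trivial stand-in geometry: a single wall in the plane x = 0.
     * A real engine would query its BSP or BVH here instead. */
    static bool scene_ray_blocked(vec3 a, vec3 b)
    {
        return (a.x < 0.0f) != (b.x < 0.0f); /* segment crosses the wall */
    }

    /* Crude per-source gain: inverse-square distance falloff, heavily
     * attenuated when the direct path is occluded, like a shadow ray. */
    float source_gain(vec3 listener, vec3 source)
    {
        float dx = source.x - listener.x;
        float dy = source.y - listener.y;
        float dz = source.z - listener.z;
        float dist2 = dx * dx + dy * dy + dz * dz;

        float gain = 1.0f / (1.0f + dist2);   /* distance falloff */
        if (scene_ray_blocked(listener, source))
            gain *= 0.1f;                     /* occluded: muffle, don't mute */
        return gain;
    }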

BTW, I vaguely remember that Carmack wanted to implement a fully software 3D sound engine for Doom 3, but a patent dispute forced id to support Creative EAX instead.


It's quite different from what we do. However, we do also have working GPU implementations in CUDA and OpenCL :)


I have to admit, I thought it was just pretty standard ray-tracing as well. I'd be curious to find out how it's different. I followed the link through to the website, but didn't see much additional information on it.


It's different mostly due to diffraction, which is negligible in the case of light but very important when simulating sound.


Diffusion is important for light too; see https://en.wikipedia.org/wiki/Radiosity


Diffusion is not diffraction, they're different phenomena.


Radiosity simulates diffusion, though. Are you saying you also simulate diffraction? In visual rendering, that might fall under something called "caustics" or "photons" (I use quote marks because renderer photons are not quite the same as real photons).


Caustics are caused by refraction or reflection, not diffraction. There are many analogies between simulating light and sound, but you take different shortcuts to make those simulations fast. E.g. you don't simulate diffraction when simulating light (its effect is negligible), and you discard small objects when simulating sound. Those differences exist mostly because the visible light spectrum and the audible sound spectrum are so far apart.
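To make that "discard small objects" shortcut concrete, here's a toy gate on obstacle size relative to wavelength. The quarter-wavelength threshold is my arbitrary illustration, not anything from Raya:

    #include <stdbool.h>

    /* Hypothetical sketch: audible sound spans roughly 17 m (20 Hz) down
     * to 17 mm (20 kHz), so objects much smaller than the wavelength
     * barely scatter it and can be dropped from the simulation. */
    #define SPEED_OF_SOUND_M_S 343.0f

    bool obstacle_matters(float obstacle_size_m, float frequency_hz)
    {
        float wavelength = SPEED_OF_SOUND_M_S / frequency_hz;
        /* Illustrative threshold: keep objects at least a quarter
         * wavelength across, discard smaller ones. */
        return obstacle_size_m >= 0.25f * wavelength;
    }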


Looks like you shoot rays around from the player.


It's not that easy (unfortunately). Those are sound paths.


Yeah. Carmack wanted to model the distortion the ear canal applies to positional audio, and to calculate the effects of sound diffusing off surfaces the way light does. Creative owns patents on both of those techniques. :(


Looks very similar to GSound: http://www.carlschissler.com/gsound/index.php?page=home But it sounds like neither of them uses an HRTF to produce realistic binaural audio.
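For anyone unfamiliar with the technique: an HRTF is typically applied by convolving the mono source with direction-dependent left/right head-related impulse responses (HRIRs). A minimal time-domain sketch, assuming the HRIRs come from some measured set; a real implementation would use partitioned FFT convolution:

    #include <stddef.h>

    /* Hypothetical sketch of binaural rendering: direct-form convolution
     * of a mono signal with left/right HRIRs for one direction. */
    void hrtf_render(const float *mono, size_t n,
                     const float *hrir_l, const float *hrir_r, size_t taps,
                     float *out_l, float *out_r)
    {
        for (size_t i = 0; i < n; i++) {
            float l = 0.0f, r = 0.0f;
            for (size_t k = 0; k < taps && k <= i; k++) {
                l += hrir_l[k] * mono[i - k];
                r += hrir_r[k] * mono[i - k];
            }
            out_l[i] = l;
            out_r[i] = r;
        }
    }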


Please don't even bother trying to peddle your stuff on HN. No technical content, nothing open source; why are you wasting our time? If readers are interested in this area: http://google.com/#q=acoustic+ray+tracing


It's actually pretty impressive, what do your licensing terms look like?


Please contact bziolko [at] agh.edu.pl for information about licensing.


Does it simulate different delays for left / right ear?
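For context, a common first-order model of that interaural time difference (ITD) is Woodworth's spherical-head approximation; this is an illustration of the concept, not something the video confirms Raya uses:

    #include <math.h>

    /* Hypothetical sketch: Woodworth spherical-head model,
     * ITD = (a / c) * (theta + sin(theta)), where a is head radius,
     * c the speed of sound, theta the source azimuth in radians
     * (0 = straight ahead). */
    #define HEAD_RADIUS_M      0.0875
    #define SPEED_OF_SOUND_M_S 343.0

    double itd_seconds(double azimuth_rad)
    {
        return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S)
             * (azimuth_rad + sin(azimuth_rad));
    }
    /* At 90 degrees this gives about 0.66 ms, matching the commonly
     * quoted maximum ITD for a human head. */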




