Hacker News
Show HN: An algorithmic audio landscape (ambient.garden)
193 points by pierrec on March 3, 2024 | 40 comments
This is a web audio experiment I've been wanting to do for a long time. Basically an ambient music composition, but all the sound elements are laid out in space, and that musical space can be explored freely.

It's definitely inspired by in-world music that sometimes appears in games. I basically took that concept, keeping the music aspect, and dropping the entire "game" aspect.

I also turned it into a more "traditional" non-interactive album, but since I started with code, why not program the whole thing? I had a blast making the entire album from code; the complete source for the album is here: https://github.com/pac-dev/AmbientGardenAlbum




OK, the "everything is made of a million dots" rendering style is pretty neat. I don't think I've seen its kind before. I also like that it somehow ran perfectly on my phone.

IMO interactive in-world music peaked with Portal 2, the effort put into that was impressive.


I think I agree on Portal 2! Honestly, the rendering style may have been an unnecessary challenge. It was way harder to implement than I thought and caused me to almost abandon the whole thing.


isn't the "everything is made of dots" the definition of pointillism?


well it’s definitely inspired by pointillism, but they’re doing things like scaling the dots based on depth, etc.
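For reference, depth-based scaling usually amounts to dividing a base size by view-space depth. A minimal sketch of the idea (purely illustrative, not the site's actual code):

```python
# Hypothetical illustration: perspective scaling of point sprites.
# A point's on-screen size shrinks in proportion to its depth.
def point_size_px(base_size_px: float, focal_px: float, depth: float) -> float:
    """Screen-space size of a dot at the given view-space depth."""
    return base_size_px * focal_px / max(depth, 1e-6)

# A dot twice as far away renders at half the size.
near = point_size_px(4.0, 800.0, 10.0)  # 320.0 px
far = point_size_px(4.0, 800.0, 20.0)   # 160.0 px
```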


I don’t often bother to log in and vote, but this is such a cool idea AND so well executed. Fantastically well done. Thanks for sharing.


The procedurally generated audio is really cool, and it feels similar to Cocoon's background music.

Do you know of any Python library that lends itself well to experimenting with procedural audio generation?
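For context, even the standard library can render simple procedural audio before you reach for a dedicated package. A minimal sine-drone WAV writer (purely illustrative, no external dependencies):

```python
import math
import struct
import wave

def write_drone(path: str, freqs, seconds: float = 2.0, rate: int = 44100) -> None:
    """Sum a few sine partials into a 16-bit mono WAV file."""
    n = int(seconds * rate)
    amp = 0.8 / len(freqs)  # headroom so the summed partials never clip
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        frames = bytearray()
        for i in range(n):
            t = i / rate
            s = sum(amp * math.sin(2 * math.pi * f * t) for f in freqs)
            frames += struct.pack("<h", int(s * 32767))
        w.writeframes(frames)

write_drone("drone.wav", [110.0, 165.0, 220.0])  # a simple harmonic stack
```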



This is really cool. The music and world (especially the mysterious structure in the distance) remind me of the eeriness of Myst.

Can you share more info about how you built the 3D world made of dots?


Thanks! It was honestly kind of a dark path. The graphics are based on Three.js (which is awesome), and I initially rendered everything with a rarely-used WebGL/OpenGL feature called GL_POINTS, which seemed like a smart way of doing an impressionist style. Three.js exposes it like this: https://threejs.org/docs/#api/en/objects/Points

Of course, things are never that simple, and it turns out you can't just draw a million points with GL_POINTS, performance is terrible, they don't efficiently occlude each other, etc. So I rendered the terrain as regular triangles, and wrote a terrain shader that makes it look like the triangles are made of points. I re-wrote that shader several times, and actually, it's still not working correctly, but at some point you have to stop. That shader is here: https://github.com/pac-dev/AmbientGarden/blob/master/Web/ver...
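The core trick of such a shader, reconstructed here in Python rather than GLSL as a rough sketch (the real shader is more involved), is to tile the surface into cells and discard any fragment that falls outside the round dot at each cell's center:

```python
import math

def fragment_visible(u: float, v: float, cell: float = 0.05, radius: float = 0.4) -> bool:
    """Emulates a per-fragment test: is this surface point inside a dot?

    The surface is tiled into cells of size `cell`; each cell contains one
    round dot whose radius is a fraction of the cell. Fragments outside
    the dot would be discarded, making a solid triangle look like points.
    """
    # Position within the current cell, normalized to [0, 1)
    fu = (u / cell) % 1.0
    fv = (v / cell) % 1.0
    # Distance from the cell's center
    d = math.hypot(fu - 0.5, fv - 0.5)
    return d <= radius

fragment_visible(0.025, 0.025)  # True: center of a cell is inside the dot
fragment_visible(0.0, 0.0)      # False: cell corner is outside the dot
```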


I love it. I tried the new Zelda game during lockdown, and wandering around as the music changes is really fun.

Digging around your website a bit I see you've also made an interactive version of Paul Batchelor's sporth language cookbook[0], this is very cool, thank you! I'd like to get into sporth but I had a hard time with the stack-based language approach. Your version of the cookbook seems like a very nice way to get more comfortable with it.

[0] https://audiomasher.org/learn


Sporth is the most fun I've had programming audio. The Forth-like syntax is perfect for building audio graphs in a way that's simple and terse. It was also like a "gateway drug" to go from programming synths/effects to programming actual music. I ended up not using it for this project, just JS+Faust, partly because I wanted to get more familiar with Faust.


What's the copyright license on the files?

That aside, changing instrument loops at timestamps is an interesting method of making music compared to "the usual" method of having an instrument play a pitch for this long, play another pitch for that long, pause for this long.


It's an interesting question. The original website actually ran the music generation code straight in the browser with no samples used at all, but it was a bit CPU-intensive. You can still access that version at https://ambient.garden/edit. So my first question would be: if someone records the output of that version, how does licensing work on that recording? Their browser is generating the audio.

The current main version plays the same sounds, but pre-rendered, using cross-faded loops like you're describing. I considered the audio files to be purely an optimization, so I didn't consider licensing on those. So if the code is MIT-licensed and generates the audio, what's the license on the audio? The repository doesn't contain audio files, but code that generates them. Can I add a license for those non-existing files in the repository? Unfortunately, all I have is more questions.
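A basic equal-power crossfade between two loops is a small amount of code. A sketch of the general technique being described (my own illustration, not the project's code):

```python
import math

def crossfade(a, b):
    """Equal-power crossfade from loop tail `a` into loop head `b`.

    Both inputs are equal-length lists of float samples; the output keeps
    the combined energy roughly constant across the transition.
    """
    n = len(a)
    out = []
    for i in range(n):
        t = i / (n - 1)  # 0 -> 1 across the overlap region
        gain_a = math.cos(t * math.pi / 2)  # fades 1 -> 0
        gain_b = math.sin(t * math.pi / 2)  # fades 0 -> 1
        out.append(a[i] * gain_a + b[i] * gain_b)
    return out

# At the midpoint both gains are cos(pi/4) ~= 0.707, so power stays constant.
faded = crossfade([1.0] * 5, [1.0] * 5)
```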


From that, I take it the code was meant to be MIT-licensed?

For example, I found https://ambient.garden/patches/ while wandering through the source code (a little worried by the wall of shaders), and was wondering if the audio patch portions are usable. It would probably take me quite a while to figure out what it all does, yet it looks like a good starting point for music projects.

Also, I like the references to other people's work in the code base. From my own perspective, it's a nice way to acknowledge where you found something. Plus, if someone else is curious, there's at least somewhere to look.

Also, also, it's kind of neat that you bothered to put in quasi-secrets (although I did not sit through the entire walkthrough, so maybe the walkthrough path eventually passes through all the nodes).


Impressive! I just added it to the https://faust.grame.fr/community/powered-by-faust/ page.


> So my first question would be, if someone records the output of that version, how does licensing work on that recording? Their browser is generating the audio.

> The current main version plays the same sounds, but pre-rendered, using cross-faded loops like you're describing. I considered the audio files to be purely an optimization, so I didn't consider licensing on those.

IANAL and my comment depends on US copyright law. I would argue that the source code files and the audio that the user's computer generates from the source code have the same encoding-decoding relation that an MP3 file has to the audio that the user's computer generates from the MP3. The copyright holder of the MP3 file has copyright control of the decoded audio even if the user's computer does the decoding, so you should have copyright control of the audio directly generated by your code on the user's computer.

If you put a certain license on the source code but don't explicitly specify a license for the audio files you generated for Spotify and Apple Music, then I think the permissions granted by the license will apply to the audio generated by the user's computer, but not necessarily to the audio generated by your computer. You could explicitly permit certain actions for the source code and explicitly prohibit those actions for the audio files generated by the user's computer, with the caveat that fair use might make some of the prohibitions on the latter unenforceable due to the permissions on the former.

Since you are the author of the source code files, you can give the generated audio files you put on Spotify an identical or different license from the repository files. You can specify both the license for the repository files and the license for the generated audio files you put on Spotify anywhere, including in the repository readme.

> So if the code is MIT-licensed and generates the audio, what's the license on the audio? The repository doesn't contain audio files, but code that generates them. Can I add a license for those non-existing files in the repository? Unfortunately, all I have is more questions.

If you put the code under the MIT license, then the license on the audio generated from the code by the user's computer is the user's choice, because the MIT license is a permissive free and open-source license. If you put the source code files under the MIT license, then the user can do just about anything with the source code files and anything derived from them, as long as the user follows the few requirements in the license (in the case of the MIT license: providing correct attribution, preserving the license notice, and minding the lack of a warranty).

(Well, the user also has to obey trademarks and patents on the original files. The MIT license doesn't have a patent grant, while the Apache 2.0 license frees the user from the author's patents on the files.)

"Just about anything" includes releasing copies or derivatives for any purpose under any license (including proprietary ones) of the user's choosing. I think of it as: the user can wrap their copy of MIT-licensed code into any license that doesn't prohibit inclusion of MIT-licensed code, but cannot remove or change the license of the original code.


Thanks for sharing this impressive work and all the code. I'm listening to Ambient Garden right now; it is perfect for coding! And as I'm working on the Faust compiler itself, the recursiveness of the situation is quite ... interesting ;-)


It's so cool. It doesn't work on the latest versions of Chrome and Brave on Apple Silicon (M3 Pro). It works smoothly on Safari, though.


Interesting. If anyone can reproduce the problem on Chrome + Apple Silicon, I would appreciate it if you could open an issue including the browser console output at https://github.com/pac-dev/AmbientGarden


Extremely impressive and really inspiring. I'm actually working on something similar but in Unity3D. Would love to get your feedback. It's way more hand crafted and curated compared to yours but I'd like to explore generative aspects like you have. The garden sounds really organic, very similar feeling to Deep Listening by sound artist Pauline Oliveros.


Sure, my email is on my website, which is in my profile. I will gladly check out other projects/works-in-progress in this genre.


I spent a while trying to hear objects positioned in space, so that if the drone is to my left then I hear it from the left... but I guess it doesn't work like that, the position just affects the amplitude. Still very neat, of course.


You are completely right, I never got around to implementing panning (or more advanced spatialization), which should be kind of a no-brainer for a project like this. The more I worked on it, the less I noticed this feature was missing! I'd like to add some subtle panning at some point.
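For what it's worth, basic stereo panning from a source's horizontal angle is a small formula. A sketch of an equal-power pan law (just an illustration, not what's in the project):

```python
import math

def pan_gains(azimuth_rad: float):
    """Equal-power stereo gains for a source at a horizontal angle.

    azimuth_rad: -pi/2 is hard left, 0 is center, +pi/2 is hard right.
    Returns (left_gain, right_gain) with constant combined power.
    """
    # Map the angle to a pan position t in [0, 1]
    t = (azimuth_rad / math.pi) + 0.5
    t = min(max(t, 0.0), 1.0)
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)

left, right = pan_gains(0.0)  # centered: both gains ~0.707
```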


I also like the impressionist art style; I've never seen anything like it before. But it runs at very low frame rates even on a Snapdragon 8+ Gen 1, which has a fairly beefy GPU for a phone. It performs a bit better in Firefox than in Chrome on Android 14.


Yeah, I'm not sure beefiness is the actual problem; I tested it on a phone with a Snapdragon 855, which is supposed to be slower than yours in every respect, and it ran smoothly enough. I guess that's why real gamedevs spend a lot on device testing.


It actually performs fine if I limit the screen to 60 Hz in the Android settings. Apparently the frame rate gets lower the higher the Hz setting for the screen. On 144 Hz it basically stops. There is also some crackling noise after a while, but that might be unrelated.


Thanks. I just pushed a new version that throttles the FPS to 60. If you could test again on your device that would be great :D


Cool, that fixed it. It works well now on higher Hz settings.


Great work! Why does autopilot stop around the 12 min mark?

UPD: I see that it cannot load an mp3 file (net cache operation not supported). I reloaded / cleaned my cache; maybe you regenerated the sounds and I just had a stale cache.

UPD2: Nope, still failing, ex: GET https://ambient.garden/vA/generated/audio/rdrone_2o3_2o3_7o6... net::ERR_CACHE_OPERATION_NOT_SUPPORTED

Seems to be specific to Chromium-based browsers.


Very cool! I like how the various sources blend well together, regardless of where you go.

Nice work!


I do not hear any music on an iPhone 15. Tried Firefox and Safari.


I don't have the real device but I tested it on Safari on iPhone 15 using BrowserStack and it seemed to work.


Is your hardware mute switch enabled?


That was the problem, thanks :) So I learned something today: apps like Spotify will play regardless of the hardware mute switch, but a browser clearly mutes.


Yes, the AVAudioSession.Category parameter defaults to “soloAmbient”, signifying that the sound playback is non-primary and non-mixable. This pauses all other audio, respects the mute switch, and pauses on lock.

For cases where sound playback is central to the successful use of the application, the “playback” category should be used. This allows audio to be generated even when the hardware mute is enabled and the screen is locked, and has options for mixing. This is what apps like Spotify, Apple Music, and Safari set when playing video content.

I do not know of a way to configure this in HTML; I think it's up to Safari to interpret the context based on user intention. It is possible that if the app defaulted to mute and had a user-triggered feature that started the audio session, Safari might assign it the “playback” category.

https://developer.apple.com/documentation/avfaudio/avaudiose...


I didn't get music with Firefox on Windows either. Will keep trying, though.

Edit: Oddly, if I open and close uBlock Origin (nothing listed as blocked), the animation proceeds frame by frame: one frame for open, one frame for close. Dropped an issue for you on your GitHub.


Amazing idea and execution. Thank you for sharing.


Nice work. I particularly like the vocal sounds.


This looks really awesome. Great work!


Very cool. Great work!



