- https://gibber.cc/ is also JS-based. The library and functions provided make for some pretty terse yet readable code, though focused on somewhat traditional synthesis methods (and I still don't think JS is ideal for this...)
- http://ctford.github.io/klangmeister/ is based on Clojurescript, which is a safe choice since Overtone has long demonstrated that Lisp is great for music programming. Right now there's not much of a library to go with it, though.
- http://faust.grame.fr/editor/ - Faust is one of the greatest music programming languages, and it also runs in the browser, but I think they haven't sufficiently polished the browser experience in a way that lets a novice learn the language in the browser.
- https://audiomasher.org/browse - I'm biased on that one since I made it. An obvious weakness is that the language is a little frightening at first glance, but there's a tutorial. It's terse yet open-ended, and the built-in library is pretty extensive.
For SOUL, it seems running in the browser is just a little demo, and the language is meant to be used in other situations. In fact, most of the above projects have grander aspirations than running in the browser - it's generally just a way of getting people to try out the language or system. But I still wish more projects would take the extra step and create a community-style website where anyone can upload and play with each other's code, otherwise the web experience just seems like a proof-of-concept.
Impressive what people are making with something so low level. Geez, cool mixture of artistic and technical skill.
This is one of the first ones; a little bit dated now, but there's a ton of material and some nice books
- Pure Data (max/msp)
Graphical audio language, also popular and very nice for beginners
Object-oriented platform for audio synthesis (Smalltalk)
Functional audio synthesis platform using Clojure (totally awesome)
Was he actually the original dev?
I get that he contributed a lot to the project, but overtone isn't dead, and I don't think he was the original dev.
I'm the developer of SOUL - happy to answer any questions you guys have.. :)
In the golang community, for example, Google has recently decided to build and officially support a language server (called 'gopls'). How it works: you build a language server once and get features like typecheck errors, auto-complete, go-to-definition, code peeking, documentation on hover, etc. in all editors/IDEs that support the protocol.
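Under the hood the protocol is just JSON-RPC with a small framing header. As a rough illustration (not tied to gopls specifically), here is a minimal Python sketch of how an editor would frame a "documentation on hover" request; the file path is made up:

```python
import json

def lsp_message(method, params, msg_id=1):
    """Frame a JSON-RPC request the way LSP expects:
    a Content-Length header followed by the JSON body."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": msg_id,
        "method": method,
        "params": params,
    })
    # Body here is ASCII, so str length == byte length.
    return f"Content-Length: {len(body)}\r\n\r\n{body}".encode()

# Ask for hover documentation at line 10, column 4 of a (hypothetical) file:
msg = lsp_message("textDocument/hover", {
    "textDocument": {"uri": "file:///tmp/main.go"},
    "position": {"line": 10, "character": 4},
})
```

The editor writes that to the server's stdin and reads a framed response back; that single wire format is why one server can feed every editor.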
LSP Website: https://microsoft.github.io/language-server-protocol/
Recent golang talk: https://youtu.be/bNFl7HcyDao?t=354
Is there going to be a marketplace for professional SOUL-based VST plugins ?
A way to browse all open source SOUL scripts and share modules so that the community will be able to build on top of previous scripts (without copy pasting all the time) ?
Will all SOUL scripts be runnable in the browser, like Shadertoy.com? If not, what features will differ between web / desktop / hardware? How are you going to handle the diversity of situations where one could use SOUL?
Side note: http://soul.dev/ doesn't have HTTPS?
Yep, we'd love to make soul.dev into a place where you can browse snippets of code, search for people's implementations of various DSP algorithms and try them out and put them together - that's all definitely in the plan!
And as far as cross-platform-ness goes, I don't currently foresee a situation where any SOUL code wouldn't run on all platforms.. Running it as WASM in a browser may not be particularly quick compared to the same code running via LLVM JIT or on a DSP, but it should still work.
(And yes... soul.dev certainly uses HTTPS.. maybe whoever posted this story typed it without that)
I own the domain adc.io, for a project that went nowhere years ago, so now I mostly use it for some personal things. Would you guys be interested in doing a trade, or want to buy it or something? I feel kinda trashy asking that here since it doesn't add to the discussion, but I like what y'all are doing, and didn't even know that conference existed until I clicked that link.
could you offer to compile/package SOUL code into a VST as a service?
And yes, the same thing would also work nicely if hosted as a web-service
A few more questions:
- How will the language/API/reference VM be licensed?
- Will the API depend in any way on the JUCE ecosystem?
- Do you expect users to bundle the VM with their plugins/applications or to have one installation on a target machine? Or will plugins need to be supported by a host that embeds the VM?
Licensing: very permissive for developers; probably commercial deals for companies who want to ship SOUL-compliant hardware or drivers
JUCE dependency: no, we'll want to make this as vanilla as possible, to encourage its use in many ecosystems. There'll be JUCE integration, but also stand-alone C++ and a flat C API so it can be wrapped in other languages like python, C#, Java etc
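To make the "flat C API wrapped in other languages" point concrete, here's a hedged sketch of what wrapping such an API from Python via ctypes typically looks like. Every symbol name here (`libsoul.so`, `soul_compile`, `soul_release`) is invented for illustration; no such API has been published:

```python
import ctypes

def load_soul(lib_path="libsoul.so"):
    """Bind a hypothetical flat-C SOUL library via ctypes.
    A flat C API (no C++ classes in the ABI) is what makes
    this kind of thin wrapper possible from Python, C#, Java, etc.
    All names below are assumptions, not a real API."""
    lib = ctypes.CDLL(lib_path)
    lib.soul_compile.argtypes = [ctypes.c_char_p]   # SOUL source text
    lib.soul_compile.restype = ctypes.c_void_p      # opaque program handle
    lib.soul_release.argtypes = [ctypes.c_void_p]   # free the handle
    lib.soul_release.restype = None
    return lib
```

The same shape works from any language with a C FFI, which is presumably the point of keeping the API "as vanilla as possible".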
We'll offer an embeddable JIT VM, but our end-goal with this is for there to be drivers in the OS or external hardware which do the work, and the API would just send them the code (like, e.g., OpenGL shaders)
RE the licensing, I'm asking more about the language itself, the IR, and whether it is going to be possible to develop independent implementations.
Lastly, because I always forget to ask about the IR: why not WASM?
Re: building a 3rd party back-end, I guess that might be technically possible but we'd rather keep control of that side of things. We've not fully decided our approach there yet.
Why not WASM? Well.. quite a few complicated reasons, involving things like the ability to generate continuations, to get good auto-vectorisation, and to be portable to weird DSP architectures. Also, the system can't just be a straight program; it has to stay graph-structured, at least at a high level. And secure, so LLVM IR also wasn't quite the right shape for it. We sweated over this decision, believe me!
The thing I tried experimentally implementing is basically a 128x128 matrix mixer with a little bit of extra processing. On a three-year-old MacBook Pro, the GPU barely had to lift a finger to crunch the data, but the round-trip latency was still high enough that it would struggle to keep up with anything less than a buffer size of 512 or so at 48kHz (which is on the high end for live mic processing). It would be fantastic for offline processing with larger buffers, though.
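For anyone unfamiliar with the term, a matrix mixer just computes each output channel as a weighted sum of all input channels, once per frame. A minimal plain-Python sketch of the per-frame math (ignoring buffering and the GPU round-trip that actually causes the latency):

```python
def matrix_mix(gains, frame):
    """Mix one frame of audio.
    'gains' is an N x N matrix (list of rows) of channel gains;
    'frame' is a list of N input samples, one per channel.
    Each output channel is a weighted sum of every input channel.
    The GPU-friendly part: every frame in a buffer can be mixed
    independently, so a 128x128 mix over a big buffer is just a
    batched matrix multiply."""
    n = len(frame)
    return [sum(gains[out][ch] * frame[ch] for ch in range(n))
            for out in range(n)]

# 2x2 toy example: a gain matrix that swaps the two channels.
swapped = matrix_mix([[0.0, 1.0],
                      [1.0, 0.0]], [0.25, 0.75])
```

At 128x128 this is ~16k multiply-adds per frame, which is trivial for a GPU; the bottleneck described above is purely the transfer latency, not the arithmetic.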
I haven't tried CUDA or OpenCL, so I don't know if the situation is the same there—but of course they have the problem of vendor support as well.
..however, having a few conversations with more knowledgeable people has changed our minds, and we're definitely going to give it a try on Metal and Vulkan to see what happens. Should be interesting.
I don't have any experience with Metal or Vulkan, but my intuition is that audio DSP is going to include a healthy dose of linear algebra and multivariate calculus. That points towards some kind of fit with the GPU. Given that basic GPUs are available on practically every device (including phones), it seems like a fantastic fit. Even audio hardware developers would benefit, since it would open up access to commodity-priced chips (rather than custom ASIC/FPGA/whatever).
And even for very simple audio loads, there's often unused capacity in compute cores (even when running games) which could do the job "for free" without bothering the CPU.
Also, we do know a few people who want to use SOUL to write audio code which does need high parallelism. It's not a super-common use-case for audio, but it does exist.
It's hard because you need some serious mathematical skills to understand the DSP itself (that's the bit I'm lacking in..).
Then if you're building a "real" product, you're going to have to write in C++ and have a rock-solid understanding of concurrency, real-time coding and many other very tricky subjects which take huge amounts of experience to get good at.
I've been doing this for over 20 years so have lost sight of how beginners should learn it.. But most people seem to just dive in with an idea they want to build, and start trying to swim! It can be painful to see people using JUCE/C++ who don't really have any interest in C++ for its own sake, but who are struggling to get anywhere without putting the effort in to learn the language properly.
Currently, there are many blockers to using Julia in real-time applications, such as dynamic memory allocation and the lack of thread safety. But I find it promising that a subset of Julia could be used for real-time programming.
Some quick 2-second feedback - I tried running the default example in Brave, it failed, and then as a good user I wanted to report the bug to the community, but that requires finding the ROLI forum and signing up (providing DOB!) etc. I think it would be great if it were a bit easier to submit bug reports without as much hassle.
(We might open the soul.dev website too at some point, but that's private for now.)
To quote the overview: "The SOUL platform is a language and an API. The language is a small, carefully crafted DSL for writing the real-time parts of an audio algorithm. The API is designed to deploy that SOUL code to heterogeneous CPUs and DSPs, both locally and remotely."
That said, still disappointing that there doesn't appear to be any info on an actual compiler or runtime or however this is supposed to be used in actual projects; the editor configs and code samples are a start, but kinda useless if the only way to actually put 'em to use is to use a buggy web app :)
And speaking of "buggy web app", mousewheel-scrolling in the playground seems to be entirely broken on Firefox on Linux. Apparently this is true of all instances of Microsoft's web-based "Monaco" editor widget (on which the Soul playground appears to be built). What the hell, Microsoft?
Especially if the syntax mimics C-like languages, what important advantages do we get?
The precedent to consider is shader languages. A language like GLSL constrains what the programmer can do, with very good reason: the probability a newcomer will write a performant-enough and correct-enough shader in a general purpose programming language is low.
Similarly with audio programming. Not blocking the audio thread is hard! Heck, just dealing with multithreading itself is hard. JUCE tries to make audio programming much simpler, but at the end of the day it's still C++, and footguns abound.
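One concrete example of the kind of discipline this takes: you can't lock or allocate on the audio thread, so data is usually handed to it through a preallocated single-producer/single-consumer ring buffer. A toy sketch of the pattern (a real one would be C++ with atomics and acquire/release ordering; this just shows the shape of the idea):

```python
class SPSCRing:
    """Toy single-producer/single-consumer ring buffer: the usual way
    to pass data to an audio callback without taking a lock or
    allocating on the audio thread. One slot is kept empty to
    distinguish 'full' from 'empty'."""
    def __init__(self, capacity):
        self.buf = [0.0] * capacity   # preallocated: no allocation later
        self.read = 0
        self.write = 0
        self.capacity = capacity

    def push(self, x):                # called by the non-audio thread
        nxt = (self.write + 1) % self.capacity
        if nxt == self.read:
            return False              # full: drop rather than block
        self.buf[self.write] = x
        self.write = nxt
        return True

    def pop(self):                    # called by the audio thread
        if self.read == self.write:
            return None               # empty: never wait
        x = self.buf[self.read]
        self.read = (self.read + 1) % self.capacity
        return x
```

Getting even this pattern right (memory ordering, false sharing, overflow policy) is exactly the footgun territory a constrained language can remove entirely.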
So I like to think of SOUL as a sort of shader language for audio. We built https://soul.dev/playground/ to be a sort of "Shadertoy for Audio."
..but we are attempting to do something that nobody else is, which is to provide a platform they can use as a target.
If we can do the hard work and graft of turning this into a super-portable runtime that can target whatever low-latency hardware the user has (and create a market for new audio accelerators that don't yet exist), then we expect it to be attractive for all those existing frameworks to use SOUL (or our IR) as the thing they emit, rather than using LLVM's JIT or WASM or just interpreters as they do now.
The main TL;DRs are:
- this needs to get JITed to compete with C++ performance, so dynamic and interpreted languages are out.
- it needs to stop people doing anything which is real-time unsafe, so any language which involves a heap or GC is out
- it needs to be secure enough to not pose a security risk if deployed to an embedded bare-metal device, so C is out.
- it needs to strongly enforce and represent a graph structure at a syntactic level, so.. pretty much all existing languages are out.
... except languages designed for real-time signal processing based on dataflow graphs such as Kronos (https://www.mitpressjournals.org/doi/pdfplus/10.1162/COMJ_a_...), Céu (http://www.ceu-lang.org/chico/ceumedia_webmedia16_pre.pdf), Antescofo (https://hal.inria.fr/hal-01585489) and the oldies such as SIGNAL, Lustre, Esterel, etc... :p
Then a faust2soul script can be used to directly compile Faust DSP to SOUL source code:
To test it, the generated SOUL code can then simply be copy/pasted into the SOUL playground.
Yes, could be that it works really well for machine control, robotics, etc. Hopefully as it gets more established, people from those worlds will get involved and see what happens.
One final note I'd like to add is that we've also been running a closed early-adopter group for industry insiders, where we're sharing some other non-web tools including our LLVM JIT engine and C++ generator, and discussing partnerships etc.
If you represent a company that you think should join this group, ping me and let me know..
For anyone interested apparently there's a free MOOC in may: https://www.kadenze.com/courses/introduction-to-programming-...
And a TED talk: https://www.ted.com/talks/ge_wang_the_diy_orchestra_of_the_f...
They've built a Music Learning platform with Elm (afaik): https://learningmusic.ableton.com/
Elm wants to target WebAssembly, wouldn't it be able to compile to Soul too in theory?
The target use is to unify DSP development across different platforms: whether you are developing web-based audio, games, or pro audio, you should be able to write your algorithm in SOUL and get it up and running.
I can make some guesses by looking at the example in the OP's link. But what is the primary goal of an audio programming language? Who's using it? How? Alternatives?
Simply put, it's a language designed to make creating and processing high-quality audio in real time more straightforward than a general-purpose language would.
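For a sense of what "general purpose" looks like here: even something as basic as a sine oscillator becomes an explicit per-sample loop plus phase bookkeeping that the caller must thread between buffers. A quick Python sketch of that boilerplate:

```python
import math

def sine_block(freq, sample_rate, num_frames, phase=0.0):
    """Generate one block of a sine oscillator -- the kind of
    per-sample loop an audio language makes declarative.
    Returns (samples, new_phase) so the caller can keep the
    oscillator continuous across successive blocks."""
    step = 2.0 * math.pi * freq / sample_rate
    out = []
    for _ in range(num_frames):
        out.append(math.sin(phase))
        phase = (phase + step) % (2.0 * math.pi)
    return out, phase

# One 64-frame block of a 440 Hz tone at 48 kHz:
block, phase = sine_block(440.0, 48000.0, 64)
```

Audio languages exist to express the "440 Hz sine" part directly and take care of the block sizes, phase continuity, and scheduling for you.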
...and without wanting to "throw shade" at csound or anyone else, we think the SOUL syntax is much nicer to write of course :)
POST https://media.noise.fm/soul net::ERR_CERT_AUTHORITY_INVALID
What browser and OS are you using?
Yeah AWS should work just fine.
I'm on macOS 10.14.4, and tried with the current version of Firefox, Safari, and Chrome.
Different reasons, but same result for all. That error came from Chrome and was the most descriptive. Firefox threw a CORS request blocking error.
Edit: Yeah and Safari is also throwing a cert error:
[Error] Failed to load resource: The certificate for this server is invalid. You might be connecting to a server that is pretending to be “media.noise.fm” which could put your confidential information at risk. (out.wasm, line 0)
I simply won't be able to access this from work. I can't see the noise.fm root path as the corporate routers here are blocking it due to "Domain Parking". (eyeroll) I'll check this out at home later on!
Thanks for checking in.
Tried in FF as well and got CORS errors.
Can I have it against CoreAudio or Pulse or anything other than WebAudio?
The browser-based stuff is just a demo to let people explore the language and learn about what we're trying to do. The actual goal is to create the fastest, lowest-latency infrastructure for audio on all platforms.