Musical User Interfaces (arthurcarabott.com)
274 points by acarabott on Apr 14, 2017 | 64 comments



Speaking of experimental music interfaces: if anyone is interested, I made an iPad app called Composer's Sketchpad[1] to explore a more freehand process for musical composition. Instead of placing note symbols on a stave, you draw notes on a continuous time/pitch grid using your finger or the Pencil, bending to any pitch and extending to any length. Makes it super easy to jot down complex rhythms and bendy guitar solos in an instant. I'm working on other things right now but I hope to come back to it in the near future to add MIDI support, microtonal grids, repeating sections, and other features.

[1]: http://composerssketchpad.com


Excellent work! I would get it if it were available on other platforms. Do you have (or plan to add) per-note velocity? I find it important for composing expressively. I'm also wondering how it would be visualized in your interface. Maybe it's best to keep it minimal and uncluttered; you can only add so much information to the canvas before it becomes a mess.


Yeah, I've thought about it but couldn't figure out a good way to do it UI-wise. In terms of my design goals, it's important that every audible element be visible on screen, so I can't just shove it into a separate mode.

I'd love to do an Android version at some point. Wonder if the audio stack is up to snuff?


Android's audio stack is pretty poor. Audio callbacks can happen at very uneven intervals, unlike iOS's reliable high-priority scheduling. The result is that you either have to buffer more (resulting in high latency) or radically reduce the time spent rendering your audio buffers, which of course means less time for making nice sound. Google actually recommends that you use no more than 1/4 of the typical time budget for rendering, to account for these non-deterministic callback times.
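As a rough illustration of that budget math (the numbers below are typical mobile values of my choosing; only the 1/4 fraction comes from the guideline mentioned above):

```python
# Back-of-the-envelope sketch, not Android API code: how much render
# time per callback the 1/4-of-budget guideline actually leaves you.

def callback_period_ms(sample_rate_hz, frames_per_buffer):
    """Time between audio callbacks for a given buffer size."""
    return frames_per_buffer / sample_rate_hz * 1000.0

def safe_render_budget_ms(sample_rate_hz, frames_per_buffer, fraction=0.25):
    """Render budget after reserving headroom for jittery callback timing."""
    return callback_period_ms(sample_rate_hz, frames_per_buffer) * fraction

# Typical settings: 48 kHz, 256-frame buffers.
period = callback_period_ms(48000, 256)     # ~5.33 ms between callbacks
budget = safe_render_budget_ms(48000, 256)  # ~1.33 ms of safe render time
```

So on a jittery stack you get barely a millisecond of DSP time per buffer, versus the full ~5 ms you could use with reliable scheduling.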


Wow! Congratulations, that is very cool! Is there any reason why it would not work on an iPhone Plus?


Thank you! I've wanted to make an iPhone version from the start, but I would need to rethink a lot of the UI and possibly integrate iCloud syncing, and that would take me a while... but definitely at some point in the future.


One of the musical UI elements I always rant about (here we go...) is the global modular view, which is a way for a DAW to show you everything that's going on in your project. It should show the project's complete signal routing. This is particularly useful for electronic genres (as opposed to acoustic/instrumental genres, where the traditional rack-based paradigm is well adapted).

Here's an example: http://www.lazytrap.com/img/cut_four_isolated.png

Every DAW has an internal graph of how everything is routed together: the waves you put on your timeline, every effect and synth, etc. I just want that graph to be exposed, visible, and freely modifiable.
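As a sketch of what "exposing the graph" amounts to (all node names hypothetical), the underlying data structure is just nodes and connections:

```python
# Toy sketch: a DAW already maintains a signal graph like this
# internally; the ask is simply that it be exposed and editable.

class SignalGraph:
    def __init__(self):
        self.edges = set()  # (source, destination) pairs

    def connect(self, src, dst):
        self.edges.add((src, dst))

    def disconnect(self, src, dst):
        self.edges.discard((src, dst))

    def inputs_to(self, node):
        """All sources currently feeding a node."""
        return sorted(s for s, d in self.edges if d == node)

g = SignalGraph()
g.connect("sampler", "eq")
g.connect("eq", "reverb")
g.connect("synth", "reverb")
g.connect("reverb", "master")
# With the graph exposed, "insert effects" are just edges you re-route:
g.disconnect("eq", "reverb")
g.connect("eq", "compressor")
g.connect("compressor", "reverb")
```

The "insert effect" abstraction hides exactly this kind of edge re-routing behind a fixed per-track chain.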

The problem is that DAW makers and users seem very attached to interface elements that are somewhat incompatible with this paradigm: mostly the track mixer and "insert effects". This requires the DAW to manage the routing behind closed curtains and hide everything behind layers of abstraction that I consider unnecessary and often opaque.

There's a variety of weird tradeoffs made to get modularity while still having these traditional features, like per-track modular views (e.g. PreSonus) or modular views contained inside plugins (e.g. Bidule, Reaktor). While these are nice in their own way, I find that nothing can replace a global, freely modular view. The bee's knees is when you can nest it arbitrarily, like Reaktor does (as a plugin). If I want a mixer, I can place it myself in the modular view.

Does any DAW do this? So far I'm using Buzz, which pulls this off perfectly but has its own limitations (no good piano roll, no arbitrary timeline placement...). From my rather extended foray into the topic, Mulab seems to fit the bill, though I'm not sure. Some software like Max/MSP, Pure Data, and Usine is 100% centered on modularity, but they cheap out on the "timeline" aspect, which I find very important when I'm working with lots of samples and video.


I wouldn't call them DAWs so much as object-oriented composition environments, but you appear to be describing Reaktor[1], PD[2], Max/MSP[3], etc., except that the nodes in those apps usually have controls on them.

1. https://macprovid.vo.llnwd.net/o43/hub/media/1150/10978/R6_1...

2. https://crab.files.wordpress.com/2008/05/regen-pd.png

3. https://www.youtube.com/watch?v=o412npUqvcM


Renoise has this kind of view, and Bitwig is working on it as well.


There are a number of utilities on Linux that show a graphical representation of the signal routing graph. The nice thing is that it almost doesn't matter what DAW you use: if it uses JACK for signal routing, it can integrate with anything else that uses JACK, including your pick of the graphical patch bay apps.



From memory, the Tracktion DAW had this sort of routing display as the default when setting up your plugins and tracks.


That's right, and that's why I used it (v2) extensively.


You can always build this yourself with SuperCollider.


Music production software is home to some of the most complex and inventive UIs out there, so significantly improving upon the status quo is tough.

As an aside, when looking for Qt alternatives a few months ago I stumbled upon JUCE.[0] Prior to that I had no idea there were general purpose UI libraries with such heavy emphasis towards music creation. Granted, the cross-platform support was quite lacking, but perhaps things have improved since.

[0] https://www.juce.com/


> Music production software is home to some of the most complex and inventive UIs out there

Complex and inventive, yes. I would also add that most of the time (for me at least) it goes too far. For example, there are a fair number of synth UIs modeled after (or inspired by) real analog synths, and for people who are familiar with the real-world counterparts, I suppose that makes the software more approachable. For someone unfamiliar with that hardware, however, the UIs just feel like over-the-top skeuomorphic eye candy with little functional advantage in pixel form.

Then there's an infinite array of virtual instruments, synths, effects, etc. that don't have any real-world counterpart, and they're styled to look like super-advanced alien sci-fi machines... I mean, the UIs certainly look cool (at least in the context of visual art), but they're a frustratingly convoluted mess to actually work with using a mouse and keyboard[1].

I've never quite understood why pro audio software is so strongly coupled with that design aesthetic. If it's truly driven by functional necessity, I guess I wasn't born with the necessary set of alien tentacles to take advantage of it.

[1] As I understand, certain controls in these UIs are often linked to knobs, sliders, and buttons on real audio gear, but that still doesn't answer the question. The on-screen representation undoubtedly looks nothing like the real gear, and the heavily-stylized form doesn't exactly seem to serve a purpose beyond what could be achieved with a more conservative aesthetic.


There are a few different levels to those forms of UI design.

On one end is the skeuomorphic approach, with interfaces that directly resemble audio hardware. At the other end is total flexibility, achieved only by using path markers / bands. Product designers often choose one of these extremes to maximize approachability or flexibility.


The problem with skeuomorphic design is that it's only approachable by people who are already familiar with the hardware, as dperfect already said. I share their frustration with such interfaces.

Also, non-skeuomorphic doesn't have to maximize flexibility. It could just as well be designed for ease of use.


Currently writing my master's thesis on HCI factors for synthesizers.

I think you're right that significantly improving the status quo is really hard, but I think that there is room for small improvements that seem to be overlooked.

Stuff that seems to be taken for granted, like tooltips, cursor changes to indicate functionality, and identifiable buttons/controls, could go a long way. The difference between something like Lightroom and many of the premier synthesizers is pretty staggering. Not that they're perfectly comparable, but that's another topic.


I'm working on something you may be interested in.

This is a way to simultaneously interface with multiple linear controls without having to point and click or even look at the screen. Some of this was inspired by Engelbart's chorded keyset in his 1968 demo and by keyboard-based, Raskin-style quasimodes as described in The Humane Interface.

Go to http://9ol.es/input.html on a desktop.

Hold down any combination of the 1, 2, 3, and 4 keys on the keyboard with one hand.

With the other hand, move the mouse either up or down.

You will see that by pressing down the key, you've selected one or more sets of the numbers, indicated by them turning bold.

Then the mouse moving will affect only the selected "controls".

This cognitively frees the user from the task switching of engaging with the interface while composing.

It also detaches the layout on the screen from the interfacing of the computer and presents a new generic paradigm of using the computer.

There's an additional demo I have that changes the background color to indicate a mode so that a bank of keys can be assigned to setting traditional modes on top of the quasi modal interface.

The objective is to have a system that can be modified quickly and simultaneously with minimal active cognition that distracts one from the artistic task.

Easy to use and easy to learn are distinct things. Usually the latter is done at the expense of the former.

In the long term, I am making this a virtual midi device that can be interfaced into any music software that accepts midi devices.
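For anyone curious, the core of the quasimode scheme described above can be sketched in a few lines (this is my reconstruction of the described behavior, not the code behind the linked demo):

```python
# Held keys select one or more controls; a single mouse delta then
# moves every selected control at once. Names are illustrative.

class QuasimodeMixer:
    def __init__(self, control_names):
        self.values = {name: 0.0 for name in control_names}
        self.held = set()  # keys currently held down (the quasimode)

    def key_down(self, key):
        self.held.add(key)

    def key_up(self, key):
        self.held.discard(key)

    def mouse_move(self, delta):
        # Apply the same delta to every control selected by a held key,
        # clamped to the 0.0-1.0 range of a normalized fader.
        for key in self.held:
            if key in self.values:
                self.values[key] = min(1.0, max(0.0, self.values[key] + delta))

mixer = QuasimodeMixer(["1", "2", "3", "4"])
mixer.key_down("1")
mixer.key_down("3")
mixer.mouse_move(0.5)   # controls 1 and 3 move together; 2 and 4 stay put
mixer.key_up("1")
mixer.mouse_move(-0.2)  # now only control 3 moves
```

Because selection lives in the held keys rather than in a clicked widget, there is no mode to enter or exit: releasing the keys ends the gesture.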


I find that the Korg Kaossilator has a really interesting "user interface". It's a trackpad where left<->right maps to two octaves of notes (you have to select the scale with another control), and up<->down maps to the effects wheel sometimes found on the left-hand side of a traditional synthesizer. It's super fun to play with, in any case.
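The mapping is roughly this (a guess at the idea, not Korg's actual algorithm; the scale intervals are the standard minor pentatonic):

```python
# Normalized pad X (0.0-1.0) snaps to two octaves of a chosen scale;
# pad Y (0.0-1.0) becomes an effect amount. All names are mine.

MINOR_PENT = [0, 3, 5, 7, 10]  # semitone offsets of the minor pentatonic

def pad_to_note(x, root=57, scale=MINOR_PENT, octaves=2):
    """Quantize pad X position to a MIDI note in the scale (root = A3)."""
    x = min(1.0, max(0.0, x))
    steps = len(scale) * octaves
    i = min(int(x * steps), steps - 1)
    octave, degree = divmod(i, len(scale))
    return root + 12 * octave + scale[degree]

def pad_to_fx(y):
    """Pad Y position as an effect depth, 0-127 like a MIDI CC."""
    return int(round(min(1.0, max(0.0, y)) * 127))
```

Because X is quantized to the selected scale, any swipe stays in key, which is a big part of why the device feels so playable.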


This is speculative, but Max/MSP 5 – which was a very dated piece of software at the time – was re-written using JUCE for the UI, and IIRC JUCE wasn't music-focused at the time. Quite likely the current music focus has everything to do with Cycling '74 making that choice, the subsequent advent of Max4Live (which popularized Max), and the dominance of that kind of rounded-corners, pre-flat design aesthetic so common in music software since Ableton Live took over the world, which happens to be JUCE's default style.


JUCE has always been music focused. It was originally created alongside the DAW Tracktion, then split out of Tracktion to become its own product, and Tracktion was sold to Mackie. Roli ended up purchasing JUCE, and Tracktion was spun off into its own company.

Source: Tracktion developer since 2005.


Ah, thanks for the clarification! Turns out I could have discovered that if I had Googled: https://www.juce.com/history-and-development – so much for speculation.


3D software is also very complex, and I wish more complex programs used Blender's window system.

Just arrange your windows and, for each window, choose the window type you want to see. And of course you have the ability to save this layout.

I think this is a big plus as a UI, because every person and project is different.


Definitely, but the UI in 3D apps usually isn't as crazy as in music apps.

Viewport tools and interactive graph visualizations aside, 3D is mostly just text fields and sliders for editing values. With music, there's all manner of knobs and dials, and each VSTi plugin tends to have its own wildly different interface. Case in point:

https://www.google.com/search?q=VSTi&source=lnms&tbm=isch


Didn't someone (well-known on HN) write an essay with descriptions of UIs that would be controllable using Solresol-ish voice commands, with more advanced users using three-or-four-syllable commands, up from the monosyllabic ones for lay users?

It was a very 90s website.

Edit: found it. http://www.erasmatazz.com/library/the-journal-of-computer/jc...


thank you for posting this, fascinating


The second constraint in the article is key: given that there is a computer, how can the computer help?

I can't believe we have these ridiculous music/sound editor UIs, full of awkward, knob-looking controls that are almost impossible to operate with the mouse. Clearly the standard computer interaction controls such as a menu or slider or even a text field would be easier for computer-based settings.


Most people want their DAW or plugin to resemble the real thing if it's an emulation, or at least to offer a familiar look if it's not, so users are subject to sometimes frustrating usability compromises. The problem IMO is not the look but how we manipulate its elements: every solution out there takes for granted the use of a keyboard and a mouse, which is wrong because neither was intended to control such interfaces. If I had to design an alternative controller, it would be a digital joystick with an incremental encoder plus a push button on top of the lever: you use the lever to navigate the controls, and once you reach the intended one, you activate its edit mode with the button, then use the encoder to change analog values, or the lever (which, being in edit mode, would not change focus) for digital ones.


The problem is the look. To someone new to the audio editing world, there is no need to mimic the look of old hardware interfaces. Knobs in particular were used in hardware because potentiometers were the method of controlling signals, but there is no longer any need to replicate such an archaic control in a UI. The look of a pot has little to do with its function. But with software we can make the look of the control express its function.


I have to disagree. Rotary encoders and faders are the way to make smooth, gradual changes to various parameters by gut feel, which is more or less what a mixer is actually doing.

When money is no object, music is both mixed live and produced in the studio on enormous digital consoles which replicate their DSP parameters onto hundreds or thousands of tactile faders and rotary encoders.

The keyboard and mouse are a terrible way to mix. Fortunately small physical control surfaces can be had for not too much, though then you have the problem of matching your limited controls to the thousands of parameters in the DAW.


There seems to be such a strong consensus around slow-to-operate radial knobs. The most frustrating part is when the knob goes from, say, 1-100 but the range of, say, 1-5 is the one of interest.

Instead of allowing me to set my own range, and instead of making an interface where choosing a value works with common input devices, we get tricked-out, 32-bit radial controls that fail at any operative goal in their use.
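The requested fix is nearly a one-liner: let the user re-range the control so full knob travel covers only the interesting span. A minimal sketch (parameter names are mine):

```python
# Map the knob's full normalized travel onto a user-chosen range,
# so the whole sweep covers only the span that matters.

def remap(knob, out_lo, out_hi):
    """Map normalized knob travel (0.0-1.0) onto a user-chosen range."""
    knob = min(1.0, max(0.0, knob))
    return out_lo + knob * (out_hi - out_lo)

# Full travel now covers only 1-5 instead of 1-100, giving 25x finer
# resolution over the region of interest.
value = remap(0.5, 1.0, 5.0)  # -> 3.0
```

A log or exponential curve could be substituted for the linear map where the interesting region sits at one end of a wide range.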


This is why I get so excited about the (very rich) iOS music ecosystem: Audiobus, Ableton Link, etc. You can see what audio is flowing into what, you can use touch to change things, and you can use Bluetooth and MIDI to outsource controls to different hardware as needed.


I'm not particularly musical myself, but lots of people around me are (siblings, friends, and coworkers, one group of which is signed to Sony Music), so I like to play around with music production software on my iPad (Korg Gadget, GarageBand, Moog Model 15, Odesi, and Fugue Machine being my main tools, plus MIDI and audio routers and USB MIDI controllers). It's a pretty great environment for someone like me (interested, but not very good).

However, a major downside to this setup is that if I close the apps, the setup disappears. That is, my setup uses multiple distinct apps that are connected together (directly, or through the MIDI and audio routing apps), and there's no way to persist this multi-app setup between sessions. Any time I close the apps or reboot the iPad, I lose all that state and have to set it all up again from scratch.

Until that is solved, I don't think I'd really count it as a serious setup. It's an incredibly powerful toy though!

Of course, this applies to an iOS-only (or at least iOS-centric) setup and not what it sounds like you're referring to: using iOS devices as controllers for an otherwise laptop/desktop-based setup. I imagine that has fewer of the above issues.


> Fugue machine

Speaking of musical interfaces, Fugue Machine is practically perfect. I just wish you could have a longer loop length.


Audiobus provides State Saving; have you tried that? Feed IAA and AU apps into AUM and their state gets saved, plus the routing and mixer setup.


No, I've only used AUM. Thanks for the tip! I'll give it a try.

Although part of the problem is that, beyond the router state, there is configuration data spread out between the different apps (e.g. some MIDI stuff set up in Gadget or wherever), so we'll see how much it helps. Hopefully you're right!


One of the best little things that makes a big difference when making music on the iPad is how you can simultaneously control two apps in split view, i.e. each finger on a different widget in each app.

Does anyone know if that's possible on a touchscreen PC/tablet running Windows?


I'm confused, are you asking whether it's possible to put two application windows next to each other? If so, yes, of course you can.


Sounds like they want to put them next to each other and simultaneously interact with both of them via multiple presses. Which, given the way desktop OSs often want to only have one app focused, is a valid question.

I just tested this on my Win10 tablet; I could indeed poke at the control panel app on one side of the split screen while manipulating Chrome on the other. I can't vouch for any other apps, or for running Win10 in 'desktop mode'.


I really want a force-feedback X/Y controller. I built a prototype of a slider and it worked amazingly well at emulating physics.

I'm now thinking about building a passive X/Y controller with responsive brakes: it's super hard for motors and rails to actually push against a human arm with any force, but just clamping a brake should be able to counter quite a bit of strength. I'll just have to be a little creative about how to make tough pressure sensors on the handle.

So I'll have: one knob on a flat pane, pressure sensors measuring what force goes into the knob, optical encoders to track the knob's position, and two or three motorized brakes that apply the correct counter-force. It should be quite easy to simulate a guitar string that then flings away; I just can't emulate the slow relaxation of the "string".


You might be interested in AudioKit, an audio synthesis, processing, and analysis platform for iOS, macOS, and tvOS.

http://audiokit.io/

https://github.com/audiokit/AudioKit

Full disclosure: I'm a core member of the project.

Oh, and also check out WebMIDIKit (https://github.com/adamnemecek/WebMIDIKit), which tries to wrap the terrible CoreMIDI APIs.


The link to your profile on the audiokit.io About page[1] points to "http://https://github.com/adamnemecek/", and the same goes for everyone else with a link to an HTTPS page. There's a bug in the script[2]: it assumes the template data will be a bare domain, not a fully qualified URL with a protocol.

1. http://audiokit.io/about/

2. https://github.com/audiokit/audiokit.github.io/blob/master/_...
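The kind of guard the template is missing could look like this (a sketch of the logic, not the site's actual code):

```python
# Only prefix a scheme when the stored value is a bare domain, so
# fully qualified URLs pass through untouched.

def profile_url(value):
    """Return a usable link whether `value` is a bare domain or a full URL."""
    if value.startswith(("http://", "https://")):
        return value  # already fully qualified; don't prefix again
    return "http://" + value

profile_url("github.com/adamnemecek")          # bare domain gets a scheme
profile_url("https://github.com/adamnemecek")  # full URL passes through
```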


Hi Adam, thanks for your work on WebMIDI and AudioKit.

I am currently experimenting with the current MIDI functionality in AudioKit, along with a Swift music theory library (https://github.com/danielbreves/MusicTheory), to compose and sequence music, using AudioKit to send MIDI events to Logic Pro.

Does/will WebMIDIKit support any kind of automation events through an easy-to-use abstraction? Thanks again.


Virtual reality and augmented reality have so much potential for innovative interfaces. I'm really excited to see what full-featured, professional-quality music production applications will look like there.


I was really excited when I found this: https://www.youtube.com/watch?v=YCuGpiR7FRY

(I don't think it's too off-topic... we could imagine this being a delay/echo sequencer.)

Strangely, the name of the app is not in the title or the description of the video, so it took me a while to find this demo. (I can't remember the name.)

On one hand, I feel like there are endless possibilities. On the other hand, why can't I think of one? Not that I am the most creative, but... I don't see anyone else making any either. Most VR audio apps are contrived: they don't make any more sense in VR than they do on a desktop.

I'm excited either way. I think even contrived instruments have potential if you add remote multiplayer.


> On one hand, I feel like there are endless possibilities. On the other hand, why can't I think of one?

Music doesn't have any innate visual element. It's an auditory and tactile medium. In theory, Photoshop or Final Cut could add all sorts of "audiolization" tools, but the idea seems quite odd. Representing sound as visual images scarcely makes any more sense. We're living in a visual culture, so we tend to overlook the importance of the other senses.

VR currently provides no tactile feedback, which is absolutely vital in musical performance and production - muscle memory doesn't function without it. The theremin has been around for a century, but only a handful of people have ever learned to play it well. It's extraordinarily difficult to wave your hands around in mid-air with any amount of precision.

https://www.youtube.com/watch?v=7l9YcewEumw


Another interesting one is/was AudioGL (not AR but 3D):

https://www.youtube.com/watch?v=H-RCzeJQazA

https://www.audiogl.com/en/audiogl


I had a ton of fun playing with SoundStage: http://www.soundstagevr.com/

It's far from "full-featured, professional-quality", but it completely sold me on the viability of VR as a medium for audio production.


I think the key is going to be pairing the extra human bandwidth you get from VR controllers with the new types of generative networks emerging from machine learning research. I'm playing around with this idea for my next VR experience after Soundboxing, but I haven't made great progress on autoregressive acoustic-sample-based generators like WaveNet, which I had hoped would work better for this.


Musical interfaces, more than any other type of interface, seem to try to "materialise"; my guess is that this compensates for the abstract, immaterial nature of music. There are two main directions this goes: on one side, the super-slick, retro-realistic renderings with lots of shaded knobs and sliders, complete with screws and even simulated rack mounts; on the other, the (not so numerous) super-stylised, futuristic, sometimes borderline experimental interfaces.

For plugins, a beautiful interface with poetic descriptions could be regarded as even more important (for sales) than the actual sound; thus the VST world might be the most "full of bullshit" realm of software development.

In my experience the best musical interface is a dedicated hardware unit. Mouse, keys, and a big screen are generally inadequate and inherently counterproductive, and a small 20x40 LCD should be more than enough visual info.


With regard to taking an instruction and converting it into a state representation, I think creating music is very similar to programming. Coming from a web background, I've seen experiments like React Music which I think are really interesting. If the question becomes "how can we best represent changing state in code?", then I think that's a question that can be solved for both interface programming and music programming at the same time. This is why I find React Music interesting: React's declarative syntax makes it easier to understand what the final product is going to look like. Can the same thing be done for programming sound, in a way that represents the final product more naturally?


I've been working on a declarative music framework called Dryadic. Like React it is designed to diff a tree and make changes while playing.

Currently it works with SuperCollider, but I had planned to write bindings for Web Audio and other targets.

https://github.com/crucialfelix/dryadic/blob/master/README.m...

https://github.com/crucialfelix/supercolliderjs-examples/tre...

There will be a formal language that can be parsed, but at this stage it uses JavaScript objects.
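The React-style diffing idea can be illustrated with a toy example (this is not Dryadic's actual API, just the general shape of tree diffing against a running graph):

```python
# Given old and new declarative specs keyed by node id, compute what to
# add, remove, and update so the running audio graph matches the new tree.

def diff(old, new):
    """Return (adds, removes, updates) between two spec dictionaries."""
    adds = {k: new[k] for k in new.keys() - old.keys()}
    removes = sorted(old.keys() - new.keys())
    updates = {k: new[k] for k in old.keys() & new.keys() if old[k] != new[k]}
    return adds, removes, updates

old = {"synth": {"freq": 440}, "reverb": {"mix": 0.3}}
new = {"synth": {"freq": 220}, "delay": {"time": 0.5}}
adds, removes, updates = diff(old, new)
# adds: the new delay node; removes: the reverb; updates: the synth's freq
```

Applying only the diff, rather than tearing down and rebuilding the graph, is what makes it possible to change the spec while audio keeps playing.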


Speaking of innovative interfaces, I love using my iPad as a music production tool. I love Animoog from the point of view that the keyboard is not just a "press and hold" thing: it reacts to up/down and sideways motion too, once the key is pressed, which is far better than trying to use a mod wheel or tweak another MIDI parameter at the same time as pressing keys. Bebot is another one with a similarly cool interface, and a lot simpler than Animoog.

Also, I love using NodeBeat HD. That one has a unique 2D way of representing a beat pattern, by placing nodes and joining them up at different distances, etc.


Really love this write-up. I've gone back to my MPC just for something different for a while, but the right interface depends on what style of music or sound design you're doing, IMO. If I'm sampling, I want something I can chop up really fast, with the ADSR and other fields on each chop. That's why I think Ableton is one of the best DAWs out there, workflow-wise (not sound quality): you can change up your workflow in more ways than one.


My favorite user interface is the rubber bands I use to connect pots/encoders on synths like the Mopho or MS2000, or on MIDI controllers. You have to find the right length and thickness to get a kind of sigmoid acceleration curve on the connected-over knob without too much slippage.

Aside from that, like many guitar/piano people, I'm pedal-oriented. I see all these new things like the Elektron Analog Heat and I think: that would be a great pedal. I don't know if it's possible for a single pedal to control multiple parameters, but if it exists, I'm willing to practice dexterity for months to get that control.


Here was my humble attempt to create a non-linear music interface (I'm not a musician by any stretch): http://buildanappwithme.blogspot.in/2016/04/lets-make-music....

I have since abandoned iOS development in favor of mobile web apps.


On a related note: http://www.pushturnmove.com


I think the main reason for all the skeuomorphism is gear lust. Musicians are obsessive collectors, and they can't afford all that hardware, so plugins are like collecting trading cards. Even software that is not a replica still wants to look like the other metal-plated gear.

I really detest screen knobs myself.


This post immediately made me think of Spencer Salazar's Auraglyph. www.auragly.ph


Read about half of it so far. I love exploring UI for music, so I'm happy to see this, though I've got to make my complaints, eh? (Also it's a Friday :P )

About the "tiny windows" section, Fabfilter has had interfaces very similar to what they're describing for a long time. I think they're some of the most intuitive visual interfaces for these musical tools. I'm really surprised they weren't mentioned.

Their limiter [0] easily lets you see volume before (light blue) and after (dark blue) limiting, as well as what gain reduction (red) is being applied over time and RMS (white line in the area between -10 and -16).

Their EQ [1] lets you see the effect each individual band has (blue and green), the overall EQ curve (yellow), and the frequency spectrum before and after.

> There is a bigger underlying issue: we are making decisions that will affect the whole recording based on this tiny real-time view of the world. This is like trying to decide which filter to use on an image by shifting a tiny square preview around the image, trying to imagine what the whole thing will look like. Try it below:

I disagree with this, though. Visuals tell you so little compared to your ears. It's not like deciding on an image filter by scanning a small square across the image; it's like deciding on an image filter by converting the image to a .wav and listening to the output. The Fabfilter plugins have great GUIs, but you can't make a mix by turning off your speakers and relying only on visuals.

About envelopes, for example: on my hardware synths, I'm comfortable with the position of the ADSR knobs and the effect each will give. I know that if I want a plucky sound, the decay knob goes to a certain point. With visual envelopes, the shape seems more intuitive, but because they scale their lengths to fit within the display, it actually gets very hard to tell exactly how long something is just by looking. In Massive, if you turn the release up to 10000ms, setting decay anywhere between 50ms and 500ms produces almost no visual difference because the envelope graphic is stretched. So you end up using the virtual knob positions anyway and ignoring the graphic for the most part. I don't use Logic, but I get the same vibe from the screenshot.
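The scaling problem is easy to see with a quick calculation (illustrative numbers, assuming a simple scale-to-fit display):

```python
# When the display squeezes the whole envelope into a fixed width, a
# 10x change in decay time barely moves any pixels if release dominates.

def segment_px(a, d, r, display_px=300):
    """Pixel widths of attack/decay/release when scaled to fit the display."""
    total = a + d + r
    return tuple(round(t / total * display_px) for t in (a, d, r))

short = segment_px(10, 50, 10000)   # 50 ms decay gets ~1 px of the 300
long_ = segment_px(10, 500, 10000)  # 500 ms decay still only gets ~14 px
```

A 10x change in the audible pluck length shows up as about a dozen pixels, which is why the knob positions end up being the more readable display.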

You often do not want "their values in proportion to each other". For example, changing attack time doesn't change the sound of the decay/sustain or release sections, so they should not affect it visually. Serum is the only synth I've used where the envelope graphics actually add a lot to the value of the interface. The way you can draw curves or steps is also genius. [2]

The ADSR model also responds to your playing, unlike the programmatically made examples. You can't hold two notes, let go of one and have it slowly release while the other is still sustaining because you're pressing the key. Would singing the envelope or using an audio sample to generate the initial envelope data be useful? Maybe, I'd certainly like to try. But my first guess is that it wouldn't be that helpful. Most of my time with envelopes is spent adjusting values by milliseconds or so to get it to sound perfect, and not by a whole second or so. I couldn't achieve that accuracy with my mouth, and dialing in an initial envelope is already easy enough that I wouldn't want to plug in a microphone instead.

I'm really interested in better interfaces to musical instruments, or sets of them, especially in real time. There are some amazing things people are doing with grid controllers like the monome or Push. PXT-Live-Plus is one I've been playing with. One of my favorite additions is the drum mode where you can set pads to not be an individual sample, but a set which rotates to the next each time you hit it (so it's entirely deterministic and predictable). From a small number of pads you can build really intricate melodies/rhythms by managing what notes/samples will be available to you next, it's a very different way of thinking.

[0] http://www.fabfilter.com/images/screenshots/screenshot_pro-l...

[1] http://www.fabfilter.com/help/pro-q/images/analyzer@2x.jpg

[2] https://www.audiotent.com/wp-content/uploads/2017/02/lfo.jpg


Visuals don't just tell you little compared to your ears, they also distract you.

It's not quite true to say you cannot make a good mix while staring at your monitor, but it certainly makes it a lot harder.

This is why the current crop of GUIs doesn't really matter. They're good enough for the job, and people who know what they're doing spend a lot more time listening than they do looking at the screen.

Modern music technology is so insanely powerful already that the real limiting factor is user skill and creativity.


I think VR interfaces will be super useful for music making because you'll be able to place audio sources in 3D space.



