Get started making music (ableton.com)
2106 points by bbgm on May 9, 2017 | 461 comments

Anecdotally: there are a few different approaches to learning songwriting that seem to click for beginners. The "build up" approach is the most common and is what this link offers: it first teaches beats, then chords, then melodies, and then, in theory, vocals etc. These lessons in this order make sense to many people, but not everyone.

If you're interested in learning to make music and the lessons in the link are confusing, overwhelming, or boring, some students find a "peel back" approach to learning songwriting easier to grasp at first. A peel back approach involves finding a song and teaching by stripping away each layer: start by stripping away the vocals, then learn the melodies, then the chords, then finally learn about the drum beat underneath it all. A benefit of the peel back approach is that melodies and vocals are the memorable parts of a song and the easiest to pick out when listening to the radio, so a student can learn using songs they know and like. Either way, songwriting is hard and fun. Best of luck.

P.S. I think Ableton makes good software and I use it along with FL and Logic. They did a solid job with these intro lessons. But it's worth mentioning that there is free software out there (this includes Apple's GarageBand) that offers key features a beginner just learning songwriting can practice and mess around with before purchasing a more powerful DAW like Ableton.

> there is free software out there

If anyone is interested in a Free/Libre/Open Source Software option (cross-platform Linux/Windows/Mac) I've really enjoyed producing with LMMS over the past 18 months or so: https://lmms.io/

It's definitely got room to grow in terms of functionality/interface but the development community is of such a size that it's possible to still make meaningful code contributions. I've contributed a couple of small patches to improve the Mac UI as a way to get familiar with the code base.

Of course, the downside is that I have to decide whether to write code or make music whenever I sit down to use it. :)

There's also a new project called Helio Workstation (https://github.com/peterrudenko/helio-workstation). It doesn't have many built-in instruments, so you need plugins for everything, but the UI looks awesome.

I like LMMS but it has enough VST issues that I just went with a cheap Reaper license. Plus I need an audio recorder as well.

I wouldn't say that "LMMS has VST issues". I would say that VST has a serious issue: it is not an open standard. Although it seems they are trying to improve that for the Linux community: http://cdm.link/2017/03/steinberg-brings-vst-linux-good-thin...

Good to know. That said, Reaper enabled me to use the 7-8 VSTs I was really interested in using that didn't work (or worked poorly) in LMMS. If I knew C++ better I would contribute.

For recording, there's Ardour (among several other FLO options)

The Song Exploder podcast is awesome for both approaches, and I would really recommend anyone interested in writing and/or producing music to give it a go.

"A podcast where musicians take apart their songs, and piece by piece, tell the story of how they were made." @ http://songexploder.net/

Seconding this recommendation. The more electronic- or instrumental-oriented songwriters on the show tend to get more into the nitty-gritty details of production, layering sounds, etc. (the first two episodes with The Postal Service and The Album Leaf, for instance) while rock and singer-songwriter types tend more towards the songwriting aspect (the Long Winters is a personal favorite).

I sort of wish there were more technical details as a rule, but it's understandable given the relatively short format that they can only cover so much ground. I'd prefer longer episodes personally, but I suppose not everyone would, and there are tradeoffs in producing more content. I guess I'm just glad that the show caught on and is still going strong.

+2. Only podcast I listen to. The first few episodes are a little rough around the edges (bad questions, people not knowing exactly what they were getting into), but the rest are all incredible.

Protip: sample the clips the guests put on the show :) I've gotten some really great material sonically from this show since most of the clips seem to be the individual instrument tracks.

+1 for SE. Also there is an amazing set of Motown tracks split into instrumentals and acapellas out there. I'm not sure if it can be legally obtained easily, but it's a great master class.

Is this a common distinction to acknowledge in general education environments? You pretty succinctly described the struggles I've tended to have in my education, and described it in a refreshing/revealing (for me) way.

I love looking at systems and peeling back the layers to find out what makes something tick. That's not an approach to learning that I really encountered until I entered the workforce and was met with complex systems that I needed to understand. And I loved it!

Interesting, I've never heard of the "peel back" approach, and I can totally understand why it would be instantly satisfying for a beginner in music to get started that way. Do you have any articles or books on the subject matter?

How would this approach apply to a more traditional instrument that doesn't have the advantages of having a "good" sounding sample already preloaded that can be easily layered into a song that you are composing? I grew up learning the violin and it was endless disjointed drills until it was put together in a classical song that I never heard before nor had the desire to play. 8 year old me just wanted to play the theme song to "Jurassic Park" and roar like a T-Rex.

I think there's a difference between learning composition, and learning to play an instrument.

In my view, learning an instrument has a lot in common with learning to code, in that some people take to it, and others don't. And we probably know some of the reasons, but not all of them. Of course teachers and teaching programs vary, as do kids and their family milieu. But nonetheless, music education has huge attrition.

For instance, by way of anecdata, I took string lessons as a kid and loved it, and my kids have gotten pretty serious on violin and cello. They actually like classical music, and it probably helped that both of their parents also enjoy it. So it definitely works for some people.

What you said about learning melodies and beats and chords kind of confused me. Do people actually learn how to make up music? I always thought it was just some natural ability that people have. For as long as I can remember, if somebody told me to write a song I would just spit it out after a while. Am I unique in this respect?

I created an account just to reply to your comment. As someone who has played keyboard instruments all my life, it hadn't crossed my mind that the idea that there is structure to music is not well known.

Just for fun: chords in scales are numbered from bottom to top in Roman numerals. I feels like home base, V feels like wanting to go home. If you want to create the feeling of going home but then not really go there you can go from V to VI instead of I. 'Sad but I have closure'-type ending? Major IV - Minor IV - I. Bluesy feeling? Add a minor seventh to your I, IV and V chords. Dreamy? Major seventh instead there, except on the V.
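To make the numbering concrete, here's a quick Python sketch (my own illustration, not from any particular course) that builds the seven diatonic triads of a major key and labels them with Roman numerals, uppercase for major and lowercase for minor:

```python
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half steps of a major scale
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]
NUMERALS = ["I", "II", "III", "IV", "V", "VI", "VII"]

def major_scale(root=0):
    """Pitch classes (0-11) of the major scale starting at `root`."""
    scale, pc = [], root
    for step in MAJOR_STEPS[:-1]:
        scale.append(pc)
        pc = (pc + step) % 12
    scale.append(pc)
    return scale

def diatonic_triads(root=0):
    """Triads built on each scale degree, with Roman-numeral labels."""
    s = major_scale(root)
    chords = []
    for i in range(7):
        triad = [s[i], s[(i + 2) % 7], s[(i + 4) % 7]]
        third = (triad[1] - triad[0]) % 12
        fifth = (triad[2] - triad[0]) % 12
        if third == 4:
            label = NUMERALS[i]                # major triad
        elif fifth == 6:
            label = NUMERALS[i].lower() + "°"  # diminished
        else:
            label = NUMERALS[i].lower()        # minor
        chords.append((label, [NOTE_NAMES[p] for p in triad]))
    return chords

for label, notes in diatonic_triads():
    print(label, notes)
```

In C major that gives I (C E G) as "home base" and V (G B D) as "wanting to go home", matching the description above; the V-to-vi move is the deceptive cadence that goes everywhere-but-home.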

It's even entirely possible to learn to recognize all of these types of chord progressions and sounds instantly. I'm working on and off on an ear training app that randomly generates them that musicians can use to train their musical ear.
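For what it's worth, the core of that kind of app can be tiny. A hedged sketch (the progressions, voicings, and MIDI layout here are my own illustrative choices, not the actual app):

```python
import random

# Common progressions, as scale-degree roots in semitones above the key root.
PROGRESSIONS = {
    "I-IV-V-I":  [0, 5, 7, 0],
    "I-V-vi-IV": [0, 7, 9, 5],
    "ii-V-I":    [2, 7, 0],
}

# Diatonic triads for those degrees, as semitone offsets from the key root.
TRIADS = {0: [0, 4, 7], 2: [2, 5, 9], 5: [5, 9, 12],
          7: [7, 11, 14], 9: [9, 12, 16]}

def random_quiz(root=60, rng=random):
    """Pick a progression at random and return (name, chords as MIDI notes).

    `root` is the key's tonic as a MIDI note number (60 = middle C).
    """
    name = rng.choice(sorted(PROGRESSIONS))
    chords = [[root + offset for offset in TRIADS[degree]]
              for degree in PROGRESSIONS[name]]
    return name, chords

name, chords = random_quiz()
print(name, chords)  # play `chords` back, ask the listener to name `name`
```

The app itself would then just synthesize the chords and check the listener's answer against `name`.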

>I'm working on and off on an ear training app that randomly generates them that musicians can use to train their musical ear.

Sounds interesting. Please do a "Show HN" post when your app is ready for it.

Will do, thanks!

As someone who's recently tried to get into (basic) music theory, trying out the chord progressions you mentioned was fun! Do you happen to know of any resources that go through more of these well-known chord progressions?

I'm also wondering if these chord progressions work the same way for all scales, or if, for example, the 'sad but I have closure'-type ending only sounds that way in major scales? From experimenting I think it only works for major scales, but I'm not sure :)

You probably have learned most/all of your musical knowledge implicitly.

Some people have a great ear for music and can write solid songs without formal training in music. Other folks come at music from the more theoretical side, although usually with a lot of implicit knowledge of and experience with music as well.

For most people who are not formally trained in music, their songs can be improved upon on a technical level by someone who has deeper theoretical knowledge (learned either explicitly or implicitly).

For a good discussion of this, check out Tim Ferriss's podcast interview with Derek Sivers. Derek talked about how he had learned a lot about music implicitly. In one summer, a teacher of his formalized that knowledge so efficiently that he was able to test out of a lot of classes (1.5 years' worth?) once he went to Berklee College of Music.

Songwriting can be taught, yes. In most music courses you start by analyzing the Bach chorales, which (along with some Gregorian work from the Middle Ages) are what really kicked off contemporary music composition. By analyzing the chorales and moving forward from there, you learn how to manipulate chord progressions, harmony, and counterpoint.

Composers classically trained this way tend (!) to have an easier time writing melodies, harmonies, and progressions in a consistent manner, i.e. not having to wait for "inspiration to strike". The composer, of course, still needs to develop an emotional connection in the music, but the point is that it can be, and routinely is, taught.

My girlfriend is a trained classical singer whereas I'm a self-taught musician. She doesn't really gravitate towards the rock music I like to write, but because of her training she can easily jam, riff, or write anything far quicker than I can. Songwriting is a very technical skill indeed.

What I find difficult is that by the time I've got my DAW going and found some synths I like, the tune in my head has evaporated. Do all people find musical thoughts so insubstantial, or is it just me? If I imagine a picture or a paragraph of text, it'll stick around and I can remember it more or less indefinitely. I still recall snatches of crap poetry I thought up when I was a teenager, but any music I imagine just disappears before I can get it down.

The most successful tunes I made were more or less "discovered" from incrementally experimenting in the DAW, and not from any kind of original plan or idea. Maybe I'm just not a musician! (I'm an indie game dev who started making my own tunes for my games)

You could consciously decide NOT to use synths to lay down the bones: always use a piano to begin with. Once you have the tune idea down, you can move on to orchestration and picking synths and so on. Always keep the piano track as a guide and add tracks for all the other components until you have what you need.

From a remembering-the-tune perspective, I have the same issues, but I think it's more about not applying musical lexicon and listening skills the same way. You remember poetry or a paragraph of text because you remember the ideas and how to go from one to the next. If you're a musician and have something in your head and start thinking along the lines of "this is using a Lydian mode, the progression is ii IV V I, then it modulates to the relative minor and switches to Dorian; also the theme goes down in thirds for two bars, then stays on the chord root for one and moves to the dominant 7th", you are going to remember it a lot more easily than by remembering the melody itself.

It would be like comparing how easily you can remember poetry in English vs poetry in, say, Russian, where you only have the "sounds of the words" in your head to remember, but you don't have the syntax or the meanings to help you as well.

For me one of two ways works. Most often I start designing a patch on one of my synths and that ends up becoming a full song. Other times I start by noodling on the piano or organ and ending up with something I like. I suspect the more musically gifted do the latter more often, while the more technical ones like the process of patch creation, etc.

I evolved this way, though I'm far from gifted. Starting out, anything I made was driven by whatever sounds I was noodling with. Now, I almost always start on the piano, compose the outline, and then pick the sounds that I think fit it.

The first approach has a sense of creative wonder to it, where you're being guided by an outsider. As much fun as that is, it is very limiting, and I suspect most people abandon that approach as their skill improves.

Imagine that you're a writer, and you have an idea, so you turn on your computer, wait for it to boot, log in, open up Word, and fiddle around with fonts for a bit... that's what you're doing.

Writers keep pens and notebooks by their bed so if they wake up in the middle of the night they can start writing right now. Or they have tape recorders. Anything works as long as it's immediately available. The iPhone has a "Music Memos" app, I'm sure there's something similar for Android. That's what I use.

Learning music theory and how to write music properly can come later. As long as you can sing, whistle, or hum a tune, you can record it.

I found the same to be true. I've been trying lately to give up approaching music from the "I have this idea I want to get down" perspective. Instead, I set up my studio in such a way that I can easily "play around" and come up with ideas on the fly, and then elaborate on those.

Switching from a DAW to a mostly-hardware setup helped with this, as it's easier to "play" with knobs/sliders/keys/pads than virtual objects accessed via mouse/keyboard. Once you get things wired up, it's pretty straightforward: play around, find something you like, track it in, build more stuff over it.

Ever since making this switch, I've found the parts that I used to practice/enjoy (like slicing and manipulating samples, for instance) feel much more tedious.

Another benefit is that it's easier to make mistakes, which often have more interesting results than the thing you originally intended. My guess is because this violates your internal "patterns" and forces you to think outside of your normal "music creation" schema, resulting in a more creative/unique outcome.

I've also tried to switch to "totally live" recording (i.e. minimal sequencing beyond loops and patterns, all automation and non-repeating parts done on the fly), and that's a bit more challenging, because you have to redo everything if you, say, screw up a little solo bit.

>by the time I've got my DAW going and found some synths I like, the tune in my head has evaporated

That's where music theory pays off. Learning to name chords, scales and arpeggios gives your brain a framework to reason about and remember musical ideas. It allows you to break the music into a more concise abstract representation, rather than holding it in your head as sound. If you understand the structure of music, it's far easier to make connections between different pieces of music.


Do you have much formal knowledge of music theory? If not, that might help.

When you "get a tune in your head", if you can describe it to yourself in abstraction, it will probably be easier to remember (or even just write down).

Check out this page on 12-bar blues for some examples of easy music notation. Similar types of notation and/or terms exist for different parts of a song.
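As a concrete example of that kind of notation, the standard 12-bar blues is usually written as Roman numerals, which makes it trivial to transpose to any key. A quick sketch (the bar pattern is the textbook form; spelling everything as dominant 7ths is my own bluesy choice):

```python
# The classic 12-bar blues form, one Roman numeral per bar.
TWELVE_BAR = ["I", "I", "I", "I",
              "IV", "IV", "I", "I",
              "V", "IV", "I", "V"]  # the final V is the turnaround

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]
DEGREE_OFFSETS = {"I": 0, "IV": 5, "V": 7}  # semitones above the key root

def twelve_bar_in(key="C"):
    """Render the 12-bar form as concrete dominant-7th chords in `key`."""
    root = NOTE_NAMES.index(key)
    return [NOTE_NAMES[(root + DEGREE_OFFSETS[numeral]) % 12] + "7"
            for numeral in TWELVE_BAR]

print(twelve_bar_in("A"))
```

In A that yields A7/D7/E7 in the familiar pattern; remembering "I I I I, IV IV I I, V IV I V" is far easier than memorizing twelve chord names per key.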


I had exactly the same thing happen, losing the core ideas by the time I wired up the synth I wanted. So I dropped the DAW entirely. Now I create most of my music using Loopy[0] to layer the parts that I sing (or occasionally play). It's been fantastic for my creativity.

I'm starting to hit its limits for my workflow though. One of the really nice things about how easy it's getting to write software these days is that I can now fire up, say, a Swift playground, and after getting the fiddly basics of "how to record and loop audio buffers" with AudioKit, there are very few limits on what kind of idiosyncratic workflow tool I can design for myself. The UI looks and acts how I want it to, and since over the years I've trained myself to act like a human synthesizer, I can[1] compose a whole song without even worrying about having an instrument nearby.

[0]Loopy - Multitrack audio looping with very simple and expressive control https://itunes.apple.com/us/app/loopy/id300257824?mt=8

[1]The "can" is theoretical. This is my next big hobby project, and I'm still in the fiddly phase.

Maybe don't use synths for getting your ideas down.

If I have the beginnings of a song in my head, or I have been humming to myself, sometimes I just record the parts I have as vocals - humming or full-on beatboxing the bass/strings/lead/beats separately and as close as I can make them to my head-song (including filters with my mouth)- and then replace as I go, figuring out how to achieve the sounds that were in my head.

My fiancée, who was a professional musician (she had record deals), always keeps a Tascam recorder. When she comes up with a melody or lyrics she puts them on that until we can get in the studio and record.

She hates music theory and trying to use her left brain for art. I'll say "oh, that's in F" and she gets mad, so it's easier to just let her record it than try to notate it.

I have the same problem too - I solve it by humming/singing the melody and recording it on my phone. Afterwards, I'll find nice synths, put chords to melodies, write horn arrangements, tweak drum fills, etc.

I whistle into my phone. A few of my most complex pieces started that way.

Happens to me too. Ear training might help? There's an exercise where you take a song you know or a familiar recording and try to transcribe it by ear. (I find it's a pretty hard exercise.)

I spent 10 years building a project studio and optimizing the workflow and patch bay so that I can be recording almost any instrument within 30 seconds or so. That was a huge breakthrough and a huge commitment!

Improvising, screwing around, and experimentation are great ways to generate ideas. I think most composers work this way; most generally don't start with some huge structure, they just find germ-cell ideas by improvising that can then be built upon afterwards with analytic techniques. I think there are two main approaches: the intuitive, pure-ear, feelings-based approach, which is what most people call the "talent" aspect, and the analytic approach, which is a lot like mathematics, is about studying structure, and is a learned skill. The best composers use both of these together. You should learn chord structures, scales, and how to read sheet music. This will allow you to conceptualize a musical idea as a concrete mathematical object, and it will help you not lose the idea. The reason you forget the music in your head is that you don't have enough reference points to define it in a memorable way.

You can understand a musical idea as a kind of memory impression, an echo that you can play back in your head, and also as a pattern of pitches and rhythmic structures. Having two reference points, sensory and abstract-mathematical, is very useful.

I think your natural composition skills are unusual, but not unique. I also began to compose music at a very early age. My knack for picking melodies up out of the air and playing them on a piano when I was 7 years old was how my parents knew that I needed lessons. Naturally, I was surprised later in life to learn that other people had to learn basic things like pitch and rhythm; to me it had always been just as natural as speaking.

I believe the same is true with song writing, in a sense. You're still applying some parts of music theory, but most by-ear learners like ourselves simply grasp the concepts and have internalized them naturally, without needing to be taught. Music is little more than patterns at the end of the day, and our brains are very good at recognizing patterns. What you and I know intuitively, others can learn through training and repetition. Both approaches are valid, and yield interesting (and often different) observations.

I went through music theory classes during my brief adventure with liberal arts majors in college. I felt like I already "knew" the material in a way I couldn't quite put my finger on. It was like I was finally understanding what my brain had been doing all these years. I recommend it if you haven't yet had the experience.

>Do people actually learn how to make up music?

People have studied music and composition since at least ancient Babylon, so, well, yes?

>I always thought it was just some natural ability that people have.

With natural ability you can sing some melodies. To learn to play an instrument and add chords to a melody, you need to study, even if you learn by yourself and by ear (as many folk musicians did). One can have a natural feel for creating a song melody, but nobody just starts writing songs in full form "from natural ability".

>For as long as I can remember, if somebody told me to write a song I would just spit it out after a while.

What would that mean? You'd write a song on the guitar, for example? If so, then you already know the chords, even if not all of the theory. So how complex is your song? Just barebones songwriting (country/folk style)? Can you take it further? Can you write the parts for musicians to play on your song? Can you write different genres on spec?

There are more things in making music/songs than "spitting out" some melody.

When somebody asks me how to solve a particular database problem, or an IT problem, or how to write an algorithm to do something, I will think about it in the back of my mind and "just spit it out after a while," unless it's something difficult enough to warrant a literature review.

That doesn't mean that those subjects aren't covered in detail in textbooks and university courses, or that people cannot learn how to do it.

There are certainly people who have natural ability, and compose melodically, applying varying levels of knowledge in music theory.

There are other people who can't make heads or tails out of a keyboard, compose a tune in their head, or understand chordal progressions, but nevertheless compose music in layers and still do extraordinary work. They find what they like by playing with notes on the screen. Joel Zimmerman, a.k.a. Deadmau5, is an example of this.

I am an example of the former, with natural ability, bolstered by training in music theory. But I still use a layered approach when I am composing, generally starting with a beat or bassline, playing with melodic progressions in snippets, and eventually moving into a traditional composition process when I have something started that I like. Ableton makes this process extremely easy and productive.

Indeed. As a classically-trained musician, watching Joel's class on Masterclass and seeing him compose melodies by dragging notes around in Ableton until they "sound right to him" was eye-opening.

What surprised me was how he makes melody lines: Playing with chords until he likes the progression, and then pulling notes out of the chords to form a melody. And of course it makes sense on one level.

But I think melodically and tend to do a lot of counterpoint. Getting the chords out of my head and onto the screen is often the last thing I do. I don't know how well his approach would work with counterpoint, since counterpoint often creates and resolves dissonance using passing tones in double time.

Very few skills are just "natural ability". Music theory is an interesting and pretty important topic if you want to make music. Since people have been creating music for millennia, they have figured out many things that help composers.

Do you really mean to say you write songs without using any theory or explicitly sought knowledge whatsoever? Let's hear one.

I know a little rudimentary theory. Just enough to get mocked by someone with a real education. "without any theory" is sort of an impossible standard to satisfy, but I will say I never think about theory consciously, and go by how things sound. Anyway, this is what I'm working on:


Are there free instructions like this out somewhere (build-up and peel-back)?

Is Apple's GarageBand free? I thought you needed to own an OS X device to run it? (My understanding is OS X only runs on Apple hardware and is also not a free OS.)

Yes, it's "free" (not open source). It's included with the purchase of a Mac.

The point is, that kind of free is marketing speak. More accurately, you can purchase Garageband as part of a package including Apple hardware. Or you could say Apple hardware is free with the purchase of Garageband.

Not any more.

So no, it's not even gratis; it's $500+ depending on how crappy the hardware you want it tethered to is.

That's a slippery slope to saying that OpenOffice for Windows isn't free software either because you have to buy a Windows box. This is not a useful definition of free you're using. GarageBand is not Libre software.

I dunno if it's a slippery slope starting at Garageband. I say the slippery slope starts at OpenOffice or GoogleDocs or something along those lines, given that OpenOffice could probably be run on a potato if you can find a way to install ubuntu on it and stick some RAM into it.

That's absurd, since OpenOffice runs in Linux, and is free, as in freedom.

when it comes to getting software, we've stopped including the price of the required computer since like… 1995?

GarageBand is completely non-free/libre/open, which seems to be what you are saying.

For those wondering, this is made with Elm lang, Web Audio & Tone.js [1]

[1] https://twitter.com/AbletonDev/status/861580662620508160

This is some good coverage of the music theory behind songwriting, which is important in making songs that sound good.

However, there's another part of making music which is not covered at all here, which is the actual engineering of sounds. Think of a sound in your head and recreate it digitally—it'll involve sampling and synthesizing, there's tons of filters and sound manipulation to go through, they all go by different names and have different purposes—it's a staggering amount of arcane knowledge.

Where is the learning material on how to do this without experimenting endlessly or looking up everything you see? I want a reverse dictionary of sorts, where I hear a transformation of a sound and I learn what processing it took to get there in a DAW. This would be incredibly useful to learn from.

This is something I struggle with as a weekend hobbyist musician: there is some kind of black art involved in making music, in how to get that sound you enjoy in the music you like (which is probably the music that inspires you to make music, at least in my case).

What I found was that as your music-making experience unfolds, you start amassing these little tricks here and there, and they're only yours, usually tied to your stack of tools and the way you think. That is extremely hard to replicate and also very personal. IMHO that's why it's so difficult to pass that sound-sculpting knowledge on to others, and why (besides the odd YouTube tutorial on how to make a specific sound, usually targeted at a specific VST, explaining which knobs to turn) we won't find much general sound-sculpting learning material online. Even though it is available if you gather it up from forums etc., it is still pretty much a personal experience.

Answering your question: as time passed, the endless experimenting diminished and I got a proper sense of what does what, and after 5 years of making music I'm better able to pinpoint what I need to fiddle with to transform the sound the way I want/imagine it in my head.

I'm still not quite there yet, but if I can offer one piece of advice, it is: don't shun the 'endlessly experimenting to find a sound' thing, because that's the best way to grasp the tools. Over time you'll be able to get there faster, but it's a necessity.

This is how much I evolved, without even noticing, just making track after track:

Sep 07 / 2012 http://codegrub.org/flipbit/musicmaking/equal02.mp3 cringe

Mar 25 / 2017 http://codegrub.org/flipbit/tracks/flipbit03%20-%20Twothousa...

cya o/

Picking up an analogue synth with all the knobs on the front is a good way to get your head around sound design, and very quickly discovering what does what (sound-wise). An oscilloscope on the output also allows you to see what is physically happening. VSTs tend to 'get in the way' because of the interface, but obviously you could get something like Diva and experiment in the same way. I think reading up on the physics of oscillators, filters, envelopes etc. can be a real help getting that picture in your mind of how to make the sound you want as well.
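If you want to play with those building blocks before buying any gear, they're also easy to sketch in code. A toy pure-Python illustration (my own, stdlib only, not any real synth's architecture) of an oscillator feeding a one-pole low-pass filter, with a linear attack/release envelope on top:

```python
import math

SR = 44100  # sample rate (Hz)

def osc(freq, n):
    """Sine oscillator: n samples at `freq` Hz."""
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def lowpass(samples, cutoff):
    """One-pole low-pass: y[i] = y[i-1] + a * (x[i] - y[i-1])."""
    a = 1 - math.exp(-2 * math.pi * cutoff / SR)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def envelope(samples, attack, release):
    """Linear attack/release envelope, lengths in samples."""
    n = len(samples)
    out = []
    for i, x in enumerate(samples):
        if i < attack:
            gain = i / attack
        elif i > n - release:
            gain = max(0.0, (n - i) / release)
        else:
            gain = 1.0
        out.append(x * gain)
    return out

# One second of a 440 Hz tone, darkened by the filter, with a soft fade in/out.
tone = envelope(lowpass(osc(440, SR), cutoff=1000), attack=2000, release=8000)
```

Sweeping `cutoff` by hand while listening is essentially what the big filter knob on an MS-20 does; seeing the formulas makes the knobs much less mysterious.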

I've been building up a bit of an epic studio [1] over the past few years after being in-the-box for years. And the hands-on nature of real synths is so much more intuitive than VSTs imho.

[1] https://tinyurl.com/kzl97vl

> a bit of an epic studio

Sir, you have already reached it: it is fucking epic, wow! Congratulations, it must be really fun being in that room, and it must be difficult getting out of it hehehe.

I want to get more into the hardware side of music making, but being cost efficient is paramount to getting up and running in the cheapest way possible, especially since (in my case) this is a hobby I consider myself 'just starting out' in. If I have some cash to invest in it, I go for what will give me the most return (what will enable me to study the most). In my experience that meant DAW software (Renoise), MIDI keys (Axiom 25), an interface (Yamaha AG06), and a pair of monitors (Yamaha HS8s). Now that I've got the basic kit sorted out, it's time to get some hardware.

What would you suggest? I've been eyeballing a KORG MS-20 mini but I don't know...

> Congratulations, it must be really fun being in that room, and it must be difficult getting out of it hehehe.

Indeed it is!

Monitoring and room acoustics are definitely the very first thing to focus on. It was something I neglected for far too long. If you can't hear what's going on it doesn't matter how much gear you've got.

My favourite hands-on synth is the Roland Juno 106 [1], it's so god damn simple to use, everything is there, and so tweakable. They seem to have gone back up in price, but I picked up a pristine version for £600 off ebay. Obviously you need to be careful with older gear, and definitely try before you buy to make sure the thing isn't falling apart.

For mono synths my favourite is the Moog Sub 37 [2], it's knob central and sounds amazing, as all Moogs do. Although I was considering replacing it with the simpler (but more classic sounding) Model D which has just been re-issued.

The best modern analogue synth I have is the DSI OB-6 [3]. Although we're getting into the expensive end of the market here, I reckon it's a future classic. These things will hold their value very well. It's also got all the knobs and controls you'll need, but with slightly different filters to most other synth manufacturers, which is good for the contrast.

The Korg MS-20 would definitely be a good place to start (I haven't got one myself, but many friends have, and rate them highly), the fact that it has all the knobs on the front for every component of the synth and has the patchbay is perfect for experimentation.

You'll never regret getting an analogue synth, the sound just dwarfs what VSTs do imho. They're _alive_ in a way that you just don't hear from VSTs.

It's also interesting how different analogue compressors and EQs sound compared to VSTs. There's a rawness and sexiness that I have yet to achieve in-the-box (not saying it's impossible, just I'm too lazy to spend ages trying to achieve the sound I can get from hardware by simply switching it on).

> making but being cost efficient is paramount to getting up and running in the cheapest way possible

I have the Chandler Curve Bender EQ [4] which is based on the EMI Abbey Road desk that was used to record Beatles and Pink Floyd albums. It is super expensive (£5000+), but as soon as I heard what it could do I just needed it in my life. I call the on/off switch on the front of it the "it's just better switch" because as soon as I press it the sound in my studio turns 3D and everything is good in the world. I have the plugin version of it (UAD), which is very good, probably the best VST EQ I've heard - but it's not a patch on the gear and doesn't invoke that emotional feeling.

The reason I'm saying this is that yeah this stuff is expensive, some of it super expensive, but if you pick up one piece of gear a year and learn it inside out you'll be in a great place - creating awesome sounds quicker than you ever could before in-the-box. Most people I know with killer studios took a decade to get there.

[1] http://www.vintagesynth.com/roland/juno106.php

[2] https://www.moogmusic.com/products/phattys/sub-37

[3] http://www.soundonsound.com/reviews/dave-smith-instruments-o...

[4] https://www.youtube.com/watch?v=aUv9GtMlUwA

Thank you for your tips, my friend! You are totally right: go slow, pick your gear one at a time and after some time I will have a great little home studio to play with :-)

Are more of your tracks posted online?

Yes, I post all my stuff to Soundcloud and Youtube; this way I can get constructive feedback and learn even more.

Here are the channels where you can listen to more of my stuff; by all means, please help me get better by commenting and giving feedback if you can. If you make music as well, I will gladly return your energy and time by commenting and giving feedback. :)

Also, I usually participate in the listen/feedback threads on reddit's /r/edmproduction; you'll find me there as well, commenting on everyone's tracks ;)



cya o/

You're right that it's a staggering amount of arcane knowledge. But starting out I always recommend experimentation over getting too deep into the theory. It does help to have some baseline understanding of:

1. Frequency

2. Harmonics

3. Oscillators/Waveforms

4. Envelopes

5. Filters
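An envelope, item 4 on that list, is the easiest of these to see in code. A minimal sketch (a linear ADSR with made-up timing parameters, not tied to any particular synth):

```python
def adsr(t, attack=0.05, decay=0.1, sustain=0.7, release=0.2, note_off=0.5):
    """Linear ADSR amplitude envelope sampled at time t (seconds).

    attack/decay/release are durations, sustain is a level in [0, 1],
    and note_off is when the key is released.
    """
    if t < attack:                      # ramp 0 -> 1
        return t / attack
    if t < attack + decay:              # ramp 1 -> sustain
        return 1 - (1 - sustain) * (t - attack) / decay
    if t < note_off:                    # hold the sustain level
        return sustain
    if t < note_off + release:          # ramp sustain -> 0
        return sustain * (1 - (t - note_off) / release)
    return 0.0
```

Multiplying an oscillator's output by this curve is what turns a constant drone into a note with a beginning and an end; the same shape applied to a filter cutoff is what gives plucks and basses their "bite".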

The only problem with the last part of your request is that even if you watch people design sounds for a couple of hours, you might find that when you try to replicate that somewhere else it doesn't sound right. This is partially because every synth/softsynth is different and will produce different sounds and have different parameters. It can be infuriating to get a tutorial on how to produce that perfect "Bladerunner Blues" synth and come out with something that sounds totally flat and bad.

To make matters worse, there are apparently 0 good tutorials on the subject - I just googled for 15 minutes to no avail. The two below cover some of it but I personally can't bear listening to the people who make these videos.

https://www.youtube.com/watch?v=TvQVQuV-Kys https://www.youtube.com/watch?v=lJVlWdzoZ0w

I would even narrow that list down to one: Harmonics. Once it clicked in my mind that every sound is just a combination of sine waves, and that it's the intervals, amplitudes, and dynamics of those sines that make up everything we hear, it made sound design a lot clearer for me.

Of course, finding the right waveforms, filters, and envelopes required to get to a particular pattern of sines is still the challenge, but having that understanding of the medium underlying it all makes experimentation that much more productive (and fun).
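The "everything is sines" idea can be made concrete in a few lines of Python. A sketch (mine, not from the thread) that approximates a sawtooth wave by summing its sine harmonics, per the standard Fourier series:

```python
import math

def saw_partial(t, freq, n_harmonics):
    """Approximate a sawtooth wave by summing its sine harmonics.

    Harmonic k contributes sin(2*pi*k*freq*t)/k with alternating sign;
    the 2/pi factor scales the sum to the ideal sawtooth's -1..1 range.
    """
    s = sum(((-1) ** (k + 1)) * math.sin(2 * math.pi * k * freq * t) / k
            for k in range(1, n_harmonics + 1))
    return (2 / math.pi) * s
```

Summing more harmonics gets you closer to the ideal ramp; truncating the series early is, in effect, what a low-pass filter does to a bright waveform.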

Also people who have really hot Sound Tips generally don't want to give them away. If you can make a unique sound with some special trick you will have an advantage over your enemies (other musicians).

One problem is that every machine tends to be designed just a little bit differently. Therefore, tips on exactly recreating the sound might not necessarily translate well from one machine to another.

For instance, the "Blade Runner Blues" patch as I understand it is actually one of the brass presets on the Yamaha CS-80. (Bad recording but here: https://www.firsthomebank.com/personal-banking/deposit-produ...) The CS-80 has a pretty unique architecture for a polyphonic analog. (http://www.cs80.com/tour.html) To get a patch exactly right would require replicating layout, filter architecture and structure, etc.

Knowing basics synthesis, however, can get you pretty close. I have a patch on my Alesis Andromeda (which has some CS-80 type elements such as a ribbon controller, dual resonant filters, and an unfiltered sine that goes to the post-filter mix) that someone did in a user community -- it came out decently good. I was able to Google a book page that gives a good overview of recreating it on other synths. (https://books.google.com/books?id=Jz1JMnZNO88C&pg=PA74&lpg=P...)

Now, to really get the Vangelis Blade Runner type effect, you have to be able to play a synthesizer expressively. This is unfortunately tougher on most synths than on the CS-80, due to the CS-80's polyphonic aftertouch, which most synths lack. That said, there are other techniques people can use. I understand that Vangelis used pedals to manipulate filter and volume, and that is something that can be done on many synths that I don't see a lot of people taking advantage of. Don't discount playing technique when it comes to the art of sound design, in other words.

That's an understatement. Vangelis improvises whole soundtracks live, playing fairly simple melodic lines and counterpoints with his hands, but manipulating LOTS of pedals to arrange on the fly. By that, I mean more than ten pedals, arrayed in an enormous bank at his feet. It's staggering, and I can't think of a single other electronic musician with nearly that proficiency at foot-pedals.

I really like that you just went off on a huge tangent about this, no sarcasm. I really agree with your last line too; Kevin Shields is another example of this. By perfecting a unique playing style (holding the tremolo bar while strumming) he was able to come up with a sound so unique that it spawned a subgenre.

Only if your claim to fame is primary sound transduction and not, say, being a guru of giving other people tools and help with their ideas. My own career over the last ten years or so has been based on the latter.

I will say that I think the 'power-law' nature of that is not dissimilar to being a primary sound transduction artist. You don't get a large number of people becoming celebrities at tutorials, or at disseminating free plugins.

And yeah, I do mean to expand upon this: got a likely domain for it just yesterday. The trick there is that you need to be inter-disciplinary enough that you can produce a really wide range of content, that by definition a newbie couldn't possibly process. I can go from 'slew rates in op-amps in boutique guitar stompboxes' to 'exploiting unusual interpretations of the Circle of Fifths' (did you know the Four Chord Song can be read as an atomically contained minimum-area space in an extended diagram of the circle of fifths?) but a newbie wouldn't cover that range.

There are no secret weapons, just secret masteries: by that, I mean 'stuff that's sensible and obvious, but to the contextless outsider seems like black magic coming out of nowhere'. Any sufficiently deep context seems like magic to someone who has no idea of the scope of that context.

At least when it comes to synths, check out Syntorial. It's very similar to what you want, though it only covers a certain kind of synthesis.


Syntorial is pretty good. If you're looking for something free (and much more basic), I have a quick video series on YouTube: https://www.youtube.com/watch?v=VSwjp7Zt1GY&list=PLKzX4WhkkV... I built it as part of my course I'm working on for Sagefy, so that would include some multiple choice practice questions too: https://sagefy.org/subjects/CgDRJPfzJuTR916HdmosA3A8/landing But there's nothing quite like just using the tools and getting experience directly, for sure.

Seconded. Syntorial is an awesome way to learn to program synthesizer sounds. It plays a sound, then you replicate it with the synth controls. It starts easy and gradually ramps up the difficulty, adding more knobs to twiddle, explaining the concepts as it goes along. It's a million times better than staring at a full synth control board, moving knobs and hoping that you figure it out eventually.

There's no easy way. As you mention, this is arcane knowledge--people really do study it for years or decades to train their ears to the appropriate levels. The two big components are standard music production techniques (reverb, compression, EQ, appropriate mic usage, etc.) and then domain-specific knowledge for the instrument (characteristics of different guitar amps, say, or synth filters).

as other replies allude, it's also highly genre/goal dependent.

if you're trying to make your rock band sound more like led zeppelin there is a fairly fixed set of tools and instructions (albeit futile, ultimately)

if you are imagining a pure sound in your head that is not straightforwardly produced by an instrument, then it gets a lot more complicated, and there are countless routes to the same goal. the experimenting is the fun part though!

Post a YouTube link of the sound to a genre-appropriate producers' forum or Facebook group.

For the longer route this is a classic http://www.soundonsound.com/techniques/synth-secrets

Oh man that is a rabbit hole you don't want to go down. Modern software synths are extremely complex, but also extremely powerful.

I mean, look at the interface for Serum, probably the best synth on the market right now:


It looks like an airplane's cockpit.

Sound design is a whole other part of music. Most amateur musicians don't even bother with it because it is way too technical to master. They just use presets.

I personally hate it, but if you have a technical bent, you might enjoy it

Serum is the easiest synth I have ever used. There are a lot of controls, but they all make sense. And if one is just starting out, they can ignore most of the controls and focus on just the basics: waveform, envelope, filter. Then move to modulation, starting with LFOs. (Modulation is where Serum really shines. It is literally drag-and-drop. Compared to the modulation matrices that many synths use, it's a cakewalk.)

If you think Serum looks complex, take a look at Zebra 2.

The people at Ableton are well aware of this (the term "the studio as an instrument" comes up quite often at their event "Loop"), but I'm not sure there's a better way to learn besides experimenting. I've heard quite a lot of stories from (electronic) music creators about their beginnings: how they would ask more experienced friends for help, how the friends would reply "no, just try things", and how they felt in the end it was a good thing once it "clicked."

THIS, this right here. That idea has come to my mind many times: having something like a library of "recipes" for sounds. The hardest part for me whenever I try to do something in FL Studio is getting the source samples and making them sound the way I want individually. It's a shame to have an idea for a song in my head and not be able to materialize it just because I don't know what kind of plugin or instrument I should use.

It's one reason I like monosynths (or synths with just one or two oscillators). They really help you understand basic subtractive synthesis. Then you can layer on. A very simple modular system can help too, because you have to understand the fundamentals of sound and modulation.
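Basic subtractive synthesis, the thing a one-oscillator monosynth teaches, is just a harmonically rich source run through a filter. A hedged sketch (a naive sawtooth into a one-pole low-pass; real synth filters are steeper and usually resonant):

```python
import math

def naive_saw(freq, n_samples, sample_rate=44100):
    """Harmonically rich source: a raw ramp from -1 to 1 each cycle."""
    return [2.0 * ((i * freq / sample_rate) % 1.0) - 1.0
            for i in range(n_samples)]

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """The simplest 'subtractive' building block: a one-pole low-pass
    that attenuates the upper harmonics of whatever you feed it."""
    # Standard one-pole smoothing coefficient for the given cutoff.
    a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y = (1 - a) * x + a * y
        out.append(y)
    return out
```

Sweeping `cutoff_hz` over time (with an envelope or an LFO) is the classic filter-sweep sound; the oscillator never changes, only how much of its harmonic content survives.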

Aside from training the ear, you need a pretty sophisticated understanding of how waveforms translate into the sounds you hear.

I always wondered why musicians keep up with the conventional musical notation system, and haven't come up with something better (maybe a job for a HNer?).

I mean, conventional music notation represents tones on five lines, each capable of holding a "note" (is that the right word?) on a line as well as in between lines, possibly pitched down or up, respectively, by flats and sharps (depending on the key etc.).

Since western music has 12 half-tone steps per octave (octave = an interval wherein the frequency is doubled, which makes it a logarithmic scale, so compromises have to be made when tuning individual notes across octaves), this gives a basic mismatch between the notation and e.g. the conventional use of chords. A consequence is that, for example, with the treble clef, you find C' in the second space from the top, and thus at a very different place visually than the C one octave below, which sits on, rather than between, a ledger line below the bottom-most regular line.
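The logarithmic scale mentioned above reduces to a single formula in equal temperament: every semitone multiplies frequency by the same ratio, 2^(1/12). A quick sketch, using the common A4 = 440 Hz reference:

```python
def note_freq(semitones_from_a4):
    """Equal-temperament pitch: each of the 12 semitones multiplies
    frequency by 2**(1/12), so 12 of them exactly double it (an octave)."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)
```

So `note_freq(12)` is 880 Hz (A5), `note_freq(-12)` is 220 Hz (A3), and `note_freq(3)` is about 523.25 Hz (C5), which is why equal distances in pitch are equal *ratios*, not equal numbers of hertz.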

I for one know that my dyslexia when it comes to musical notation (e.g. not recognizing notes fast enough to play from the sheet) has kept me from becoming proficient on the piano (well, that, and my laziness).

> I always wondered why musicians keep up with the conventional musical notation system, and haven't come up with something better (maybe a job for a HNer?).

You're not alone, this is a common reaction to music notation by engineers; a lot of people have wondered the same thing, even here on HN. For example https://news.ycombinator.com/item?id=12528144 https://news.ycombinator.com/item?id=12085844

I see some great responses, but I wanted to add that you have to keep in mind that tons of people have actually tried to make a better system, and nobody has succeeded. That should give you enough pause to ask why and consider the possibility that the system we have is really good in a way that you haven't recognized yet.

I think the problem is that difficult to learn and bad are easily confused. It is difficult to learn.

Also keep in mind that music notation has undergone many iterations; it represents developments over hundreds and hundreds of years and covers every instrument under the sun - the breadth of what it has done throughout history and what it can do might be hard to see.

>I see some great responses, but I wanted to add that you have to keep in mind that tons of people have actually tried to make a better system, and nobody has succeeded. That should give you enough pause to ask why and consider the possibility that the system we have is really good in a way that you haven't recognized yet.

I think that this is the incorrect way of looking at it. I suspect it is less that the traditional notation system is highly evolved and effective, and more that getting a critical mass of musicians to transition/relearn/teach/translate into a newer system is incredibly difficult.

For instance, while Imperial units aren't without some advantage, they are pretty generally inferior to the Metric system. But the US hasn't really switched because it requires a significant level of coordination and control that simply isn't easy to access. And getting musicians to learn and teach a brand new, objectively better system would be much much harder.

The current system is 800 years old, and over that time it has won out over hundreds of different systems. A new system is proposed every now and then, and even though they might be better in a specific problem domain (say, microtonal music), they always fall apart.

I have thought a lot about the problem (worked as a professional bassoon player for a very long time), and I can't say I have had many good ideas. There are some ideas for simplified music notation (with different shapes for flats and sharps) which work _very_ well for making sight reading easier. Until it doesn't: It can't express enharmonics (different ways of writing the same note), which makes tonality analysis harder, and can actually hamper readability since most people that are fluent in reading music usually "stay in key" when reading music.

A quick google gave me this: http://musicnotation.org/ and I can't say I am very impressed by anything I see there. But as you notice, most systems are oriented by lines. I don't think that is because people lack fantasy, but because it is a pretty good way to write music.

What do you think about parallel visualization? Right now, musical notation strives for a single notation that tries to encompass the entire work—and to also serve as a canonical, lossless transcription of the work, from which it can be recovered.

If you drop that requirement (and then assume digital storage) you could have 1. an underlying canonical format that has "all the information" but which is never presented to the performer, nor to the composer; and 2. a number of views that expose various dimensions of the composition. Like orthographic projections of a model in CAD software.

Presuming an interactive display (touchscreen, etc.) you could switch between these views at will; but even for printed sheet music, you could just isolate one measure at a time and then display several "stacked" views of that measure per page.

(Basically, picture widely-spaced, annotated sheet music, but where the annotations are themselves in the form of more musical notation, rather than words, appearing in additional sub-staffs attached to the measure.)

"Right now, musical notation strives for a single notation that tries to encompass the entire work—and to also serve as a canonical, lossless transcription of the work, from which it can be recovered."

I don't believe this to be true. (Modern) guitar music is most often written in tab, often without accompanying staff notation. Also, staff notation is not lossless; musicians will interpret the music differently. For example, with violin, while some instruction is given on bowing, it is almost never complete, and musicians will find different ways to fit the bowing to the rhythm. This can make a huge difference to overall tone, as (most simply) the up bow sounds distinctly different from the down bow.

I do think this is the direction it is heading. There are new "smart" music stands coming to the market now with similar features.

Conductors can write notes about certain parts that can be accessed by musicians. Opera musicians (where different people play the same music every night) can have their own personal notes.

Most exciting, of course, is that everyone has instant score access. That removes a shit-tonne of time wasted during rehearsals.

Those are just traditional use cases. I'm excited to see what will come. I don't know if music as it is practiced today can be "expanded" in any meaningful way, but only time will tell.

I think this is a great idea as part of a learning tool, being able to simultaneously visualise a musical idea on a score, in guitar TAB, woodwind fingering, piano roll etc.

I've got a plan on the backburner to do something like this using Ohm https://ohmlang.github.io/

You have a great point that getting the world to switch would be very, very hard. But it's not black and white, and you can't really compare the two.

If there's a viable alternative to music notation that you know of and is superior to what we know as standard western notation, feel free to share.

Your choice of example is interesting, considering metric has won, and the US is switching slowly.

But there is no incorrect way of looking at it, music is an art. Standard notation is highly evolved and effective, it has been iterated on for millennia. Getting a critical mass of musicians to learn a newer system would be incredibly difficult. Both are true, and you can't compare them and say that one is "more", that's flatly not true in any meaningful sense.

I hope people don't think I'm being brusque here, but these comments are a classic case of an outsider looking at the system, admitting to be lazy and wondering why the rest of the world differs from their expectations vs. asking musicians what they think.

At its core, musical notation is succinct: a mixture of logic and unique symbols. Note markers are isomorphic to pitch. Rhythms subdivide with vertical lines. Special symbols and brief phrases denote beginnings, ends and loop points (they're not usually in English). Geometric figures indicate volume and speed changes.

A competing system in my purview is "tracker" notation. It's vertical and generally only used on machines, but hand-writable. A row of note cells looks like: C-3 Eb3 G-3 Bb3
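Those tracker cells are regular enough to parse mechanically. A hypothetical converter to MIDI note numbers (`tracker_note_to_midi` is my name, not from any real tracker, and the octave offset is a convention that varies between trackers):

```python
PITCH_CLASSES = {'C': 0, 'C#': 1, 'Db': 1, 'D': 2, 'D#': 3, 'Eb': 3,
                 'E': 4, 'F': 5, 'F#': 6, 'Gb': 6, 'G': 7, 'G#': 8,
                 'Ab': 8, 'A': 9, 'A#': 10, 'Bb': 10, 'B': 11}

def tracker_note_to_midi(cell):
    """Parse a 3-character tracker cell like 'C-3' or 'Eb3' into a MIDI
    note number, using the convention that C4 is MIDI note 60."""
    name = cell[:2].rstrip('-')   # 'C-3' -> 'C', 'Eb3' -> 'Eb'
    octave = int(cell[2])
    return 12 * (octave + 1) + PITCH_CLASSES[name]
```

Run on the row above (C-3 Eb3 G-3 Bb3) it yields 48, 51, 55, 58, a Cm7 stack, which shows how little decoding the format needs compared to staff notation.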

I have the same feeling. Music notation might be hard to interpret sometimes, but none of the alternatives actually solve anything. They do however introduce a whole lot of questions.

I think a valid comparison is the regular alphabet. It is, after all, a coding system for language in the same way that notation is a coding system for music. Most of the problems of that coding system (my pet peeve is English spelling) generally stem from conventions rather than problems with the alphabet (Italian and German are much easier to spell correctly).

There might be some interesting alternatives (hangul!), but those systems come with their own share of problems and generally have no big benefits. I actually believe that musical notation is a better fit for its task than our current coding system for language.

> the US is switching slowly.

As a US citizen who is a metric fan and loves using it: in what way? The government did switch, back in the 70s; according to its own statutes, it had to.

It has crept in innocuously in various places (and I find it hilarious), like 2-liter bottles of soda, or how computer processors are talked about in mm² die areas.

But the average American still uses imperial units religiously; anywhere they approach a problem involving any unit of measurement, they default to imperial. Having a 14-year-old brother, I see no change in his education or habits to indicate a slow transition of mindshare. The government moved decades ago, but the people aren't moving at all.

I get the impression it is much like high school language classes - you learn it once early on, never practice it, and by the time you are a full adult you have completely forgotten it. I'm not sure how to improve the situation to actually get the people to start using international standards, because if you were to start trying to force it on the supply side people would just not buy metric tools and information because they forgot it back in primary school.

I think the way to switch would be done the same way other countries have done so:

1. Make sure everyone is educated in metric

2. Change the easy things: the paper size the government uses, the units on food labels, the measures legal to use for sales of loose food or other goods, the units the government uses for all types of reporting. (Therefore if businesses want government contracts, they'll need to use metric.)

3. Change other standards, like residential construction, preferred fasteners, wire sizes. Where old measures are required for compatibility, write "24.5mm" in the standard. If the dimension could be changed to 25mm without any side effect, use that.

4. Change other things people see daily: I don't know if doctors use metric in the US, but I assume they communicate to patients in old units. Change the default, but accommodate older people. Change the road signs. Is anything left?

The UK is part way through 4, but has been stuck there for decades.

Musicians frequently get taught music in large batches at schools, though, which means you don't have to worry about network effects—there are choke-points in the network.

There's no reason a given school couldn't teach a "colloquial notation" first, with the "Lingua Franca" musical notation taught later on, for everyone in that given school. Then everyone who comes from that school would know that colloquial notation.

Consider: the "Chicago school" of Economics; "Rugby School" football; etc. These things start as colloquialisms, then spread to global awareness.

England had a colloquial notation, taught in schools, for several decades: tonic sol-fa. But it could only talk about melody and rhythm, not harmony. It fell out of mainstream use in the late 1960s, perhaps as the music publishing industry consolidated and globalized, making it easier to have a single international edition of each song instead of separate editions by country.

Music notation is to music as qwerty is to keyboards?

No, qwerty for keyboards is more like the layout of the 12 notes on an instrument, and there are many instruments.

Music notation is more like a programming language. The score is like a program that you can read/interpret and play.

> For instance, while Imperial units aren't without some advantage, they are pretty generally inferior to the Metric system.

You say this pretty matter of factly, but I actually vehemently disagree. Many imperial measurements are better than their metric counterparts for day-to-day lay usage.

- Fahrenheit is a better scale than Celsius.

- Inches, feet, & miles are very practical units; centimeters and meters much less so.

- Pounds are smaller and offer better delineation than kilograms.

- Liters are pretty similar to quarts, though I admit the various Imperial sub-units are annoying.

Sure, it's easier to convert between metric scales, but the number of times I actually do that?: approximately zero.

“In metric, one milliliter of water occupies one cubic centimeter, weighs one gram, and requires one calorie of energy to heat up by one degree centigrade—which is 1 percent of the difference between its freezing point and its boiling point. An amount of hydrogen weighing the same amount has exactly one mole of atoms in it. Whereas in the American system, the answer to ‘How much energy does it take to boil a room-temperature gallon of water?’ is ‘Go fuck yourself,’ because you can’t directly relate any of those quantities.” Wild Thing by Josh Bazell.
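The chain of unit relations in that quote really does make the gallon question a one-liner once you convert the volume. A sketch (assuming a US gallon of about 3.785 L and a 20 °C room):

```python
def calories_to_boil(volume_liters, start_temp_c=20.0):
    """Energy to bring water to 100 degrees C, using the metric chain:
    1 L = 1000 mL = 1000 g, and 1 calorie heats 1 g by 1 degree C."""
    grams = volume_liters * 1000.0
    return grams * (100.0 - start_temp_c)

GALLON_LITERS = 3.785  # approximate US gallon; the only conversion needed
```

That works out to roughly 303 kcal for the gallon; the entire "formula" is two multiplications, because the units were designed to line up.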

That's great.

When is that EVER useful to the layperson?


You don't really give any reason why Fahrenheit is a better scale, or why inches, feet and miles are particularly practical units. It seems to me that people say this simply because they are used to them. You don't convert to metric units, and it feels awkward because you don't use metric units.

There is an issue with "kilometer" being a clumsy word for everyday use (as compared to a mile) in the English language. But that's more a linguistic issue than an issue with the unit itself. Other languages solve it with shorter colloquial names for the unit.

Of course the imperial units give a good opportunity for being funny, in ways like specifying speeds in furlongs per fortnight. But you can do the same in SI-derived units, like parsecs per picosecond.

> There is an issue with "kilometer" being a complex word for everyday use (as compared to a mile) in the English language. That's more a linguistic issue that about the unit itself. Other languages have solutions to that with shorter colloquial name for the unit.

Even in English people of a certain age can say "klicks" and be understood.

Exactly, it's something that mass usage will solve, even if the folk song will not sound just the same with "a hundred klicks, a hundred klicks, I am five hundred klicks away from home".

Other languages often say just letters "k" or "km".

> Inches, feet, & miles are very practical units; centimeters and meters much less so.

Really? Do you know how much easier it is to compute areas and volumes in the metric system compared to imperial? Concrete example: figure out how much soil you need to buy to fill a box, knowing L, W and H. In metric it is a 10-second process. In imperial I do not even know how you are supposed to do it. Does anybody even know how many quarts are in a cubic foot?
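For what it's worth, the soil-box computation in metric really is this short, while the imperial route needs an arbitrary constant (roughly 29.92 quarts per cubic foot):

```python
def soil_liters(length_m, width_m, height_m):
    """Box volume in metric: multiply the dimensions to get cubic metres,
    then shift the decimal point, since 1 m^3 is exactly 1000 L."""
    return length_m * width_m * height_m * 1000.0
```

A 2 m by 1 m bed filled 30 cm deep is 0.6 m³, i.e. 600 L of soil, with no conversion table in sight.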

> Does anybody even know how many quart are in a cubic foot?

They know it, after they do the conversion, via metric system.

(OK, nowadays you can just enter "1 quart to cubic feet" in Google. And the funnier ones you get at https://en.wikipedia.org/wiki/List_of_humorous_units_of_meas... )

No, I don't know the number of quarts in a cubic foot, but no one does because they're two different measurements for two completely different uses.

No wonder that you miss the point of metric units if you don't get why doing such transformations is useful.

In the metric system, converting between length, volume and weight is trivial and straightforward. This comes in handy whenever you need to measure out a precise amount of batter or liquid using containers marked in a different unit.

Another way to look at it is that the current notation system isn't the best overall, it's just the most tolerable trade-off between a bunch of mutually-incompatible requirements.

Replacing standard notation for all uses may be doomed to failure, but replacing standard notation for some particular use case (especially new use cases that weren't anticipated when standard notation settled into its current form) may be a very useful thing to do.

Computers also give us a few new options, such as displaying notation in a time-varying form, or using three dimensions, or notating the music in some universal language that isn't necessarily easy to read but that can be easily rendered in any desired notation.

Lattice notation for instance is something I really like, but I don't know how to represent it without some kind of animation.

Here's an example I stumbled across on Youtube awhile back of the kind of thing I mean: https://www.youtube.com/watch?v=jA1C9VFqJKo

Lattices generalize to higher dimensions, which means they might be amenable to virtual reality or even some sort of human-brain interface that allows you to experience 4 or 5 spatial dimensions at the same time.

> Another way to look at it is that the current notation system isn't the best overall, it's just the most tolerable trade-off between a bunch of mutually-incompatible requirements.

Isn't "the most tolerable trade-off between mutually incompatible requirements" another way of saying "best overall"?

Totally agreed there are useful local overrides of standard notation. Tablature is one example, and there are others. I wouldn't call those replacements for standard notation though. Both notations exist, both serve different purposes, neither is going away, there's no either-or question to be resolved.

The lattice videos are super interesting! Thanks for sharing that. I want to watch a few more and understand his layout choices -- I think I kinda get it, triads form triangles. These don't encode anything temporal though, so this is a visualization that helps understand harmony spatially, but is not a musical notation and can't encode a song, right?

> Isn't most tolerable trade-off between multually-incompatible requirements another way of saying "best overall"?

I could have said that better. What I meant was that standard notation isn't better than every other system according to every metric we could use to compare such things.

Gary Garrett has more lattice demos on Youtube. Here's one that's an animation of an example in Harmonic Experience by W. A. Mathieu (which uses lattices extensively to explain harmony and is the best reference I know of for explaining how to understand them): https://www.youtube.com/watch?v=I49bj-X7fH0

A 3-5 lattice is a grid where one axis is fifths (powers of 3 in just intonation) and another axis is major thirds (powers of 5). Garrett implies a third axis for septimal flatted-seventh (i.e. barbershop seventh) intervals. Since the grid is leaning to the right, the diagonals that lean to the left are minor thirds. Powers of 2 (octaves) are usually ignored. Triangles that are flat on the bottom are major triads. Triangles that are flat on top are minor triads.

There isn't an obvious way to encode a whole song onto a single lattice diagram in a way that could be printed on a page and still be readable. They seem to work pretty well as animations or as static illustrations to explain chord transitions, though.
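To make the grid arithmetic concrete, here's a minimal Python sketch based on the description above (the function name and layout are my own, not Garrett's): each step along the fifths axis multiplies by 3/2, each step along the thirds axis by 5/4, and powers of 2 (octaves) are folded away as usual.

```python
from fractions import Fraction

def lattice_ratio(fifths, thirds):
    """Just-intonation ratio at lattice position (fifths, thirds),
    folded by powers of 2 into the octave [1, 2)."""
    r = Fraction(3, 2) ** fifths * Fraction(5, 4) ** thirds
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

# A major triad is a flat-bottomed triangle on the lattice:
# root (0,0), fifth (1,0), major third (0,1).
print(lattice_ratio(0, 0))   # 1   (tonic)
print(lattice_ratio(1, 0))   # 3/2 (perfect fifth)
print(lattice_ratio(0, 1))   # 5/4 (major third)
# The left-leaning diagonal is a minor third:
print(lattice_ratio(1, -1))  # 6/5
```

This matches the geometry described above: flat-bottomed triangles come out as 1 : 5/4 : 3/2, i.e. major triads.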

> What I meant was that standard notation isn't better than every other system according to every metric we could use to compare such things.

This is totally true; tablature is better for beginning guitar players to learn to play specific songs on the guitar.

The only reason tablature doesn't supplant standard notation is that the metric under which it's superior is much narrower -- it's only for guitars, and only better than standard notation for beginners.

I don't think standard notation is necessarily the best overall, but I do think it happens to be the best overall, the best we've got today. And I'm not convinced it will ever become a choice, as opposed to standard notation evolving like it has in the past to incorporate new ideas.

Thanks for the explanation of the lattice layouts; I hadn't noticed the triangle orientation part, I only got as far as seeing that horizontal lines formed the circle of fifths. I can't tell what the plus and minus symbols mean, do you know? Usually those are used for diminished and augmented chords and not single notes, so is Bb- another name for A that is useful under the lattice system?

It's a way to identify distinct pitches that are usually treated as the same in equal temperament.

For instance, in just intonation the 2 (the major second of the scale) has a frequency that makes a ratio of 9/8 relative to the tonic, but sometimes you might want a slightly flatter major second with a ratio of 10/9. So, that note is labeled 2- to distinguish it from the regular major second.
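For the curious, the gap between those two seconds is the syntonic comma, 81/80 — about a fifth of an equal-tempered semitone. A quick sketch of the arithmetic:

```python
from fractions import Fraction
from math import log2

major_second = Fraction(9, 8)    # the regular "2" of the scale
flat_second  = Fraction(10, 9)   # the slightly flatter "2-"

comma = major_second / flat_second
cents = 1200 * log2(comma)       # size of the gap in cents (100 cents = 1 semitone)

print(comma)            # 81/80, the syntonic comma
print(round(cents, 1))  # ~21.5 cents
```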

Maybe no one has succeeded with a general replacement, but there are different notations for guitar. I assume some other instruments have their own notation too. When electronic music kicked off, to reproduce a sound you had to trade setups / circuit diagrams; old music notation can't encode that! I kind of think of it like x86 assembly. It's here to stay, for better or worse, but that doesn't mean you can't have nicer things on top, and there are still things that don't make any sense at all in the x86 world (like FPGAs for one).

Tablature has a long history as well, it didn't start with guitar. Before guitar there was lute and cittern tablature -- which typically use letters and not numbers. I play both guitar and lute and I actually wish the letters convention had stuck, it's more fun. Wikipedia says that the first known tablature was for an organ. https://en.wikipedia.org/wiki/Tablature#Origin

Yes, some other instruments have their own specific notations & tablatures as well. These aren't replacements for standard notation though, and never will be. They have a place, and they are useful, but they aren't in competition with standard notation. Tablature has its disadvantages (https://en.wikipedia.org/wiki/Tablature#Disadvantages) but also the single biggest reason for standard notation -- groups, band, ensemble & orchestral playing -- is something tablature can't help with at all.

Totally agreed that standard notation doesn't help with electronic sound reproduction, but I'd suggest that standard notation isn't for sound reproduction in the analog world either, that's not its purpose. Standard notation is the sequencer, not the synthesizer. You can use standard notation to encode songs in the electronic music world, but it's definitely not super convenient, hardly anyone does that. The analog version of trading setups and circuit diagrams is carving your violin using plans and specifications of a Stradivarius violin.

I have a theory for this. Please don't downvote me; I'm here with limited English but really good intentions.

The QWERTY keyboard is something humanity found a better solution for: people have developed better layouts like Dvorak, for example, yet the world keeps using QWERTY (not in my case).

I studied a long time ago that the TCP protocol is also not the best protocol; there are much better and faster ones, but people keep using old TCP for the Internet...

I believe that when something is already consolidated, it's expensive to change. Sometimes it's not worthwhile to update all the consolidated knowledge/investment, even when better solutions exist.

The world updates consolidated solutions only when the gain is really worth it, and that's not the case for music notation.

I also agree with you that music notation could be easier, but I believe it doesn't get upgraded because the master musicians have already mastered it, so they like the current notation, and they are the ones with enough knowledge to create a better version. I believe there are other types of notation, but they would need to be used by master musicians, music schools, and universities to start a wave that could replace the current notation (which already works pretty well).

The argument that Dvorak is superior and that inertia is keeping people from converting has been studied, and while I think there's some element of truth, it doesn't seem particularly compelling, since big disruptive changes occur all the time.

"the best-documented experiments, as well as recent ergonomic studies, suggest little or no advantage for the Dvorak keyboard."


"The trap constituted by an obsolete standard may be quite fragile. Because real-world situations present opportunities for agents to profit from changing to a superior standard, we cannot simply rely on an abstract model to conclude that an inferior standard has persisted. Such a claim demands empirical examination."


Musical notation is a vastly more complex system than keyboard layout, and I don't believe we have a Dvorak of music notation to even compare with. There are no contenders for musical notation that a large group of people believe are superior. So there's no reason to believe that inertia is keeping people from using another notation.

To go one step further, music notation is constantly changing, it has been evolving, adopting and incorporating the best ideas for thousands of years. What reason is there to not start with the assumption that it already took the best changes so far? I have no doubt that if superior ideas for notation develop in the next hundred years, that at the end of it, we'll still call the result 'standard music notation'.

My intention was not to compare music notation and Dvorak, but to write about human behavior in similar situations regarding the "inertia" you cited.

Totally, I understand. And mine wasn't to counter Dvorak specifically, but mention that the inertia theory has been questioned, and also mention that sometimes things are believed to be better by some people but in reality aren't much better if at all for most people. Sometimes inertia is posed as a reason for not changing when in fact the reason is the accepted system is the superior system for the largest number of people.

The latter is my theory about music notation; that inertia is not even at issue yet because there are no serious alternatives.

And inertia might never be an issue, because music notation is a fluidly changing system. TCP and QWERTY/Dvorak are static systems that don't ever change, so you can argue about which one's better. Music notation is changing and improving, so it's hard to suggest that people are resisting change, and hard to suggest that something better will supplant it, right?

I agree with your theory in general though, outside of the issue of music notation, and I think a lot of people do. It's just a matter of finding the right examples that clearly demonstrate it. And it would be really interesting to somehow quantify the amount that something needs to be better before people will adopt it. It's like static friction in physics -- it takes more force to get something started moving than it does to keep it moving.

Now I got your point! Thank you.

That's something I've done time and time again, and seen others do too. It's easy to look at something and think you understand it well enough to know how it can be improved. But when you find out the rationale and reasons it is the way it is, it's kind of humbling. Like how it surprised me to learn that there's a lot of valid, practical reasons to use the Imperial measurement system over Metric.

> Like how it surprised me to learn that there's a lot of valid, practical reasons to use the Imperial measurement system over Metric.

I've always wondered about that. Why?

I don't remember where I read it, but one of the big reasons was that Imperial units are much easier to divide in ways that make a lot of sense in practical usage, whereas Metric is designed to make conversions easier for doing science, which puts practical usage on a lower priority. But take this with a grain of salt.

* 64.7989 mg of salt

Machinists and engineers often prefer Imperial "mils" (thousandths of an inch), for instance. It's easy to convert from kilograms to pounds or kilometers to miles, but there's no convenient metric unit for expressing typical distances and tolerances used in mainstream machining. A millimeter is way too coarse, a micrometer is way too fine.

As a specific case, in electrical work, it's easy for me to specify "6 mil trace/space" attributes for a PC board design. Not so easy to say "0.1524 mm" or "152.4 microns." If I round my specification down to 0.1 mm, the resulting copper features will carry less current and cost more money. If I round it up to 0.2 mm, other physical and/or electrical requirements won't be met. So now I have to add at least one more sig fig, which is a pain in the neck for no obvious benefit.
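A quick back-of-the-envelope check of that conversion (the helper name is mine, just for illustration):

```python
MM_PER_MIL = 0.0254  # 1 mil = 1/1000 inch, and 1 inch = 25.4 mm exactly

def mil_to_mm(mils):
    """Convert mils (thousandths of an inch) to millimeters."""
    return mils * MM_PER_MIL

print(mil_to_mm(6))  # ≈ 0.1524 mm -- four significant figures just to say "6 mil"
```

So a one-digit Imperial spec really does become a four-sig-fig metric one, which is the commenter's complaint.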

Also, what we have got this way through quite a bit of evolution...

The first thing that looks a bit like modern notation is probably plainchant, originating in the Catholic church circa the 14th century:


The basic system we use today originates from about the 1600s or so, but has still evolved a lot.

There were tons of historical warts along the way that have largely dropped off - for instance, figured bass notation (https://en.wikipedia.org/wiki/Figured_bass) or the French violin clef (https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Fr...)

See also: https://news.ycombinator.com/item?id=12159224 . Which I shamelessly plug since I was a participant in that one. :-)

I got a reply there that the current system is only suitable for professional musicians, and that you'd need something like shape notes to reach mass musical literacy. Now I'm hopelessly biased as a music degree-holder, a semi-professional musician, and a Presbyterian to boot ;-), but this strikes me as setting the bar way too low. Given levels of overall literacy in the US (which were very different when shape notes were developed) I don't think it's that difficult to learn the notation itself – the difficulty I think is in mastering the music system.

Think of it as data compression that shows you the notes you're most likely to play, without taking up space for notes you probably won't.

If there's a piece in C, for example, in most traditional Western music you're unlikely to play off-key notes. So why take up valuable space for those when you can denote that unlikely event with a sharp/flat symbol?

Traditional music notation made no sense at all to me until I realized this.

Edit: For those that don't know, in most western music you're only going to use 8 out of the 12 possible notes most of the time. This is not universally true, especially of modern non-pop music, but traditionally if you played off-key notes people thought you might summon evil spirits, so it's easy to understand why things would be written down this way. Not only is it space efficient, but you wouldn't accidentally summon the devil. To summon the devil you have to really want to and write a flat or sharp in there.
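For anyone who wants to see the "compression" concretely, here's a toy sketch (sharps-only note names and function names are my own) that derives which 7 of the 12 pitch classes a major key keeps — the staff only reserves space for these, and everything else needs an explicit accidental:

```python
# The 12 pitch classes of the chromatic scale, sharps-only spelling.
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole, whole, half, whole, whole, whole, half

def major_scale(tonic):
    """The 7 distinct pitch classes of a major key."""
    idx = NOTE_NAMES.index(tonic)
    notes = [tonic]
    for step in MAJOR_STEPS[:-1]:  # the last step just returns to the octave
        idx = (idx + step) % 12
        notes.append(NOTE_NAMES[idx])
    return notes

print(major_scale('C'))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B'] -- 7 of the 12
```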

> Edit: For those that don't know, in most western music you're only going to use 8 out of the 12 possible notes most of the time.

You mean 7 notes. Traditional music notation and terminology is confused in many ways, one of which is a fencepost-counting error. As a result: octaves are actually seven notes apart in a scale, two major seconds make a major third (2+2=3?), and two octaves make a fifteenth (8+8=15!?).

This is a really great answer! This is also a big part of why different instruments read different clefs.

>Traditional music notation made no sense at all to me until I realized this.

Can't thank you enough because your characterization is the first one I've heard that adequately explains the foundation of the visual system as one of condensation. I've been shrugging my shoulders about this for many decades!

Agreed, but reading sheet music is a very small part of playing piano proficiently (say, at the 97th percentile). Once you get past knowing the notes in a piece, there's the much more difficult task of being able to manipulate the force that you exert from your fingers to create the right volume balance. For example, your untrained thumb will naturally play notes much louder than it should, and it takes a lot of practice to be able to play notes with it at the right volume; the opposite is true with your pinky and ring fingers.

...Not to mention the even more difficult task of knowing what you want the piece to sound like in the first place. A novice playing a piece at 100% accuracy sounds nothing like a concert pianist playing the piece. There's a world of depth to music beyond just learning the right notes.

Here's an example: listen to this performance of Debussy's "Reflets dans l'eau" by Arturo Michelangeli, one of the greatest pianists of the 20th century:


And then listen to this student play it (she is still a high-skill player, just not world-class talent):


I don't know a whole lot about concert piano, but I don't think you could have picked two better videos to illustrate your point. That student is obviously very practiced and skilled but there's just no comparison.

Thanks! When I learned this piece, I listened to that recording on repeat. Michelangeli is simply amazing.

The student asks, "how?". The master asks, "why?". That's why you feel a difference in the two performances.

> I always wondered why musicians keep up with the conventional musical notation system, and haven't come up with something better (maybe a job for a HNer?).

Before starting down that path, I would recommend familiarizing yourself with the wide range of music notations that already exist and continue to be used, and then the ridiculously varying plethora of failed alternative music notations that have been invented over the centuries, and why they failed to see wider adoption.

And, of course, it's fascinating to study the evolution of the existing "standard" music notation, and see the changes that have been adopted, and the ones that weren't. For all its apparent stasis, it has definitely evolved over the centuries, in response to the changing needs of musicians.

Agree 100% with all this. Modern drum notation is probably the easiest case to look at with regards to evolution of "western notation", with jazz chord symbols being another.

Some other reasons why musical notation prevails:

- There's a huge switching cost, as much of the world's written music is in some form of "western notation". Being able to read standard notation unlocks a huge wealth of knowledge from books, etc.

- Standard notation is one of the most flexible ways to create readable music, playable and easy to read across a wide variety of instruments and ranges (clefs, transposing score, etc).

- It's a common language, in the way that a programming language is. Some of the conventions may be confusing to outsiders (e.g. why is the term "puts" used for printing in Ruby? This seems normal to any Ruby hacker but is completely unintuitive to a layperson). Once these conventions are learned, they provide a common reference point. Like a lot of languages, it's far from perfect, but much like spoken language, it's more likely to evolve than be replaced.

- There's almost no motivation for anyone to replace standard notation. Notation isn't required for all forms of music (many great jazz and blues musicians don't read music), and for the forms of music where it is required, it's by far the quickest and most efficient way to communicate the information.

In summary, I think the question of "why can't we do better" is valid, but you could ask the same question about programming in C. There are good reasons to write C in 2017, and there are still good reasons to write musical notation.

What great jazz musicians don't read music?

Wes Montgomery, Erroll Garner, Django Reinhardt and obviously Roland Kirk are probably the most well known that couldn't read at all. There are many, many more jazz musicians that were/are very poor sight readers.

Sure, but those guys are all (sadly) long gone, and the parent comment said "don't", not "didn't".

Bireli Lagrene, Scott Hamilton and George Benson have all said they don't really read music.

It's definitely true that most jazz musicians can read passably, but my original point was that it's an aural tradition. No one learns to play jazz by reading notes off a page, whereas in western classical music it's an essential skill.

agree that probably all currently popular jazz musicians read music, but is this necessarily an improvement?


That doesn't jive with the "I can do better" mantra around these parts. I say that both sincerely and sarcastically.

Jibe means to be in accordance. Jive is a dialect and a dance. FYI. I realize we will probably lose these words, but for now.....

I get your point, but at the same time I would say that it's often much harder to "do better" when you don't even have a clear idea what it is you're trying to improve upon.

They are not saying you cannot be better. They're just saying to respect what came before and learn from it as you devise the better scheme.

Closer to this: https://xkcd.com/1831/

I have played piano and guitar (piano for almost my entire life and guitar for several years) and have used both tablature and traditional sheet music.

Tablature is much easier to read initially as it provides a one to one mapping between the visual representation and the physical location of the notes - i.e. 5 frets along on this string. However, from my experience there is a cap on the 'bandwidth' at which you can sight read this. It is just too hard to mentally parse a bunch of numbers on lines and turn that into notes when playing at speed. (For non musicians, 'sight reading' means to read the notes and play fluently at the same time)

Traditional sheet music has a steeper learning curve, however, I've found that reading this music becomes much more subconscious with practice and the bandwidth at which you can parse the notes is much higher. Also, it is much easier to notice patterns in sheet music - i.e. a major 7th chord in the key of the song is visually obvious no matter what the key.

Great point.

To a first approximation:

Tablature is a _physical description_ of how a particular stringed instrument should be played, and the notes are a side effect of that. It is instrument specific and it doesn't contain much information about the musical details of the piece.

For example, tablature doesn't describe the key the piece is to be played in. To figure that out, you have to mentally translate the mechanical description into notes, and from there determine the key.

Standard notation is a _musical description_ of how a particular song should be played, and the physical act of playing is a side effect of that. It is not instrument specific, and it contains a lot of information about the musical details of the piece, but usually no information at all about how the instrument should be played. (There are a few minor exceptions.)

For example, standard notation tells you exactly the key the piece is in, but the player has to mentally translate the notes into the physical steps of getting that note out of the piece.

Basically standard notation adds a layer of indirection from the music to the mechanical act of playing. Like many indirections, it can be hard to understand at first, but it adds great power and flexibility that a direct system doesn't have.

What you're saying makes sense, but it applies oppositely too in that tab is non-physical and notation very physical. Example, if you see a scale in musical notation, it's immediately obvious that it's a scale just from a 50 millisecond glance, whereas in tab it's not obvious that it's a scale until you read/play through it.

When you become adept with musical notation, this is one of the primary hindrances of tab.

Tablature is also needed because the same note can be played on different strings, notes can be doubled, there are different ways to transition from one note to another, and there are various other nuances that are messy at best to try to express in standard notation.

another feature/side effect is that you can use any instrument to play any part of a written piece, as long as it's physically possible to play the notes as written. and if not - you can improvise easily by dropping superfluous or unnecessary components/notes without changing the overall sound of the music.

Another side effect is that as a guitar player you can look at the tuba player's part to figure out what he is playing even if you have no idea how to play tuba, if you read music. With tablature only a guitar player will know what you are playing.

> However, from my experience there is a cap on the 'bandwidth' at which you can sight read this. It is just too hard to mentally parse a bunch of numbers on lines and turn that into notes when playing at speed. (For non musicians, 'sight reading' means to read the notes and play fluently at the same time)

I've noticed this as well and my team has developed a notation based on key/scale and a new user interface for the guitar so that experienced players and beginners can sight read on their first attempt at a new song.

We reduced the cognitive load of sight reading music. Not only that, we then backfill technique like chord fingering, introducing traditional chords one at a time. Here is a series of three videos of what I'm talking about: https://www.youtube.com/watch?v=KXpTGIzBONU&list=PLvoNIaPTga...

> However, from my experience there is a cap on the 'bandwidth' at which you can sight read this. It is just too hard to mentally parse a bunch of numbers on lines and turn that into notes when playing at speed. (For non musicians, 'sight reading' means to read the notes and play fluently at the same time)

Sorry but this is wrong IMO. You've been reading sheet music your entire life, but you've only been reading tab for the past few years.

I've been reading tab for 10 years. I think in tab. There are a bunch of songs that I can't be bothered learning (sultans of swing, metallica songs+solos, oasis songs.. you get the idea) because I don't like them enough but are fun to play along with, and I do so with Guitar Pro playing the tab at full speed. It's basically like rocksmith/guitar hero but in "real life" mode.

I started with tab and learned to read music 15 years later. I was amazed at the vast amount of information in sheet music.

Tab is great for messing around, beginners or simple songs. I can't even imagine trying to learn to play complex jazz or classical music using tab. Sheet music also guides you right into learning scales and intervals.

Tab is great for playing guitar hero but, even on a real guitar, it's like pressing buttons. It doesn't help you learn much at all. I'll never go back to using tab even though I can visualize it easily in my head.

I wonder if you had been a guitar + tablature player your entire life and picked up piano a few years ago, if you would come to the same conclusion.

I've tried learning guitar a few times and when I've asked accomplished players how they get by with tabs, it's been explained as tab music establishes a minimal framework that you play within. It's a lossy compression scheme (and traditional sheet music is less lossy). Would you agree with that?

>I wonder if you had been a guitar + tablature player your entire life and picked up piano a few years ago, if you would come to the same conclusion.

I was a tab-reading guitar player for years, then learned classical notation. Classical notation is undoubtedly faster to parse. For whatever reason, there seems to be a much more direct connection between your eyes and hands when you're reading dots.

It seems to be much more amenable to chunking[1] - you stop seeing individual notes and start seeing chords and scale fragments. Tab is a meaningful and direct representation of the physical parameters of the guitar fretboard, which I think is a shortcoming; classical notation represents information in a way that more directly corresponds with musical theory.

Tab is lossy, but it discards some very important information. Unlike classical notation, it has no native means of indicating note length and can't accurately represent rhythmic subdivisions. If a piece of music has any real rhythmic complexity, tab alone is insufficient.


I think most guitarists (including me) use tablature as a loose framework to extemporize around rather than as an exact transcription. You can find lots of youtube videos of people playing exact versions of old favorites (stairway to heaven for example) but they are usually the musical equivalent of painting by numbers, lacking feel.

Jazz musicians typically learn the changes (chords and melody line) to tunes and improvise around that from a sophisticated understanding of harmony, a variation on the tab approach.

Sight reading music, especially for guitarists, is more akin to tightrope walking in my opinion but typically a combination of tablature, staves and chord changes gets me to where I need to be

Totally disagree.

Tab is LESS lossy than traditional sheet music because it encodes the string as well as the pitch.

A given note could be played in as many as 5 different places, and they will ALL sound different. An open A (5th string) will sound different than the same A played on the low E string, 5th fret.

(This is completely unrelated to the woeful quality of most of the tab floating around on the net. You can write down a piss-poor transcription as sheet music too.)

Tab is terrible at conveying rhythmic information; playing anything moderately complex is very hard unless you're already familiar with the material. And I'd say it's a very lossy format if it's reliant on out-of-band information like a recording to make sense.

Fingering is a problem that mostly goes away as you gain an innate sense of what sounds good versus economy of movement and the ability to mute. For music written on guitar, it's usually relatively easy to tell what position works best.

Every guitar is different, too. String gauges, pickups, resonant notes, action height and intonation all play into it, and most of those are subject to personal preferences.

> Tab is LESS lossy than traditional sheet music because it encodes the string as well as the pitch.

But it doesn't encode the note type, right? All the tab books I've bought don't differentiate between whole notes, quarter notes, etc... So that seems pretty lossy. Look at any guitar fake book for an example.

Plus, I never looked at tablature as a literal transcription. That's why I would describe it as a more of a framework. Like you say, a note can be played in a lot of different places. Once you internalize the fretboard logic, when you see an A in the tab, you play the one you think will sound right or is physically accessible.

A lot of the nicer tablature is in a hybrid format that borrows symbols from standard notation, like attaching stems and flags and dots to notes as appropriate to make the rhythm explicit.

Voicings can be up to the conductor, the lossyness is a feature not a bug.

Music notation does take time to learn to read well, but it's no different from anything else that takes time to learn and master.

Once you get past a cursory "eff this" reaction, you start to see how downright brilliant notation is.

The vast majority of music focuses on 7 notes at a time. If you alter a key signature, you are playing 7 other (non-distinct) notes. Music notation encapsulates this concept very well.

That's only one example, but telling musicians their notation sucks and needs to be fixed because it's hard for a non-musician is akin to a musician telling a programmer that Python and Linux needs to be fixed because it doesn't look like a violin.

> Once you get past a cursory "eff this" reaction, you start to see how downright brilliant notation is.

In this it reminds me of vim.

Over the last 800 years, hundreds of different systems have been proposed as alternatives to the system that has evolved into the one in use today.

Generally it can be said that some have been better in a specific use case (Klavar notation was pretty big in the Netherlands among those who didn't know regular notation), but they fall apart pretty quickly when you try to write Liszt or Rachmaninov in them.

I might be a bit rigid (I have played bassoon professionally for most of my adult life), but I can't really see how it can be made much better and still keep the same utility.

While chords might not be optimal today, we can still express things like enharmonics easily (which, at least for me, is something that can make sight reading easier as it allows for the notes to stay "in key").

As with the spoken word, music has an advanced coding system. Both coding systems are flawed in their own way (as someone with a different mother tongue than English, I have a hard time spelling just about anything), but they have also stood the test of time.

Well, considering this article on Ableton never even uses conventional musical notation, and many working musicians sit in front of their DAW all day, I think it's safe to say that the virtual piano roll has largely taken over the role of classic notation. I don't think many people making rock music, EDM, or hip-hop have ever really touched classic notation.

I'm a classically trained pianist, and I basically agree with you. I read sheet music because that's what there is. As a young composer, I wrote in standard notation because I hadn't questioned it. As an old composer, I don't think there's anything great about it, and I have no need of it. I 'write' all my music on hardware or software synths. It's much easier. All I care about are the parameters. What note, when, how long, how loud, etc.

It's clear that current music software is poor for conveying information when compared to editors for many other tasks. It is easy to blame musical notation for that, but in fact in most music software there are several equivalent views (tracks, piano roll, notation), and you'll find that notation is the _most_ efficient of those.

Consider this: In this system, your most complex Classical scores for an entire orchestra are written, and present day trained composers continue to work efficiently in it. That tells you about its expressive power. It is in fact not stupid, but very well tuned to a lot of music theory. Other than complex timbre manipulation (and even that), you can do probably everything you want to accomplish with just software that does nothing but notation.

Instead, what most music software lacks is in the organization department. The organization of non-linear ideas, their programmatic (as in music) occurrence, the automation of repetitive tasks, and the completion of obvious intent. Tracks and loops are probably not the right view of musical structure, at least far from a _complete_ view. There needs to be a better bridge between musical phrases and ideas at the local level (for which musical notation is perfectly suited) and the organizational structure of a complex piece at the macro level (for which tools are very lacking). There also needs to be a better bridge between some conception of events (for which musical notation is slightly ill suited, being restricted to notes) and the microscopic world of timbres, effects, and transformations.

Until music software makers recognize that what they should be helping with is neither engraving, nor mixing console simulation, but a non-linear creative task, music software will continue to suck.

Speaking as a classical pianist, I think the conventional musical notation system is actually pretty good. My only issue is having to memorize Italian, French, and German phrases to be able to read music properly. IMHO, music notation should be localized.

Many non classical musicians use other notations. For example, many guitarists and bassists use tab notation. It's simply a visual representation of the strings and a number for which fret to play.

It's not as expressive but is far easier to get started with.

MOST guitar players I know use tabs. Personally, I'd rather see the chord type and root string, i.e. 6th string Amin, 4th string Gmaj7. Tabs are almost as confusing to me because I memorized the notes, not the fret numbers.

I memorize chord progressions in terms of scale-degree numbers. E.g. {F#m; B dom7; E Maj} becomes {ii; V7; I}. I've memorized which degrees are major chords and which are minor chords. So if I can figure out the tonic, then I can figure out the key. And if I figure out the key, I need only remember a pattern of ordinals (modulo non-triads).
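For the curious, that mapping can be sketched in a few lines of Python (triads only, so the dom7 just reads as V). The note names and major-key triad qualities are standard theory; the function and variable names are mine:

```python
# Sketch: map diatonic chord roots to roman numerals relative to a major key.
# Simplified: triads only, and only chords whose roots are in the key.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]                # semitones of the major scale
NUMERALS = ["I", "ii", "iii", "IV", "V", "vi", "vii°"]  # triad qualities in a major key

def numeral(chord_root, key):
    """Return the roman numeral of a diatonic chord root in a major key."""
    interval = (NOTES.index(chord_root) - NOTES.index(key)) % 12
    degree = MAJOR_STEPS.index(interval)            # ValueError if non-diatonic
    return NUMERALS[degree]

# The progression from the comment, analyzed in E major:
print([numeral(r, "E") for r in ["F#", "B", "E"]])  # ['ii', 'V', 'I']
```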

This is a great illustration of how we grapple with the abstraction of scales and key over time.

First, you have tabs, which describe the physical position of the notes on the instrument.

Then, we have root / chord type notation, in which we describe the starting position and shape of the notes on the instrument, and the musician must translate that information to the physical position of the notes, on the fly.

What is important about this second stage is that the musician has a pretty good grasp on how to play, and can usually sight read a piece and get a pretty decent version of it just by tracking chords, or in the case of the piano, just chords and the melody on the other hand, or a small pattern.

Finally, we come to roman numeral notation, which describes the chords based on their relative position to the root note of the key, not the chord. This is a powerful abstraction. It provides incredible insight into the relationships between music, notes, chords, and progressions of chords at a level divorced from the 'root' of that key. A 9th played over a minor 7th chord is going to give you a very similar sound in any key. This is a great skill for songwriters and composers, who need to have a strong working intuition about things like what chord will sound good in this progression, or what notes we want to appear in our melody (which is related to the chords beneath it).

Yes, thank you. This is particularly frustrating when you play with an alternate tuning such as DADGAD. Tabs are pretty much useless then.

"It's not as expressive"

Have you ever used Guitar Pro or Tux Guitar? It can be INSANELY expressive. Grab a MIDI of Van Halen's "Jump" (IIRC The best one was about 76kB) and import it into either of those. Guitar Pro will be noticeably more expressive vs TuxGuitar. Inside of that MIDI, the solo is 100% dead-on note-for-harmonic-for-slide-for-hammer. Both programs output the exact same tablature. You will get the solo perfect.

Most people that have read tablature haven't read the guitar-specialized notation found in Guitar Pro or TuxGuitar. It's far more instructive.

MIDI isn't tab though. It requires note velocities and durations for a start, which tab doesn't. You can go from MIDI to tab but you couldn't go from tab to MIDI.

"MIDI isn't tab though. It requires note velocities and durations for a start, which tab doesn't."

This is entirely incorrect. You can get velocities (mezzo-forte, mezzo-piano, etc.), and they are shown if you hover over the note itself in Guitar Pro or TuxGuitar. Sure, they change the granularity of it, but the general range remains the same, and for all practical purposes it sounds the same if played properly.

There have been dozens of suggestions for alternative notation systems over centuries. Many documented here: http://musicnotation.org/

I guess it's just inertia.

There are a lot of books on the subject, but the short answer is that it's just the most common standard at this point. Yes there are some things about it that don't make sense, but for whatever reason it seems to be the most coherent way for musicians to communicate using a common language.

Its popularity also has to do with what sounds pleasing to the ear (and brain) on a biological level.

A number of people have come up with alternative scales and notations systems over the years, but none of them have really stuck for one reason or another. Nonetheless, they are pretty fun to read about.

here's the whole history of notation https://en.wikipedia.org/wiki/Musical_notation

Also, if you aren't familiar with John Cage, you should check him out. His music and writing deals with a lot of the stuff you just brought up, and it's also a really great jumping off point to find other interesting artists and musicians.

Indeterminacy, a work he did with David Tudor is a great starting point https://www.youtube.com/watch?v=_lOMHUrgM_s

I once thought the same thing, but after months of studying our music system and our way of notating it, I came to understand why it's so difficult to improve upon staff notation.

First of all, Western music has complex structure both horizontally and vertically. This makes it rather difficult to encode and visualize, right at the outset. You need some sort of matrix visualization, like a staff or piano roll, to capture all of the nuance.

What makes the staff so useful is that it also captures the tonal aspects of music in a compact way -- those that relate to the key the music is written in. Every triad in the same inversion looks the same in every key. A triad is three consecutive lines or spaces. And then deviations from the standard triad for that tonal function are marked with accidentals.

This turns out to be extremely useful for performers, because you learn to play an instrument by learning to play in all the keys, rather than learning what the 12 notes are and playing note by note. I realized this when taking piano class and doing exercises where we'd transpose to another key while sightreading in the original key.

There are other notation systems that have been as successful as the staff, but they tend to be specific to particular instruments or styles. For example, most guitarists find tablature much easier to play than standard notation, especially if the tablature is augmented with note durations and rests.

Also, although I've become a true believer when it comes to the staff, I have less rationale for why the traditional clef system has stuck around. It seems like something that is more regular as you go up and down the scale would be more helpful. There are systems that use things like note shapes or colors to help mark the note name. I guess we just haven't found a standard.

I'm a programmer/musician and I can read on guitar and piano. I'm somewhere on the middle on this debate. I really dislike conventional notation but I also agree that the alternatives have some big downsides as well.

My biggest objection to conventional notation is that it gives a profoundly misleading picture of how music and harmony really work. It defines one reference key (C Major/A Minor) with a certain pattern of steps and gaps, starting on a certain note. Then for all of the other keys you add more and more sharps or flats until you get into ridiculous keys where all 7 notes are modified. The truth is that there's just one evenly spaced set of 12 tones, and all it means to be "in a key" is that you've picked a certain note out of the 12 to start the pattern on. There's nothing special about C. We could have chosen the key we call F# as the reference key and named it C, and everything would work the same.
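That claim is easy to check programmatically: a major scale is one fixed interval pattern (W W H W W W H) started on any of the 12 pitch classes. A minimal sketch, with pitch classes as 0-11 and the function name my own:

```python
# Build a major scale on any root: same pattern, different starting point.
PATTERN = [2, 2, 1, 2, 2, 2, 1]  # whole/half steps, summing to 12

def major_scale(root_pc):
    """Pitch classes of the major scale on root_pc (0 = C, 6 = F#, ...)."""
    pcs, pc = [root_pc], root_pc
    for step in PATTERN[:-1]:
        pc = (pc + step) % 12
        pcs.append(pc)
    return pcs

c_major = major_scale(0)    # [0, 2, 4, 5, 7, 9, 11]
fs_major = major_scale(6)   # [6, 8, 10, 11, 1, 3, 5]

# F# major is literally C major shifted by 6 semitones -- nothing special about C.
assert [(p - 6) % 12 for p in fs_major] == c_major
```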

It's hard to overstate the damage from this. Lots of musicians I know—serious players, people who took music in college—still think of "complicated keys" and "easy keys" and are only vaguely aware that the keys are actually all the same and they're just being tormented by the notation and terminology. I'm teaching guitar to a friend who was first trombone in high school and it blows her mind that she can play the same scales starting anywhere up and down the fretboard and it sounds the same.

It all comes from the design of the keyboard, where the notes of C major are evenly spaced (white keys) and the sharps/flats are stuck in between. There's also the fact that in the past the 12 notes weren't evenly spaced, so the different keys really did all sound different back then.

Conventional notation does have one big advantage, though: every line or space represents one note in the scale. This is more how musicians think: you don't care that much about the notes outside your key, and having the other ones "tucked away" in between makes it easy to see what's going on. That's why it's so quick to read once you know it. Out of the hundreds of alternative notations, I haven't seen one that's both key-neutral and also makes it easy to see things in terms of scale degrees.

(One idea I've had is a 12-tone staff with Sacred Harp-style shaped note heads to show you what scale degree you're playing. Not sure if that's ever been tried.)

I agree with your main point -- standard notation is basically just piano tablature and it tends to confuse as much as it enlightens about how music works. However, I disagree about the "there's just one evenly spaced set of 12 tones" bit. This is a simplifying assumption of standard notation that makes it hard to express the idea of notes that are outside of the well-known 12.

Even in the key of C major, this is a problem in just intonation. Say you want to play a G major chord, so it's made up of G, B, and D (3/2, 15/8, and 9/8). Later in the song you want to play a D minor, so you play D, F, and A (9/8, 4/3, and 5/3). That doesn't sound right, though. It turns out that the D you want is actually 10/9, which is just a bit flatter than 9/8. In standard notation, you can't distinguish.

It's possible to get around this by adding non-standard modifiers to notes aside from the usual ones (sharp/flat/natural), but unmodified standard notation misleads people into thinking that those two notes are the same. Which is another example of your main point, that "standard notation gives a profoundly misleading picture of how music and harmony really work".
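The arithmetic in the parent comment checks out, and Python's `fractions` module makes it easy to verify exactly: with A = 5/3, only the flatter D of 10/9 gives a pure 3/2 fifth for the D-A of the D minor chord.

```python
from fractions import Fraction as F

# Which D makes D-A a pure fifth (3/2), given the just ratios in the comment?
A = F(5, 3)
for D in (F(9, 8), F(10, 9)):
    print(D, A / D)  # 9/8 -> 40/27 (a "wolf" fifth), 10/9 -> 3/2 (pure)

assert A / F(10, 9) == F(3, 2)   # 10/9 works
assert A / F(9, 8) == F(40, 27)  # 9/8 lands noticeably flat of 3/2
```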

I agree that intonation matters a lot; I guess I'm thinking about how you'd make a notation that better conveys the information already there in the current scores, which is 100% equal tempered. I actually like the idea of modifiers for microtones, and presumably any of that stuff would work just as well on a 12-tone staff.

Also, with a 12-tone staff plus shape notes, you'd get a little extra information for just intonation because you can tell for sure what key was intended for a given note.

DAWs don't use the classical notation system, neither does this tutorial, so I don't understand the context of your comment.

Well, technically some DAWs have a notation view (Logic and Sonar, for example), but it's pretty much useless.

Since music notation is a form of communication, wide adoption is a huge factor in what is considered better.

We could come up with more precise and effective languages than the ones we naturally speak, as well, but the good-enoughness of the ones we already have and the fact that others around us are very likely familiar with them is more important. Utility trumps quality, and worse is better.

That said, if all you want is a different notation system for you to use personally or with small groups of other proponents, there are plenty to choose from. ABC and MML variants use letters for notes and numbers for note lengths, for example. Probably not optimal for sight reading, but maybe better than staff notation when writing or transcribing music. There are also trackers and piano rolls. Neither is very good for quick comprehension, but maybe they lay things out in a way that makes more intuitive sense.

One advantage of the 5-Line Staff is use of both lines and spaces. It's compact, easy to print, and easy to stack notes vertically.

Another advantage: each note of a diatonic scale is mapped injectively. Cf. representing each line (or space) as a whole-tone, which leads to hash-collisions (e.g. "is that a G or a G#?"). Each note on a line (or space) on which collisions occur would need an accidental. Which defeats the purpose of key signatures.

A diatonic scale contains an odd number of unique notes. The fact that C lies on a line while C' lies on a space is an unfortunate artifact of representing a 7-note scale with alternating lines and spaces.
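The injectivity argument can be made concrete with a quick sketch. The "whole-tone staff" here is a hypothetical comparison: with only 6 line/space positions per octave for 12 pitch classes, collisions are forced by pigeonhole.

```python
# 7 diatonic notes map 1-to-1 onto 7 staff positions per octave...
diatonic = [0, 2, 4, 5, 7, 9, 11]                 # C major pitch classes
staff = {pc: pos for pos, pc in enumerate(diatonic)}
assert len(set(staff.values())) == 7              # injective: no collisions

# ...but a whole-tone staff squeezes 12 pitch classes into 6 slots per octave.
whole_tone = {pc: pc // 2 for pc in range(12)}    # each line/space = a whole tone
assert len(set(whole_tone.values())) == 6         # 12 notes, 6 slots: collisions
```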

shameless plug :) lightspeed, the sightreading flashcard game.


Requires Windows and a MIDI keyboard.

> I always wondered why musicians keep up with the conventional musical notation system, and haven't come up with something better (maybe a job for a HNer?).

Is this supposed to be satire? Invoking Poe's Law on this one

> I always wondered why musicians keep up with the conventional musical notation system, and haven't come up with something better (maybe a job for a HNer?).

Me too. But if you think about it, all you really need is a graphical representation that describes the pitch of sounds relative to each other as well as their duration relative to the beat. And conventional notation is not bad at it!

The current system is essentially:

a dot on a coordinate system representing the pitch, duration, and position of the sound in a sequence of sounds.

- a horizontal position axis: you draw an invisible x-axis representing the position of the note in its ordered sequence. It gives no indication of its duration.

- a vertical pitch axis defined by Western notes (do, re, mi, etc.): you draw your pitch lines, a y-axis with y=Do, y=Re, y=Mi, etc.

- a duration axis (let's say it points towards you): we can't draw it in a 2D representation of music, so we'll project this coordinate onto the time-pitch plane, which is your staff. We'll decorate the dot representing the note according to its duration coordinate: say its duration is one beat, then the dot is a black filled circle; if it's two beats, then it's a white circle; if it's half a beat, then it'll be a black filled circle with a hook. Etc. etc. etc.

And then you start adding all the extras of music notation: rests for silences, vibrato, tempo, etc.

Now there is this choice of not representing note position and duration on a single axis. That may very well be so it's easier to standardise and read. You could also choose to represent the duration coordinate with colour; would that make it easier? :)
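The three coordinates described above can be sketched as a tiny data model. All the names here are hypothetical, and the glyph mapping follows the conventional common-time values (half note open, quarter note filled, eighth note flagged):

```python
from typing import NamedTuple

# A note as a point with three coordinates: sequence position (x), pitch (y),
# and a duration that notation encodes by decorating the note head.
class Note(NamedTuple):
    position: int   # index in the ordered sequence (x-axis)
    pitch: str      # "Do", "Re", "Mi", ... (y-axis)
    beats: float    # duration, shown as a glyph rather than an axis

GLYPHS = {2.0: "white circle", 1.0: "black circle", 0.5: "black circle + hook"}

melody = [Note(0, "Do", 1.0), Note(1, "Mi", 0.5), Note(2, "Sol", 2.0)]
for n in melody:
    print(n.position, n.pitch, GLYPHS[n.beats])
```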

Maybe the problem doesn't come from the notation but from the system itself: the half step between B and C, the 12 notes (but really it's more), etc. That's why solfeggio is hard! I think some Greeks considered the study of harmony to be at least as intellectual as that of counting! I wonder if there's an algebra for harmony. An H-algebra, why not?

But really, it's not the only notation: guitar tabs, guitar chord representation, etc

Tempered tuning indeed divides the octave into 12 half-steps, but a huge amount of music uses only 7 or fewer of them for long stretches (or entire pieces) with a few exceptions. So think of the lines & spaces as being a compressed representation that doesn't waste vertical space for the tones that a piece isn't going to use.

Me, I love standard notation. Common chord voicings and interval patterns stand out as easily recognizable patterns on the page.

I wonder how many of us "skilled musical technicians" there are - people who can read music really well, produce those notes on our instruments predictably enough to play in a group, but just aren't that "musical" - we're boring to listen to on our own and have trouble singing. I'm a competent flute player, but it's a good thing I was just as interested in computers as a teen.

They actually discuss that in this course. They talk about pelog scales and 19-note divisions of the octave (not 12): https://learningmusic.ableton.com/advanced-topics/pelog.html
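For anyone curious what an n-note equal division of the octave (EDO) means concretely: step k of an n-EDO scale multiplies the base frequency by 2^(k/n), so 19-EDO just slices the same octave into 19 smaller steps instead of 12. A quick sketch (function name mine):

```python
# Equal divisions of the octave: 12-EDO is standard tuning, but the same
# formula works for any number of divisions.
def edo_freq(base_hz, step, divisions=12):
    return base_hz * 2 ** (step / divisions)

print(round(edo_freq(440.0, 12), 1))      # 880.0 -- a full octave in 12-EDO
print(round(edo_freq(440.0, 19, 19), 1))  # 880.0 -- a full octave in 19-EDO too
print(round(edo_freq(440.0, 1), 2))       # 466.16 -- one 12-EDO semitone above A4
print(round(edo_freq(440.0, 1, 19), 2))   # 456.35 -- the smaller 19-EDO step
```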

Roman numerals may seem simpler than Arabic, but it turns out Arabic numerals are more convenient for complex operations like multiplication.

I think that you are approaching the notation from a very left brained kind of logical point of view. Once you learn to recognize the patterns in musical notation, none of these concerns actually matter. Musicians just see the pattern and play it, and then focus on the stuff that is really difficult, which is the musicality.

piano rolls are used instead in production. (source: I make lots of music - http://www.soundcloud.com/decklyn)

It is confusing at first, but once you memorize where all the notes are it is very good. Notation is based around the idea of key signatures, and once you have that down it becomes very intuitive and you can actually know what a piece of music sounds like just by looking at the notation. Western music has 12 distinct pitch classes, but typically the notes are used in scale groupings of usually 7 notes, with accidental "outside" notes being easily recognized by sharp and flat symbols. Doing it that way gives easy visual cues for musical "events" such as key changes, outside chords, etc. There is a reason it has stuck around; it is a quite ingenious system.
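The way those 7-note groupings generate key signatures can be sketched via the circle of fifths: each step up a fifth from C adds one sharp in a fixed order. A rough illustration, sharp keys only (flats work symmetrically going down in fifths); the names are mine:

```python
# Walk the circle of fifths from C: each step up a perfect fifth (7 semitones)
# adds one sharp to the key signature, in the standard order F# C# G# D# A# E# B#.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
SHARP_ORDER = ["F#", "C#", "G#", "D#", "A#", "E#", "B#"]

def sharp_keys():
    pc, keys = 0, []
    for n_sharps in range(8):
        keys.append((NOTES[pc], SHARP_ORDER[:n_sharps]))
        pc = (pc + 7) % 12  # up a perfect fifth
    return keys

for key, sharps in sharp_keys()[:4]:
    print(key, sharps)
# C []  /  G ['F#']  /  D ['F#', 'C#']  /  A ['F#', 'C#', 'G#']
```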

I have mild dyslexia myself and I think any kind of notation is going to be a problem for us. The good news is you don't need musical notation to play music. You can play by ear. Don't let it stop you if you're really interested in music.

Guys if you haven't seen Sonic PI (http://sonic-pi.net/), this is also a great tool! You can write beats using a Ruby DSL and it runs them real time.

I sat down and did this in an hour: https://github.com/exabrial/sonic-pi-beats/blob/master/house...

Sam Aaron is the guy behind the project, he does a lot of ambient type stuff: https://www.youtube.com/watch?v=G1m0aX9Lpts

I wanted to come and post this. Sonic Pi is an amazing tool with a compelling raison d'etre that I would think resonates with the HN community. It's a realtime code as performance tool aimed at teaching kids programming but is also used by advanced users to create wonderful pieces of music.

I'm actually working full time on a new DAW that should make writing music a lot faster and easier. Current DAWs don't really understand music. Also the note input process and experimentation is extremely time consuming and the DAW never helps. Current DAW : my thing = Windows Notepad : IDE. The HN audience is definitely one of my core groups.

If you are interested, sign up here https://docs.google.com/forms/d/1-aQzVbkbGwv2BMQsvuoneOUPgyr... and I'll contact you when it's released.

I've also made a few iOS apps for the purpose of simplifying composition, though they're pretty limited in scope (on purpose). Although it seems most composers would prefer to use full DAWs from the start, I'm personally much more creative when I'm able to jot down and edit my fragmented musical ideas as quickly as possible, if only to make the initial draft. (If I were a better singer or musician, I'd just use a recorder or a looper — but my skills aren't quite there yet, and besides, it's hard to note-edit a recording.) Composer's Sketchpad[1] lets you paint notes directly onto a time/pitch canvas, bending and stretching them as they go along. (This works great for e.g. guitar solos.) MusicMessages![2] is a more basic piano roll that lets users quickly tap buttons across several layers to enter notes. (Musical bubble wrap! Works great for riffing on short drum sequences and chord progressions.)

There's another similar-sounding project called Helio that was posted a few weeks ago: https://news.ycombinator.com/item?id=14212054

I hope that in time, we get more Markdown-style composition tools vs. the full DAW suite. Good luck! I'm looking forward to seeing what you make.

P.S. AudioKit is pretty dope. :)

[1]: http://composerssketchpad.com

[2]: http://musicmessages.io -- working on turning it into a full iOS app, so will probably have to shut it down and fold it into the new app at some point

Sounds really cool! Actually, I'm close to releasing a (looper) DAW -- kinda geared towards live use, but I've thought a lot about composition too.


Send me an email at mpercossi at zenaud.io , always fun to talk to fellow audio devs :)

And for all the vim lovers out there -- my app supports vi commands for movement and editing :)

That looks really nice... too bad there's no Linux support :/

Yes, it is a shame. But: I will add it, along with Windows support.

Indeed, I'll go further. I'm really starting to believe that the only way not to get royally screwed as an app developer is by abandoning the "major" platforms -- which all want to turn you into a serf -- and target OSS platforms like Linux. I'm honestly tiring of dealing with the artificial roadblocks Apple (and Microsoft is no better) throw at me to further their own ends. I actually analysed SteamOS with this intent, but sadly it looks like SteamOS is geared towards the "living room" experience.

Anyway, long story short: there will be Linux support in 2018.

Does it have per note editing? For example in trackers you can specifically set a note to play volume X, pan Y, pitch Z

zenAud.io is designed for live use, so it currently doesn't have a piano roll -- instead, you define record loops using editing tools in the arrangement view and record MIDI or audio into it. You can also drag and drop to import standard MIDI files into the arrangement if you want to use pre-written stuff.

I realize this is a big limitation, but we intend to add a piano roll in the next few months.

Well you're surely over-promising, here's hoping you won't under-deliver. Do you have anything at all to show yet?

What do you think is an over promise? The resulting app will be less than 10KLOC, discounting third party libs.

I don't have a demo yet if that's what you are asking about but I've open sourced this for example


Actually I do have some old demos but they don't show the best parts. It's actually kind of hard to show those right now.

>a new DAW that should make writing music a lot faster and easier.

>Current DAWs don't really understand music.

>Current DAW : my thing = Windows Notepad : IDE.

It really sounds like you're promising a lot.

Have you seen e.g. synfire?

I'd be really interested to hear the concept of how you are making things more IDE-esque

I think my idea of a perfect music program is closer to vim than an IDE, but you're on the right track.

Have a look at Extempore, a lispy live music/notation language and environment. Only Emacs bindings, no vim, but impressive performance nevertheless...

[0]: http://extempore.moso.com.au/

[1]: https://github.com/digego/extempore

I played around with this a while back and there are Vim plugins for it. My biggest problem was having to compile the thing from source which involved also compiling a custom version of LLVM, which took forever. It's possible this is no longer a problem.

I'm aware of extempore; it's impressive. I actually use SuperCollider quite a lot (in vim), which is not lispy (more OO) but in a similar space. But what I want is something that can operate on music how vim operates on text, not just operating on music-written-as-text in vim!

IMO, classical notation is the Vim of music - it looks bizarre to outsiders, it's totally unintuitive, it requires a lot of memorisation and practice to use effectively, but it's extraordinarily efficient in the hands of an expert user.

How is it efficient? What exactly is the alternative?

It's very quick to read and write. It contains all of the vital musical information in a very concise format. Numerous alternative schemes for musical notation have been tried, but none have achieved significant adoption.

Use vim to compose an abc file, then play it with software listed at http://abcnotation.com or https://en.wikipedia.org/wiki/ABC_notation

Have you looked at Lilypond? http://lilypond.org/

I have. "LilyPond is a music engraving program, devoted to producing the highest-quality sheet music possible. "

I don't need or want any of that. In fact when I write music, music engraving is the least of my concerns. Actually music engraving is generally the least of my concerns period.

Also I find the current music notation to be kinda idk outdated. I can read it, but I feel like it's a system designed by someone who had the mathematical knowledge of a 15th century farmer (which is probably how it came to be).

>Also I find the current music notation to be kinda idk outdated. I can read it, but I feel like it's a system designed by someone who had the mathematical knowledge of a 15th century farmer (which is probably how it came to be).

What specifically about it?

I can read (and prefer) standard musical notation, but when handwriting I use Hummingbird [0] because I find it lends itself to handwriting. But I can't really imagine a "better" musical notation than what is the standard today, except a better way to communicate natural/flat/sharp notes.

[0] http://www.hummingbirdnotation.com/

This is a long discussion. But fundamentally music notation is very paper oriented and doesn't exploit the advantages screens offer.

> (and prefer) standard musical notation

Prefer it over what?

I wanted to bring up Hummingbird notation with a specific context in which I prefer it (handwriting) while still being clear that I prefer standard notation over Hummmingbird as a whole.

There's a lot of vim too :-).

Great. To be specific, what I dream of is something that can operate on higher-level musical constructs analogous to vim's text objects (words, lines, parentheses, etc). Chords, scales, rhythms, melodies? I don't really know exactly what this would look like but I suspect it could be very slick if someone got it right. I had some ideas and thought about implementing them a while ago but it got deprioritised next to making my own music and had to get to the back of the "some day I'll..." queue.

I've signed up to your google form, so I'll look forward to seeing what, if anything, you come up with :) I am on linux (and yes, I agree that music on linux is a pain), so I might not get to use it unless you port it, but I still look forward to seeing it, whatever it is.

Yes, you get it! That's exactly what this is. You sound exactly like me lol. I love higher order things and I've been chasing this "mirage" since I was like 12 but I never had the chops and time to really devote to this. Do you think that we could chat sometime? My email is my username at gmail.

What don't current DAWS understand about music?

Same thing that pencils don't understand about writing, and paintbrushes don't understand about art.

I'd argue that current DAWs expect the user to understand at least something about music. Sounds like OP is working on some "syntax-aware" features for their DAW.

Also, if this DAW understands something about music, will it constrain me to its understanding about music?

Most of what I write is highly dissonant or straight up microtonal.

> will it constrain me to its understanding about music

This is actually exactly what I'm trying to prevent. Most of the current solutions only kind of constrain you to a certain tonal space that you can maybe explore but the space of possible compositions is actually insanely large. My DAW is going to try to help you explore all that.

Microtonality is definitely something I've thought about and I think I can make it work but I'm curious to know what do you use currently to compose?

I lean towards Reaper the most as far as DAWs go.

Often I'll use http://www.huygens-fokker.org/scala/ and my synths and a fair bit of SuperCollider/Overtone.

Isn't knowing something about music something of a prerequisite for someone who wants to make music? Of course everyone has to start somewhere, but as musician of 20 years who loves DAWs, I would say learn an instrument first, or at least concurrently, if you want to start producing music.

The thing is that once you learn the music theory, few DAWs let you leverage that to be more productive.

Also why do you have to learn music theory first, why can't the DAW teach you as you go?

Mostly they don't understand well all the possibilities outside of typical meter and tuning systems. They can do some but tend to push you to writing 4-beat meters in 12-note-equal-temperament. Rhythm and pitch both have far far far more possibilities which DAWs either ignore or at least make second-class options you have to kind of fight for or try a few limited tastes.

Better question is what do they do understand about music?

Well can you tell us about that? You're the one who made the claim.

Check out Synfire; it's the only software that's somewhat similar. But it's very expensive, and the UI isn't great (sometimes it looks like writing music in Excel: it can provide "intelligence", but you have to check boxes and click on things, aintnobodygottimeforthat.maymay). Some people might find it interesting that it's written in Smalltalk, though.

Can you explain what a DAW is?

A DAW, or Digital Audio Workstation, is to building music what Final Cut Pro is to building video, or what Eclipse is to developing software. Most DAWs consist of multiple tracks which hold multiple audio clips, each of which is scheduled to play at a certain time. You build a song out of these clips, which you have loaded into the DAW. You can also add various effects to the clips and manipulate them. You can store abstract music event data ("play note A here at this time, then play note B flat at this time") in additional tracks. This data has no sound associated with it, but like a player piano roll, you can set it up to play notes on some instrument, either an internal software instrument provided by the DAW or a third party, or emitted via MIDI to a remote hardware music synthesizer.

DAWs are used to produce the huge majority of music you hear in the media, from commercials to hip hop songs. Even seemingly real orchestral pieces for movies are often composed entirely using artificial instruments. For example, here is Junkie XL showing how he composed themes for Mad Max Fury Road.
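A toy sketch of that "abstract music event data" idea: a track of note events with no sound of its own, which a renderer turns into timed instructions for whatever instrument you point it at. The names here are illustrative, not any real DAW's API.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    start_beat: float
    duration: float
    midi_note: int  # 69 = A4, 70 = Bb4, per standard MIDI note numbering

track = [
    NoteEvent(0.0, 1.0, 69),  # "play note A here at this time"
    NoteEvent(1.0, 1.0, 70),  # "then play note B flat at this time"
]

def render(track, bpm=120):
    """Turn beats into wall-clock seconds for a synth/sampler to schedule."""
    sec_per_beat = 60.0 / bpm
    return [(e.start_beat * sec_per_beat, e.duration * sec_per_beat, e.midi_note)
            for e in track]

print(render(track))  # [(0.0, 0.5, 69), (0.5, 0.5, 70)]
```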


Digital Audio Workstation. Think Ableton Live, Apple Logic Pro, Avid's Pro Tools, FL Studio, Cakewalk Sonar, Propellerheads Reason... tools to record/arrange/produce/master music.

They are all different beasts; once you learn one, you don't want to relearn another.

The only difference is that Ableton Live and Bitwig (runs on Linux) are designed for live performance.

I like Reaper (Cost is 1/5 but equally capable) and it also runs reasonably well under Linux. https://linuxmusicians.com/viewtopic.php?t=15280

Actually, many people never pay for a license; it has a similar model to Sublime Text.

>The only difference is that Abelton Live and Bitwig (Runs on Linux) are designed for live performance.

Ableton, at least, also functions perfectly fine in the traditional piano-roll and timeline paradigm of DAW workflow too. Don't let the 'Live' part of the name mislead you into thinking it's only for live performers; it does everything the 'old DAWs' do, AND it's got great features to assist in live performance.

Also in terms of underlying concepts, if you know one DAW well, you can usually learn another one fairly quickly, as it becomes more a question of learning the interface more than anything else.

> if you know one DAW well, you can usually learn another one fairly quickly

I couldn't disagree more, but I am talking about doing professional work. The concepts are all the same but getting where you are proficient in a DAW takes a very long time to find the quirks and strangeness that each one comes with to produce a quality piece.

Video editors are a hundred times harder to switch between.

Ableton is probably the best DAW right now simply because it has the most tutorials online.

I remember way back when I used Cubase. Couldn't find any decent help online.

With Ableton, you are spoiled for choice when it comes to tutorials and lessons.

Quick google: Digital Audio Workstation (DAW)

Most comments here mention software, but there are some interesting exceptions. Check out Surgeon, for example, who likes to use his custom controllers with Ableton. You can actually see him re-wire the controllers every now and then. (Great music too ;))


Will this be for electronic music?

Ofc! I'm working on this because I wanted to make some electronic music, but none of the current DAWs really let me express myself the way I want.

Cool, I added myself to your list. What would you say makes your interface different from others?

It's gonna be clean and fast, no clutter. Recently there was this on HN: https://github.com/peterrudenko/helio-workstation which kind of scared me because my UI is somewhat similar (though after thinking about it more, I actually find mine a lot better). Also note that the UI is only like 20% of the whole thing; what I'm really trying to improve is the workflows. I will make Hypersphere the fastest DAW in the world when it comes to expressing your ideas.

When I'm in the zone, I don't care about checkboxes. I have some new user-interface paradigms that I haven't seen done before (I can imagine they have been tried, though) that should make writing music super painless and should let you express yourself.

Good answer, thanks! I look forward to seeing more and hopefully watching it blossom as it grows. Cheers!

When you say HN audience would be one of your target groups, do you mean that your DAW would be more like a development environment/programming language (like Sonic Pi), or would it have a more traditional interface?

Both actually :-). And those aren't even all of the "composition paradigms" and they are all first class citizens in the UI.

Awesome, I'm stoked to get my hands on this!

Sounds intriguing. Any chance it would work under Linux?

I kinda wish it would but audio on Linux is such a pain. I think that porting it won't be too bad once it's done but I'm not promising anything.

Can you share some more details? I'm interested, but hesitant about putting my email into some random Google form.

It's gonna be the fastest piano roll. It will have semi-live performance, kinda like Ableton Live, but Live is sample-based and mine is music-based. I don't want to reveal too much, but I've talked to professional composers and described the workflow I envision, and they all were like "I need this asap".

Idk if this will ease your concerns, but I've been around HN for a while (I'm in the top 30 karma-wise); I won't spam you.

Have you toyed with live music programming? Just curious.

Also nice endeavour

So I'm working on this mostly to scratch my personal itch. I'm aware of those but to be honest I never found them to be more than toys. When I listen to music made in these, I feel like they generally lack some structure. My thing is all about helping you structure things.

Aight. I wasn't comparing btw, just wanted to have your point of view.

Just FYI there will be a small JS programming environment in my thing.

Any further details, maybe?

Are you interested in anything particular? I can provide a lot more detail but I think that none of that will do it any justice. Sign up and check it out when it's out.

No offense, but it sounds like an empty sales pitch. You're trying to bring attention to the product you're building - and there's nothing wrong with that - but you're presenting only vague promises, without even discussing what DAWs get wrong about music in any significant detail.

In the spirit of constructive criticism, maybe you could at least point to specific negative aspects of existing DAWs that you're willing to eradicate?

Will this have any support for external VSTs like Massive?

Ofc, this is kinda standard. Generally I'll try to go well above and beyond what's possible today.

Note that I'm on the core team of AudioKit https://github.com/audiokit/AudioKit which is a platform for AudioUnit development so I know all about how dope plugins are :-).

Hi, I make audio plugins. Let me know if you need OEM plugins for your DAW.

Hm this would have come in really handy a while back lol. I might take you up on the offer still.

>Also the note input process and experimentation is extremely time consuming and the DAW never helps

What is so arduous about plugging a midi keyboard in?

I can think up a lot more complicated music than I can play. I don't always have access to a keyboard. Piano is a good instrument, but sometimes I want to write drums. Also, sometimes I want to express relationships between the notes, not just have the notes themselves. To record 10,000 notes from a piano, you need to hit 10,000 keys, possibly more than once, to record them. My thing will let you achieve the same result with fewer than 10,000 actions.
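One way to picture "fewer actions than notes" (purely my own hypothetical sketch, not anything this commenter has described concretely): a compact pattern description expands into many concrete note events.

```python
# Hypothetical sketch: a handful of inputs (a chord progression plus an
# arpeggio shape) expands into many scheduled note events, instead of
# recording each note as an individual keystroke.
def arpeggiate(chord_roots, shape, step=1.0):
    """Expand each chord root by an interval shape into (time, note) pairs."""
    notes = []
    t = 0.0
    for root in chord_roots:
        for interval in shape:
            notes.append((t, root + interval))  # (time in beats, MIDI note)
            t += step
    return notes

# A few "actions" worth of input ...
progression = [60, 65, 67]   # C, F, G roots
arp_shape = [0, 4, 7, 12]    # root, major third, fifth, octave

# ... yields twelve scheduled notes. Scale the idea up and a few edits
# can stand in for thousands of individually recorded keystrokes.
events = arpeggiate(progression, arp_shape)
print(len(events))
```

The point isn't this particular expansion; it's that editing the short description (one list) regenerates all the notes at once.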

I agree with you with respect to representation. Pattern/sequence generation is something most DAWs don't have outside of something like Cthulhu, and it's something the live-coding languages can do easily.

Another thing I've been dying for is an easier way of layering sounds, for example drum hits. Multiple MIDI sends feel hacky in Ableton and are certainly not a first-class feature. On the other side of things, rearranging multiple WAVs after wanting to change a note is even more painful.

I totally agree with you about the actions, though. Configuring plugins etc. can be a huge drain, and it's very mouse-heavy.

I'm not sure I understood what you meant by layering drum hits. If you mean having a single trigger set off layered samples that together form a drum hit, like a snare or bass, then some drum samplers come to mind, like Geist. That's the one I personally chose due to its wealth of flexible features, and layering a group of samples into a single "hit" is a foundational feature of the program.

> Another thing I've been dying for is an easier way of layering sounds, for example drum hits. Multiple midi sends feels hacky in Ableton and certainly not a first class feature.

Layering drum sounds is a typical feature in all DAWs, and in Ableton, with its instrument and drum racks, it's even easier to layer whatever you want. Maybe I don't understand what exactly you're trying to achieve?
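For what it's worth, the layering being discussed can be sketched in a few lines (conceptual only; the sample names are made up, and real drum samplers add per-layer tuning, velocity ranges, etc.):

```python
# Conceptual sketch of drum-hit layering: one incoming trigger note fans
# out to several samples that sound together as a single "hit".
# Sample names are invented for illustration.
layers = {
    38: ["snare_top.wav", "snare_bottom.wav", "clap.wav"],  # layered snare
    36: ["kick_sub.wav", "kick_click.wav"],                 # layered kick
}

def trigger(note, play):
    """Fire every sample layered under one trigger note."""
    for sample in layers.get(note, []):
        play(sample)

fired = []
trigger(38, fired.append)
print(fired)  # all three snare layers fire from one note
```

A drum rack or sampler does essentially this mapping internally, which is why a single pad can produce a composite hit without multiple MIDI sends.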

I see - so almost like you could have a macro that knows certain changes / scales and just hit :dorian or whatever? That's interesting...

Exactly. Be assured, though, that you are just barely scratching the surface :-). Sign up above if you haven't; it will be dope.
