If you're interested in learning to make music and the lessons in the link are confusing, overwhelming, or boring, some students find a "peel back" approach to learning songwriting easier to grasp at first. A peel-back approach involves taking a song and teaching it by stripping away each layer in turn: start by stripping away the vocals, then learn the melodies, then the chords, and finally the drum beat underneath it all. One benefit of the peel-back approach is that melodies and vocals are the most memorable parts of a song and the easiest to pick out when listening to the radio, so a student can learn using songs they already know and like. Either way, songwriting is hard and fun. Best of luck.
P.S. I think Ableton makes good software and I use it along with FL and Logic. They did a solid job with these intro lessons. But it's worth mentioning that there is free software out there (including Apple's GarageBand) that offers the key features a beginner just learning songwriting can practice and mess around with before purchasing a more powerful DAW like Ableton.
If anyone is interested in a Free/Libre/Open Source Software option (cross-platform Linux/Windows/Mac) I've really enjoyed producing with LMMS over the past 18 months or so: https://lmms.io/
It's definitely got room to grow in terms of functionality/interface but the development community is of such a size that it's possible to still make meaningful code contributions. I've contributed a couple of small patches to improve the Mac UI as a way to get familiar with the code base.
Of course, the downside is that I have to decide whether to write code or make music whenever I sit down to use it. :)
"A podcast where musicians take apart their songs, and piece by piece, tell the story of how they were made." @ http://songexploder.net/
I sort of wish there were more technical details as a rule, but it's understandable given the relatively short format that they can only cover so much ground. I'd prefer longer episodes personally, but I suppose not everyone might, and there's tradeoffs in producing more content. I guess I'm just glad that the show caught on and is still going strong.
Protip: sample the clips the guests put on the show :) I've gotten some really great material from this show sonically, since most of the clips seem to be the individual instrument tracks.
I love looking at systems and peeling back the layers to find out what makes something tick. That's not an approach to learning that I really encountered until I entered the workforce and was met with complex systems that I needed to understand. And I loved it!
How would this approach apply to a more traditional instrument that doesn't have the advantages of having a "good" sounding sample already preloaded that can be easily layered into a song that you are composing? I grew up learning the violin and it was endless disjointed drills until it was put together in a classical song that I never heard before nor had the desire to play. 8 year old me just wanted to play the theme song to "Jurassic Park" and roar like a T-Rex.
In my view, learning an instrument has a lot in common with learning to code, in that some people take to it, and others don't. And we probably know some of the reasons, but not all of them. Of course teachers and teaching programs vary, as do kids and their family milieu. But nonetheless, music education has huge attrition.
For instance, by way of anecdata, I took string lessons as a kid and loved it, and my kids have gotten pretty serious on violin and cello. They actually like classical music, and it probably helped that both of their parents also enjoy it. So it definitely works for some people.
Just for fun: chords in scales are numbered from bottom to top in Roman numerals. I feels like home base, V feels like wanting to go home. If you want to create the feeling of going home but then not really go there you can go from V to VI instead of I. 'Sad but I have closure'-type ending? Major IV - Minor IV - I. Bluesy feeling? Add a minor seventh to your I, IV and V chords. Dreamy? Major seventh instead there, except on the V.
It's even entirely possible to learn to recognize all of these types of chord progressions and sounds instantly. I'm working on and off on an ear-training app that randomly generates them, which musicians can use to train their ear.
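To make the numeral talk concrete, here's a rough Python sketch (the numeral labels and note names are standard conventions, but the code itself is just illustrative) of how diatonic triads fall out of a major scale, plus a toy random-progression picker of the kind an ear-training app might use:

```python
import random

# Major scale as semitone offsets from the tonic.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
NUMERALS = ["I", "ii", "iii", "IV", "V", "vi", "vii°"]

def diatonic_triad(degree, key_root=0):
    """Stack two thirds on a scale degree (1-7) to get that degree's triad."""
    idx = degree - 1
    pitches = [MAJOR_SCALE[(idx + step) % 7] + 12 * ((idx + step) // 7)
               for step in (0, 2, 4)]  # root, third, fifth
    return [key_root + p for p in pitches]

def triad_quality(triad):
    """Classify by the intervals above the root (in semitones)."""
    third = (triad[1] - triad[0]) % 12
    fifth = (triad[2] - triad[0]) % 12
    if fifth == 6:
        return "diminished"
    return "major" if third == 4 else "minor"

# In C major: I, IV, V come out major; ii, iii, vi minor; vii° diminished.
for degree, numeral in enumerate(NUMERALS, start=1):
    t = diatonic_triad(degree)
    print(numeral, [NOTE_NAMES[p % 12] for p in t], triad_quality(t))

# A toy quiz generator: a 4-chord progression ending either V -> I ("home")
# or V -> vi (the deceptive "almost home" move).
progression = [1, random.choice([2, 4, 6]), 5, random.choice([1, 6])]
print("name this:", [NUMERALS[d - 1] for d in progression])
```

The quality falls out of the interval math alone, which is the point: the Roman-numeral system is a compact abstraction over semitone patterns.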
Sounds interesting. Please do a "Show HN" post when your app is ready for it.
I'm also wondering if these chord progressions work the same way for all scales, or if, for example, the 'sad but I have closure'-type ending only sounds that way in major scales? From experimenting I think it only works for major scales, but I'm not sure :)
There are also desktop apps and DAWs that have chord intelligence built in (e.g. Cubase and Reason do).
Some people have a great ear for music and can write solid songs without formal training in music. Other folks come at music from the more theoretical side, although usually with a lot of implicit knowledge of and experience with music as well.
For most people who are not formally trained in music, their songs can be improved upon on a technical level by someone who has deeper theoretical knowledge (learned either explicitly or implicitly).
For a good discussion of this, check out Tim Ferriss' podcast interview with Derek Sivers. Derek talked about how he had learned a lot about music implicitly. In one summer, a teacher of his formalized that knowledge so efficiently that Derek was able to test out of a lot of classes (1.5 years' worth?) once he got to Berklee College of Music.
Composers classically trained this way tend (!) to have an easier time writing melodies, harmonies, and progressions in a consistent manner, i.e. without having to wait for "inspiration to strike". The composer, of course, still needs to develop an emotional connection in the music, but the point is that it can be, and routinely is, taught.
The most successful tunes I made were more or less "discovered" from incrementally experimenting in the DAW, and not from any kind of original plan or idea. Maybe I'm just not a musician! (I'm an indie game dev who started making my own tunes for my games)
From a remembering-the-tune perspective, I have the same issues, but I think it's more related to not applying musical vocabulary and listening skills in the same way. You remember poetry or a paragraph of text because you remember the ideas and how to go from one to the other. If you are a musician and have something in your head and start thinking along the lines of "this is using a Lydian mode, the progression is ii IV V I, then it modulates to the relative minor and switches to Dorian; the theme goes down in thirds for two bars, then stays on the chord root for one and moves to the dominant 7th", you are going to remember it a lot more easily than by remembering the melody itself.
It would be like comparing how easily you can remember poetry in English vs poetry in, say, Russian, where you only have the "sounds of the words" in your head to remember, but you don't have the syntax or the meanings to help you as well.
The first approach has a sense of creative wonder to it, where you're being guided by an outsider. As much fun as that is, it is very limiting, and I suspect most people abandon that approach as their skill improves.
Writers keep pens and notebooks by their bed so that if they wake up in the middle of the night they can start writing right away. Or they keep tape recorders. Anything works as long as it's immediately available. The iPhone has a "Music Memos" app, and I'm sure there's something similar for Android. That's what I use.
Learning music theory and how to write music properly can come later. As long as you can sing, whistle, or hum a tune, you can record it.
Switching from a DAW to a mostly-hardware setup helped with this, as it's easier to "play" with knobs/sliders/keys/pads than virtual objects accessed via mouse/keyboard. Once you get things wired up, it's pretty straightforward: play around, find something you like, track it in, build more stuff over it.
Ever since making this switch, I've found that the parts I used to practice/enjoy (like slicing and manipulating samples, for instance) feel much more tedious.
Another benefit is that it's easier to make mistakes, which often have more interesting results than the thing you originally intended. My guess is that this is because mistakes violate your internal "patterns" and force you to think outside of your normal "music creation" schema, resulting in a more creative/unique outcome.
I've also tried to switch to "totally live" recording (i.e. minimal sequencing beyond loops and patterns, all automation and non-repeating parts done on the fly), and that's a bit more challenging, because you have to redo everything if you, say, screw up a little solo bit.
That's where music theory pays off. Learning to name chords, scales and arpeggios gives your brain a framework to reason about and remember musical ideas. It allows you to break the music into a more concise abstract representation, rather than holding it in your head as sound. If you understand the structure of music, it's far easier to make connections between different pieces of music.
Do you have much formal knowledge of music theory? If not, that might help.
When you "get a tune in your head", if you can describe it to yourself in abstraction, it will probably be easier to remember (or even just write down).
Check out this page on 12-bar blues for some examples of easy music notation. Similar types of notation and/or terms exist for different parts of a song.
I'm starting to hit its limits for my workflow, though. One of the really nice things about how easy it's getting to write software these days is that I can now fire up, say, a Swift playground, and after getting past the fiddly basics of "how to record and loop audio buffers" with AudioKit, there are very few limits on what kind of idiosyncratic workflow tool I can design for myself. The UI looks and acts how I want it to, and since over the years I've trained myself to act like a human synthesizer, I can compose a whole song without even worrying about having an instrument nearby.
Loopy - Multitrack audio looping with very simple and expressive control https://itunes.apple.com/us/app/loopy/id300257824?mt=8
The "can" is theoretical. This is my next big hobby project, and I'm still in the fiddly phase.
If I have the beginnings of a song in my head, or I have been humming to myself, sometimes I just record the parts I have as vocals - humming or full-on beatboxing the bass/strings/lead/beats separately and as close as I can make them to my head-song (including filters with my mouth)- and then replace as I go, figuring out how to achieve the sounds that were in my head.
She hates music theory and trying to use her left brain for art. I'll say "oh, that's in F" and she gets mad, so it's easier to just let her record it than to try to notate it.
You can understand a musical idea as a kind of memory impression, an echo that you can play back in your head, and also as a pattern of pitches and rhythmic structures. Having two reference points, one sensory and one abstract/mathematical, is very useful.
I believe the same is true with song writing, in a sense. You're still applying some parts of music theory, but most by-ear learners like ourselves simply grasp the concepts and have internalized them naturally, without needing to be taught. Music is little more than patterns at the end of the day, and our brains are very good at recognizing patterns. What you and I know intuitively, others can learn through training and repetition. Both approaches are valid, and yield interesting (and often different) observations.
I went through music theory classes during my brief adventure as a liberal arts major in college. I felt like I already "knew" the material in a way I couldn't quite put my finger on. It was like I was finally understanding what my brain had been doing all these years. I recommend it if you haven't yet had the experience.
People have studied music and composition since at least ancient Babylon, so, well, yes?
>I always thought it was just some natural ability that people have.
With natural ability you can sing some melodies. But to learn to play an instrument and add chords to a melody, you need to study, even if you teach yourself by ear (as many folk musicians did). One can have a natural feel for creating a song's melody, but nobody just starts writing songs in full form from "natural ability" alone.
>For as long I can remember if somebody told me to write a song I would just spit it out after a while.
What would that mean? You'd write a song on the guitar, for example? If so, then you already know the chords, if not all of the theory, so how complex is your song? Just bare-bones songwriting (country/folk style)? Can you take it further? Can you write the parts for musicians to play on your song? Can you write different genres on spec?
There are more things in making music/songs than "spitting out" some melody.
That doesn't mean that those subjects aren't covered in detail in textbooks and university courses, or that people cannot learn how to do it.
There are other people who can't make heads or tails out of a keyboard, compose a tune in their head, or understand chordal progressions, but nevertheless compose music in layers and still do extraordinary work. They find what they like by playing with notes on the screen. Joel Zimmerman, a.k.a. Deadmau5, is an example of this.
I am an example of the former, with natural ability, bolstered by training in music theory. But I still use a layered approach when I am composing, generally starting with a beat or bassline, playing with melodic progressions in snippets, and eventually moving into a traditional composition process when I have something started that I like. Ableton makes this process extremely easy and productive.
But I think melodically and tend to do a lot of counterpoint. Getting the chords out of my head and onto the screen is often the last thing I do. I don't know how well his approach would work with counterpoint, since counterpoint often creates and resolves dissonance using passing tones in double time.
However, there's another part of making music which is not covered here at all: the actual engineering of sounds. Think of a sound in your head and try to recreate it digitally: it'll involve sampling and synthesis, there are tons of filters and sound manipulations to go through, and they all go by different names and have different purposes. It's a staggering amount of arcane knowledge.
Where is the learning material on how to do this without experimenting endlessly or looking up everything you see? I want a reverse dictionary of sorts, where I hear a transformation of a sound and I learn what processing it took to get there in a DAW. This would be incredibly useful to learn from.
What I found is that as your music-making experience unfolds, you start amassing these little tricks here and there, and they're only yours, usually tied to your stack of tools and the way you think. That makes them extremely hard to replicate and also very personal. IMHO that's why it's so difficult to actually pass sound-sculpting knowledge on to others, and why (besides the odd YouTube tutorial on how to make a specific sound, usually targeted at a specific VST and explaining which knobs to turn) we won't find much general sound-sculpting learning material online. The knowledge is available if you gather it from forums and the like, but it is still pretty much a personal experience.
Answering your question: as time passed, the endless experimenting diminished and I got a proper sense of what does what, and after 5 years of making music I'm better able to pinpoint what I need to fiddle with to transform the sound the way I want/imagine it in my head.
I'm still not quite there yet, but if I can offer one piece of advice, it's this: don't shun the "endlessly experimenting to find a sound" thing, because that's the best way to grasp the tools. Over time you'll be able to get there faster, but it's a necessity.
This is how much I evolved, without even noticing, just by making track after track:
Sep 07 / 2012 http://codegrub.org/flipbit/musicmaking/equal02.mp3 cringe
Mar 25 / 2017 http://codegrub.org/flipbit/tracks/flipbit03%20-%20Twothousa...
I've been building up a bit of an epic studio over the past few years after being in-the-box for years. The hands-on nature of real synths is so much more intuitive than VSTs, IMHO.
Sir, you have already reached it: it is fucking epic, wow! Congratulations, it must be really fun being in that room, and it must be difficult getting out of it hehehe.
I want to get more into the hardware side of music making, but being cost-efficient is paramount to getting up and running in the cheapest way possible, especially since (in my case) this is a hobby I consider myself just starting out in. If I have some cash to invest, it goes toward whatever will give me the most return (whatever will enable me to study the most). In my experience that meant DAW software (Renoise), MIDI keys (Axiom 25), an interface (Yamaha AG06), and a pair of monitors (Yamaha HS8s). Now that I've got the basic kit sorted out, it's time to get some hardware.
What would you suggest? I've been eyeballing a KORG MS-20 mini but I don't know...
Indeed it is!
Monitoring and room acoustics are definitely the very first thing to focus on. It was something I neglected for far too long. If you can't hear what's going on it doesn't matter how much gear you've got.
My favourite hands-on synth is the Roland Juno 106: it's so god damn simple to use, everything is there, and so tweakable. They seem to have gone back up in price, but I picked up a pristine one for £600 off eBay. Obviously you need to be careful with older gear, and definitely try before you buy to make sure the thing isn't falling apart.
For mono synths my favourite is the Moog Sub 37: it's knob central and sounds amazing, as all Moogs do. Although I've been considering replacing it with the simpler (but more classic-sounding) Model D, which has just been reissued.
The best modern analogue synth I have is the DSI OB-6. Although we're getting into the expensive end of the market here, I reckon it's a future classic; these things will hold their value very well. It's also got all the knobs and controls you'll need, but with slightly different filters to most other synth manufacturers', which makes for a good contrast.
The Korg MS-20 would definitely be a good place to start (I haven't got one myself, but many friends have, and they rate them highly). The fact that it has all the knobs on the front for every component of the synth, plus the patchbay, makes it perfect for experimentation.
You'll never regret getting an analogue synth, the sound just dwarfs what VSTs do imho. They're _alive_ in a way that you just don't hear from VSTs.
It's also interesting how different analogue compressors and EQs sound compared to VSTs. There's a rawness and sexiness that I have yet to achieve in-the-box (not saying it's impossible, just I'm too lazy to spend ages trying to achieve the sound I can get from hardware by simply switching it on).
> making but being cost efficient is paramount to getting up and running in the cheapest way possible
I have the Chandler Curve Bender EQ, which is based on the EMI Abbey Road desk that was used to record Beatles and Pink Floyd albums. It is super expensive (£5000+), but as soon as I heard what it could do I just needed it in my life. I call the on/off switch on the front of it the "it's just better" switch, because as soon as I press it the sound in my studio turns 3D and everything is good in the world. I have the plugin version of it (UAD), which is very good, probably the best VST EQ I've heard, but it's not a patch on the real gear and doesn't invoke that emotional feeling.
The reason I'm saying this is that yeah this stuff is expensive, some of it super expensive, but if you pick up one piece of gear a year and learn it inside out you'll be in a great place - creating awesome sounds quicker than you ever could before in-the-box. Most people I know with killer studios took a decade to get there.
Here are the channels where you can listen to more of my stuff, and by all means please help me get better by commenting and giving feedback if you can. If you make music as well, I will gladly return your energy and time by commenting and giving feedback. :)
Also, I usually participate in the listening/feedback threads on reddit's /r/edmproduction; you'll find me there as well, commenting on everyone's tracks ;)
The only problem with the last part of your request is that even if you watch people design sounds for a couple of hours, you might find that when you try to replicate it somewhere else it doesn't sound right. This is partially because every synth/softsynth is different and will produce different sounds and have different parameters. It can be infuriating to follow a tutorial on how to produce that perfect "Blade Runner Blues" synth and come out with something that sounds totally flat and bad.
To make matters worse, there are apparently 0 good tutorials on the subject - I just googled for 15 minutes to no avail. The two below cover some of it but I personally can't bear listening to the people who make these videos.
Of course, finding the right waveforms, filters, and envelopes required to get to a particular pattern of sines is still the challenge, but having that understanding of the medium underlying it all makes experimentation that much more productive (and fun).
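As a tiny illustration of that "pattern of sines" point (this is generic additive synthesis, not any particular patch): a sawtooth, the classic raw material for brassy analog sounds, is just a stack of sine partials with the k-th harmonic at amplitude 1/k. A minimal sketch in pure Python:

```python
import math

def saw_additive(phase, n_partials):
    """Additive synthesis: approximate a sawtooth by summing sine partials,
    with the k-th harmonic at amplitude 1/k (the sawtooth's Fourier series)."""
    return (2 / math.pi) * sum(
        math.sin(2 * math.pi * k * phase) / k for k in range(1, n_partials + 1)
    )

def ideal_saw(phase):
    """The series above converges to the descending sawtooth 1 - 2t on (0, 1)."""
    return 1.0 - 2.0 * (phase % 1.0)

# More partials = brighter sound, closer to the target waveform.
for n in (1, 4, 16, 64):
    print(n, "partials:", round(saw_additive(0.25, n), 4),
          "target:", ideal_saw(0.25))
```

Subtractive synthesis runs this in reverse: start with the harmonically rich wave and filter partials away, which is why knowing the spectrum of the target sound makes the experimentation more productive.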
For instance, the "Blade Runner Blues" patch as I understand it is actually one of the brass presets on the Yamaha CS-80. (Bad recording but here: https://www.firsthomebank.com/personal-banking/deposit-produ...) The CS-80 has a pretty unique architecture for a polyphonic analog. (http://www.cs80.com/tour.html) To get a patch exactly right would require replicating layout, filter architecture and structure, etc.
Knowing basic synthesis, however, can get you pretty close. I have a patch on my Alesis Andromeda (which has some CS-80-type elements such as a ribbon controller, dual resonant filters, and an unfiltered sine that goes to the post-filter mix) that someone in a user community made, and it came out decently well. I was also able to Google a book page that gives a good overview of recreating it on other synths. (https://books.google.com/books?id=Jz1JMnZNO88C&pg=PA74&lpg=P...)
Now, to really get the Vangelis Blade Runner effect, you have to be able to play a synthesizer expressively. This is unfortunately tougher on most synths than on the CS-80, thanks to the CS-80's polyphonic aftertouch, which most synths lack. That being said, there are other techniques people can use. I understand that Vangelis used pedals to manipulate filter and volume, and that is something that can be done on many synths that I don't see a lot of people taking advantage of. Don't discount playing technique when it comes to the art of sound design, in other words.
I will say that I think the "power-law" nature of that is not dissimilar to being a primary sound transduction artist. You don't get a large number of people becoming celebrities through tutorials, or through disseminating free plugins.
And yeah, I do mean to expand upon this: I got a likely domain for it just yesterday. The trick there is that you need to be interdisciplinary enough to produce a really wide range of content, which by definition a newbie couldn't possibly process. I can go from "slew rates in op-amps in boutique guitar stompboxes" to "exploiting unusual interpretations of the circle of fifths" (did you know the Four Chord Song can be read as an atomically contained minimum-area space in an extended diagram of the circle of fifths?), but a newbie wouldn't cover that range.
There are no secret weapons, just secret masteries: by that, I mean 'stuff that's sensible and obvious, but to the contextless outsider seems like black magic coming out of nowhere'. Any sufficiently deep context seems like magic to someone who has no idea of the scope of that context.
if you're trying to make your rock band sound more like led zeppelin there is a fairly fixed set of tools and instructions (albeit futile, ultimately)
if you are imagining a pure sound in your head that is not straightforwardly produced by an instrument, then it gets a lot more complicated, and there are countless routes to the same goal. the experimenting is the fun part though!
For the longer route this is a classic http://www.soundonsound.com/techniques/synth-secrets
I mean, look at the interface for Serum, probably the best synth on the market right now:
It looks like an airplane's cockpit.
Sound design is a whole other part of music. Most amateur musicians don't even bother with it because it is way too technical to master. They just use presets.
I personally hate it, but if you have a technical bent, you might enjoy it
If you think Serum looks complex, take a look at Zebra 2.
I mean, conventional music notation represents tones on five lines, each capable of holding a "note" (is that the right word?) on a line as well as between lines, possibly pitched down or up, respectively, by flats and sharps (depending on the tune etc.).
Since Western music has 12 half-tone steps per octave (an octave being an interval wherein the frequency is doubled, which makes pitch a logarithmic scale, so compromises have to be made when tuning individual notes across octaves), this gives a basic mismatch between the notation and, e.g., the conventional use of chords. A consequence is that, for example, with the treble clef, you find C' in the second space from the top, between the lines, and thus at a very different place visually than the C one octave below, which sits on, rather than between, an additional ledger line below the bottom-most regular line.
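The logarithmic compromise mentioned above takes only a couple of lines of Python to see: equal temperament makes every octave an exact doubling, at the cost of slightly mistuning intervals like the fifth relative to their just ratios.

```python
A4 = 440.0  # reference pitch

def equal_tempered_freq(semitones_from_a4):
    """12-tone equal temperament: each half-step multiplies pitch by 2**(1/12)."""
    return A4 * 2 ** (semitones_from_a4 / 12)

print(equal_tempered_freq(12))      # one octave up: exactly 880.0
print(equal_tempered_freq(-12))     # one octave down: exactly 220.0
# A tempered fifth (7 half-steps) only approximates the just 3:2 ratio:
print(2 ** (7 / 12))                # ~1.4983, vs. 1.5 exactly
```

That small error per fifth is the price paid so that every key sounds equally in (or out of) tune.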
I for one know that my dyslexia when it comes to musical notation (e.g. not recognizing notes fast enough to play from the sheet) has kept me from becoming proficient on the piano (well, that, and my laziness).
You're not alone, this is a common reaction to music notation by engineers; a lot of people have wondered the same thing, even here on HN. For example https://news.ycombinator.com/item?id=12528144 https://news.ycombinator.com/item?id=12085844
I see some great responses, but I wanted to add that you have to keep in mind that tons of people have actually tried to make a better system, and nobody has succeeded. That should give you enough pause to ask why and consider the possibility that the system we have is really good in a way that you haven't recognized yet.
I think the problem is that difficult to learn and bad are easily confused. It is difficult to learn.
Also keep in mind that music notation has undergone many iterations; it represents developments over hundreds and hundreds of years and covers every instrument under the sun. The breadth of what it has done throughout history, and what it can do, might be hard to see.
I think that this is the incorrect way of looking at it. I suspect it is less that the traditional notation system is highly evolved and effective, and more that getting a critical mass of musicians to transition/relearn/teach/translate into a newer system is incredibly difficult.
For instance, while Imperial units aren't without some advantage, they are pretty generally inferior to the Metric system. But the US hasn't really switched because it requires a significant level of coordination and control that simply isn't easy to access. And getting musicians to learn and teach a brand new, objectively better system would be much much harder.
I have thought a lot about the problem (I worked as a professional bassoon player for a very long time), and I can't say I have had many good ideas. There are some ideas for simplified music notation (with different shapes for flats and sharps) which work _very_ well for making sight reading easier. Until they don't: such notation can't express enharmonics (different ways of writing the same note), which makes tonality analysis harder and can actually hamper readability, since most people who are fluent in reading music usually "stay in key" when reading.
A quick Google search gave me this: http://musicnotation.org/ and I can't say I am very impressed by anything I see there. But as you'll notice, most systems are organized around lines. I don't think that is because people lack imagination, but because it is a pretty good way to write music.
If you drop that requirement (and then assume digital storage) you could have 1. an underlying canonical format that has "all the information" but which is never presented to the performer, nor to the composer; and 2. a number of views that expose various dimensions of the composition. Like orthographic projections of a model in CAD software.
Presuming an interactive display (touchscreen, etc.) you could switch between these views at will; but even for printed sheet music, you could just isolate one measure at a time and then display several "stacked" views of that measure per page.
(Basically, picture widely-spaced, annotated sheet music, but where the annotations are themselves in the form of more musical notation, rather than words, appearing in additional sub-staffs attached to the measure.)
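A minimal sketch of that canonical-format-plus-views idea in Python (every name here is hypothetical; the point is just a single complete record with projections over it):

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """One entry in a hypothetical canonical format: 'all the information'."""
    onset: float      # beats from the start of the measure
    duration: float   # in beats
    pitch: int        # MIDI note number
    velocity: int     # dynamics, 0-127
    bowing: str       # performance detail, e.g. "up" or "down"

measure = [
    NoteEvent(0.0, 1.0, 60, 80, "down"),
    NoteEvent(1.0, 1.0, 64, 70, "up"),
    NoteEvent(2.0, 2.0, 67, 90, "down"),
]

# Each "view" projects out one dimension, like an orthographic projection in CAD.
pitch_view = [n.pitch for n in measure]
rhythm_view = [(n.onset, n.duration) for n in measure]
bowing_view = [n.bowing for n in measure]
print(pitch_view)   # [60, 64, 67]
```

A performer's display would render whichever stack of views suits the task, while the canonical list stays untouched underneath.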
I don't believe this to be true. (Modern) guitar music is most often written in tab, often without accompanying staff notation. Also, staff notation is not lossless; musicians will interpret the music differently. For example, with violin, while some instruction is given on bowing, it is almost never complete, and musicians will find different ways to fit the bowing to the rhythm. This can make a huge difference to the overall tone, as (most simply) an up bow sounds distinctly different from a down bow.
Conductors can write notes about certain parts that can be accessed by musicians. Opera musicians (where different people play the same music every night) can have their own personal notes.
Most exciting, of course, is that everyone has instant score access. That removes a shit-tonne of time wasted during rehearsals.
Those are just traditional use cases. I'm excited to see what will come. I don't know if music as it is practiced today can be "expanded" in any meaningful way, but only time will tell.
I've got a plan on the backburner to do something like this using Ohm https://ohmlang.github.io/
If there's a viable alternative to music notation that you know of and is superior to what we know as standard western notation, feel free to share.
Your choice of example is interesting, considering metric has won, and the US is switching slowly.
But there is no incorrect way of looking at it, music is an art. Standard notation is highly evolved and effective, it has been iterated on for millennia. Getting a critical mass of musicians to learn a newer system would be incredibly difficult. Both are true, and you can't compare them and say that one is "more", that's flatly not true in any meaningful sense.
At its core, musical notation is succinct: a mixture of logic and unique symbols. Note markers are isomorphic to pitch. Rhythms subdivide with vertical lines. Special symbols and brief phrases denote beginnings, ends and loop points. (They're not usually in English) Geometric figures indicate volume and speed changes.
A competing system in my purview is "tracker" notation. It's vertical and generally only used on machines, but it's hand-writable. It looks like:
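Something along these lines, sketched from memory (formats vary by tracker, but a pattern typically runs top to bottom, one row per time step, with note, instrument number, and volume/effect columns per channel):

```
row | channel 1  | channel 2
 00 | C-4 01 40  | --- .. ..
 01 | --- .. ..  | E-4 02 30
 02 | G-4 01 40  | --- .. ..
 03 | OFF .. ..  | C-5 02 38
```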
I think a valid comparison is the regular alphabet. It is, after all, a coding system for language in the same way that notation is a coding system for music. Most of the problems with that coding system (my pet peeve is English spelling) generally stem from conventions rather than from problems with the alphabet itself (Italian and German are much easier to spell correctly).
There might be some interesting alternatives (hangul!), but those systems come with their own share of problems and generally offer no big benefits. I actually believe that musical notation is a better fit for its task than our current coding system for language.
As a US citizen who is a metric fan and loves using it, in what way? The government did switch - in the 70s - according to its own statutes, it had to.
It has crept in innocuously in various places (and I find it hilarious), like in 2-liter bottles of soda, or in how computer processors are talked about in mm² die areas.
But the average American still uses imperial units religiously; whenever they approach a problem involving any unit of measurement, they default to imperial. Having a 14-year-old brother, I see no change in his education or habits to indicate a slow transition of mindshare. The government moved decades ago, but the people aren't moving at all.
I get the impression it is much like high school language classes - you learn it once early on, never practice it, and by the time you are a full adult you have completely forgotten it. I'm not sure how to improve the situation to actually get the people to start using international standards, because if you were to start trying to force it on the supply side people would just not buy metric tools and information because they forgot it back in primary school.
1. Make sure everyone is educated in metric
2. Change the easy things: the paper size the government uses, the units on food labels, the measures legal to use for sales of loose food or other goods, the units the government uses for all types of reporting. (Therefore if businesses want government contracts, they'll need to use metric.)
3. Change other standards, like residential construction, preferred fasteners, wire sizes. Where old measures are required for compatibility, write "24.5mm" in the standard. If the dimension could be changed to 25mm without any side effect, use that.
4. Change other things people see daily: I don't know if doctors use metric in the US, but I assume they communicate to patients in old units. Change the default, but accommodate older people. Change the road signs. Is anything left?
The UK is part way through 4, but has been stuck there for decades.
There's no reason a given school couldn't teach a "colloquial notation" first, with the "Lingua Franca" musical notation taught later on, for everyone in that given school. Then everyone who comes from that school would know that colloquial notation.
Consider: the "Chicago school" of Economics; "Rugby School" football; etc. These things start as colloquialisms, then spread to global awareness.
Music notation is more like a programming language. The score is like a program that you can read/interpret and play.
You say this pretty matter of factly, but I actually vehemently disagree. Many imperial measurements are better than their metric counterparts for day-to-day lay usage.
- Fahrenheit is a better scale than Celsius
- Inches, Feet, & Miles are very practical units. Centimeters, and Meters much less so.
- Pounds are smaller and offer better delineation than Kilograms.
- Liters are pretty similar to quarts, though I admit the various Imperial sub-units are annoying.
Sure, it's easier to convert between metric scales, but the number of times I actually do that?: approximately zero.
When is that EVER useful to the layperson?
There is an issue with "kilometer" being a complex word for everyday use (as compared to a mile) in the English language. That's more a linguistic issue than about the unit itself. Other languages solve that with a shorter colloquial name for the unit.
Of course the imperial units give a good opportunity for being funny, in ways like specifying speeds in furlongs per fortnight. But you can do the same in SI-derived units, like parsecs per picosecond.
Even in English people of a certain age can say "klicks" and be understood.
Other languages often say just letters "k" or "km".
Really? Do you know how much easier it is to compute surfaces and volumes in the metric system compared to imperial? Concrete example: figure out how much soil you need to buy to fill a box knowing L, W and H. In metric it is a 10-second process. In imperial I do not even know how you are supposed to do it. Does anybody even know how many quarts are in a cubic foot?
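A minimal sketch of the soil-box arithmetic (Python, hypothetical box dimensions of my own choosing). The metric path is a pure decimal shift; the customary path needs an arbitrary-looking constant (1 US liquid quart is defined as 57.75 cubic inches):

```python
# Metric: centimeters -> liters is just dividing by 1000.
length_cm, width_cm, height_cm = 120, 80, 30
liters = length_cm * width_cm * height_cm / 1000  # 1 L = 1000 cm^3
print(liters)  # 288.0

# US customary: inches -> quarts needs a memorized conversion factor.
CUBIC_INCHES_PER_QUART = 57.75  # 1 US liquid quart = 57.75 in^3 exactly
length_in, width_in, height_in = 48, 32, 12
quarts = length_in * width_in * height_in / CUBIC_INCHES_PER_QUART
print(round(quarts, 1))  # 319.2
```

(Incidentally, a cubic foot works out to 1728 / 57.75 ≈ 29.9 quarts, which answers the question above and is exactly the kind of number nobody remembers.)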
They know it, after they do the conversion, via metric system.
(OK, nowadays you can just enter "1 quart to cubic feet" in Google. And the funnier ones you get at https://en.wikipedia.org/wiki/List_of_humorous_units_of_meas... )
In the metric system, converting between length, volume and weight is trivial and straightforward. This comes in handy whenever you need to measure out a precise amount of batter or liquid from containers marked in a different unit.
Replacing standard notation for all uses may be doomed to failure, but replacing standard notation for some particular use case (especially new use cases that weren't anticipated when standard notation settled into its current form) may be a very useful thing to do.
Computers also give us a few new options, such as displaying notation in a time-varying form, or using three dimensions, or notating the music in some universal language that isn't necessarily easy to read but that can be easily rendered in any desired notation.
Lattice notation for instance is something I really like, but I don't know how to represent it without some kind of animation.
Here's an example I stumbled across on Youtube a while back of the kind of thing I mean: https://www.youtube.com/watch?v=jA1C9VFqJKo
Lattices generalize to higher dimensions, which means they might be amenable to virtual reality or even some sort of human-brain interface that allows you to experience 4 or 5 spatial dimensions at the same time.
Isn't the most tolerable trade-off between mutually incompatible requirements another way of saying "best overall"?
Totally agreed there are useful local overrides of standard notation. Tablature is one example, and there are others. I wouldn't call those replacements for standard notation though. Both notations exist, both serve different purposes, neither is going away, there's no either-or question to be resolved.
The lattice videos are super interesting! Thanks for sharing that. I want to watch a few more and understand his layout choices -- I think I kinda get it, triads form triangles. These don't encode anything temporal though, so this is a visualization that helps understand harmony spatially, but is not a musical notation and can't encode a song, right?
I could have said that better. What I meant was that standard notation isn't better than every other system according to every metric we could use to compare such things.
Gary Garrett has more lattice demos on Youtube. Here's one that's an animation of an example in Harmonic Experience by W. A. Mathieu (which uses lattices extensively to explain harmony and is the best reference I know of for explaining how to understand them): https://www.youtube.com/watch?v=I49bj-X7fH0
A 3-5 lattice is a grid where one axis is fifths (powers of 3 in just intonation) and another axis is major thirds (powers of 5). Garrett implies a third axis for septimal flatted seventh (i.e. barbershop 7th) intervals. Since the grid is leaning to the right, the diagonals that lean the left are minor thirds. Powers of 2 (octaves) are usually ignored. Triangles that are flat on the bottom are major triads. Triangles that are flat on top are minor triads.
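The axis arithmetic above can be sketched directly: a lattice position is a power of 3 times a power of 5, octave-reduced by ignoring powers of 2. This is my own illustrative code (function names are made up), not from Garrett's videos:

```python
from fractions import Fraction

def octave_reduce(r):
    # Fold a just-intonation ratio into [1, 2) by powers of 2 (octaves ignored).
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

def lattice_ratio(fifths, thirds):
    # Position (fifths, thirds) on a 3-5 lattice -> octave-reduced ratio.
    return octave_reduce(Fraction(3) ** fifths * Fraction(5) ** thirds)

print(lattice_ratio(1, 0))   # 3/2  perfect fifth (one step right)
print(lattice_ratio(0, 1))   # 5/4  major third (one step up)
print(lattice_ratio(1, -1))  # 6/5  minor third (the leaning diagonal)
```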
There isn't an obvious way to encode a whole song onto a single lattice diagram in a way that could be printed on a page and still be readable. They seem to work pretty well as animations or as static illustrations to explain chord transitions, though.
This is totally true; tablature is better for beginning guitar players to learn to play specific songs on the guitar.
The only reason tablature doesn't supplant standard notation is that the metric under which it's superior is much narrower -- it's only for guitars, and only better than standard notation for beginners.
I don't think standard notation is necessarily the best possible, but I do think it happens to be the best overall, the best we've got today. And I'm not convinced it will ever become a choice, as opposed to standard notation evolving like it has in the past to incorporate new ideas.
Thanks for the explanation of the lattice layouts; I hadn't noticed the triangle orientation part, I only got as far as seeing that horizontal lines formed the circle of fifths. I can't tell what the plus and minus symbols mean, do you know? Usually those are used for diminished and augmented chords and not single notes, so is Bb- another name for A that is useful under the lattice system?
For instance, in just intonation 2 (the major second of the scale) has a frequency that makes a ratio of 9/8 relative to the tonic, but sometimes you might want a slightly flatter major second with a ratio of 10/9. So, that note is label 2- to distinguish it from the regular major second.
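A quick way to see how far apart those two major seconds actually are is to measure them in cents (1200 per octave). Their difference is the 81/80 syntonic comma; the sketch below is mine, just applying the standard cents formula:

```python
from math import log2

def cents(ratio):
    # Interval size in cents: 1200 cents per octave.
    return 1200 * log2(ratio)

print(round(cents(9 / 8), 2))    # 203.91  greater major second (plain 2)
print(round(cents(10 / 9), 2))   # 182.4   lesser major second (the 2-)
print(round(cents(81 / 80), 2))  # 21.51   syntonic comma, their difference
```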
Yes, some other instruments have their own specific notations & tablatures as well. These aren't replacements for standard notation though, and never will be. They have a place, and they are useful, but they aren't in competition with standard notation. Tablature has its disadvantages (https://en.wikipedia.org/wiki/Tablature#Disadvantages) but also the single biggest reason for standard notation -- group, band, ensemble & orchestral playing -- is something tablature can't help with at all.
Totally agreed that standard notation doesn't help with electronic sound reproduction, but I'd suggest that standard notation isn't for sound reproduction in the analog world either; that's not its purpose. Standard notation is the sequencer, not the synthesizer. You can use standard notation to encode songs in the electronic music world, but it's definitely not super convenient, hardly anyone does that. The analog version of trading setups and circuit diagrams is carving your violin using plans and specifications of a Stradivarius violin.
The QWERTY keyboard is something humanity has found a better solution for: people have developed better layouts like Dvorak, for example, yet the world keeps using QWERTY (not in my case).
I learned a long time ago that TCP is also not the best protocol; there are much better and faster ones, but people keep using the old TCP for the Internet...
I believe when something is already consolidated, it's expensive to change; sometimes it's not worthwhile to update all the consolidated knowledge/investment, even when better solutions exist.
The world updates consolidated solutions only when the gain is really worth it, and that's not the case for music notation.
I also agree with you that music notation could be easier, but I believe it doesn't get upgraded because the master musicians have mastered it, so they like the current notation, and they are the people with enough knowledge to create a better version. I believe there are other types of notation, but they would need to be used by master musicians, music schools, and universities to start a wave that could replace the current notation (which already works pretty well).
"the best-documented experiments, as well as recent ergonomic studies, suggest little or no advantage for the Dvorak keyboard."
"The trap constituted by an obsolete standard may be quite fragile. Because real-world situations present opportunities for agents to profit from changing to a superior standard, we cannot simply rely on an abstract model to conclude that an inferior standard has persisted. Such a claim demands empirical examination."
Musical notation is a vastly more complex system than keyboard layout, and I don't believe we have a Dvorak of music notation to even compare with. There are no contenders for musical notation that a large group of people believe are superior. So there's no reason to believe that inertia is keeping people from using another notation.
To go one step further, music notation is constantly changing, it has been evolving, adopting and incorporating the best ideas for thousands of years. What reason is there to not start with the assumption that it already took the best changes so far? I have no doubt that if superior ideas for notation develop in the next hundred years, that at the end of it, we'll still call the result 'standard music notation'.
The latter is my theory about music notation; that inertia is not even at issue yet because there are no serious alternatives.
And inertia might never be an issue, because music notation is a fluidly changing system. TCP and qwerty/Dvorak are static systems that don't ever change, so you can argue about which one's better. Music notation is changing and improving, so it's hard to suggest that people are resisting change, and hard to suggest that something better will supplant it, right?
I agree with your theory in general though, outside of the issue of music notation, and I think a lot of people do. It's just a matter of finding the right examples that clearly demonstrate it. And it would be really interesting to somehow quantify the amount that something needs to be better before people will adopt it. It's like static friction in physics -- it takes more force to get something started moving than it does to keep it moving.
I've always wondered about that. Why?
As a specific case, in electrical work, it's easy for me to specify "6 mil trace/space" attributes for a PC board design. Not so easy to say "0.1524 mm" or "152.4 microns." If I round my specification down to 0.1 mm, the resulting copper features will carry less current and cost more money. If I round it up to 0.2 mm, other physical and/or electrical requirements won't be met. So now I have to add at least one more sig fig, which is a pain in the neck for no obvious benefit.
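The awkwardness above comes straight from the exact definition 1 inch = 25.4 mm, so 1 mil = 0.0254 mm. A tiny sketch of my own (only the 6 mil figure comes from the comment; the helper name is made up):

```python
# Mil (one thousandth of an inch) to millimeters.
MM_PER_MIL = 0.0254  # exact: 1 in = 25.4 mm, so 1 mil = 0.0254 mm

def mil_to_mm(mil):
    return mil * MM_PER_MIL

print(round(mil_to_mm(6), 4))  # 0.1524 -- no round metric value nearby
```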
The first thing that looks a bit like modern notation is probably plainchant neumes, originating in the Catholic church around the 9th century:
The basic system we use today originates from about the 1600s or so, but has still evolved a lot.
There were tons of historical warts along the way that have largely dropped off - for instance, figured bass notation (https://en.wikipedia.org/wiki/Figured_bass) or the French violin clef (https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Fr...)
I got a reply there that the current system is only suitable for professional musicians, and that you'd need something like shape notes to reach mass musical literacy. Now I'm hopelessly biased as a music degree-holder, a semi-professional musician, and a Presbyterian to boot ;-), but this strikes me as setting the bar way too low. Given levels of overall literacy in the US (which were very different when shape notes were developed) I don't think it's that difficult to learn the notation itself – the difficulty I think is in mastering the music system.
If there's a piece in C, for example, in most traditional Western music you're unlikely to play off-key notes. So why take up valuable space for those when you can denote that unlikely event with a sharp/flat symbol?
Traditional music notation made no sense at all to me until I realized this.
Edit: For those that don't know, in most western music you're only going to use 8 out of the 12 possible notes most of the time. This is not universally true especially of modern non-pop music, but traditionally if you played off-key notes people thought you might summon evil spirits so it's easy to understand why things would be written down this way. Not only is it space efficient, but you wouldn't accidentally summon the devil. To summon the devil you have to really want to and write a flat or sharp in there.
You mean 7 notes. Traditional music notation and terminology are confused in many ways, one of which is a fencepost-counting error. As a result, octaves are actually seven notes apart in a scale, two major seconds make a major third (2+2=3?), and two octaves make a fifteenth (8+8=15!?).
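The fencepost error is easy to see if you put the two counting systems side by side. A small illustration of my own (the tables are just the standard interval names):

```python
# Semitone counts compose by ordinary addition...
SEMITONES = {"major second": 2, "major third": 4,
             "octave": 12, "fifteenth": 24}
# ...but traditional degree names count both endpoints (inclusively),
# so stacking two intervals "loses" one fencepost.
DEGREES = {"major second": 2, "major third": 3,
           "octave": 8, "fifteenth": 15}

assert SEMITONES["major second"] * 2 == SEMITONES["major third"]  # 2+2=4
assert SEMITONES["octave"] * 2 == SEMITONES["fifteenth"]          # 12+12=24

assert DEGREES["major second"] * 2 - 1 == DEGREES["major third"]  # "2+2=3"
assert DEGREES["octave"] * 2 - 1 == DEGREES["fifteenth"]          # "8+8=15"
print("degree names are off by one fencepost per stacked interval")
```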
Can't thank you enough because your characterization is the first one I've heard that adequately explains the foundation of the visual system as one of condensation. I've been shrugging my shoulders about this for many decades!
...Not to mention the even more difficult task of knowing what you want the piece to sound like in the first place. A novice playing a piece at 100% accuracy sounds nothing like a concert pianist playing the piece. There's a world of depth to music beyond just learning the right notes.
Here's an example: listen to this performance of Debussy's "Reflets dans l'eau" by Arturo Michelangeli, one of the greatest pianists of the 20th century:
And then listen to this student play it (she is still a high-skill player, just not world-class talent):
Before starting down that path, I would recommend familiarizing yourself with the wide range of music notations that already exist and continue to be used, and then the ridiculously varying plethora of failed alternative music notations that have been invented over the centuries, and why they failed to see wider adoption.
And, of course, it's fascinating to study the evolution of the existing "standard" music notation, and see the changes that have been adopted, and the ones that weren't. For all its apparent stasis, it has definitely evolved over the centuries, in response to the changing needs of musicians.
Some other reasons why musical notation prevails:
- There's a huge switching cost, as much of the world's written music is in some form of "western notation". Being able to read standard notation unlocks a huge wealth of knowledge from books, etc.
- Standard notation is one of the most flexible ways to create readable music, playable and easy to read across a wide variety of instruments and ranges (clefs, transposing score, etc).
- It's a common language, in the way that a programming language is. Some of the conventions may be confusing to outsiders (i.e. why is the term "puts" used for printing in Ruby? This seems normal to any Ruby hacker but is completely unintuitive to a layperson). Once these conventions are learned, they provide a common reference point. Like a lot of languages, it's far from perfect, but much like spoken language, more likely to evolve than be replaced.
- There's almost no motivation for anyone to replace standard notation. Notation isn't required for all forms of music (many great jazz and blues musicians don't read music), and for the forms of music where it is required, it's by far the quickest and most efficient way to communicate the information.
In summary, I think the question of "why can't we do better" is valid, but you could ask the same question about programming in C. There are good reasons to write C in 2017, and there are still good reasons to write musical notation.
It's definitely true that most jazz musicians can read passably, but my original point was that it's an aural tradition. No one learns to play jazz by reading notes off a page, whereas in western classical music it's an essential skill.
Tablature is much easier to read initially as it provides a one-to-one mapping between the visual representation and the physical location of the notes - i.e. 5 frets along on this string. However, from my experience there is a cap on the 'bandwidth' at which you can sight read it. It is just too hard to mentally parse a bunch of numbers on lines and turn them into notes when playing at speed. (For non-musicians, 'sight reading' means to read the notes and play fluently at the same time.)
Traditional sheet music has a steeper learning curve, however, I've found that reading this music becomes much more subconscious with practice and the bandwidth at which you can parse the notes is much higher. Also, it is much easier to notice patterns in sheet music - i.e. a major 7th chord in the key of the song is visually obvious no matter what the key.
To a first approximation:
Tablature is a _physical description_ of how a particular stringed instrument should be played, and the notes are a side effect of that. It is instrument specific and it doesn't contain much information about the musical details of the piece.
For example, tablature doesn't describe the key the piece is to be played in. To figure that out, you have to mentally translate the mechanical description into notes, and from there determine the key.
Standard notation is a _musical description_ of how a particular song should be played, and the physical act of playing is a side effect of that. It is not instrument specific, and it contains a lot of information about the musical details of the piece, but usually no information at all about how the instrument should be played. (There are a few minor exceptions.)
For example, standard notation tells you exactly the key the piece is in, but the player has to mentally translate the notes into the physical steps of getting that note out of the piece.
Basically, standard notation adds a layer of indirection from the music to the mechanical act of playing. Like many indirections, it can be hard to understand at first, but adds great power and flexibility that a direct system doesn't have.
When you become adept with musical notation, this is one of the primary hindrances of tab.
I've noticed this as well, and my team has developed a notation based on key/scale and a new user interface for the guitar so that experienced players and beginners can sight read on their first attempt at a new song.
We reduced the cognitive load of sight reading music. Not only that, we then backfill technique like chord fingering, introducing traditional chords one at a time. Here is a series of three videos of what I'm talking about: https://www.youtube.com/watch?v=KXpTGIzBONU&list=PLvoNIaPTga...
Sorry but this is wrong IMO. You've been reading sheet music your entire life, but you've only been reading tab for the past few years.
I've been reading tab for 10 years. I think in tab. There are a bunch of songs that I can't be bothered learning (sultans of swing, metallica songs+solos, oasis songs.. you get the idea) because I don't like them enough but are fun to play along with, and I do so with Guitar Pro playing the tab at full speed. It's basically like rocksmith/guitar hero but in "real life" mode.
Tab is great for messing around, beginners or simple songs. I can't even imagine trying to learn to play complex jazz or classical music using tab. Sheet music also guides you right into learning scales and intervals.
Tab is great for playing guitar hero but, even on a real guitar, it's like pressing buttons. It doesn't help you learn much at all. I'll never go back to using tab even though I can visualize it easily in my head.
I've tried learning guitar a few times and when I've asked accomplished players how they get by with tabs, it's been explained as tab music establishes a minimal framework that you play within. It's a lossy compression scheme (and traditional sheet music is less lossy). Would you agree with that?
I was a tab-reading guitar player for years, then learned classical notation. Classical notation is undoubtedly faster to parse. For whatever reason, there seems to be a much more direct connection between your eyes and hands when you're reading dots.
It seems to be much more amenable to chunking - you stop seeing individual notes and start seeing chords and scale fragments. Tab is a meaningful and direct representation of the physical parameters of the guitar fretboard, which I think is a shortcoming; classical notation represents information in a way that more directly corresponds with musical theory.
Tab is lossy, but it discards some very important information. Unlike classical notation, it has no native means of indicating note length and can't accurately represent rhythmic subdivisions. If a piece of music has any real rhythmic complexity, tab alone is insufficient.
Jazz musicians typically learn the changes (chords and melody line) to tunes and improvise around that from a sophisticated understanding of harmony, a variation on the tab approach.
Sight reading music, especially for guitarists, is more akin to tightrope walking in my opinion, but typically a combination of tablature, staves and chord changes gets me to where I need to be.
Tab is LESS lossy than traditional sheet music because it encodes the string as well as the pitch.
A given note could be played in as many as 5 different places, and they will ALL sound different. An open A (5th string) will sound different than the same A played on the low E string, 5th fret.
(This is completely unrelated to the woeful quality of most of the tab floating around on the net. You can write down a piss-poor transcription as sheet music too.)
Fingering is a problem that mostly goes away as you gain an innate sense of what sounds good versus economy of movement and the ability to mute. For music written on guitar, it's usually relatively easy to tell what position works best.
Every guitar is different, too. String gauges, pickups, resonant notes, action height and intonation all play into it, and most of those are subject to personal preferences.
But it doesn't encode the note type, right? All the tab books I've bought don't differentiate between whole notes, quarter notes, etc... So that seems pretty lossy. Look at any guitar fake book for an example.
Plus, I never looked at tablature as a literal transcription. That's why I would describe it as a more of a framework. Like you say, a note can be played in a lot of different places. Once you internalize the fretboard logic, when you see an A in the tab, you play the one you think will sound right or is physically accessible.
Once you get past a cursory "eff this" reaction, you start to see how downright brilliant notation is.
The vast majority of music focuses on 7 notes at a time. If you alter a key signature, you are playing 7 other (non-distinct) notes. Music notation encapsulates this concept very well.
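The 7-of-12 selection that a key signature encodes follows a fixed step pattern (whole, whole, half, whole, whole, whole, half). A quick sketch of my own, using sharps-only spellings for simplicity:

```python
# A major key picks 7 of the 12 chromatic notes via the W W H W W W H
# pattern (whole step = 2 semitones, half step = 1).
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
STEPS = [2, 2, 1, 2, 2, 2, 1]  # major-scale step pattern

def major_scale(root):
    i = CHROMATIC.index(root)
    notes = []
    for step in STEPS:  # seven steps -> seven distinct notes
        notes.append(CHROMATIC[i % 12])
        i += step
    return notes

print(major_scale("C"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(major_scale("G"))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#']
```

Changing the key signature just slides the same pattern to a different starting note, which is exactly what the staff encodes compactly.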
That's only one example, but telling musicians their notation sucks and needs to be fixed because it's hard for a non-musician is akin to a musician telling a programmer that Python and Linux needs to be fixed because it doesn't look like a violin.
In this it reminds me of vim.
Generally it can be said that some have been better in a specific use case (klavar notation was pretty big in the Netherlands among those who didn't know regular notation), but they fall apart pretty quickly when you try to write Liszt or Rachmaninov in it.
I might be a bit rigid (I have played bassoon professionally for most of my adult life), but I can't really see how it can be made much better and still keep the same utility.
While chords might be not optimal today, we can still express things like enharmonics easily (which, at least for me, is something that can make sight reading easier as it allows for the notes to stay "in key").
As with the spoken word, music has an advanced coding system. Both coding systems are flawed in their own way (as someone whose mother tongue isn't English, I have a hard time spelling just about anything), but they have also stood the test of time.
Consider this: In this system, your most complex Classical scores for an entire orchestra are written, and present day trained composers continue to work efficiently in it. That tells you about its expressive power. It is in fact not stupid, but very well tuned to a lot of music theory. Other than complex timbre manipulation (and even that), you can do probably everything you want to accomplish with just software that does nothing but notation.
Instead, what most music software lacks is in the organization department. The organization of non-linear ideas, their programmatic (as in music) occurrence, the automation of repetitive tasks, and the completion of obvious intent. Tracks and loops are probably not the right view of musical structure, at least far from a _complete_ view. There needs to be a better bridge between musical phrases and ideas at the local level (for which musical notation is perfectly suited) and the organizational structure of a complex piece at the macro level (for which tools are very lacking). There also needs to be a better bridge between some conception of events (for which musical notation is slightly ill suited, being restricted to notes) and the microscopic world of timbres, effects, and transformations.
Until music software makers recognize that what they should be helping with is neither engraving, nor mixing console simulation, but a non-linear creative task, music software will continue to suck.
It's not as expressive but is far easier to get started with.
First, you have tabs, which describe the physical position of the notes on the instrument.
Then, we have root / chord type notation, in which we describe the starting position and shape of the notes on the instrument, and the musician must translate that information to the physical position of the notes, on the fly.
What is important about this second stage is that the musician has a pretty good grasp on how to play, and can usually sight read a piece and get a pretty decent version of it just by tracking chords, or in the case of the piano, just chords and the melody on the other hand, or a small pattern.
Finally, we come to roman numeral notation, which describes the chords based on their position relative to the root note of the key, not the chord. This is a powerful abstraction. It provides incredible insight into the relationships between music, notes, chords, and progressions of chords at a level divorced from the 'root' of that key. A 9th played over a minor 7th chord is going to give you a very similar sound in any key. This is a great skill for songwriters and composers, who need to have a strong working intuition about things like what chord will sound good in this progression, or what notes we want to appear in our melody (which is related to the chords beneath it).
Have you ever used Guitar Pro or Tux Guitar? It can be INSANELY expressive. Grab a MIDI of Van Halen's "Jump" (IIRC The best one was about 76kB) and import it into either of those. Guitar Pro will be noticeably more expressive vs TuxGuitar. Inside of that MIDI, the solo is 100% dead-on note-for-harmonic-for-slide-for-hammer. Both programs output the exact same tablature. You will get the solo perfect.
Most people that have read tablature haven't read the guitar-specialized notation found in Guitar Pro or TuxGuitar. It's far more instructive.
This is entirely incorrect. You can get velocities (mezzo-forte, mezzo-piano, etc.), and they are shown if you hover over the note itself in Guitar Pro or TuxGuitar. Sure they change the granularity of it, but the general range remains the same and for all practical purposes sounds the same if played properly.
I guess it's just inertia.
Its popularity also has to do with what sounds pleasing to the ear (and brain) on a biological level.
A number of people have come up with alternative scales and notations systems over the years, but none of them have really stuck for one reason or another. Nonetheless, they are pretty fun to read about.
Here's the whole history of notation: https://en.wikipedia.org/wiki/Musical_notation
Also, if you aren't familiar with John Cage, you should check him out. His music and writing deals with a lot of the stuff you just brought up, and it's also a really great jumping off point to find other interesting artists and musicians.
Indeterminacy, a work he did with David Tudor is a great starting point https://www.youtube.com/watch?v=_lOMHUrgM_s
First of all, Western music has complex structure both horizontally and vertically. This makes it rather difficult to encode and visualize, right at the outset. You need some sort of matrix visualization, like a staff or piano roll, to capture all of the nuance.
What makes the staff so useful is that it also captures the tonal aspects of music in compact way -- those that relate to the key the music is written in. Every triad in the same inversion looks the same in every key. A triad is three consecutive lines or spaces. And then deviations from the standard triad for that tonal function are marked with accidentals.
This turns out to be extremely useful for performers, because you learn to play an instrument by learning to play in all the keys, rather than learning what the 12 notes are and playing note by note. I realized this when taking piano class and doing exercises where we'd transpose to another key while sightreading in the original key.
There are other notation systems that have been as successful as the staff, but they tend to be specific to particular instruments or styles. For example, most guitarists find tablature much easier to play than standard notation, especially if the tablature is augmented with note durations and rests.
Also, although I've become a true believer when it comes to the staff, I have less rationale for why the traditional clef system has stuck around. It seems like something that is more regular as you go up and down the scale would be more helpful. There are systems that use things like note shapes or colors to help mark the note name. I guess we just haven't found a standard.
My biggest objection to conventional notation is that it gives a profoundly misleading picture of how music and harmony really work. It defines one reference key (C Major/A Minor) with a certain pattern of steps and gaps, starting on a certain note. Then for all of the other keys you add more and more sharps or flats until you get into ridiculous keys where all 7 notes are modified. The truth is that there's just one evenly spaced set of 12 tones, and all it means to be "in a key" is that you've picked a certain note out of the 12 to start the pattern on. There's nothing special about C. We could have chosen the key we call F# as the reference key and named it C, and everything would work the same.
It's hard to overstate the damage from this. Lots of musicians I know—serious players, people who took music in college—still think of "complicated keys" and "easy keys" and are only vaguely aware that the keys are actually all the same and they're just being tormented by the notation and terminology. I'm teaching guitar to a friend who was first trombone in high school and it blows her mind that she can play the same scales starting anywhere up and down the fretboard and it sounds the same.
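That point is easy to demonstrate in code: a major scale is just a fixed pattern of whole and half steps laid onto the 12 evenly spaced tones, starting wherever you like. A minimal Python sketch (using sharps-only note names for simplicity; enharmonic spelling is ignored):

```python
# The 12 evenly spaced tones of equal temperament, sharps-only naming.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half-step pattern of the major scale

def major_scale(root):
    """Build a major scale by walking the same step pattern from any root."""
    i = NOTES.index(root)
    scale = [NOTES[i]]
    for step in MAJOR_STEPS[:-1]:  # last step returns to the octave
        i = (i + step) % 12
        scale.append(NOTES[i])
    return scale

print(major_scale("C"))   # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(major_scale("F#"))  # ['F#', 'G#', 'A#', 'B', 'C#', 'D#', 'F']
```

Same function, same step pattern, any of the 12 starting notes: nothing about C is special except that its scale happens to land on the unmodified note names.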
It all comes from the design of the keyboard, where the notes of C major are evenly spaced (white keys) and the sharps/flats are stuck in between. There's also the fact that in the past the 12 notes weren't evenly spaced, so the different keys really did all sound different back then.
Conventional notation does have one big advantage, though: every line or space represents one note in the scale. This is more how musicians think: you don't care that much about the notes outside your key, and having the other ones "tucked away" in between makes it easy to see what's going on. That's why it's so quick to read once you know it. Out of the hundreds of alternative notations, I haven't seen one that's both key-neutral and also makes it easy to see things in terms of scale degrees.
(One idea I've had is a 12-tone staff with Sacred Harp-style shaped note heads to show you what scale degree you're playing. Not sure if that's ever been tried.)
Even in the key of C major, this is a problem in just intonation. Say you want to play a G major chord, so it's made up of G, B, and D (3/2, 15/8, and 9/8). Later in the song you want to play a D minor, so you play D, F, and A (9/8, 4/3, and 5/3). That doesn't sound right, though. It turns out that the D you want is actually 10/9, which is just a bit flatter than 9/8. In standard notation, you can't distinguish.
It's possible to get around this by adding non-standard modifiers to notes aside from the usual ones (sharp/flat/natural), but unmodified standard notation misleads people into thinking that those two notes are the same. Which is another example of your main point, that "standard notation gives a profoundly misleading picture of how music and harmony really work".
Also, with a 12-tone staff plus shape notes, you'd get a little extra information for just intonation because you can tell for sure what key was intended for a given note.
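The ratio arithmetic above can be checked directly with exact fractions; this just re-derives the numbers already mentioned:

```python
from fractions import Fraction as F

# Just-intonation pitches relative to C, as in the comment above
D_from_G_chord = F(9, 8)  # the D that makes G:B:D a pure 4:5:6 over G = 3/2
A = F(5, 3)

# For D-F-A to be a just minor triad, the fifth D -> A must be a pure 3/2
D_needed = A / F(3, 2)
print(D_needed)                   # 10/9, slightly flatter than 9/8
print(D_from_G_chord / D_needed)  # 81/80, the syntonic comma
```

That 81/80 gap between the two D's is the syntonic comma, and standard notation writes both of them as the same note.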
We could come up with more precise and effective languages than the ones we naturally speak, as well, but the good-enoughness of the ones we already have and the fact that others around us are very likely familiar with them is more important. Utility trumps quality, and worse is better.
That said, if all you want is a different notation system to use personally or with small groups of other proponents, there are plenty to choose from. ABC and MML variants use letters for notes and numbers for note lengths, for example. Probably not optimal for sight reading, but maybe better than staff notation when writing or transcribing music. There are also trackers and piano rolls. Neither is very good for quick comprehension, but they may lay things out in a way that makes more intuitive sense.
Another advantage: each note of a diatonic scale is mapped injectively. Cf. representing each line (or space) as a whole-tone, which leads to hash-collisions (e.g. "is that a G or a G#?"). Each note on a line (or space) on which collisions occur would need an accidental. Which defeats the purpose of key signatures.
A diatonic scale contains an odd number of unique notes. The fact that C lies on a line while C' lies on a space is an unfortunate artifact of representing a 7-note scale with alternating lines and spaces.
Requires Windows and a MIDI keyboard.
Is this supposed to be satire? Invoking Poe's Law on this one
Me too. But if you think about it, all you really need is a graphical representation that describes the pitch of sounds relative to each other, as well as their duration relative to the beat. And conventional notation is not bad at it!
The current system is essentially:
a dot on a coordinate system representing the pitch, duration, and position of the sound in a sequence of sounds.
- a horizontal position axis: you draw an invisible x-axis representing the position of the note in its ordered sequence. It gives no indication of its duration.
- a vertical pitch axis defined by Western notes (do, re, mi, etc.): you draw your pitch lines, the y-axis, with y=Do, y=Re, y=Mi, etc.
- a duration axis (let's say it points towards you): we can't draw it in a 2D representation of music, so we project this coordinate onto the time-pitch plane, which is your staff. We decorate the dot representing the note according to its duration coordinate: say its duration is half a beat, then the dot is a black filled circle; if it's a full beat, then it's a white circle; if it's a quarter of a beat, then it's a black filled circle with a hook. Etc.
And then you start adding all the extras of music notation: rests, vibrato, tempo, etc.
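The three coordinates described above amount to a tiny data structure; here's a sketch (the field names and the use of MIDI pitch numbers are my own choices, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Note:
    position: float  # x-axis: where the note starts, in beats
    pitch: int       # y-axis: MIDI note number (60 = middle C, i.e. Do)
    duration: float  # the "third axis", projected onto the note-head shape

# Do, Re, Mi as three consecutive one-beat notes
melody = [Note(0.0, 60, 1.0), Note(1.0, 62, 1.0), Note(2.0, 64, 1.0)]
```

A staff is essentially a scatter plot of these triples, with the duration field folded into the glyph instead of an axis.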
Now, there is the choice of not representing note position and duration on a single axis. That may very well be because it's easier to standardise and read. You could also choose to represent the duration coordinate with colour -- would that make it easier? :)
Maybe the problem doesn't come from the notation, but from the system itself: the half step between B and C, the 12 notes (but really it's more), etc. That's why solfeggio is hard! I think some Greeks considered the study of harmony to be at least as intellectual as that of counting! I wonder if there's an algebra for harmony. An H-algebra, why not?
But really, it's not the only notation: guitar tabs, guitar chord diagrams, etc.
Me, I love standard notation. Common chord voicings and interval patterns stand out as easily recognizable patterns on the page.
I sat down and did this in an hour: https://github.com/exabrial/sonic-pi-beats/blob/master/house...
Sam Aaron is the guy behind the project, he does a lot of ambient type stuff: https://www.youtube.com/watch?v=G1m0aX9Lpts
If you are interested, sign up here https://docs.google.com/forms/d/1-aQzVbkbGwv2BMQsvuoneOUPgyr... and I'll contact you when it's released.
There's another similar-sounding project called Helio that was posted a few weeks ago: https://news.ycombinator.com/item?id=14212054
I hope that in time, we get more Markdown-style composition tools vs. the full DAW suite. Good luck! I'm looking forward to seeing what you make.
P.S. AudioKit is pretty dope. :)
: http://musicmessages.io -- working on turning it into a full iOS app, so will probably have to shut it down and fold it into the new app at some point
Send me an email at mpercossi at zenaud.io , always fun to talk to fellow audio devs :)
And for all the vim lovers out there -- my app supports vi commands for movement and editing :)
Indeed, I'll go further. I'm really starting to believe that the only way not to get royally screwed as an app developer is by abandoning the "major" platforms -- which all want to turn you into a serf -- and target OSS platforms like Linux. I'm honestly tiring of dealing with the artificial roadblocks Apple (and Microsoft is no better) throw at me to further their own ends. I actually analysed SteamOS with this intent, but sadly it looks like SteamOS is geared towards the "living room" experience.
Anyway, long story short: there will be Linux support in 2018.
I realize this is a big limitation, but we intend to add a piano roll in the next few months.
I don't have a demo yet if that's what you are asking about but I've open sourced this for example
Actually I do have some old demos but they don't show the best parts. It's actually kind of hard to show those right now.
>Current DAWs don't really understand music.
>Current DAW : my thing = Windows Notepad : IDE.
It really sounds like you're promising a lot.
I don't need or want any of that. In fact when I write music, music engraving is the least of my concerns. Actually music engraving is generally the least of my concerns period.
Also, I find current music notation kind of outdated. I can read it, but it feels like a system designed by someone with the mathematical knowledge of a 15th-century farmer (which is probably how it came to be).
What specifically about it?
I can read (and prefer) standard musical notation, but when handwriting I use Hummingbird because I find it lends itself to handwriting. But I can't really imagine a "better" musical notation than what is standard today, except a better way to communicate natural/flat/sharp notes.
> (and prefer) standard musical notation
Prefer it over what?
I've signed up to your google form, so I'll look forward to seeing what, if anything, you come up with :) I am on linux (and yes, I agree that music on linux is a pain), so I might not get to use it unless you port it, but I still look forward to seeing it, whatever it is.
Most of what I write is highly dissonant or straight up microtonal.
This is actually exactly what I'm trying to prevent. Most of the current solutions only kind of constrain you to a certain tonal space that you can maybe explore but the space of possible compositions is actually insanely large. My DAW is going to try to help you explore all that.
Microtonality is definitely something I've thought about and I think I can make it work but I'm curious to know what do you use currently to compose?
Often I'll use http://www.huygens-fokker.org/scala/ and my synths and a fair bit of SuperCollider/Overtone.
Also why do you have to learn music theory first, why can't the DAW teach you as you go?
DAWs are used to produce the huge majority of music you hear in the media, from commercials to hip hop songs. Even seemingly real orchestral pieces for movies are often composed entirely using artificial instruments. For example, here is Junkie XL showing how he composed themes for Mad Max Fury Road.
The only difference is that Ableton Live and Bitwig (which runs on Linux) are designed for live performance.
I like Reaper (a fifth of the cost but equally capable) and it also runs reasonably well under Linux. https://linuxmusicians.com/viewtopic.php?t=15280
Actually, many people never pay for a license; it has a similar model to Sublime Text.
Ableton, at least, also functions perfectly fine in the traditional piano-roll and timeline paradigm of DAW workflow too. Don't let the 'Live' part of the name mislead you into thinking it's only for live performers; it does everything the 'old DAWs' do, AND it's got great features to assist in live performance.
Also, in terms of underlying concepts, if you know one DAW well, you can usually learn another one fairly quickly, as it becomes more a question of learning the interface than anything else.
I couldn't disagree more, but I am talking about doing professional work. The concepts are all the same, but becoming proficient in a DAW takes a very long time: you have to find the quirks and strangeness each one comes with before you can produce a quality piece.
Video editors are a hundred times harder to switch between.
I remember way back when I used Cubase. Couldn't find any decent help online.
With Ableton, you are spoiled for choice when it comes to tutorials and lessons.
Many comments here mostly mention software. But there are some interesting exceptions. Check Surgeon for example, who likes to use his custom controllers with Ableton. You actually can see him re-wire the controllers every now and then. (Great music too ;))
When I'm in the zone, I don't care about check boxes. I have some new user interface paradigms that I haven't seen done before (I can imagine they have been tried before tho) that should make writing music super painless and should let you express yourself.
Idk if this will ease some of your concerns, but I've been around HN for a while (I'm in the top 30 karma-wise); I won't spam you.
Also nice endeavour
In the spirit of constructive criticism, maybe you could at least point to the specific negative sides of existing DAWs that you're willing to eradicate?
Note that I'm on the core team of AudioKit https://github.com/audiokit/AudioKit which is a platform for AudioUnit development so I know all about how dope plugins are :-).
What is so arduous about plugging a midi keyboard in?
Another thing I've been dying for is an easier way of layering sounds, for example drum hits. Multiple MIDI sends feel hacky in Ableton and certainly not a first-class feature. On the other side of things, the pain of rearranging multiple WAVs after wanting to change a note is even worse.
I totally agree with you about the actions, though. Configuring plugins etc. can be a huge drain and it's very mouse-heavy.
Layering drum sounds is a typical feature in all DAWs, and in Ableton, with its instrument and drum racks, it's even easier to layer whatever you want. Maybe I don't understand what exactly you are trying to achieve?
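For what it's worth, underneath any rack feature, layering two hits is conceptually just summing their sample buffers with some gain staging. A minimal Python sketch of the idea (the gain values and buffers are arbitrary, and real DAWs work on interleaved multichannel audio, not plain lists):

```python
def layer(a, b, gain_a=0.7, gain_b=0.7):
    """Mix two mono sample buffers by padding the shorter one and summing."""
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [gain_a * x + gain_b * y for x, y in zip(a, b)]

# Layer a short click over a longer thump
mixed = layer([1.0, 0.5], [1.0, 0.8, 0.6, 0.4])
```

The gains matter because naive summing can clip once the combined peaks exceed full scale; that's the part drum racks handle for you.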