The Impossible Music of Black MIDI (rhizome.org)
183 points by wodow on Oct 30, 2013 | 175 comments



People have been doing this kind of thing with trackers for some time.

Here's Vache by Venetian Snares, from 2006: https://www.youtube.com/watch?v=c2f5gOo1VEM

And here's a webcam video of the sequencer data scrolling past: https://www.youtube.com/watch?v=zGK-EzEa45U

The techniques are much older. Here's a screengrab of the OctaMed tracker running on an Amiga, playing back the original drum tracks from Aphrodite's jungle classic Beats Booyaa from 1994: https://www.youtube.com/watch?v=bkVSe9DubE8

And here's the whole of Beats Booya to hear it in context: https://www.youtube.com/watch?v=vts6rqJHMK8

Real producers chop their beats in hex ;)

EDIT: also, most music software includes something called an arpeggiator, into which you play chords, which it breaks up into little sequences of notes according to different parameters. Set the interval down to 20ms or so and you've got your black midi :)

That's also how you get those distinctive chord-like sounds out of monophonic soundchips in oldschool video games.
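For the curious, here's a rough sketch of that idea in Python — no real MIDI library, just note-number events printed out; the 20ms step and the C-major triad are arbitrary choices:

    # Minimal arpeggiator sketch: turn a held chord into a rapid stream of
    # single-note events, the same trick used to fake chords on monophonic chips.
    def arpeggiate(chord, step_ms=20, duration_ms=1000, pattern="up"):
        """Yield (time_ms, midi_note) events cycling through the chord."""
        notes = sorted(chord) if pattern == "up" else sorted(chord, reverse=True)
        t = 0
        i = 0
        while t < duration_ms:
            yield (t, notes[i % len(notes)])
            t += step_ms
            i += 1

    # C major triad (MIDI note numbers), cycled every 20 ms.
    for time_ms, note in arpeggiate([60, 64, 67], step_ms=20, duration_ms=200):
        print(f"{time_ms:4d} ms  note-on {note}")

Feed the resulting events to any synth and the individual notes blur into one chord-like timbre, same as the old soundchip trick.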


I rather like Chris Cunningham's interpretation of what it would take to perform Aphex Twin's "Monkey Drummer" - https://www.youtube.com/watch?v=YB08leFMRnM


Even more insane - using pattern jump commands to create animations :) https://www.youtube.com/watch?v=oeCP6LteWJo


I'm incredibly surprised by the negativity of the comments here. Taking a medium, in this case a piano midi track, and pushing it far beyond its limits, while still resulting in something resembling music, is just about as close to hacking as you can get.

Also, constraints breed creativity, and this is just another example of that.

It's not the type of music I would listen to generally, but insisting that this should be the case completely misses the point of what's interesting about it.


Actually, this is kind of the opposite of constraint breeding creativity. You have removed the constraint of having to make a piece of music playable by a human and now you might have a 20 million note song not because it sounds good but just for the novelty of having so many notes. Like any other medium for art, there will of course be good and bad pieces created. But a million old-school geocities web pages w/ autoplaying embedded MIDI tracks I think serve as a good example that the default judgement is going to be harsh.


There are plenty of constraints in this style. You need to have very many notes. It must be a single instrument. It should presumably sound good or otherwise have some musical value. There is a lot of exploration possible within these constraints.

Most of the criticism in this thread is just an eloquent formulation of "Get off my lawn". No different from the literally tens of thousands of previous instances where a new artistic medium or form of expression has been criticised.


Actually, no. Using only one instrument is a constraint when it comes to electronic music. Making it extremely unplayable by a human is another constraint.


...there will of course be good and bad pieces created.

https://www.youtube.com/watch?v=wkyGQnZNLYY&feature=related

Delivered :)


No surprise -- the people making negative comments here are the same people who said $(your favourite rebellious composer) wasn't music. People who say that are always judged as wrong, in the end.


I think a lot of the dislike is from simply not understanding many of the techniques. E.g. in the second example, there's a bit where the melody is extremely staccato, but the note length isn't achieved from the keyed duration but by a flurry of other percussive notes overwhelming the compressor/normaliser. I guess without knowing that it might seem uninteresting.

If you understand stuff like this you can appreciate the music. I'm not exactly going to sit down and listen to it in a dark room sipping some red wine, but it's a new and unique compositional technique that can be incorporated into other music.


I could appreciate it if a person was playing it, but as it is I don't see any reason to be impressed. Anyone with music theory knowledge and the right software could make this. What's the point of cramming notes in if it doesn't take you somewhere?

I understand it. I have no appreciation for it. Sorry.


Anyone with the right knowledge and right equipment can do basically anything that's possible with that knowledge and equipment. Why be impressed by anything anyone does?


"People who say that are always judged as wrong, in the end."

By whom?


You're right to question my bald comment. My answer: history. I think the history of music (and more generally of art) is the story of the breaking-down of existing borders, while the history of those who would try to enforce the borders is the story of Canute on the beach.

Some early music used 5-note scales and regarded 2nds and 7ths as unusable (I'm referring to music in the west; folk music elsewhere often still uses 5-note scales). Early music was monophonic, but that limitation was overcome by the 12th century I think. Bach was genteel, but Beethoven was emotional. Brahms was tonal, but Debussy allowed atonality in music. Cage allowed aleatoric sounds to be music. Glass allowed excessive repetition of simple patterns to be music.


As a pianist and an electronic musician, I never have any qualms about making music I can't play.

This, however, is just ridiculous.

The thing is, it's not at all hard to write a piano piece that's unplayable. Simply add a third note group far enough above or below the existing two note groups and it will be physically impossible to play (unless someone else helps you). It doesn't need to be a grotesque fountain of millions of notes to be unplayable. This is better examined not as impossible music, but as an experiment that asks the question "how many notes can you use at the same time and still make a coherent song?"

The debate between music that's playable and music that's impossible would be better served by more realistic examples, instead of a small sub-culture.


I don't see why this is ridiculous. I don't think that physical playability/unplayability is the key feature here. What does it really mean to be "unplayable" anyway? When you play piano, is it your body that generates the sound, or are you manipulating a machine that generates it for you? There's a delay between notating a MIDI file and hearing the resulting sound, but the same goes for the time between your finger hitting the key and the hammers hitting the string. I'm not saying there isn't a distinction between live performance and pre-production, but I think we're better off looking at playability as a spectrum than a binary value.

We could critique the medium, but people have abused instruments and sound makers for generations. Look up prepared piano [1], glitch music [2], and the wide world of guitar effects. This is just another way of exploring the possibilities of a given medium (MIDI, in this case).

Theoretically, we could generate all possible music just by adding together sine waves, but realistically, most music is only accessible by using far more limited mechanisms, like instruments, gadgets, and paradigmatic software, which in turn influence the creative process. A person is definitely free to critique the artistic merits of a given piece, but I think it's rather close-minded to critique an entire medium. Most of the styles of music we listen to were (or still are) considered cheating by adherents to some other style.

[1] http://en.wikipedia.org/wiki/Prepared_piano [2] http://en.wikipedia.org/wiki/Glitch_%28music%29


I found the result to be a rather monotonous sameyness. The sound of "all the piano keys being smashed" overwhelms everything else about the piece, and for all the apparent energy, the end result is really quite boring.

Contrast that with its acoustic opposite, chip-tune minimalism, and I'll take chip-tune any day, even in its "pure" form (limited polyphony, no postprocessing, etc., just as you would have found it back in the day).


I appreciate Black MIDI not really as a "cool gimmick" but more as a genre of music. In my opinion, its main function and appeal is to embody music that is still pleasing to the ear, even if it would be impossible in practice.

The majority of the pieces linked to in this thread probably aren't the best example of this, but I think Dream Battle (http://www.youtube.com/watch?v=Lzy_WrH8v7U) is quite good in this regard. The piece itself is just a normal piece -- it only happens to be unplayable, and since it is unplayable, there are a number of Black-MIDI-specific "extended techniques" that can be employed (as you'll see).


I'd be interested to hear what a practitioner of this would do starting from a renowned piece with some actual complexity. It seems like there's a lot of potential for cool effects, but it's held back a bit by compositions that don't use it for much.

There's stuff out there that has a frenetic line paired with a sparse melody that might work well: http://www.youtube.com/watch?v=RK5wWD1k7T0


>> "In my opinion, its main function and appeal is to embody music that is still pleasing to the ear, even if it would be impossible in practice."

It doesn't take many notes to make something impossible in practice. On piano, 11 simultaneous notes are impossible for one person to play, and that would probably sound a lot more coherent. The black midi I've listened to doesn't sound bad, but it all sounds quite similar - there isn't much range.


You can probably get to 12 to 14 and still sound good by using some of the fingers to play multiple neighboring notes.


It's better, yes. Still not my taste, but less monotonous.

Still, I'm not sure this "genre" has much depth to explore. Their time and their choice, of course.


Rock started as music most people thought was too simple to be interesting, and so did jazz and a lot of early electronic music. I think it's really premature to write off a music style most people haven't even heard of yet.

Given how much interesting music (IMO) has come from things like serialist classical, I suspect it's only a matter of time before someone finds ways of producing interesting sorts of tone color that are more difficult to achieve via other methods.


Well, the natural next step for the "genre" is to use something other than just pianos, then use multiple instruments simultaneously, and before you know it you're "just" writing very polyphonic music.

It depends on how pedantic you want to be about the word "genre", and I tend to avoid the word entirely when possible because of the really weird (IMHO) ideations that surround it. But in this case, if we're going to keep it to "piano"-type sounds, which we pretty much have to if we're going to have a "problem" that needs solving anyhow, I think they've pretty much explored the space that's available to them.

And the primary reason for this is that they are not charging into a new, unexplored space... quite the contrary. The piano has been explored for hundreds of years. Rather than opening bold new fields of exploration, this is exploring the last few remnant bits that people couldn't cover earlier due to not having hundreds of fingers.

I understand being open to music ideas, but I also don't believe in entirely turning off my brain. I really don't think there's much "there" there.


It's only a matter of time before someone in this scene rediscovers spectral music and creates a black MIDI version of Partiels (https://www.youtube.com/watch?v=kX77MC5oXDY).

I also think the black MIDI examples that have been posted would sound significantly better with better instrumentation (that is, more expensive softsynths -- I am a fan of TruePianos, personally).


Honestly, I think you can take most of the notes out of there and it will sound better. I'm not saying that it sounds bad, but the influx of keystrokes doesn't do much there sonically and adds some annoying artifacts.


Ever played Rachmaninoff's Prelude in C# minor? 4 well separated simultaneous note groups in a couple of places. That makes it theoretically impossible to play...


Igudesman & Joo have their own way of playing stuff like that: https://www.youtube.com/watch?v=ifKKlhYF53w


This had me in tears of laughter. What a genius.


I love these guys. Saw them live once. :)


Pure genius, and funny too! I couldn't resist posting that link on facebook.


I'm assuming that since you bring up Rachmaninoff, you know that he reportedly had freakishly large hands, and was in the habit of writing things he could play, but others considered "impossible". To me, that's significantly different than black MIDI, which just seems silly, albeit vaguely amusing.


I'm actually only referring to a couple of places in Prelude in C# minor. Most of the song is "almost impossible" for one reason or another, but there are a couple of places near the end where the lower octaves and the upper freakish chords are played simultaneously. The 9th and 11th measure from the end, to be precise. Even Rachmaninoff with his freak hands had to roll that.


It's not a song, it's a prelude. And actually not very technically difficult, one reason for its immediate and enduring popularity. The simultaneous chords are just played very quickly one after the other and not "rolled".


"not very technically difficult" is a matter of perspective. It's about as difficult a piece as any student or amateur is ever going to play or be exposed to and few students or amateurs are skilled enough to play it.

But it is accessible to amateurs, and sounds very impressive. Which is, as you noted, a good reason why it's so popular. It's a good piece to pull out when you want to show off.


> large hands

Can't they make the piano keys a little smaller (narrower)?


Yup, there's a company that does this, can't google it tho

http://books.google.com/books?id=IbZOJIpgnNwC&printsec=front...

Thing is, players with broad fingers have trouble negotiating between black keys as it is now.


Sure, but it could take considerable time to adapt one's muscle memory to the different key spacing.


The keyspacing is identical on all pianos?


For the most part, with the exception of toys and some compact synthesizers and MIDI input devices.

Edit: see https://en.wikipedia.org/wiki/Musical_keyboard#Size_and_hist...


You could make the gap between those note groups wider and it would become impossible. Even then, it is impossible to play for some—my piano teacher had rather small hands and had to "cheat" by dropping out notes if she was going to try Rachmaninoff.


A piano piece requiring 3 hands isn't impossible to play. You just need a second person to help you play it. ;)


Yes, but by that same logic, no music is impossible to play. Just keep adding people to help you. Eventually, you'll be able to play it. This is why the entire "impossible" argument rapidly stops making sense, because you simply have to draw an arbitrary line somewhere, and it becomes a question of impracticality, not impossibility.


Or you can help yourself with your foot, like Jason D. Williams occasionally does.



Original video here: http://www.youtube.com/watch?v=16oaGSltUPE

(thought she deserved the credit!)


I'm curious to know what you think about dubstep, then.


It's not really 'unplayable'; it's simply not playable by a single pianist at a single keyboard. It would be entirely possible to have 30 pianists at 30 keyboards playing these songs. The question then becomes, "Why?"


> The question then becomes, "Why?"

Why not? [/argument]

Not everything needs to be done for a practical reason; that's most art.


I think it's really about the video more than the music.


People here with critical perspectives are missing the point. "Where's the value?" The answer is that the people who make this music are not making music for you. They make it more as a personal challenge and/or to one-up their friends.

There's a long history, going back to the 80s [1], of artists abusing various computing platforms to write somewhat melodramatic music that pushes the boundaries of both traditional pop songwriting and the computing platforms themselves. This tradition is closer to hacking than it is to pop music in that it follows its own internal logic of one-upmanship, and works aren't produced for any audience outside of the "scene" itself. Black MIDI is just another plausible and entertaining development in that context. Probably some kids who got into the scene and wanted to distinguish themselves by doing something new.

It is indeed also in some respects "good music", but at this point it's already so weird that it's not particularly enjoyable to most people. I happen to have been following this sort of music for a while, since at least the explosion of the chiptune/micromusic scene in the early 00's, and I've learned to enjoy it such that I liked the pieces linked in the article and in this thread. I liked them both as cheesy sentimental pop music and for the "hacks" (e.g. playing a bunch of notes to make a phased "kick" sound) in the same way that someone might appreciate technical guitar playing. Another poster was spot-on when he said that this is basically hacked additive synthesis – that's precisely the joy of it! Ultimately, it's just another acquired taste, like wine or classical music.

[1] http://en.wikipedia.org/wiki/Demoscene


Some "scene" music that I actually listen to:

http://www.youtube.com/watch?v=3GIemGd3Ctk&list=PLkMjO0BRqWu... (FM funk)

http://www.youtube.com/watch?v=kl7TjvZzWMY (Zabutom)

Basically just really good 4-part harmony on 8-bit chips. It's really not unlike pre-Renaissance church music (early polyphony), or a lot of modern choral music.


That first one reminds me more of a jazz / electro / rnb / funky house type of thing. Mixed with some Japanese video games, of course. If that's done entirely on one 8-bit chip it blows my freaking mind.


I don't know if it's done on a real chip or not, but that's FM synthesis, like that produced by the http://en.wikipedia.org/wiki/Yamaha_YM2612 in SEGA machines.


Interesting that you bring up the demoscene. This music genre shares with it the fact that it takes a platform to its limits in a technically interesting way (just like 4k intros, C64 demos, etc), but in my taste it distinctly lacks aesthetic merit, which is recognized in the scene as an essential component of any good prod.


And taste is something that cannot be argued with. C64 demos and 4K intros also take a certain kind of person to appreciate.


The demoscene encompasses pushing the limits of video hardware (mode-x, 3D, and god knows how many copper bars) more than mod trackers (audio), but there's definitely sizable overlap. Heck, keygens allegedly often have some cool shit in them besides serials, in the form of demos. Sometimes the UIs themselves are reported to be clever.

For the uninitiated, mod tracking is kind of like MIDI but with synthesized and/or digitally-sampled instruments, typically played back by software. A friend in high school in the late 90's ranked near the top 5 globally with a beautiful piece that used like 32 channels, all in software. No help from a GUS with this.

http://artscene.textfiles.com/music/mods/S3M/MODLAND/


The way I see this is another form of additive synthesis, with poor (technical) sound quality and less control/expressiveness [1]. I'd guess a lot of music would appear "black" if represented as a MIDI sequence for a pure sine-wave synthesizer.

[1] Not to disparage it too much as an art form. Art forms from what the artist is able to do with the medium of choice, and the choice of medium does not automatically make it better or worse.


>MIDI sequence for a pure sine-wave synthesizer

Interestingly, we do have this, and it is a crucial component of lossy audio compression techniques such as MP3 - it's the Fourier transform. Essentially you can convert any audio signal into sine intensity per frequency over time, and vice versa. The time-frequency resolution is somewhat adjustable, so quicker reaction time can be obtained at the expense of squishing nearby frequencies together. I would not describe a lot of music that I have observed as spectrograms as being "black"; in fact there are visible patterns that correspond with the harmonics of the sounds being played.
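If you want to see this for yourself, here's a minimal sketch (assuming numpy and scipy are available) that synthesizes a tone and looks at its intensity-per-frequency-over-time representation:

    import numpy as np
    from scipy import signal

    # One second of a 440 Hz tone plus a quieter first harmonic, at 44.1 kHz.
    sr = 44100
    t = np.arange(sr) / sr
    audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

    # Short-time Fourier transform: intensity per frequency bin over time.
    # A longer nperseg gives finer frequency resolution but coarser time resolution.
    freqs, times, sxx = signal.spectrogram(audio, fs=sr, nperseg=2048)

    peak_bin = sxx.mean(axis=1).argmax()
    print(f"strongest frequency ~{freqs[peak_bin]:.0f} Hz across {len(times)} time slices")

Rendering real recordings this way is how you get the harmonic patterns described above, rather than a uniformly "black" image.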


Frank Zappa was doing something very similar with the Synclavier in the 80s (http://en.wikipedia.org/wiki/Synclavier).

He also has a piece called 'The Black Page' due to its density of notes on the page - http://en.wikipedia.org/wiki/The_Black_Page

Both together here: http://www.youtube.com/watch?v=UrOK98q_ILA&list=PL945B5DD750...


Yeah, I was surprised by the lack of a reference to The Black Page in TFA.


Fitting that both the examples are of Touhou song remixes, as Touhou is kind of the video game equivalent of black MIDI (shoot em up with ridiculous amounts of projectiles) http://www.youtube.com/watch?v=wmMDqub_UKA


Back in the 1940s, the term "musique concrète" would apply to this genre. Back then, musique concrète was making music via recording technology that could not be played live.

Unplayable music is not new. Some better-known examples are how Queen used to just step off-stage in the middle of Bohemian Rhapsody and play the tape of the multi-tracked vocals; or how The Who used to screw up on stage playing along with the taped parts of Quadrophenia. Even the Beatles' live performances of Paperback Writer were weak because they used so much multitracking in the studio.

What I found interesting was that many of the multi-note combinations were just hacking the synthesizer to produce different sounds. A talented keyboardist could program MIDI sequences triggered by a single keypress and perform some aspects of "Black MIDI" live.
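A rough sketch of that one-keypress idea in plain Python (the note numbers and spacing are made up for illustration): a single incoming note-on fans out into a pre-programmed burst of events, which a keyboardist could fire live from one key.

    # One trigger note expands into a pre-programmed burst of note events,
    # so a single keypress can stand in for a dense "black MIDI" gesture.
    def expand_trigger(trigger_note, base_time=0.0, burst_size=32, spread_ms=5):
        """Return (time_seconds, midi_note, velocity) events for one keypress."""
        events = []
        for i in range(burst_size):
            events.append((
                base_time + i * spread_ms / 1000.0,   # 5 ms apart by default
                trigger_note + (i * 7) % 24,          # climb in fifths, wrap after 2 octaves
                100,
            ))
        return events

    # Pressing middle C (60) once produces a 32-note flurry.
    for event in expand_trigger(60)[:6]:
        print(event)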

In contrast, I didn't think that the two examples were pleasing to listen to.


MIDI was never intended to be playable by people. It's just a protocol, you can do whatever you want with it. So saying that this is "impossible" doesn't make any sense.

That said, this music sounds atrocious when you run it through a computer, it'd sound better if it were spread out across multiple instruments, but whatever.


Yes, if you displayed the full orchestral score of Beethoven's 5th on a single grand stave, it would look "black".


Whilst clicking through the related youtube links, I found a 3D representation / rendering of one of these tracks (instead of the 2D one shown on most videos), with the notes cascading downwards onto ten rows of keys - gives a bit more perspective to how they made it.


Well, it is definitely interesting.

But in this form, I really see no value.

I'm only hearing noise in those videos, the noise from the switching on of the note (that slight 'tack')

The synthesisers apparently can't handle that number of notes without some artefacts.

And see, they're only adding huge numbers of notes, but no pitch variation and no volume control (apparently).

This could be interesting with different (softer) instruments, better synthesisers designed for that many notes, and more "playfulness" rather than just hammering notes.


We do know that more interesting things are certainly possible with many many notes.

https://www.youtube.com/watch?v=muCPjK4nGY4


I read a fair bit of scifi, and wonder what an alien life form would sound like, trying to pronounce English. This is pretty damn close.


Yeah, a lot of what I hear and see in the videos from the article is just octave multiplication. I've seen much more interesting effects achieved by interrupting fast-running arpeggiators. Also very complex, but far more melodic, and playable by humans in real time.

Not that you can't play multiplicated stuff in real time, it just doesn't sound very interesting. Low-pitch piano keys already have harmonics from higher octaves.


This reminds me of the kind of stuff I made while screwing around in Scream Tracker as a teenager 20 years ago. The difference is I recognized that it was garbage that nobody would want to listen to.


Hey, this reminds me of the talking piano (http://www.youtube.com/watch?v=muCPjK4nGY4)

MIDIs can do that too, right?


The effect is just reminiscent of a very simple FM synth; you start with a sine wave and add another waveform whose cycle starts below the audible spectrum. Start cranking it up, and you don't hear any "pulse"; it just takes on a different timbre.

So, by playing a bunch of notes really fast, you just end up with a different kind of buzz.

I'd rather just use a synth. This is like monkeying with waveforms using a step function. Kind of limited.
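For concreteness, a minimal numpy sketch of the FM idea described above: a sine carrier whose phase is wobbled by a modulator that sweeps up from sub-audio rates, so the "pulse" blurs into a new timbre. All the numbers here are arbitrary.

    import numpy as np

    sr = 44100
    dur = 2.0
    t = np.arange(int(sr * dur)) / sr

    carrier_hz = 220.0
    # Modulator sweeps from 2 Hz (an audible wobble) up to 110 Hz (a timbre).
    mod_hz = np.linspace(2.0, 110.0, t.size)
    mod_index = 5.0  # depth of the frequency modulation

    # Classic FM: the carrier's phase is perturbed by the modulator's phase.
    mod_phase = 2 * np.pi * np.cumsum(mod_hz) / sr
    audio = np.sin(2 * np.pi * carrier_hz * t + mod_index * np.sin(mod_phase))

    print(audio.shape, audio.min(), audio.max())  # write to a WAV to actually hear it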



Holy smokes Batman, 21 million notes

Who'll be the first to present a novel 21 million pages long? :) It's quite a challenge as well (AI might help make it feasible in the foreseeable future). Obviously haters gonna hate - shame on the haters.

Personally I'm more impressed by someone who puts together 210 words, but just the right words

Yes, there are acclaimed authors who invent challenges for themselves, such as Georges Perec, who wrote a novel without ever using the letter "e", etc.

It's quite fun, it's just meta - it's a bit of "literature about literature", or "music about music", so to speak.

Your goal is to prove a point, and art as such (the way I see it, of course) is not about proving a point.

If you need 20 million notes to achieve a certain effect, why can't this effect stand on its own? Why the need to put this fact upfront, give it a name, etc.?


Composers have been making "impossible" music for a long time. They just relied on _orchestras_ to play it.

Here's one that does sound pretty great just on piano, though: https://www.youtube.com/watch?v=tds0qoxWVss


Exactly how many voices do these playbacks include? This could be limited to just 16 simultaneous piano notes. I'd rather hear it played back on a clean sine synth, where it's possible to distinguish individual notes. I like how Black Midi shares some characteristics with Spectral music (http://en.wikipedia.org/wiki/Spectral_music), but the implementations are in most cases still crude experiments.


I would definitely recommend checking out Conlon Nancarrow as well, who was mentioned at the start of the article. A lot of his stuff is somewhat abstract, and sometimes his rhythmic patterns are really too odd to be heard (e.g. 7:17:27). But then he also has pieces like Study #3a, which is just a bluesy boogie-woogie that just builds. (I would love to actually be in the physical presence of a piano playing that piece; it would be like it was possessed.)


I went to play Fujiwara no Mokou's Theme through my Roland Atelier AT-30 [1], but the MIDI file (at 194.4 MB!) is a tad too big to fit on a 3.5" floppy disk.

It's got a pretty decent sequencer, and had no problems playing Circus Galop.

[1] http://www.youtube.com/watch?v=vVuNg9XWcBA


I almost like it, but it just has... it has... too many notes. Yes, that's exactly it. Too many notes.


Pff, needs more notes in that case. MOAR!


This style of music would make an awesome soundtrack to bullet hell style games. Here's a mashup of a bullet hell playthrough with a black MIDI: http://videodoubler.com/combo/saved/464


This is my new favourite genre of music. It's like a computer is tickling my brain.


This is pretty interesting. Reminds me of the "speaking piano" a little bit: http://www.youtube.com/watch?v=muCPjK4nGY4


I've always thought it would be interesting to see what a really good composer could do with a band (orchestra?) comprised entirely of pianists. I imagine it would be an interesting challenge all around.


I would suggest Dan Deacon for someone who writes "unplayable" yet very tuneful pieces: eg.

http://youtu.be/TPg4Vcr56F0?t=10m15s


To quote composer John Adams, "We forgot that music was supposed to sound good."


It is impossible to listen to it in the first place.


Snow Crash, anyone? :).


Must...resist...spending my entire lunch looking at all the videos and wikis.


very cool


As a pianist I do not understand the purpose of music that is not playable by humans. In my understanding music is a communication method between human souls. A computer has no soul, likewise Black MIDI "music".


Obligatory quote:

"If I hear one more person who comes up to me and complains about "computer music has no soul" then I will go furious, you know. 'Cause of course the computer is just a tool. And if there is no soul in computer music then it's because nobody put it there and that's not the computer's role. It's the role of the songwriter. He puts down his soul in the song if he wants to. A guitar will never write a song and a computer will never write a song. These are just tools." -- Björk

That said, I'm not a fan of those particular songs either; less would be more IMHO. But many of my all time favourite songs are chip tunes... some compositions (!) simply don't need additional "soul" added to them, they work just fine played by a robot.


This music is "playable by humans." Their instrument is a MIDI sequencer rather than a piano or guitar. Are you claiming that only X, Y and Z physical artifacts can make real music? Do player pianos make music? Does my MP3 player output music? Maybe I'm unsophisticated, but plenty of 'music' that comes out of my MP3 player generates the kind of emotion that I think of when you say "communication between souls".


As an internet commenter I do not understand the purpose of Wifi signals that are not visible to humans. In my understanding discussion is a communication method between human souls. A computer has no soul, likewise Internet "comments".


Pianos don't have souls, either.

Maybe I'd understand your complaint if it was about computer composition of music, but this is just using the computer as an instrument. Would you say flute players can't play music "with soul" because they aren't directly whistling the noises? Why not? How is that qualitatively different from using a computer to play your composition?


Handmade pianos have souls. The soul of the technician who built them. This is why the best pianos, like Bösendorfers and Steinways, are still built manually.


That sounds kind of esoteric / superstitious to me. Has this been blind tested? Anyway... computer music has the soul of the people selecting the samples and the parameters for instrument synthesis, and choosing when and how to play them. That's why all that is done manually. Heck, on the C64 (probably not just there) it wasn't unusual for composers to write their own composing software and playback routines.


In the C64 days I programmed my TI99/4A to play Bach. It sounded awful. The interpretation had no soul.

Soul is not blind testable, like art, because it comes from the heart, not the brain.


Is soul the imprecision of notes not being perfectly uniform in duration and velocity? That's what your above comment seems to imply.

For instance: Consider a recording from a piano played by a human and a computer-generated MIDI file of the same musical piece with included variation/noise in BPM, note duration, velocity, timing etc.

This would result in at least a single-blind test for `soul' if you were to listen to it. You could tell us which piece you think has more, or any (I'm not sure if soul is quantifiable or just a binary existence), soul.
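A crude sketch of what that "humanized" rendering might look like, in plain Python on a list of note events. The jitter amounts are guesses, and a real test would render both versions through the same synth:

    import random

    # Each note: (onset_seconds, midi_note, velocity, duration_seconds)
    quantized = [(i * 0.5, 60 + (i % 8), 80, 0.45) for i in range(16)]

    def humanize(notes, timing_jitter=0.02, vel_jitter=8, dur_jitter=0.05):
        """Add small random variations in timing, velocity and duration."""
        out = []
        for onset, note, vel, dur in notes:
            out.append((
                max(0.0, onset + random.gauss(0, timing_jitter)),
                note,
                int(min(127, max(1, vel + random.gauss(0, vel_jitter)))),
                max(0.05, dur + random.gauss(0, dur_jitter)),
            ))
        return out

    for row in humanize(quantized)[:4]:
        print(row)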


What about a recording versus a live performance? Isn't a MIDI essentially a super-low-resolution digital recording, minus any spatial acoustic noise? Is soul then contained in space, or the presence of those souls present?

Here's an idea for a test: start with a song recorded at 44100Hz (standard CD quality) that has soul. We can debate the actual piece of music, but I'll use "Clap Your Hands" by A Tribe Called Quest in this example. Give a bunch of people a randomly-downsampled version of the song (at 12500Hz, 800Hz, 220Hz, etc), and have them answer a simple question: "Does this have soul?"

The song is 93BPM, or 1.55 beats per second. At a sample rate of 1.55Hz, we're looking at one sample per beat. Let's use a standard 16-step sequencer and say that a MIDIfied approximation is going to have four samples per beat (sixteenth notes). So, at about 6.2Hz, we've got a recording that has no better resolution than MIDI (potentially even worse).

Ultimately, I guess I agree with you: "soul" is in the ear of the beholder.

(Disclaimer: I don't actually know anything about digital audio.)


No, soul is not imprecision. It is the expression of human feelings in the form of variation of duration, velocity, loudness. For example, speed in music is like speed in breathing. You are not breathing at the same speed all the time. It depends on the context. If you are in a hurry you breathe fast and short. If you sigh you breathe deep and slow. An interpreter has to understand the emotions that should be transported. These emotions are unfortunately not sufficiently representable in MIDI files or music notation. As Mahler said, the essence of music is not in the sheet. Therefore a computer can not reproduce the essence of music.


> Therefore a computer can not reproduce the essence of music.

Music that wasn't written on and for a computer, no. Yet it's perfectly possible to manually craft "variation of duration, velocity, loudness" for every single note of every single instrument -- just not by feeding music in standard musical notation into a sequencer unchanged! I agree that MIDI isn't very sophisticated, but it's hardly the last word on music written on and played back by computers. Just consider how young this all is! I'm pretty sure physical instruments and the songs played on them started out kinda simplistic, too. And tribal music for example often isn't so much about expressing emotion, but putting people into a trance-like state by endless repetition, and techno does that just nicely already. It's not my cup of tea generally, but I get the same out of chip tunes: I don't need sophisticated music, I just need a canvas for my ears and soul to draw on; I can fill in the blanks or dream up harmonies on my own.

> An interpreter has to understand the emotions that should be transported.

True, but also

a.) it doesn't stop there. Beauty is in the eye of the beholder, and if a simple "gridlike" composition makes me sad, happy or gives me goosebumps, that's "soul enough" for me. Even the soul of a simpleton is still a soul :)

b.) the computer enables composer and interpreter to be the same person.. and if they so desire, they can put endless amounts of detail and emotion into a piece. Personally I have no doubt that people like Mozart would have been all over computers as an instrument, and the wide range of expression they offer already.


The problem lies in the context. If we speak to each other we take human context into account. So does a musical interpreter. It is just another medium. Music instead of words. A computer does not understand human context. This was already shown by Weizenbaum with his program Eliza. If the computer does not understand emotion, it does not understand how to create music. There is no way to formalize music exactly enough that a computer can play it accordingly. I doubt that this will ever be possible, because a true artist takes human context into account in his performance. So there is no static formalization of music. The only solution would be for the composer to play live on the computer. But an instrument that can not be played by humans is useless in this situation.


With computer music, the act of composition and the act of playing it are one and the same. It's like writing a piece, having an orchestra play it, then going back to the score sheet and changing something, over and over. Usually at some point time, patience and/or inspiration run out, long before the song is really good -- but that's a limitation of the state of the art(ists), not of music made with computers in general, IMHO.


No, it is not. There is no orchestra. There is a contextless, static description of tones, called sheet music or MIDI. This description gets transformed into music whenever a musician plays it, or into a set of soulless notes if a computer plays it.


As I said already, MIDI is kind of crude and hardly the last word. The description can be as detailed as the brain of the composer can handle it. The acts of composition and performance are indistinguishable. You could even manually set the amplitude of 44100 (or more) points per second if you wanted to... arguably the musicians that can make full use of the possibilities that exist even now haven't even been born yet.

Someone else made a very good point about paintings, and you kind of missed it by saying computers can't paint like Da Vinci or Shakespeare -- of course they can't, just like a brush or a pencil can't, and just like a piano can't compose. Do reprints of Shakespeare's work have soul in your opinion? And do they have more, less, or just as much soul than exact reproductions of his original handwriting? Is it possible to communicate soul by typing as we do right now, or would we have to see and smell the hands doing the typing for that, and heads pausing in reflection? Can a photo made with a DSLR and tweaked in a RAW converter have soul? Can a big format analogue photograph? What resolution does soul have, what resolution does our perception of it have? If facial expressions convey soul, does imperfection of sight reduce the amount of soul being communicated? Why does a piano piece that can move one human deeply leave another completely cold? Why can a landscape, even one devoid of plants and animals, make the soul sing, why does soul get perceived where none was put into? If it's because God created it, how does this not apply to computers as well? So many questions ^^


He's making a more fundamental mistake, born of his lack of emotional range.

He's arguing that a poem itself cannot have a soul; only during the recitation of a poem, by a live performer, can the work take on that kind of soulful meaning.

Yet this criterion, that a human must perform art for it to have a soul, eliminates all non-performance art. Painting, sculpture, etc. all have no soul.

Yet this is obviously not true. A great painting has soul just as much as any other art.

So what happens when you have a poem, crafted as a sculpture? We've already determined that sculptures have a "soul", therefore something like this http://2.bp.blogspot.com/_GIchwvJ-aNk/SxMre-2FXnI/AAAAAAAANW... has a soul, but no human performed it. The emotional connection is made via the writer and the sculptor (who may even be the same person). Yet, no human can "perform" this sculpture.

In cases like the OP, the music we have here is no different than a sculpture of the composer's intention. No human performs it, yet it's no less valid than if it was written down for an orchestra of pianists to perform.


Do you know and understand the painting "This is not a pipe" by Magritte?


That painting hangs in pretty much every independent coffee shop and cafe in probably 50 countries.


Great, and what does it mean?


You've got to be kidding me. It means it's a painting, not an actual pipe. And by extension, other representational art is not what it represents but something else. But it doesn't matter to you, because the painting is not being performed by a human, and is the same every time you look at it; therefore, you claim, it doesn't have a soul.


A painting exists from the moment it is painted, and it persists more or less for at least a few hundred years. Music on the other hand is ordered vibration of air molecules (and this is something different than a composition, hence the pipe example). These vibrations vanish immediately. Therefore a painting does not have to be performed like music.

Every form of human art has a soul - a painting, or the actually played music. Computer-made "art" does not have a soul, although it may have the same physical structure as a human-made one.


We are talking about MIDI sequences that are not producible by humans. If no human can produce those sequences, how can they have human soul?


Yes it is. The orchestra is the computer.


You say that a computer is equivalent to an orchestra?


I say that an old washboard and a bucket of nails can be the equivalent of an orchestra. A computer is no different an instrument in a musician's hands than any other instrument.


My only possible conclusion from what you're saying is that you have never heard a good big orchestra.


I've played in 2 good big orchestras, 1 nationally ranked. I've even been section lead of one and concert master of an orchestra and a Baroque chamber group. I don't consider myself a great musician, but I definitely know that you've puffed your mind up with so much bullshit you could open a fertilizer plant.


If those orchestras sounded like an old washboard and a bucket of nails, then I definitely do not want to attend a concert.


> duration, velocity, loudness

Then harpsichords have no soul and Bach would like to have a word with you.


It is known that Bach improvised on his works during performances. That means he even departed from the written notes.


A harpsichord pretty much fails your "test" for "instruments with souls".


why?


I told you two comments ago, you also don't have any reading comprehension. Do you know what a harpsichord is and how it works?


Timing is the most important thing in Music according to Mozart. Are you trying to say that a harpsichord does not allow variations of timing?


And all your experience tells us is that the Ti99/4A did not have good sound chips and/or you are not good at writing music for it. Meanwhile, I still listen to certain C64 music regularly that I greatly enjoy, because the composer knew how to extract great music even from something as limited as that.


No, the TI99/4A had no understanding of music. You have to have emotions to understand music. A Computer does not have emotions.


So does that mean that piano music played on an insufficiently fancy piano is soulless as well?


I have a Kawai MP10 digital piano and an old handmade analog Hofmann piano. If I play, for example, just the last chord (6 notes) of Chopin's Prelude Op. 28 No. 4 on the Hofmann, the music goes deep through every nerve. Nothing comparable happens on the Kawai, even with very good speakers.


That's what people who buy monster ethernet cables say.


Are you saying that those 2 data points constitute a spanning set of the possibilities?


Do you understand the concept of an example?


Bad musicians frequently blame the instruments.


You mean good musicians do not distinguish good and bad instruments?


A good musician can play world class music on a rusty spoon, a used pie plate and a one-string-out-of-tune-$20-guitar.

Crap musicians are those that rely on fancy instruments to make up for their lack of basic musicianship.

Even a drunk martini bar pianist can sound halfway decent on a $70k Steinway or Bösendorfer.


I do not doubt that. But to return to the original point, a good instrument has a soul, as a good musician has. My digital Kawai does not have one; my analog Hofmann does. My musical skills have nothing to do with that.


The nature of the instrument doesn't dictate its soul. A digital piano was made with just as much care and soul as a regular old piano.

Even instruments with very limited expressiveness are no less important. Yanni regularly brings listeners to tears and he plays as much on a synth as he does on a traditional piano.

"Soulfulness" didn't stop with the digital revolution. You're simply no sophisticated enough to perceive it. Even instruments like a tb303 have brought deep meaning, and communicated emotional soulful intention, to millions.

Your emotional range is just too narrow to feel it. Blame yourself not the instruments.


Have you ever played a Bösendorfer Imperial?


Yes. I can do you one better, I've even played the piano discussed here

http://www.nytimes.com/2003/03/03/international/europe/03RUS...


If you can not understand the difference between these instruments and, for example, a Kawai MP10, then any discussion is meaningless.


I've also played wooden spoons, a TB-303, TR-808, TR-909, Classical Violin, Appalachian fiddle, Classical Cello, Classical Piano, Piano Moderne, Three kinds of Organ, Several Synths, Tracker, Marimba, Xylophone, 북, 장고, Kazoo, Slide Whistle, Pot and Pan, 5-gallon drum, Hang, Recorder (modern and classical), flute, penny whistle, Appalachian dulcimer, Hammered Dulcimer, 꽹과리, 자바라, and on and on and on. I'm not even uniquely qualified to comment on this because music is a universe, not a spectrum.

If you can't create beautiful, soulful music on a Kawai MP10, or even a bag of sand, then I question your authority on music. You rely on expensive instruments as crutches to fill in your musicality, when you need to develop your own. Start with simple instruments, and when you can put soul into a pair of wooden spoons then you can move on to more expressive instruments.

If somebody can't perform with soul on a Kawai MP10, then any discussion is worse than meaningless, because you've limited soulful musicality to such a tiny fraction of the music and instruments in the world that your definition is effectively useless.


I did not say that I can not perform on an MP10. I regularly do, for practical reasons. All I said is that the Hofmann or any other handmade piano is much better because it has a soul. Just look in the world's concert halls and see what world-class classical pianists prefer.


Because they require less performance effort to get a nice sound, not because they have a soul.

Your argument is like saying "chefs use better ingredients in their restaurant than at home because those ingredients have a soul, while the ingredients they use for home cooking do not".

You are utterly divorced from any reality and live in a trite pedantic fantasy world. Please stop talking to me.

You're simultaneously tiring, limited and boring.


No, chefs use better ingredients because that leads to better-tasting food. As music is a communication method of emotions from human to human, a better instrument is one that leverages emotions. Handmade instruments are far better in this regard than robots or computers because the craftsman's idea of sound, and therefore his soul and emotions, are built in. That is why handmade instruments sound far more different from each other than modern CNC-made pianos do. The handmade ones have personality and taste. Important ingredients for good music.

Do you think there is only one (your) reality in the world?


I don't get that mindset at all. A lot of the music I listen to is music that is not playable on analogue instruments without severely butchering it.

Ranging from 8-bit chip tunes to much more complex electronic music.

Why does that affect the level of communication? To me, the only thing that is different vs a song is that for electronic music the communication is mostly from the composer. But I find that to be the case for most instrumental music, including classical music - a performer that adds so much "personality" to the piece that I notice will generally annoy me.


A human interpretation of sheet music is a 3-tier soul transformation: Composer - Interpreter - Listener. Computer-played music has a digital filter that eliminates a lot of soul in between.

Music notation is only a recipe for making music. Playing the recipe is not making music.


There are plenty of other artistic mediums that lack a performance aspect. Painting, drawing, literature, photography. Do these mediums also lack soul because they directly connect the composer to the audience, without a third party to interpret? Sure, if a piece of music was written with the intent that it be played by human hands, then it probably won't sound great when played by a computer. But if an artist creates an original piece of music using electronic tools, how is listening to that piece any different than viewing a photograph, a painting, or reading a book? Many people dislike audio books specifically BECAUSE the filter of the narrator colors their interpretation of the author's words. As with reading, one might argue that electronic music better communicates the author's soul to the audience.


A computer can not paint like Leonardo da Vinci. It can not write like Shakespeare. And it can not play music like Sviatoslav Richter.

A brush, pencil, or musical instrument that can not be used by humans is therefore useless.


Your premise and your conclusion are entirely unconnected by any form of logic.

The point of the post you reply to is that art can be created even when it is not possible or meaningful to do it as a live performance. Performers do not have a monopoly on creating art. In fact, sometimes performers are props that are or have been necessary due to the lack of technology.


Music does not exist until the moment it is played. Therefore you need the performance to create art out of a composition. Computers can not create art.


I don't agree with your premise. The composition is as much art, often more so, than the performance. For most of the music I listen to, including classical music, I want the performance to be a faithful reproduction of the intention of the composition. For sheet music, you need the performer to interpret, but if he or she interprets outside of well-established norms, the piece will sound off.

For electronic compositions rendered directly to a sufficiently precise format (which MIDI is not), you need no separate performance - the act of composing it and performing it is the same.

Since I reject your premise, your conclusion is irrelevant to me, and I don't think there's any chance we will get any further.

I see from other comments that you imbue the touch of a human performer with some special quality beyond the purely physical qualities of the sound generated, and to me that is pure superstition with no basis in reality. You might as well try to convince me fairies are real.


Do you play an analog instrument?


No, but creating the recipe is making music.

You are assuming that more "soul" (whatever you mean by that) is better. I argue that often it is worse: I tend to dislike classical music where the person playing the music adds too much personal flair (or "soul"), because it makes it sound different to what I expect the piece to sound like. To me that added "soul" detracts from the experience more often than it adds. For that reason too, I organize my classical music solely by composer: if the performing artist is "too noticeable" for me, the piece won't stay in my library, and so I have no interest in who the performer is for the classical albums I keep playing (yes, I can hear the cries of agony from people who consider the performer important).

And electronically generated music is not sheet music. It is more akin to a recording of a performance, even if that "recording" was not live. It embodies what the composer intended the piece to be, rather than being a mix of a recipe from the composer and a musician's interpretation of that recipe. And I am perfectly fine with not having someone else meddling with the composer's vision.

(I do listen to a lot of remixes, and that is different in that they are different enough to the originals to be separate works that I can enjoy that separate expression).


Then you definitely would not like to hear Bach or Mozart themselves playing their own works. It is well known that they improvised on their own works during performances.


You're making the fundamental mistake of assuming that all notes that are written down are the same. A MIDI file isn't a score; it's a recording of the playback of notes. A performer can sit down at a MIDI-enabled Steinway grand and record their performance into a MIDI file, then play back that exact performance on the same piano and it will sound identical to the original. If you were to listen to such a playback with your back turned, you would be unable to tell if the pianist was playing, or if the MIDI was.

Programming a machine to play something back in just the way the composer wanted is the same as performance. The composer gets to become the performance artist, interpreting and setting their own will, their "soul" as you want to call it, into the machine. So that their music is interpreted and heard exactly as they wished it to be. It's no different than if the composer were to perform their own compositions.

You may not like the instrument being used here, or the way in which the composer has expressed their intent in the recording, but it's exactly as they intended.


The OP mentioned MIDI music that is not playable by humans.


So? It doesn't mean the composer can't write music that exceeds the ability of any human performer and still encode the nuance they intended into the performance.

Just because I can't humanly play Paganini's Caprices doesn't mean they're bad music.


If the composer writes music that is not even playable by himself, then this music is never played by a human. Therefore a soul to soul communication to the listener is impossible.


I challenge John Williams to play the theme to Star Wars on his own! The point bane is trying to make is that the composer can use MIDI to record parts of a performance and then combine it into a form which is then unplayable by a single person [Edit: or any group of people].


I don't even know what a "soul to soul communication" is intended to mean. It sounds like you are applying some mystic qualities to it. I think most of us here don't think that the act of performance itself adds anything to the quality of the sound of the music over and beyond the sounds that are recorded, and so a "recording" that is painstakingly generated note by note is to me no different from one that is played live as long as they sound the same. If a piece has never, ever been played live, it makes no difference to me.

This "soul to soul communication" you talk of has no meaning to me.


99.999% of composers write music they don't and can't play themselves. That's why techniques like this are appealing, because the composer can write the performance at the same time. The written music is the performance. There's no obtuse performer screwing up the composer's will in this scenario.

You have a fundamentally broken concept about music.


Zappa referred to electronic music (of the day) as 'missing the eyebrows'. It's sort of the uncanny valley of electronic music.


When you say "playable by humans," what do you mean?

If you mean "physically capable of being played by a human" then I'd say that a large majority of popular music right now is unplayable by humans, simply due to the sheer amount of digital production that goes into them. Many genres of electronic music fit in this category, but while being "unplayable," all clearly have an author (the composer/producer) and an intent (no matter how shallow or profound).

But if you meant "playable by humans" on a subtler level, I then ask:

Is music coming from a radio playable by humans? Is music written on a sheet playable by humans? Both lack an immediate human operator that is "playing" them, yet despite this difference of media, both are valid forms of music that clearly communicate the intent of a human soul (the former through the radio device, the latter through symbols).

I'd argue that using the computer/algorithm used to generate the black MIDI is an instrument like any other. Maybe you were asking something more along the lines of: how valuable is music that is generated arbitrarily or randomly by a computer algorithm, and does it contain "soul" like music produced explicitly by humans? That is a philosophical question about the nature of art and computational creativity, perhaps without an answer.


As Frank Zappa said music notation is like a recipe. He also said eating the recipe is crazy. Hearing computer played notes is like eating the recipe.



If music is communication that originates in your soul, then your hands are merely a tool for realizing that expression. Why can't music realized through other means have soul? Arguably, more complex tools allow for greater expressiveness.


Go see a Nicolas Jaar concert, or any other incredible electronic musician, and then please tell me again that electronic music has no soul.


Interesting take. What's your impression of Musique concrète or - possibly the furthest out - someone like Cage?


I just want to point out how depressing it is that a comment that has spawned quite a bit of interesting discussion has been downvoted so heavily.

C'mon HN, be better than that!


Some of my favorite music is VOCALOID-based (musical speech synthesis), so it doesn't even have a real person singing. And it sure as hell has a lot more "soul" than a lot of the music that's been played by "real instruments" (especially by your extremely slim definition).


recommendations?


This is like saying books must be spoken aloud, never written down.


Ordinary people like music that sounds good. They don't care how it was made. Only musicians care about that.

So whether it was made by human hands or a computer algorithm, if it sounds good, people will listen.


I'm pretty certain I don't have a soul, either.



