Here's Vache by Venetian Snares, from 2006: https://www.youtube.com/watch?v=c2f5gOo1VEM
And here's a webcam video of the sequencer data scrolling past: https://www.youtube.com/watch?v=zGK-EzEa45U
The techniques are much older. Here's a screengrab of the OctaMed tracker running on an Amiga, playing back the original drum tracks from Aphrodite's jungle classic Beats Booyaa from 1994: https://www.youtube.com/watch?v=bkVSe9DubE8
And here's the whole of Beats Booya to hear it in context: https://www.youtube.com/watch?v=vts6rqJHMK8
Real producers chop their beats in hex ;)
EDIT: also, most music software includes something called an arpeggiator, into which you play chords, which it breaks up into little sequences of notes according to different parameters. Set the interval down to 20ms or so and you've got your black midi :)
That's also how you get those distinctive chord-like sounds out of monophonic soundchips in oldschool video games.
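As a sketch of the arpeggiator idea (all names here are illustrative, not any particular DAW's API): take a held chord, cycle through its notes one at a time at a fixed interval, and at ~20ms intervals the ear starts fusing them back into a chord-like buzz.

```python
# Minimal arpeggiator sketch: turn a chord into a timed sequence of
# (start_time_seconds, midi_note) events. Function and parameter names
# are made up for illustration.

def arpeggiate(chord, interval=0.02, cycles=4, mode="up"):
    """Cycle through the chord's notes, one every `interval` seconds."""
    order = sorted(chord)
    if mode == "down":
        order = order[::-1]
    elif mode == "updown":
        order = sorted(chord) + sorted(chord)[-2:0:-1]
    events = []
    t = 0.0
    for _ in range(cycles):
        for note in order:
            events.append((round(t, 6), note))
            t += interval
    return events

# C major triad (MIDI 60/64/67) at a 20 ms interval -- fast enough to
# sound chord-like on a monophonic soundchip, as in old video game music.
events = arpeggiate([60, 64, 67], interval=0.02, cycles=2)
```

On a chip that can only sound one note at a time, this is exactly the trick: the "chord" is really a very fast loop.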
Also, constraints breed creativity, and this is just another example of that.
It's not the type of music I would listen to generally, but insisting that this should be the case completely misses the point of what's interesting about it.
Most of the criticism in this thread amounts to eloquent formulations of "Get off my lawn" and "your a faggot". No different from the literally tens of thousands of previous instances of situations where a new artistic medium or expression has been criticised.
If you understand stuff like this you can appreciate the music. I'm not exactly going to sit down and listen to it in a dark room sipping some red wine, but it's a new and unique compositional technique that can be incorporated into other music.
I understand it. I have no appreciation for it. sorry.
Some early music used 5-note scales and regarded 2nds and 7ths as unusable (I'm referring to music in the west; folk music elsewhere often still uses 5-note scales). Early music was monophonic, but that limitation was overcome by the 12th century I think. Bach was genteel, but Beethoven was emotional. Brahms was tonal, but Debussy allowed atonality in music. Cage allowed aleatoric sounds to be music. Glass allowed excessive repetition of simple patterns to be music.
This, however, is just ridiculous.
The thing is, it's not at all hard to write a piano piece that's unplayable. Simply add a third note group far enough above or below the existing two note groups and it will be physically impossible to play (unless someone else helps you). It doesn't need to be a grotesque fountain of millions of notes to be unplayable. This is better examined not as impossible music, but as an experiment that asks the question "how many notes can you use at the same time and still make a coherent song?"
The debate over playable versus impossible music would be better served by more realistic examples, instead of a small sub-culture.
We could critique the medium, but people have abused instruments and sound makers for generations. Look up prepared piano, glitch music, and the wide world of guitar effects. This is just another way of exploring the possibilities of a given medium (MIDI, in this case).
Theoretically, we could generate all possible music just by adding together sine waves, but realistically, most music is only accessible by using far more limited mechanisms, like instruments, gadgets, and paradigmatic software, which in turn influence the creative process. A person is definitely free to critique the artistic merits of a given piece, but I think it's rather close-minded to critique an entire medium. Most of the styles of music we listen to were (or still are) considered cheating by adherents to some other style.
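The "adding together sine waves" point is just additive synthesis. A minimal stdlib-only sketch (function names are illustrative):

```python
import math

# Additive synthesis sketch: sounds can be built by summing sine waves.
# Here we mix a 440 Hz fundamental with two harmonics into one signal.

def additive(partials, seconds=0.01, rate=44100):
    """Sum (frequency, amplitude) sine partials into a sampled signal."""
    n = int(seconds * rate)
    return [
        sum(a * math.sin(2 * math.pi * f * i / rate) for f, a in partials)
        for i in range(n)
    ]

# A crude organ-ish tone: fundamental plus 2nd and 3rd harmonics.
signal = additive([(440, 1.0), (880, 0.5), (1320, 0.25)])
```

In principle you could specify any piece of music this way, sample by sample; in practice nobody composes at that level, which is the point about limited mechanisms shaping the creative process.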
Contrast it with its acoustic opposite, chip-tune minimalism, and I'll take chip-tune any day, even in its "pure" form (limited polyphony, no postprocessing, etc., just as you would have found it back in the day).
The majority of the pieces linked to in this thread probably aren't the best example of this, but I think Dream Battle (http://www.youtube.com/watch?v=Lzy_WrH8v7U) is quite good in this regard. The piece itself is just a normal piece -- it only happens to be unplayable, and since it is unplayable, there are a number of Black-MIDI-specific "extended techniques" that can be employed (as you'll see).
There's stuff out there that has a frenetic line paired with a sparse melody that might work well: http://www.youtube.com/watch?v=RK5wWD1k7T0
It doesn't take many notes to make something impossible in practice. On piano, 11 simultaneous notes are impossible for one person to play, and it would probably sound a lot more coherent. The black MIDI I've listened to doesn't sound bad, but it all sounds quite similar - there isn't much range.
Still, I'm not sure this "genre" has much depth to explore. Their time and their choice, of course.
Given how much interesting music (IMO) has come from things like serialist classical, I suspect it's only a matter of time before someone finds ways of producing interesting sorts of tone color that are more difficult to achieve via other methods.
It depends on how pedantic you want to be about the word "genre", and I tend to avoid the word entirely when possible because of the really weird (IMHO) ideations that surround it. But in this case, if we're going to keep it to "piano"-type sounds, which we pretty much have to if we're going to have a "problem" that needs solving anyhow, I think they've pretty much explored the space that's available to them.
And the primary reason for this is that they are not charging into a new, unexplored space... quite the contrary. The piano has been explored for hundreds of years. Rather than opening bold new fields of exploration, this is exploring the last few remnant bits that people couldn't cover earlier due to not having hundreds of fingers.
I understand being open to music ideas, but I also don't believe in entirely turning off my brain. I really don't think there's much "there" there.
I also think the black MIDI examples that have been posted would sound significantly better with better instrumentation (that is, more expensive softsynths -- I am a fan of TruePianos, personally).
But it is accessible to amateurs, and sounds very impressive. Which is, as you noted, a good reason why it's so popular. It's a good piece to pull out when you want to show off.
Can't they make the piano keys a little smaller (narrower)?
Thing is, players with broad fingers have trouble negotiating between black keys as it is now.
Edit: see https://en.wikipedia.org/wiki/Musical_keyboard#Size_and_hist...
(thought she deserved the credit!)
Why not? [/argument]
Not everything needs to be done for a practical reason; that's most art.
There's a long history, going back to the '80s, of artists abusing various computing platforms to write somewhat melodramatic music that pushes the boundaries of both traditional pop songwriting and the computing platforms themselves. This tradition is closer to hacking than it is to pop music in that it follows its own internal logic of oneupmanship, and works aren't produced for any audience outside of the "scene" itself. Black MIDI is just another plausible and entertaining development in that context. Probably some kids who got into the scene and wanted to distinguish themselves by doing something new.
It is indeed also in some respects "good music", but at this point it's already so weird that it's not particularly enjoyable to most people. I happen to have been following this sort of music for a while, since at least the explosion of the chiptune/micromusic scene in the early 00's, and I've learned to enjoy it such that I liked the pieces linked in the article and in this thread. I liked them both as cheesy sentimental pop music and for the "hacks" (e.g. playing a bunch of notes to make a phased "kick" sound) in the same way that someone might appreciate technical guitar playing. Another poster was spot-on when he said that this is basically hacked additive synthesis – that's precisely the joy of it! Ultimately, it's just another acquired taste, like wine or classical music.
http://www.youtube.com/watch?v=3GIemGd3Ctk&list=PLkMjO0BRqWu... (FM funk)
Basically just really good 4-part harmony on 8-bit chips. It's really not unlike pre-Renaissance church music (early polyphony), or a lot of modern choral music.
For the uninitiated, mod tracking is kind of like MIDI but with synthesized and/or digitally sampled instruments, typically played back by software. A friend in high school in the late '90s ranked near the top 5 globally with a beautiful piece that used like 32 channels, all in software. No help from a GUS with this.
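A toy illustration of the tracker idea, not any real module format: a pattern is a grid of rows by channels, playback walks the rows at a fixed rate, and each cell can trigger an instrument.

```python
# Toy mod-tracker sketch: a pattern is rows x channels, each cell either
# None or a (note, instrument) trigger. Playback walks the rows at a
# fixed rate and emits timed events. All names are illustrative.

ROW_SECONDS = 0.125  # 4 rows per beat at 120 BPM

def play_pattern(pattern):
    events = []
    for row_index, row in enumerate(pattern):
        t = row_index * ROW_SECONDS
        for channel, cell in enumerate(row):
            if cell is not None:
                note, instrument = cell
                events.append((t, channel, note, instrument))
    return events

# Two channels: a kick on rows 0 and 2, a bass note on row 1.
pattern = [
    [("C-2", "kick"), None],
    [None, ("A-1", "bass")],
    [("C-2", "kick"), None],
    [None, None],
]
events = play_pattern(pattern)
```

Real trackers add per-cell effect commands (volume slides, arpeggios, sample offsets), which is where most of the expressiveness comes from.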
Not to disparage it too much as an art form. Art forms from what the artist is able to do with the medium of choice, and the choice of medium does not automatically make it better or worse.
Interestingly, we do have this, and it is a crucial component of lossy audio compression techniques such as MP3 - it's the (short-time) Fourier transform. Essentially you can convert any audio signal into sine intensity per frequency over time, and vice versa. The time-frequency resolution is somewhat adjustable, so quicker reaction time can be obtained at the expense of squishing nearby frequencies together. I would not describe a lot of music that I have observed as spectrograms as being "black"; in fact there are visible patterns that correspond with the harmonics of the sounds being played.
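The per-window half of that transform can be sketched with nothing but the stdlib: a naive discrete Fourier transform of one short frame gives the "sine intensity per frequency" for that slice of time (a real spectrogram would also apply a window function and overlap the frames).

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT: magnitude (sine intensity) per frequency bin."""
    n = len(frame)
    return [
        abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(frame))) / n
        for k in range(n // 2)  # keep the non-redundant half
    ]

# A frame containing a pure sine at bin 3: all energy lands in that bin,
# which is why tonal music shows crisp harmonic lines in a spectrogram.
n = 32
frame = [math.sin(2 * math.pi * 3 * i / n) for i in range(n)]
mags = dft_magnitudes(frame)
```

The resolution trade-off mentioned above falls out directly: a longer frame gives more, narrower frequency bins but smears events in time; a shorter frame reacts faster but lumps nearby frequencies together.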
He also has a piece called 'The Black Page' due to its density of notes on the page - http://en.wikipedia.org/wiki/The_Black_Page
Both together here: http://www.youtube.com/watch?v=UrOK98q_ILA&list=PL945B5DD750...
Unplayable music is not new. Some better-known examples are how Queen used to just step off-stage in the middle of Bohemian Rhapsody and play the tape of the multi-tracked vocals; or how The Who used to screw up on stage playing along with the taped parts of Quadrophenia. Even the Beatles' live performances of Paperback Writer were weak because they used so much multitracking in the studio.
What I found interesting was that many of the multi-note combinations were just hacking the synthesizer to produce different sounds. A talented keyboardist could program MIDI sequences triggered by a single keypress and perform some aspects of "Black MIDI" live.
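That live trick could be sketched like this (a hypothetical mapping layer, not a real controller API): one trigger key expands into a pre-programmed dense cluster of note-ons.

```python
# Sketch of mapping one trigger key to a Black-MIDI-style cluster:
# a single keypress expands into many near-simultaneous note-on events.
# Function and parameter names are made up for illustration.

def expand_trigger(root, width=24, spread_ms=1.0):
    """Return (offset_ms, midi_note) pairs for a dense chromatic cluster."""
    return [(i * spread_ms, root + i) for i in range(width)]

# One press of the key mapped to root=48 fires a 24-note chromatic
# cluster, each note-on 1 ms after the previous -- the kind of phased
# percussive attack mentioned elsewhere in this thread.
burst = expand_trigger(48)
```

Feed those events to any MIDI output and a keyboardist could fire Black-MIDI-scale bursts from single keys in real time.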
In contrast, I didn't think that the two examples were pleasing to listen to.
That said, this music sounds atrocious when you run it through a computer, it'd sound better if it were spread out across multiple instruments, but whatever.
But in this form, I really see no value.
I'm only hearing noise in those videos: the noise from the switching on of the note (that slight 'tack').
The synthesisers apparently can't handle that number of notes without some artefacts.
And see, they're only adding huge numbers of notes, but no pitch shift and no volume control (apparently).
This could be interesting with different (softer) instruments, and better synthesisers focused on more notes and more "playfulness" rather than just hammering notes.
Not that you can't play multiplicated stuff in real time, it just doesn't sound very interesting. Low-pitch piano keys already have harmonics from higher octaves.
MIDIs can do that too right?
So, by playing a bunch of notes really fast, you just end up with a different kind of buzz.
I'd rather just use a synth. This is like monkeying with waveforms using a step function. Kind of limited.
Who'll be the first to present a novel 21 million pages long? :) It's quite a challenge as well (AI might help make it achievable in the foreseeable future). Obviously haters gonna hate - shame on the haters.
Personally, I'm more impressed by someone who puts together 210 words, but just the right words.
Yes there are acclaimed authors who invent challenges for themselves, such as Georges Perec who wrote one novel without ever using the letter "e" etc.
It's quite fun, it's just meta - it's a bit of "literature about literature", or "music about music", so to speak.
Your goal is to prove a point, and art as such (the way I see it, of course) is not about proving a point.
If you need 20 million notes to achieve a certain effect, why can't the effect speak for itself? Why the need to put this fact upfront, give it a name, etc.?
Here's one that does sound pretty great just on piano, though: https://www.youtube.com/watch?v=tds0qoxWVss
It's got a pretty decent sequencer, and had no problems playing Circus Galop.
"If I hear one more person who comes up to me and complains about "computer music has no soul" then I will go furious, you know. 'Cause of course the computer is just a tool. And if there is no soul in computer music then it's because nobody put it there and that's not the computer's role. It's the role of the songwriter. He puts down his soul in the song if he wants to. A guitar will never write a song and a computer will never write a song. These are just tools." -- Björk
That said, I'm not a fan of those particular songs either; less would be more IMHO. But many of my all time favourite songs are chip tunes... some compositions (!) simply don't need additional "soul" added to them, they work just fine played by a robot.
Maybe I'd understand your complaint if it was about computer composition of music, but this is just using the computer as an instrument. Would you say flute players can't play music "with soul" because they aren't directly whistling the noises? Why not? How is that qualitatively different from using a computer to play your composition?
Soul is not blind testable, like art, because it comes from the heart, not the brain.
For instance: Consider a recording from a piano played by a human and a computer-generated MIDI file of the same musical piece with included variation/noise in BPM, note duration, velocity, timing etc.
This would result in at least a single-blind test for `soul' if you were to listen to it. You could tell us which piece you think has more, or any (I'm not sure if soul is quantifiable or just a binary existence), soul.
Here's an idea for a test: start with a song recorded at 44100Hz (standard CD quality) that has soul. We can debate the actual piece of music, but I'll use "Clap Your Hands" by A Tribe Called Quest in this example. Give a bunch of people a randomly-downsampled version of the song (at 12500Hz, 800Hz, 220Hz, etc), and have them answer a simple question: "Does this have soul?"
The song is 93BPM, or 1.55 beats per second. At a sample rate of 1.55Hz, we're looking at one sample per beat. Let's use a standard 16-step sequencer and say that a MIDIfied approximation is going to have four samples per beat (sixteenth notes). So, at about 6.2Hz, we've got a recording that has no better resolution than MIDI (potentially even worse).
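The arithmetic, spelled out (assuming a 16-step pattern over a 4/4 bar, i.e. four steps per beat):

```python
# Worked numbers for the thought experiment above.
bpm = 93
beats_per_second = bpm / 60                        # 1.55 beats/sec
steps_per_beat = 4                                 # 16 steps over a 4-beat bar
step_rate_hz = beats_per_second * steps_per_beat   # ~6.2 "samples" per second
```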
Ultimately, I guess I agree with you: "soul" is in the ear of the beholder.
(Disclaimer: I don't actually know anything about digital audio.)
Music that wasn't written on and for a computer, no. Yet it's perfectly possible to manually craft "variation of duration, velocity, loudness" for every single note of every single instrument -- just not by feeding music in standard musical notation into a sequencer unchanged! I agree that MIDI isn't very sophisticated, but it's hardly the last word of music written on and played back by computers. Just consider how young this all is! I'm pretty sure physical instruments and the songs played on them started out kinda simplistic, too. And tribal music for example often isn't so much about expressing emotion, but putting people into a trance-like state by endless repetition, and techno does that just nicely already. It's not my cup of tea generally, but I get the same out of chip tunes: I don't need sophisticated music, I just need a canvas for my ears and soul to draw on, I can fill in the blanks or dream up harmonies on my own.
> An interpret has to understand the emotions that should be transported.
True, but also
a.) it doesn't stop there. Beauty is in the eye of the beholder, and if a simple "gridlike" composition makes me sad, happy or gives me goosebumps, that's "soul enough" for me. Even the soul of a simpleton is still a soul :)
b.) the computer enables composer and interpreter to be the same person.. and if they so desire, they can put endless amounts of detail and emotion into a piece. Personally I have no doubt that people like Mozart would have been all over computers as an instrument, and the wide range of expression they offer already.
Someone else made a very good point about paintings, and you kind of missed it by saying computers can't paint like Da Vinci or Shakespeare -- of course they can't, just like a brush or a pencil can't, and just like a piano can't compose. Do reprints of Shakespeare's work have soul in your opinion? And do they have more, less, or just as much soul than exact reproductions of his original handwriting? Is it possible to communicate soul by typing as we do right now, or would we have to see and smell the hands doing the typing for that, and heads pausing in reflection? Can a photo made with a DSLR and tweaked in a RAW converter have soul? Can a big format analogue photograph? What resolution does soul have, what resolution does our perception of it have? If facial expressions convey soul, does imperfection of sight reduce the amount of soul being communicated? Why does a piano piece that can move one human deeply leave another completely cold? Why can a landscape, even one devoid of plants and animals, make the soul sing, why does soul get perceived where none was put into? If it's because God created it, how does this not apply to computers as well? So many questions ^^
He's arguing that a poem itself cannot have a soul; only during the recitation of a poem, by a live performer, can the work take on that kind of soulful meaning.
Yet this criterion, that a human must perform art for it to have a soul, eliminates all non-performance art. Painting, sculpture, etc. all have no soul.
Yet this is obviously not true. A great painting has soul just as much as any other art.
So what happens when you have a poem, crafted as a sculpture? We've already determined that sculptures have a "soul", therefore something like this http://2.bp.blogspot.com/_GIchwvJ-aNk/SxMre-2FXnI/AAAAAAAANW... has a soul, but no human performed it. The emotional connection is made via the writer and the sculptor (who may even be the same person). Yet, no human can "perform" this sculpture.
In cases like the OP, the music we have here is no different than a sculpture of the composer's intention. No human performs it, yet it's no less valid than if it had been written down for an orchestra of pianists to perform.
Every form of human art has a soul, a painting, or the actually played music.
Computer made "art" does not have a soul, although it may have the same physical structure as a human made one.
Then harpsichords have no soul and Bach would like to have a word with you.
Crap musicians are those that rely on fancy instruments to make up for their lack of basic musicianship.
Even a drunk martini bar pianist can sound halfway decent on a $70k Steinway or Bösendorfer.
Even instruments with very limited expressiveness are no less important. Yanni regularly brings listeners to tears and he plays as much on a synth as he does on a traditional piano.
"Soulfulness" didn't stop with the digital revolution. You're simply not sophisticated enough to perceive it. Even instruments like a TB-303 have brought deep meaning, and communicated emotional, soulful intention, to millions.
Your emotional range is just too narrow to feel it. Blame yourself not the instruments.
If you can't create beautiful, soulful music on a Kawai MP10, or even a bag of sand, then I question your authority on music. You rely on expensive instruments as crutches to fill in your musicality, when you need to develop your own. Start with simple instruments, and when you can put soul into a pair of wooden spoons then you can move on to more expressive instruments.
If somebody can't perform with soul on a Kawai MP10, then any discussion is meaningless, because you've limited soulful musicality to such a tiny fraction of the music and instruments in the world that your definition is effectively useless.
Your argument is like saying "chefs use better ingredients in their restaurant than at home because those ingredients have a soul, while the ingredients they use for home cooking do not".
You are utterly divorced from any reality and live in a trite pedantic fantasy world. Please stop talking to me.
You're simultaneously tiring, limited and boring.
Do you think there is only one (your) reality in the world?
Ranging from 8-bit chip tunes to much more complex electronic music.
Why does that affect the level of communication? To me, the only thing that is different vs a song is that for electronic music the communication is mostly from the composer. But I find that to be the case for most instrumental music, including classical music - a performer that adds so much "personality" to the piece that I notice will generally annoy me.
Music notation is only a recipe for making music. Playing the recipe is not making music.
A brush, pencil, or musical instrument that cannot be used by humans is therefore useless.
The point of the post you reply to is that art can be created even when it is not possible or meaningful to do it as a live performance. Performers do not have a monopoly on creating art. In fact, sometimes performers are props that are or have been necessary due to the lack of technology.
For electronic compositions rendered directly to a sufficiently precise format (which MIDI is not), you need no separate performance - the act of composing it and performing it is the same.
Since I reject your premise, your conclusion is irrelevant to me, and I don't think there's any chance we will get any further.
I see from other comments that you imbue the touch of a human performer with some special quality beyond the purely physical sound generated, and to me that is pure superstition with no basis in reality. You might as well try to convince me fairies are real.
You are assuming that more "soul" (whatever you mean by that) is better. I argue that often it is worse: I tend to dislike classical music where the person playing the music adds too much personal flair (or "soul"), because it makes it sound different to what I expect the piece to sound like. To me that added "soul" detracts from the experience more often than it adds. For that reason too, I organize my classical music solely by composer: If the performing artist is "too noticeable" for me, the piece won't stay in my library, and so I have no interest in who the performer is for the classical albums I keep playing (yes, I can hear the cries of agony from people who consider the performer important).
And electronically generated music is not sheet music. It is more akin to a recording of a performance, even if that "recording" was not live. It embodies what the composer intended the piece to be, rather than being a mix of a recipe from the composer and a musician's interpretation of that recipe. And I am perfectly fine with not having someone else meddling with the composer's vision.
(I do listen to a lot of remixes, and that is different in that they are different enough to the originals to be separate works that I can enjoy that separate expression).
Programming a machine to play something back in just the way the composer wanted is the same as performance. The composer gets to become the performance artist, interpreting and setting their own will, their "soul" as you want to call it, into the machine, so that their music is interpreted and heard exactly as they wished it to be. It's no different than if the composer were to perform their own compositions.
You may not like the instrument being used here, or the way in which the composer has expressed their intent in the recording, but it's exactly as they intended.
Just because I can't humanly play Paganini's Caprices doesn't mean it's bad music.
This "soul to soul communication" you talk of has no meaning to me.
You have a fundamentally broken concept about music.
If you mean "physically capable of being played by a human" then I'd say that a large majority of popular music right now is unplayable by humans, simply due to the sheer amount of digital production that goes into them. Many genres of electronic music fit in this category, but while being "unplayable," all clearly have an author (the composer/producer) and an intent (no matter how shallow or profound).
But if you meant "playable by humans" on a subtler level, I then ask:
Is music coming from a radio playable by humans? Is music written on a sheet playable by humans? Both lack an immediate human operator that is "playing" them, yet despite this difference of media, both are valid forms of music that clearly communicate the intent of a human soul (the former through the radio device, the latter through symbols).
I'd argue that using the computer/algorithm used to generate the black MIDI is an instrument like any other. Maybe you were asking something more along the lines of: how valuable is music that is generated arbitrarily or randomly by a computer algorithm, and does it contain "soul" like music produced explicitly by humans? That is a philosophical question about the nature of art and computational creativity, perhaps without an answer.
C'mon HN, be better than that!
So whether it was made by human hands or a computer algorithm, if it sounds good, people will listen.