For some bands this could change the dynamic. Singers who cannot write down chords sometimes don't want to impose on the instrumentalists by bringing in new material. If they can bring in a rough take, that barrier is lifted.
• Not all music is in 4/4. Mixed and obscure meters are understandably hard, but 3/4 really ought to be either recognized automatically or something I can specify. Even when the song is in 4/4, it sometimes loses track of where "1" is.
• It doesn't like to (maybe can't) write more than a three-note chord, and I didn't see any "sus" chords, but maybe I didn't feed it a good sample. Not understanding four-note chords will probably add ambiguity to the results -- say, an Em7 being mistaken for a G if there isn't enough E going on.
• It appears to cache well. When I hit popular songs through their cleanest YouTube URL the results pop up instantly. Well done!
Looking forward to progress, but it goes in the toolbox now.
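The Em7-vs-G confusion mentioned above is easy to see in terms of pitch classes -- a minimal sketch (names and numbers are my own illustration, not anything from the tool):

```python
# Pitch classes: C=0, C#=1, ..., B=11.
G_MAJOR = {7, 11, 2}       # G, B, D
E_MINOR7 = {4, 7, 11, 2}   # E, G, B, D

# Em7 contains the entire G major triad as a subset...
print(sorted(G_MAJOR & E_MINOR7))   # [2, 7, 11]
# ...so only the E distinguishes the two chords.
print(E_MINOR7 - G_MAJOR)           # {4}
```

If the E isn't prominent in the mix, a triad-only recognizer has no evidence left to prefer Em7 over G.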
I've used it before, it's pretty interesting, especially for its visualisations of the chords being played. It works better for songs with a clear chord structure, though. Too bad the project seems kind of dead. There are other projects listed elsewhere in this discussion (e.g. Chordino+NNLS Chroma) which seem viable.
Tangentially, Melodyne's "Direct Note Access" promotional video was very exciting when I saw it years ago, but I have to wonder how well it ever worked (when it finally came out).
It had trouble with most pop and rock music, presumably because of the crushed dynamics and drums. Of course, it's not even designed for analyzing multi-instrument recordings.
(There are usually a few "doesn't work" comments, but imagine how much extra processing is required to find the fundamental when guitars are run through delay/reverb, overdrive/distortion, chorus/tremolo/vibrato, and other effects.)
Perhaps I'm foolish to question the artist about the chords of his own song, but it certainly doesn't look quite right to me. Each phrase in the verse starts on an A, and the algorithm misses that a few times. When the whole band starts, a lot more chord changes get missed, which seems odd to me based on the description of the algorithm.
Also, it would be nice if the tool attempted to identify a tonal center ("key") of the song, and use that to spell the chords more appropriately. For a lot of pop music, this would be fairly easy and reliable. In this song, the tonal center is clearly E, and it would be nice if the C#m wasn't spelled as Dbm.
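Key-aware spelling like this is mostly a lookup once the tonal center is known -- a minimal sketch (the key table and function are my own illustration, not Chordify's code):

```python
SHARP_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
FLAT_NAMES  = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']
# Major keys whose signatures use sharps; everything else gets flats here.
SHARP_KEYS = {'G', 'D', 'A', 'E', 'B', 'F#', 'C#'}

def spell(pitch_class, key):
    """Spell a pitch class (0-11) to match the key signature of `key`."""
    names = SHARP_NAMES if key in SHARP_KEYS else FLAT_NAMES
    return names[pitch_class % 12]

print(spell(1, 'E') + 'm')   # C#m -- E major has sharps, so never Dbm
```

Real spelling rules are subtler (minor keys, modal tunes, double accidentals), but even this crude table would fix the Dbm-in-E case.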
For a much more blatant example of Chordify just failing to spot really obvious things, check out their results for "Get Lucky":
This song has a completely fixed chord scheme (Bm/D/F#m/E). Each exception in the chord sheet is a mistake.
If I remember right, both Chordify and Yanno use NNLS Chroma as the feature extractor, but they use different methods to segment and label the chords. (Yanno uses Chordino, which you can find at http://isophonics.net/nnls-chroma along with the NNLS Chroma plugin.)
I guess with it hooked up to YouTube we can rapidly try out all sorts of unusual inputs.
And there are even worse algorithmic composing systems like Lyle Murphy's Equal Interval System, Schillinger, etc... I love algorithms but that's not art.
At its most basic, composition is the creation of a musical score. What form the score takes, what genre the piece is in, the method of composition, the evaluation of the resulting work, are all flexible depending on the style of music and the quirks of the particular composer and audience. Whether something qualifies as "art" is almost entirely subjective -- if it speaks to an audience (including the composer!) it's worth something.
Makes me wonder what the next step would be. Seems like they're essentially decomposing a song into its ingredients. I wonder if they could use their algorithm to convert a song from one genre to another like auto-creating the hiphop or dance version of a given song :)
From the article itself:
“The problem with ‘full polyphonic transcription’ is that the computer doesn’t know how many voices and instruments sound together and what the characteristics are of these instruments,” says De Haas.
“When you transcribe chords, we examine the mixture as a whole and examine what the prominent frequencies are in the spectrum.”
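The "examine the mixture as a whole" idea can be sketched roughly: fold the spectrum into a 12-bin chroma vector, then score it against chord templates. This is a generic template-matching toy, not De Haas's actual algorithm:

```python
import numpy as np

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def triad_templates():
    """One 12-bin template per major and minor triad (1s on chord tones)."""
    templates = {}
    for root in range(12):
        for suffix, intervals in (('', (0, 4, 7)), ('m', (0, 3, 7))):
            t = np.zeros(12)
            t[[(root + i) % 12 for i in intervals]] = 1.0
            templates[NOTE_NAMES[root] + suffix] = t
    return templates

def label_chord(chroma):
    """Pick the triad whose template best correlates with the chroma vector."""
    chroma = np.asarray(chroma, dtype=float)
    return max(triad_templates().items(),
               key=lambda kv: float(chroma @ kv[1]))[0]

# A chroma frame dominated by E, G#, B should come out as E major.
frame = np.zeros(12)
frame[[4, 8, 11]] = [1.0, 0.8, 0.9]
print(label_chord(frame))   # E
```

Real systems add beat-synchronous smoothing and a transition model (e.g. an HMM) on top, which is where the segmentation differences mentioned elsewhere in the thread come in.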
Here is a demo: http://www.youtube.com/watch?v=jFCjv4_jqAY
It works best with single-instrument signals - not so much with a mix of multiple instruments.
this youtube video is not available in your location
this deezer song is not available in your country
this youtube video contains content from [x]. it is restricted from playback on certain sites.
Then I tried it with "Don't Think Twice, It's All Right" and it pretty much nailed it. I wonder if it was tuned on Dylan. :)
But I was trying to use it on "Moonlight in Vermont" by the Johnny Smith Quintet. I expect that the type of harmonic structure used in the jazz chord-melody style is probably beyond the current capabilities of this technology.
Even so, the results seemed to have no relation at all to the music being played: chords were shown during silent parts of the song and vice versa.
For instance, the lowest frequency (among the strongest amplitude candidates) is often the bass line, and thus the chord root. However, those kinds of general rules are common, yet often broken to make the music more interesting in the first place -- especially when notes are "left out" to open things up and let your mind fill them in.
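The "lowest strong frequency is the bass" heuristic is easy to sketch (this is a toy with a made-up threshold, not any tool's actual method):

```python
import numpy as np

def bass_candidate(frame, sample_rate, threshold=0.5):
    """Return the lowest frequency among the strongest FFT bins."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Keep only bins within `threshold` of the loudest one...
    strong = np.flatnonzero(spectrum >= threshold * spectrum.max())
    # ...and report the lowest surviving frequency as the bass.
    return freqs[strong[0]]

# Synthetic test: a 110 Hz bass note (A2) plus a quieter 440 Hz overtone.
sr = 8000
t = np.arange(sr) / sr
frame = np.sin(2 * np.pi * 110 * t) + 0.4 * np.sin(2 * np.pi * 440 * t)
print(round(bass_candidate(frame, sr)))   # 110
```

On a real mix this breaks exactly as described above: slash chords, pedal tones, and omitted roots all put something other than the root at the bottom of the spectrum.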
The results seem hit and miss -- consecutive lines with the same chord structure appear to be transcribed differently.