I was talking about this with someone the other day: why don't DAWs push the spectrogram view more forcefully, instead of the default waveform view? There's so much more information to be gleaned from the spectrogram view than from the waveform view.
I'm trying to learn to sing these days, and I'd been wondering if this would be a good way to practice a song: look at the spectrogram of a vocal stem of the song I'm trying to sing, and use it as visual feedback while I sing along.
The waveform view, on the other hand, will always remain useful no matter how good your ears get: if you're comping together multiple takes of the same section, or shifting tracks for phase alignment in a multi-microphone setup, doing it by looking directly at the samples is way less tedious than doing it by ear.
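The phase-alignment part can be sketched in a few lines of numpy (a toy illustration, not how any DAW actually does it): estimate the lag between two mic recordings by cross-correlation, then shift the late track back. All names and numbers here are made up for the example.

```python
import numpy as np

def estimate_lag(ref, other):
    """Return how many samples `other` lags behind `ref`."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Toy input: the same click arriving at a second mic 48 samples later
rng = np.random.default_rng(0)
click = rng.standard_normal(1000)
near_mic = click
far_mic = np.concatenate([np.zeros(48), click])[:1000]

lag = estimate_lag(near_mic, far_mic)
print(lag)  # 48
aligned = np.roll(far_mic, -lag)  # shift the late track back into place
```

In a real session you'd crop both tracks to a shared transient (a clap or drum hit) before correlating, since long takes with bleed can smear the peak.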
Also, though it's probably not an issue today, I would guess CPU concerns are another reason why a spectrogram isn't displayed by default on all tracks.
I also found out that audio classification is often done by training models on spectrogram images!
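The "audio as an image" step is just a short-time Fourier transform. Here's a minimal pure-numpy sketch of turning a signal into a spectrogram array you could feed to an image classifier; the frame size, hop, and test tone are arbitrary choices for illustration.

```python
import numpy as np

def spectrogram(signal, frame_size=512, hop=256):
    """Magnitude spectrogram via a windowed short-time Fourier transform."""
    window = np.hanning(frame_size)
    frames = [
        signal[start:start + frame_size] * window
        for start in range(0, len(signal) - frame_size + 1, hop)
    ]
    # rfft keeps only the non-negative frequency bins
    spec = np.abs(np.fft.rfft(frames, axis=1))
    return spec.T  # shape: (freq_bins, time_frames), like an image

# Toy input: one second of a 440 Hz sine at a 16 kHz sample rate
sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)
img = spectrogram(audio)
print(img.shape)  # (257, 61)
```

Real pipelines usually take the log of the magnitudes and warp the frequency axis to a mel scale first, but the core idea is the same 2-D array.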
The things you're talking about sound interesting; please say more. What are some example applications, for instance?
Disappointing that they stuck to standard equal temperament for everything but the harmonics and string-proportions stuff. There's no reason Kandinsky should be limited to the tempered pitches.
Use the mic input option and try saying different vowels, or different held consonants like "mmmm" vs "nnnn". It's really interesting to see how the patterns of overtones change, which is what makes the sounds unique.
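You can see the same effect synthetically: two sounds with the same pitch but different overtone weights (a crude stand-in for two different vowels) have clearly different spectra. A pure-numpy sketch, with made-up harmonic weights:

```python
import numpy as np

sr, f0 = 16000, 220.0  # sample rate, shared fundamental ("pitch")
t = np.arange(sr) / sr

def tone(harmonic_weights):
    """Sum of harmonics of f0, weighted to shape the timbre."""
    return sum(w * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, w in enumerate(harmonic_weights))

a = tone([1.0, 0.1, 0.6, 0.1, 0.05])   # strong 3rd harmonic
b = tone([1.0, 0.7, 0.05, 0.05, 0.4])  # strong 2nd and 5th

spec_a = np.abs(np.fft.rfft(a))
spec_b = np.abs(np.fft.rfft(b))
freqs = np.fft.rfftfreq(len(t), 1 / sr)
# Both peak at the same fundamental; the overtone pattern differs
print(freqs[np.argmax(spec_a)], freqs[np.argmax(spec_b)])  # 220.0 220.0
```

Real vowels work the same way: your vocal folds set the fundamental, and the shape of your mouth boosts different overtone regions (formants), which is the pattern you see shift in the spectrogram.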
The monkey and the drum sound really badass.