Long story short, it seems the answer is that our scales and chords derive from a combination of what makes for interesting voice leading and what notes sound good together in a given timbre. The choice to approximate ideal chords and scales using 12 equally spaced notes derives from instrumental playability concerns and the desire to transpose freely and modulate between distant keys.
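To make the "approximation" point concrete, here's a minimal sketch (the interval choices are mine, not from the books below) comparing a few just-intonation ratios to the nearest 12-tone equal temperament steps, measured in cents:

```python
import math

def cents(ratio):
    """Interval size in cents: 1200 * log2(frequency ratio)."""
    return 1200 * math.log2(ratio)

# Just-intonation intervals and the 12-TET step count that approximates each.
just_intervals = {
    "major third (5/4)":   (5 / 4, 4),
    "perfect fourth (4/3)": (4 / 3, 5),
    "perfect fifth (3/2)":  (3 / 2, 7),
}

for name, (ratio, steps) in just_intervals.items():
    just_c = cents(ratio)
    tet_c = steps * 100  # each 12-TET semitone is exactly 100 cents
    print(f"{name}: just = {just_c:.1f}c, 12-TET = {tet_c}c, "
          f"error = {tet_c - just_c:+.1f}c")
```

The fifth comes out almost perfectly (about 2 cents flat), while the major third is roughly 14 cents sharp of just, which is part of why 12-TET is a compromise rather than an ideal.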
It's a deep topic, but if anyone's interested, I recommend two books:
- Tuning, Timbre, Spectrum, Scale by William Sethares
- A Geometry of Music by Dmitri Tymoczko
I've been playing with music and mathematics for a while, and I've had the thought that someone far more mathematical and musical than I am could map tried-and-true mathematical constructs onto music, beyond defining patterns via Markov processes, neural or evolutionary algorithms, or simply generating random melodies. Pulling these patterns into 12-point space is really interesting.
I was just turned on to SYZYGYS's music, which is based on Harry Partch's 43-tone scale. I've found it hard to stop listening to it today, though maybe that's just because it's novel to me.
Mathematics is all about patterns and relationships, so music is an aural expression of mathematics, as so many others have said; yet I never tire of seeing (and hearing) examples of this.
I'm talking about more advanced harmonic constructions like Jacob Collier's work (check him out if you haven't heard him).
Harmony (and its relation to melody) is a time-based problem; you can't just join random chords together that happen to match the current melody note. LSTMs work for time-series learning, but I'm not convinced they "get" the concepts of music theory, so maybe some kind of hybrid of engineered music-theory features plus an LSTM with good training material could work.
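One way to read that hybrid idea is to feed the sequence model tokens that carry harmonic context, not just raw pitch. A toy sketch (all the feature choices here are hypothetical, just to illustrate the shape of such an encoding):

```python
PITCH_CLASSES = 12

def note_features(midi_pitch, chord_pitch_classes):
    """Hand-engineered feature vector for one melody note:
    one-hot pitch class + is-chord-tone flag + normalized distance
    (in semitones, pitch-class circle) to the nearest chord tone."""
    pc = midi_pitch % PITCH_CLASSES
    one_hot = [1.0 if i == pc else 0.0 for i in range(PITCH_CLASSES)]
    is_chord_tone = 1.0 if pc in chord_pitch_classes else 0.0
    # Circular distance on the 12-note pitch-class circle (max is 6).
    dist = min(min((pc - c) % 12, (c - pc) % 12) for c in chord_pitch_classes)
    return one_hot + [is_chord_tone, dist / 6.0]

# Example: melody note E (MIDI 64) over a C major triad {C, E, G} = {0, 4, 7}.
feats = note_features(64, {0, 4, 7})
```

A sequence of such vectors (one per time step, with the current chord supplied alongside each note) could then be the input to an LSTM, so the model sees chord function directly instead of having to infer it from pitch statistics alone.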