Harmonics Explorer (teropa.info)
298 points by udit99 7 months ago | 52 comments



This is really cool. I used to do a lot of overtone ("harmonic") throat singing and playing with this tool reminded me of those days.

For anyone curious, vowels are mostly just how we perceive different harmonic distributions. Put differently, harmonics are the basis of what it means to pronounce a different vowel. The human voice is basically just a harmonic chord, with different distributions of the 2nd, 3rd, 4th, etc. harmonics.

e.g. https://www.open.edu/openlearn/health-sports-psychology/heal...
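Here's a rough numpy sketch of that idea: same harmonic series, different amplitude envelope, different perceived vowel. The formant centre frequencies and the Gaussian shaping are my own rough assumptions, not taken from the linked article.

    import numpy as np

    def vowel_tone(f0=110.0, formants=(730, 1100), sr=44100, dur=1.0):
        """Sum the harmonics of f0, weighting each by how close it falls
        to a couple of rough formant centres (vocal-tract resonances)."""
        t = np.arange(int(sr * dur)) / sr
        tone = np.zeros_like(t)
        for n in range(1, int(sr / 2 / f0)):          # all harmonics below Nyquist
            f = n * f0
            # 1/n source spectrum shaped by Gaussian bumps around the formants
            gain = (1 / n) * sum(np.exp(-((f - fc) / 150.0) ** 2) for fc in formants)
            tone += gain * np.sin(2 * np.pi * f * t)
        return tone / np.max(np.abs(tone))

    ah = vowel_tone(formants=(730, 1100))   # roughly "ah"
    ee = vowel_tone(formants=(270, 2300))   # roughly "ee"
    # save/play e.g. with scipy.io.wavfile.write("ah.wav", 44100, (ah * 32767).astype(np.int16))

Play the two back to back and the "ah"/"ee" difference is audible even though the fundamental is identical.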


That reminds me, I saw a video recently of a choir practice where they sang "brighter" and "darker" based on the conductor's hand position.

It was fascinating how the singers could control the brightness of their voice while holding the same note and frequency. When they went bright, it sounded closer to "eee" or "iii". When they went dark, it sounded like "uuu" and "ooo".

From that, I learned that the lyrics of a song, in particular the vowels, can be chosen consciously (or not) for their harmonic effect.


A lot of great music has been written with this in mind; I would highly recommend Stimmung, for six vocalists and six microphones, by Karlheinz Stockhausen.

Really, the whole piece is built from a framework of phonetics: loose vowel sounds, names taken from the magical or otherworldly traditions of various cultures, and words taken from the composer's own poetry.


Ooh nice. I'd heard of Stockhausen - attended a performance of his works once - but this is new to me. Very strange music with a wide range of harmonics produced by vocalizations of vowel sounds. Sometimes nasal, other times chant-like, machine-like, and even reminiscent of insects at night.

https://www.youtube.com/watch?v=3hPkJW95jsw



You may also find this interesting: https://www.youtube.com/watch?v=3oxe4mlsQos&t=120s



Thanks for sharing! I didn't know the human voice could do this.


Reminds me of pink trombone.

https://dood.al/pinktrombone/


Funnily enough, this is how Hungarian appends suffixes to the ends of words. The "brightness" of the suffix has to match the brightness of the stem.

We'd call e/i/ö/ü the "high" tone class, while a (as in the word "calm"), o, and u are the "low" tone class.


90-second video demonstration: https://www.youtube.com/watch?v=VnC8I3d2MXQ

Lots of different notes present. Perfect 5th, major 2nd, major 3rd, and major 7th are all found in the harmonic series. In addition there are some beautiful non-piano intervals, notably the 7th harmonic (a slightly flat minor 7th) and the 11th harmonic (a flat tritone).
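A quick Python check of those numbers, folding each harmonic into one octave and comparing it with the nearest equal-tempered (piano) interval:

    import math

    # cents of each harmonic above the fundamental, folded into one octave,
    # next to the nearest equal-tempered (piano) interval
    for n in range(2, 12):
        cents = (1200 * math.log2(n)) % 1200
        nearest = round(cents / 100) * 100
        print(f"harmonic {n:2d}: {cents:6.1f} cents  "
              f"(nearest piano interval {nearest:4d}, off by {cents - nearest:+5.1f})")

The 3rd harmonic lands within 2 cents of a perfect fifth, while the 7th and 11th sit roughly 31 and 49 cents away from the nearest piano notes.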


Strictly speaking they're all non-piano intervals, just some are more non-piano than others. (Or you can retune your piano …)


Is this related to formants? I don't actually know what that term means, but I've heard it used in this context.


As I understand it, the formant is the difference between the interval from the harmonic series and the interval in whatever temperament you're in. Piano notes are in equal temperament. String players and some wind instruments let you play the "in-between" notes selectively (like a true major third, unlike the horrid thing equal temperament produces), giving you a powerful emotional tool.


I wouldn't go so far as to say the major third from equal temperament is a horrid thing, but it sure doesn't hit as well as a true major third.


I like the thing, but it misses what I enjoyed teaching about these most: phase. Many people know white noise is nominally "all frequencies at the same intensity", yet those taught Fourier mathematics are also taught that the same recipe makes a pulse. The difference is all in the phase information, which is why I maintain to this day that the Nyquist-Shannon sampling theorem, as typically applied, is incorrect.


Yes, phase information is really important. Try this:

1. Fourier transform an image.
2. Set all magnitudes in the spectrum to 1.0, but do not change the phase.
3. Inverse transform and look at the result.
4. Now try the same, but this time keep the magnitudes unchanged and set all phases to 0°.

Spoiler: when flattening all the magnitudes, the image is still recognizable; when zeroing all the phases, it is not. See example: [1]
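Here's a minimal numpy/matplotlib sketch of that experiment (the image path is just a placeholder):

    import numpy as np
    import matplotlib.pyplot as plt

    img = plt.imread("photo.png")            # placeholder path; any image will do
    if img.ndim == 3:
        img = img[..., :3].mean(axis=-1)     # collapse RGB(A) to grayscale

    F = np.fft.fft2(img)

    phase_only = np.fft.ifft2(np.exp(1j * np.angle(F))).real   # magnitudes -> 1.0, phases kept
    mag_only   = np.fft.ifft2(np.abs(F)).real                  # phases -> 0, magnitudes kept

    fig, axes = plt.subplots(1, 3, figsize=(9, 3))
    for ax, im, title in zip(axes, [img, phase_only, mag_only],
                             ["original", "phase only", "magnitude only"]):
        ax.imshow(im, cmap="gray")
        ax.set_title(title)
        ax.axis("off")
    plt.show()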

But in what sense are you saying the Nyquist-Shannon theorem is incorrect (when applied)? It only says something about the most general case of perfectly reconstructing a signal.

For getting a playful and intuitive understanding of time/frequency transformations, my fourier-cube visualization might be useful [2]

[1]: https://static.laszlokorte.de/phase.png
[2]: https://static.laszlokorte.de/frft-cube/


Phase information is important, but as far as I understand it, it is not important in audio: our ears are insensitive to phase.

(Phase is important when combining different sine waves of the same frequency, because the sum of those will be different depending on their relative phase, but that's a different matter and not relevant here.)

Changing the phases of the different frequencies will result in a waveform that looks different, but it will sound the same. Our ears are like a spectrum analyzer that only records the volume of each frequency, and is unable to record the phase.


To expand on the other comment: you wouldn't be able to tell the difference in a single sine wave with its phase set to either 0 or 180 degrees. But if you add in another sine wave at 0 degrees, the two 0° waves will add up, and the 0° and 180° waves will completely cancel.
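A tiny numpy illustration of that cancellation:

    import numpy as np

    t = np.linspace(0, 0.01, 441, endpoint=False)     # 10 ms at 44.1 kHz
    a = np.sin(2 * np.pi * 440 * t)                   # 440 Hz, phase 0
    b = np.sin(2 * np.pi * 440 * t)                   # same frequency, phase 0
    c = np.sin(2 * np.pi * 440 * t + np.pi)           # same frequency, phase 180 degrees

    print(np.max(np.abs(a + b)))   # ~2.0  -> constructive: doubles in amplitude
    print(np.max(np.abs(a + c)))   # ~0.0  -> destructive: cancels completely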

Phase makes a huuuuge difference in audio engineering. There isn't a single song that gets mixed without intense consideration of phase interactions between the different tracks. Getting it wrong can result in catastrophic damage to the audio signal that reaches your ears. If you have a speaker setup that allows it, try switching the leads that feed the signal on one of the speakers and see how it sounds! Everything that's exactly the same between the two speakers will sound hollow and tinny, and the frequency balance will completely degrade.


The ear is insensitive to the individual phase of audible frequencies, but the concept of phase itself is very important in audio in the form of time delays/echoes, if you are dealing with long sample lengths.

See my sibling comment explaining how translation corresponds to a ramping phase shift, "fast-forwarding" each frequency so that the shifted distance is the same across the spectrum.


The fun video "The Other Square Wave" plays square waves with the phases all out of whack and funny-looking, and they sound just the same: https://youtu.be/Ffka-hPzug0


Phase is hugely important and is part of how we perceive a sound in space. Making sure the phase is correct when using multiple mics to mic a drum kit, for example, is critically important.


Explaining concretely: a uniform spatial displacement of the image corresponds to a ramping phase shift across the frequency spectrum.

i.e. if you shift an image by 1 cm, then the 1 rad/cm frequency component gets its phase "fast-forwarded" by 1 rad, the 1.5 rad/cm component by 1.5 rad, the 2 rad/cm component by 2 rad, and so on.

By subtracting each frequency's phase from its original value, you are basically displacing each component by a different distance from the others, decohering the image entirely.
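Same thing in a few lines of numpy: a circular shift of a signal is exactly a linear phase ramp applied in the frequency domain:

    import numpy as np

    x = np.random.default_rng(0).normal(size=256)
    shift = 5

    x_rolled = np.roll(x, shift)                       # shift in the "space" domain

    k = np.fft.fftfreq(len(x))                         # cycles per sample for each bin
    X = np.fft.fft(x)
    x_via_phase = np.fft.ifft(X * np.exp(-2j * np.pi * k * shift)).real

    print(np.allclose(x_rolled, x_via_phase))          # True: a shift is a linear phase ramp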


Typically the Nyquist-Shannon theorem is stated as saying that you need to sample at twice the maximum frequency for complete reproducibility. Applied to a digital signal, that does indeed work with a discrete Fourier transform. Applied from the perspective of a sample rate (i.e. a *.wav file of a recorded audio signal), it does not hold true in my opinion, because each discrete sample is a scalar, not a vector of coefficients as in the Fourier example.

If we take the highest reproducible frequency, two samples per wave, we find we could perfectly sample at the highest and lowest values of that wave, but we could equally have sampled the zero-crossing points, depending on where in the phase the sample clock aligns with a given waveform. As the sampling has lost significant information, I believe a sample rate should be much higher than what Nyquist-Shannon would suggest for a high degree of reproducibility.

If your source is a digital signal, and you only need to reproduce that signal, of course 2x is ample.
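A small numpy sketch of that borderline case: sampling a tone at exactly twice its frequency captures either the peaks or nothing at all, depending purely on where the sample clock sits in the phase:

    import numpy as np

    f = 1000.0            # tone frequency, Hz
    fs = 2 * f            # sample rate exactly twice the tone frequency
    n = np.arange(16)

    on_peaks = np.sin(2 * np.pi * f * n / fs + np.pi / 2)   # samples land on the peaks
    on_zeros = np.sin(2 * np.pi * f * n / fs)               # samples land on the zero crossings

    print(on_peaks)   # alternating +1, -1: full amplitude captured
    print(on_zeros)   # all (numerically) zero: the tone vanishes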


Actually the theorem states that you must sample at a rate strictly GREATER THAN (not equal to) twice the highest frequency, exactly because sampling only the zero crossings is not enough.

Of course, depending on how exactly you want to process your samples, it might be convenient to have an even higher sampling rate. And if you know your signal does not contain low frequencies (i.e. it does not use the full bandwidth), you might get away with even lower sampling rates.

But in the general case you must sample at a rate strictly greater than twice the highest frequency.


Tangential: your Fourier Cuboid is a very cool project. I have added it to the awesome-interactive-math list [1].

[1]: https://github.com/ubavic/awesome-interactive-math/


I'm not sure what the original poster meant, but the sampling theorem is often misunderstood. There's a good article that goes into those misconceptions, which I recommend to almost anyone who has to choose a sampling rate: https://neuron.eng.wayne.edu/auth/ece4330/practical_sampling...


In the domains that I'm familiar with we never get to measure phase, only intensity. (https://en.wikipedia.org/wiki/Phase_retrieval)


Without entering into the broader discussion in this thread: I also missed the ability to change the phase of each harmonic.


The earliest known scientific hypothesis test was a Pythagorean investigation of whether the mathematical model for consonance in stringed instruments generalized to chimes — so rather than 1:2 as a ratio of string length, whether a ratio of 1:2 in chime thickness also produces an octave. It does. (This experiment was conducted by Hippasus and recorded by Aristoxenus, a student of Aristotle)

But interestingly, we still have big open gaps in our scientific models of consonance and dissonance.

Consonant tones involve a large number of shared harmonics. That alignment appears to be important in the perception of consonance and dissonance. Yet, harmonic alignment is not currently a mechanism used in the algorithmic detection of consonance/dissonance, so far as I know. This tool looks like a good way to generate stimuli for experimentation, thanks!


William Sethares developed a model of consonance and dissonance based around alignment of harmonics (or partials; it generalizes to inharmonic sounds).

Original paper:

https://sethares.engr.wisc.edu/paperspdf/consonance.pdf

Informal explanation:

https://sethares.engr.wisc.edu/consemi.html
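For a feel of how it works, here's a minimal sketch of a Sethares-style sensory dissonance calculation (sum the pairwise roughness of all partials). The constants are the ones I remember from the paper, so treat them as approximate:

    import numpy as np

    def dissonance(freqs, amps):
        """Sethares-style sensory dissonance of a set of partials."""
        d_star, s1, s2, b1, b2 = 0.24, 0.0207, 18.96, 3.51, 5.75   # constants from memory
        total = 0.0
        for i in range(len(freqs)):
            for j in range(i + 1, len(freqs)):
                f_lo, f_hi = sorted((freqs[i], freqs[j]))
                s = d_star / (s1 * f_lo + s2)
                df = f_hi - f_lo
                total += min(amps[i], amps[j]) * (np.exp(-b1 * s * df) - np.exp(-b2 * s * df))
        return total

    def tone(f0, n_partials=6):
        """A harmonic tone with 1/n amplitude rolloff."""
        return ([f0 * n for n in range(1, n_partials + 1)],
                [1 / n for n in range(1, n_partials + 1)])

    # dissonance of two harmonic tones a given ratio apart
    for ratio in (1.0, 16 / 15, 5 / 4, 45 / 32, 3 / 2, 2.0):
        f1, a1 = tone(261.6)
        f2, a2 = tone(261.6 * ratio)
        print(f"{ratio:5.3f}  {dissonance(f1 + f2, a1 + a2):.3f}")

Run it and the fifth and octave should come out noticeably smoother than the semitone or the 45/32 tritone, matching the usual consonance ordering.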


Nice visualization! A few improvement suggestions -- I noticed that it is easy to clip the 'master' output, so a 'master fader' to control its level would help (or a checkbox to rescale the visualization based on the maximum value).

Implementing a phase control for each harmonic would also be interesting for visualization.

Finally, why not add a wavetable synth to allow you to hear the resulting waveform?


Master volume fader, please. This is cool, but I have to change my computer volume to adjust it, and that's not a great experience. Especially when I click Square and it just starts screaming at me.


It's in the top left, labelled master or sawtooth. Defaults to 0.500.


If you want to go above 13 overtones or make other waveforms, I quickly whipped this up for square/triangle/sawtooth/impulse trains:

https://www.desmos.com/calculator/eioaj93rzr


Very nice.


This is very cool.

If the creator is reading these comments, my one piece of feedback would be that I think it would be more interesting/useful if the harmonics were expressed as multiples or ratios of the fundamental.


The effect when switching from sine to square wave, as the harmonics are added, is very nice.


Wow, what a lucky find. This is incredibly useful to me for equalizing speakers to match a room. I was using Websynths Microtonal before but this almost seems designed for the purpose.

All it might need is the ability to manually enter the base frequency yourself or do an automatic sweep. But I could probably bodge that into the source myself.

Lovely!


I love these. My favourite part is that you can hear the fundamental frequency when you add up the non-octave frequencies (i.e. increase all harmonics except 1, 2, 4 and 8). Even though the fundamental frequency isn't "there", your ears can still hear it.
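A small numpy sketch of why: build a tone from only non-octave harmonics of 220 Hz and the waveform still repeats every 1/220 s, which a simple autocorrelation picks up even though there is no energy at 220 Hz:

    import numpy as np

    sr, f0, dur = 44100, 220.0, 0.1
    t = np.arange(int(sr * dur)) / sr

    # only non-octave harmonics: no 1, 2, 4 or 8
    tone = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in (3, 5, 6, 7, 9, 10, 11))

    ac = np.correlate(tone, tone, mode="full")[len(tone) - 1:]   # autocorrelation, lags >= 0
    lag = np.argmax(ac[50:]) + 50                                # skip the zero-lag peak
    print(sr / lag)                                              # ~220 Hz: the "missing" fundamental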


It might be nice to have finer control towards the lower end of the volume range, so the higher harmonics can be present but much softer.


It's wonderful when an interaction makes you question that which you thought you knew.

My understanding was that, in order to produce a triangle or sawtooth wave, you need to have a phase control. This is because of the (-1)^k term in the Fourier expansion, as seen in Wikipedia.

After seeing this site produce a sawtooth wave with no phase control, my mind is blown apart, into tiny little pieces.
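The resolution, as I understand it: the (-1)^k factors are just 180° phase flips, and (per the phase discussion elsewhere in this thread) those are inaudible. A quick numpy sketch: a "sawtooth" built with all-positive coefficients has a different shape but an identical magnitude spectrum:

    import numpy as np

    t = np.linspace(0, 1, 2048, endpoint=False)
    K = 30

    # textbook sawtooth: sum of (-1)^(k+1) * sin(2*pi*k*t) / k
    saw_signed = sum((-1) ** (k + 1) * np.sin(2 * np.pi * k * t) / k for k in range(1, K))
    # same magnitudes, every coefficient positive (all you can do with amplitude sliders)
    saw_unsigned = sum(np.sin(2 * np.pi * k * t) / k for k in range(1, K))

    print(np.allclose(saw_signed, saw_unsigned))                  # False: waveform shapes differ
    print(np.allclose(np.abs(np.fft.rfft(saw_signed)),
                      np.abs(np.fft.rfft(saw_unsigned))))         # True: identical magnitude spectra

So the site can show a sawtooth-sounding wave without any phase control; the drawn shape just isn't the textbook one.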


My tinnitus does not thank you



This is great. I wish the left side showed the frequency ratio as well as the raw frequency


My similar project, with a bit of a spirograph mixed in: https://merely.xyz/waves


Nice. Would be even nicer to be able to move the base frequency


Look for the arrows on either side of C4 on the top bar.


Well, that was fun. I've got headphones on, so I heard it all the way down to a 16 Hz sine wave.

Getting a triangle wave with 4n+1 harmonics wasn't easy.


I thought this might be a map of the positions of various harmonics on e.g. guitar strings, but still very interesting and cool.


You should check out Tero's other work: https://teropa.info/

Another comment:

If you want to use any number of sine waves at any frequency, you can do it in code:

https://glicol.org/tour#mixjs2




