Hacker News
About Drums: the physics of overtones (circularscience.com)
99 points by camtarn on Mar 8, 2017 | 30 comments



It's a great article, but the piano tuner in me must correct a detail about "typical musical instruments" such as pianos. The overtones are not necessarily perfect multiples of the fundamental frequency; this is called inharmonicity.

In fact, this is part of why a piano sounds like a piano and a guitar sounds like a guitar.

For any piano, and especially the upright, the bass strings are actually too short to produce any vibration of the main frequency; the only thing you are left with is the overtones, and our brains fill in the rest.

And that brain fill is actually happening across the entire range of the instrument: our brain latches on to specific overtones depending on the interval, and the piano tuner (electronic or human) must compensate for inharmonicity across that range.

This means the bass must be tuned lower than the middle, which in turn is tuned lower than the upper regions.

https://en.wikipedia.org/wiki/Inharmonicity

http://www.precisionstrobe.com/apps/pianotemp/temper.html


Any piano can produce the main frequency of the bass notes. Apart from length, the tension and density of the string play a role in the main frequency as well.

The problem with shorter strings is rather simple: shorter strings are stiffer than their longer equivalents, and stiff strings produce sharp overtones, since the string resists bending at the shorter wavelengths of the higher modes. Perfect overtones would require a perfectly flexible string, so that each one is an exact multiple of the main frequency, which is physically impossible.
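The stiffness effect can be sketched numerically. In the standard stiff-string model the nth partial sits at f_n = n * f0 * sqrt(1 + B * n^2), where B is an inharmonicity coefficient; the B value below is an illustrative order of magnitude, not measured from any particular piano:

```python
import math

def partial_frequencies(f0, n_partials, B):
    """Partial frequencies of a stiff string.

    An ideal flexible string has partials at exact multiples n*f0.
    Stiffness sharpens them: f_n = n * f0 * sqrt(1 + B*n^2), where B
    grows for short, thick strings (hence worse in upright basses).
    """
    return [n * f0 * math.sqrt(1 + B * n * n) for n in range(1, n_partials + 1)]

# Illustrative: A2 (110 Hz) with B = 0.0005, a plausible order of
# magnitude for a piano bass string (real values vary by instrument).
ideal = partial_frequencies(110.0, 8, 0.0)
stiff = partial_frequencies(110.0, 8, 0.0005)
for n, (fi, fs) in enumerate(zip(ideal, stiff), start=1):
    print(f"partial {n}: ideal {fi:7.1f} Hz, stiff {fs:7.1f} Hz (+{fs - fi:.1f})")
```

Note how the sharpening grows with partial number, which is why the tuner stretches octaves to match the sharp second partial rather than the ideal 2:1 ratio.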


Good to know, can't remember where I read that.


I first read about it on a more mathematically and scientifically inclined tuner's page. It also described how slightly detuned strings at the same pitch create a semi-chaotic effect that lets us hear the tone for a longer amount of time while raising the perceived loudness (caused by a very strong initial decay followed by a long, roughly linear decay).

There is a good explanation in Piano Acoustics, linked from the Wikipedia page you posted [1].

[1] https://en.wikipedia.org/wiki/Piano_acoustics
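The detuned-unison effect mentioned above is easy to see numerically; a small sketch (the 440/440.5 Hz pair is an illustrative choice, not taken from the linked page):

```python
import math

# Two unison strings detuned by 0.5 Hz beat against each other: the
# sum's envelope is 2*|cos(pi * 0.5 * t)|, cycling every 2 seconds.
f1, f2 = 440.0, 440.5

def pair(t):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

# Peak level over a 10 ms window at t=0 (strings in phase) versus
# t=1 s (half a beat cycle later, nearly cancelling):
loud = max(abs(pair(k / 44100)) for k in range(441))
quiet = max(abs(pair(1.0 + k / 44100)) for k in range(441))
print(round(loud, 2), round(quiet, 3))
```

The slow trading of energy between near-cancellation and reinforcement is part of what keeps a piano unison sounding "alive" as it decays.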


>For any piano and especially the upright, the bass strings are actually too short to produce any vibration of the main frequency,

Not sure what you're talking about here. For example, the lowest guitar string is E2, which is 82.4Hz. If you calculate[1] the wavelength, it is 13.7 ft. The string length from bridge to nut[2] is only ~25 inches, and yet the string reproduces the fundamental of 82.4Hz without requiring the brain to fill in the gap.

I think that misunderstanding is similar to believing that a speaker can't reproduce 82.4Hz because the cone is not 13.7 feet in diameter, or that you can't hear an E2 in a small room because the width and height of the walls are less than 13 feet. If this were true, then when you listen on earphones, all 88 keys of a piano and the entire range of guitar and vocals would be "audio illusions" of the fundamental frequencies, since the physical sizes of the transducers and ear cavity are all less than 1 inch. The highest 88th key on a piano is a C8 with a wavelength of 3.2 inches.

[1] http://www.mcsquared.com/wavelength.htm

[2] https://www.guitarlessonworld.com/lessons/parts-guitar-learn...
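For the record, the wavelength arithmetic above checks out; a quick sketch (assuming a round 1125 ft/s for the speed of sound in air at room temperature):

```python
# Wavelength of a tone in air: wavelength = c / f.  The speed of sound
# is taken as 1125 ft/s (~343 m/s at 20 C) -- an assumed round figure;
# it varies with temperature.
SPEED_OF_SOUND_FT_S = 1125.0

def wavelength_ft(freq_hz):
    return SPEED_OF_SOUND_FT_S / freq_hz

print(f"E2 ({82.4} Hz): {wavelength_ft(82.4):.1f} ft")           # low E on guitar
print(f"C8 ({4186.0} Hz): {wavelength_ft(4186.0) * 12:.1f} in")  # top piano key
```

Of course, the string's own resonant length is set by its half-wavelength on the string (which depends on tension and density), not by the acoustic wavelength in air, which is the crux of the comment above.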


> In fact, this is part of why a piano sounds like a piano and guitar sounds like a guitar.

My understanding is that it is a bit more complex than that, literally!

The final waveform is not just

W = sum(a_i * f_i) = Psi

where a_i is the amplitude and f_i are the component frequencies.

It is actually

W = sum(a_i * f_i + sqrt(-1) * (b_i * f_i))

  = Psi + i * Phi

Loosely, the imaginary part (the phases of the components) plays a significant role in making an instrument sound like itself.

Of course, the brain fills in a lot of stuff that is still a mystery, but electronic keyboards can set the a_i and b_i to change from "guitar" to "reed organ".
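Whatever notation you prefer, the underlying claim - same amplitudes, different phases, different waveform - is easy to check; a small sketch with made-up partial amplitudes:

```python
import math

def waveform(amps, phases, f0, t):
    """Sum of partials: sum_n a_n * sin(2*pi*n*f0*t + phi_n)."""
    return sum(a * math.sin(2 * math.pi * (n + 1) * f0 * t + p)
               for n, (a, p) in enumerate(zip(amps, phases)))

amps = [1.0, 0.5, 0.25]               # same amplitude spectrum for both tones
f0 = 100.0
ts = [k / 8000.0 for k in range(80)]  # one period, sampled at 8 kHz

peak_a = max(waveform(amps, [0.0, 0.0, 0.0], f0, t) for t in ts)
peak_b = max(waveform(amps, [0.0, math.pi / 2, math.pi], f0, t) for t in ts)

# Identical a_n but different phases: the waveform shapes (and peaks) differ.
print(round(peak_a, 3), round(peak_b, 3))
```

Whether the ear actually hears static phase differences is debated; what is uncontroversial is that the waveform itself changes, which is all the equation above asserts.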


That is way too simplistic as well, as this model is time invariant. Most instruments, due to the physical nature of their excitation, vary in both frequency and phase over time.

It is the main reason why physical modelling is the best way to get realistic results right now - typically lossy lumped finite-element models; digital waveguides are one such model.

In such a model you can incorporate nonlinear damping and resonance functions over time at the desired accuracy.
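A minimal example of the physical-modelling approach is the classic Karplus-Strong plucked-string algorithm, a simple digital waveguide; the parameters below are illustrative:

```python
import random

def karplus_strong(freq_hz, sample_rate=44100, seconds=0.5, damping=0.996):
    """Minimal Karplus-Strong plucked-string model: a noise-filled delay
    line with a two-sample averaging lowpass in the feedback loop.  The
    delay length sets the pitch; the filter makes high partials die out
    faster over time, like a real string."""
    n = int(sample_rate / freq_hz)               # delay length ~ one period
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(int(sample_rate * seconds)):
        s = line[i % n]
        line[i % n] = damping * 0.5 * (s + line[(i + 1) % n])
        out.append(s)
    return out

random.seed(0)                                   # reproducible "pluck"
samples = karplus_strong(110.0)
early = max(abs(s) for s in samples[:1000])
late = max(abs(s) for s in samples[-1000:])
print(early, late)                               # the tail has decayed
```

Even this toy model is time-variant in exactly the sense described above: the spectrum starts noisy and collapses toward the lower partials as the loop filter does its work.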


I have a terrible maths background, so that's -way- above me.

However, I have a good grasp of why instruments sound like they do, so I'm hoping your statement is a complex way (no pun intended) of saying that the waveform has many harmonics and that those harmonics vary over time? Not looking for any kind of argument, just hoping for a bit of explanation of the above. From what I've learned over the years, it's the balance of harmonics and the way they change over time that gives an instrument its timbre and explains the difference in tone between instruments playing nominally the same note (i.e. fundamental at the same frequency).


This is Stretch Tuning, right?

https://en.wikipedia.org/wiki/Stretched_tuning


To my understanding it's more like Equal Temperament vs Just Intonation..?


No, those work even with sine waves. What the piano tuner is talking about is timbre: overtones can be not-perfectly-in-tune with the fundamental.

Has to do with the thickness of the core windings, I think? Something to do with the string acting partly as a rigid rod, not just an idealized wire. I hear this also in electric bass strings with super-heavy core wires.


Right, you're absolutely correct. I actually "get" inharmonicity; I think lack of sleep, plus the fact that tuning and the imperfect interval of an octave come into it (I didn't know that tunings were actually shifted around inharmonicity), made me start thinking about JI/ET... how embarrassing.


As a piano player you might find this guy interesting: https://en.wikipedia.org/wiki/Lubomyr_Melnyk His playing style focuses nearly exclusively on using harmonics and resonance to create his music.


If you want to learn more about sound design[0], you should check out Syntorial http://www.syntorial.com/#a_aid=AudioKit. It's an interactive software synthesizer that teaches you more in an afternoon than just about any book or video on the topic.

This tutorial series is also illuminating, though it's almost too detailed: http://sonicbloom.net/en/63-in-depth-synthesis-tutorials-by-...

You might also be interested in AudioKit https://github.com/audiokit/AudioKit, a (macOS|iOS|tvOS) framework for audio synthesis and processing.

[0] Sound design is such an interesting field, as it's both very artistic and also extremely math/physics/CS/stats-heavy if you want it to be.


A cheaper solution would be to try the open source Pd (Pure Data) in combination with the free "Programming Electronic Music in Pd" book [1].

[1] http://www.pd-tutorial.com/english/index.html


Or SuperCollider, or any demo of a standard synth plugin dropped into a free VST host (like dropping a demo version of Massive [0] into Reaper [1]... though Reaper is technically not free).

[0] https://www.native-instruments.com/en/products/komplete/synt... [1] http://www.reaper.fm/


Don't forget http://overtone.github.io for (live) coding your synths, filters and effects!


Is Syntorial worth $129? Also, it was not clear from the website what kind of synthesis you get to learn.

Can you build VST/AU plugins with AudioKit for integrating with your DAW?


It definitely was to me. It's subtractive synthesis with some FM, IIRC.

AudioKit is Audio Unit only, but yes, yes it does.


One item on my TO-DO list is hooking up an electric guitar to an oscilloscope and frequency analyzer and doing a video and/or blog post about the applied physics of heavy metal guitar sound.

There are tons of tricks employed in guitar playing to create interesting sounds by manipulating harmonics, by both the guitarist and the effect and amplifier signal chain.

Most guitarists who use these tricks are completely unaware of the physical phenomena involved. And the non-guitarist physics geeks always enjoy it when I give a short demo with lots of distortion and artificial and natural harmonic tricks.


I'd look forward to seeing that. I teach music tech, and doing this with a class singing a note - seeing the different harmonics in each person's voice - is interesting, so seeing it applied to a heavy guitar sound would be too. As you say, there's a lot that players do without realising it, and it would also give an understanding of what's possible to those who would take that on board and use the techniques it uncovers.


Yes lots of tricks. Picking/strumming close to the bridge versus at the middle. Pinched harmonics. Strumming past the nut. Natural harmonics at nodes of overtones. Exciting the strings by hitting body instead of strings.
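For reference, the touch points for natural harmonics fall at integer divisions of the string; a quick sketch (the 25.5-inch scale length is just a common example, not from the thread):

```python
# Natural harmonics: lightly touching a string at 1/n of its length
# damps every partial except multiples of n.  Touch points and the
# resulting pitch relative to the open string:
STRING_SCALE_IN = 25.5  # a common guitar scale length; illustrative

for divisor, interval in [(2, "octave"), (3, "octave + fifth"),
                          (4, "two octaves"), (5, "two octaves + major third")]:
    pos = STRING_SCALE_IN / divisor
    print(f"touch at {pos:5.2f} in (1/{divisor} of scale): {interval} above open")
```

These points correspond roughly to the 12th, 7th, 5th, and 4th frets, which is why those are the frets guitarists chime harmonics at.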


Well,

Recently I’ve been trying to figure out what is special about the mridangam and was wondering if I needed to do some analysis myself. Fortunately, I happened to run into CV Raman’s papers analyzing the physics/acoustics/waveforms of the mridangam, which are well worth a read.

He first wrote a short paper in Nature in 1920 (almost 100 years ago): http://dspace.rri.res.in/bitstream/2289/2042/1/1920%20Nature...

His fundamental thesis/analysis is that the way the mridangam is built is special in that it produces harmonic overtones (integral multiples of the fundamental frequency), which is highly unusual for drums, giving it the ability to sound uniquely musical, accompany vocals well, and be played in smaller, softer settings.

A good blog post delving into all of this, including some cool YouTube videos at the end on wave spectroscopy demos using talcum powder (related to Raman), is at https://croor.wordpress.com/2010/11/10/cv-raman-on-drums/

A longer version of his paper, from the Proceedings of the IIS published in 1934, is here: http://dspace.rri.res.in/bitstream/2289/2047/1/1935%20Proc%2...

Figured some of you would be interested in this.
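For a sense of why harmonic overtones are unusual for drums: an ideal uniform circular membrane's mode frequencies scale with Bessel-function zeros, whose ratios to the fundamental are not integers. A sketch using standard tabulated zeros:

```python
# Mode frequencies of an ideal uniform circular membrane scale with the
# zeros j_{mn} of the Bessel functions J_m.  First few zeros (standard
# tabulated values, rounded to four decimals):
BESSEL_ZEROS = {
    (0, 1): 2.4048,  # fundamental mode
    (1, 1): 3.8317,
    (2, 1): 5.1356,
    (0, 2): 5.5201,
    (1, 2): 7.0156,
}

fundamental = BESSEL_ZEROS[(0, 1)]
for mode, z in BESSEL_ZEROS.items():
    print(f"mode {mode}: f/f1 = {z / fundamental:.3f}")
# Ratios 1.000, 1.593, 2.136, 2.295, 2.917 -- not integer multiples,
# which is why a plain drum head sounds unpitched.  Raman's observation
# is that the mridangam's loaded head shifts these toward integers.
```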


I have always wanted to understand the result in this paper, since it is one of the first major publications by the first Indian to win a Nobel prize in science. Can you point me to some more basic acoustics material?

I've wondered about what similar results hold for Idakka [1], a drum played in Kerala, and the talking drums of Western Africa [2].

[1] https://www.youtube.com/watch?v=kaozNblda54

[2] https://www.youtube.com/watch?v=B4oQJZ2TEVI


Folks at University of Edinburgh are doing some super cool stuff on physically modeled audio, including drums.

http://www.ness.music.ed.ac.uk/archives/systems/3d-embedding...


A bit related (though much more theoretical): https://en.wikipedia.org/wiki/Hearing_the_shape_of_a_drum


Fun fact: the waveforms shown are Bessel functions, the daddy of sine (and cosine).
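For the curious, J_0 (the first of those Bessel functions) can be computed from its power series; a quick sketch:

```python
import math

def j0(x, terms=30):
    """Bessel function of the first kind, order 0, via its power series:
    J_0(x) = sum_{k>=0} (-1)^k * (x/2)^(2k) / (k!)^2
    """
    return sum((-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

# Like cosine, J_0 starts at 1 and oscillates, but its zero crossings
# are not evenly spaced and its amplitude decays with x -- it is the
# radial profile of a circular drum head's symmetric vibration modes.
print(round(j0(0.0), 4))       # 1.0
print(round(j0(2.404826), 6))  # ~0: the first zero (the head's nodal circle)
```

The uneven spacing of those zeros is exactly the inharmonicity of drum overtones discussed in the article.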


I was really into SuperCollider a long time ago, and I remember there were synths floating around on the mailing list that included big arrays of weird-looking values, and they somehow made drum sounds. My brain remembers them being called eigenvalues, but I tried searching and didn't find much. No idea how someone figured out or calculated the values. I think they were impractical to calculate at runtime in SuperCollider.


Many similarities between drums and radio frequency cavities. The drums are more interesting due to the nonlinearities of the heads and the air inside.


OK, now I have a Cowboy Mouth song stuck in my head. Is there any better live drummer than Fred LeBlanc?



