Hacker News
The THX Sound (musicthing.blogspot.co.uk)
307 points by harel on May 27, 2012 | hide | past | favorite | 41 comments

to clarify:

the THX sound was created by a synthesis program that was probably quite substantial itself (I have long been under the impression that it was a MUSIC-N derivative, and Moorer's description seems to confirm it) and, I'm guessing, probably not written in C. The input to this program was itself generated by a 20KLoc C program.

EDIT: i figured "i don't have to explain music-n, people can just look it up" but i just read the wikipedia article and i think that explanation only makes sense if you already know what it is. so...

MUSIC is a software synthesizer by Max Mathews, considered the father of computer music synthesis. It consists of a library of 'unit generators' and an orchestra/score processing system for using them. The "orchestra" file is used to define "instruments", basically parameterized combinations of unit generators, which themselves are basic synthesis building blocks (say, a square wave generator, or in the case of the THX sound, a wavetable oscillator).

The orchestra is controlled by a score, also known as a note list. This is just a series of statements saying, basically, "start playing instrument N at time T with parameters X,Y,Z".
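For anyone who hasn't seen one, a note list is simple enough to sketch in a few lines. This is a toy illustration, not MUSIC-N's actual syntax (the statement layout here is loosely Csound-flavored and entirely made up), just to show the idea of a score driving parameterized instruments:

```python
# Toy note list: each tuple says "start instrument N at time T with parameters".
# The field layout is invented for illustration.
notes = [
    # (instrument, start_time_s, duration_s, amplitude, frequency_hz)
    (1, 0.0, 2.0, 0.3, 220.0),
    (1, 0.5, 1.5, 0.3, 277.2),
    (2, 1.0, 1.0, 0.2, 330.0),
]

def render_score(notes):
    """Flatten the note tuples into score statements for the 'orchestra'."""
    return [f"i{inst} {t} {dur} {amp} {freq}"
            for inst, t, dur, amp, freq in notes]

for line in render_score(notes):
    print(line)
```

The orchestra file then decides what "instrument 1" actually does with those parameters.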

MUSIC was followed by MUSIC-II, MUSIC-III, etc, hence the name "MUSIC-N". Basically every extant software synthesis package is derived from MUSIC-N on some level, but Csound is the most direct descendant.

Ah, but what creates a sound? The instrument, or the musician that knows how to play it? An instrument without a musician is merely silence.

I would argue that if the C program didn't "create" the sound that it was certainly the musician responsible.

one of the nice things about computer music is the way in which it can be a collaboration between musician and engineer, so much so that the lines are frequently blurred. i certainly got the impression from the article that in this case, Moorer was himself involved in the development of the synthesis package and even the signal processor itself.

Fundamentally though, you are of course correct. This was a mainframe in the 80s, so presumably there was an operating system and system libraries and they all played a part, so should we credit those developers as well? it's tortoises all the way down.

This same conversation could very easily be applied to OSS.

No idea why you are being downvoted, your point is a sensible one to make.

Composers have a 'musical imagination' and often write at the keyboard of (typically) a piano. Classical composers sometimes write music that is not 'idiomatic' for a particular instrument; adjustments follow. From the original article it seems as if the composer had a definite idea of what he was after and adjusted things until he got close.

What interests me about the original anecdote is the use of random numbers in the parameters, and the difficulty that the composer had in reproducing a particular 'state' of the sample.

I have the wonderful fortune of getting to work with Andy, and spoke with him about this a few years ago. (This story pops up again every two years or so). Shortly after, he let me know he had taken a look at the code for the first time in years and had overestimated the line count. I'd have to go back through old emails but I believe he said it was only something like 2000 lines.

I'm pretty sure he has far more interesting stories to tell than this, and he is pretty hot on the banjo.

This makes more sense, writing and debugging 20,000 lines of C code in four days is a little crazy.

If it were 20k lines, he would probably have said something like

"it was a 7 line Perl script that generated 20k lines of c that generated inputs for whatever"

This is the THX sound synthesised using Overtone, a music synthesis program.

  (definst thx [gate 1]
    (let [target-pitches (map midi->hz [77 74 72 70 65 62 60 58 53 50 46 34 26 22 14 10])
          r-freq         (env-gen:kr (envelope [1 1 0.007 10] [8 4 2] [0 -4 1] 2) gate)
          amp-env        (env-gen:kr (envelope [0 0.07 0.21 0] [8 4 2] [0 1 1] 2) gate :action FREE)
          mk-noise       (fn [ug-osc]
                           (mix (map #(pan2 (ug-osc (+ (* r-freq (+ 230 (* 100 (lf-noise2:kr 1.3))))
                                                       (env-gen:kr (envelope [0 0 %] [8 6] [0 -3]))))
                                            (lf-noise2:kr 5))
                                     target-pitches)))
          saws           (mk-noise saw)
          sins           (mk-noise sin-osc)
          snd            (+ (* saws amp-env) (* sins amp-env))]
      (* 0.5 (g-verb snd 9 0.7 0))))
https://github.com/overtone/overtone/blob/master/examples/th...

http://overtone.github.com/

You've just reduced 20000 lines of C to 13 lines of LISP :)


Impressed that this short bit of assembler code was responsible for the chime I hear when I start up my Macbook Pro, I set out to convert it to Javascript. A few hours into the project, when I started hearing the first (still quite off) results, I learned that the current chime is actually very, very different from the original Mac chime: http://www.youtube.com/watch?v=GSuVacw8I-o

What I don't quite get is that the asm code seems to start with a sharp square wave that smooths out as time passes, but in that video the first sound already sounds very smooth from the beginning.

Read http://www.mactech.com/articles/develop/issue_16/034-038_Qui..., especially the part that says:

   "Square wave sounds. Unknown to most, the square wave synthesizer never produced
    true square waves. It was more like a modified sine wave. This has been corrected.
    As a result you'll notice that the Simple Beep sounds different. It can now be
    heard as it was originally designed to sound."
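That "modified sine wave" remark makes sense if you think of a square wave as a sum of odd harmonics. A minimal additive sketch (my own illustration, nothing to do with the actual Mac ROM code):

```python
import math

def square_partial_sum(freq_hz, t, n_harmonics):
    """Additive 'square' wave: odd harmonics k at amplitude 1/k.
    With few harmonics it sounds like a rounded, modified sine;
    as n_harmonics grows it approaches a true square wave."""
    return sum(math.sin(2 * math.pi * freq_hz * k * t) / k
               for k in range(1, 2 * n_harmonics, 2))
```

With `n_harmonics=1` you get a pure sine; going by the quote, the old synthesizer sat somewhere near that end of the spectrum.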

If you load the sound into Audacity and view the spectrum (i.e. the FFT), you can get a moderately clear view of what's happening. For the first 14 seconds, pitches wander around randomly. Then they start sliding up in several jumps (interestingly, not a smooth progression), and end up at the final pitches (both lower and higher than the beginning) at around 19 seconds. You could probably extract the pitch tracks from the FFT if you want to re-create this sound. (I'd like to write up a detailed blog post on this, but don't have time, so I'll just leave a comment.)

A few other notes from the spectrum: the final chord is D major, with a little F#. Between 14 and 19 seconds, the pitches take 5 jumps to their destination, at about .9 seconds per jump. (This is the part that sounds like rising tones before the final chord.) The jumps aren't synchronized, or else there would be a 67bpm rhythm to this part. The jumps aren't smooth, but an exponential decay, where they move rapidly at first and then slow down. In the first part, the frequencies wander between about 130 and 260 Hz, with a noticeable peak at exactly 200 Hz. (And of course the harmonics.)

This will make more sense if you see the spectrum; I've put a picture of the spectrum at: https://picasaweb.google.com/lh/photo/qbuyIVSC2Bsvgxg5HlSD4-... Time is the X axis, and frequency is the Y axis. The lines that move in parallel are harmonic frequencies.
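Those exponential-decay jumps are easy to model. Here's a sketch of one such glide (the time constant and frequencies are my guesses, not measured from the recording):

```python
import math

def glide(f_start, f_target, t, tau=0.2):
    """Exponential-decay pitch glide: fast at first, then slowing,
    approaching f_target asymptotically. tau is the time constant (s)."""
    return f_target + (f_start - f_target) * math.exp(-t / tau)

# One ~0.9 s jump from 200 Hz toward 300 Hz, sampled every 0.3 s:
steps = [glide(200.0, 300.0, k * 0.3) for k in range(4)]
```

Chaining several of these per voice, with staggered start times, would reproduce the unsynchronized rising-tones effect.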

Wow, that's very neat, thanks.

I think it can be generated with a few lines of code using "algorithmic symphonies", synthesizing sound with bitwise operators like this:

  ((1000/((t/12)%(t>>10))&1)*35 + (1000/((t/23)%(t>>10))&1)*35)

Link to generate (using javascript) and hear the sound above: http://bit.do/thx-first-try

More sounds created using simple equations like these: http://js.postbit.com/digital-computer-music-with-bitwise-op...
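The formula can also be evaluated outside the browser. Here's a sketch in Python mimicking C/JS integer semantics; note the original expression divides by zero while t>>10 is still 0 (JS turns that into NaN, which &1 maps to 0), so the zero-divisor cases are guarded explicitly:

```python
def sample(t):
    """One sample of the bytebeat formula, using floor division to mimic
    C-style integer ops; zero-divisor cases are mapped to 0."""
    def term(div):
        m = t >> 10
        x = (t // div) % m if m else 0
        return (1000 // x & 1) * 35 if x else 0
    return (term(12) + term(23)) & 0xFF

wave = bytes(sample(t) for t in range(8000))  # one second at 8 kHz
```

Each term is either 0 or 35, so every sample is 0, 35, or 70; write `wave` to a file as raw unsigned 8-bit mono at 8 kHz to hear it.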

If we're going about reinventing the wheel, see this excellent post from your forebears: http://www.earslap.com/instruction/recreating-the-thx-deep-n...

And discussion from a few years ago: http://news.ycombinator.com/item?id=725564

Don't miss their Deep Note SuperCollider one-liner at the end: "play{Mix({|k|k=k+1/2;2/k*Mix({|i|i=i+1;Blip.ar(i*XLine.kr(rand(2e2,4e2),87+LFNoise2.k"

That's a very creditable bytebeat version of the THX Deep Note! I recommend:

• Putting a couple of spaces before your formula so the markup processor doesn't chew it up.

• Using Darius Bacon's bytebeat player instead of the old Wurstcaptures one: http://wry.me/bytebeat/?code0=((1000%2F((t%2F12)%25(t%3E%3E1...

If you want to approximate the original more closely, I think you might need sine-wave oscillators rather than square waves. You can get a pretty decent bytebeat sine wave from http://wry.me/bytebeat/?code0=((t%2615)*(-t%2615)%5E!(t%2616..., but most of the time I use a triangle wave instead when the standard sawtooth is too harsh.

I'd like to point out that your bytebeat isn't actually using bitwise operators in any non-arithmetic way; it's equivalent to http://wry.me/bytebeat/?code0=((floor(1000%2F((t%2F12)%25flo..., and in C you can leave out the occurrences of "floor".
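On the triangle-wave suggestion: a bytebeat-style triangle is just a folded ramp. A sketch of the generic trick (integer-only, 8-bit output; this is not Darius Bacon's exact code):

```python
def triangle(t):
    """8-bit triangle wave with a 512-sample period: ramps 0..255..0."""
    x = t & 511
    return x if x < 256 else 511 - x
```

Substituting something like this for the sawtooth `t & 255` softens the harsh upper harmonics while keeping everything in integer arithmetic.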

I think you went the wrong direction in pitch, but that method does have potential for sure.

Since only half the links work, here is Deep Note: http://www.youtube.com/watch?v=uYMpMcmpfkI and here is work on recreating it with SuperCollider: http://www.earslap.com/instruction/recreating-the-thx-deep-n...

While http://www.thx.com/trailers/ doesn't work, there are THX trailers at http://www.thx.com/consumer/movies/

I'm surprised his program didn't print the PRNG seed(s), exactly to allow re-creating pieces that stood out.

His next program probably did. Live and learn.

Maybe you could write a program that tries successive seeds and compares them to the sample, and then prints which seeds were the closest to the sound.
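Assuming the generator was a deterministic PRNG seeded with a small integer, that brute-force search is straightforward. A toy sketch, with Python's PRNG standing in for whatever Moorer's program actually used, and an array of floats standing in for the sample comparison:

```python
import random

def render(seed, n=8):
    """Stand-in for the synthesis step: derive n parameters from a seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def find_seed(target, max_seed=10_000):
    """Try successive seeds; return the one whose output is closest
    (least-squares) to the target sample."""
    best_seed, best_err = None, float("inf")
    for seed in range(max_seed):
        err = sum((a - b) ** 2 for a, b in zip(render(seed), target))
        if err < best_err:
            best_seed, best_err = seed, err
    return best_seed
```

In practice you'd compare spectral features of the rendered audio rather than raw parameters, but the search loop is the same.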

Funny that this blog post is from 2005. I used to have the THX sound play when logging into Windows (sad times).

The Music Thing blog has some other great posts about music and instrument hacks.

The reference to the C code is here:

"The score consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form "set frequency of oscillator X to Y Hertz".
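So the C program is best thought of as a score generator. A toy Python sketch of the same shape, emitting "set frequency" statements for voices gliding toward fixed targets (the voice count, target pitches, and linear glide are all invented here; Moorer's code obviously did far more):

```python
import random

random.seed(42)
targets = [36.7, 55.0, 73.4, 110.0, 146.8, 220.0]  # hypothetical target Hz
starts = [random.uniform(200.0, 400.0) for _ in targets]

statements = []
for step in range(5):                  # 5 parameter updates per voice
    frac = step / 4                    # 0.0 .. 1.0
    for osc, (f0, f1) in enumerate(zip(starts, targets)):
        freq = f0 + (f1 - f0) * frac   # linear glide, for the sketch
        statements.append(
            f"set frequency of oscillator {osc} to {freq:.1f} Hertz")
```

With 30 voices and fine-grained time steps, a loop like this easily expands into Moorer's quarter-million statements.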

20kloc of C sounds like a lot, and 4 days like too little. As a comparison, at work we developed a drive-by-wire system for handicapped car drivers. That software is around 22kloc of mostly C plus some configuration/make files.

LOC is really pretty meaningless, it all depends on how the program was written. If it was a "top to bottom" program with no loops or subroutines (this is the style often seen in COBOL code for example) it could have been a lot of very repetitive code.

You might read the article. The code just sets parameters.

What I wanted to say is that 20kloc is a lot even for C (especially for "only" generating the parameters). Diggum said that it was more like 2kloc, which sounds reasonable.

This is awesome. Thanks for posting it. However, may I suggest changing the title to "The THX Deep Note"? I thought it was going to be about the sounds my father-in-law and George made for their movie THX-1138, the namesake of Dolby THX, many of which sounds were reused in better-known movies later, such as Star Wars.

I uploaded the mp3 of THX (Deep Note) to SoundCloud; it's from the original 25-second audio file registered by LucasArts: http://soundcloud.com/rodrigo-de-almeida-siqueira/thx-deep-n...

The original file was found in the US Patent and Trademark Office: http://tdr.uspto.gov/search.action?sn=74309951

Cf. the Windows sounds by Fripp and Eno:


I wonder what influence, if any, this sound had on the noise at the start of "Another part of me" by Michael Jackson:


The most interesting part is that a sound generated by a pseudo-random process is copyrightable.

The sound itself isn't a 'sound' but a performance. Hence copyrightable. The process to create that performance could be patented.

The editing, the selection of that particular iteration as being the "right" one is part of the copyrighted material.

And it isn't merely generated by a pseudo-random process, it's generated by code written and adjusted to produce a sound within certain parameters, with variation within those parameters provided by pseudo-random perturbations.

That said, I'm sure you could also record the sound of birds chirping in the dawn chorus, with even less input from the recording individual other than selecting a particular portion of the recorded audio, and copyright that.

Any traditional performance is a pseudo-random process - it doesn't come out exactly the same way every time.

Well, the process is not pseudo-random but pseudo-randomised.

But I agree with the sentiment, that's really interesting.

This is the first time I've ever heard it. "is the most widely-recognized piece of computer-generated music in the world" Care to replace "world" with "the USA"? Because you can't, in fact, say anything meaningful about the whole world.

On that line of thinking, can you say anything meaningful about anywhere other than where you are?
