An audio engineer explains NPR’s signature sound (current.org)
392 points by adamnemecek 540 days ago | 237 comments



> Another thing for engineers or anybody at a station to do is to go into the studio, turn the microphone on, crank it to 11. Don’t talk — so you don’t blow your ears out — and listen to the sounds of all the fans that you have in the room. This is one challenge that we’ve had for decades here. You’re never going to get it to zero. You’re never going to get it completely silent. Nor do you really want to, because in order to get it silent you’ve got to move a lot of equipment out, and there’s a lot of cost with that.

Makes me wonder why they don't record the sound of the fans and then invert that audio track and mix it with the mic channel before broadcasting the master channel so that the relatively periodic noise of the fan cancels itself out.


Because it's not really periodic enough. I don't think any natural sound source short of a musical instrument (and even for them, it's doubtful) is so perfectly periodic that an out of phase version would stay in opposition (i.e. in the inverted/opposite phase of the cycle) for long enough for this to be of any use. But there are (FFT-based?) noise removal tools that can work for this sort of thing.

edit: also, the weird phasing artefacts this would introduce would be much more prominent and noticeable, even if it succeeded in reducing the overall volume. So much so that doing basically this ('flanging') is a standard musical effect for making parts sound psychedelic.
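
To make the drift point concrete, here is a toy numpy sketch (my own made-up numbers, not anything from the article): even a fraction-of-a-percent frequency drift between the fan and a pre-recorded inverted copy turns cancellation into reinforcement within a few seconds.

    import numpy as np

    fs = 48_000                          # sample rate in Hz
    t = np.arange(5 * fs) / fs           # five seconds

    # A 120 Hz "fan" tone and a pre-recorded inverted copy that has drifted
    # by 0.1 Hz (less than 0.1%; real fans wander far more than this).
    fan = np.sin(2 * np.pi * 120.0 * t)
    canceller = -np.sin(2 * np.pi * 120.1 * t)
    mix = fan + canceller

    # RMS in 100 ms windows: starts near silence, then beats back up until
    # the "cancelled" mix is louder than the fan alone (whose RMS is ~0.71).
    win = fs // 10
    for start in range(0, len(mix), fs):
        chunk = mix[start:start + win]
        print(f"t={start / fs:.0f}s  rms={np.sqrt(np.mean(chunk ** 2)):.2f}")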


Follow-up question, what if you put a separate microphone further away from the speaker to get a synchronized sample of the background noise to subtract away?

I suspect the logical end to that sort of thing is to use multiple microphones to add "depth" to your sound recording, then filter out sounds coming from sufficiently far away.


That's how noise cancelling headphones work but it introduces audio artifacts that are often just as bad as the thing you're trying to remove.

The thing most people don't consider is that white noise (hiss, a fan) is readily tuned out by the brain - the brain is brilliant at reducing background noise - but artifacts introduced by artificial noise reduction are correlated with the signal, and the brain interprets them as distortion of the signal, not background noise.

That's why audiophile audio engineers fix the noise instead of using FFT noise reduction - at least whenever possible.


Having made a few records myself... when mixing with noisy source tracks, I generally choose to use console automation to mute the spaces between the notes, rather than trying to remove noise from actual signal, or even gate out the noise in the quiet spots. But it's more important to simply prevent noise in the first place.

On the first record I ever produced, I was credited with playing "furnace". We were recording in subzero January weather in Minnesota, and the furnace in my basement was very noisy. I'd go and turn it off, and we'd track as quickly as possible before the house got unacceptably cold and the furnace needed to be turned on again.

As an interesting aside, something a lot of newbie/hopeful audio engineers obsess about is "mic bleed", where the mic for one instrument picks up sounds from another. I mostly ignore this. Bleed is swamped by signal, and disappears into the mix. Even if a punch or edit fixes an error in the "bleed" instrument, it usually drowns out any error that remains in the bleed itself. Then again, audio engineering is full of a lot of nerding about second and third and even fourth-order effects, while ignoring first-order effects like "Is the song actually good?", "Is the performance good?"


> I generally choose to use console automation to mute the spaces between the notes.

I recently had to make a promo video for my startup on short notice, with a less-than-ideal setup, and I did the exact opposite. I filled all of the 'silent' parts of the video with 'ambiance' because the sudden drop to absolute silence sounded very jarring.

I have no experience with audio engineering. I just thought it was interesting that we took opposite approaches to similar problems.

https://en.wikipedia.org/wiki/Ambience_(sound_recording)


Horses for courses. This is why a little background music can be helpful - it fills those silences. In the case of mixing a record, there are other instruments in the silences, so noise is just competing with music.


> something a lot of newbie/hopeful audio engineers obsess about is "mic bleed", where the mic for one instrument picks up sounds from another. I mostly ignore this.

A better approach to "removing" bleed is instead to make sure the bleed sounds good.

Euphonic bleed is solid gold.


It's true that the brain will filter out steady-state noise, but this effect is much weaker once a signal has been recorded.

Basically, on playback, the steady-state noise sounds much louder than it did when you were actually present in the room.

The same holds for the reverberation in a room: it will sound much louder on a recording than on location.

For example, if you record someone speaking in a reverberant space like a church, it will sound like they are in a very reverberant space. Which they are, but when you are listening live, your brain filters out the "non-direct" sounds so you notice just the voice.

I'm pretty sure these noise filtering processes are related to the high priority the brain puts on understanding speech.


Non-human animals have all the same "noise canceling hardware" as humans (It's mostly in the hind-brain), so it can't just be for speech. I know that that echo/noise canceling hardware plays a big role in sound localization.

This is probably why the noise and echoes sound louder on a recording. They don't change the way that they should when I move my head, so my brain can't figure out how to cancel them.


All true.

However FM radio always has white noise, so adding a little bit to it is probably better than adding any correlated noise from signal processing.

FFT noise reduction definitely has its place and I do use it from time to time. I just highly doubt "live on-air signal processing" is a good application given that the noiseprint is constantly changing.


These broad generalizations about psychoacoustics are simply not accurate. It sounds like what every novice engineer experiences before honing his or her craft. If you're surprised by what you hear after recording something, either you weren't listening properly while recording it, you haven't learned how to record it properly yet, or your recording/listening environment itself is flawed.

Of course we notice different things about an object after taking a picture of it. But you've got to be very careful about cause/effect/bias.


I've been a professional recording engineer for 20 years. The phenomenon I'm discussing is an objectively true, subjective perceptual effect; it doesn't change based on how long you've been recording. Over time you learn to expect it, but it doesn't go away. A lot of audio engineering involves working around the heavy real-time signal processing our brain is always doing.

That's interesting about animals. It seems likely that their auditory systems would be tuned for the type of stimuli that are vital for their survival. There would be a lot of overlap, but since it's hard to tell their subjective experience, it would take some clever testing to see how their perception of recorded sound would differ.


Like the way the Grateful Dead used a pair of microphones to cancel out feedback from the P.A.

It worked for them and would work, so long as the person you are recording keeps their head very still. Any movement would change the parts of the sound emerging from their throat that get cancelled by the cancelling mic.


While it is indeed a cool idea, it really does create more problems than it solves-- there are reasons why it never caught on in live sound reinforcement.


So in all audio, there is "error". Error which is correlated with the signal is called distortion. Error which is not correlated with the signal is called noise.

Amplifiers use negative feedback to cancel distortion out ( because it's correlated ). Some folks, like at Pass Labs, tend to avoid neg. feedback for philosophical reasons, and people like their amps. They use part selection and careful design instead.

Adding two uncorrelated signals together, you end up with a louder, uncorrelated signal.

There is "single-ended noise reduction" ( the version I use is from CoolEdit 96 and CoolEdit 2000 ) but it leaves artifacts. It's still wonderful for things like single-coil hum, things with a limited spectrum.


Fan noise isn't even remotely periodic, because most of it is due to moving air rather than moving blades.

However, you can take advantage of that because the sounds you want to keep generally are periodic at short timescales. You can use DSP filters to remove components of the sound that don't show significant autocorrelation. The same idea lets you remove white noise and random static from radio signals.
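
A crude sketch of that autocorrelation idea (a toy gate of my own, assuming numpy; not any particular product's algorithm): frames whose short-lag autocorrelation peak is weak look like hiss rather than pitch, so they get turned down.

    import numpy as np

    def periodicity_gate(x, fs, frame=1024):
        """Attenuate frames that show no strong short-lag autocorrelation peak."""
        min_lag, max_lag = fs // 1000, fs // 80    # roughly a 1 kHz .. 80 Hz pitch range
        out = np.zeros_like(x, dtype=float)
        for start in range(0, len(x) - frame, frame):
            f = x[start:start + frame].astype(float)
            f -= f.mean()
            energy = np.dot(f, f) + 1e-12
            # normalized autocorrelation: near 1 for periodic frames, near 0 for noise
            peak = max(np.dot(f[:-lag], f[lag:]) / energy for lag in range(min_lag, max_lag))
            out[start:start + frame] = f * np.clip(peak, 0.0, 1.0)
        return out   # hard per-frame gain steps, so expect audible artifacts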


That would be like trying to silence a fan by buying the exact same model and putting it on the opposite side of the room. It will just sound like twice as many fans.

Getting two sounds to perfectly interfere is basically impossible. Real world sounds are way too complex for that.


This particular technique has its problems, as has been mentioned by other people. I would think inverting and cancelling is much more useful in situations where you can isolate the background noise in real time. For example, I do a little hobby music production, and I've heard of this being used to isolate vocals:

Because the vocals are sometimes the only thing that's perfectly in the middle of the mix (balanced left/right), you can invert the left channel of a mix and then recombine it with the right to cancel out only the balanced tracks (so you're left with essentially an instrumental). Subtracting the instrumental track you've just created from the full song (again, by inverting and recombining), you should get the vocal track by itself. Fun stuff, right?
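
The centre-cancellation half of that is easy to try; a quick sketch (hypothetical file name, assuming numpy/scipy and a 16-bit stereo WAV):

    import numpy as np
    from scipy.io import wavfile

    rate, stereo = wavfile.read("mix.wav")            # shape (n_samples, 2)
    stereo = stereo.astype(np.float64) / 32768.0      # assuming 16-bit PCM

    left, right = stereo[:, 0], stereo[:, 1]

    # Invert one channel and add the other: anything panned dead centre
    # (often the lead vocal) cancels, leaving a rough "instrumental".
    karaoke = right - left

    # Note: getting the vocal back out needs a separate, sample-aligned
    # instrumental to invert against the full mix; deriving it from the same
    # two channels doesn't quite work out algebraically.
    wavfile.write("karaoke.wav", rate, (np.clip(karaoke, -1, 1) * 32767).astype(np.int16))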

I'm not sure how well this works, as I've never done it myself, but it would seem like having the exact signal is important. As people have said, the fan noise isn't predictable enough to cancel using that technique. On the other hand...

I recently read about a machine learning application using K-means clustering in which you feed the algorithm sound from two microphones set up in the same room and it's able to separate the audio by who is speaking. I'm not sure how well such an algorithm would work when one sound is significantly louder than the other (such as quiet fans and people talking), but it certainly points to that being a possibility.


As a part time MC, I can attest that inverting and recombining channels of a song can occasionally lead to a decent instrumental. That being said, with little talent or background, and using freeware, it will not be perfect. It will be much better than using the original track, but it will certainly be noticeable that you are not using the instrumental. Great for practice though.


NMF and ICA can also be used for blind source separation, but there are artifacts. Good for some applications like speech recognition, but for music you really want no audible artifacts.
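
For anyone curious, a toy ICA sketch along those lines (scikit-learn's FastICA on synthetic signals; a real separator would be considerably more involved):

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n = 48_000
    t = np.arange(n) / 48_000
    voice = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 3 * t))   # periodic, speech-ish
    fan = rng.laplace(0, 0.5, n)                                            # broadband noise

    # Two mics in the same room hear different mixtures of the same sources.
    mic1 = 1.0 * voice + 0.4 * fan
    mic2 = 0.3 * voice + 0.9 * fan

    ica = FastICA(n_components=2, random_state=0)
    separated = ica.fit_transform(np.column_stack([mic1, mic2]))
    # Columns of `separated` approximate the two sources, up to scale and sign,
    # and with exactly the kind of audible artifacts mentioned above.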


Tools like iZotope RX can do a pretty great job at removing that kind of noise.


True, but a lot of the tools within RX are very manual and labor-intensive, very CPU-intensive, and not particularly realtime.

At the end of the day, the cleaner the source material, the easier it is to process in any fashion you want later on in the airchain. A lot of stations will have noise gates on their microphones, or even wideband gating before material hits the transmitter in some cases. With the right settings, it should be more than plenty.


I think audio fingerprinting for noise reduction can be automated, but it will not be real-time, of course.


All the noise reduction algorithms I've heard so far introduce fairly audible phasing artifacts. I think better noise reduction for a known/trainable source like fan noise could be done with machine learning - it's something a few folks are looking at


Most modern recording studios store all equipment in sound-proof closets.

The challenge is keeping those closets cool with ventilation that does not leak sound.


Well away from the room that the recording is done in, so the sound doesn't leak into it. Even desks can have fans in them, though, and those need to be in the same room as the operator.


Ask any DAW studio owner for a tour. The first thing they will show you is the isolation of all fans/equipment from recording booths and mixing areas.

It's a point of pride in studio design today. The NPR article mentions similar isolation from the newsroom.

Fans have a tonal center around 110-220 Hz, then a lot of harmonics that essentially generate pink noise... nothing that can be corrected in post-production or by flipping the phase.


Or why not use a shotgun microphone? That's what we (well, audio guys) generally tend to use on film and TV sets.


Does not eliminate room noise. If you have worked sound on a film or TV set I'm sure you are familiar with capturing room noise on a quiet set to mix in during post.


Yeah, you're right. Sound guys always pick up room tone before or after the shoot. I was under the impression the shotgun was used to eliminate/avoid sound sources that are not desirable. From my limited sound experience, I've seen sound guys use it like that on my sets. For example, there's some noise from a certain direction, but I am told not to worry about it since it won't be picked up by the shotgun. Lo and behold, it wasn't there in the source material. It's probably due to many factors I'm not aware of though (sound reflecting off stuff etc.). They do turn off all the A/C's, fridges, and everything that's humming and buzzing though.


The reason shotgun microphones are used is because the polar pattern[1] is highly directional. Basically the amount of sound that a shotgun-type microphone picks up is very strong in one direction and less in others. This reduces peripheral sound being recorded at the same level as your target sound.

[1] http://www.proaudioland.com/wp/wp-content/uploads/2014/08/po...
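
For a feel for the numbers, here is the textbook first-order pattern formula (generic polar-pattern math, not any particular mic's spec; real shotgun mics add an interference tube on top of this, which narrows the pattern further at high frequencies):

    import numpy as np

    def pattern_gain_db(theta_deg, a):
        """First-order pattern g(theta) = a + (1 - a) * cos(theta).
        a = 1.0 omni, 0.5 cardioid, ~0.37 supercardioid, 0.0 figure-8."""
        g = np.abs(a + (1 - a) * np.cos(np.radians(theta_deg)))
        return 20 * np.log10(np.maximum(g, 1e-6))

    for angle in (0, 45, 90, 135, 180):
        print(f"{angle:3d} deg   cardioid {pattern_gain_db(angle, 0.5):6.1f} dB"
              f"   supercardioid {pattern_gain_db(angle, 0.37):6.1f} dB")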


Shotgun mics have a pinched sound quality - the whole point of using U87s is the way they sound.

They can switch to SM7s or RE20s and achieve much better S/N with respect to air handling systems, but they want the "U87 sound" (they can keep it, too, IMO - it's painfully neutral) and that just entails a much higher "room noise to signal" ratio.


Studios normally use microphones with larger diaphragms, because they sound fuller. But they're too fragile for field recording. Hence small-diaphragm condenser microphones, like your shotgun.


It would probably be better to identify some of the highest-frequency components and then filter those out.


This is a really good interview, in that the interviewer has knowledge comparable to the interviewee. Obviously that can't be achieved when a single interviewer interviews people with widely ranging skills (e.g. many radio shows themselves), but it is nice to have with something niche like this.

On a different note, why is it that (as far as non-expert me knows) nice mics often have EQ settings? Is this not a blatant layer violation?


> On a different note, why is it that (as far as non-expert me knows) nice mics often have EQ settings? Is this not a blatant layer violation?

The low-frequency/subsonic content that this switch will remove often consists of wind noise, pops from talking too close, vibrations from the floor/mic stand, etc. Even if these low frequency noises may be more or less inaudible, they'll contribute to a large signal swing that will decrease the headroom in the following amplifiers. In other words, you won't be able to turn up the gain in the microphone amplifier/mixer as much as you'd want, because the low-frequency but high-amplitude content will make the amplifier clip and distort the sound. Filtering the signal after the mic preamp won't solve the problem, because the preamp (which can provide in the order of 60 dB gain) will be the component that clips. By including a low-cut filter in the head amplifier circuit inside the microphone (the amplifier serves only as a buffer, and usually with unity gain), the disturbing low-frequency noises can be removed before entering the high-gain microphone preamp. You can find the low-cut switch next to the capsule in the U87 schematic [0]. It seems to me that they increase the capsule bias voltage, thus changing the low frequency response by electrostatically tensioning the membrane.

[0] http://recordinghacks.com/images/mic_extras/neumann/U87-sche...

edit: tweaked some things. meta: i'm not very good at commit messages.
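
A small scipy sketch of that headroom argument (made-up signal levels, purely illustrative): the same 60 dB of gain sails way past full scale on subsonic rumble unless the low-cut happens before the gain stage.

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 48_000
    t = np.arange(2 * fs) / fs

    voice = 0.0005 * np.sin(2 * np.pi * 1000 * t)    # quiet capsule-level signal
    rumble = 0.02 * np.sin(2 * np.pi * 15 * t)       # big subsonic thump from the stand
    capsule = voice + rumble

    gain = 10 ** (60 / 20)                                       # 60 dB of preamp gain
    sos = butter(4, 80, btype="highpass", fs=fs, output="sos")   # 80 Hz low-cut

    settle = fs // 2        # skip the filter's start-up transient, measure steady state
    peak_plain = np.max(np.abs(capsule[settle:])) * gain
    peak_cut = np.max(np.abs(sosfilt(sos, capsule)[settle:])) * gain

    print(f"gain first, no low-cut : peak {peak_plain:6.1f} x full scale (clips hard)")
    print(f"low-cut before the gain: peak {peak_cut:6.2f} x full scale")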


Ah, it needs to happen before the mic preamp, ok. Granted there's a lot of inertia in connectors, but has anybody tried to run extra wires in the mic cable to control these things from the booth?


As odd as it sounds, microphone cables weren't standardized until 1992. Before then companies did all sorts of stuff.

Here's a Neumann mic from 1952 that could be set remotely. http://recordinghacks.com/microphones/Neumann/M-49

Today it is increasingly common to put the analog stuff as close to the source as possible and remote control it.


What is the standard? I mainly see XLR but I've stared at far more live equipment than recording equipment.


It's XLR. But even then Americans and Europeans couldn't agree on where to stick the three wires.

This is the 1992 document that finally standardized "pin 2 hot" among other minutiae: http://www.aes.org/publications/standards/search.cfm?docID=1...


Interesting, thanks!


For a vocal/voiceover mic loud sounds are not a concern. (The U87 has a pad switch for those cases separate from the proximity/EQ switch.)

The things you mention (vibrations, LF rumble) are typically handled by the highpass on the mic preamp and are in the 75 or 80Hz range. In musical terms, that's 4+ octaves below the 1k proximity rolloff on the mic itself.


> Is this not a blatant layer violation?

This is one of the areas where a recording engineer's problems differ from a software engineer's intuition.

In software, we have a lot of problems that can ultimately be traced back to not having good "visibility" into the software or its state. This has various far-reaching consequences, from specialized tools to examine software (such as debuggers) to cultural values ("one job per component") that help us reason about complex systems we cannot see.

But in the case of audio, we have multiple sensory organs to directly perceive sound. A bad audio engineer would be able to push a few buttons to move his headphones around in the signal chain and know in seconds where the EQ is occurring. A good one would simply know the U87 rolloff "by sound" and would not have to conduct any investigation at all.

There are some cases where more detailed analysis is done with fancy tools–mastering song recordings, for example. But in live sound, an audio engineer's primary concern is UX, because good UX is the difference between solving a problem quickly and annoying pops or worse. One button that engages a good EQ is much, much better than something like this: http://adn.harmanpro.com/product_attachments/product_attachm.... And epoxying the single button to the correct position is even better still!


Aren't controls on mics the worst UX, because the wrong people are always messing with them?

You raise a good point about (even in the bad case) moving headphones around the signal chain, and damn, as an (especially functional) programmer I'd like to be able to tap arbitrary locations in data flow like that. Enough of all these control-flow-based debugging tools!

> One button that engages a good EQ is much, much better than something like this

Well there's always overkill, but would one of those be worse than one of those and then, next in the signal chain, a 1-button EQ whose functionality is completely subsumed?


Yes. Especially the instrument people who always turn up their instruments' volume mid play because they want to push theirs through. It really messes up the sound engineer's overall tuning.


> You raise a good point about (even in the bad case) moving headphones around the signal chain, and damn, as an (especially functional) programmer I'd like to be able to tap arbitrary locations in data flow like that. Enough of all these control-flow-based debugging tools!

You should look into dtrace and its new Linux counterpart whose name currently eludes me. It's almost like being able to do just that.


I'm very aware of dtrace. Probes are kinda like faking dataflow on top of control flow, I guess.

I was thinking more of a development aid that would render a dataflow graph like Max/MSP. In the general case the graph itself changes, but that's OK.


> Is this not a blatant layer violation?

No, you want to apply cuts as early as possible in the signal chain if you're trying to prevent overloads later in the chain. The "EQs" that you see on mics are almost always bass cuts designed to reduce proximity effects. This gives you much better dynamic range at the preamp because bass frequencies are energy-intensive, so cutting them early allows the amplifier to not have to do all the work amplifying (and distorting) them, just to cut them later.


Microphones need tremendous amplification to even get them to "line level" signals to interface with other audio equipment - 60 dB is not uncommon. Bass signals require a great deal more power (watts, or in this case milliwatts) to amplify accurately than midrange/treble signals, so they're a lot more likely to cause distortion. Rolling bass off as early as possible - preferably at the capsule, before the microphone's internal circuitry - reduces distortion.

Beyond that, with a few exceptions, most of the signal rolled off by the switch is stuff you don't want to hear anyways - "plosive" pops, rumble from low-frequency buildup in rooms, vibration, etc. Much of it can't even be reproduced by most speakers (speakers don't do low bass because, as said before, it's a hard problem and not actually audible in most cases).

As an aside, when mixing records, I'll often roll off guitars at 200 or 300 Hz, losing a couple of octaves of low-frequency information. In theory, this is awful. And if you listen to the solo guitar tracks, it's awful. But in an actual mix, it's great! It gets the low end of the guitars out of the way of real low-frequency instruments like bass and kick drums, making the bass sound tighter and cleaner. The guitars sound clearer too, because they aren't getting muddied by competing bass instruments.


This is one of the reasons I find recording guitar so unsatisfying; I want the recordings to sound like what my guitar amps pushing air in the room feels like. If I succeed, the mix turns to mush.


Try using far less gain on your guitars when recording. Gain that sounds great in-room is too much for a record, generally.

The great challenge of recording is to capture the authority of a roaring guitar amp, the brashness of drums, the purity of a voice, and then squeeze it down into something that fits in an iPhone earbud and still has realistic proportions. That's why the idea of "accuracy" in recording is so ludicrous. A record is to a live performance as a HO train set is to a real train. It's a miniature, a model. To make it look "right", you have to exaggerate some details and lose others.


Layer violation?

The goal is to get good sound out of your recording equipment, not to get ISO approval. :-)


My aesthetic pleasure in engineering is more the means than the ends; nobody pays me to go on HN, so the latter is what I focus on :).

Every recording studio or booth for a live venue I've seen always seems to have around 1/4 of the equipment always in use, 1/4 rarely but sometimes used, and 1/2 never used :P. I really do respect the science and art that goes into audio engineering, but damn that's a lot of clutter.

Now the flip side is it's a lot easier to trim the fat administering a bunch of computers (and the clutter isn't a huge sunk cost like some once highly valuable piece of equipment one might be loath to chuck), and yet this ideal is also far from realized most of the time. Boxes in a closet are just a lot more obvious than files in a directory.


Honestly, the less stuff I have to do in post (or in a mixer doing live sound) the easier the world is for me. The closer to the source that you can fix issues the better.


Not (a layer violation) so much. Audio gear has evolved serendipitously anyway. Times were, when you wanted an EQ that wasn't just a "tone control", you got out the soldering iron. Pultec wasn't even a thing until 1953.


Mm, I think serendipitous evolution may explain but does not justify layer violations. In this case, it's not just engineering aesthetics but a real practical pain: buttons in lay people's hands == more problems. [People getting angry at the tech and then the mic being off at live events is practically a trope!]

Arguably compression wars and other such things are also a layer violation: sound engineers can guess at but not be sure of listeners' equipment. With computers and audio (or anywhere engineering, business, and less educated users meet), optimizing against presumptions is a common source of layer violations.


I have never believed layering fixes anything; I'll take "layer 3 switching for $400, Alex".

As has been noted, the particular quirks of the U87 (or the like) mean a filter to lessen the proximity effect is useful.


Actually it's a terrible interview because the interviewer won't shut up and is far less knowledgable than the person he's interviewing.

The "EQ settings" on the mic are actually to compensate for proximity effect. On the U87 it starts cutting at 1k -- way above what the interview states.

http://www.coutant.org/u87ai/u87.pdf

Another mic commonly seen in broadcast (and top podcasts) is the Shure SM7. Has similar switches, and even comes with a plate you can screw over the top of them so people don't futz with the settings.


Mic choice is a lot more nuanced than is indicated in the review. Heil, EV, Neumann, Shure, and other manufacturers have a variety of mics for a variety of situations, and I bet there are some other reasons besides cost why the RE20 is such a popular studio mic across the US.

Also, rooms, talent, and equipment are often much more influential than mic selection in making a 'signature' sound. But mics are to audio engineers as languages are to programmers... Everyone has a favorite, and opinions border on the religious.

Source: worked in radio engineering for 4 years before starting programming, live audio engineering 4 years after that, and have worked with dozens of mics and varying degrees of on air talent in many environments.


Agreed! Unlike the U87, the RE20 is a dynamic mic so it doesn't need power. Quieter, less to go wrong, built in pop screen, smaller but still big enough to look "important", and you really can't break it. It's less boomy by default and so you get a more natural sound when you close mic things with it.

Likewise I'm sure you could swap a U87 on person X with a RE20 on person Y and it might even sound "more like NPR" depending on the natural sounds of their voices.


Agreed on all points but the "really can't break it" — it's not some crazy indestructible capsule like the SM58! :)


People use SM7s and RE20s as kick drum mics all the time, they're really, really durable compared to a U87.


>Actually it's a terrible interview because the interviewer won't shut up and is far less knowledgable than the person he's interviewing.

I agree. The number one rule of good journalism is not to put yourself in the story.

Step back, way, back, and let your subject tell their story.


I didn't even understand his follow-up to the engineer saying it rolls off at 250 Hz:

> Current: Yes, but if you’re at a station and kind of frustrated with the bassy, boomy sound you get on your studio mic, and you can’t get the engineering staff to do anything about it for you, one thing that you could do yourself is to look on the mic and see if there is a bass roll-off switch, and turn it on.

Is the interviewer just tossing out "life pro tips" in the middle of an interview? Why does it start "Yes, but" when it's not a counterpoint to _anything_?


Sobering what a non-expert like me can miss when reading this.


The SM7s are by far my favourite mic, and what we had in my tiny little community radio studio!


... because most of our consumers were listening to Morning Edition and All Things Considered in the automobile to and from work.

Yet they still occasionally have a story with a siren in the background, so I start checking my mirrors and pulling over.


Pardon me, but...

Holy shit, NPR's fascination with sound effects. Yes, I want to listen to 15 fucking seconds of someone chewing loudly between paragraphs. Sirens, crowd noises, people talking about something undoubtedly unrelated in Urdu.

Dear god, NPR's signature sounds are why I don't listen to it.


The shows with the constant sound effects also usually have what seems like a relatively new format: they try to make it sound like the hosts (it's always two hosts) are a couple of friends, interested laymen, having a talk at their house about the topic. So they stop to interject ("right?!" "No!"), they talk over each other but only occasionally so as not to annoy the listener, they basically feel like your two friends who've read the same Wikipedia page and are discussing it at a party.

And it's during this "conversation" that they play the sound clips:

HOST 1: "So there's this interesting thing they've just discovered, Host 2"

HOST 2: "What's that?"

HOST 1: "Scientists have discovered that when you eat..."

chomping and chewing noises

HOST 1: "... you're actually oppressing black people."

prison cell door slams

HOST 2: "What, seriously?"


That one doesn't bother me as much as this one:

HOST: "...and that's when we found Joseph SomethingOrOther" VOICE: "Hi, I'm Joseph SomethingOrOther" HOST: "Joseph SomethingOrOther is an expert on blahblah" VOICE: "Blahblah is foobar"

You just told me this person's name 3 times when 1 would've sufficed.


On the other hand, this makes sure you associate the voice with the name, when otherwise it's easy to miss.


This, times a million!

Oh, and the weird music. I'm not sure I need 15 seconds of annoying jazz improvisation at full volume between segments either. But that is indeed part of the "signature sound" :)


There's a great scene in a Parks and Recreation episode where Leslie (Amy Poehler) is on the local public radio station and they are about to go to a commercial break. Before they do, however, they play part of a song from Nefertiti's Fjord (a "Lesbian Afro-Norwegian Funk Duo"). Leslie comments on how bad it is, to which the host agrees but defends the choice with "but, they are lesbians, so..."

There are several scenes throughout the show that poke fun at various aspects of NPR, and they are usually quite funny.

https://www.youtube.com/watch?v=tU4t7COg6W4


I lost it the first time I saw "jazz + jazz = jazz"


This is already illegal in the UK, but sadly I think it will take a high profile court case where someone sues the radio station or an advertisement company for causing an accident to enact this kind of change in the US.


Just don't include emergency broadcast system[1] tones in your message[2].

[1] https://en.wikipedia.org/wiki/Emergency_Alert_System

[2] https://www.fcc.gov/document/enforcement-advisory-misuse-eas...



One fitting picture.


And the first person to be charged is the driver. If you cause an "accident" because you were listening to the radio, you are driving recklessly.


Human factors are a thing. Ask anyone who works in the aviation industry. (Or, for that matter, Jeep.)


Yow! The microphone they are using is $3,600 apiece.

I guess that's not as bad as it sounds, given a twenty year life and daily use.

http://www.sweetwater.com/store/detail/U87SetZ


The Neumann U87 is a very popular microphone in the audio world in general (TV, radio, post-production and even music), especially for voices[1].

Going a little further, the mixing console you can see in the picture is a Lawo Sapphire[2], which NPR seems to have installed in 2012. According to this gearslutz post[3], a 12-fader version with a central section started at €20,000 back in 2010. You can see in the picture that NPR has a 24-fader version.

The audio monitor on top of the console is an RTW[4]. It's kind of hard to tell which model it really is, but the TM9, for example, starts at $4,200[5].

[1]https://www.gearslutz.com/board/low-end-theory/618100-mic-vo...

[2]https://www.lawo.com/products/radio-consoles/sapphire.html

[3]https://www.gearslutz.com/board/music-computers/864209-what-...

[4]https://www.rtw.com/en/products/audio-monitors.html

[5]http://www.rspeaudio.com/RTW-TouchMonitor-TM9-p/rtw-tm9.htm


So let's say they have one mic per five employees. Those employees cost (I'm guessing) at least $300,000 per year, and most of what they produce of value for NPR goes through that mic. Doesn't look very expensive now?

And I haven't factored in cost/rent for the studio, music licensing, and a whole lot of other OPEX. Since there are quite a few local radio stations around, I'd say the CAPEX is not a big deal.


Talent for NPR might pull $300k (I have doubts) but if you're including everybody else when you say "employees" you're shooting astronomically high.

Radio pays notoriously bad. Local TV is just as bad, and they can't even afford photographers any more so reporters do their own shooting; I was laid off as a photographer when that change was made at a former haunt and newscast quality plummeted. (It's popular and spun positively now: Google "backpack journalism.")

Particularly local, you have to really love the craft to be in it. National/international is better, but to an extent. On-air talent soaks up the salary budget at pretty much every shop, universally, and to use a local example like KTVU or KRON I wouldn't be surprised if on-air talent was making $500k or less. You can do better at Google. The producer running the newscast is probably struggling to make rent. The camera operator certainly is, if they haven't yet been replaced with automatic, motorized cues driven by an overworked TD, who is just as worried about their own rent.

Ad-based business models suck, particularly when owners hate spending money. All media owners do. We got punished for the millions that had to be invested for the HDTV transition with 15% pay cuts at one station in the middle of nowhere.


NPR's last CEO, Gary Knell, made about $750K in his last year in the job.

They have a number of highly paid mid-level people, such as a diversity officer (Keith Woods) who makes $220K.

More notable hosts make over the stated $300K, including Steve Inskeep ($400k) and Michele Norris ($350K –– wage gap!).

Highest honors, at least according to the 2013 990 linked here[1], go to Renée Montagne, who pulled in $412,581.

These are not fully loaded costs. You probably want to add 10% for D.C.-based employees for employee overhead incl. labor lawyers, retirement contributions, payroll tax, &c.

Not bad to be a corporation that doesn't pay taxes, takes federal taxpayer money, takes state taxpayer money, takes corporate gifts, solicits foundation grants, and asks for individual donations.

[1] http://990s.foundationcenter.org/990_pdf_archive/520/5209076...


Their non-profit status doesn't have much to do with their staff compensation. Broadcasting at this level takes a lot of talent. I realize this every time someone sits in for a good broadcaster. Sometimes the substitutes are nearly unbearable. Good broadcasters don't work for cheap.

They wouldn't be successful at soliciting donations if they had bad broadcasters. Most podcasts have a tiny budget, which is part of the reason most podcasts are terrible.


I largely agree with your comment that talent doesn't come cheap. Software developers, venture investors, bank CEOs, &c.

In the case of NPR staff, I question whether anyone else is bidding on their services.


Michele Norris previously worked for ABC, the Chicago Tribune, the Washington Post, and the LA Times.


I worked in radio in a fairly large market. On-air talent made good (not great, compared to tech salaries) money. The best talent did lots of events & wrote books to push income, always moving to a bigger market if possible. Upper management (as in, national corporate management) made bank. Everyone else made garbage.


As someone upthread inferred, I was suggesting 300k or more for five people in total.


On a different note, it is a dollar a day for 10 years; not so much, is it? For the added value it gives.


Am I misunderstanding, or do you seriously think NPR on-air talent is making in excess of $300,000/yr?


Employees cost a business much more than their salary. There are benefits, taxes, insurance, administrative overhead, etc. There's also the cost of replacing them.


Morning Edition and All Things Considered are hugely popular. Broadcast personalities on a national stage usually get compensated pretty well.


According to NPR's IRS form 990 the hosts of Morning Edition and a couple other big shows do indeed make over $350k.


I think the GP intended this to be the figure for five employees combined, not per person.


Exactly.


The ones you can name offhand (Steve Inskeep etc.) generally are.

http://www.andymboyle.com/2013/07/21/an-updated-look-at-ira-...


I'd be stunned if the console it's connected to were any less than $20k. More likely in the $50-100k range.

Audio is a very expensive game to play well.


It's not that hard to get very good audio these days. Truly incredible audio will always be expensive. But the mid-range MI gear is arguably good enough to use.


When you're buying U87s by the dozen, mid-range MI gear won't cut it.


I suppose - but I'll probably never know if that's for actual performance reasons or for object fetishization reasons.

Object fetishization is not to be dismissed - having dozens of U87 and Manley stuff might be a competitive advantage.


An audio signal is only as clean as the worst element in the chain.


But these days, past a certain price point, you don't really get much cleaner. It's not like it was when you found studios with a (relatively nice, say, Peavey AMR) console and a Fostex 1" 16-track, where a nicer console and a better tape machine were audibly better.

I have a CD or two done with a D&R console and a Mitsu DASH, and I'd say this is comparable. Room and human factors dominate.

You can get 8 channels of A/D-D/A, with Lightpipe for 8 more, for $500, and while I can't say definitively it's as good as it gets, it loops back to < -90dB for noise & distortion. It's good enough that the "worst element" is usually me :)
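
For what it's worth, that kind of loopback figure is easy to sanity-check yourself; a crude numpy sketch (hypothetical capture file, and nothing like a proper audio analyzer):

    import numpy as np

    def dbfs(x):
        return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

    fs = 48_000
    # loopback.npy: a capture of the interface's output fed straight back into its
    # own input; one second of a 1 kHz tone followed by one second of silence.
    rec = np.load("loopback.npy").astype(float)
    tone, silence = rec[:fs], rec[fs:2 * fs]

    print(f"noise floor: {dbfs(silence):.1f} dBFS")

    # Crude THD+N: notch the fundamental out of the spectrum, compare what's left.
    power = np.abs(np.fft.rfft(tone * np.hanning(fs))) ** 2
    freqs = np.fft.rfftfreq(fs, 1 / fs)
    fundamental = np.abs(freqs - 1000) < 20
    ratio = power[~fundamental].sum() / power.sum()
    print(f"THD+N      : {10 * np.log10(ratio + 1e-12):.1f} dB relative to the tone")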


Mics (and to a lesser degree high-end preamps) are not so easily replicated as digital gear.

If you want the sound of a Neumann U87, or U47, or Shure SM7, or 57, then nothing - nothing - is going to scratch that itch like the right mic for the job, if you really know what it is you're listening for.

Preamps and signal processors too... I once thought as you did: that digital DAWs have so many advantages over hardware that it just never made sense to put $5000 into a particular piece of hardware when the same money would buy a pile of digital plugins.

Until I had all the digital gear possible, and I came face to face with GIGO: garbage in, garbage out. Audio starts life as analog and that's the beginning of everything that follows in the signal chain. Past that there are certain non-linear things that happen in analog that are particularly euphonic and hard to reproduce digitally.

For example I've got pretty much every plugin compressor ever made, but nothing can take the place of my UA/RE175 compressor, because it's delightfully nonlinear and chaotic and valve-y.

And even if you could perfectly model it, what you can't do is put it where you need it: in the signal chain before A/D conversion, so you're packing as much information into the track as possible. The same goes for mics. Mics distort the sound more than anything in the straight signal chain. Every mic has a fingerprint - once that fingerprint is in the sound, it's there. Same goes for the fingerprint of the musicians instrument / amp. And the room. Etc.

And thus my maxim: the earlier in the signal chain you can get the sound you want, the better.

"Fix it in the mix" is the devil. "Fix" is something you do to a cat.


Considering things like gold-plated speaker cables and crystal sound purifiers, audio is also a very expensive game to play stupidly ;-)


Studio audio technicians are not so easily fooled. I'm sure the field has its fair share of peculiar magic, but it's not like home audio.

People who spend thousands on exotic speakers in the name of "transparency" in their home setup could do so much better in that respect with a pair of active studio monitors. But they might not like the results as much.


Keep in mind, though, that the frequency response curve of studio monitors is more intended for pointing out flaws in a mix than for enjoying music. Some of the engineers I've talked to pointed out that NS-10 monitors (once very poorly selling consumer speakers, they've been considered legendary in studios for 25+ years) are particularly harsh, and they don't enjoy listening to things on them when they don't have to.

https://en.wikipedia.org/wiki/Yamaha_NS-10

Things like Beats have frequency response curves that are basically the polar opposite of the NS-10s. Even less cringe-worthy headphones, like Sennheiser HD-280s, have sort of a "smiley face" EQ curve (a dip from ~900-2800 Hz, while boosting very low and very high frequencies). Not to discredit them - they are good headphones - but in doing my own projects, I honestly feel like I've done better work on terrible-sounding $5 headphones.


"If it sounds good on NS10's it'll sound good on anything"

I have fucked up a few mixes by using $5 headphones. But yes, proper monitoring gear will point out flaws rather than make something sound good. When I get to the end of a mix (using Sony MDR7506's for example) and I think "man this sounds SO CLOSE but it's still a little shitty", then I'll go and check it on iPhone headphones or something and it'll sound fantastic.


>When I get to the end of a mix (using Sony MDR7506's for example)

This. All that talk about "great detailed sound" that is "as close to the original as possible" from some $1000+ headphones, and yet most music is done on something like $100 MDR-7506. Moreover, 7506s are very comfortable and sturdy; I can't comprehend why anyone in their right mind would shell out hundreds or even thousands of bucks for their "everyday" headphones (granted, noise-cancelling or sport-y ones are a bit of a different thing).


> most music is done on something like $100 MDR-7506

Musicians wear headphones in studios when recording, but audio engineers pretty much never use headphones for mixing except as microscopes to listen in terrific detail for something in particular.

Mix engineers mix on speakers almost exclusively.


Considering how many people listen on headphones as opposed to past decades, would it not be a valid choice to do more headphone mixing?


It's true that you can "mix for the listening format."

This is just my personal experience and opinion (and that of pretty much every audio engineer I've worked with): headphones lie. Ears are designed to perceive sounds in 3-space, not to have individual channels shot down the ear canal bypassing the outer ear, with a sealed transducer directly loading the eardrum. As an audio engineer, you're looking for the truth, and this isn't the way for your ears to best perceive it. Moreover headphones are generally fatiguing compared to a good set of studio monitors in a tuned control room.

So the most representative mix - regardless of listening format - is going to come from speakers. I think most audio engineers are going to agree on this point.

Some audio engineers "mix for the format" - I suppose if you want a mix that is optimized for headphones, then yes you're going to at least check the mix with headphones, but I still doubt many engineers would do the mix on headphones.

Seasoned engineers know that listening fads come and go - the way to make a mix that will stand the test of time is to make a mix that sounds good on anything, and "good studio monitors in a good room" is going to get you there best.


What makes these devices (e.g. the microphones, monitor speakers, the mixing console...) so hideously expensive isn't even the sound quality (even if I wouldn't expect anything short of the technically achievable performance in mic preamps, dynamic range/noise, ...).

This equipment will have to run 24/7, be constantly worked on, be easy to service, will likely have redundant power-supplies, maybe redundant DSPs/Routing/I/O. And if anything breaks, you'd expect to get a 100% drop-in replacement (in terms of frequency response, gain, ...) delivered over-night, or by a courier/technician.

Regarding the mixing consoles: For live-sound (the only area I know a little about), only recently a wealth of very affordable mixing-consoles have flooded the market, but anything slightly larger still sets you back ~€10k for an entry system (Midas Pro, Yamaha CL, Soundcraft VI, Allen-Heath GLD). And there, I suspect the number of units shipped is one or two orders of magnitude larger than what Lawo sells for fixed installation in studios.

Generally, the equipment will work flawlessly, even when hauled around every weekend and not handled very gently.


A home stereo has to take maybe 2 channels through a hilariously blunt EQ stage (bass/treble knobs) and then up to suitable wattage for a pair of small speakers, which only need to sound right in an area of a couple square feet.

I won't get into it all unless you want me to, but a system for live performance or broadcast has orders of magnitude more work to do, functionality which is demanded by a tiny market relative to consumer electronics, with far higher expectations on reliability, flexibility, etc.

There are certainly people (Pyle, Behringer, etc) who claim to sell cheap solutions, but they are substantially more likely to bite you in the ass in concrete, embarrassing ways if you try to use them in performance environments.


> I won't get into it all unless you want me to...

My experience is 100% studio and 0% live, and though I'm not ygra, if you're willing to type, I'd love to read.


You don't find real audio engineers using stuff like that. Gear has its place or it leaves the studio.


That comment was actually intended to say "Even when you have no clue about audio (i.e. are doing this for your home, not a studio), you can spend a lot of money (in this case unjustifiably)." Apparently that wasn't clear enough :-)


That's not as bad as it sounds because the console, preamps, compressors, etc... all cost a ton of money too.


Even the headphones themselves are probably at least a grand and they've got way more of those laying around. $3600 is a lot, but audio equipment gets REAL expensive pretty quickly.

Edit: Fair enough, I'm entirely incorrect as far as the headphone price point.


Sony's MDR7506 is, or at least used to be a few years ago, the reliable radio workhorse. When you spend all day with your headphones, especially in the field, super expensive cans most people like for music are a liability. I love the 7506s: flat, cheap, and comfortable all day. <$100. NOT the consumer version (-V6 or something). Real, purebred, MDR7506. NegativeK is right, too; the $100 AKG range is also popular in working radio, as well as some Sennheiser HD280s.

On the production side of things where quality super matters, you'll see the more expensive headphones for sure. But every single person in the last newsroom I saw rocked the 7506, and though I've moved on to more expensive headphones for programming and my music collection, I still miss my 7506s on occasion.


I have owned multiple pairs, and still use one for my main work. The bass and treble roll-off isn't great for mixing and mastering, but it's a truly great value for the money for most monitoring uses in performance. It's better on the production vs. consumption end - where people like all their colored sound.

It's like the NS-10's of the headphone world. Well-understood and consistent, while offering a great average, flat experience that translates well over the air.


Yep! Headphone nerd Marco Arment hates them for some reason, but you see tons of actual professional musicians using the MDR-7506 in the recording studio. Not just radio/broadcast people.

I don't think there's anything that comes close for ~$100 in terms of sound quality and comfort; you'd have to spend at least $300 for a real upgrade. And even then, there's an element of subjectivity.


They shoot for flat when they make them and they're indeed flat enough for sleeves-up radio work, but I have heard folks say (somewhat correctly) that they're a bit bright up high and don't have enough gas down low. I can understand Marco's opinion in the context of being an audio nerd, because they're flat, but potentially not flat enough for some kinds of listening.

But yeah, couldn't agree more. Price point.


And they last ten years, then you change the pads for $10 and they go another ten years...


I think they are good enough for the application they are designed for and somewhat of a bargain at the price point.

Having listened to most of the headphones that Marco reviews, and having owned some of them for both studio and personal listening use, I can't help but wonder if there is some bias going on with the music he reviews with, or whether he actually does have very skewed hearing. Having been used to fine equipment and rooms whilst working in mastering suites and recording studios, I have probably picked up a preference for a certain type of sound, but when auditioning and getting used to audio equipment I often test with a wide variety of sources, including some music I don't personally like but know quite well (such as country and western), as a reference to understand capabilities across the board.

Whatever one's thoughts, it highlights the importance of auditioning oneself and getting to know something well (goes for speakers, rooms and headphones). It's possible to produce a good result using MDR-7506s or NS10s if you are comfortable with them and have the skill to do so.


What is a can?


Sorry, "cans" is radio slang for headphones, from the can-and-string metaphor. I'll make that clearer.


Inside one of my Focusrite audio interfaces, the internal headphone-amp, and wiring to the headphone connector, is labeled "cans" ;-).


Actually, I've seen AKG studio monitor headphones quite a bit -- and they're right around $100. Whether NPR uses them is a different thing, though.


Apropos of not much, I like the venerable Beyer DT250 for everyday use -- they're similar to the DT150 studio cans but with a velour kind of finish, that makes them comfier at home but not so suitable for passing around in the studio. They're also in the 100-200 dollars/quid range.


Preamps and compressors will be integrated in the "console" (even though typically the user-interface will be separate from I/O and DSP, the latter being housed in a 19" rack).

This is radio, not some boutique studio with a ton of outboard gear.


Preamps will cost a good couple thousand, but compressors, etc. have now moved into digital computers. An expensive set of plug-ins will cost $2,000, but you could easily get by with $200 as a professional vocal shop.


I use an AKG C-414 ($900+) with the bass roll-off and you are pretty much there.


All mics colour the sound in different ways. For some reason the U87 adds a colouration that's perceived as expensive, larger than life, and authoritative.

Other mics - including cheaper Neumanns - don't have quite the same effect.

It's not about capturing the sound perfectly accurately, but about creating an impression with sound. That's why some pro audio gear is so insanely expensive.

It's very hard to create the same impression digitally because this kind of subtle analog distortion is complex, non-linear, and not very well understood.


There is a reason why the AKG 414 is probably the second most popular mic for studios that have huge budgets. The 414 is also considered an awesome-sounding mic.


I've never gotten a better drum overhead sound than with a spaced pair of 414's (I think TL II, but it's been awhile). Love that mic.

That said, I don't think the 414 color is the same as the U87 color. My impression has been that the U87 is much more flattering to vocals, but I've admittedly never had the opportunity to use one.


TL;DR: It's all about that bass roll-off.

Studio construction, baffling, and noise exclusion also matter.

What I didn't see addressed were some other factors I've noticed and/or been aware of over the years.

NPR's announcers, reporters, and hosts tend to speak conversationally. Rather than shooting for fill-the-room, highly-inflected (a/k/a Commercial Broadcast) voice, it's the tone you'd expect to hear from someone having a conversation with you in a room (though perhaps a larger room, and with less mumbling). That's a huge difference for me, and unfortunately I'm hearing far more commercial broadcast inflection from the recent crop of announcers, to the point it's quite annoying. Steve Inskeep most particularly.

("Recent", for this old fart, means the past decade or so.)

The other element, and it's one I'm raging against, is that NPR has increasingly moved to live and unedited audio, to its tremendous loss.

When the network was small and it couldn't field reporters in many locations, sound often arrived after hours, or days, at headquarters where it was edited for broadcast. Even the flagship news programs were largely (and in cases entirely) pre-produced, with all news segments, interviews, etc., edited before they went on the air.

Yes, it meant that what you got from NPR was often slightly stale relative to other news outlets, for the latest breaking news (though hourly headlines broadcasts were generally current). But the result was polished and digested. In terms of a program which informed rather than merely screamed, it was, I'd argue, a better product.

NPR has been falling victim to currency bias since the late 1990s and I really don't care for it.


You made two key points. One is that the NPR voices sound like they are next to me in the car. There is good, thoughtful discussion and presentation. And as an NPR supporter I really appreciate that. They know they have (some of) my attention in the car; they are not trying to wrest my focus from the Honda about to cut me off. The low-key nature works.

I listen to NPR to get the slightly stale event where someone has taken the time to gather more than one fact and is trying to make a one-minute segment out of it. I can't recall a time when an NPR story got walked back (looking at you, Fox) for shoddy reporting.

Posts above have whined about how much NPR talent make. You should find out about your local morning zoo team and the Ken and Barbie team on your Action news. Those guys don't even get out of bed for 300K a year. Plus the people listed above also write and edit their material, vs Ken & Barbie.

At a variety of levels, NPR and their shows are worth every penny. In a few weeks listen to them talk about the RNC / DNC conventions vs the others.


I have no issues with NPR talent being well-compensated. Really, they deserve what they can get, and they can certainly get more elsewhere.

I do have issues with how several NPR reporters and anchors have been terminated, most particularly Bob Edwards and (though she wasn't even a reporter or anchor at the time) Lisa Simeone (one of the most awesome voices on radio, BTW). Juan Williams, OTOH, should never have been hired in the first place.


Love the superglue/epoxy trick! As the author said, hilarious. That ultra bassy sound always rubs me the wrong way -- I'm glad to hear someone's been going against the grain.

And it's nice that they appreciate the different destinations for their signals, and take steps to prevent them from getting squashed into mush through multiple compression stages.

Thoughtful people.


I'd rather they actually mod it so the switch is removed and the circuit hardwired. I imagine someone beating up the "stuck" switch and breaking the circuit/mic altogether.


That's a good idea. Just bypass it internally and let the switch move freely, doing nothing. They'll learn to leave it alone pretty quickly.


Ha! Bet they don't. This is exactly how superstitions are made.


Also, any sound engineer who claims he has never eq'd a voice, an instrument, or a venue to perfection, listening to the subtle changes in the sound signature as he did so, only to find that the eq was in bypass the whole time, is lying.

Don't ask me how I know.


Also true. :-)


From an earlier submission (https://news.ycombinator.com/item?id=2546087) another factor that gives NPR programs a different feel than your average live radio interview is a large amount of careful editing, described by On the Media’s John Soloman: http://nprfreshair.tumblr.com/post/5449544068/lk-on-the-medi...


As NPR falls increasingly under the spell of "it must be live" -- even the hourly news summaries now start with "Live, from NPR News in Washington", rather than the former "From NPR News in Washington", a gratuitous change that grates -- the segments are far more frequently not edited.

Among the problems this presents is what I call "the chase": the final 30 seconds or so of an interview in which the host or interviewer is quite clearly trying to wrap up with a guest who either isn't aware of that time constraint or doesn't want to meet it. It is always awkward, and occurs at the end of EVERY live interview.

(The BBC also exhibits this, all the worse when it's over some long-distance line.)

A pre-recorded, edited interview can go for full feel and length, and then be cut to fit the timeslot. One thing I recall (and miss) from those was a far more gracious interview wrap. I'm sure that far less of what had been said hit the air, but what there was was better for it.


It always puzzles me when experienced media pros don't recognise that they are going to be shut down abruptly by an approaching hour boundary. Sometimes this happens every week on a regularly scheduled gig! Surely it can't be that difficult to wear a watch and keep it accurate.


It's not the approaching hour, but the 3-6 minutes allocated to a given interview. I've come to recognise that "in the few seconds we have left" is the cue for "wrap this up in 60 seconds". But with the very wide range of people interviewed, ranging from seasoned professionals (politicians and PR flacks) to complete novices (hey, news happens to random people), and especially given both language and technical barriers (it's really hard to formulate your thoughts and be cognisant of studio realities when you're trying to catch what's being said over a scratchy long-distance link), the awkward cut happens very frequently.

I'd notice it on commercial broadcasts if I listened to those more. I am very well aware of it on NPR and BBC broadcasts. Somewhat less so on CBC.


Well that might be so, but the syndrome I was addressing really is the approaching hour. It might happen for example if there is a regular slot for "our Australian correspondent" at the end of an hour. It's supposed to be at 8:55(ish) but sometimes it's delayed or advanced. Either way, the Australian correspondent should be aware that the news at 9:00:00 is a hard boundary that is going to cut him off, but surprisingly often he doesn't seem cognisant of that at all. It doesn't matter what the time zone differences are - hour boundaries are hour boundaries! (Okay, there are exceptions - but the Australian correspondent is not reporting out of South Australia, which has a half-hour shift.)


It was really awkward when an official in the UK was trying to give some condolences about Orlando and the NPR host had to cut them off in the middle of their statement.


It's always fucking awkward. My anticipatory cringe starts about 2 minutes into any interview.


As a former audio engineer, THIS is so important for clear vocals.

> We are fans of being close-miked, and P-pops come into play there. But we make sure that we are within a foot of the microphone and usually a lot closer — close to six inches — in working with any of our on-air talent. That’s another element that goes into it.


I'm Current's editor, and loving the traffic spike we're getting from this... thanks everyone! Serious follow-up question for you -- are there other public media/technology articles you'd be interested in reading that we should pursue? We don't cover tech exclusively, but this post has proven to be hugely popular (not just here but on reddit twice). Would be great to hear your thoughts.


Surely, bass roll-off could be done, with greater flexibility, later in the signal path, and applied to any microphone, right?


Rolling off before the microphone amplifier means that bass frequencies can't saturate the amplifier as easily.
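
Just to illustrate what that roll-off does (not NPR's actual circuit, which is an analog network inside the mic; the 80 Hz cutoff here is an arbitrary example value I picked), a first-order high-pass in a few lines of Python:

    import numpy as np

    def highpass(x, sample_rate, cutoff_hz=80.0):
        # First-order (6 dB/octave) high-pass: the digital cousin of a bass roll-off.
        rc = 1.0 / (2.0 * np.pi * cutoff_hz)   # RC time constant for the chosen cutoff
        dt = 1.0 / sample_rate
        alpha = rc / (rc + dt)
        y = np.zeros(len(x))
        y[0] = x[0]
        for n in range(1, len(x)):
            # y[n] = alpha * (y[n-1] + x[n] - x[n-1])
            y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
        return y

Anything below the cutoff is attenuated before it can eat headroom in the preamp or anywhere downstream.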


I get your point, but I thought we had plenty of dynamic range available these days. Judging from the text, they seem to use it rather like an equaliser anyway.


Both the mics in the article have output transformers. It's really easy to saturate a transformer with excess low-frequency sound, so that's part of the reason to have a high-pass filter earlier in the signal path.


Fixing things closer to the source is usually best.


Regardless of your dynamic range, in order to have maximum S/N you have to run as hot as feasible. So there's no point capturing and amplifying any signal you already know you don't want.


You're right, though sometimes simplicity beats flexibility. Setting it as a standard right at the beginning of the signal chain frees all the engineers down the line from having to make decisions about it, and ensures consistency in the process.


There is bass rolloff in the mic even when set flat. This is the proximity filter designed to correct for the way the mic responds when it is close to a source.

https://en.wikipedia.org/wiki/Proximity_effect_(audio)


I wonder why some people like compression so much. It almost always makes the sound much worse.


As mentioned in the article, a large proportion of radio listening is done while driving. A car travelling at highway speeds has 60-70 dB of background noise. Dynamic range compression prevents the station from becoming inaudibly quiet or uncomfortably loud in such a noisy environment.
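
For anyone wondering what a compressor actually does, here's a toy static version in Python. The threshold and ratio are made-up example values, and real broadcast processors add attack/release smoothing, multiple bands, and look-ahead that this sketch skips:

    import numpy as np

    def compress(x, threshold_db=-20.0, ratio=4.0):
        # Instantaneous level in dB (tiny epsilon avoids log(0))
        level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
        # How far each sample sits above the threshold
        over = np.maximum(level_db - threshold_db, 0.0)
        # Above the threshold, only 1/ratio of the excess gets through
        gain_db = -over * (1.0 - 1.0 / ratio)
        return x * 10.0 ** (gain_db / 20.0)

Quiet passages pass through untouched; peaks get pulled down, and then the whole programme can be raised so nothing drops below the road noise.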

BBC Radio 3 (a national classical music station in the UK) uses heavy dynamic range compression during the drivetime peak, but far less compression during the day. This compromise provides a better experience for both hi-fi and in-car listeners.

Very heavy dynamic range compression was historically useful for AM broadcasters, because it maximised signal strength.


Don't conflate compression in general with the aggressive mastering-phase compression of the loudness wars. Some degree of compression is essential to pretty much any type of audio engineering. Tasteful use of compression is a real art and is an integral part of some of the best studio albums in any genre (except perhaps classical). Compression is essential for vocals, except for people who somehow never move their head while talking or singing. Compression also makes drums sound amazing -- hard, big, boomy, Led Zeppelin-style drums don't just naturally sound that way; that's a compressor working its magic.


Mostly because you can get it louder, and usually louder can trick you into thinking it sounds clearer or "better". Plus there's the history with the whole loudness wars.


All broadcast radio is highly compressed. Without it, it would be inaudible in most practical listening conditions (car, walking on the street, in your kitchen cooking).


There's also a psychoacoustic effect: compressed audio sounds louder, even at the same objective SPL. Probably because very loud sounds are naturally compressed by the mechanics of the ear saturating at high SPLs, so artificially compressed signals are recognized as 'loud sounding'.
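
You can see that numerically: at the same peak level, a flattened signal has a much higher RMS, and RMS tracks perceived loudness far better than peak does. A square wave is the limiting case of totally compressing a sine:

    import numpy as np

    t = np.linspace(0, 1, 48000, endpoint=False)
    sine = np.sin(2 * np.pi * 440 * t)    # peak 1.0
    square = np.sign(sine)                # "infinitely compressed", still peak 1.0

    def rms_db(s):
        return 20 * np.log10(np.sqrt(np.mean(s ** 2)))

    print(rms_db(sine))    # about -3 dBFS
    print(rms_db(square))  # 0 dBFS -- same peak, but it reads (and sounds) louder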


Mozart requires running the audio through compression multiple times. Without that, half of it is inaudible and you'll be constantly adjusting the volume knob.

(at least with modern performances -- perhaps Mozart intended differently, and is now spinning in his grave due to excessive dynamics)


It helps when you're listening in a noisy environment and the audio has large dynamic range.


One of the linked articles at the bottom of the original (at least for me) was about the relatively huge differences in levels between different shows. It noted that even a 6 dB change causes people to reach for the volume knob, and that higher levels end up with many turning the radio off.
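
For scale (my own back-of-the-envelope numbers, not from the linked article), 6 dB is roughly a doubling of amplitude, or a quadrupling of power:

    20 * log10(2) ≈ 6.02 dB   (twice the amplitude)
    10 * log10(4) ≈ 6.02 dB   (four times the power)

so a 6 dB jump from one programme to the next is anything but subtle.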

A radio that's off has no listeners.


You're probably confusing audio compression with data compression.

These days every note of music you listen to on every modern album has a compressor in it somewhere; even if you don't think you can hear it, it's there.


I'm fairly certain he is speaking of data compression in the article, rather than dynamic volume compression.

The problem with sending compressed signals is that they will probably be recompressed before making it to someone's ear and the chain of codecs may introduce strange artifacts.


No way he's talking about data compression, with respect. He's talking about frequency response adjustment and range compression (sometimes called "companding").

A while ago NPR used 2B ISDN (112 kbps in the US) for remote locations. I believe they used G.722 codecs for low-latency applications (conversations) and MPEG Layer II for higher quality stuff. That was data compression.

But the NPR distribution system isn't so bandwidth limited as to require crushing the data.


No, I don't think so. They specifically talk about adding compression only into their newscast unit. Why would they data compress only one segment?

Also, compression sounds good. Too much, or badly configured, compression sounds bad, obviously. But don't throw it all away. (Not that producers are at any risk of doing so.)


I was thinking of remote newscast, where you might need to fit your data reliably over a cellular link, but now that I think about it, I don't think NPR does much remote newscast.


Are you conflating dynamic range compression with data compression?


Well, technically speaking, dynamic range compression /does/ make an MP3 smaller (IIRC)


Right, but is it done for that purpose?


I don't know how MP3 works, but I was under the impression that a lower bitrate worked by compressing the dynamic range (or at least that was a side effect)


So yes, you are conflating these things.


> you

But I'm not OP?


Fair enough.


No, he's talking about audio (sound) compression.

https://en.wikipedia.org/wiki/Loudness_war

This is one reason why I've stopped buying "Remastered" CDs - they almost always sound worse. Shop at Discogs.com where you can specify the release date, or buy from a quality-oriented label like mofi.com


The key words are "almost always". While I haven't checked, I heard that Pink Floyd's remaster of "The Wall" and some of Led Zeppelin's sound better because they focused on removing noise instead of making it louder.


> The Wall

Interestingly - I have that CD (2-disc set) from way back in 1984 when it was first released (on the Harvest label in Europe), and it sounds like rubbish. In the early days of CDs the labels were dumping any title they thought would sell onto polycarbonate, and didn't care much about cleaning it up. There were stories of CDs that still had the RIAA equalization curve (from LPs) applied.

I'd love to get the Mobile Fidelity pressing, but it was a limited release and is now selling for over $200.


I think the bad mastering of many early CDs is mostly because mastering engineers didn't know how to master CDs. They were still employing the same techniques they'd been using for vinyl, where they were hardly applicable. (I think that's where the RIAA curve thing comes from, which is unlikely to be literally true as the RIAA curve would remove almost all the bass from the recordings.)


The flat sound really emphasises the sibilants, and for me, excessive "mouth noise". I hear a lot of nose whistle and saliva sounds. Maybe I suffer from some misophonia, but it drives me crazy sometimes.


I couldn't agree more. I think that's actually the U87s you're hearing. They are very neutral, quite sensitive condenser mics - very unforgiving, not "flattering" just "realistic."

Dynamic mics like most stations use (SM7, RE20) hide a lot of sibilance by exaggerating the fundamentals.


That's due to the person being close to the mic. A flat microphone, by definition, emphasizes nothing. Moreover it's other aspects of the signal chain (compression, distortion, broadcast filters) that accentuate sibilance.


> We use a simple Neumann U87 microphone as the house-standard microphone at all of our facilities. They’re expensive, but that’s what we’ve used for years.

That simple U87 costs about 5x what the other broadcasters use (typically an SM7 or RE20) and is much more fragile and difficult to maintain. As I recall NPR bought hundreds of U87s...


> The NPR sound has so many tentacles. If we’re just focusing on the studio side, which was actually the easiest thing.

They didn't go into any of the other things.


Anyone know what polar pattern they have the u87s on? Cardioid?


250 Hz, not 250 kHz


It is curious that NPR contrasts itself with commercial radio, despite the native advertising, regular name drops, and references to listeners as their consumers (does active tuning decrease the availability of the signal for others?).


Umm, because commercial radio is 100% driven by advertisements whereas NPR is less than 20% and is very selective about what sponsors it takes on?


I have often wondered if NPR or public radio in general is motivated to pander to a demographic. By this I mean: the leaders of the organization have demographic data on who is most likely to pledge support during their quarterly pledge drives. Would it not make sense to favor the sort of stories that this demographic enjoys and (more worryingly) agrees with?

This is not to say that it's a bad thing; people are paying into a cooperative because they enjoy it, and they should continue to enjoy it. But public radio, though avoiding corporate advertising, still may have to pander, just on a lesser scale and in a more democratic manner.


College-educated boomers, in significant part.

Though there are other audience segments.

If you listen to the NPR humour programmes and segments, they'll often allude to the whiteness of their audience, and it pretty much always draws a laugh.

(Edit: to clarify: CEB are the demographic, though it's not necessarily who NPR are pandering to. See followup.)


That's kind of gross. Rich white people have enough without getting free radio targeted to their interests.


I think the joke is more that they wish it wasn't so; going by the very name "National Public Radio", demographic skew is quite unfortunate.


It's not particularly intentional, and NPR mention this in large part as a way of drawing attention to the fact and seeking broader diversity.

NPR grew out of the public broadcasting movement that was emerging in the 1960s (though its roots go deeper), as well as the Baby Boom generation and the democratisation of higher education which had occurred after the 2nd World War. Stir in the Kennedys, Vietnam, the Apollo program, Civil Rights, and Nixon, and you had much of the initial foment from which the project emerged.

NPR has had its largest followings on the coasts -- Boston, New York, Washington, DC, for obvious reasons, San Francisco, Los Angeles, Seattle. These are all still flagship stations and regions (though there are many others).

There are also, and I say this as someone who's driven across and through the US several times, many local stations and "translators" serving rural and outlying areas, which are a fantastic relief from what's otherwise the monotony of rural radio: Christian broadcasting, bad country-western, and increasingly, Spanish-language radio.

(And if you think NPR panders to its audience, you should try listening to these for a change.)

The problem of getting a high-concept, intelligence-oriented, limited-budget network to appeal to what have been its underserved communities: African-American, Hispanic/Latino, Native American, and other segments, is difficult, but you're seeing this. There are programs which try to bridge that divide as well as reach out to younger audiences (how successfully is another question). Latino USA, Glynn Washington's Snap Judgement, and The Moth come to mind.

Arguably, Car Talk was a crossover show for a long time, bringing in the shade-tree / garage mechanic contingent (though I feel it overstayed its welcome). I'm aware that that program had a huge draw from an otherwise atypical audience though, which probably explains its longevity even into reruns.

And finally: NPR is freely available to all. In an age of podcasts and iPhones, all you need is a cheap radio to pick it up: no subscriptions, no fancy hardware, no malware, no surveillance (though yes, those are available through the various NPR, PRI, PRX, MPR, etc., podcasts, websites, and other online experiences). It's far less, IMO, that NPR targets its core audience than that that's the group to which it appeals and is best known. The information provided is certainly democratic and would benefit most. And as I've noted, the network (and its public broadcasting kin) are actively seeking to extend both reach and relevance.

Rather than dismiss this with a smear as you have, what would you see done differently, what do you find missing, and why do (or don't) you listen to public broadcasting?


NPR certainly targets its core audience. Listen to most stories on higher education, for example. They talk about how hard it is to get into college and how competitive it is. They typically forget that relatively few Americans attend highly selective institutions, with a large number not even attending four-year universities. The NY Times falls into this same trap.

Another example is religion, or lack thereof. America is a religious place, but NPR reports little on religion.

However I might reluctantly grant you that NPR does not "target" its listeners. The truth might be even more concerning: they are not even aware of their upper-middle-class bias, so they think it is perfectly normal for all high school kids to worry about getting into Harvard.


> they think it is perfectly normal for all high school kids to worry about getting into Harvard.

Many public institutions are highly competitive. But I guess "University of Texas" doesn't sound as elitist as Harvard.


I've always felt that part of Car Talk's popularity was their eagerness (and perhaps determined effort, I'm not sure of the demographics) to include female callers in their shows, as equals to males. Honestly, their inclusion of females, to me, has been unmatched by other shows, especially programs relating to historically male-dominant subjects. Their broadcasts have always been a wonderfully enjoyable and enriching part of my road trips.


In the linked article a member of NPR discussed how the flavor of their sound is modified based on NPR's perception of their consumers' behaviors.


That's not pandering though. Even content targeting isn't pandering. Pandering requires doing something distasteful.


Distasteful to who, precisely? Even the fact that NPR exists is distasteful to many. On top of that, any choice it makes as to what to broadcast prevents something else, possibly even equally worthwhile, from broadcasting.


Distasteful was probably not the right word. Pandering means giving someone something they want, but something that is immoral or improper to give, where the judge of that morality or propriety is the society the pandering is happening in - in this case, the USA.

Choosing to compress your audio, or to play classical music instead of jazz is not immoral, at least not as judged by reasonable Americans, so it isn't pandering.

If what you are really getting at is that the US federal government providing tax revenue to NPR is distasteful to many, that may be true, but it doesn't mean that NPR is pandering to anyone. Similarly, FOX News may or may not be pandering to someone, but their acceptance of federal tax breaks isn't evidence of it.


The person I was responding to supplied their own definition of pander:

>By this I mean, consider the leaders of the organization have demographic data on who is most likely to pledge support during their quarterly pledge drives.

By this definition, NPR has admitted to pandering (would only need to establish that NPR consumer === pledge supporter) in this article alone.


Look up the definition.

A perverse appeal or suck-up.

Trump or Fox News would both be examples of highly successful (and distasteful) pandering.


https://en.m.wikipedia.org/wiki/Pandering_(politics)

"Perverse" to you and most HN commenters/readers, maybe.

Trump and Fox News may "pander" to groups who you find distasteful, but they DON'T "pander" to many groups who you don't find distasteful.

Trump was the FIRST PERSON to bring up many issues that had previously been censored by the media. He didn't change his views for the approval of others, he single-handedly SHAPED THE NARRATIVE. Many career politicians started talking differently after he came along. THAT'S pandering.


Where are you pulling your numbers from?

This[0] NPR page shows that only about 30% of NPR income is non-commercial. Also note the Nestle advertisement on the page (at least it was omnipresent when viewing on a mobile browser).

[0] http://www.npr.org/about-npr/178660742/public-radio-finances


What number are you referring to? Looking at the page, I can't really see that 70% of income (of either NPR or its members) is commercial.


I am referring to the Revenue Sources image (second on the page):

    39% Station dues and fees
    23% Corporate sponsorships
    9%  Distribution and satellite interconnection
    -----------------
    71% of revenue from commercial activities
>The most significant component of station dues and fees revenue is the charge for carrying our premier newsmagazines - [list of show names].

>Non-newsmagazine program fees (for example, [show]) make up the next largest component of station dues and fees revenue.


Calling most of those items commercial is a willful misconstruction of the structure and methods used by NPR. Corporate sponsorship is functionally no different than individual member contributions at the station level, and station dues are how costs are apportioned and passed on to member stations.

However, if NPR is using excess satellite airtime to carry non-NPR programming, that could be considered commercial activity, yes.


From NPR's point of view, revenue from "station dues and fees" primarily comes from charging for access to NPR content, not member donations. It is strictly commercial activity: NPR is producing content and selling it.

I think corporate sponsorships are different (if they were functionally the same as another category, they would have been aggregated there). For one, corporations get advertisement time.


NPR is a non-profit organization. Although they need to play the advertising game to keep the lights on, they certainly aren't there to try and make big profits on their listeners.

Commercial radio is not like this.


NPR (and their member stations) do not have advertising per se. Rather, they call it underwriting and there are very specific rules about what format the underwriting can take. Here are the guidelines: http://www.npr.org/sections/ombudsman/2015/03/11/392355447/n...

I used to work at one of the largest NPR affiliate stations. Listener support is hugely important, most of the money we made came from listeners. IMHO, NPR is pretty successful in that they have built a model that supports quality journalism, at a decent scale, without a lot of annoyances for the user. Newspapers and other media orgs could stand to learn a lot.


Good point, that is one thing I always admired about them. They at least play bearable ads, and I know that there's great content and news analysis coming on later.


Non-profit simply means that excess revenue is paid in salary instead of to shareholders.


I don't know NPR but from reading about half the article, it seems to be a radio station in America that flipped a switch on their microphones. Am I missing some insight, or is that all?


Five downvotes, but no comments. What am I missing then?


Maybe admitting to not reading the whole article, making a glib comment, and asking for someone else to do your thinking for you just wasn't received well.


NPR's "signature sound", sadly, is being overrun with hideous vocal fries.


Is this a misophonia thing? I really couldn't care less about vocal fry.


It comes around and around. Vocal fashions of teenage girls cause consternation. Remember the handwringing over uptalk (HRT)?

In fact, I think there's a lot of the anti-HRT sentiment at work in vocal fry. Girls were told that high rising intonation is bad, so no wonder they ground out at the bottom of their range.

Women can't win. Not too high, don't push low, don't use like. And so on.

Men also fry, btw, you'd just be hard pressed to find a handwringing article decrying that.


It is a prejudice against people who sound a certain way, especially women.


It's a preference, not prejudice. Prejudice would be if you see a person and assume they speak a certain way based on other characteristics like age, race, gender, or appearance.

But if you actually hear them speak and don't like their voice, that's not prejudice.


When you describe the natural sound of someone's voice as if it were pathological or the product of incompetent mic/broadcast technique, you're moving past "preference" and into something darker.


Maybe, but the style in question is cultivated. I don't think it's prejudice if you don't like the way Coltrane plays the sax. I feel the same way if you don't like some person's intentional vocal affectations.


It's the notion that "vocal fry" is an "intentional vocal affectation" that generates so much professional scorn for listeners who complain about "vocal fry".

http://val-systems.blogspot.com/2011/12/on-vocal-fry.html


Not sure what we are arguing about here. I don't believe any of the three myths that article purports to debunk. It's not a myth, though, that fry is something one does to one's own voice, intentionally or from habit. It is the same for the American gay male voice: just a style of speaking, learned to conform to a peer group, and relatively easily changed even in adulthood.


Respectfully, please color me unsurprised that two traditionally marginalized groups in American culture just happen to have problematic "styles of speaking, learned to conform to a peer group, relatively easily changed".

I agree with what Alex Goldman from Reply All has to say about this phenomenon.

What reading I've done about this suggests that to the extent the phenomenon is empirically grounded at all, it applies both to men and women, while complaints about it are almost entirely gendered.


I think that's because vocal fry is usually seen as somewhat "out of place" in a female speaking voice. Few, if any, complain about male vocal fry because male voices are expected to be low and gravelly.

Preference against female vocal fry is just an nth-order effect from basic biology, and bias against females who use vocal fry is just an (n+1)th-order effect. A bias like that is often not a useful decision-making heuristic and should be checked accordingly.

Maybe I'm misinterpreting your comments but it sounds like you share some of the outraged tone I read from the article you linked above, and I'm really confused as to why. It's just a thing some people do with their voices (sometimes but not always consciously) and some people do or don't like (sometimes but not always consciously). Are you saying people shouldn't care about vocal fry at all?

I also don't understand your hesitance to acknowledge the phenomenon of vocal fry. I'm not sure if these are entirely credible sources but here are some links that I found informative:

- https://www.youtube.com/watch?v=FsqW8jdlaSk "How Does Vocal Fry Work?"

- https://en.wikipedia.org/wiki/Vocal_fry_register


I agree with what Alex Goldman from Reply All has to say about this phenomenon and have little else to add at this point.



