Makes me wonder why they don't record the sound of the fans, invert that audio track, and mix it with the mic channel before broadcasting, so that the relatively periodic noise of the fan cancels itself out.
edit: also, the weird phasing artefacts this would introduce would be much more prominent and noticeable, even if it succeeded in reducing the overall volume. So much so that doing basically this ('flanging') is a standard musical effect for making parts sound psychedelic.
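For the curious, flanging really is just that: a signal mixed with a copy of itself delayed by a few slowly sweeping milliseconds, which turns the spectrum into a moving comb filter. A toy sketch in Python (all parameter values are illustrative, and a real flanger would interpolate fractional delays):

    import numpy as np

    def flange(x, sr, depth_ms=3.0, rate_hz=0.5, mix=0.7):
        """Mix x (a mono float array) with a copy of itself whose delay
        sweeps between 0 and depth_ms milliseconds; the resulting
        comb-filter notches audibly sweep up and down the spectrum."""
        n = np.arange(len(x))
        delay = (depth_ms / 1000 * sr) * (1 + np.sin(2 * np.pi * rate_hz * n / sr)) / 2
        idx = np.clip(n - delay.astype(int), 0, len(x) - 1)
        return x + mix * x[idx]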
I suspect the logical end to that sort of thing is to use multiple microphones to add "depth" to your sound recording, then filter out sounds coming from sufficiently far away.
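If you want to play with that idea, the classic building block is delay-and-sum beamforming: time-align every mic to a chosen source position and average, so sound from that spot reinforces while sound from elsewhere partially cancels. A minimal sketch, assuming you've already worked out the per-mic arrival delays:

    import numpy as np

    def delay_and_sum(mics, delays_samples):
        """mics: equal-length mono signals; delays_samples: per-mic
        arrival delay (in samples) of the target source."""
        # Advance each mic by its delay so the target lines up in time,
        # then average; off-target sources stay misaligned and smear out.
        aligned = [np.roll(m, -d) for m, d in zip(mics, delays_samples)]
        return np.mean(aligned, axis=0)  # np.roll wraps at the edges; pad in real use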
The thing most people don't consider is that white noise (hiss, a fan) is readily tuned out by the brain - the brain is brilliant at reducing background noise - but artifacts introduced by artificial noise reduction are correlated to the signal and the brain interprets it as a distortion of the signal, not background noise.
That's why audiophile audio engineers fix the noise instead of using FFT noise reduction - at least whenever possible.
On the first record I ever produced, I was credited with playing "furnace". We were recording in subzero January weather in Minnesota, and the furnace in my basement was very noisy. I'd go and turn it off, and we'd track as quickly as possible before the house got unacceptably cold and the furnace needed to be turned on again.
As an interesting aside, something a lot of newbie/hopeful audio engineers obsess about is "mic bleed", where the mic for one instrument picks up sounds from another. I mostly ignore this. Bleed is swamped by signal, and disappears into the mix. Even if a punch or edit fixes an error in the "bleed" instrument, it usually drowns out any error that remains in the bleed itself. Then again, audio engineering is full of a lot of nerding about second and third and even fourth-order effects, while ignoring first-order effects like "Is the song actually good?", "Is the performance good?"
I recently had to make a promo video for my startup on short notice, with a less-than-ideal setup, and I did the exact opposite. I filled all of the 'silent' parts of the video with 'ambiance' because the sudden drop to absolute silence sounded very jarring.
I have no experience with audio engineering. I just thought it was interesting that we took opposite approaches to similar problems.
A better approach to "removing" bleed is instead to make sure the bleed sounds good.
Euphonic bleed is solid gold.
Basically, on playback, the steady-state noise sounds much louder than it does when you're actually present in the room.
The same holds for the reverberation in a room: it will sound much louder on a recording than on location.
For example, if you record someone speaking in a reverberant space like a church, it will sound like they are in a very reverberant space. Which they are; it's just that when you are listening live, your brain filters out the "non-direct" sounds so you notice just the voice.
I'm pretty sure these noise filtering processes are related to the high priority the brain puts on understanding speech.
This is probably why the noise and echoes sound louder on a recording. They don't change the way that they should when I move my head, so my brain can't figure out how to cancel them.
However FM radio always has white noise, so adding a little bit to it is probably better than adding any correlated noise from signal processing.
FFT noise reduction definitely has its place and I do use it from time to time. I just highly doubt "live on-air signal processing" is a good application given that the noiseprint is constantly changing.
Of course we notice different things about an object after taking a picture of it. But you've got to be very careful about cause/effect/bias.
That's interesting about animals. It seems likely that their auditory systems would be tuned for the type of stimuli that are vital for their survival. There would be a lot of overlap, but since it's hard to tell their subjective experience, it would take some clever testing to see how their perception of recorded sound would differ.
It worked for them and would work, so long as the person you are recording keeps their head very still. Any movement would change which parts of the sound emerging from their throat get cancelled by the cancelling mic.
Amplifiers use negative feedback to cancel distortion out (because it's correlated). Some folks, like at Pass Labs, tend to avoid negative feedback for philosophical reasons, and people like their amps. They use part selection and careful design instead.
Adding two uncorrelated signals together, you end up with a louder, still-uncorrelated signal: their powers add (about +3 dB for equal levels), and nothing cancels.
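A quick numerical check of that claim (numpy; the array size is arbitrary): two equal-level uncorrelated signals sum to about +3 dB, while only a correlated copy cancels to zero.

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal(1_000_000)
    b = rng.standard_normal(1_000_000)  # uncorrelated with a
    rms = lambda x: np.sqrt(np.mean(x ** 2))

    print(rms(a + b) / rms(a))  # ~1.414: powers add, about +3 dB
    print(rms(a - a))           # 0.0: a correlated copy cancels exactly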
There is "single-ended noise reduction" ( the version I use is from CoolEdit 96 and CoolEdit 2000 ) but it leaves artifacts. It's still wonderful for things like single-coil hum, things with a limited spectrum.
However, you can take advantage of that because the sounds you want to keep generally are periodic at short timescales. You can use DSP filters to remove components of the sound that don't show significant autocorrelation. The same idea lets you remove white noise and random static from radio signals.
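Here's a crude sketch of that idea (the frame size, pitch-lag range, and threshold are all illustrative, and this is nowhere near production DSP): attenuate frames whose short-time autocorrelation shows no strong periodicity.

    import numpy as np

    def periodicity_gate(x, sr, frame_len=1024, hop=512, threshold=0.3):
        """Attenuate frames of x whose short-time autocorrelation shows
        no significant periodicity, i.e. frames that look like noise."""
        out = np.zeros(len(x))
        window = np.hanning(frame_len)
        for start in range(0, len(x) - frame_len, hop):
            frame = x[start:start + frame_len] * window
            # Autocorrelation via FFT (Wiener-Khinchin), normalized to lag 0
            spec = np.fft.rfft(frame, n=2 * frame_len)
            ac = np.fft.irfft(spec * np.conj(spec))[:frame_len]
            ac = ac / (ac[0] + 1e-12)
            # Strongest autocorrelation over plausible pitch lags (~50-400 Hz)
            periodicity = ac[sr // 400: sr // 50].max()
            gain = 1.0 if periodicity > threshold else periodicity / threshold
            out[start:start + frame_len] += frame * gain  # 50%-overlap-add
        return out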
Getting two sounds to perfectly interfere is basically impossible. Real world sounds are way too complex for that.
Because the vocals are sometimes the only thing that's perfectly in the middle of the mix (balanced left/right), you can invert the left channel of a mix and then recombine it with the right to cancel out only the balanced tracks (so you're left with essentially an instrumental). Subtracting the instrumental track you've just created from the full song (again, by inverting and recombining), you should get the vocal track by itself. Fun stuff, right?
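The first half of that trick is a one-liner and works surprisingly well; the second half is much rougher, since L - R keeps only the "side" information. A sketch, assuming `left` and `right` are the float channels of a finished stereo mix:

    import numpy as np

    # left, right: the two channels of a stereo mix as float arrays
    # Anything panned dead center (often the lead vocal) is identical in
    # both channels and cancels exactly:
    instrumental = left - right

    # The proposed second step: subtract that "instrumental" from the
    # mono mix. In practice off-center content leaks back in, so expect
    # a muddy vocal estimate at best.
    mono = (left + right) / 2.0
    vocal_estimate = mono - instrumental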
I'm not sure how well this works, as I've never done it myself, but it would seem like having the exact signal is important. As people have said, the fan noise isn't predictable enough to cancel using that technique. On the other hand...
I recently read about a machine learning application using K-means clustering in which you feed the algorithm sound from two microphones set up in the same room and it's able to separate the audio by who is speaking. I'm not sure how well such an algorithm would work when one sound is significantly louder than the other (such as quiet fans and people talking), but it certainly points to that being a possibility.
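I can't speak for the specific paper, but a sketch in the same spirit (DUET-style separation, assuming scipy and scikit-learn; all parameters illustrative) clusters time-frequency bins by inter-mic level ratio and phase delay, then masks and resynthesizes:

    import numpy as np
    from scipy.signal import stft, istft
    from sklearn.cluster import KMeans

    def separate_two_voices(mic1, mic2, sr):
        """Cluster time-frequency bins of two mic signals by inter-mic
        level ratio and frequency-normalized phase difference, then
        binary-mask and resynthesize one signal per cluster."""
        f, t, X1 = stft(mic1, fs=sr, nperseg=1024)
        _, _, X2 = stft(mic2, fs=sr, nperseg=1024)
        level = np.log10(np.abs(X2) + 1e-12) - np.log10(np.abs(X1) + 1e-12)
        delay = np.angle(X2 * np.conj(X1)) / np.maximum(f[:, None], 1.0)
        feats = np.stack([level.ravel(), delay.ravel()], axis=1)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats).reshape(X1.shape)
        voices = []
        for k in (0, 1):
            _, x = istft(np.where(labels == k, X1, 0), fs=sr, nperseg=1024)
            voices.append(x)
        return voices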
At the end of the day, the cleaner the source material, the easier it is to process in any fashion you want later on in the airchain. A lot of stations will have noise gates on their microphones, or even wideband gating before material hits the transmitter in some cases. With the right settings, it should be more than plenty.
The challenge is keeping those closets cool with ventilation that does not leak sound.
It's a point of pride in studio design today. The NPR article mentions similar isolation from the newsroom.
Fans have a tonal center around 110-220 Hz, then a lot of harmonics that essentially generate pink noise... nothing that can be corrected in post production or by flipping the phase.
They can switch to SM7s or RE20s and achieve much better S/N with respect to air handling systems, but they want the "U87 sound" (they can keep it, too, IMO - it's painfully neutral) and that just entails a much higher "room noise to signal" ratio.
On a different note, why is it that (as far as non-expert me knows) nice mics often have EQ settings? Is this not a blatant layer violation?
The low-frequency/subsonic content that this switch will remove often consists of wind noise, pops from talking too close, vibrations from the floor/mic stand, etc.
Even if these low-frequency noises are more or less inaudible, they'll contribute to a large signal swing that decreases the headroom in the following amplifiers. In other words, you won't be able to turn up the gain in the microphone amplifier/mixer as much as you'd want, because the low-frequency but high-amplitude content will make the amplifier clip and distort the sound.
Filtering the signal after the mic preamp won't solve the problem, because the preamp (which can provide in the order of 60 dB gain) will be the component that clips.
By including a low-cut filter in the head amplifier circuit inside the microphone (the amplifier serves only as a buffer, and usually with unity gain), the disturbing low-frequency noises can be removed before entering the high-gain microphone preamp. You can find the low-cut switch next to the capsule in the U87 schematic. It seems to me that they increase the capsule bias voltage, thus changing the low frequency response by electrostatically tensioning the membrane.
edit: tweaked some things. meta: i'm not very good at commit messages.
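A toy numerical illustration of that headroom argument (scipy; the gain, clip point, and frequencies are made up): a big subsonic component drives an idealized preamp into clipping, a low-cut before the gain avoids it, and filtering after the clip can't undo the damage.

    import numpy as np
    from scipy.signal import butter, sosfilt

    sr = 48000
    t = np.arange(sr) / sr
    speech = 0.01 * np.sin(2 * np.pi * 300 * t)  # quiet wanted signal
    rumble = 0.5 * np.sin(2 * np.pi * 20 * t)    # large subsonic disturbance

    def preamp(x, gain_db=30):
        """Idealized preamp: linear gain, then hard clipping at +/-1."""
        return np.clip(x * 10 ** (gain_db / 20), -1.0, 1.0)

    sos = butter(2, 80, btype='highpass', fs=sr, output='sos')  # low-cut at 80 Hz

    clipped  = preamp(speech + rumble)                # rumble eats the headroom: clips
    clean    = preamp(sosfilt(sos, speech + rumble))  # cut before the gain: no clipping
    too_late = sosfilt(sos, clipped)                  # the distortion is already baked in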
Here's a Neumann mic from 1952 that could be set remotely. http://recordinghacks.com/microphones/Neumann/M-49
Today it is increasingly common to put the analog stuff as close to the source as possible and remote control it.
This is the 1992 document that finally standardized "pin 2 hot" among other minutiae: http://www.aes.org/publications/standards/search.cfm?docID=1...
The things you mention (vibrations, LF rumble) are typically handled by the highpass on the mic preamp and are in the 75 or 80 Hz range. In musical terms, that's about 3.5 octaves below the 1k proximity rolloff on the mic itself.
This is one of the areas where a recording engineer's problems diverge from a software engineer's intuition.
In software, we have a lot of problems that can ultimately be traced back to not having good "visibility" into the software or its state. This has various far-reaching consequences, from specialized tools to examine software (such as debuggers) to cultural values ("one job per component") that help us reason about complex systems we cannot see.
But in the case of audio, we have sensory organs to directly perceive sound. A bad audio engineer would be able to push a few buttons to move his headphones around in the signal chain and know in seconds where the EQ is occurring. A good one would simply know the U87 rolloff "by sound" and would not have to conduct any investigation at all.
There are some cases where more detailed analysis is done with fancy tools–mastering song recordings, for example. But in live sound, an audio engineer's primary concern is UX, because good UX is the difference between solving a problem quickly and annoying pops or worse. One button that engages a good EQ is much, much better than something like this: http://adn.harmanpro.com/product_attachments/product_attachm.... And epoxying the single button to the correct position is even better still!
You raise a good point about (even in the bad case) moving headphones around the signal chain. And damn, as an (especially functional) programmer I'd love to be able to tap arbitrary locations in a data flow like that. Enough with all these control-flow-based debugging tools!
> One button that engages a good EQ is much, much better than something like this
Well there's always overkill, but would one of those be worse than one of those and then, next in the signal chain, a 1-button EQ whose functionality is completely subsumed?
You should look into dtrace and its new Linux counterpart, whose name currently eludes me. It's almost like being able to do just that.
I was thinking more of a development aid that would render a dataflow graph like max/msp. In the general case the graph itself changes, but that's OK.
No, you want to apply cuts as early as possible in the signal chain if you're trying to prevent overloads later in the chain. The "EQs" that you see on mics are almost always bass cuts designed to reduce proximity effects. This gives you much better dynamic range at the preamp because bass frequencies are energy-intensive, so cutting them early allows the amplifier to not have to do all the work amplifying (and distorting) them, just to cut them later.
Beyond that, with a few exceptions, most of the signal rolled off by the switch is stuff you don't want to hear anyways - "plosive" pops, rumble from low-frequency buildup in rooms, vibration, etc. Much of it can't even be reproduced by most speakers (speakers don't do low bass because, as said before, it's a hard problem and not actually audible in most cases).
As an aside, when mixing records, I'll often roll off guitars at 200 or 300 Hz, losing a couple of octaves of low-frequency information. In theory, this is awful. And if you listen to the solo guitar tracks, it's awful. But in an actual mix, it's great! It gets the low end of the guitars out of the way of real low-frequency instruments like bass and kick drums, making the bass sound tighter and cleaner. The guitars sound clearer too, because they aren't getting muddied by competing bass instruments.
The great challenge of recording is to capture the authority of a roaring guitar amp, the brashness of drums, the purity of a voice, and then squeeze it down into something that fits in an iPhone earbud and still has realistic proportions. That's why the idea of "accuracy" in recording is so ludicrous. A record is to a live performance as an HO train set is to a real train. It's a miniature, a model. To make it look "right", you have to exaggerate some details and lose others.
The goal is to get good sound out of your recording equipment, not to get ISO approval. :-)
Every recording studio or live-venue booth I've seen always seems to have around 1/4 of its equipment in constant use, 1/4 rarely but sometimes used, and 1/2 never used :P. I really do respect the science and art that goes into audio engineering, but damn, that's a lot of clutter.
Now the flip side is that it's a lot easier to trim the fat administering a bunch of computers (and the clutter isn't a huge sunk cost like some once-highly-valuable piece of equipment one might be loath to chuck), and yet this ideal is also far from realized most of the time. Boxes in a closet are just a lot more obvious than files in a directory.
Arguably the compression wars and other such things are also a layer violation: sound engineers can guess at but not be sure of listeners' equipment. With computers and audio (or anywhere engineering, business, and less-educated users meet), optimizing against presumptions is a common source of layer violations.
As has been noted, the particular quirks of the U87 (or the like) mean a filter to lessen proximity effect is useful.
The "EQ settings" on the mic are actually to compensate for proximity effect. On the U87 it starts cutting at 1k -- way above what the interview states.
Another mic commonly seen in broadcast (and top podcasts) is the Shure SM7. Has similar switches, and even comes with a plate you can screw over the top of them so people don't futz with the settings.
Also, rooms, talent, and equipment are often much more influential than mic selection in making a 'signature' sound. But mics are to audio engineers as languages are to programmers... Everyone has a favorite, and opinions border on the religious.
Source: worked in radio engineering for 4 years before starting programming, live audio engineering 4 years after that, and have worked with dozens of mics and varying degrees of on air talent in many environments.
Likewise I'm sure you could swap a U87 on person X with a RE20 on person Y and it might even sound "more like NPR" depending on the natural sounds of their voices.
I agree. The number one rule of good journalism is not to put yourself in the story.
Step back, way back, and let your subject tell their story.
> Current: Yes, but if you’re at a station and kind of frustrated with the bassy, boomy sound you get on your studio mic, and you can’t get the engineering staff to do anything about it for you, one thing that you could do yourself is to look on the mic and see if there is a bass roll-off switch, and turn it on.
Is the interviewer just tossing out "life pro tips" in the middle of an interview? Why does it start "Yes, but" when it's not a counterpoint to _anything_?
Yet they still occasionally have a story with a siren in the background, so I start checking my mirrors and pulling over.
Holy shit, NPR's fascination with sound effects. Yes, I want to listen to 15 fucking seconds of someone chewing loudly between paragraphs. Sirens, crowd noises, people talking about something undoubtedly unrelated in Urdu.
Dear god, NPR's signature sounds are why I don't listen to it.
And it's during this "conversation" that they play the sound clips:
HOST 1: "So there's this interesting thing they've just discovered, Host 2"
HOST 2: "What's that?"
HOST 1: "Scientists have discovered that when you eat..."
chomping and chewing noises
HOST 1: "... you're actually oppressing black people."
prison cell door slams
HOST 2: "What, seriously?"
HOST: "...and that's when we found Joseph SomethingOrOther"
VOICE: "Hi, I'm Joseph SomethingOrOther"
HOST: "Joseph SomethingOrOther is an expert on blahblah"
VOICE: "Blahblah is foobar"
You just told me this person's name 3 times when 1 would've sufficed.
Oh, and the weird music. I'm not sure I need 15 seconds of annoying jazz improvisation at full volume between segments either. But that is indeed part of the "signature sound" :)
There are several scenes throughout the show that poke fun at various aspects of NPR, and they're usually quite funny.
I guess that's not as bad as it sounds, given a twenty year life and daily use.
Going a little further, the mixing console you can see in the picture is a Lawo Sapphire, which NPR seems to have installed in 2012. According to this gearslutz post, a 12-fader version plus central section started at €20,000 back in 2010. You can see in the picture that NPR has a 24-fader version.
The audio monitor on top of the console is an RTW. It's kind of hard to tell which model it really is, but the TM9, for example, starts at $4,200.
And I haven't factored in cost/rent for the studio, music licensing, and a whole lot of other OPEX. Since there are quite a few local radio stations around, I'd say the CAPEX is not a big deal.
Radio pays notoriously badly. Local TV is just as bad, and they can't even afford photographers any more, so reporters do their own shooting; I was laid off as a photographer when that change was made at a former haunt, and newscast quality plummeted. (It's popular and spun positively now: Google "backpack journalism.")
Particularly local, you have to really love the craft to be in it. National/international is better, but to an extent. On-air talent soaks up the salary budget at pretty much every shop, universally, and to use a local example like KTVU or KRON I wouldn't be surprised if on-air talent was making $500k or less. You can do better at Google. The producer running the newscast is probably struggling to make rent. The camera operator certainly is, if they haven't yet been replaced with automatic, motorized cues driven by an overworked TD, who is just as worried about their own rent.
Ad-based business models suck, particularly when owners hate spending money. All media owners do. We got punished for the millions that had to be invested for the HDTV transition with 15% pay cuts at one station in the middle of nowhere.
They have a number of highly paid mid-level people, such as a diversity officer (Keith Woods) who makes $220K.
More notable hosts make over the stated $300K, including Steve Inskeep ($400k) and Michele Norris ($350K –– wage gap!).
Highest honors, at least according to the 2013 990 linked here, go to Renée Montagne, who pulled in $412,581.
These are not fully loaded costs. You probably want to add 10% for D.C.-based employees for employee overhead incl. labor lawyers, retirement contributions, payroll tax, &c.
Not bad to be a corporation that doesn't pay taxes, takes federal taxpayer money, takes state taxpayer money, takes corporate gifts, solicits foundation grants, and asks for individual donations.
They wouldn't be successful at soliciting donations if they had bad broadcasters. Most podcasts have a tiny budget, which is part of the reason most podcasts are terrible.
In the case of NPR staff, I question whether anyone else is bidding on their services.
Audio is a very expensive game to play well.
Object fetishization is not to be dismissed - having dozens of U87s and Manley stuff might be a competitive advantage.
I have a CD or two done with a D&R console and a Mitsu DASH, and I'd say this is comparable. Room and human factors dominate.
You can get 8 channels of A/D-D/A, with Lightpipe for 8 more, for 500 bucks, and while I can't say definitively it's as good as it gets, it loops back at < -90 dB for noise & distortion. It's good enough that the "worst element" is usually me :)
If you want the sound of a Neumann U87, or U47, or Shure SM7, or 57, then nothing - nothing - is going to scratch that itch like the right mic for the job, if you really know what it is you're listening for.
Preamps and signal processors too... I once thought as you did: that digital DAWs have so many advantages over hardware that it just never made sense to put $5000 into a particular piece of hardware when the same money would buy a pile of digital plugins.
Until I had all the digital gear possible, and I came face to face with GIGO: garbage in, garbage out. Audio starts life as analog and that's the beginning of everything that follows in the signal chain. Past that there are certain non-linear things that happen in analog that are particularly euphonic and hard to reproduce digitally.
For example I've got pretty much every plugin compressor ever made, but nothing can take the place of my UA/RE175 compressor, because it's delightfully nonlinear and chaotic and valve-y.
And even if you could perfectly model it, what you can't do is put it where you need it: in the signal chain before A/D conversion, so you're packing as much information into the track as possible. The same goes for mics. Mics distort the sound more than anything else in the straight signal chain. Every mic has a fingerprint - once that fingerprint is in the sound, it's there. Same goes for the fingerprint of the musician's instrument / amp. And the room. Etc.
And thus my maxim: the earlier in the signal chain you can get the sound you want, the better.
"Fix it in the mix" is the devil. "Fix" is something you do to a cat.
People who spend thousands on exotic speakers in the name of "transparency" in their home setup could do so much better in that respect with a pair of active studio monitors. But they might not like the results as much.
Things like Beats have frequency response curves that are basically the polar opposite of the NS-10s'. Even less cringe-worthy headphones, like the Sennheiser HD-280s, have sort of a "smiley face" EQ curve (a dip from ~900-2800 Hz, while boosting very low and very high frequencies). Not to discredit them - they are good headphones - but in doing my own projects, I honestly feel like I've done better work on terrible-sounding $5 headphones.
I have fucked up a few mixes by using $5 headphones. But yes, proper monitoring gear will point out flaws rather than make something sound good. When I get to the end of a mix (using Sony MDR-7506s, for example) and I think "man, this sounds SO CLOSE but it's still a little shitty," then I'll go and check it on iPhone headphones or something and it'll sound fantastic.
This. All that talk about "great detailed sound" that is "as close to the original as possible" from some $1000+ headphones, and yet most music is done on something like the $100 MDR-7506. Moreover, 7506s are very comfortable and sturdy; I can't comprehend why anyone in their right mind would shell out hundreds or even thousands of bucks for their "everyday" headphones (granted, noise-cancelling or sporty ones are a bit of a different thing).
Musicians wear headphones in studios when recording, but audio engineers pretty much never use headphones for mixing except as microscopes to listen in terrific detail for something in particular.
Mix engineers mix on speakers almost exclusively.
This is just my personal experience and opinion (and that of pretty much every audio engineer I've worked with): headphones lie. Ears are designed to perceive sounds in 3-space, not to have individual channels shot down the ear canal bypassing the outer ear, with a sealed transducer directly loading the eardrum. As an audio engineer, you're looking for the truth, and this isn't the way for your ears to best perceive it. Moreover headphones are generally fatiguing compared to a good set of studio monitors in a tuned control room.
So the most representative mix - regardless of listening format - is going to come from speakers. I think most audio engineers are going to agree on this point.
Some audio engineers "mix for the format" - I suppose if you want a mix that is optimized for headphones, then yes you're going to at least check the mix with headphones, but I still doubt many engineers would do the mix on headphones.
Seasoned engineers know that listening fads come and go - the way to make a mix that will stand the test of time is to make a mix that sounds good on anything, and "good studio monitors in a good room" is going to get you there best.
This equipment will have to run 24/7, be constantly worked on, be easy to service, will likely have redundant power-supplies, maybe redundant DSPs/Routing/I/O. And if anything breaks, you'd expect to get a 100% drop-in replacement (in terms of frequency response, gain, ...) delivered over-night, or by a courier/technician.
Regarding the mixing consoles: for live sound (the only area I know a little about), only recently has a wealth of very affordable mixing consoles flooded the market, but anything slightly larger still sets you back ~€10k for an entry system (Midas Pro, Yamaha CL, Soundcraft VI, Allen-Heath GLD). And there, I suspect the number of units shipped is one or two orders of magnitude larger than what Lawo sells for fixed installation in studios.
Generally, the equipment will work flawlessly, even when hauled around every weekend and not handled very gently.
I won't get into it all unless you want me to, but a system for live performance or broadcast has orders of magnitude more work to do, functionality which is demanded by a tiny market relative to consumer electronics, with far higher expectations on reliability, flexibility, etc.
There are certainly people (Pyle, Behringer, etc) who claim to sell cheap solutions, but they are substantially more likely to bite you in the ass in concrete, embarrassing ways if you try to use them in performance environments.
My experience is 100% studio and 0% live, and though I'm not ygra, if you're willing to type, I'd love to read.
Edit: Fair enough, I'm entirely incorrect as far as the headphone price point.
On the production side of things where quality super matters, you'll see the more expensive headphones for sure. But every single person in the last newsroom I saw rocked the 7506, and though I've moved on to more expensive headphones for programming and my music collection, I still miss my 7506s on occasion.
It's like the NS-10s of the headphone world. Well-understood and consistent, while offering a great average, flat experience that translates well over the air.
I don't think there's anything that comes close for ~$100 in terms of sound quality and comfort; you'd have to spend at least $300 for a real upgrade. And even then, there's an element of subjectivity.
But yeah, couldn't agree more. Price point.
Having listened to most of the headphones that Marco reviews, and having owned some of them for both studio and personal listening, I can't help but wonder whether there is some bias in the music he reviews with, or whether he actually does have very skewed hearing. From working in mastering suites and recording studios with fine equipment and rooms, I have probably picked up a preference for a certain type of sound. But when auditioning and getting used to audio equipment, I test with a wide variety of sources, including some music I don't personally like but know quite well (such as country and western), as a reference to understand capabilities across the board.
Whatever one's thoughts, it highlights the importance of auditioning oneself and getting to know something well (goes for speakers, rooms, and headphones). It's possible to produce a good result using MDR-7506s or NS-10s if you are comfortable with them and have the skill to do so.
This is radio, not some boutique studio with a ton of outboard gear.
Other mics - including cheaper Neumanns - don't have quite the same effect.
It's not about capturing the sound perfectly accurately, but about creating an impression with sound. That's why some pro audio gear is so insanely expensive.
It's very hard to create the same impression digitally because this kind of subtle analog distortion is complex, non-linear, and not very well understood.
That said, I don't think the 414 color is the same as the U87 color. My impression has been that the U87 is much more flattering to vocals, but I've admittedly never had the opportunity to use one.
Studio construction, baffling, and noise exclusion also matter.
What I didn't see addressed were some other factors I've noticed and/or been aware of over the years.
NPR's announcers, reporters, and hosts tend to speak conversationally. Rather than shooting for the fill-the-room, highly-inflected (a/k/a Commercial Broadcast) voice, it's the tone you'd expect to hear from someone having a conversation with you in a room (though perhaps a larger room, and with less mumbling). That's a huge difference for me, and unfortunately I'm hearing far more commercial broadcast inflection from the recent crop of announcers, to the point it's quite annoying. Steve Inskeep most particularly.
("Recent", for this old fart, means the past decade or so.)
The other element, and it's one I'm raging against, is that NPR has increasingly moved to live and unedited audio, to its tremendous loss.
When the network was small and it couldn't field reporters in many locations, sound often arrived after hours, or days, at headquarters where it was edited for broadcast. Even the flagship news programs were largely (and in cases entirely) pre-produced, with all news segments, interviews, etc., edited before they went on the air.
Yes, it meant that what you got from NPR was often slightly stale relative to other news outlets, for the latest breaking news (though hourly headlines broadcasts were generally current). But the result was polished and digested. In terms of a program which informed rather than merely screamed, it was, I'd argue, a better product.
NPR has been falling victim to currency bias since the late 1990s and I really don't care for it.
I listen to NPR to get the slightly stale story from someone who has taken the time to gather more than one fact and turn it into a 1 min segment. I can't recall a time when an NPR story got walked back for shoddy reporting (looking at you, FOX).
Posts above have whined about how much NPR talent makes. You should find out what your local morning zoo team and the Ken-and-Barbie team on your Action News make. Those guys don't even get out of bed for $300K a year. Plus, the people listed above also write and edit their material, vs. Ken & Barbie.
At a variety of levels, NPR and their shows are worth every penny. In a few weeks listen to them talk about the RNC / DNC conventions vs the others.
I do have issues with how several NPR reporters and anchors have been terminated, most particularly Bob Edwards and (though she wasn't even a reporter or anchor at the time) Lisa Simeone (one of the most awesome voices on radio, BTW). Juan Williams, OTOH, should never have been hired in the first place.
And it's nice that they appreciate the different destinations for their signals, and take steps to prevent them from getting squashed into mush through multiple compression stages.
Don't ask me how I know.
Among the problems this presents is what I call "the chase", which are the final 30 seconds or so of an interview in which the host or interviewer is quite clearly trying to wrap up with a guest, who either isn't aware of, or doesn't want, to meet that time schedule. It is always awkward, and occurs at the end of EVERY live interview.
(The BBC also exhibits this, all the worse when it's over some long-distance line.)
A pre-recorded, edited interview can go for full feel and length, and then be cut to fit the timeslot. One thing I recall (and miss) from those was a far more gracious interview wrap. I'm sure that far less of what had been said hit the air, but what there was was better for it.
I'd notice it on commercial broadcasts if I listened to those more. I am very well aware of it on NPR and BBC broadcasts. Somewhat less so on CBC.
> We are fans of being close-miked, and P-pops come into play there. But we make sure that we are within a foot of the microphone and usually a lot closer — close to six inches — in working with any of our on-air talent. That’s another element that goes into it.
BBC Radio 3 (a national classical music station in the UK) uses heavy dynamic range compression during the drivetime peak, but far less compression during the day. This compromise provides a better experience for both hi-fi and in-car listeners.
Very heavy dynamic range compression was historically useful for AM broadcasters, because it maximised signal strength.
(at least with modern performances -- perhaps Mozart intended differently, and is now spinning in his grave due to excessive dynamics)
A radio that's off has no listeners.
These days every note of music you listen to on every modern album has a compressor in it somewhere, even if you don't think you can hear it, it's there.
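For anyone wondering what that ubiquitous box actually does, here's a minimal feed-forward compressor sketch (threshold, ratio, and time constants are illustrative): track a smoothed level envelope and, above the threshold, reduce gain by the ratio.

    import numpy as np

    def compress(x, sr, threshold_db=-20.0, ratio=4.0,
                 attack_ms=5.0, release_ms=100.0):
        """x: mono float signal in [-1, 1]. Levels above threshold_db
        are reduced by `ratio`, with attack/release smoothing."""
        atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env, out = 0.0, np.empty_like(x)
        for i, s in enumerate(x):
            coeff = atk if abs(s) > env else rel        # react fast up, slow down
            env = coeff * env + (1.0 - coeff) * abs(s)  # smoothed level envelope
            level_db = 20.0 * np.log10(env + 1e-9)
            over_db = max(0.0, level_db - threshold_db)
            gain_db = -over_db * (1.0 - 1.0 / ratio)    # 4:1 keeps 1/4 of the overshoot
            out[i] = s * 10.0 ** (gain_db / 20.0)
        return out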
The problem with sending compressed signals is that they will probably be recompressed before making it to someone's ear and the chain of codecs may introduce strange artifacts.
A while ago NPR used to use 2B ISDN (112Kbps in the US) for remote locations. I believe they used G.722 codecs for low-latency applications (conversations) and MPEG layer II for higher quality stuff. That was data compression.
But the NPR distribution system isn't so bandwidth limited as to require crushing the data.
Also, compression sounds good. Too much, or badly configured, compression sounds bad, obviously. But don't throw it all away. (Not that producers are at any risk of doing so.)
But I'm not OP?
This is one reason why I've stopped buying "Remastered" CDs - they almost always sound worse. Shop at Discogs.com where you can specify the release date, or buy from a quality-oriented label like mofi.com
Interestingly - I have that CD (2-disc set) from way back in 1984 when it was first released (on the Harvest label in Europe), and it sounds like rubbish. In the early days of CDs the labels were dumping any title they thought would sell onto polycarbonate, and didn't care much about cleaning it up. There were stories of CDs that still had the RIAA equalization curve (from LPs) baked into them.
I'd love to get the Mobile Fidelity pressing, but it was a limited release and now sells for over $200.
Dynamic mics like most stations use (SM7, RE20) hide a lot of sibilance by exaggerating the fundamentals.
That simple U87 costs about 5x what the other broadcasters use (typically an SM7 or RE20) and is much more fragile and difficult to maintain. As I recall NPR bought hundreds of U87s...
They didn't go into any of the other things.
This is not to say that it's a bad thing; people are paying into a cooperative because they enjoy it, and they should continue to enjoy it. But public radio, though avoiding corporate advertising, may still have to pander, just on a lesser scale and in a more democratic manner.
Though there are other audience segments.
If you listen to the NPR humour programmes and segments, they'll often allude to the whiteness of their audience, and it pretty much always draws a laugh.
(Edit: to clarify: CEB are the demographic, though it's not necessarily who NPR are pandering to. See followup.)
NPR grew out of the public broadcasting movement that was emerging in the 1960s (though its roots go deeper), as well as the Baby Boom generation and the democratisation of higher education which had occurred after the 2nd World War. Stir in the Kennedys, Vietnam, the Apollo program, Civil Rights, and Nixon, and you had much of the initial foment from which the project emerged.
NPR has had its largest followings on the coasts -- Boston, New York, Washington, DC, for obvious reasons, San Francisco, Los Angeles, Seattle. These are all still flagship stations and regions (though there are many others).
There are also, and I say this as someone who's driven across and through the US several times, many local stations and "translators", which serve rural and outlying areas, which are a fantastic relief from what's otherwise the monotony of rural radio: Christian broadcasting, bad country-western, and increasingly, Spanish-language radio.
(And if you think NPR panders to its audience, you should try listening to these for a change.)
The problem of getting a high-concept, intelligence-oriented, limited-budget network to appeal to what have been its underserved communities: African-American, Hispanic/Latino, Native American, and other segments, is difficult, but you're seeing this. There are programs which try to bridge that divide as well as reach out to younger audiences (how successfully is another question). Latino USA, Glynn Washington's Snap Judgement, and The Moth come to mind.
Arguably, Car Talk was a crossover show for a long time, bringing in the shade-tree / garage mechanic contingent (though I feel it overstayed its welcome). I'm aware that that program had a huge draw from an otherwise atypical audience though, which probably explains its longevity even into reruns.
And finally: NPR is freely available to all. In an age of podcasts and iPhones, all you need is a cheap radio to pick it up: no subscriptions, no fancy hardware, no malware, no surveillance (though yes, those are available through the various NPR, PRI, PRX, MPR, etc., podcasts, websites, and other online experiences). It's far less, IMO, that NPR targets its core audience than that that's the group to which it appeals and is known. The information provided is certainly democratic and would benefit most. And as I've noted, the network (and its public broadcasting kin) are actively seeking to extend both reach and relevance.
Rather than dismiss this with a smear as you have, what would you see done differently, what do you find missing, and why do (or don't) you listen to public broadcasting?
Another example is religion, or lack thereof. America is a religious place, but NPR reports little on religion.
However I might reluctantly grant you that NPR does not "target" its listeners. The truth might be even more concerning: they are not even aware of their upper-middle-class bias, so they think it is perfectly normal for all high school kids to worry about getting into Harvard.
Many public institutions are highly competitive. But I guess "University of Texas" doesn't sound as elitist as Harvard.
Choosing to compress your audio, or to play classical music instead of jazz is not immoral, at least not as judged by reasonable Americans, so it isn't pandering.
If what you are really getting at, is the idea of the US federal government providing tax revenue to NPR is distasteful to many, that may be true, but it doesn't mean that NPR is pandering to anyone. Similarly, FOX News may or may not be pandering to someone, but their acceptance of federal tax breaks isn't evidence of it.
>By this I mean, consider the leaders of the organization have demographic data on who is most likely to pledge support during their quarterly pledge drives.
By this definition, NPR has admitted to pandering (would only need to establish that NPR consumer === pledge supporter) in this article alone.
A perverse appeal or suck-up.
Trump or Fox News would both be examples of highly successful (and distasteful) pandering.
"Perverse" to you and most HN commenters/readers, maybe.
Trump and Fox News may "pander" to groups who you find distasteful, but they DON'T "pander" to many groups who you don't find distasteful.
Trump was the FIRST PERSON to bring up many issues that had previously been censored by the media. He didn't change his views for the approval of others, he single-handedly SHAPED THE NARRATIVE. Many career politicians started talking differently after he came along. THAT'S pandering.
This NPR page shows that only about 30% of NPR income is non-commercial. Also note the Nestle advertisement on the page (at least it was omnipresent when viewing on a mobile browser).
39% Station dues and fees
23% Corporate sponsorships
9% Distribution and satellite interconnection
71% of revenue from commercial activities
>Non-newsmagazine program fees (for example, [show]) make up the next largest component of station dues and fees revenue.
However, if NPR is using excess satellite airtime to carry non-NPR programming, that could be considered commercial activity, yes.
I think corporate sponsorships are different (if they were functionally the same as another category, they would have been aggregated there). For one, corporations get advertisement time.
Commercial radio is not like this.
I used to work at one of the largest NPR affiliate stations. Listener support is hugely important, most of the money we made came from listeners. IMHO, NPR is pretty successful in that they have built a model that supports quality journalism, at a decent scale, without a lot of annoyances for the user. Newspapers and other media orgs could stand to learn a lot.
In fact, I think there's a lot of anti-HRT sentiment at work in vocal fry. Girls are being told that high-rising intonation is bad, so no wonder they ground out at the bottom of their range.
Women can't win. Not too high, don't push low, don't use like. And so on.
Men also fry, btw, you'd just be hard pressed to find a handwringing article decrying that.
But if you actually hear them speak and don't like their voice, that's not prejudice.
I agree with what Alex Goldman from Reply All has to say about this phenomenon.
What reading I've done about this suggests that to the extent the phenomenon is empirically grounded at all, it applies both to men and women, while complaints about it are almost entirely gendered.
Preference against female vocal fry is just an nth-order effect from basic biology, and bias against females who use vocal fry is just an (n+1)th-order effect. A bias like that is often not a useful decision-making heuristic and should be checked accordingly.
Maybe I'm misinterpreting your comments but it sounds like you share some of the outraged tone I read from the article you linked above, and I'm really confused as to why. It's just a thing some people do with their voices (sometimes but not always consciously) and some people do or don't like (sometimes but not always consciously). Are you saying people shouldn't care about vocal fry at all?
I also don't understand your hesitance to acknowledge the phenomenon of vocal fry. I'm not sure if these are entirely credible sources but here are some links that I found informative:
- https://www.youtube.com/watch?v=FsqW8jdlaSk "How Does Vocal Fry Work?"