The way this works (and I'm obviously taking a high-level view here) is by comparing what is being played to what is being captured. There is an inherent latency between what is called the capture stream (the mic) and the reverse stream (what is being output to the speakers, be it people talking or music or whatever), and by finding this latency and comparing, one can cancel the music from the speech captured.
Within a single process, or a tree of processes that can cooperate, this is straightforward to do (modulo the actual audio signal processing, which isn't): keep what you're playing around for a few hundred milliseconds, compare it to what you're getting in the microphone, find correlations, cancel.
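To make the within-process idea concrete, here's a minimal sketch (the cancelEcho helper and its brute-force correlation are my illustration, not anyone's actual implementation; a real canceller uses an adaptive filter rather than a pure delay and unit gain):

    import Foundation

    // Minimal sketch of the within-process case described above (hypothetical
    // helper, not any shipping canceller): estimate the playback-to-capture
    // delay by brute-force cross-correlation, then subtract the delayed
    // reference from the mic signal.
    func cancelEcho(mic: [Float], reference: [Float], maxDelaySamples: Int) -> [Float] {
        // 1. Find the delay with the highest correlation between what was
        //    played (reference) and what the mic picked up.
        var bestDelay = 0
        var bestScore = -Float.infinity
        for delay in 0...maxDelaySamples {
            let n = min(mic.count - delay, reference.count)
            guard n > 0 else { break }
            var score: Float = 0
            for i in 0..<n {
                score += mic[i + delay] * reference[i]
            }
            if score > bestScore {
                bestScore = score
                bestDelay = delay
            }
        }
        // 2. Subtract the reference, shifted by that delay.
        var out = mic
        let n = min(mic.count - bestDelay, reference.count)
        for i in 0..<n {
            out[i + bestDelay] -= reference[i]
        }
        return out
    }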
If the processes aren't related, there are multiple ways to do this. Either the OS provides a capture API that does the cancellation (this is what happens e.g. on macOS for Firefox and Safari), and you can use this: the OS knows what is being output. This is often available on mobile as well.
Sometimes (Linux desktop, Windows) the OS provides a loopback stream: a way to capture the audio that is being played back, and that can similarly be used for cancellation.
If none of this is available, you mix the audio output and perform cancellation yourself, and the behaviour you observe happens.
Source: I do that, but at Mozilla, and we unsurprisingly have the same problems and solutions.
>The missile knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation
Up to a point that text makes a lot of sense for describing a PID controller, which is a form of control that only really looks at error and tries to get it to zero.
>a PID controller, which is a form of control that only really looks at error
As the name implies, the PID controller relies on proportional, integral, and derivative information about the error. What you mean is a pure P controller, which relies only on the error itself.
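For reference, the textbook form of the control law (standard notation, nothing specific to this thread):

    u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \frac{d e(t)}{dt}

A pure P controller keeps only the first term, u(t) = K_p e(t).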
Missiles are also not guided by a PID controller; that would be silly. They (or the guidance computer in the airplane) have to take into account the trajectory of the target and guide the missile in a way to intercept that target, which is not something you can accomplish with just a PID controller.
It wouldn’t surprise me at all if early heat seeking missiles used just a PID controller, since a big part of what makes PID attractive is the ability to implement it with electrical components. Take a pair of IR photodiodes and wire them such that their difference is the error of your PID control, wire the output of the PID to the steering on your missile, and suddenly you have a missile that points at the nearest IR target (on one axis of course).
Modern missiles do better than this, but a missile wired this way with a proximity fuse would hit the target a reasonable amount of the time. Not silly at all if you haven’t invented microcontrollers yet.
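A toy software version of that wiring (every name and gain here is made up for illustration; the real thing was analog circuitry, not code):

    // Two IR photodiodes; their difference is the error fed into a PID
    // controller whose output drives the steering on one axis.
    struct PID {
        var kp: Float
        var ki: Float
        var kd: Float
        var integral: Float = 0
        var lastError: Float = 0

        mutating func update(error: Float, dt: Float) -> Float {
            integral += error * dt
            let derivative = (error - lastError) / dt
            lastError = error
            return kp * error + ki * integral + kd * derivative
        }
    }

    var steering = PID(kp: 0.8, ki: 0.05, kd: 0.1)

    // One control tick: the error is simply rightIR - leftIR and the output
    // goes to the fin actuator for this axis. A hotter target on the right
    // produces a positive command, steering the missile toward it.
    func controlTick(leftIR: Float, rightIR: Float, dt: Float) -> Float {
        return steering.update(error: rightIR - leftIR, dt: dt)
    }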
"Although proportional navigation was apparently known by the Germans
during World War II at Peenemu¨nde, no applications on the Hs. 298 or R-1 mis-
siles using proportional navigation were reported [2]. The Lark missile, which
had its first successful test in December 1950, was the first missile to use pro-
portional navigation. Since that time proportional navigation guidance has been
used in virtually all of the world’s tactical radar, infrared (IR), and television
(TV) guided missiles [3]. The popularity of this interceptor guidance law is
based upon its simplicity, effectiveness, and ease of implementation. Apparently,
proportional navigation was first studied by C. Yuan and others at the RCA
Laboratories during World War II under the auspices of the U.S. Navy [4]."
From Tactical and Strategic Missile Guidance Sixth Edition.
(To preempt the confusion: proportional navigation isn't a simple P controller; the missile is seeking an intercept path.)
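For the curious, the law itself is usually written as (standard textbook notation, not quoted from the book above):

    a_c = N' V_c \dot{\lambda}

where a_c is the commanded acceleration, N' the navigation constant (typically 3 to 5), V_c the closing velocity, and \dot{\lambda} the rotation rate of the line of sight to the target. Driving \dot{\lambda} to zero puts the missile on a collision triangle with the target rather than pointing it straight at the target, which is why it isn't just a P controller on the pointing error.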
>Not silly at all if you haven’t invented microcontrollers yet.
Apparently the Germans did try that during WW2, but such a missile cannot be effective outside of, e.g., bomber intercept.
The "magic" of the AIM-9 series is that it could achieve this without microcontrollers.
I've observed that hacker culture exists because DARPA funded institutions like MIT for AI research, because the military wanted the missile to know where it is.
For a little more context on negative feedback, for those who want to know more (I believe this is what you're referring to?):
Here's a short historical interview with Harold Black from AT&T on his discovery/invention of the negative feedback technique for noise reduction.
It's not super explanatory but a nice historical context: https://youtu.be/iFrxyJAtJ7U?si=8ONC8N2KZwq3Jfsq
IIRC the issue was AT&T was trying to get cross-country calling working, but to make the signal carry further you needed a louder signal. Amplifying the signal also amplified the distortion.
So Harold came up with this method that ultimately allowed enough distortion reduction to allow calls to cross the country within the power constraints available.
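The standard way to summarise Black's result, in textbook notation (my addition, not from the interview):

    A_f = \frac{A}{1 + A\beta}

With forward gain A and feedback fraction \beta, the overall gain drops to A_f, but distortion and gain drift generated inside the amplifier are reduced by roughly the same factor (1 + A\beta), which is what made cascading many repeater amplifiers across a long line workable.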
For some reason I recall something about Denver being a cut-off point for transmission before the signal was too degraded... But I'm too old and forgetful so I could be misremembering something I read a while ago. If anyone has more specific info/context/citations that'd be great, since this is just "hearsay" from memory, but I think it's something like this.
It just seems more logical for the OS to do that, rather than the application. Basically every application that uses microphone input will want to do this, and will want to compensate for all audio output of the device, not just its own. Why does the OS not provide a way to do this?
> Basically every application that uses microphone input will want to do this
The OS doesn't have more information about this than applications and it's not that obvious whether an application wants the OS to fuck around with the audio input it sees. Even in the applications where this might be the obvious default behavior, you're wrong - since most listeners don't use loudspeakers at all, and this is not a problem when they wear headphones. And detecting that (also, is the input a microphone at all?) is not straightforward.
The "OS" isn't special here, apps can listen to system audio.
fwiw, you only need to know anything about outputs if you are doing AEC. Blind source separation doesn't have that problem and can just process the input stream.
> The "OS" isn't special here, apps can listen to system audio.
Even if this is true, it's easy to imagine such functionality being exploited by malicious apps as a security and/or privacy concern, particularly if the user needs a screen reader.
It definitely makes sense for the operating system to provide this functionality.
That doesn't make sense in the context of default devices. macOS's AVKit (or is it CoreAudio?) APIs that configure the streams created on the device make way more sense, since it's a property of the audio i/o stream and not the devices.
Assuming this isn't parody, the OS doesn't have to do it automatically. Having an application grab a microphone stream and say to the OS "take this and cancel any audio out streams" might be pretty useful.
I agree with that, but the point I'm trying to make is that audio i/o handling is pretty complicated and application specific. The idea I'm challenging is that "any app that wants microphone input wants this", which is dubious. I'd say only a small number of audio applications that care about mic input want background noise reduced - and it makes sense for this to be configured per input stream.
Really what would be nice is if every audio i/o backend supported multiplex i/o streams and you could configure whether or not to cancel audio based on that set of streams but not all output (because multi output-device audio gets tricky).
The technique introduces latency and distortion because it's subtracting an estimate of sound that's traveling/reflecting in the listening environment, which is imperfect and involves the speed of sound.
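For a rough sense of the acoustic part of that delay (assuming a 3 m speaker-to-mic path, nothing implementation specific):

    t \approx \frac{d}{c} = \frac{3\ \mathrm{m}}{343\ \mathrm{m/s}} \approx 8.7\ \mathrm{ms}

and the canceller has to search over at least that much delay, on top of whatever buffering the playback and capture paths add.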
That latency is within the tolerance that users are comfortable with for voice chat, and much less than video processing/transfer is introducing for video calls anyway, so it's a very obvious win there. Especially since those users are most interested in just picking out clear words using whatever random mic/speaker configuration happens to be most convenient.
But musicians, for instance, are much more interested in minimizing the delay between their voice or instrument being captured and returned through a monitor, and they generally choose a hardware arrangement that avoids the problem in the first place. And that's not really a niche use case.
Live video or audio chat is basically the only time you do want this. Granted, that’s a big chunk of microphone usage in practice, but any time you are doing higher fidelity audio recording and you have set up the inputs accordingly you absolutely do not want the artefacts introduced by this cancellation. DAWs, audio calibration, and even live audio when you’ve ensured the output cannot impact the recording all would want it switched off.
Default on vs default off is really just an implementation detail of the API though, as you say.
> Live video or audio chat is basically the only time you do want this.
If I'm recording a voice memo, or talking to an AI assistant, I would want this. Basically everything I can imagine doing with a PC microphone outside of (!) professional audio recording work.
That last case is important and we agree there needs to be a way to turn it off. I think defaults are really important though.
My colleague works in a very quiet house, and has no need for noise cancelling. Sometimes, he has it turned on by accident, and the quality is much worse - his voice gets cut out, and volume of his voice goes up and down.
As you say, as long as either option is available, the only question is what the default should be.
I gave an example, when I'm wearing headphones I don't want this enabled. If I'm recording anything, I probably don't want it on either. If I'm using a virtual output, I don't want AEC to treat that as a loudspeaker.
But you need to have a strong-handed OS team that's willing to push everybody towards their most modern and highly integrated interfaces and sunset their older interfaces.
Not everybody wants that in their OS. Some want operating systems that can be pieced together from myriad components maintained by radically different teams, some want to see their API's/interfaces preserved for decades of backwards compatibility, some want minimal features from their OS and maximum raw flexibility in user space, etc
macOS has done this in recent versions. Similarly it will do all the virtual background and bokeh stuff for webcams outside of the (typically horrific) implementations in video conferencing apps.
As others have noted, this is trivial for most macOS and iOS apps to opt in to.
Frankly, I imagine it's also available at the system level on Windows (and maybe Android and Linux), but probably only among applications that happen to be using certain audio frameworks/engines.
It doesn't seem to me that module-echo-cancel in Pulseaudio completely meets the requirements here (only one source), but it looks close, and seems in general like where you would implement something like this.
I think module-null-sink and module-loopback could be used to create a virtual source which combines multiple sources, though the source/sink thing makes my head spin. Or, more simply, I suppose using the loopback of whatever audio output device does the combination (and the same mixing) for you, if you play all audio through one output device (which is most likely)?
Huh? What's the aggression in characterizing the diverse landscape of operating systems and the users/developers who very reasonably may prefer each?
I think it's very good that we have so many options of what an operating system and its vendors/developers might prioritize, and that these differences in priority have consequential impact on how software gets built on each.
On mac/iOS, you get this using the AVAudioEngine API if you set voiceProcessingEnabled to true on the input node. It corrects for audio being played from all applications on the device.
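A minimal sketch of that opt-in (the tap and buffer size are illustrative; on iOS you'd also configure AVAudioSession for play-and-record):

    import AVFoundation

    let engine = AVAudioEngine()

    do {
        // Opt the input node into Apple's voice processing (echo cancellation
        // plus noise suppression), available on macOS 10.15+ / iOS 13+.
        try engine.inputNode.setVoiceProcessingEnabled(true)

        // Tap the processed input: audio being played by the device has
        // already been cancelled out of these buffers.
        let format = engine.inputNode.outputFormat(forBus: 0)
        engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // consume the echo-cancelled microphone audio in `buffer`
        }

        try engine.start()
    } catch {
        print("audio setup failed: \(error)")
    }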
This has certainly made conference calls significantly more usable. I feel like it must have come around during 2020, because pre-covid I would go around BEGGING everyone I did calls with to get a headset, because otherwise everyone else's voice would echo back through their microphone 0.75s later. Today I recently realized I could just literally do calls out loud on my laptop mic and speaker and somehow it works. Nice to know why!
This assumes there is an OS-managed software mixer sitting in the middle of all audio streams between programs and devices. Historically, that wasn't the case, because it would introduce a lot of latency and jitter in the audio. I believe it is still possible for a program to get exclusive access to an audio output device on Windows (WASAPI) and Linux (ALSA).
The OS doesn't know that the application doesn't want feedback from the speaker, and not 100% of applications will want such filtering. I think a best practice from the OS side would be to provide it as an optional flag. (Default could be on or off, with reasonable possibility for debate in either direction, but an app that really knows what it wants should be able to ask for it.)
There is a third place: a common library that all the apps use. If it is in the OS then it becomes brittle. If there's an improvement in the technology which requires an API change, that becomes difficult without keeping backwards compatibility or the previous implementation forever. Instead, there would be a newer generation common library which might eventually replace the first but only if the entire ecosystem chooses to leave the old one behind. Meanwhile there'd be a place for both. Apps that share use of a library would simply dynamically link to it.
This is the way things usually work in the Free Software world. For example: need JPEG support? You'll probably end up linking to libjpeg or an equivalent. Most languages have a binding to the same library.
Is that part of the OS? I guess the answer depends on how you define OS. On a Free Software platform it's difficult to say when a given library is part of the OS and when it is not.
My experience is the opposite. When it's part of the OS, it's stable and you just say "you need OS version X or better" and it will just work. When it's a library, you eventually end up in dependency hell of deprecated libraries and differing versions (or worst case, the JavaScript ecosystem when the platform provides almost nothing and you get npm).
Depends on the OS I guess. When it's established enough, all distributions carry a high enough version that it's not an issue. If it's not established enough, I'd argue that it isn't ready to be part of an "OS" anyway (regardless of the definition of that word).
I suppose the OS probably makes something like this available. When using VoiceOver on Mac and presenting in Teams, by default only the mic comes into Teams; you need to do something to share the other processes' audio.
That's Mac of course, but in my experience Windows is much more trusting about what it gives applications access to, so I suppose the same thing is available there.
How sure are you that basically every application wants this? Should there be a flag at the OS level for enabling the cancellation? How do you control that flag?
At the lowest level it's a Fourier transform over a system's response (your room's echo response is known from some test sound), and the expected output, passed through that transform on its way to the mic, is subtracted. Most SoCs and machines have dedicated systems for that. The very same chip produces the estimate of the echo from the surroundings.
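In symbols (standard notation, filling in what the comment is gesturing at): with X(f) the playback signal, \hat{H}(f) the measured room transfer function, and M(f) the mic capture,

    E(f) = M(f) - \hat{H}(f)\, X(f)

i.e. the expected echo \hat{H}(f) X(f) is subtracted from what the mic actually picked up, leaving the near-end speech plus estimation error.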
Huh, thanks. I was interested in this probably 6-8 years ago, and when I went digging the stackoverflow answer mentioned elsewhere in this thread [0] was as far as I got. I guess the tech has progressed since then.