
This is also a technique that was used more than 100 years ago for line 4 of the Paris métro: https://www.ratp.fr/en/discover/out-and-about/culture/histor...

Funny that they had to freeze the ground to dig it, as it is one of the hottest lines in Paris (at least it was 20 years ago).

My understanding is that the freezing is normally about stopping water intrusion long enough to drill and place concrete.

Yes, I use rr all day every day (to record Firefox executions) on a rather recent Threadripper Pro 7950, and also with Pernosco. The rr wiki on GitHub explains how to make it work. Once the small workaround is in place it works very reliably.


The way this works (and I'm obviously taking a high level view here) is by comparing what is being played to what is being captured. There is an inherent latency between what is called the capture stream (the mic) and the reverse stream (what is being output to the speakers, be it people talking or music or whatever), and by finding this latency and comparing, one can cancel the music from the speech captured.

Within a single process, or tree of processes that can cooperate, this is straightforward to do (modulo the actual audio signal processing, which isn't): keep what you're playing for a few hundred milliseconds around, compare it to what you're getting from the microphone, find correlations, cancel.
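
To make that concrete, here's a toy sketch (my own illustration in TypeScript, not actual production code; real cancellers use adaptive filters that run block by block, not a one-shot subtraction). `played` is what went to the speakers, `captured` is what the mic picked up:

    // Estimate the playback-to-capture delay by finding the lag that
    // maximizes the cross-correlation between the two signals.
    function estimateDelay(played: Float32Array, captured: Float32Array, maxLag: number): number {
      let bestLag = 0;
      let bestCorr = -Infinity;
      for (let lag = 0; lag <= maxLag; lag++) {
        let corr = 0;
        for (let i = 0; i + lag < captured.length && i < played.length; i++) {
          corr += played[i] * captured[i + lag];
        }
        if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
      }
      return bestLag;
    }

    // Subtract the delayed reference from the capture. A real AEC estimates a
    // whole room impulse response and keeps adapting it instead.
    function cancelEcho(played: Float32Array, captured: Float32Array, maxLag: number): Float32Array {
      const lag = estimateDelay(played, captured, maxLag);
      const out = new Float32Array(captured.length);
      for (let i = 0; i < captured.length; i++) {
        out[i] = captured[i] - (i >= lag ? played[i - lag] : 0);
      }
      return out;
    }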

If the processes aren't related, there are multiple ways to do this. Either the OS provides a capture API that does the cancellation (this is what happens e.g. on macOS for Firefox and Safari), and you can use that: the OS knows what is being output. This is often available on mobile as well.

Sometimes (Linux desktop, Windows) the OS provides a loopback stream: a way to capture the audio that is being played back, and that can similarly be used for cancellation.

If none of this is available, you mix the audio output and perform the cancellation yourself, and the behaviour you observe happens.

Source: I do that, but at Mozilla, and we unsurprisingly have the same problems and solutions.


This reminds me of:

>The missile knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation

https://knowyourmeme.com/memes/the-missile-knows-where-it-is



the missile is eepy https://youtu.be/Csp_OABIsBM


Up to a point that text makes a lot of sense for describing a PID controller, which is a form of control that only really looks at error and tries to get it to zero.


>a PID controller, which is a form of control that only really looks at error

As the name implies the PID controller relies on proportional, integral and derivative information about the error. What you mean is a purely P controller, which just relies on the error.
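
For reference, a minimal sketch of the difference (illustration only, in TypeScript): a full PID controller combines all three terms, and a pure P controller is just the special case where ki and kd are zero.

    // Minimal PID controller: the command is built from the current error (P),
    // its accumulated history (I), and its rate of change (D).
    class PID {
      private integral = 0;
      private prevError = 0;
      constructor(private kp: number, private ki: number, private kd: number) {}

      update(error: number, dt: number): number {
        this.integral += error * dt;
        const derivative = (error - this.prevError) / dt;
        this.prevError = error;
        return this.kp * error + this.ki * this.integral + this.kd * derivative;
      }
    }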

Missiles are also not guided by a PID controller; that would be silly. They (or the guidance computer in the airplane) have to take into account the trajectory of the target and guide the missile in a way to intercept that target, which is not something you can accomplish with just a PID controller.


It wouldn’t surprise me at all if early heat seeking missiles used just a PID controller, since a big part of what makes PID attractive is the ability to implement it with electrical components. Take a pair of IR photodiodes and wire them such that their difference is the error of your PID control, wire the output of the PID to the steering on your missile, and suddenly you have a missile that points at the nearest IR target (on one axis of course).

Modern missiles do better than this, but a missile wired this way with a proximity fuse would hit the target a reasonable amount of the time. Not silly at all if you haven’t invented microcontrollers yet.


"Although proportional navigation was apparently known by the Germans during World War II at Peenemu¨nde, no applications on the Hs. 298 or R-1 mis- siles using proportional navigation were reported [2]. The Lark missile, which had its first successful test in December 1950, was the first missile to use pro- portional navigation. Since that time proportional navigation guidance has been used in virtually all of the world’s tactical radar, infrared (IR), and television (TV) guided missiles [3]. The popularity of this interceptor guidance law is based upon its simplicity, effectiveness, and ease of implementation. Apparently, proportional navigation was first studied by C. Yuan and others at the RCA Laboratories during World War II under the auspices of the U.S. Navy [4]."

From Tactical and Strategic Missile Guidance Sixth Edition.

(To preempt the confusion: proportional navigation isn't a simple P controller; the missile is seeking an intercept path.)
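
(The law itself is tiny though; a hedged sketch of the textbook form, where the commanded acceleration is applied perpendicular to the line of sight:)

    // Proportional navigation in one line: commanded acceleration is
    // proportional to the line-of-sight (LOS) rotation rate, scaled by the
    // closing velocity and a navigation constant (typically 3 to 5).
    function proNavCommand(losRate: number, closingVelocity: number, navGain = 4): number {
      return navGain * closingVelocity * losRate;
    }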

>Not silly at all if you haven’t invented microcontrollers yet.

Apparently the Germans did try that during WW2, but such a missile can not be effective, outside of e.g. bomber intercept.

The "magic" of the AIM-9 Series is that it could achieve this without micro controllers.


  >The "magic" of the AIM-9 Series is that it could achieve this without micro controllers. 

The real magic was the fearless carrier pigeons and selfless kamikaze fighter-pigeon missileers.


PID or even just PI are often used to control missile airframe roll rate. Pitch and yaw may be more advanced systems, sure.


I've observed that hacker culture exists because DARPA funded institutions like MIT for AI research, because the military wanted the missile to know where it is.


This is almost weirdly philosophical. I've been thinking about this all morning.


For a little more context on negative feedback, for those who want to know more (I believe this is what you're referring to?):

Here's a short historical interview with Harold Black from AT&T on his discovery/invention of the negative feedback technique for noise reduction. It's not super explanatory but a nice historical context: https://youtu.be/iFrxyJAtJ7U?si=8ONC8N2KZwq3Jfsq

Here's a more in-depth circuit explanation: https://youtu.be/iFrxyJAtJ7U?si=8ONC8N2KZwq3Jfsq

IIRC the issue was that AT&T was trying to get cross-country calling working, but to make the signal carry further you needed a louder signal. Amplifying the signal also amplified the distortion.

So Harold came up with this method that ultimately allowed enough distortion reduction to let calls cross the country within the power constraints available.
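
The arithmetic behind that, roughly (a simplified sketch of the standard feedback result, not the actual Bell Labs figures): with open-loop gain A and feedback fraction beta, both the gain and the distortion generated inside the amplifier are divided by 1 + A*beta.

    // Closed-loop gain of an amplifier with negative feedback.
    function closedLoopGain(openLoopGain: number, beta: number): number {
      return openLoopGain / (1 + openLoopGain * beta);
    }
    // e.g. closedLoopGain(10000, 0.01) ≈ 99: you give up raw gain, but the
    // amplifier's own distortion shrinks by the same factor of ~101.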

For some reason I recall something about transmission around Denver being a cut-off point before the signal was too degraded... But I'm too old and forgetful, so I could be misremembering something I read a while ago. If anyone has more specific info/context/citations that'd be great, since this is just "hearsay" from memory, but I think it's something like this.


It just seems more logical for the OS to do that, rather than the application. Basically every application that uses microphone input will want to do this, and will want to compensate for all audio output of the device, not just its own. Why does the OS not provide a way to do this?


> Basically every application that uses microphone input will want to do this

The OS doesn't have more information about this than applications and it's not that obvious whether an application wants the OS to fuck around with the audio input it sees. Even in the applications where this might be the obvious default behavior, you're wrong - since most listeners don't use loudspeakers at all, and this is not a problem when they wear headphones. And detecting that (also, is the input a microphone at all?) is not straightforward.

Not all audio applications are phone calls.


>The OS doesn't have more information about this than applications

the OP pointed out that this only works if he uses a browser monoculture

the OS does have more information than that, it can know what is being played by any/all apps, and what is being picked up by the mic


The "OS" isn't special here, apps can listen to system audio.

fwiw, you only need to know anything about outputs if you are doing AEC. Blind source separation doesn't have that problem and can just process the input stream.


> The "OS" isn't special here, apps can listen to system audio.

Even if this is true, it's easy to imagine such functionality being exploited by malicious apps as a security and/or privacy concern, particularly if the user needs a screen reader.

It definitely makes sense for the operating system to provide this functionality.


The OS can have multiple sound input devices for the application to choose from, "raw" and "fuckarounded with"


That doesn't make sense in the context of default devices. macOS's AVKit (or is it CoreAudio?) APIs that configure the streams created on the device make way more sense, since it's a property of the audio i/o stream and not the device.


Assuming this isn't parody, the OS doesn't have to do it automatically. Having an application grab a microphone stream and say to the OS "take this and cancel any audio out streams" might be pretty useful.


I agree with that, but the point I'm trying to make is that audio i/o handling is pretty complicated and application specific. The idea I'm challenging is that "any app that wants microphone input wants this". I'd say only a small number of audio applications that care about mic input want background noise reduced, and it makes sense for this to be configured per input stream.

Really what would be nice is if every audio i/o backend supported multiplex i/o streams and you could configure whether or not to cancel audio based on that set of streams but not all output (because multi output-device audio gets tricky).


I'm honestly having trouble thinking of a case where I wouldn't want this.

I'm sure there are some niche cases, but in those cases, the application can specifically request that the OS turn off audio isolation.


The technique introduces latency and distortion because it's subtracting an estimate of sound that's traveling/reflecting in the listening environment, which is imperfect and involves the speed of sound.

That latency is within the tolerance that users are comfortable with for voice chat, and much less than video processing/transfer is introducing for video calls anyway, so it's a very obvious win there. Especially since those users are most interested in just picking out clear words using whatever random mic/speaker configuration happens to be most convenient.

But musicians, for instance, are much more interested in minimizing the delay between their voice or instrument being captured and returned through a monitor, and they generally choose a hardware arrangement that avoids the problem in the first place. And that's not really a niche use case.


Live video or audio chat is basically the only time you do want this. Granted, that’s a big chunk of microphone usage in practice, but any time you are doing higher fidelity audio recording and you have set up the inputs accordingly you absolutely do not want the artefacts introduced by this cancellation. DAWs, audio calibration, and even live audio when you’ve ensured the output cannot impact the recording all would want it switched off.

Default on vs default off is really just an implementation detail of the API though, as you say.


> Live video or audio chat is basically the only time you do want this.

If I'm recording a voice memo, or talking to an AI assistant, I would want this. Basically everything I can imagine doing with a PC microphone outside of (!) professional audio recording work.

That last case is important and we agree there needs to be a way to turn it off. I think defaults are really important though.


My colleague works in a very quiet house, and has no need for noise cancelling. Sometimes, he has it turned on by accident, and the quality is much worse - his voice gets cut out, and volume of his voice goes up and down.

As you say, as long as either option is available, the only question is what the default should be.


I gave an example, when I'm wearing headphones I don't want this enabled. If I'm recording anything, I probably don't want it on either. If I'm using a virtual output, I don't want AEC to treat that as a loudspeaker.


Every normal application already does it through the OS, because most do not care about this at all.

Music player, browser, games, video player...

Audio is not app specific

The only applications where this is true are audio apps where you want full control and low latency.

I find your take very weird.


> Why does the OS not provide a way to do this?

Some do.

But you need to have a strong-handed OS team that's willing to push everybody towards their most modern and highly integrated interfaces and sunset their older interfaces.

Not everybody wants that in their OS. Some want operating systems that can be pieced together from myriad components maintained by radically different teams, some want to see their API's/interfaces preserved for decades of backwards compatibility, some want minimal features from their OS and maximum raw flexibility in user space, etc


> Some do

Which operating systems do this?


macOS has done this in recent versions. Similarly it will do all the virtual background and bokeh stuff for webcams outside of the (typically horrific) implementations in video conferencing apps.


Others have already pointed out macOS/Linux, here's Windows:

https://learn.microsoft.com/en-us/windows-hardware/drivers/a...


As others have noted, this is trivial for most macOS and iOS apps to opt in to.

Frankly, I imagine it's also available at the system level on Windows (and maybe Android and Linux) but probably only among applications that happen to be using certain audio frameworks/engines.


It doesn't seem to me that module-echo-cancel in Pulseaudio completely meets the requirements here (only one source), but it looks close, and seems in general like where you would implement something like this.

1. https://www.freedesktop.org/wiki/Software/PulseAudio/Documen...


I think module-null-sink and module-loopback could be used to create a virtual source which combines multiple sources, though the source/sink thing makes my head spin. Or, more simply, I suppose using the loopback of whatever audio output device does the combination (and the same mixing) for you, if you play all audio through one output device (which is most likely)?


> though the source/sink thing makes my head spin

Wait, what other audio paradigms are there?


Dunno, I meant more that it's an unintuitive way of thinking about a data-flow graph to me, moreso when introducing virtual sinks/sources.


so something for systemD then?


[flagged]


Weird macrosensitivity


Huh? What's the aggression in characterizing the diverse landscape of operating systems and the users/developers who very reasonably may prefer each?

I think it's very good that we have so many options of what an operating system and its vendors/developers might prioritize, and that these differences in priority have consequential impact on how software gets built on each.


On mac/iOS, you get this using the AVAudioEngine API if you set voiceProcessingEnabled to true on the input node. It corrects for audio being played from all applications on the device.


My first thought in reading the question was “if your browser is doing that, your platform architecture has… some room for improvement”.


Having room for nontrivial improvement is, to be fair, a normal state of affairs for platforms.


This has certainly made conference calls significantly more usable. I feel like it must have come around during 2020, because pre-covid I would go around BEGGING everyone I did calls with to get a headset, because otherwise everyone else's voice would echo back through their microphone 0.75s later. Recently I realized I could just literally do calls out loud on my laptop mic and speaker and somehow it works. Nice to know why!


This assumes there is an OS-managed software mixer sitting in the middle of all audio streams between programs and devices. Historically, that wasn't the case, because it would introduce a lot of latency and jitter in the audio. I believe it is still possible for a program to get exclusive access to an audio output device on Windows (WASAPI) and Linux (ALSA).


Historically, true, but nowadays it's pretty much standard for all the big OS.

Being able to get exclusive access/bypass the system via certain means (ASIO would be another) doesn't make it go away.


The OS doesn't know that the application doesn't want feedback from the speaker, and not 100% of applications will want such filtering. I think a best practice from the OS side would be to provide it as an optional flag. (Default could be on or off, with reasonable possibility for debate in either direction, but an app that really knows what it wants should be able to ask for it.)


There is a third place: a common library that all the apps use. If it is in the OS then it becomes brittle. If there's an improvement in the technology which requires an API change, that becomes difficult without keeping backwards compatibility or the previous implementation forever. Instead, there would be a newer generation common library which might eventually replace the first but only if the entire ecosystem chooses to leave the old one behind. Meanwhile there'd be a place for both. Apps that share use of a library would simply dynamically link to it.

This is the way things usually work in the Free Software world. For example: need JPEG support? You'll probably end up linking to libjpeg or an equivalent. Most languages have a binding to the same library.

Is that part of the OS? I guess the answer depends on how you define OS. On a Free Software platform it's difficult to say when a given library is part of the OS and when it is not.


> If it is in the OS then it becomes brittle

My experience is the opposite. When it's part of the OS, it's stable and you just say "you need OS version X or better" and it will just work. When it's a library, you eventually end up in dependency hell of deprecated libraries and differing versions (or worst case, the JavaScript ecosystem when the platform provides almost nothing and you get npm).


Depends on the OS I guess. When it's established enough, all distributions carry a high enough version that it's not an issue. If it's not established enough, I'd argue that it isn't ready to be part of an "OS" anyway (regardless of the definition of that word).


I suppose the OS probably makes something like this available. When using VoiceOver on Mac and presenting in Teams, by default only the mic comes into Teams; you need to do something to share the other processes' audio.

That's Mac of course, but in my experience Windows is much more trusting with what it gives applications access to, so I suppose the same thing is available there.


How sure are you that basically every application wants this? Should there be a flag at the OS level for enabling the cancellation? How do you control that flag?



It would be trivial to pass that flag in whatever API the application calls to request access to the microphone stream.
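
For what it's worth, that's pretty much what the web platform already does: the processing is requested (or declined) as a constraint when the app asks for the microphone, e.g.:

    // Ask for the mic with echo cancellation (and friends) enabled; a DAW-style
    // app would pass false to get the raw signal instead.
    const stream = await navigator.mediaDevices.getUserMedia({
      audio: {
        echoCancellation: true,
        noiseSuppression: true,
        autoGainControl: true,
      },
    });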


Did you just invent yet another Linux audio stack?


At the lowest level it's a Fourier transform over a system's response (your room; the echo chamber's response is known from some test sound), and the expected output, passed through that transform on its way to the mic, is subtracted. Most SoCs and machines have dedicated systems for that. The very same chip produces the echo of the surroundings.
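
Roughly, in code (a magnitude-only simplification; real implementations work on complex spectra and keep re-estimating the response):

    // If room[k] is the measured response and played[k] the spectrum of what
    // was output, the expected echo is room[k] * played[k]; subtract it from
    // the mic spectrum and clamp at zero.
    function cancelInFrequencyDomain(mic: Float32Array, played: Float32Array, room: Float32Array): Float32Array {
      const out = new Float32Array(mic.length);
      for (let k = 0; k < mic.length; k++) {
        out[k] = Math.max(0, mic[k] - room[k] * played[k]);
      }
      return out;
    }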


Is there any way to apply this outside the browser? Like, is there a version of this that can be used with Pulseaudio?


To spare others from googling:

https://docs.pipewire.org/page_module_echo_cancel.html

https://wiki.archlinux.org/title/PipeWire/Examples#Echo_canc...

If you're still on pulseaudio for some reason, it ships with a similar module named "module-echo-cancel":

https://www.freedesktop.org/wiki/Software/PulseAudio/Documen...


Huh, thanks. I was interested in this probably 6-8 years ago, and when I went digging the stackoverflow answer mentioned elsewhere in this thread [0] was as far as I got. I guess the tech has progressed since then.

[0] https://stackoverflow.com/questions/21795944/remove-known-au...


It was there 8 years ago.


We've been working on the Web Codecs API for a few years now. It only handles encoding and decoding media codecs; ffmpeg does much more (container handling, filters, and everything needed really).

Web Codecs can take a compressed media packet and get you the decoded image or audio buffer it corresponds to. Conversely, it can take audio or images with timestamps and get you a series of encoded media packets that you can then containerize (we say mux) into e.g. an mp4 file.

https://w3c.github.io/webcodecs/
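
The decode path looks roughly like this (a sketch; `packet` and `timestampUs` are placeholders for what your own demuxer would hand you):

    // Compressed chunks in, raw VideoFrames out; the container side (mp4/webm
    // parsing and writing) stays outside of Web Codecs.
    const decoder = new VideoDecoder({
      output: (frame) => {
        // paint it to a canvas, hand it to WebGL, etc.
        frame.close();
      },
      error: (e) => console.error(e),
    });

    decoder.configure({ codec: "vp8" });

    declare const packet: ArrayBuffer;  // from your demuxer
    declare const timestampUs: number;  // microseconds
    decoder.decode(new EncodedVideoChunk({ type: "key", timestamp: timestampUs, data: packet }));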


Can I just say that I was disappointed by the Web Codecs API leaving muxing the end result as a "draw the rest of the fucking owl" thing.


I (Firefox developer working on anything media related) got in contact with the dev on Twitter, and he told me that Web Codecs was missing (and we're shipping this in a month or so; it's been in Nightly for some time), along with something to save the project file to disk (https://developer.mozilla.org/en-US/docs/Web/API/Window/show...).

So I spoofed the user-agent in a nightly build here on my Linux desktop workstation, then had to alias one method that we should have implemented years ago but only have with a `moz` prefix (`HTMLMediaElement.mozCaptureStream`). This is on us to fix.

Then it looks like a worker script is served with the `Content-Type` `text/html` instead of `application/javascript` or something like that. We also have a pref flip to bypass that check, so I did that, but this is on the dev to fix.

When you do this it works, I've loaded project demos containing videos, audio, various things composited on top, scrubbed the timeline aggressively in a debug build, moved things around in various bits of the interface and also in the rendering frame, etc., things seem to work as they should, perf is as I'd expect it to be (and again, I'm running it in a debug build with optimizations disabled for anything media related, enabled for other parts of the browser).

What's missing is `window.showSaveFilePicker` and file system related stuff. It's possible to use https://developer.mozilla.org/en-US/docs/Web/API/File_System... instead (that we ship, e.g. Photoshop on the Web uses it). We think that it's much less scary than giving access to the file system to a content process of a Web browser. Maybe because videos can sometimes be extremely big files, direct access to the FS could be of use there. Thankfully, we also ship extremely modern video encoders to make them tiny instead, but that's currently a limitation Firefox has, for better or worse.

https://paul.cx/public/pikimov-firefox-nightly.webm


I would really like to be able to give webpages access to a real local file/folder. It's one of the main barriers to using web apps as production apps and is IMO one of the drivers pushing everything into (generally proprietary, locked in) cloud storage.

Obviously the permissioning model needs to be thought out here. It could perhaps only be available to "PWA"s that have been "installed", only on https sites, and only once explicit permission has been given, etc.

But it's so cumbersome to have upload/download be the only way to sync files into a web app.


Wow, so you do have a workaround for the missing window.showSaveFilePicker, that's promising!


Probably the dev needs to use a different method to save (effectively download), much as Adobe Photoshop does when it exports files on Firefox. Not hard to do at all. Likely the same for reading files, if this tool needs that.

OPFS provides high-speed read and write for temporary files and non-exported files; again this is what Photoshop uses. There is a 10GB limit per domain currently. I'm not sure this particular app actually needs that, though
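
A minimal sketch of what that looks like from an app's point of view (file name and contents made up); this is the origin-private file system, so no permission prompt is involved:

    // Write a hypothetical project file into OPFS and read it back.
    const root = await navigator.storage.getDirectory();
    const handle = await root.getFileHandle("project.json", { create: true });

    const writable = await handle.createWritable();
    await writable.write(JSON.stringify({ tracks: [] }));
    await writable.close();

    const file = await handle.getFile();
    const project = JSON.parse(await file.text());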


Aren't you replying to the dev? ;)


padenot already has


Thanks for taking the time to investigate what's currently the gap with FF. As a long-time Firefox user, I'm hoping you can guide this dev regarding ways to get things working from his end while also using this app's needs to inform FF improvements from your end.


Thanks, I have some of the same issues with FF on www.render.video - will look into this ASAP :)


ShazamKit, fetch the BPM from that, do BPM detection from the mic, compare?


It works offline, though.


I just tested and a song that works fine normally failed as soon as I turned Airplane Mode on.


Does it? I don't see that claim being made anywhere.

They do say that the audio is processed locally, but that does not preclude them from making an API call to find a signature match.

> The audio stream is processed locally on your device and never recorded.


"Grooved does not use any third party library or API, just the built-in components provided by Apple."


ShazamKit is not third party, it is provided by Apple. Their website states "The audio stream is processed locally on your device and never recorded.", though this could refer to the audio statistics being collected locally, I assume ShazamKit still contacts an Apple API to get the classification.


We're aiming to release this half, so in a month or so, on all desktop platforms; mobile will follow shortly after.

Then we'll gradually optimize release after release (e.g. enable the use of more hardware encoders; decoding will generally be in hardware at launch where supported), but generally almost everything will be supported at launch.


Great! Thanks for letting me know; users will finally no longer be faced with the sad info that it's not supported :D


Please open a bug at https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&comp... if that's the case, and CC padenot@mozilla.com, I'll have a look next week.


Here's the bug I filed several years ago: https://bugzilla.mozilla.org/show_bug.cgi?id=1684718 - I just cc'd you


I interpreted the comments on that bug to mean that the Adafruit site has an unmuted video that happens to have a silent audio track, which isn’t strictly a muted autoplay issue. But maybe I’m wrong.


Yeah, I think there was some confusion about which video on the page was causing the issue, but the test cases I created are much simpler - just one video per page.


about:preferences#privacy, scroll down a bit, it's under "Permissions". You can also adjust it when on the site, using the icon at the left of the URL bar.


Thanks!! :)


We've put a dedicated section called Media in about:support; it has decoding capabilities and other things such as audio I/O information.

If you find that it is not accurate, e.g. by cross-checking via other means, please open a ticket at https://bugzilla.mozilla.org/enter_bug.cgi, component "Audio/Video".


> We've put a dedicated section called Media in about:support, it has decoding capabilities

I see, I hadn't found it when searching for the codec name because it only said something like "Information not available. Try again after playing a video." After opening a random YouTube video and refreshing about:support, it did show the hardware decoding information, and it was as I had expected given the hardware on this computer.

