Show HN: Blur Webcam Background on Linux (github.com/jashandeep-sohi)
174 points by dumdumdumdum on Aug 30, 2021 | 48 comments



Needed to blur the clutter behind me for meetings. Zoom and Google Meet don't support that on Linux, and I couldn't find any existing solution that would give me a decent frame rate at HD resolution without lagging. Wrote this instead.

Relies heavily on GStreamer under the hood for the video processing bits, and MediaPipe for the selfie segmentation. You'll obviously need v4l2loopback to emulate a webcam. It's functional, but feedback and contributions are welcome :)
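For anyone curious how this kind of blur works, here's a minimal runnable sketch of the compositing step (my own reconstruction from the description above, not code from the repo): blur the whole frame, then blend the sharp original back in wherever the segmentation mask says "person".

```python
import numpy as np

def box_blur(frame: np.ndarray, k: int = 15) -> np.ndarray:
    """Naive box blur via 1D convolutions along each axis
    (a slow stand-in for OpenCV's boxFilter)."""
    out = frame.astype(np.float32)
    kernel = np.ones(k, dtype=np.float32) / k
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, out)
    return out

def composite(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """mask is HxW in [0, 1]: 1 = person (kept sharp), 0 = background (blurred)."""
    blurred = box_blur(frame)
    m = mask[..., None]  # broadcast the mask over the color channels
    return (m * frame + (1.0 - m) * blurred).astype(np.uint8)
```

In the real project the mask comes from MediaPipe's selfie segmentation and the blur from OpenCV; the blend itself is just this weighted sum.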


Zoom's desktop client on Linux added support for this recently (August 20th). I tested it a bit, and it seems to work okay. It has blur support.

See https://support.zoom.us/hc/en-us/articles/205759689-Release-... and https://support.zoom.us/hc/en-us/articles/360043484511


That's good news, but I'm kinda stuck using the web interface because the desktop client doesn't allow sharing single Wayland windows in presentation mode. Unless they've changed that recently too?


I was in a meeting with a vendor the other day, and I got an option to enable blur in the web interface in Chrome. Might be a beta feature.


It doesn't support blur:

> The blurred background option is only available for the Windows and macOS desktop clients, as well as the Android and iOS mobile apps. Desktop clients must meet the "Image only without a physical green screen" requirements.


I think the docs just haven't been updated; I have blur in my local Zoom client (running https://aur.archlinux.org/packages/zoom).

Here's a screenshot of the settings: https://imgur.com/a/f1XgOwz


Hm, on my Ubuntu 20.04 I don't have blur, only 3 virtual backgrounds, none of which work unless I click "I have a green screen."

I did a full reinstall; the version is the latest, and the dependencies are all installed and up to date. Weird.


Have you checked the requirements page? It's only supported on certain processors.

https://support.zoom.us/hc/en-us/articles/360043484511


Ah crap, I was sure I had a 4-core CPU, but it's actually a 2-core, 4-thread one. That explains everything. Funnily enough, on Windows 2 cores are enough.


Chrome on Linux here. Google Meet blurs my background just fine.


Not on Firefox though for some reason. Is there some special API only Chrome implements?


Nope, no special API whatsoever. In fact, there's even a Firefox add-on that makes it work[0]; Google Meet just doesn't target anything beyond Chromium with this feature.

If you try using it, they'll redirect you here[1], and tell you to:

> Check if your browser supports WebGL 2.0 at webglreport.com and verify that “Major Performance Caveat” is marked as “No”.

...but clicking through to that site from Firefox shows WebGL 2.0 support, and "Major Performance Caveat" is indeed marked as "No". That being said, simply changing the user agent to Chrome also doesn't work.

So in conclusion, I end up using Chromium a few hours per week purely for Google Meet's blurry backgrounds, and I use Firefox for everything else.

[0] https://addons.mozilla.org/en-US/firefox/addon/mercator-stud...

[1] https://support.google.com/meet/answer/10058482?hl=en&expand...


Any reason not to use the add-on you mention in [0] instead of falling back to Chromium?


Not really, I'm just used to this workflow. I tried the add-on, it works, and then I went back to Chromium for this use case out of habit.


Played with the extension; it doesn't seem to enable this feature: the option appears, but then there's an error message. The other features are fun, though.


It works for me on Chrome, but I had to use Wayland (not Xorg) and then force-enable GPU rendering, because it's currently blacklisted on Wayland. Possibly related to that blacklist: every couple of days (even with reboots) new tabs in an existing browser window stop rendering content and I have to quit and relaunch. That doesn't seem to happen on Xorg / without GPU.


Kinda. Google uses the WebAssembly SIMD extension for their segmentation. Not sure if they have a fallback on Firefox, but if they do, it most likely won't perform the same way.


Firefox has had WASM SIMD for a few months now; it was enabled by default in Firefox 89.

https://bugzilla.mozilla.org/show_bug.cgi?id=1695585


Thank you! I'm going to try this ASAP today and test it with Zoom. I use Zoom heavily for work, and this is something the Linux client has been lacking.


Zoom now supports this on Linux natively as of a new release a couple of weeks back.


Yep, and Google Meet always has for me (about half a year of using it).


To add: OBS can output to v4l2loopback, so it can feed a virtual camera too.

And OBS has lots of neat features - one of which is making my fans loud. Like a crowd cheering it on!


Yeah, this script will also heat your home :)

Especially if you use the Zoom desktop client. Zoom on the web (Firefox) seems to be easier on the CPU for some reason, though.


Any chance this could be integrated with OBS? To select filters, show previews, etc.

Going forward, I wonder if OBS will offer pipewire or gstreamer streams (if those are a thing) for consumption by other apps?


There is an OBS plugin that allows you to use gstreamer nodes as filters: https://github.com/fzwoch/obs-gstreamer


One thing to note about OBS, as much as I love it and use it all the time: if you're using it to record/stream just one source, you're likely wasting a lot of resources. I see many people using it to record their screen or camera, or pipe it to v4l2loopback, which OBS is not particularly efficient at. Not something you're likely to notice on a modern workstation, but on an older machine or laptop the overhead is definitely noticeable.

Dedicated recording apps for cameras and screens will give you better performance and occasionally a marginally better image since there are fewer steps in the pipeline.

For v4l2loopback, a short GStreamer pipeline (for those willing to use the CLI) will perform better and give you noticeably lower latency, even if some basic cropping is required, like when you're using a capture card with a camera whose UI you can't hide.
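For illustration, here's roughly what such a pipeline looks like. I'm composing the gst-launch-1.0 command in Python for clarity; the device paths and crop values are made-up examples, so adjust for your setup.

```python
# Rough shape of a camera -> crop -> loopback pipeline for gst-launch-1.0.
# /dev/video0, /dev/video10, and the crop amounts are assumptions.
elements = [
    "v4l2src device=/dev/video0",
    "videocrop top=0 bottom=48 left=0 right=0",  # e.g. cut off a UI bar
    "videoconvert",
    "v4l2sink device=/dev/video10",              # the v4l2loopback device
]
cmd = "gst-launch-1.0 " + " ! ".join(elements)
print(cmd)  # run the printed command in a shell (with gstreamer + v4l2loopback set up)
```

GStreamer links elements left to right with `!`, so the whole thing stays a one-liner in a terminal.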


> And OBS has lots of neat features - one of which is making my fans loud.

You're likely using software encoding. My 2015 MBP has a GT750M, but "thanks" to Apple OBS can't use hardware acceleration (https://obsproject.com/forum/threads/question-about-hardware...). Situation on Linux is likely similar.


The last time I tried, v4l2loopback wasn't part of my kernel, at which point I decided it wasn't worth tinkering with, since I didn't know how stable the module was.


Perhaps you can use OBS instead of holding the space bar: https://xkcd.com/1172/


If you happen to have a RealSense camera, I wrote a virtual camera for Linux that blurs your background using the actual measured depth map.

https://github.com/dheera/bokeh-camera


This looks very useful and far less hacky than my solution using OBS's Browser source and BodyPix/TensorFlow.js (https://usher.dev/posts/make-your-webcam-look-slightly-more-...).

Can't get it to work at the moment - but hopefully with a bit more experimenting, this will replace my current options.

Hadn't realised until reading this submission that Zoom recently added blur support on Linux. Trying it out, it does seem quite poor though: a very sharp edge, and quite slow to update.


I believe the Raspberry Pi 4B can act as a USB client device (e.g. a webcam using the Pi camera as the video source); could this run on one of those? That would give you an external webcam with built-in background removal, independent of your local setup, video call app, and OS.


I quite like this as it is one of the few comprehensive GStreamer examples that is readable. Kudos.


I see it uses SelfieSegmentation and then OpenCV's boxFilter. Do these automatically use CUDA/OpenCL when available, or does that need special setup?

Would it be possible to avoid the conversion to RGB? (This forum thread says it's CPU-only: https://forums.developer.nvidia.com/t/videoconverts-performa...)


Nope, no GPU yet :(

Right now SelfieSegmentation is just a thin wrapper around the selfie segmentation solution provided in https://google.github.io/mediapipe. It operates on RGB frames, so that's why I need the conversion. Model inference is also done on the CPU. Interestingly, there is a GPU MediaPipe graph available, but I haven't looked into what's needed to use it yet.

And yes, BoxFilter is just a wrapper around OpenCV's boxFilter. This is probably the lowest-hanging fruit that could be moved to the GPU.
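To illustrate why a box filter stays usable even on the CPU: with prefix sums it costs O(1) per output sample regardless of kernel width. A 1D sketch (my own, with the assumption that edge samples use a shrunken window):

```python
import numpy as np

def box_filter_1d(x: np.ndarray, k: int) -> np.ndarray:
    """Mean over a k-wide window via prefix sums: O(n) total, independent of k.
    Edge samples average over a shrunken window (one common convention)."""
    half = k // 2
    prefix = np.concatenate([np.zeros(1), np.cumsum(x, dtype=np.float64)])
    lo = np.clip(np.arange(len(x)) - half, 0, len(x))
    hi = np.clip(np.arange(len(x)) + half + 1, 0, len(x))
    return (prefix[hi] - prefix[lo]) / (hi - lo)
```

A 2D box filter is just this applied along rows and then columns, which is also part of why it's an easy GPU port.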



I want to prank my students. During class, I want to play a video as my video background, in which a pre-recorded version of me is doing things behind me.

Any ideas how I can do this on Linux?


I've been using https://github.com/allo-/virtual_webcam_background

There are many others similar to the above. They all basically work the same way: use the v4l2loopback driver to create a loopback video device, then have the program consume video from the real webcam device, apply a deep learning model that subtracts the background (leaving your face and torso), replace it with an image or video of your choice, and output the result to the loopback device. Finally, configure your video conferencing software (Zoom, Teams, WebEx, whatever) to use the loopback device instead of the real one.
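The loop those tools implement can be sketched like this, with every real component stubbed out (camera capture, the segmentation model, and the v4l2loopback write are all placeholders here):

```python
import numpy as np

def replace_background(frame, mask, background):
    """frame/background: HxWx3 uint8; mask: HxW in [0, 1], 1 = person."""
    m = mask[..., None]
    return (m * frame + (1.0 - m) * background).astype(np.uint8)

def run_pipeline(read_frame, segment, background, write_frame, n_frames):
    for _ in range(n_frames):
        frame = read_frame()        # in reality: read from /dev/video0
        mask = segment(frame)       # in reality: a segmentation network
        write_frame(replace_background(frame, mask, background))  # -> loopback

# Stubbed demo: 3 frames, "person" on the left half of a tiny 4x4 image.
frames_out = []
person_mask = np.zeros((4, 4))
person_mask[:, :2] = 1.0
run_pipeline(
    read_frame=lambda: np.full((4, 4, 3), 200, dtype=np.uint8),
    segment=lambda f: person_mask,
    background=np.full((4, 4, 3), 10, dtype=np.uint8),
    write_frame=frames_out.append,
    n_frames=3,
)
```

Swap the stubs for a real capture device, a real model, and a loopback writer and you have the whole architecture these projects share.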


It's "easily" done with OBS and a green screen; however, getting sufficient quality for the prank to work is trickier than it sounds (matching the lighting, getting good keying quality, avoiding the green cast, calibrating and matching the color, etc.).


I am not looking for perfection. Just want to get a laugh out of them.

I also just realized that if I place my laptop camera in exactly the same spot while recording the background video and during class, all I need to do is sit on the left of the frame during class and stay on the right in the background video. Then I just need to stitch the two videos side by side in OBS and it should work well enough.
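That stitching step is trivial, e.g. with NumPy (assuming both videos share the same resolution and a fixed camera position; names here are illustrative):

```python
import numpy as np

def stitch_halves(live: np.ndarray, recorded: np.ndarray) -> np.ndarray:
    """Left half from the live frame, right half from the pre-recorded one."""
    w = live.shape[1]
    return np.hstack([live[:, : w // 2], recorded[:, w // 2 :]])
```

OBS's crop filter plus two side-by-side sources does the same thing without any code.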

I think I achieved my purpose here. Ask a question from a bunch of smart people. Either they give a great answer, or their response forces you to rethink and figure out a simple solution. Thank you HN for being my AI enabled rubber duck.


Yeah, if you don't have a "real" green screen, you can probably shoot a video from the same angle where you come in from one side and stay on that side. Then use OBS's crop filter and keep your live video on the other side.

I just said to hell with it and bought a bright green sheet early on last year since I was spending so much time in meetings and assisting professors with running their classes. That sheet and a couple of cheap clamp lights (plus a decent tabletop mic) made a huge difference in the quality and entertainment value of my telepresence work.


Well, the easiest way I can think of is just using the OBS virtual camera (unsure if they've released it officially on Linux, but there is a plugin for that [1]).

Use chroma key to remove the background and add a video there, possibly on a loop.

[1] https://github.com/CatxFish/obs-v4l2sink


It is present for Linux in recent releases.


Cool, thanks, I look forward to trying this out. I've been annoyed ever since my company moved to Teams and I can't do all the cool background effects that my Windows and Mac coworkers have access to.


Is there anything like this for Windows or Mac? Many third-party videoconferencing solutions don't offer blur, e.g. when you're interviewing with an employer, using Jitsi Meet, etc.


Along the same lines (an issue I was interested in a while back): has anyone written a script on Linux to get the webcam to dynamically crop to faces/people?


I wonder how the CPU usage compares to Google Meet in Chrome. It seems Google Meet in Chrome doesn't rely on the v4l2loopback kernel module.


Thank you. Will definitely give this a shot!



