Show HN: Remove-bg – open-source remove background using WebGPU (bannerify.co)
286 points by anduc 73 days ago | hide | past | favorite | 121 comments
Yesterday, I saw a post on X asking for a self-hostable background-remover service. I was thinking: can we make it work using WebGPU, so it runs in the browser and doesn't require any server or queue?

After a couple of hours, I created this and published the source code on https://github.com/ducan-ne/remove-bg

It's still new, so any ideas and contributions are welcome

Powered by WebGPU and Transformers.js (RMBG V1.4 model)
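For anyone curious what the post-processing looks like: models like RMBG produce a per-pixel matte, and "removing" the background is then just writing that matte into the image's alpha channel. A minimal plain-JavaScript sketch (a hypothetical helper, not the project's actual code):

```javascript
// Apply a background-removal mask (one 0..255 value per pixel) to an
// RGBA pixel buffer by writing it into the alpha channel. Pixels the
// model scored as background become transparent.
function applyMask(rgba, mask) {
  if (rgba.length !== mask.length * 4) {
    throw new Error("mask and image sizes do not match");
  }
  const out = new Uint8ClampedArray(rgba);
  for (let i = 0; i < mask.length; i++) {
    out[i * 4 + 3] = mask[i]; // alpha <- mask value
  }
  return out;
}
```

In a real page the `rgba` buffer would come from a canvas `getImageData` call and the mask from the model's output tensor.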




Feels like it could be nice to abide by the license terms: https://bria.ai/bria-huggingface-model-license-agreement/

> 1.1 License. BRIA grants Customer a time-limited, non-exclusive, non-sublicensable, personal and non-transferable right and license to install, deploy and use the Foundation Model for the sole purpose of evaluating and examining the Foundation Model.

> The functionality of the Foundation Model is limited. Accordingly, Customer are not permitted to utilize the Foundation Model for purposes other than the testing and evaluation thereof.

> 1.2. Restrictions. Customer may not:

> 1.2.2. sell, rent, lease, sublicense, distribute or lend the Foundation Model to others, in whole or in part, or host the Foundation Model for access or use by others.

> The Foundation Model made available through Hugging Face is intended for internal evaluation purposes and/or demonstration to potential customers only.


A lot of these AI licenses are far more restrictive than old-school open-source licenses were.

My company runs a bunch of similar web-based services and plans to do a background remover at some stage, but as far as I know there are no current models with a sufficiently permissive license that can also feasibly be downloaded and run in browsers.


Meta's second Segment Anything Model (SAM2) has an Apache license. It only does segmenting, and needs additional elbow grease to distill it for browsers, so it's not turnkey, but it's freely licensed.


Yeah, that one seems to be the closest so far. Not sure if it would be easier to create a background removal model from scratch (since that's a simpler operation than segmentation) or distill it.


I got pretty far down that path during Covid for a feature of my saas, but limited to specific product categories on solid-ish backgrounds. Like with a lot of things, it’s easy to get good, and takes forever to get great.


Keep in mind that whether or not a model can be copyrighted at all is still an open question.

Everyone publishing AI models is acting as if they owned copyright over them, and as such is sharing them with a license. But there's no legal basis for such a claim at this point; it's all about pretending and hoping the law will be changed later on to make the claim valid.


Train on copyrighted material

Claim fair use

Release model

Claim copyright

Infinite copyright!


It's a 2024 model. For comparison, https://github.com/danielgatis/rembg/ uses U2-Net, which has been open source since 2022. There is also https://github.com/ZhengPeng7/BiRefNet (another 2024 model, also open source); it's not too late to switch.


It's kind of silly to complain about not abiding by the model license when these models are trained on content not explicitly licensed for AI training.

You might say that the models were legally trained since no law mandates consent for AI training. But no law says that models are copyrightable either.


AI model weights are probably not even copyrightable.


Surely they would at least be protected by Database Rights in the EU (not the US):

>The TRIPS Agreement requires that copyright protection extends to databases and other compilations if they constitute intellectual creation by virtue of the selection or arrangement of their contents, even if some or all of the contents do not themselves constitute materials protected by copyright

https://en.wikipedia.org/wiki/Database_right


Those require the "database" in question to be readable and for every single element to be so too. Model weights don't satisfy that requirement.


At some point the world's going to need a Richard Stallman of AI who builds up a foundation that is usable and not in the total control of major corporations, with reasonable licensing. OpenAI was supposed to fit that mold.


The repo doesn’t include the model.


Does the site not distribute it?


It doesn't, except that it runs it. There's no download link or code playground for running arbitrary code on it, so while it technically transfers the model to the computer where it's running (I think), that's not usually considered the same as distributing it.


Pretty sure downloading it to your browser counts as distributing it, legally speaking.


I think it's a bit more subtle than that. The code of this tool runs in your browser and makes it download the model from Hugging Face. So it does not host the model or provide it to you; it just does the download on your behalf, directly from where the owner of the model put it. The author of this tool is not providing the model to you, just automating the download for you. Not saying it's not a copyright violation, and IANAL, but it's not an obvious one.


AYAL?


Sure!


Yeah, that doesn't sound right to me.


What's the point of running it in WebGPU then?

I think it's either running the whole model in the browser or at least a small part of it there. Maybe it's downloading parts of the model on the fly. But I kinda doubt it's all running on the server with just some simple RPC calls to the browser's WebGPU.


> What's the point of running it in WebGPU then?

Use client resources instead of server resources.


Anyone can easily do an online/offline binary check for web apps like these:

1. Load the page

2. Disconnect from the internet

3. Try to use the app without reconnecting


Well, my question is about where it lies within the gray area between fully online and fully offline, so that wouldn't work.

Edit: Good call! It's fully offline - I disabled the network in Chrome and it worked. Says it's 176MB. I think it must be downloading part of the model, all at once, but that's just a guess.

The 176 MB is in storage, which makes me think my browser will hold onto it for a while. That's quite a lot. My browser really should provide a disk-clearing tool that's more like OmniDiskSweeper than Clear History. If, for instance, it showed just the entries over 20 MB and my profile was using 1 GB, there would be at most 50 of them, a manageable amount to go through and clear the ones I don't need.
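Per-origin usage numbers aren't exposed to pages today (a site can only query its own usage via `navigator.storage.estimate()`), but the triage view described above is simple once you have them. A sketch with hypothetical data:

```javascript
// Given { origin, bytes } entries, keep only the ones over a threshold
// (default 20 MB), largest first - the "OmniDiskSweeper" view the
// comment asks for. Illustrative only: browsers don't currently give
// pages a per-origin breakdown; only DevTools/internals can see it.
function largeStorageUsers(entries, thresholdBytes = 20 * 1024 * 1024) {
  return entries
    .filter(e => e.bytes > thresholdBytes)
    .sort((a, b) => b.bytes - a.bytes);
}
```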


Yeah, this is why I think browsers need to start bundling some foundational models for websites to use. It doesn't scale if many websites each start trying to store a significantly sized model.

Google has started addressing this. I hope it becomes part of web standards soon.

https://developer.chrome.com/docs/ai/built-in

"Since these models aren't shared across websites, each site has to download them on page load. This is an impractical solution for developers and users"

The browser bundles might become quite large, but at least websites won't be.


As long as there’s a way to disable it. I don’t want my disk space wasted by a browser with AI stuff I won’t use.


It would be cool if it could ask before loading the model, or at least indicate to me how large the download will be, as I'm on a metered connection right now.

But maybe that's just a me-problem.


After living with satellite Internet as the only option for about 15 years, now that I have fiber, I still catch myself declining downloads that are too big and opening the scheduler.

Old habits die hard.

And the modern Internet implicitly assumes the end user is not on a metered connection. Websites are fucking massive these days.


Looks like ~4 MB, I think that's a fair size to not throw up warnings about (unless I'm missing something in the Network view of dev tools w/o cache). That said I wonder what people consider the "Click to enlarge (may take a while to load)" courtesy size to be in 2024.


> Looks like ~4 MB

You got me!

The model was 176 MB. Total pageload transferred 182 MB.

https://imgur.com/a/6xx3Lgu

It doesn't seem like "Disable cache" in the DevTools empties the Cache Storage.


I would probably consider 50 MB that size, or in the special case of metered connections 20 MB (for example downloading maps or so).


50 MB might be fine for desktops on effectively unlimited & high speed connections, but consider the case of a mobile user with a few GB of data per month. Might be unacceptable for them. Not sure how common that case is in the US, but certainly possible outside the US.


Great to have local tools. Here's another one that uses the exact same combination of technologies: https://huggingface.co/spaces/Xenova/remove-background-web (Feb 2024)


Exactly this. As mentioned in the post, I used the same technology as this playground (and copied lots of code from it). What I did is mostly make the UX better.

PS: WebGPU is the future


Nice! This is the model it uses, for anyone curious (it's also mentioned in the description):

https://huggingface.co/briaai/RMBG-1.4


How does this compare to "segment anything" from Meta


Much smaller and better at background removal, but doesn't segment everything.


The very first image I uploaded (A model lighthouse with a very obvious background) gives just "Error".


Does your browser support WebGPU? https://webgpu.github.io/webgpu-samples/


The error is ambiguous right now and I'll try to make it clearer (contributions welcome). The idea is that it falls back to not using WebGPU if your browser doesn't support it, but it was made in 2 hours, so a bug is acceptable :)
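The fallback described here can be sketched roughly like this (illustrative only; Transformers.js has its own device option, and the function name is an assumption):

```javascript
// Backend selection sketch: prefer WebGPU when navigator.gpu exists
// and an adapter can actually be obtained, otherwise fall back to
// WASM. Takes the navigator object as a parameter for testability.
async function pickBackend(nav) {
  if (nav && nav.gpu) {
    try {
      const adapter = await nav.gpu.requestAdapter();
      if (adapter) return "webgpu";
    } catch (e) {
      // adapter request failed; fall through to wasm
    }
  }
  return "wasm";
}
```

In a page this would be called as `pickBackend(navigator)` and the result passed to the inference library's device setting.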


Thanks for sharing this repo. While I don't have time to actively contribute to the code, I have been testing on images to share my feedback for future devs.

1. Background removal works well on a lot of different types of images, including plain or white backgrounds, men, women, children, hair, and pets.

2. After background removal, the new image is warped in some areas. For example, I have a picture of a child eating ice cream: the background was removed perfectly, but it left a lot of artifacts on the child. I can share those images for testing.

Please let me know if there are other areas I can test.


Well, I gave it two tries before giving up. It does not do anything in Firefox, and in Chrome it also failed after loading some data and told me to restart Chrome with some unsafe GPU flag.


Hey there, sorry for the issue. Does it show the status "Error"? If so, can you try enabling WebGPU with the flags "--enable-unsafe-webgpu --enable-features=Vulkan"? If not, it doesn't seem like a common error and I will spend more time testing later.

Feel free to raise an issue on GitHub to track progress (it helps open source a lot), or you can DM me on x.com/duc__an anytime


Someone mentioned:

> Looks like WebGPU is only on nightly for FF, for now


Hi,

Nice work. I recently also published a WebGPU version of our browser-only background removal library. It uses the ONNX Runtime under the hood; weights are from isnet. It could also run BiRefNet – if there is some interest – however, the BiRefNet weights are almost 1 GB, which is a bit much to download, I guess.

There is a blog post about it and also a CPU only version available.

https://img.ly/blog/browser-background-removal-using-onnx-ru...

Source is available at: https://github.com/imgly/background-removal-js

and on npm: @imgly/background-removal

Feel free to check it out!


Very cool idea, but for me it completely doesn't work. I'll open an image, the entire computer (running Linux) will briefly freeze (including stopping Spotify in its tracks) and when it comes back, the only thing that's left is a message from Firefox telling the tab has crashed.


A website being able to do this is arguably a browser bug.

If a browser's sandbox can't even protect against accidental resource exhaustion, I'd be very concerned about that as an intentional attack vector.


What browser protects against resource exhaustion? Chrome and all variants do not. Running out of memory, CPU, or even hard drive space can and does happen in all browsers.


Every browser I know meters all APIs capable of using disk space, and 100% CPU usage doesn’t hang your system on any reasonable OS.

Memory can indeed be a problem, but at least if a tab becomes the largest single memory user on my system, the OOM killer will come for it first.

So if for CPU and memory browsers can lean on the OS for proper resource management but it’s not the case for GPU, maybe their WebGPU implementations aren’t ready for production yet.


Before the OOM killer activates, the system will swap out all other applications, including the desktop shell. If you are unlucky enough to use a spinning hard drive, your system will freeze for a long time.

This is a problem with popular Linux distributions: they have no protection against one application swapping out important system processes.

Linux has disk quotas and a CPU scheduler, but it doesn't have fair memory and swap management (or it is not configured out of the box). For example, the desktop shell has no protection against being swapped out.


Ah yes, that is indeed very annoying. That's why I usually don't configure a swap partition on Linux, although that probably ends up wasting a lot of memory on write-once-read-never pages that could otherwise be used for the page cache.

A per-process real memory maximum that automatically invokes the OOM killer unless somehow opted out would probably be useful.


Browsers do not protect from tab using all available memory and swapping out other applications including desktop shell.


Looks like WebGPU is only on nightly for FF, for now.


Sounds a bit like running out of memory, followed by the OOM killer going after the process... though I'm not sure the OOM killer would act that quickly. Do you get any interesting messages in `dmesg`?


You're spot on - dmesg shows it's OOMing, and pretty fast too. Turns out it happens even if I just open the web page, let alone start trying to remove the background of an image, so I'd guess it's something to do with loading whatever model is being used.


I tried it on my Mac in Chrome, but I'm not sure how it works on other devices (assuming it works). I'll test it out later. Sorry for the inconvenience.


Please don't apologise for any inconvenience! It's a cool project and I'd love to see it working, I'm just sorry I can't give any positive feedback


Non-descript error after 180 seconds. Vivaldi on Android.

Please turn off the rolling animation for the duration timer. It looks really wrong when the numbers wind back (which they wouldn't do on a rotor) and when the trailing zeroes vanish.


Why can't I just "apt-get install" tools like this?

I used to have all the powerful tools at my fingertips but now I feel like my Linux distro is slowly disappearing into a vortex of irrelevance.


I wouldn't blame apt-get; there are a lot of command-line tools nowadays. I don't usually install them if it's not going to be a frequent task.

Anyway, if you're looking for a command-line tool, rembg [0] is a pretty good one.

[0]: https://github.com/danielgatis/rembg


For every Linux power user, there are probably a million regular web users who don't know (or care to know) what a command line is. Web tools are for them.

Why not fork the repo and repackage it as a CLI app if you really want it that way?


Because someone just created it, so no one has packaged it yet.


People are writing software for GPUs more commonly now, and because of the troubled GPU programming landscape it's hard to make programs that work on more than one vendor's proprietary flavour of GPU. WebGPU is an attempt to address some of that in browser apps, but it's not ready yet.


Thanks for sharing; it looks like this has trouble with certain kinds of images. Here's likely the most representative example:

Original: https://imgur.com/a/NrEXfua

BG removed: https://imgur.com/a/JWKHVGE

Much of the background was untouched, and almost all of the actual data (the axes and bars) was removed instead.


Sorry for the late reply, I missed the comment.

Yeah, I think it's because of the quality of the model; hopefully we will have better quality in the near future. I will see if there's anything I can do with the settings.


I've tried many different background removal algorithms and found that InSPyReNet has been the most successful one. I use `transparent-background` on pip a whole bunch and it rarely fails on me compared to remove-bg and whatnot

https://github.com/plemeri/transparent-background


Hi guys, thanks for your interest in my little project, I'm glad you all like it.

I just spent some free time making it much better based on your feedback:

- Mobile support, tested on my iPhone

- GitHub README added

- Added medium zoom for easier viewing on Mobile

- The error banner is now friendlier and makes troubleshooting easier

- Troubleshooting section in README

If you have any ideas to make the UX better, please let me know; I'd appreciate it.


Opened, uploaded image, looked good (it worked!). Then my browser (Arc) started freezing, unfroze after closing your website :/


Sorry for the inconvenience. I've tested with Arc and can confirm this error, though I don't know why it happens. I will check further to determine the cause and report it to the upstream library (I think it's the engine or Arc itself).


Arc is built on Chromium but doesn't show Chrome's built-in popups and error dialogs.

Most probably it's because of cache utilization and network overload. It could be solved by clearing the cache and creating a service worker to manage the model download and invalidate memory.


I tried your site (https://bannerify.co/tools/remove-bg) with an image. The page crashed after 10 or so seconds and Chrome reported Crashpad_NotConnectedToHandler. It doesn't work!


Can you raise this issue on GitHub? I will take a look later, since it's not a common error; I think it's because of some settings in the browser.

I've heard someone report that Arc had a similar error.


Status: Error

Firefox on Linux: Error: Unsupported device: "webgpu". Should be one of: wasm.

Chromium on Linux: Error: no available backend found. ERR: [webgpu] Error: Failed to get GPU adapter. You may need to enable flag "--enable-unsafe-webgpu" if you are using Chrome.

Passing the --enable-unsafe-webgpu flag results in the same error.


for Chromium, on Linux you also need to run it with --enable-features=Vulkan https://github.com/gpuweb/gpuweb/wiki/Implementation-Status#...


This worked, thanks!


This!


Maybe it's because of how I detect the GPU and switch to another backend (to support devices that don't support WebGPU).

Can you go to https://pmndrs.github.io/detect-gpu/ and paste the result here?


{ "fps": 60, "gpu": "amd radeon r9 200", "isMobile": false, "tier": 3, "type": "BENCHMARK" }

It's an AMD Radeon RX 6600.

It worked on Chromium after passing --enable-unsafe-webgpu --enable-features=Vulkan


“chrome://gpu/” may give more clues as to what went wrong.


Firefox doesn't support WebGPU FYI


You might want to warn about memory usage. I had Chromium consume several GBs of RAM and had to terminate it.

But it is great that there are now more offline tools. It would be great if there were a browser API that allowed a page to voluntarily go offline, to guarantee that no data will be leaked.


Several GBs of RAM is definitely too much; I wasn't aware this consumes that much memory. I will spend more time figuring out the minimum memory necessary and add a warning to the page.


This is cool. I made a background remover for image and video a few years back https://github.com/nadermx/backgroundremover/

Always happy to see other people exploring this niche


It's nice to see it here. I imagine it works like your repo but runs entirely in the user's browser; very cool, and it doesn't need any complicated setup.


Yeah, that is awesome


@nadermx Does it work in real time for webcam? I ask that because you have an animated gif in your link.


I haven't messed much with it, but it would probably be possible to have ffmpeg pull the stream and run it in 30 fps blocks. I can do a pull request if you want.


Not sure if you realise you've given it the same name as another product that does this exact thing. (Disclosure: I work for Canva, the company that owns remove.bg.)

https://www.remove.bg/


Ah, I didn't realize there was a product with a similar name. Even knowing that now, I think I'll still keep this name, since it's only an experimental open-source WebGPU project and "remove-bg" is quite a common name IMO. It will definitely affect SEO if I want to do more with the project, but I have no plans for that yet.


I have used this model before, running it locally with Web GPU is quite fun https://news.ycombinator.com/item?id=40715181


I'm impressed that it works; in the olden days this was a painstaking job in Photoshop, and I haven't been keeping up with the SOTA. But I'm also impressed at how little code there actually is in src/ai.ts to make it happen.

Good job!


It's pretty much one click in Photoshop now too, with better results and more control than what this tool offers.

Not that I'm blaming this tool for being worse than a $200+/yr product. If anything, it's impressive how close it gets with so little code. And if you just want rough results on a large number of files, it even looks superior.


A lot of tools exist for this; even on my MacBook I can just right-click, choose Quick Action, and remove the background.


Same here. This is my first time working with this library, and it really makes me believe even more in the future of Transformers.js/WebGPU. It's just the beginning.


I tried it on an image of me and a horse in front of a paddock and it worked perfectly. Really impressed.

Other background removal tools would find me - but erase half the horse or erase his tack or his ears. This tool worked perfectly.

Very well done!


Nice, works perfectly on win/chrome


"Remove Background" is at Tools/Remove Background in MacOS Preview.


This one is open source.


What are you basing that assertion on? There’s no licence information provided in the repo or on the site, that I can see.


Or just press and hold (iOS/iPadOS) / right click (macOS) on the subject and choose "copy subject".


It's not working for me


Can you provide more information? If you're using Linux and Chrome, you probably need to pass the GPU flags `--enable-unsafe-webgpu --enable-features=Vulkan`.


Firefox / Linux. I only get fully transparent output: https://i.imgur.com/kcu2LSR.png

Nothing interesting in the console.


Firefox doesn't support WebGPU today.


Same here. Plus error with Chromium.

Working with just one setup is the midpoint of web development.


Looking forward to testing this out and sharing w network if it works well


Thank you, looking forward to this, if you have any feedback please let me know via x.com/duc__an


Why do you use a canary version of an old React version?


Hi there, it's because I copied the project template from my previous project. I will check and try to upgrade to React 19 soon, if it works with the libraries.


Can you take a look at this open source background removal project?

https://www.reddit.com/r/StableDiffusion/comments/1dwkwrx/i_...

It's based on Python and neural network modeling, and I was wondering if it's possible to run it via WebGPU, or based on WASM? No offense, it just looks awesome.


This is the first time I've seen it. My first impression is that its output quality is better than mine, and my code is based on only one model.

In my understanding, it would be possible if the model's author exports it for ONNX Runtime (https://onnxruntime.ai). The downside is that users would need to download a ton of data to their device; currently it's ~100-200 MB.


Thanks! Found it useful.


Very cool, but man pulling 900+ dependencies to build and run this thing feels awful. The NPM culture is out of control.


Haha, I didn't notice this! Yeah, Vite alone reduces the dependencies a lot, but I'm not sure which one is the biggest offender here.

Will try to make the deps simpler later; after all, this was only a little time of work.


> Will try to make the deps simpler later; after all, this was only a little time of work.

A word of advice: you may not want to jump on every last suggestion from the peanut gallery instantly.


The JS standard library is very limited, so pulling in third-party dependencies even to left-pad a string is normal.


You mean the native `str.padStart(targetLength, padString)`?
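For reference, the built-in method covers the classic left-pad cases:

```javascript
// String.prototype.padStart, standardized in ES2017.
console.log("5".padStart(3, "0"));  // "005" - zero-pad a number string
console.log("abc".padStart(5));     // "  abc" - default pad is a space
console.log("abcdef".padStart(3));  // "abcdef" - never truncates
```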


To be fair (as one who leans pretty heavily in favor of the JS world), .padStart() was only added in response to the aforementioned left-pad fiasco. The language adding it was more a face-saving measure against the blowback than an attempt to fix the actual problems.

Despite all that, left-pad still gets > 1 million weekly downloads on npm.


Ahh, for a second I thought https://remove.bg had open-sourced their product (which I've been quite happy with on the few occasions I've used it).

Very cool, though!


Nice work! Making AI accessible directly in the browser (even if you have to download quite the payload for the model parameters) is a real accessibility game changer.

Shameless plug: if you prefer API-based background removal we offer a super low cost, high quality option on https://pixian.ai



