Real Time Person Removal from Complex Video (github.com/jasonmayes)
429 points by aliabd on Feb 18, 2020 | 142 comments



We’re very close to not being able to trust video; I don’t think many people outside of tech realize how close we are.

Imagine someone adding your face to a video of a crime.


I'm not sure video ever went as far as many people think it does in terms of criminal prosecution. My experience suggests that video alone is often not enough for a conviction.

I'm one of the increasing number of helmet camera cyclists. Last year, a driver nearly hit me head on and stopped nearby, so I went over to politely tell them to pay better attention. They got angry and tried to assault me with a screwdriver. Once the driver got violent, I left, shouting out his plate number for the camera, and then called the cops. The cops arrived a few hours later and loved the video. But after working with a detective, it became clear that the video probably wasn't enough to get a conviction. We had to get a second independent piece of evidence placing the driver at the scene of the crime. In this case, the driver was working for Uber, whose GPS tracking placed him at the scene at the time of the incident. (I had no idea the guy was an Uber driver, and I'm thankful that the police had the insight to suggest this possibility.)

Just last week the driver pleaded guilty to disorderly conduct, after the charges were reduced as part of a deal made with his lawyer.

You can watch the clip here if you're curious: https://www.youtube.com/watch?v=trSB3mK78bs


I'm not sure the prosecution part is as important as the public's image. The fact is that someone can ruin your life with a fake video, and the tools for doing so are becoming easier and faster to use. Regardless of whether the court believes the video, your social and professional life could already be ruined by the time they view it.


To add to this, even if someone knows a video is fake, if it’s realistic enough then the image will persist.

But perhaps the most damaging aspect of all this is the erosion of truth, in people’s general feeling that nothing can be trusted.


>But perhaps the most damaging aspect of all this is the erosion of truth, in people’s general feeling that nothing can be trusted.

Might this be an opportunity for crypto education? Digital signatures are pretty powerful.
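For the curious, here's a toy sketch of the idea in Python. It uses HMAC from the standard library as a stand-in for a real public-key signature scheme (a production camera would use something like Ed25519, so verifiers wouldn't need the camera's secret); the key and the "video" bytes are made up.

```python
import hashlib
import hmac

# Toy sketch: a camera holding a secret key "signs" each recording by
# hashing the raw bytes and MACing the digest. HMAC is only a stand-in
# here for a real public-key signature scheme.

CAMERA_KEY = b"secret-key-burned-into-the-camera"  # hypothetical

def sign_video(video_bytes: bytes) -> str:
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_video(video_bytes), signature)

original = b"frame1frame2frame3"
sig = sign_video(original)
assert verify_video(original, sig)                  # untampered: passes
assert not verify_video(b"frame1FAKEframe3", sig)   # edited: fails
```

Of course, this only proves which device produced the bytes, not that the scene in front of the lens was real.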


No. Adding cryptography to video recordings is the worst of both worlds. It can verify sources but not the literal truth. All that would do is create a half-mirror society where any old bullshit signed by the credible is accepted, while man-on-the-street footage of police brutality or a politician making out with a 12 year old is called fake.


You can trust what you see in person, without screens.


I've seen many an up-close magic trick in person that I've been unable to figure out. In person only prevents digital shenanigans, not _all_ shenanigans.


Which would be fine if we still lived in small tribes of hunter-gatherers, but we don’t.


Wait for augmented reality glasses to become a thing


>>after the charges were reduced as part of a deal made with his lawyer.

That is more likely why they needed the second piece of evidence. They want to make a deal; they never want to go to trial. If they have overwhelming evidence, then 99% of guilty people, and 90% of innocent people, will take the deal offered.


I think you’re absolutely correct. Courts are backed up and trials are expensive. Big incentives for the prosecution to strike pleas.

The plea system needs reform, something Larry Krasner seems to be doing rather well with in Philadelphia.


>>I'm not sure video ever went as far as many people think it does in terms of criminal prosecution. My experience suggests that video alone is often not enough for a conviction.

Yet, people are serving life sentences over eyewitness testimony.

>>I'm one of the increasing number of helmet camera cyclists. Last year, a driver nearly hit me head on and stopped nearby, so I went over to politely tell them to pay better attention.

Cops already have a lot of work and probably didn't care about your case. In the video you clearly turned around and followed the guy hundreds of yards. Almost hit me or whatever... I'm not suggesting that he should have threatened you with a screwdriver, but you shouldn't push your luck. You never know who the other person is or what mood you'll find them in. "You're on camera" and "I'm calling the cops" can be famous last words. He didn't hit you, and you should have just called the cops if he broke traffic laws and you felt so strongly.


> Cops already have a lot of work and probably didn't care about your case. In the video you clearly turned around and followed the guy hundreds of yards. Almost hit me or whatever... I'm not suggesting that he should have threatened you with a screwdriver, but you shouldn't push your luck. You never know who the other person is or what mood you'll find them in. "You're on camera" and "I'm calling the cops" can be famous last words. He didn't hit you, and you should have just called the cops if he broke traffic laws and you felt so strongly.

Actually, the cops cared more than I expected, as I have been keen to emphasize to cyclist friends of mine who believe that the cops don't care at all. If the cops didn't care about my case they wouldn't have even responded. They are overburdened, absolutely, but they put in more effort than I think most people would notice. I actually helped them out a lot by contacting Uber independently, which got their subpoena processed as it seems to have been missed by Uber. If you make it easy for them to help you, they're more likely to.

As for whether I should have followed the guy, I have talked to many drivers who passed me dangerously over the past decade and very few react violently. I believe offering feedback to drivers is valuable, particularly given that the vast majority of them are simply ignorant about how to drive safely around cyclists. Many were not aware that they were driving dangerously. Many drivers apologize and say they will do better, and I believe them. Calling the cops because someone passed you too closely will almost never lead to any changes, in contrast. The best you can hope for there is that your report gets added to some traffic safety statistics, but I'm skeptical that occurs in Austin, TX, where I lived at the time. If you have any better ideas, I'm listening.

Also, "hundreds of yards" isn't right. Pulling out Google Maps gives a rough estimate of slightly less than 100 yards in total.


Watched the video. Nice job with following through.


Do you think the driver got closer to you to avoid the other cyclist going the other way?


It's not clear. He was preparing to make a left turn anyway, so he would have to move left at some point. I think he moved left prematurely, without checking for oncoming traffic. Maybe he was also trying to avoid the other cyclist, I don't know. He should have slowed down and waited for both cyclists to pass.


I see why they needed more than the video. It's pretty dark and hard to see who exactly the individual is.


How did you get the GPS data from Uber?


Presumably the police did, and it's not unreasonable for Uber to willingly assist investigations of their drivers by providing such data.


I didn't. The police did with a subpoena.


Pretty soon I think there will be a market for CCTV cameras that use a tamper-proof module to sign the video output, with a unique key and a key chain back to the manufacturer. Video evidence simply won't be admissible unless it's signed, and you can present the undamaged camera in court.


> Video evidence simply won't be admissible unless it's signed

Courts today readily admit the testimony of eye-witnesses, which is notoriously fallible. It is a common misconception among tech people that all systems operate in rational ways, but this is only really the case for machines.


> It is a common misconception among tech people that all systems operate in rational ways, but this is only really the case for machines.

Very elegant and concise. I would say this extends to all deeply complex fields: engineering, physics, quantitative finance, etc. They all have a mindset of "I can understand the whole system because it has logical components that build on top of one another" and therefore expect and demand that all other fields work the same way.


To be fair, wiser engineers seek to discriminate between the rational and irrational parts of a system.


A video of someone committing a crime carries far more weight than an eyewitness account currently.


I don't know if that is really true.

TONS of people are convicted based mainly on eyewitness accounts, only to be released later when proper evidence is found to exonerate them.

Most of the Innocence Project is made up of cases like that


Eye-witnesses can at least be prosecuted for perjury if they're found to be deliberately lying. It's hard to apply the same threat to a CCTV camera.

Honest mistakes are obviously a separate issue, but the parallel there would be bad lighting or a corrupted recording, neither of which are new issues.


"Honest" (subject to police pressure) mistakes in identifying people are vastly more common than a video accidentally showing a different face.


That just restricts the circle of people who can doctor these signed videos, it does nothing to solve the underlying issue.

State actors would love to be able to deepfake someone into a crime scene and then say 'hey look this is signed so it's totally legit'.


This is maybe one of the few relevant use cases of Blockchain technology: Cameras can add video checksums to a public audit log secured e.g. by proof of work or another trust mechanism. If enough parties archive this log it will be very difficult to forge videos without tampering with the actual recording device, because duplicates as well as tampered videos created at a later time would be easy to detect in the data, and the replication as well as proof of work would make it very difficult to forge the entire audit log.
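A hash-chained audit log like the one described can be sketched in a few lines of Python (the checksums and timestamps below are made-up placeholders): each entry commits to the previous entry's hash, so rewriting any old record changes every later hash, which anyone holding a copy of the log can detect.

```python
import hashlib

# Minimal sketch of an append-only audit log: each entry includes the
# previous entry's hash, so tampering with any old checksum changes the
# head hash that every archiving party already holds.

def entry_hash(prev_hash: str, video_checksum: str, timestamp: str) -> str:
    payload = f"{prev_hash}|{video_checksum}|{timestamp}".encode()
    return hashlib.sha256(payload).hexdigest()

log = []
prev = "0" * 64  # genesis entry
for checksum, ts in [("abc123", "t1"), ("def456", "t2"), ("789aaa", "t3")]:
    prev = entry_hash(prev, checksum, ts)
    log.append((checksum, ts, prev))

# Rewriting the first checksum produces a different head hash:
tampered = entry_hash("0" * 64, "TAMPERED", "t1")
tampered = entry_hash(tampered, "def456", "t2")
tampered = entry_hash(tampered, "789aaa", "t3")
assert tampered != log[-1][2]
```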


You don't need a blockchain or proof of work for that, timestamped digital signatures are a thing that's being used already, you just need a proper trust infrastructure.

A blockchain with proof of work solves the problem of decentralization and absence of trust; however, for legal matters, having a centralized root of trust is the simpler way to go and requires much less resources.

However, tampering with the actual recording device is a very relevant risk - I struggle to imagine an attacker who has the desire and capability to commit some serious crime, and convincingly fake a video as part of it, but would be foiled because they can't figure out a way to upload it in the exact manner a real camera would.


Can't state level actors take over 50.1% of the computing pool and achieve the same result?


In my understanding, 50.1% attacks are less of an issue here, as they would be easy to spot: individual parties still have the old blockchain when the adversary publishes the new one, so by comparing them the forgery could always be reconstructed (if not averted).

As far as I understand this is a problem for Bitcoin because even temporary forging of the chain allows the adversary to double-spend funds, and once they are exchanged for real-world money/services they're gone. For an audit chain this shouldn't be a problem as it's only for logging and there is no monetary value tied to the chain. Also, the chain could be anchored with a traditional trust model and wouldn't need to be completely trustless like Bitcoin.


You can detect a 50.1% attack but how do you reconcile which fork is legit?


You cannot, but with the two forks you can see that someone tried to tamper with the signature of a given video as there will be conflicting signatures. That alone can make tampering unattractive for an adversary.


How so? What if all I want to do is cast doubt on a legitimate video? The point of the video manipulation is to manipulate the human response and casting doubt is just as effective. Moreso since the average person is not familiar with the technology & thus defaults to "eh - it's all fake" because there's no way they can distinguish the likely real from likely fake from "too hard to tell".


Not all blockchains are distributed and vulnerable to 51% attacks. I suppose that's what the GP was hinting at.


Merkle trees don’t require proof of work.
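Right - here's a toy Merkle root over per-segment hashes, sketched in Python (the segment contents are placeholders): changing any one segment changes the root, and no proof of work is involved.

```python
import hashlib

# Sketch of a Merkle tree over per-segment hashes of a recording: the
# root commits to every leaf, so editing any segment changes the root.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

segments = [b"seg0", b"seg1", b"seg2", b"seg3"]
root = merkle_root(segments)
assert merkle_root(segments) == root                       # deterministic
assert merkle_root([b"seg0", b"EDIT", b"seg2", b"seg3"]) != root
```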


Trust is a part of nearly every system. At some point you're just going to have to trust a person or company to get some value in a practical way.


Yeah this is just another chain of evidence problem. We trust cops not to plant evidence, so long as they follow strict chain of evidence procedures. It’s an imperfect system but it’s largely effective.


And since it's not much different than planting evidence we know for certain that it's going to happen.


The damage will really lie where it already does today, where people consume info without any filters in whatsapp groups within their bubbles. If they can believe a photoshopped image with a caption, they will be much more inclined to believe video "evidence", and no explaining of deep fakes will convince them otherwise.

Any tamper proofing process may be viable for news outlets and courts, but if the technology is easy enough to use there will be no escaping spreading of fabricated facts.


There will probably be an intermediary step: in the same way that one can try to find evidence that a picture was "photoshopped" or an audio recording was tampered with, there are probably hints (for experts) that a video was edited.


And conspiracy theorists will apply that same level of skepticism to unedited video, seeing those hints everywhere.


Display fake video on screen. Point video-signing camera at screen.

You now have a signed fake video. You're welcome.


Commercial DVRs already sign video as it's recorded, and have for years.

It's more about chain of custody/location and time verification. Video authentication can't be used to tell if a video is real because it may just be a "real" recording of faked content. But you can say with some certainty that a video came from DVR X at Y time.


That should be easy to defeat: Patch into the camera sensor and feed in your own data, which the module will sign.


Presumably the whole thing would have to be tamper-resistant. Try to get to the CCD and it fries itself.

And sticking a monitor in front of the camera probably wouldn't work, at least if telesync/cam rips are anything to go by :)


How would this tamper-proof technology work? The first thing that came to mind was paper money having textures/etc to prevent counterfeiting.


No need for manipulation; cutting is enough. The turning point for me from "doubt, but verify" to "default to believing it's an outright lie" was the infamous fish feeding video[0].

If manipulation becomes commonplace, more people will default to "doubt", which is good.

[0] https://www.snopes.com/fact-check/did-trump-impatiently-dump...


I’ve seen it on TV since I was a kid with “reality” TV shows, and I have never understood the psyche of people who like to watch fiction under nonfiction pretenses. People like being fooled, I guess.


Thanks for noting this. It's really sad how much the event was twisted. Can't trust the media, they're just out to bend your mind :/


I think opaque processes/businesses are to blame for a lot of reasonable doubt concerning all kinds of things, which is then used by actors with an agenda to influence what people can read from the noise.

I think the way to neutralize information warfare (an incredible flood of cherry picked 'takes') is to have businesses and politics become more transparent, to a degree that would seem extreme (and individuals more anonymous).


Video has not been trustworthy since 1896 and the first of Méliès' movies.

The only difference is that the skills and computation power necessary to make it realistic enough are now much more accessible.


Scale, rate, and levels of achievable detail matter.

In 1896, a small handful of people could conceivably alter film in a time-consuming process to achieve then-believable (and now generally laughable) results.

In 2020, billions of computers worldwide can alter video streams to levels requiring extremely good detection methods, or defeating even those, in realtime.

"Photographic evidence" was at one point in time a gold standard.

The era of analogue recordings -- that is, analogous to the ground truth -- created a period of high-detail, high-fidelity records of facts, in the original sense of events.

Rather than relying on testimony, witnesses, hearsay, or small fragments of physical evidence, there were records of still or moving images, audio, and more.

But with that came increasing capabilities to edit those records, public perception of which strongly trailed the actual capabilities. It was the creator of the uber-rational detective Sherlock Holmes, Arthur Conan Doyle, who was taken in by fabricated "fairy" photographs:

https://www.mentalfloss.com/article/559519/cottingley-fairy-...

What's changed though is the level of skill required to doctor content (none, with available software), the speed at which the alterations can be accomplished (realtime), and the detail and convincingness of the alterations (high).

A hydrogen bomb does the same thing that Greek Fire does, at scale.

The gas turbine does the same thing the ox does, at scale.

Mustard gas does the same thing a bee does, at scale.

Scale matters.


That ‘only’ difference is phenomenal. Previously the stakes had to be enormous, now they can be trivial. So now you can’t trust video for the most trivial stuff, but we rely enormously on the trustworthiness of video/audio in all communications except the smallest town interactions.


Maybe it's a good thing that common people will finally learn to distrust the kind of propaganda that governments and corporations have been able to sneak into unsuspecting populations since the emergence of mass media.


More likely is people will believe videos that they agree with and disbelieve ones they don't, much like today.


True, but automated people removal won't change any of that.


That was the plot of Rising Sun by Michael Crichton, published in 1992. https://en.wikipedia.org/wiki/Rising_Sun_(novel)

The only thing you'd need to change in that novel is substitute China for Japan


Well, it isn't like novels and books haven't been warning of the danger. My favorite is Stephen King's The Running Man and the first movie made from it; both are good for reasons.

What a video portrays can equally be changed by careful selection of what leads up to the event being shown, as well as what follows it. The other threat to justice is video that exists but is not entered into evidence, whether by procedural means or deceit. Throw in that not only is the image portion valuable and subject to manipulation, but audio is far easier to tamper with.


We still trust photos 100 years after they were edited in the darkroom.

As ever, context matters. This isn't some armaggeddon scenario where suddenly either anyone can be stitched up for anything or all evidence must be thrown out.


The real issue is it shifts the burden of proof to the accused.

The US uses a common-law system; juries won't buy it until the average person grasps how easy it is to make high-quality fake videos. Navigating the court system is both time-consuming and expensive.

We’re not only talking about courts, though. Private parties like an employer may have different standards or could use this information as leverage.


Lol. What if [they] start growing your DNA from trash that you've discarded (saliva from a straw) and planting irrefutable evidence around where the camera was recording?

For all of the convenience that these modern innovations provide, that slight chance that they will be exploited for ill-gotten gains does seem plausible.


I think it's less that it will be used and more how easily it could be used. Sure, it's been possible to add/remove people from video for a while, but now it's becoming so easy and automated that every video will become a possible fake AND the average person will think it, not just the tech-savvy. Not sure that's 100% bad, I haven't really thought it through too much yet.


Not to be too much of a hater, but the example video in the link isn't anywhere close to fooling anyone. It can barely replace a moving person with a static background and does so with obvious artifacts. It was hopeless when the person impacted the environment or moved in front of the video.


I hope personal data collection habits, like "I collect my own data", get to the point where each of us maintains our own data cache.

Therefore if someone deepfakes you, you'll be able to prove via your cache that you were not doing that at the time of that video's capturing.


How can they prove their data is real and not backdated? How do you prove a lack of alteration in any witness's cache?


Reddit/4chan style proofs will get us covered for another 5-10 years.


or throw your cigarette butt at the scene...

There are people in jail with a lot less "evidence", and these things will only make things worse.


That’s why you should always have a smartphone on you with location tracking on and be video documenting your life on Instagram, so that you can always have an alibi.


That has its own problems: 1. it proves you were near a crime scene (you're bound to be eventually), so you get caught in the dragnet and then you might need an alibi to escape charges; 2. with a gazillion laws on the books, you might be documenting your own 'crimes'.


I would also think though that similar tools can inspect video and detect (at least in some cases) if the video has been modified.


We're long past that.


Yeah, it turns out "El Coco" was actually just an AI master.


The corollary to this is that people’s faith in video will be so low, that actual video of a major event can be dismissed as potentially fake - and the more extreme the more easy to dismiss. The sword cuts both ways: false negatives being accepted as true and true positives being dismissed as fakes.


How is this any different from a picture? We have been making fake pictures, editing, airbrushing, and photoshopping for years.


Yeah, I don't buy this idea at all. People freaked out over the ability to edit photos in darkrooms decades ago. Trust didn't deplete, because pictures got tied to identities, which in turn are tied to reputation (or by verifying "hey, is that picture of the president k-wording a person with a gun real?" with a trusted institution or something), etc.

There's other cool stuff that will come of this but the fact that this is the idea mentioned every single time shows a severe lack of imagination in our sphere.


What do you mean trust didn't deplete?

Faked images are super popular and effective and are used to reinforce biases.

Major outlets like Fox News that traffic in false information have bad reputations among some people and good reputations among others.

You are right that this isn't a technical problem, though. Why bother faking a video or photo when fake words (misquotes and lies) are 10,000x cheaper and just as effective?


We've always been able to edit a video frame-by-frame, but it takes quite a bit of time. Now you can deep-fake a video at real-time. Imagine the disinformation that can be produced so quickly in response to the news cycle.

We are in for a grave period in history indeed.


Barrier to entry has completely changed. A dark room wasn’t something most people had access to. Photoshop required money, time, and skill.

We’re entering an era of radically different scale where anyone will be able to manipulate imagery. Our environment could be saturated with fake media in a way it’s never been before.


Those have taken much more time to produce. Now it will take just a few seconds.


People on FB and Reddit are just upvoting Twitter-screenshot-style pictures and screenshots of headlines with no sources. That takes about 2 seconds to create. Just choose-your-own-caption with an accompanying image.

The fear of fake videos is overblown. What amounts to a screenshot of a headline can get 30k upvotes and comments on Reddit without anyone asking for a source. That tendency is what worries me, not the medium.


The fake screenshots and the fake videos are both troublesome. How do we know what’s true and what isn’t? How many false facts do you believe?

I can imagine a world where false tweets/videos/headlines are generated specifically for you to keep engagement levels high. The outcome would be not only do we not know the truth, but no one can agree on the truth. It’s something that currently exists in our society, but technology can make the split bigger.


> People on FB and Reddit are just upvoting Twitter-screenshot-style pictures and screenshots of headlines with no sources [..]

Those are much less sophisticated than the sorts of stuff this enables.

Pointing out a qualitative similarity does not mean quantitative differences don't exist, and quantitative differences sometimes create qualitative differences as emergent properties on a higher level, e.g. that of society. Denying this truth (and downvoting me for saying so) is idiotic.


It is easier to detect fakes than it is to create them. The datasets will always be better for fakes.


That doesn't mean the fake-finders win, just that the fakers with most resources are the ones who can pass their fakes.


Just yesterday I watched a video that was difficult to tell was fake, though it was too funny to be true. Anyway, I couldn't tell just by looking at the video. The video showed Mexico's president next to a phallic-shaped cactus.

I like to think I'm good at detecting fake videos or images, but I guess now I'll have to check the sources every time.

Fake (possibly NSFW): https://www.youtube.com/watch?v=4r0TCQrfMq4

Real: https://youtu.be/31KzD5a3KS4?t=88


That's just a composite though. Nothing currently novel there.


I was kind of disappointed with this. I may be misunderstanding it, but essentially it seems to use machine learning to identify body parts and then just plasters over the background. What I would have expected from a 'complex video' would at the least be a moving camera - in which case you actually have to build a model of the environment à la GauGAN and generate from it.


Yes. It’s best to underpromise and overdeliver. If you’re not really anywhere near SOTA, then be accurate in your description, eg “Real-time person inpainting from video in JavaScript”


It would be great to see what the more advanced (but bigger and slower) ResNet model is capable of:

https://github.com/tensorflow/tfjs-models/tree/master/body-p...


I work in a gym, where lots of people take video of themselves doing exercises (for Instagram, or form checks), despite the gym's policy against it, for privacy reasons (other members in the background).

There's an immediate need for an app that will remove the background of people for privacy purposes.


Combine this with facial recognition to only remove unrecognized people (or the opposite).


Commercially it would be more interesting to do active background removal. If every YouTuber had a professional background like a news anchor, they would immediately look legit.



Green screen? Yes, it still requires some effort (buy and set up the actual screen, proper lighting), but I doubt a proper green screen setup would be outperformed by some AI system any time soon: Got a shitty cam? AI would have to fix that too. Shitty lighting? Still a problem after any background removal. And if you're gonna invest in those two things, might as well go for your green screen.


Background replacement is a commodity motion-graphics tool. It has been around since the '90s and is exponentially better than this work here.


But against an arbitrary background?

I’ve seen real-time background removal for video and for still photos, but it never gets things perfect. Too many artifacts.

That’s why the green screen is still utilized.


Yes, how do you think automated wire and stunt-performer rig removal is done? Think of all the VFX you see; an immense number of visual effects are basic animation principles applied to scraps of previous frames for the composition of the current frame.


Zoom does this in real time: virtual backgrounds.


Came here to confirm this. Indeed Zoom does this in real time and replaces the real background with a virtual background or solid colour. We often use it at our workplace for fun.



This would be very useful in busy tourist spots where you want a video of something, the Leaning Tower of Pisa for example, but minus all the annoying people pretending to push it over.


Isn't that what "long exposure photography" (and photo stacking in Photoshop) is used for? ;-)

I mean: if the crowd is too thick, you have to wait a long time to capture every little part of what's behind it. So I'm not sure that ML will be useful, except maybe to detect "humans" and decide which parts need to be replaced.


Yes! Just take a lot of photos over a couple minutes (depending on how busy your scene is) from a fixed position, or just a video if you're lazy, then use imagemagick and combine them using "median" (NOT average). It's not always perfect but can deliver most of the time. That way even a command line dork like me can do it. :-)
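The per-pixel median trick can be sketched in a few lines of Python; frames here are simplified to flat lists of grey values rather than real images.

```python
from statistics import median

# Toy median stack: given several exposures of the same scene, take the
# per-pixel median so transient passers-by (present in only a minority
# of frames) drop out of the result.

def median_stack(frames):
    return [median(pixel_values) for pixel_values in zip(*frames)]

background = [10, 20, 30, 40]
frames = [
    [10, 20, 30, 40],
    [10, 99, 30, 40],   # a "person" occludes pixel 1 in one frame...
    [10, 20, 30, 77],   # ...and pixel 3 in another
]
assert median_stack(frames) == background
```

The median (not the mean) is what makes this work: as long as each pixel shows the true background in most frames, outliers are simply ignored rather than averaged in.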


A similar simple trick lets you reduce reflections in photos of flat objects under museum glass (books, pictures, coins): take several photos from slightly different angles, co-register them using some panorama-assembly technique (I've used a simple Python OpenCV script), and merge with the minimum or some bottom quantile, since reflections are additive.
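Sketching just the merge step in Python (registration omitted, and the pixel values are made up): since reflections are additive, the per-pixel minimum keeps the darkest, least reflection-affected value.

```python
# Minimum-merge of co-registered shots of a flat object: each shot has
# glare in a different place, so the per-pixel minimum keeps the
# glare-free value from whichever shot has it.

def min_merge(frames):
    return [min(pixel_values) for pixel_values in zip(*frames)]

shot_a = [50, 200, 60, 70]   # glare on pixel 1
shot_b = [50, 55, 180, 70]   # glare on pixel 2
assert min_merge([shot_a, shot_b]) == [50, 55, 60, 70]
```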


Also useful if you want to live stream in a country where privacy laws make it illegal unless you get everyone's consent. Live face blurring would probably be enough, however.


hold it up.


Looks like extremely early, mid-90s background replacement work. This type of thing - and exponentially better, as you know from feature-film VFX - accomplished these operations decades ago; how it's done has been published, and commodity versions of this logic are in open source packages like Blender. Deep fakes and StyleGAN have people all excited, but this type of effect does not need ML; these are fully featured digital toys now. Hell, after 40 years as a developer of games, 3D graphics, interactive video, 3D games, feature film VFX, a digital double creation service, and now FR, I can say with authority: this work is nothing.


It's interesting though because an amateur slapped it together quickly rather than an entire industry over decades.


I wrote one myself back in '93 that was used in talking head videos by a documentary company to place their talking heads into the locations of their films. The essential logic is quite simple, leveraging the fact that you've got previous frames.
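The essential logic the parent describes might look something like this toy Python sketch: keep a per-pixel background model built from person-free frames, flag pixels that deviate from it, and paint them over with the remembered background. Frames are simplified to flat lists of grey values, and the threshold is an arbitrary made-up number.

```python
# Toy background subtraction in the spirit described: pixels that
# differ from the learned background model by more than a threshold are
# treated as "person" and replaced with the remembered background.

THRESHOLD = 30  # arbitrary grey-level difference

def remove_person(frame, background, threshold=THRESHOLD):
    return [
        bg if abs(px - bg) > threshold else px
        for px, bg in zip(frame, background)
    ]

background = [100, 100, 100, 100]   # learned from person-free frames
frame = [100, 240, 235, 100]        # someone standing over pixels 1-2
assert remove_person(frame, background) == background
```

The fixed camera is what makes this simple: with a static background, "previous frames" are a ready-made model of what should be behind the person.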


The earliest cameras around 1830 already removed people, perfectly. Exposure times were as long as 8 hours, so people were invisible. only buildings, trees, etc appeared.

The link below includes images of streets with people “removed”, and has the oldest known image of a person: a man having his boots shined, who stayed in one place long enough to be captured.

https://www.livescience.com/60387-oldest-photographs.html


It's arguable that 8 hour exposures count as "real time", however :)


What’s impressive is that this is done in JavaScript!


Why is that impressive? JS is one of the most powerful and performant general-purpose languages in common use.


No SIMD and awful multithreading, so that's something like a factor of 32 in performance thrown out the window right from the get-go. It also doesn't help that the GPU API is hopelessly outdated.


performant?


Ugh. Yeah, that's an awful word.


I doubt this hasn't already been done, given that the background is static.


Yes, 35+ years ago. Background replacement is a commodity motion graphics tool.


Bank heists just got that much easier


Sure, you just need to hack into the camera hardware system and inject a browser-specific machine learning library. Easy-peasy


That bag of cash is floating out the door all by itself!


Is it just me, or aren't we supposed to be terrified of this kind of development? Cringe.


You mean specifically because of the real-time-ness of it? As far as I understand, with professional video editing tools and skills, this camera-in-fixed-location removal is trivial.


Personally, I find anything we have to be told to be terrified of, instead of drawing our own conclusions, to be eyeroll-worthy manipulation.


Has "Real Time Background Removal" already implemented?


Yeah, it's been around for a long time, although many of the earlier versions were rather awful. There are different methods, and not all of them use neural networks.

But there's software to do it with a decent camera that works okay; you'd still be better off with a green screen in almost any situation.


In MS Teams you can blur the background while video chatting. So I'd say yes, it exists.


Depends on the background and method. At the very least, chroma key and edge detection qualify - define a target and take it to its edges - with just signal processing and no learning.
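A chroma key really is just per-pixel signal processing; here's a toy sketch in Python with made-up RGB values, treating a pixel as "screen" when its green channel clearly dominates red and blue.

```python
# Minimal chroma-key sketch: pixels whose green channel dominates by
# more than a margin are swapped for the replacement background.
# No learning involved, just a per-pixel test.

def chroma_key(frame, replacement, margin=50):
    out = []
    for (r, g, b), bg_px in zip(frame, replacement):
        is_screen = g > r + margin and g > b + margin
        out.append(bg_px if is_screen else (r, g, b))
    return out

frame       = [(0, 255, 0), (200, 180, 170), (10, 250, 30)]
replacement = [(1, 2, 3),   (4, 5, 6),       (7, 8, 9)]
assert chroma_key(frame, replacement) == [(1, 2, 3), (200, 180, 170), (7, 8, 9)]
```

Real keyers add spill suppression and soft edges, but the core decision is this simple, which is why it needs controlled lighting rather than ML.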


I think this could be improved if edge detection were applied. At the moment it appears to use a bounding box, which lets shoulders and other things at the edges appear outside the box.


Making provenance of software and data easier will be of increasing importance in the future. We need to be able to prove things have not been tampered with.


This project should have the name "Stalin"


That would be praise for him. Unfortunately, censorship has become common practice around the world, and it did not start with Stalin.


Reminds me of the movie "Rising Sun".


This is what they used to remove Bojack from Horsin' Around :O


i think i saw this in a black mirror episode


Stalin famously had people that had “left” his administration airbrushed out of photographs.

I don’t think it will be long before current-day dictators start doing the same thing with old footage containing previous allies they no longer wish to be associated with.


Not just people: the famous 'Red Flag over the Reichstag' photograph was modified (smoke was added, and watches were removed from the soldiers' arms).


German TV station WDR also removed sensitive content from a video. At least a dictator has an excuse.



