Hacker News
A scientist who spots fake videos (nature.com)
138 points by rstoj 5 months ago | 35 comments

This seems like a good time to mention Captain Disillusion.

Alan Melikdjanian, a filmmaker and video editor, started a YouTube channel back in 2007. He debunks viral videos and adverts while performing in character as the rather surreal "Captain Disillusion".

Don't let the bizarre performance scare you off - his production values are simply outstanding, especially considering he does all of this pretty much singlehandedly, and has done so with hardly any recognition for the past 10 years. I think his channel only hit 100k subscribers last year, so it looks like he might finally be getting the attention he deserves.

Here are a few links to some of his best stuff. You'll probably be thinking "WTF is this?" when you first start watching, but trust me and stick with it:

- https://www.youtube.com/watch?v=H_slT5YpBok

- https://www.youtube.com/watch?v=KbgvSi35n6o

- https://www.youtube.com/watch?v=NXCtgI2lkFw

Came here to post about this guy. So awesome he's finally getting the attention he deserves.

I just went down what I'd call a "YouTube black hole", watching video after video produced by Alan Melikdjanian. Although I spend a considerable amount of time (for a 30-year-old) watching YouTube, I find myself absolutely floored by the quality of his content.

Captain Disillusion has been way ahead of his time for so many years.

I think his time has finally come.

Thank you for sharing. This is the best thing I've seen on HN in a long time.

What I found interesting about the channel when I noticed it two years ago was that the oldest videos, despite their lackluster video quality, already follow the same procedures and conventions he's still using to this day.

I agree. I also recommend another of his videos (perhaps not directly related to this subject, but amazing): "BEAKMALLUSION: Free Energy Devices" https://www.youtube.com/watch?v=sT_bTnkwLuE If you watched 'Beakman's World' when you were young, this episode features Beakman himself.

> I think his channel only hit 100k subscribers last year, so it looks like he might finally be getting some well-deserved recognition.

No kidding. Even if he only hit 100K subscribers two years ago, or at the beginning of 2015, he is at 500K subscribers now. If his growth hasn't slowed too much, he could be looking at close to a million subscribers before the decade is done.

His videos are pretty cool so far. Happy he is able to do this for a living.

He is financially supported by crowd funding (https://www.patreon.com/CaptainDisillusion) and makes videos full time.

He has 500K subscribers and growing rapidly. I assume he makes a decent chunk of change from YouTube itself too?

This is very cool. I wish there were a channel like this debunking real magic tricks (not video ones).

In his (great) video on Will Tsai https://youtu.be/_dSp_f0f9gE he talks about why he doesn't analyze magic tricks, and mentions "The Masked Magician", whose videos sadly aren't even close to the quality of Alan's.

As he mentions in the article, DARPA currently has a large media forensics project going on. They have split it up into teams making manipulated videos and images, and other teams trying to detect those manipulations. They had a workshop at CVPR-2017. I'm on one of the teams that is making manipulated data.


It's nice that they call it "media forensics", but it would seem that the making of manipulated videos is much more useful from a military standpoint.

I had a lot of great college professors, but Hany Farid (the subject of this article) was always one of my absolute favorites.

He always placed a strong emphasis on mathematical fundamentals, once joking in class that we only think AI is hard because we're "bad at math." To prove it, he showed us endless examples of how the right math makes problems far easier. For example, I remember one time when he showed us how correctly understanding something like the sinc function would improve directional derivatives of sampled data, and then turned that into a better video motion tracker.
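As a rough illustration of that idea (my own numpy sketch, not Farid's actual filter design): for a properly sampled bandlimited signal, differentiating the sinc/Fourier interpolant is essentially exact, while naive finite differences are not.

```python
import numpy as np

# Sample a bandlimited signal on a periodic grid.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(x)
true_deriv = np.cos(x)
h = x[1] - x[0]

# Naive central differences: error shrinks only as O(h^2).
naive = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)

# Spectral derivative (differentiate the sinc/Fourier interpolant):
# exact for bandlimited signals, up to floating-point roundoff.
k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)  # angular frequencies
spectral = np.fft.ifft(1j * k * np.fft.fft(f)).real

print(np.abs(naive - true_deriv).max())     # ~1.6e-3
print(np.abs(spectral - true_deriv).max())  # near machine precision
```

The same "use the right interpolation model" reasoning is what lets a motion tracker estimate sub-pixel derivatives accurately instead of amplifying sampling error.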

He loved working with colleagues in other departments, solving all kinds of fun problems and publishing joint papers. At one point, I think the campus magazine was writing articles on him several times a year, because he always had something cool going on.

I wonder how the courts and elections will survive if these fakes ever outpace our bandwidth and investment to detect them. Maybe ultimately no one will believe video any more? Maybe people will ignore fake detection and go with whatever their biases tell them is true? Either way, electing or convicting anyone might become too easy to manipulate.

> Maybe ultimately no one will believe video any more?

People believe whatever they want to believe; forensic hair analysis (not DNA) is a controversial practice, yet it's still accepted as evidence by many courts in the US. Last Week Tonight had a bit about that issue in a recent episode [0].

It's sad and mind-blowing that people have been sentenced to prison or death because a so-called "expert witness" confused a dog's hair with theirs.

[0] https://www.youtube.com/watch?v=ScmJvmzDcG0

Note that the article also talks about how to detect standard image manipulation. This could be useful for me when reading academic papers and questioning the visual representation of results.

We're already at the stage where one can't trust video evidence unless it's backed by the camera's signature/encryption.

Creating a very realistic fake is now trivial:



Google is very close to synthesizing realistic voice.

It's game over, as far as I can see.

It's only a matter of time before someone creates a fake video of a famous person saying something very outrageous, like nazi propaganda, and it results in the destruction of that person's career and life.

We really need something like Secure Enclave in every camera.


Another related video:


I'm not sure in-camera signatures are the way to go.

In the very simplest case, someone could just point a camera at a very high quality screen and record that, generating a signed video.

A more complex attack would be to effectively emulate the image sensor and pipe image data straight into the camera.

If you want to prove that a video was filmed on or before a certain time, one way would be to hash it and put that hash on a blockchain, but that doesn't solve the problem of authenticity.
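The hashing step of that idea is simple; here's a minimal stdlib-only sketch (the helper name is my own):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Digest a (possibly huge) video file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        # Read in 1 MiB chunks until EOF.
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Publishing that digest anywhere timestamped and hard to rewrite (a blockchain transaction, even a newspaper ad) proves the file existed by that date; it says nothing about whether the footage itself is genuine.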

Try it. Record something off a screen (including the audio) and see what happens. I've yet to see one example of that which looks like reality.

Emulating a sensor sounds like it immediately reduces the number of perpetrators by orders of magnitude.

I agree, though, that what I'm suggesting is not 100% bulletproof. But it uses proven technology, and it's relatively simple, assuming hardware manufacturers are willing to add one small chip to their cameras.

If you're pretending it's footage of a film, TV show, or video game, then recording from the screen will fool a lot of people. And while in many cases that would make the 'metadata' aspect useless anyway, there's always the possibility of a hoaxer saying:

1. This was recorded from a TV broadcast

2. Or a CCTV camera


I've seen a number of YouTube and LiveLeak videos that were obviously made by recording a security monitor with a phone. It seems to be an acceptable practice for certain genres, and is usually found credible.

Which only makes things worse, because if you are trying to create fake security camera footage, you just need to make it real enough to play back on a "security monitor" and record that with a phone camera.

That's not his point. He's claiming he can record a fake video playing on a screen, and the resulting recording will look like a recording of reality, not of a screen (and the camera will sign it as such).

What if the camera also included GPS and timestamp information? That might make it harder to fake, because you would have to be roughly in the location where you claim the footage was captured.

Are the signals from the GPS satellites cryptographically signed?

Of course, the cameras would need very good physical security, so that a person can't extract the private key, or replace the camera's sensor with something that just feeds in the data they want and still get it signed with the key.

I think the design that went into the ORWL PC (which quickly deletes its private key if it detects tampering) might be good for this.

For one of these cameras to be trusted, though, there would have to be a trusted source demonstrating that it was constructed and configured correctly (rather than in a way that would allow faking). Maybe the construction and setup could be recorded with other cameras of the same type which are already trusted, in a web-of-trust sort of arrangement? Given enough of these cameras, I don't think bootstrapping the chain of trust would be too difficult.
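The signing scheme being discussed can be sketched roughly like this (a hypothetical stdlib-only illustration; a real camera would use an asymmetric scheme such as Ed25519 inside a secure element so that verifiers never hold the secret, but an HMAC stands in here for simplicity):

```python
import hashlib
import hmac

# Hypothetical: DEVICE_KEY would live inside a tamper-resistant secure
# element and never leave the camera.
DEVICE_KEY = b"secret-provisioned-at-manufacture"

def sign_frame(frame_bytes: bytes, timestamp: int) -> str:
    """Bind a frame's content hash to a capture timestamp."""
    msg = hashlib.sha256(frame_bytes).digest() + timestamp.to_bytes(8, "big")
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, timestamp: int, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_frame(frame_bytes, timestamp), tag)
```

Any edit to the frame or the timestamp invalidates the tag; the hard part, as the thread points out, is keeping the key inside the camera and trusting what was in front of the lens.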

It's pretty easy to fake a GPS signal with the proper equipment. I remember seeing a video of a guy spoofing GPS in order to cheat at Pokémon Go.

Aw dang, ok.

But if the GPS satellites cryptographically signed their messages, would that help much? And would it be all that costly for future GPS satellites to sign their messages?

Maybe 3D cameras can help? Anyway, this forgery stuff is getting scary...

FotoForensics [1] is a great tool for checking whether images are airbrushed/manipulated; it seems to work by analyzing JPEG compression error levels and RGB channel averages.

-- [1] http://fotoforensics.com/

Unfortunately it doesn't really work except in constrained cases. It's easy to convert an image to PNG and back.

What? The JPEG's compression artifacts, and the way edits fail to match that compression, are not undone by a simple format conversion.
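For what it's worth, the JPEG-compression check being described is usually called error level analysis (ELA). A minimal sketch using Pillow (my own illustration of the general idea, not FotoForensics' actual code):

```python
import io
from PIL import Image, ImageChops

def ela(image, quality=95):
    """Error level analysis: resave as JPEG and diff against the original.

    Regions edited after the photo's last JPEG save tend to recompress
    differently from the rest, so they stand out in the difference image.
    """
    original = image.convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    return ImageChops.difference(original, recompressed)
```

A round-trip through PNG is lossless, so it preserves the existing JPEG artifacts rather than erasing them; what does weaken ELA is heavy recompression or resizing of the whole image, which is why it only works in constrained cases.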

Recently listened to an episode of Radiolab that includes an interview with Hany Farid: http://www.radiolab.org/story/breaking-news/

Pretty interesting to think about the gap between what is possible now and how far behind the general public's understanding of it is.

It is intriguing to think about this in reverse: normalising the error rates across an image, and the techniques to achieve or detect that. Also, rebuilding images according to specific camera profiles so that the per-pixel statistics match.

Captain D is basically the best thing on the internet.
