Tricks That Can Outsmart Deepfake Videos for Now (wired.com)
138 points by joewee on Oct 20, 2018 | 124 comments



It’s really silly how people are worried about this like it’s a looming legal crisis. Have you seen the types of videos that are used in courts? They're grainy CCTV footage with a trusted chain of custody. It’s not like courts take perfectly clear, convincing footage of someone's face from some random guy.

Deepfakes will have the same impact on law enforcement that convincing photoshopped pictures had before them. Which is to say, absolutely no impact.


Bystander cellphone footage is often used as evidence in courts, at least in the USA.

This comment seems to imply that all court admissible video evidence comes from trusted chains of custody and thereby trusted sources, which isn't necessarily the case.


>This comment seems to imply that all court admissible video evidence comes from trusted chains of custody and thereby trusted sources, which isn't necessarily the case.

What's stopping a defense attorney from informing a jury about DeepFakes and having that fact factor into their "beyond a reasonable doubt" criterion in criminal cases?

DeepFakes have more potential in PsyOps than legal cases.


Just informing them of that possibility will render such evidence basically useless. Once video can be faked where it can no longer be detected, video just doesn't carry any more information than, say, "we got this anonymous letter saying the defendant is guilty".

So, yes, that information should be given to the jury (as soon as it becomes accurate), and it will lower the potential for false positives (wrongful convictions), but with an equal and opposite increase in false negatives.


> Just informing them of that possibility will render such evidence basically useless.

Is a verbal statement from an impartial witness useless? What if they say they recognise the defendant as the criminal they witnessed? Obviously it's not as ideal as unfakable video evidence, but it is useful, and it was the basis of legal systems for the millennia before computers and photography arrived.

Fakable video evidence is somewhere between these: you still have to account for the possibility that the person isn't honest, but don't have to worry about the fallibility of human memory.


Assuming, of course, that video currently figures equally in both valid and invalid convictions.


No, no, that's not what he's saying. He's saying that in court, proving the provenance of a piece of physical evidence is more important than the technical qualities of the thing itself. But what everyone is focused on with deep fake videos is how technically good they are.

My daughter's birth certificate could be reproduced 100% indistinguishably by a Fedex Office print shop. But it is backed up by a chain of people who, if necessary, will testify that it is authentic.

People think that testimony requires physical evidence to back it up (look at the reaction to Dr. Ford's accusations). But the legal reality is the opposite... physical evidence is only useful at trial to the extent that credible testimony can establish its authenticity and relevance.

So a deep fake video that looks exactly like a real video will only be useful at trial if it can be established as credible, by the testimony of credible witnesses.

All that said... there is more to life than the justice system. Forgeries have always taken in the gullible, so deep fakes will undoubtedly have a social impact, especially in the early days before the public is generally aware of the capabilities. This is undoubtedly one reason the press is starting to cover this technology so aggressively.


I was on a jury in a murder case that was decided by evidence from a surveillance camera (actually two of them, from different angles).

If it's possible to fake HD video, it's also possible to fake grainy surveillance camera footage. If you prove the defendant was in a particular place, and you produce video purportedly from a surveillance camera in that place, a jury would believe it.


Forgeries do have significant impacts. Everything from beef being photoshopped into meals leading to violence (e.g. in Hindu regions), to significantly impacting the burden of proof required for prosecution of war crimes.

And that's not even considering what you can do without an actual forgery, just by messing with context e.g. https://www.buzzfeednews.com/article/meghara/we-had-to-stop-... https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-l...

If "deep fakes" become easier to create than mis-contextified video, they can and will be used to spark conflict in areas with simmering tensions.

I say this as someone actively involved in avoiding the worst case scenarios; I wrote a piece here with more detail: https://www.washingtonpost.com/news/theworldpost/wp/2018/02/... (though HN is not the target audience).



I’m not sure why the below comment is dead; it’s a valid point. My worry regarding deepfake videos is their political and social consequences.


Exactly -- many assertions fall very far from being prosecutable but can be enormously disruptive socially.

See the recent SCOTUS nominations for an example.

... and that's in a modern well-educated society.

In countries that are riven by sectarian and tribal rivalries, a convincing video can result in the deaths of millions.


We've already seen that internet conservatives will prey on the elderly and other susceptible types on Facebook with fake news. And it works. Even if it gets retracted or disproved later (and not everyone sees the disproving), the false image has been imprinted. I expect this to be used in this way shamelessly.


Stop the fear mongering.

Things will be fine, we'll have a few minor hiccups to resolve, but that will be it.

Some people really underestimate human ingenuity. Do you really think that we won't be able to solve this issue in a matter of days if need be? Come on..

We're a few decades away from AI and immortality, and some people are worrying about fucking politics?

Think long term, and chill out.


There is a lot more than just law enforcement to worry about. Deepfake videos can and will be a tool for political propaganda.


Exactly. A popular fake video of a person doing something bad may not put them in prison, but can prevent them from being elected.

Also, a fake video of them tearing up a holy book can get them killed.


Personally I think the reaction to deepfakes has all the hallmarks of a moral panic: a widespread reaction and banning on all sorts of forums and repositories that previously had no issue with 'fake' celebrity images or videos, perhaps bubbling up out of stereotypical corporate blame aversion. The language used deliberately conflated them with invasion of privacy and 'nonconsensual pornography', and that was in official policies.


A lot of CCTV footage from modern security systems is high-definition, and the cameras can zoom in on many portions of a frame.

But yeah, as others have said...I'm more worried about the political and social consequences of doctored videos. We already have a culture that is triggered at the slightest sign of perceived injustice. What if that injustice is a fabricated lie, architected by troll farms?


One technique I haven’t seen mentioned would be to take advantage of the fact that the world is 3D but neural networks that make fake videos are somewhat fundamentally 2D. Making good 3D computer graphics is hard and involves a lot of lighting calculations that are very different from what neural networks do. It seems like it should be possible to detect the errors that a neural network will introduce that will make the scene unphysical, at least in cases where more is modified than just small bits of geometry like the position of someone’s lips and other facial features.
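
To make that concrete: one classical version of this idea is 2D light-direction estimation from occluding contours, in the spirit of Johnson and Farid's lighting forensics. Below is a minimal sketch, assuming contour normals and intensities have already been extracted from the frame (which is the hard part):

    import numpy as np

    def estimate_light_direction(normals, intensities):
        # Lambertian model along an occluding contour: I = N.L + ambient.
        # normals: (n, 2) unit outward normals; intensities: (n,) samples.
        n = normals.shape[0]
        M = np.hstack([normals, np.ones((n, 1))])   # columns: Nx, Ny, 1
        v, *_ = np.linalg.lstsq(M, intensities, rcond=None)
        L = v[:2]                                   # drop the ambient term
        return L / np.linalg.norm(L)                # unit 2D light direction

    def lighting_disagreement(na, ia, nb, ib):
        # Angle (degrees) between light directions estimated from two
        # regions of the same frame; a large angle hints at a composite.
        la = estimate_light_direction(na, ia)
        lb = estimate_light_direction(nb, ib)
        return np.degrees(np.arccos(np.clip(la @ lb, -1.0, 1.0)))

Two regions of a genuine frame should roughly agree on the light direction; a spliced or synthesized region often won't.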


If you can write a function to quantify those errors, then you can use it to train the neural networks that produce the deepfakes. The better your error function, the better the deepfakes.
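
Roughly like this, assuming the error function (or a surrogate for it) is differentiable; a PyTorch-style sketch where `physical_error` stands in for the hypothetical 3D-consistency score and `discriminator` for a standard GAN critic:

    import torch

    def generator_loss(fake_frames, discriminator, physical_error, w=0.1):
        # Standard GAN generator objective plus the forensic score as a
        # penalty: the better the forensic score, the better the fakes.
        adv = -discriminator(fake_frames).mean()    # fool the critic
        phys = physical_error(fake_frames).mean()   # hypothetical 3D check
        return adv + w * phys                       # w is an arbitrary weight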


Is that always true? I know nothing about AI whatsoever, but wouldn’t the function have to satisfy certain properties for a NN to be able to learn from it? Properties that the discussed “3D verifier function” wouldn’t necessarily have?

It sounds very much like complexity theory, where you definitely have problems that are easier to verify than solve (see NP hard).

(Otherwise I could do stuff like use a hash function to train a NN to compute inputs for given hashes, for example, no?)


I think this is a valuable framing. In other words, is there any computational problem which physics implicitly solves in the process of generating an image or video—and which is:

1) computationally intractable to forge (e.g. requires simulating trillions of photons), and

2) computationally tractable to verify

Sadly, my immediate gut intuition is that there is not such a problem, for a variety of reasons; but hopefully I'm wrong!


I don't know much about AI and NN's either, so grain of salt and all that..

In theory, yes, you can. The training function could return a score - say the number of matching digits in the hash - and the NN will in theory learn what inputs produce the better output. If, somehow, there is a weakness in the hash algorithm, it could stumble onto it - allowing it to get better at producing the right input for the required output.

The simpler it is to deterministically manipulate, the easier it should be for a NN to learn to manipulate - even if the function is just returning a boolean or a "rating between 1 and 10". So yea, good hash functions are unlikely to be learned and solved, but photos and videos aren't designed to be hash functions.

(All that said, I'm willing to be bet the time and computing power you'd need to pour into this NN to break a good hash function is likely more than has ever existed in the sum of all past time + computing resources ;))
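
A toy sketch of why the landscape is hopeless for a good hash, using the "matching digits" score from above: flipping a single input bit rescrambles the whole digest, so the score is flat noise with no slope for a learner to climb.

    import hashlib, random

    TARGET = hashlib.sha256(b"target").hexdigest()

    def score(data: bytes) -> int:
        # Number of leading hex digits of sha256(data) matching TARGET.
        h = hashlib.sha256(data).hexdigest()
        return next((i for i, (a, b) in enumerate(zip(h, TARGET)) if a != b),
                    len(h))

    x = bytearray(b"guess-000")
    for _ in range(5):
        i = random.randrange(len(x) * 8)
        x[i // 8] ^= 1 << (i % 8)   # flip one input bit
        print(score(bytes(x)))      # almost always 0, occasionally 1:
                                    # flat noise, no gradient to follow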


It sounds very much like complexity theory, where you definitely have problems that are easier to verify than solve (see NP hard).

I think you mean NP. NP hard includes problems that are not in NP, such as the halting problem.

I'm not aware of any proof that photorealistic 3D rendering is in NP. If it is not, then verifying 3D cannot be said to be easy.


Well, not exactly regular NP, as was pointed out. I guess NP complete would make more sense. Although the set of problems that are easier to validate than solve reaches much further than NP complete; see basically any Σ in the polynomial hierarchy. It's about the existential quantifier.


Nitpicking on your nitpick: you mean NP-complete, not NP (for instance, any P problem is an NP problem, but NP-complete problems are probably not P)


You’re correct, but I’m gonna have to nitpick your nitpick and say that NP-complete problems have not been proven (or disproven) to be in P. “Probably not” is an opinion.


It's a well-informed opinion that is based on the fact that what we do know indicates[1] that, probably, P≠NP.

[1] https://en.wikipedia.org/wiki/P_versus_NP_problem#Reasons_to...


Publish a paper with the mathematical proof. “probably not” is neither here nor there.

The main argument from your link is nobody has found an efficient algorithm for any of the 3,000+ studied problems after all this time.

And yet it’s also true that in the same span of time, nobody has been able to prove that it’s not, either. No matter how hard they have tried.

“Probably not” is just an opinion. I’ll wait for the formal proof. Till then, it simply is not known either way.


it’s not always true (hash functions are in some sense designed by brilliant people to be as hard to learn as possible), but it’s true often enough to be surprising.

if there was some score that would look at lighting etc, you probably could just tack it on to one of the loss functions somewhere and expect to see improvement.

maybe not enough to beat the system; or perhaps it would increase the lighting score but make the image obviously unrealistic in other ways we haven’t thought of.


ya but in this case, wouldn't you have to program a really complex loss function, to track a lot of things like lighting, edge detection, granularity, video compression artifacts, etc.?

basically at this point you would have a lot of if-else statements, like an "Expert System" AI in the 80s.

so much so that it would be better to use a NN to replace the loss/error/verification function instead of coding it by hand?


It will keep generating stuff until it improves the score from the function, not try to satisfy the function exactly.


Yeah, but we don't generate prime numbers with them by scoring candidates on how few common factors they have. Pattern matching only works for certain things. At some point it's necessary to conceptualize, and at that point I'm pretty convinced you have hard AI.


Neural networks are not magic.

You can't solve the travelling salesman problem in polynomial time just by throwing machine learning at it...

On a mathematical level you can prove that machine learning will do a bad job for certain things, and this is what the parent talked about.


Is there any reason to believe generating 2D video with the properties that a recording of a 3D world has is a difficult mathematical problem though?


But you can solve TSP pretty well with a TSP solver. I’d be impressed if a neural network did that well. But maybe it’s possible.


See Dai et al, "Learning Combinatorial Optimization Algorithms Over Graphs", experiment section A.1.3 "Traveling Salesman Problem".

https://www.cc.gatech.edu/~lsong/papers/arxiv_rl_combopt.pdf

It does at least respectably well against the Concorde TSP solver although (unsurprisingly) doesn't beat it. Note that this also isn't a neural network that just maps a graph to a TSP solution; instead, the neural network function is a heuristic that guides the greedy construction of a solution one vertex at a time.
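
The skeleton of that construction is easy to sketch; in the paper the scoring heuristic is a learned graph network, so the hand-written nearest-neighbour score below is only a stand-in for it:

    import math, random

    def greedy_tour(points, score):
        # Build a tour one vertex at a time, always appending the city
        # the heuristic likes best; in Dai et al., `score` is learned.
        tour, rest = [0], set(range(1, len(points)))
        while rest:
            nxt = max(rest, key=lambda c: score(tour, c, points))
            tour.append(nxt)
            rest.remove(nxt)
        return tour

    def nearest_neighbour_score(tour, candidate, points):
        # Baseline stand-in: prefer the city closest to the tour's end.
        (x1, y1), (x2, y2) = points[tour[-1]], points[candidate]
        return -math.hypot(x2 - x1, y2 - y1)

    points = [(random.random(), random.random()) for _ in range(20)]
    print(greedy_tour(points, nearest_neighbour_score))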


That sounds like an approach prone to backfiring if not done carefully. The real world is noisy - something too perfect would stand out like a sore thumb. Training on an overly large set would likely wind up inconsistent, and an overly small one would probably be too biased.

For instance, one fundamental of forensic accounting is Benford's law [1] (a quick sketch below). Even in peripherally related fields like artistry it stands out. In one of my never-off-the-ground webcomic attempts with digital art, I tried making backgrounds with perspective and calculated the pixel spacing completely evenly - it stood out as unnaturally regular compared to doing it with just a straight-edge and a ruler on a cheap Wacom tablet.

[1]: https://en.wikipedia.org/wiki/Benford%27s_law
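
For the curious, a Benford first-digit check is only a few lines (a rough sketch; real forensic use needs a proper significance test):

    import math
    from collections import Counter

    def benford_deviation(values):
        # Difference between the observed first-digit frequencies and
        # Benford's prediction P(d) = log10(1 + 1/d), per digit 1..9.
        digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
        counts, n = Counter(digits), len(digits)
        return {d: counts.get(d, 0) / n - math.log10(1 + 1 / d)
                for d in range(1, 10)}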


Except that the error function is not differentiable.


That's what a GAN could do automatically, no?


That is what capsule networks were designed to solve. So far though I've only heard they got good results on MNIST and promising results on CIFAR10.

Anyway, I'm not sure that you can conclude that just because convolutions are 2D, deep neural networks are unable to understand 3D relations. For example, this network seems to have a pretty good understanding of the 3D world:

https://deepmind.com/blog/neural-scene-representation-and-re...


I imagine an interview video where multiple light sources are constantly moving, as proof that the lighting is real.

Or, more likely, these NN-generated videos will fake poor-quality video and audio. The fake video's author claims it was captured by a hidden camera, and the faked person appears to admit to the illegal act.


What do you mean by neural networks being 2D? How do you define the dimensionality of a neural network?


Look up "adversarial networks". If you have a classifier that tells when an image is faked, then you can use that classifier to train a network to generate images which make the classifier say the images are real.


ya but isn't the adversarial network in this case just a more sophisticated version of the error function (i.e. verifying function) the others have talked about?


What about using the video magnification technique to detect fakes? I'm guessing, in the same vein that the training dataset doesn't have blinking, it probably doesn't have a regular heartbeat.

This is the technique I'm thinking about https://www.youtube.com/watch?v=e9ASH8IBJ2U

Perhaps you need a high framerate in order for this to work?


Extracting heart beats from video with this technique only works reliably if the target is almost stationary, so a high framerate would help. However, high framerate requires bright lighting.
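
The extraction step itself is simple to sketch. This is only the crude mean-pixel variant of the idea, not the full Eulerian magnification pipeline, and it assumes a stationary, well-lit face crop:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def pulse_signal(frames, fps, lo=0.8, hi=3.0):
        # frames: (t, h, w, 3) array of a cropped, stationary face region.
        # Average the green channel per frame, then band-pass to the
        # 48-180 bpm range; a synthesized face with no real blood flow
        # should show no coherent peak in this band.
        green = frames[:, :, :, 1].mean(axis=(1, 2))
        b, a = butter(3, [lo / (fps / 2), hi / (fps / 2)], btype="band")
        return filtfilt(b, a, green - green.mean())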


> MediFor started in 2016 when the agency saw the fakery game leveling up. The project aims to create an automated system that looks at three levels of tells, fuses them, and comes up with an “integrity score” for an image or video.

That seems iffy. The problem with any automated system is that it can be used as an input into the training process. Things like matching weather reports to lighting sound more promising, because they'd require the manipulator to be more detailed in the data they collect, but that kind of thing also sounds a lot less reliable as an indicator.


If one wants the content of this article in podcast format, the researcher was interviewed in one of the latest episodes of Data Skeptic (an excellent podcast, by the way): https://dataskeptic.com/blog/episodes/2018/deepfakes


The thing about video authentication is that any algorithm that can classify videos into "real" and "fake" categories can serve as an input to a GAN that makes more convincing videos. Any fake-detection oracle is in fact a valuable training apparatus, and distributing the things will quickly backfire.

"So", you might suggest, "let's keep the classifiers secret and only reveal specific classifications to the public! The learning rate at one example of day would be terrible." Sure, this approach would stop adversarial classifiers acting as training aides, but such a thing would be socially useless, since nobody would have a reason to trust your pronouncements.

I don't think there's a way anyone wins here. The end game is that we return to an almost 19th century model in which recorded media takes on a faded secondary role relative to in-person experience and trusted commentary. (And don't expect anyone to agree on who's "trusted".)


> don't expect anyone to agree on who's 'trusted'

This has always been the case. I personally have issues with almost every major news outlet recently. Not just bias but blatant one-sided partisan reporting by nearly every actor. Not taking a side (nobody even remotely spoke to my values or conscience) this election really opened my eyes. Both sides hatefully attacking each other for being hateful, and using that to justify further hate (of hate, so it's cool, we aren't all bigots, right? Right!?) of the other 'side'. Neither side will even give the benefit of the doubt that what the other side does is in good faith with their values. It was kinda funny when drunk sports fans duked it out over foregone allegiances to groups that don't know they exist, but it's happening in casual social situations more and more. How much more of this before our own beer hall putsch moments begin? I digress...

Trust is already a fickle beast. Bloomberg is wrestling with it now, no matter their rigour. Personally, I think it might be a choice, prompting either a culture of verification and lockdown (modern China), or implied common trust in an ideal but with uncertainty (i.e. early USA). Both have huge advantages, injustices, and liabilities. It seems we have approached a middle ground with few of the perks of either and many of the disadvantages of both. Perhaps there is a better way moving forward. I am personally partial to 'dangerous' freedom over 'safe' subjugation, in all cases. As such, my values may differ from the mean. Such a system implies personal agency, which is rejected by the popular dogmatic fatalism of 'post-meritocracy'.


How long until we get a deepfake filter option when shooting video, one that gives you plausible deniability should your sex tape leak?


I always wear these when shooting plausibly deniable blink-free porn:

https://www.ebay.ie/itm/231712727530


That sounds useful for making porn to sell, but what is the point of making your own sextape if you can't tell it's you on it? There is already loads of porn...


I suspect GP means a filter that makes the video appear to have the hallmarks of a deep fake without actually changing the apparent participants.


Exactly


Presumably, you would remember.


More likely will be that once deep fakes are commonplace then no video will be trusted unless cryptographically signed by the participants.


Hmm, maybe. I wonder if it will be slightly more nuanced.

Documents are forgeable, but we generally believe documentary evidence. (Well, in courts we need a person to vouch for a document, but we already need that for video too; it's a very testimonial system.)

When two people dispute authenticity (outside of courts), we reflexively gauge the credibility and perceived motives of those involved.

It's a rough system, prone to error, but my point is we navigate these things socially without assuming bad faith for all documents.

It's like how home locks offer close to zero protection, but they are still a normal part of our world.

We can't be sure a document is real, can't be sure a lock will matter, but we are all busy, so we shrug and go on with our day in most cases.


And even then, only if there’s no incentive to fake the video.


Photoshop has been around forever, but I can't remember a time any forgery caused an issue publicly.


The public issue usually comes when the photoshopping is discovered. National Geographic got huge public pushback when it became clear that they had "moved" Egypt's pyramids for a cover. And Iran came in for mockery when it became clear that they had photoshopped extra missiles into a launch photo.

Forgeries are hugely risky for legitimate organizations; if discovered, the backlash can hurt them quite a bit.

Fringe, extremist, criminal elements have used forgeries to further their goals, and they will do so with "deep fake" videos too.

But our society is dominated by organizations that run on trust. And those organizations will have to quickly learn (indeed, they are learning now) how to distinguish and filter out the deep fakes. Just like they did with fake letters, fake signatures, and fake photos.


Or a Russian news channel using Arma (video game) footage as if it were real combat footage.


https://www.telegraph.co.uk/news/worldnews/europe/ukraine/11...

I'd say that was a pretty big deal (Russians providing a "satellite image" showing a Ukrainian fighter plane shooting down a commercial airliner).

Bad actors use this kind of technology whenever they think they can get away with it. As someone else mentioned, they've used video game footage and pretended it was actual combat footage.


Have you so quickly forgotten Emma Gonzalez? We don't need Deep Fakes to cause problems, regular ones will do just fine.

https://www.snopes.com/fact-check/emma-gonzalez-ripping-up-c...


Even if they haven't caused any obvious crisis, fake images do go viral. It's human nature to trust what you see, and it takes effort to maintain skepticism.

Exhibit 1: https://www.huffingtonpost.co.uk/entry/fake-news-trump-flood...

Exhibit 2: https://www.cnn.com/2018/03/26/us/emma-gonzalez-photo-doctor...


Yeah, this is why I'm not worried one iota about deepfakes. Say what you will about fake news, but blatant lies like a faked video or image generally do not get through journalistic filters. Something big like a fake video of a celebrity or a president would very likely not be published (and if it grew viral through some disingenuous source, its spread would be severely dampened by trusted news sites warning the public not to believe it). I mean, just look at the subsequent backlash after Oobah Butler's pranks. Sure, his prank fooled the news, but its effects were totally nil in the long-term steady state.


You realize that a very small minority of the overall population even watches news or pays much attention to it, right? A well-timed fake video could cause a riot, erode trust in institutions, antagonize relationships, and disrupt the social fabric.

It will become even more apparent when the population is primed to believe it: pick your hot-button issue of the day, and some significant portion of the population would absolutely believe a fake video depicting the opposing party doing something horrible, without pausing to reflect.

Even seemingly harmless caricatures, like Tina Fey's infamous SNL skit of Sarah Palin ("I can see Russia from my house"), have a long-term effect on people's perception of reality: http://eprints.lse.ac.uk/59838/1/USAPP_Blog_In_politics_cari...

In that paper a survey found that 7 in 10 Americans believed that Palin actually said that.

The mass production of deep fakes will have a deep and profound impact on society. I'm not keen on where this is heading.


Depends how good the journalistic filters are. Would these deepfakes get through the ones for the New York Times or BBC or Telegraph? Maybe not, but I have no doubt they'd fool the tabloid papers, especially given how low the standards are for stuff like the Sun/Daily Mirror/Daily Star/etc. They'd also likely fool quite a few specialist sites too, especially given the likes of the gaming press have very low standards for credible sources or evidence. People on Twitter have fooled them with blurry cellphone photos on a semi regular basis.

But it definitely depends on the news source and how thorough they are with verifying stories.


Sure, but I still don't see why we should be hysterically scared about its development, since I think photoshopped images would have done the same damage a long time ago if the system were really that fragile. The fact that photoshopped images aren't nearly as big a threat as what we're expecting deepfakes to be makes me think that however the set of amplifiers and filters is configured (and I'll admit that I don't have a full understanding of it all), it's likely sufficient to keep society from collapsing once the technology has matured.

Photoshopped images are a sort of "trial balloon" which has already proven that the system is pretty dang robust, that's my thesis.


What about the likely fake Chinese spy chip news put out recently by Bloomberg? The pictures of the spy chip that this supposedly reputable news agency plastered all over the Internet have been shown to be nothing more than a bog-standard filter chip they found on Mouser Electronics' website. Fake news is not automatically distinguishable from real news, regardless of the source.


You're right that it would be difficult to manipulate the general public with deepfakes. But there are plenty of dangerous cases where deepfakes could be effective, especially in the military or in business. The article cites two examples:

What if you see satellite images of a country mobilizing or testing nuclear weapons? What if someone synthesized sensor measurements?

Another example I can think of is a highly sophisticated spearphishing attack. What if someone impersonated a boss's voice over the phone? What kind of social engineering/manipulation could that type of adversary get away with?


Forgeries have had significant impacts. Everything from beef being photoshopped into meals leading to violence (e.g. in Hindu regions), to significantly impacting the burden of proof required for prosecution of war crimes.


i also can’t think of a case where a forgery has caused a serious problem in the public sphere.

however, there are tons of gullible and/or wilfully stupid people who respond to photoshopped pictures on social media as if they are real.

especially when these images are politically charged, some of them probably believe it and some “believe” because it fits their world view and makes them feel righteous.


> i also can’t think of a case where a forgery has caused a serious problem in the public sphere.

You're not paying attention then. Here's just one example.

https://www.google.com/amp/s/globalnews.ca/news/4333499/indi...


right, isolated small groups or individuals believe the fake videos (or "believe" them, it's possible some of these groups were already looking for an excuse for violence).

but it's not really the public sphere -- there is no nationwide uncertainty about these videos. the police understand perfectly that they are fake, for instance.


Videos are so shareable though. The video will go viral before it's successfully disputed.


Images are too.


Probably even more so than video.


I have a feeling there will always be a way to detect trace footprints of a 'fake' but assuming the technology is perfected, what is the best way to verify that a Deepfake video is not you without analyzing the video?

Thinking on this, I feel like the future might require that we record our every move via some sort of verified GPS service so that you can corroborate your whereabouts. Maybe the service comes with a video recording device that can show: no, 'at this date and time the client was at a McDonald's drive-through at 41°24'12.2"N 2°10'26.5"E; here is a 452-second video of their visit', which would overlap the supposed deepfake timeline/video.

Thoughts?

I imagine if the service is publicly trusted/credible then it would not need to expose the actual details of the client's whereabouts, it could just issue a true/false verification on a Deepfake based on the data held privately.


I imagine it would work the same as any other information today, like press releases. Who is the source? Where did they get it from? Do I have any reason to doubt them?

Ie, the end of "trusting" videos from unknown sources with unknown reputations and agendas.


The best way is probably corroborative physical evidence, similar to dissecting testimony. Reminds me of the MOVE lawsuit, where claims that people inside the compound were shooting at firefighters were given as the reason they pulled back. There was zero forensic proof of such an encounter. Similarly, if one deepfakes a clip of, say, George W. Bush shooting Bill Clinton in the head, the fact that he is alive and uninjured is proof that it is fake, regardless of how sophisticated the fakery. But those are old-fashioned legwork solutions.

Depending on how divergent the algorithms are and how 'hand-done' they are, I think a good intermediary measure would be to attempt to replicate the clip from scratch from other sources, as itself a proof of insufficient proof. If you can take a DMV driver's license photo or other publicly available footage of your client, plus a scene of a robbery from a surveillance camera, and put together something that looks exactly like the supposed evidence, made by someone who was given only a loose description of the event, you basically have proof that it is either fake or easily frameable, in the same sense that a random diary that says "I am the zodiac killer - Ted Cruz" isn't admissible evidence.


Have definitely thought about this type of situation. We have the technology to do it, even mostly offline, as storage is cheap enough that a person could take a GoPro and have it running at all times. I like the GPS stuff, but people don't like being tracked, so that wouldn't work. So instead of a service, this all has to be done locally on the device, so the person ultimately has control. We don't need to give out the information unless there is a reason.


For anyone wondering, these are the coordinates of the Sagrada Familia, and yes there is a McDonalds across the street.


Perhaps in the future there will be a step to edit videos with human-oriented operations to remove artifacts?


The obvious worry about this technology is disinformation being spread, misrepresenting who people are and what they have done or said. But it also gives everyone deniability. If someone catches a powerful person assaulting someone, and actually has a video of it, they can just say it was a really good fake. How can we know?


Trust journalists with a good track record, which is the way we as a society decide whether whistleblowers are to be believed. I get that we tech people dislike relying on humans, and the libertarians among us feel particularly uncomfortable trusting the media, but there are professional investigative journalists who have dedicated their lives to making the world a better place by uncovering the truth. And they have had to deal with similar issues since forever (other types of forgeries have existed for a while).

We fight this by having less partisanship and stronger free press.


I would love this to be the case. My problem with this approach is that every single news report that I have actually had personal knowledge of has been ridiculously flawed. I just can't bring myself to believe that there is some protected elite cadre of journalists I should trust when 100% of my actual experience has been the opposite.

Their direct incentives are getting sales (and now clicks), which doesn't appear to be highly correlated with accuracy. Additionally, my experience in college was that it wasn't the best and the brightest going into that field, though I deeply hope that it was a skewed sample, and believe that there are many deeply intelligent people working there.

However, I think it likely that many people are not drawn to the field for the pursuit of pure objective truth, but because they have a particular agenda and want to enlighten the world with it.

There is a quote about a logical fallacy that I can't quite remember; it goes along the lines of the following: You read an article about something with which you are deeply familiar. You laugh to yourself at the childish misconceptions of the journalist. You then turn the page to something you know little about, and assume that everything that is written is true. That doesn't make sense.


You are talking about this anecdote https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect

And I partially agree! But do not throw out the baby with the bathwater! I am a physicist myself and have seen the Gell-Mann effect become less problematic over time, especially in respected journals that want to be taken seriously by scientists: just make a decisive effort to 1) go out of your bubble and 2) learn about logical fallacies.


IME, in most things I have personal knowledge of, the better news sources do a pretty good job; they rarely have significant errors, for example. The lesser, non-partisan sources also rarely have significant errors, but they cover issues with less sophistication. My most common objections are that they don't go deep enough (but if it's something I already know about, then I'm not the intended audience - the NY Times isn't going to cover tech issues as deeply as I'd like) and that what I see as important questions are left unanswered and sometimes unasked. On the other hand, I shouldn't presume my own superior intelligence - if the journalist and I disagree, perhaps I'm not the one in the right; the smartest people I know often disagree with me about things like which questions are important.

Beyond that, I'm not sure stereotypes of any large group of human beings are worth addressing, other than to point them out for what they are.


>Trust journalists with a good track record

Do these journalists work for large mainstream media corporations that are all owned by a shockingly small number of partisans, by any chance?


This is a very lazy way to shut down a conversation. Huffington Post and Fox News are indeed owned by partisans and are partisan themselves, but even those outlets have respectable journalists (Smith and Wallace at Fox, for instance). WaPo is owned by a partisan but has a good track record. NPR has many underwriters, but that does not stop them from having good investigative journalists.

Do you have a complaint different from "It is not perfect so I do not believe in any of it"?


> WaPo ... has a good track record.

Sure, if you go back to the Watergate investigation.

Their track record in recent years is abysmal. They are basically the pseudo-official mouthpiece of the U.S. Democratic Party, and somewhere between the New York Times and the Huffington Post in terms of seriousness.


If we are sticking to soundbites I can simply respond with "Reality has a liberal bias".

When facts align with a policy idea, this makes the policy good, not the facts wrong (and we should applaud both parties when that happens). I do not read every single article in WaPo, but their front page and push notifications are most of the time fact-based.


I said nothing about liberal ideology. I mentioned a specific organization: the U.S. Democratic Party.

That party hated Bernie Sanders (who is plenty “liberal”[1]) nearly as much as it hated Trump, and both were covered very negatively by the Washington Post.

I don’t know why you assume I’m criticizing the Post from a right-wing perspective. Perhaps you are too steeped in the two-sided U.S. political culture, where disliking one party must mean liking the other?

By the way, reporting objectively correct facts does not make a publication unbiased or serious. Even if everything TMZ reports about celebrity relationships is true, do you consider them a serious news source? Deciding which facts to emphasize is just as important as telling the truth.

[1] To head off in advance any off-topic responses: this is true regardless of whether you mean “liberal” in the American or European sense.


Yup, I did assume it was that type of critique, my bad. I do not have an answer that would be satisfying, however:

- While I like Senator Sanders, reasonable people (I would like to believe I am one) can have stuff to dislike about his policies. This is how I view the non "alt right" complaints about him.

- Being more to the center than someone else, does not make you a mouthpiece for the centrist party.

Edit: I misread your comment again. I deleted an unrelated response.


> Trust journalists with a good track record

Really wish we could find a solution that didn't involve trust... rather, cryptographic proof.


Unless you can find a way to weave cryptography into light, it will always be possible to trick a video camera into recording something that is "fake". Trust is what society is built upon, we're not going to get away from it now.


Yes, I don't know if this problem can be solved, but it would be folly if no one was trying to crack it. We should always pursue ways to remove trust from vital social systems.


How about a government enforced mandate that everyone gets a secure computer embedded into their skull in a way that removal will kill them, with a camera that continuously records video, sound and GPS positioning? This could then be digitally signed and timestamped by the individual's computer.

Then, when a questionable event happens, the video of all participants and anyone nearby can be replayed. Then, all videos of participants can be replayed to make sure they match up, and the video of the time leading up to the event could be scrutinized. Furthermore any one who visited that area in the recent past could be checked to make sure they didn't create some sort of hologram or something that the participants watched rather than the real thing.


I do not get that point of view. You still need to trust the intention behind whatever is cryptographically verified. You will never have a verification of what happens inside of someone's skull (or, for the scifi counterarguments: inside of whatever else they store their thoughts in).

Having cryptography without trust in some form of desire for teamwork in your fellow humans defeats the whole purpose of cryptography... But I admit this is more about my ideology and reasonable people can disagree about it.


I just hope we can figure out a way to create watermarks, like maybe some kind of hash sum incorporating an asymmetric key which is injected by the camera / device, which can provide verification of source and which cannot be overwritten.

Possibly could involve some kind of blockchain tech similar to https://proofofexistence.com. It doesn't solve all of the problems but at least we can begin establishing a record of source. The point is, something needs to be done to augment the trust we have in journalists because it's easy to trick or buy people out.


You really do not need anything that complicated (definitely not a blockchain, when you only need the Merkle tree). All of this is exactly what cryptographic signatures are for, a tech that is decades old.

And as mentioned in the above comments, it does not help with analog loopholes and trust in the person being photographed.
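
For what it's worth, the signing scheme itself is a few lines with a mainstream library (a sketch using the Python cryptography package; clip.mp4 is a placeholder, and none of this closes the analog loophole of filming a staged scene):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # would live in secure hardware
    public_key = private_key.public_key()        # published out of band

    video = open("clip.mp4", "rb").read()
    signature = private_key.sign(video)

    # Verification raises InvalidSignature if even one byte was altered.
    public_key.verify(signature, video)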


The blockchain provides an immutable public record. Yes, not necessary, but still an interesting approach and actually a decent fit for a blockchain. A blockchain might still be an overfit, but it's useful if I claim to have an undoctored video at a certain point in time but don't want to release it immediately, and need a distributed way to validate that claim.

> And as mentioned in the above comments, it does not help with analog loopholes and trust in the person being photographed.

Like I said in my above comment, it doesn't solve all of the problems but at least we can begin establishing a record of source.


I am pretty sure that what you are calling a block chain is actually just a single hash or at most a Merkle tree. You do not need a consensus algorithm (PoW or PoS or whatever), i.e. you do not need a block chain if all you want is decentralized verification of a current or future publication.
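
Right; a plain hash commitment already covers the "prove it existed now, reveal later" case (a sketch; clip.mp4 is a placeholder):

    import hashlib, os

    nonce = os.urandom(32)                       # keeps the commitment blind
    video = open("clip.mp4", "rb").read()
    print("publish now:", hashlib.sha256(video + nonce).hexdigest())
    # Later, release the video and nonce; anyone can recompute the digest
    # and confirm the clip existed, unmodified, at commitment time.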


I was just making an arbitrary suggestion based on previous work by proofofexistence.com. I didn't suggest it was the only way or the best way forward.

I realize that people all the time try to say "Blockchain would be great for ____" so you're probably a little hypersensitive and I get that, but the suggestion of a blockchain was arbitrary and tangential to the real point of my post, so I don't really feel like getting into it right now.


This reminds me of the Star Trek episode where Data figures out his “mom” is an android because her blinking follows some mathematical pattern.


Could not outsmart the non-clickable-on-smartphone subscription notification.


I used the Brave browser. No issue.


Why not cryptographically sign videos from officials?


Well, that sounds like having two problems instead of one: if the hypothetical 'deepfake Turing test' were passed completely, signing would amount to giving officials arbitrary faking ability, trusted via authority, at a time when public trust in institutions is already on a downward trajectory.

Signing it only proves that they approved of it not that it is fundamentally accurate in any way.


Because the most politically salient and salacious videos are the ones not officially endorsed by their subjects.


They don't have to be. The reporter/agency could sign the video.


My thoughts as well. Create a digital chain of custody using cryptography.


I wonder why they chose Obama and Hillary as examples for being deepfaked. Aren't they basically retired now?


Couldn't this be solved with Blockchain Watermarking?


Combat deepfakes by regularly doing original things on your paid and/or public mediums, and incorporating them randomly into new content. The fakes have to play catch-up, due to the training process. The more complex, subtle, and human, the better, since they'll likely screw it up somehow. Just be an original, spontaneous (or more so) human from a source that people can verify.

The fakes will stand out. We'll all get more interesting celebrities. The celebrities will have more fun. Everybody wins... for a while. :)


I get irritated seeing deep fake videos and articles, because my 2004 Master's thesis, "Automated Actor Replacement in Filmed Media", details everything about how to make them, how to overcome all the issues of integrating them with real media, as well as how to identify fakes. It further details how to integrate Deep Fakes (I called them Personalized Videos) into the existing post-production media industry, enabling large-scale productions. I acquired a global patent, and tried to create a Personalized Advertising startup. Yet, back then, no one believed it, or thought it was fraud, and when they did get it they hyper-focused on creating porn, which, had they read my thesis turned business plan, they would have seen is economically unprofitable as personalized video.

I tried to prove my ideas by creating a "vfx at the last mile" pipeline with a small investment round, enabling media agencies to create ads with you in the Domino's pizza or rental car advertisement. It worked, at feature film quality. Still they did not believe, or hyper-focused on porn.

I went bankrupt, closed, and went to work in facial recognition, where my skills are recognized. The personal one-to-one treatment I received when pitching to entertainment studios versus that of Silicon Valley VCs is worthy of a book; I have zero respect for VCs now: of hundreds of personal interactions, one and only one was not a social-climbing, rich-parents moron.


> It worked, feature film quality.

Link to examples, otherwise your comment just reads like crankery.

By the way, I googled "Automated Actor Replacement in Filmed Media" and I couldn't find your Master's thesis. I could only find posts by you on HN, Quora, and other sites.


Not the OP but I found this: https://www.youtube.com/watch?v=1GnIVWEAPus


The process was created for and was used in this feature film production: https://www.youtube.com/watch?v=HBf9-_KyFJk That is my work, in combination with a VFX team. The original actor replacement process as implemented by the production team did not convince. I was a production analyst brought in, and I created the extra embellishments to make the illusion convincing. The work was unique; I did some research and learned it was patentable. Discussing it with John Hughes, president of the studio, he gave me the technology to pursue as I wished. This is significantly higher quality than today's Deep Fakes, and as such it requires actual VFX artists, as well as a non-trivial expense. (Of course, that expense is less than any return from using the process - that's the whole point.)


https://youtu.be/1GnIVWEAPus?t=38

Ah yes, feature film quality. For values of 'feature film' that approximately equal 'late-night comedy show where moving lips are overlaid on a still image'.


No, actual feature film VFX quality, which requires a VFX budget and is not created trivially. Any studio or production facility we negotiated with vetted the process and understood its viability. The process was used in dozens of VFX films, but as a one-off technique, where the process could just as easily have been used to actor-replace anyone. You see, that is somewhat the key to the process: because it is successful regardless of who is being inserted into the media, it grants an amount of freedom, or a casual attitude, for the production artists who used to view actor replacement as a difficult, time-consuming challenge.


Early test...


One of the patents, which are global btw. http://www.freepatentsonline.com/7974493.html



