Isn't the end game an endless stream of personalized content for everyone? Wherein the entire corpus of human-created media becomes a training set for our fantasies.
It is interesting how entertainment is again pushing the boundary of technology. Soon enough this push to make face editing tools for porn more accessible to everyone will allow anyone to:
1) Replace their ex-husband's face in their old family videos with their new husband's face.
2) Create a viral video of Donald Trump murdering someone.
3) Be the star of their favourite movie, porn or otherwise. (What's the effect this would have on people's memories, when they actively see themselves doing everything James Bond does, for instance? Shooting people, being generally powerful, and "getting the girl"?)
An abuser could make images where a person was at an event they were never at, or with a person they never met.
> "You've totally met Steve before, here's a photo of you with him, how do you not remember?"
An abuser could tear down someone's reality even more effectively than ever before. If they were having an affair with someone they'd just met, they could claim to be old school friends catching up and simply insert the person into an old photo.
Obviously, it's not all bad. There is the potential for this to be used for good as well, but I'm a pessimist.
It does touch on an interesting point, though: we've had roughly 100 years in which photo and audio recreations of events constitute "hard evidence" beyond our ability to fully falsify. It appears that within the next ~20 years we'll lose that reliability - footage of a politician making a dirty deal or a businessman engaging in conspiracy will become deniable not just as a misleading edit, but as outright fabrication.
What do we do at that point? Do smartphone videos get automatically hashed and uploaded to a blockchain somewhere, so that we can prove when the video came into being? Do we return to an 1850s sense of news, where claims effectively cease to be falsifiable except via personal experience? Are we ready for any of this?
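The hash-and-upload idea is straightforward to sketch. Here's a minimal illustration using only Python's standard library; the digest is what you'd publish to a timestamping service or chain, not the video itself (the service, and how the digest gets anchored, are left out):

```python
import hashlib

def video_fingerprint(data: bytes) -> str:
    """SHA-256 digest of the raw video bytes. Publishing this digest at
    recording time commits to the exact footage without revealing it;
    any later edit, however small, produces a different digest."""
    return hashlib.sha256(data).hexdigest()
```

To prove a clip existed at time T, you'd publish the digest at T and later show that hashing the clip reproduces it. Note this only proves the footage wasn't created after T; it says nothing about whether it was fabricated before T.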
We, as technologists, should come up with solutions that allow such shouts to become meaningless.
But that's thinking too small: what happens when every recording of corporate misdeeds, every photo of a cheating spouse, even a child's baby pictures, lose their solidity as evidence? What happens to anonymous online conversations when "pics or it didn't happen" becomes "it didn't happen"?
Adversarial algos sound quite interesting. It might mean that we'd have to look at the evidence of fraud in a whole new way though, because the thing that gives away the fake might not be obvious.
Conversely, the people who want to believe false things (and under the right circumstances that's most of us) will be easier to convince. What is truth anyway?
* A browser extension that detects if the social media profile you're viewing face-matches any revenge-porn that's out there, and serves it to you
* A phone app that undresses people, or [woman < AGE], or whoever, in real-time via AI-guided compositing. Will this be considered as offensive as putting a mirror on your shoe?
* Digital VR girl/boy-friends ala the movie "Her", except with the face, body, and voice of anyone you choose
Suddenly all these things seem very close at hand.
Anyone can see you fake naked at any time. Meh who cares.
Anyone can put you in any random video. Meh who cares.
Anyone can ... meh
We used to think it scandalous or offensive for someone to take photos of us. Now it's just part of being outside. We don't even think about the fact that everyone walks around with a camera.
What if a suspect's lawyer plays the same CCTV video and shows the arresting officer committing the crime and says, "See. Anyone could have made this evidence." You'd then have to prove chain of custody and it can get incredibly hairy, but only if you're rich and famous enough to hire those lawyers and make that argument.
Imagine what will be possible in even 5 years with good hardware. Deep fakes in real time, digitally signed.
This is basically already the premise of PoW -- it's hard to fake out the network and the chain of hashes show you exactly how much computational work was put into demonstrating veracity.
This doesn't remove the ability to fake things, but it imposes a price. If you really want to show something is real, dump a bunch of computation into computing hashes.
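A hashcash-style sketch of that idea, assuming the "price" is simply a number of leading zero hex digits in a SHA-256 digest (real systems tune difficulty dynamically):

```python
import hashlib

def prove_work(message: bytes, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(message || nonce) starts with
    `difficulty` zero hex digits. There is no shortcut, so a valid nonce
    is evidence that real computation was spent on this exact message."""
    nonce = 0
    target = "0" * difficulty
    while not hashlib.sha256(message + str(nonce).encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

def verify_work(message: bytes, nonce: int, difficulty: int) -> bool:
    """Checking a proof costs one hash; producing it cost ~16**difficulty."""
    digest = hashlib.sha256(message + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: anyone can verify cheaply, but forging many pieces of "expensive" content becomes costly.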
I know of, and have read about, people committing suicide or being killed because their honor was harmed.
This is not meh for many billions of people.
I agree. It’s not meh for many people.
I think that the more common it becomes, the more social mores are going to change to accommodate it. Once upon a time, dressing like people do today was scandalous; now it's not.
You know, like wearing a straw hat too late in the year leading to riots. https://en.m.wikipedia.org/wiki/Straw_Hat_Riot
There was a time when it was quite easy to find (without even trying for that specific content) photorealistic rape, snuff, bestiality, and child porn on the public web, without any AI involved.
> Illegal or not, if it’s purely virtual law enforcement is going to focus on the subset of crimes which involve actual human victims.
Actual prosecutions for virtual (generally not photorealistic) child porn in various jurisdictions demonstrate that this is not a hard and fast rule.
Now with added realism, these lines could become blurry and we could see some of these issues brought up again.
Citation please. There is nowhere near enough precedent to draw such a conclusion in the US. The defendants in these cases often end up pleading guilty.
US v Hanley, US v Red Rose Stories, etc.
Personally I believe that even unspoken thoughts can have a strong moral dimension for the individual, though of course I see no legal dimension.
One aspect of this will be does our indulgence of our own negative fantasies weaken our capability to act rightly when presented with a real world moral choice and does that make us culpable...or more culpable if we make a wrong choice.
It'll be really interesting to see more data come out about this. This concept is pretty much at the core of the video game violence debate which is still somewhat ongoing.
I suspect that if it's 2 seconds after pulling off a VR headset, having just been in a photorealistic world with no forced errors, people would be very confused.
In the US, all of that is already illegal. If you put yourself in a position where what you possess is indistinguishable from the real thing, the courts err on the side of the potential victim.
Law enforcement's priorities are not going to change; they don't distinguish between what's virtual or not. If it looks like CP, you can't point to a producer with valid 2257 documentation and it isn't obviously a cartoon, you're cooked.
The 2257 law follows the legal reasoning that as a porn producer, you have the burden of proving your innocence. You have to show the proof that the person in your image or video is a real, live _adult_ person, and if you cannot, it is assumed that the person is a real, live _child_ person.
A landmark case will come along where a jury will decide that this new technology introduces reasonable doubt into this thinking. When this happens, the _government_ will then have the burden of proving that the person in the video or image is a real, live _child_ person.
If you upload it to reddit, congratulations, you're no longer a consumer. You're a distributor. New charges apply.
If you just browse, you're sort of safe, but you better hope your browser isn't caching anything to disk. Forensic reconstruction still constitutes possession. But nobody just browses.
It's a weird situation because in the UK, they simply block the content (no freedom of speech). In the US, we have freedom of speech so ISPs can't block anything. But they do have to report if you visit a site that contains illegal material or transmit it, plain text, through their services.
Child pornography is a strict liability crime too, so intent doesn't matter. Say you download something from /r/gonewild and the girl is 16, but she looks 20 to you. Too bad. You're now in violation of the law and can be put on a sex offender list.
Many people probably have illegal content without even realizing it. That's another reason why encrypting all your devices is so important.
I'm not convinced. https://en.wikipedia.org/wiki/United_States_v._Handley
We can keep going. Why would that be desirable? It hits the right chemical buttons in the brain. Drag it out far enough and we're really aiming at being blissed out brains in jars being fed shots of endorphins at the right intervals.
You need the highs with the lows.
Calm down Charlie Brooker.
I don't see how this is significantly different.
We could create a viral picture of Trump killing someone now.
On the plus side artistic tools that help materialize internal life become more effective. We can interact with our dreams and fantasies more readily, to potentially therapeutic benefit.
It's hard to say whether this trend holds more danger or promise.
Cryptographic signatures. I.e., every frame in a video is signed with a 512-bit key that states authoritatively which camera was the source of the video and when it was taken. To change any pixels in the video you'd have to break this key and re-sign the footage, which an attacker couldn't do without physical access to the original camera.
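A rough sketch of per-frame signing. A real camera would hold an asymmetric key pair in secure hardware so anyone could verify without the secret; HMAC is used here purely as a standard-library stand-in to show the shape of the scheme, and chaining each signature into the next is an added detail that prevents frames from being dropped or reordered:

```python
import hashlib
import hmac

def sign_frame(device_key: bytes, frame: bytes, timestamp: str, prev_sig: bytes) -> bytes:
    """Sign one frame, chaining in the previous signature so frames can't
    be dropped, reordered, or spliced without breaking verification.
    (HMAC stand-in: real cameras would use asymmetric signatures.)"""
    payload = prev_sig + timestamp.encode() + hashlib.sha256(frame).digest()
    return hmac.new(device_key, payload, hashlib.sha256).digest()

def verify_stream(device_key: bytes, frames, timestamps, signatures) -> bool:
    """Re-derive every signature from the start of the stream."""
    prev = b"\x00" * 32  # fixed genesis value
    for frame, ts, sig in zip(frames, timestamps, signatures):
        if not hmac.compare_digest(sign_frame(device_key, frame, ts, prev), sig):
            return False
        prev = sig
    return True
```

Changing a single pixel in any frame invalidates that frame's signature and every signature after it.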
But it'll be at least 3 decades before this technology is commercialized, people see the demand for it, and the majority of all cameras in the world are replaced by it.
Even if this new tech is on the market in a decade (it's simple tech, but there's no demand or ecosystem yet), 90% of existing installed cameras won't have the feature, so fake videos will still be created with them. Only once ~80% of videos are authenticated, and a significant portion of the remaining 20% are fakes, will people be able to dismiss non-authenticated video. Until then, it's fakes non-stop.
We've got some ugly decades coming up.
A key is just data. Even if the key can't be extracted from common cameras, there's no magic preventing someone from making a device with a known key (or falsely registering a particular camera as manufactured with the magic worldwide-trustworthy central registry that the USA, China, and Russia all trust, yet are somehow unable to influence to obtain keys for manufacturing fake news), one that can be used to sign data outside the camera context to make it appear that it was "securely" seen by a legitimate camera.
If you're up against the NSA then pretty much any tech evidence would be invalid. But if you're having a small claim court case, then it seems reasonable that authentication technology can keep up with faking technology.
1. The attacker wouldn't have access to the original cameras private key.
2. The attacker creates a fake video, creates a private key and signs the video with the key
3. The attacker tries to install the key into a camera
Step 3 has to be made impossible, meaning that a camera becomes a trusted entity and only allowed parties (e.g. the camera manufacturer) have the authority to insert a private key into it. After a new private key is installed in a camera, the key would need to be signed with the manufacturer's private key and that signature stored in the camera as well. This makes the manufacturer responsible for the contents of the video. If someone changes the camera's private key, it would no longer match what the manufacturer signed.
Which, yes, then means we need authorized camera manufacturers and a process for certificate revocation and all of that.
The exact opposite of "free and open" for recording devices - and the only way we avoid being flooded with fake videos that seriously impact society. We have to want this if we want to keep any concept of video evidence in either the justice or social spheres.
I think, at this point, anything more than the camera having a secure device-generated key that it uses to sign its output and produce watermarks is overthinking it. That will add a nearly insurmountable barrier to many classes of forgery, namely the one where a forger claims to have a photo taken with your camera.
You also have to remember a significant class of photo-forgery would involve images that are claimed to originate from a camera the attacker controls. For instance, forged video of a robbery from a security camera. That could, in some cases, be defeated by observing that the images have authentication information they shouldn't have (such as traces of watermarks from other cameras).
At the end of the day, forgeries will be spotted by observing that they have subtle errors that don't add up. That's how it's always been done.
Don't have a registered key? Great, but that's no longer admissible.
Still, problems would exist with letting the camera read the private key for signing, but not allowing an attacker (forger) to access the private key and thus sign their modified footage.
Just a thought.
If a video is available and doesn't contain a pre-published key then the courts will still want to use it as useful evidence instead of simply ignoring it - sure, the opponent might have a slightly easier time attempting to contest that evidence as possibly doctored (just as they can do now), but courts will still make a judgement for each individual case.
I don't know if they subsequently updated it ( was called OSK ) but the idea has been around for some time. It was considered important for submission of evidence and several news agencies also insisted on it.
Think of it this way: we have fantastic technology for forging paper documents, but we still trust them. For instance, someone forged a memo to make a former president look bad close to an election, and they were tripped up because they weren't thinking about fonts.
Getting a forgery right is hard, and requires a lot of attention to detail. Technology will improve to add even more details that require careful attention, and maybe some cryptographic assurances. Maybe it won't be a world where anyone can spot a forgery, but there will still be experts who can.
If you're gonna frame a presidential candidate you should try harder.
Every recording device sends a stream of hashes to be cryptographically timestamped in realtime as they record. Now the time window to create fake evidence becomes incredibly small. You have to arrange the fake to be created at the same time or earlier than you will claim that the event happened. You have to know what you need to fake in advance, too, which isn't the case for all fraud.
Evidence will be corroborated from multiple places. If I'm in a public place, I can bet the scene is being recorded from multiple places, and I will not be able to predict who will be in that place in advance. Fake evidence can be caught out by finding discrepancies. If the majority of recording device owners whose evidence corroborates can be trusted, then the identification of the fake one can be made with reasonable certainty.
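The stream-of-hashes idea above can be sketched as a rolling chain, where each published digest commits to the new frame and to everything recorded before it (the timestamping service that countersigns each link is omitted here):

```python
import hashlib

GENESIS = "0" * 64  # arbitrary fixed starting link

def chain_digest(prev: str, frame: bytes) -> str:
    """Fold one frame into the chain. Because each link includes the
    previous one, fabricating footage after the fact would require
    rewriting every later link that was already timestamped."""
    return hashlib.sha256(prev.encode() + frame).hexdigest()
```

As each frame is recorded, the device publishes the latest link; anyone holding the footage can replay the chain and check it against the timestamped digests.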
I'm not so optimistic. Other people here are focusing on breaking the authentication, and that's something to worry about. I'm worried about the effect that even _known_ false content will have on people. Seeing a perfectly realistic video of $POLITICIAN doing something repulsive will affect your emotional perception of $POLITICIAN even if your rational mind reads and understands some kind of "unauthenticated" label attached to the video.
I've grown to completely distrust audio recording ever since I found out about VoCo from Adobe. I wonder how long until there's an open source alternative.
> The subreddit /r/Deepfakes became very active very fast and new deepfakes are submitted every day with varying degrees of realism. The most scary part is that ANYONE can be deepfaked, not just celebrities. Provided you have the right hardware (because neural networks demand beefy video cards for training) you could train a model of your friend and paste her face onto a porn video and boom. All you have to do is download a browser extension that downloads all photos from someone's instagram and work from there. Nobody is safe from this.
> I think this here is as cyberpunk as it gets. The technology is 4 months old and has already yielded extremely realistic results. Think of what we will have one year from now. Something like this matches both the high tech and the low life aspect of the cyberpunk genre.
EDIT: Pasted the wrong link.
It's the "Give me six lines written by the most honest man in the world, and I will find enough in them to hang him" problem except now it's "give me six photos of the most honest man and I will convince everyone he loves to abandon him".
There are notable examples of harassment mobs forming and never dissipating off of really scant "evidence", things like accusing school shooting parents of being "crisis actors" or long harassment campaigns based on flimsy claims that a woman slept around. It's disgusting.
School bullying leading to suicides is already bad enough, what about when teens are sending forged porn of each other around? Or to classmates' parents to get them in trouble?
It will for a while, then it'll revert to the point that video “evidence” is—except to the experts operating validation tools that most people don't understand—exactly equal to rumor in public estimation, because everyone has been exposed to so much fakery.
Counterpoint: "photoshopped" is a dictionary word.
Part 1: https://www.youtube.com/watch?v=QajyNRnyPMs
Part 2: https://www.youtube.com/watch?v=wfYlTtA7-ks
Where I would like this to go: either being able to take scenes from different films and create mashups like this.
Or perhaps getting a whole bunch of extras narrating lines and acting in front of basic sets with green screens, then putting on the faces of recognizable actors and using something like Lyrebird for the voices, where actors have sold the rights to their faces, voices, and personalities for cheap.
Now you have a $100m movie for the cost of $100k.
A similar premise of the film: The Congress.
I really think that in about 5 years, when the software is there and dedicated IaaS to train the models is commonly available, we'll start to see some really cool stuff.
Now I’m concerned about the implications of this. We already know any image can be faked and almost any video but we also laugh at people who say the moon landings were faked. Given this though how could anyone believe video evidence of say, the president with Stormy Daniels, which is a matter of unfortunate import with real consequences?
How hard would it be to fake an international incident from multiple vantage points?
But the overall implications are deeply troubling. I am tempted to say we're not entering a "post-truth" era, but more like a "post-reason" era. It's almost like rational thought has painted itself into a corner. These videos are almost like a mathematical "proof" too complex to be independently verified, or too complex for a human to check (and those are happening too).
If reason is hitting a ceiling, I'm not sure what else we could use to steer a coherent society through whatever murky waters might lay ahead.
I do believe we are entering what I call a "trans-rational" era. There are stable strange attractors beyond the Age of Reason. Our technology is forcing us to confront the questions of who we are and what we want to do with ourselves.
With face recognition, old pictures you might have posted online are very easy to find. Some ex-boyfriend shared a naked picture of you? You are screwed.
Now, you can simply say that it is a deepfake. Everyone will have naked pictures of "themselves" online, even if they are fake.
> Top is original footage from Rogue One with a strange CGI Carrie Fisher. Movie budget: $200m
> Bottom is a 20 minute fake that could have been done in essentially the same way with a visually similar actress. My budget: $0 and some Fleetwood Mac tunes
Specifically, I don't understand this sentence:
> Bottom is a 20 minute fake that could have been done in essentially the same way with a visually similar actress.
@ekimekim 44 days ago:
> We've already seen this with images and Photoshop. Society and their heuristics of belief will adjust as these new capabilities become widespread.
> What's more troubling is that as media becomes falsifiable, solid evidence of... well, anything, becomes hard to have.
> The ultimate loser there is the truth, sadly.
In the short term, this will probably lead to all kinds of terrible things (kids getting bullied through computer generated imagery of them, people being fired for videos of them saying things they never said, jealous spurned lovers attempting to break apart marriages with fake videos, etc.)
In the long term, it might actually be a good thing - instilling a strong sense of caution for anything that claims to be recorded from the real world.
We've had good technology for the forgery of printed documents for a long time, and the world hasn't ended.
Central signature verification or blockchain could verify a timestamp - i.e. prove that the frame was taken (or maliciously created) before a particular time. That's about it.
Exactly the opposite happened with print media, where any old bollocks is believed if it aligns with the reader's viewpoint.
These emerging video manipulation tools open new frontiers for related forensics research. In 10 years, when we see a video of someone doing something horrible, these people are perhaps our only hope of knowing whether or not what we're seeing ever happened.
I guess vein scanning is going to happen sooner or later in order for personal verification.
It seems /r/deepfakes (NSFW) is where the content is at (I assume the article doesn’t link to that, but haven’t checked).
That said, not being able to definitively classify videos as "faked" or "original" could help people suffering from revenge porn (or political manipulation).
All video streams (and frames) are signed / watermarked with the source that recorded them. Video editing now becomes impossible without showing a change in ownership.
Unfortunately it will be many decades before we can successfully replace the majority of cameras with these new ones, which society doesn't yet have reason to see the need for.
Yes inherently scary, but is there a solution to still allow a user to maintain control as to when they give up their privacy?
Ex. a 3rd party escrow service that will authenticate a video belongs to a user without revealing who the user is? Or some other key sharing scheme that allows users to decide when they want to take ownership of a whistle-blower video or some other govt corruption video in a repressive country?
I'm not a public key crypto expert, but I can't imagine this is the first time user-controlled authentication has been investigated. The requirement would be that everything is always signed, but (through key chaining, sharing...) a user gets to decide if they reveal their association with it.
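One simple building block for that kind of deferred ownership is a salted hash commitment: publish only the digest now, and reveal the salt later if and when you choose to claim the video. A minimal sketch with Python's standard library (a real escrow scheme would add signatures and a trusted or distributed log):

```python
import hashlib
import secrets

def commit(video: bytes) -> tuple[str, bytes]:
    """Publish the digest; keep the salt private. The random salt stops
    anyone from brute-forcing which video the commitment refers to."""
    salt = secrets.token_bytes(32)
    return hashlib.sha256(salt + video).hexdigest(), salt

def reveal(published_digest: str, salt: bytes, video: bytes) -> bool:
    """Later, the uploader proves ownership by revealing the salt."""
    return hashlib.sha256(salt + video).hexdigest() == published_digest
```

Until the salt is revealed, the published digest says nothing about who recorded what; afterwards, it proves the commitment was made at publication time.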
You can do plenty of things with it. Create fake porn, among other things. But is it ethical? Is it ethical to create fake porn with a celebrity's face? With your ex-girlfriend's? What if you keep it for your own personal use?
But then where are the limits? What about a "teen"? What about fake child pornography, made from pictures of children found on the internet, that lets someone gratify themselves without causing any harm to others?
I would at first think it's okay if you keep it for private, personal use, but then that becomes scary, because we've been "trained" to think that child porn is not okay.
Don't know why this is being downvoted?
The only difference is that now instead of using photoshop to make fake images, we can automate the process, and use it to make video. This isn't a paradigm shift, it's a change in pace.
I was just trying to suggest that our thoughts, words, and actions, even in private, are continually shaping us as people over all.
Simple example: I can often tell when my friends are depressed, even when they are trying to keep it private.
Also, I'm not sure why you used this as a chance to take a crack at feminists. tsk.
Well, it used to be that the loudest public voices asserting that what people did in private somehow affected the outside world were people like Pat Robertson. In 2018, it's people like Anita Sarkeesian. So I think it's apt to make the historical link there. If some other people come along in the future, I'll make the link then as well.
With such a system in place you will know who created the video, as their verified identity is attached to it. If there's no verified identity attached, the video won't hold any weight. Same goes for everything done on the iNet... you can use it anonymously, where what you say or do doesn't hold much or any weight, vs. commenting, posting, etc. using your verified ID.
This is just one solution that could help with all the fakery on the iNet and the mayhem it brings and will continue to bring, but worse.
You can use the Internet anonymously as you always have, but if you want to get your point across and ensure the veracity of whatever you're posting, you'll use your Internet-verified ID.
I'm thinking of the case of someone impersonating a public figure: where the originator of the video is not the public figure, it's immediately labeled fake news. Also, if it's revenge porn and you upload it without your identity attached, it's immediately labeled fake.
Besides, what you're proposing is stupid for other reasons. If I have a video of a police officer murdering someone, you're just going to say it's fake news unless that officer himself posts it? That's straight bullshit. "Government Sanctioned Truth" bullshit. The truth about a person or agency is not determined solely by what they approve of.
Again I am thinking of two use cases re: this solution... revenge porn and celebrity & political figure fake videos.
And a political figure can already just deny that the video is real. You're not preventing it from being posted, so it will be posted, and people aren't going to believe it? Simply because it wasn't approved by that politician? Again, it makes no difference. People will believe it's real because of course that politician won't approve of an unfavorable message.
Again, things don't become true just because someone approves of them. And things don't become fake just because nobody involved approves of them. Labels do not mean anything unless you can guarantee their accuracy. You're really not thinking this through effectively. And again, digital signatures already allow all of this to happen.
You're basically asking for an official worldwide propaganda platform, and it will be no more trustworthy than the existing propaganda platforms.
That's what I am getting at... things change 180 degrees on the Internet when no one believes anything that is not posted by someone with their verified identity, thus making revenge porn pointless. You even say they will label it fake, so no one gives a crap about some fake b.s. and it's totally and immediately disregarded.
No. People will not care what identity you used to post things. Verified or unverified makes no difference unless the verification is thorough and performed by a third party. If it is self-done, then it is no different from how things are currently.
>You even say they will label it fake thus no one giving a crap about some fake b.s. and it's totally and immediately disregarded.
No. Non sequitur. You label it fake, and people will just disbelieve your label. They do not necessarily disbelieve the content.
At this point, I don't know what to tell you. Create a startup and put your money into it. Place a big bet. You can lead a horse to water, but he drowns himself.
Do you not think based off of this Face2Face technology and all the fake news this evolution needs to happen?
Just like people watched that first film of a train moving towards the camera and freaked out, while today that seems quaint, humans as pattern-matchers extraordinaires will find a way to discern the fakes.