The story concerns people who are effectively immortal, barring death by massive injury. They don't age and often face ostracism or worse from superstitious or resentful neighbours, friends, and children. They frequently (re)use the obvious solution, moving on and assuming a new identity, many times over the 2000 years spanned by the different episodes. Eventually, however, this becomes untenable, and they reveal their identities to a population that is itself becoming long-lived thanks to improved medical technology.
It's a great read, and one of those books that illustrates how good SF writers can often write extremely good historical novels. Many of its historical episodes are superb interpretations of periods of history that aren't particularly well known nowadays, e.g. the post-Roman Near East.
Oh, that's the big one in terms of environments underserved by historical fiction. Any chance we can get Stephenson to get obsessed with it?
Biometrics are one thing, but they are not the only thing.
Funny -- I watched Barry Lyndon last night, too.
Especially alarming is the push to tie Internet activity to real-world identities, under the guise of post-facto accountability. The Internet is a medium that purposely eschewed ambient enforcement in favour of end-to-end intelligence, and yet we continually see pushes (many of them successful) to drag us back to that legacy post-facto, natural-language, imperfect-enforcement regime that benefits those in power.
"I want to be able to create a fake identity online..." Fine. Great. Have at it! Privacy and anonymity, what marvels in this modern age!
"...in order to evade the law while I commit crimes, or escape the law after my past crimes." Ehhhhh... this is a rough bit, folks. Listen, if you want to pitch full-on class war, us-vs-the-government, "there is no just man working for the injustice of tyranny" sort of shit, that's fine. It's just that that's the sort of shit that gets the government involved MUCH faster. I just feel a bit of subtlety might be better in the early going.
I understand that a knife can be used to make food or kill a man but you don't see most knife manufacturers advertising how quickly they can gut somebody for good reason. That's all I'm trying to say.
Again, I get the idea that there are people who are failed by the system, and that the way they're forced to right this wrong is by pretending to be someone new. I'm not even opposed to the idea! But OP is doing a terrible job of selling the idea to people who aren't on board by framing it that way. I don't even think you need to be pointedly obtuse to construe his statement the way I did. If you're going to say "this tool is helpful because the system itself is problematic, and possibly corrupted to the point that in the near future the best solution is to circumvent the system rather than reform it," you need to work pretty hard to pitch that to a mainstream crowd.
Edited addendum: While OP didn't say they want the ability to commit crimes, it took about 20 minutes before a different poster literally said he did, so the intent/desire is out there and pretty obvious.
The fact is, the original poster wrote:
> I hope more of this AI stuff takes off and we get some of that deniability anonymity back. I truly do think the Internet was an awesome landscape when people could express themselves openly and feel they were truly anonymous.
which you paraphrased as:
> "I'm glad I'm able to create a more believable fake identity online so I can commit crimes and not get busted!"
> "I want to be able to create a fake identity online... in order to evade the law while I commit crimes, or escape the law after my past crimes."
which is clearly a hyperbolic extension of the original argument. Also, the only person I see literally ascribing this intent to the OP is you, so I can only assume you are arguing in bad faith.
I'll admit that I may have been inferring something that wasn't there for OP. I still feel the points I'm bringing up are valid, but attempting to place these feelings on OP involves a bit of a stretch of his phrasing.
Moate, I checked your profile, but I don't see your name or address. I shudder to imagine the obvious intent/desire that implies. I question whether or not society will stand once the reign of terror you are planning has run its course.
I'll take it that I may have misconstrued the OP intent. Might have clutched my pearls a bit too quickly. However...
I think it's naive to think that certain technology doesn't enable criminal behavior. Cash currency is one such technology. It's much more difficult to trace than online money, and is thus preferred by certain criminal elements. It's also perfectly reasonable to say that we shouldn't outlaw cash because it's used by criminals. There's tons of legitimate reasons why people would want and prefer cash. Don't throw the baby out with the bathwater and all that.
I accept that crimes will be committed. But it's a horrible marketing choice to list it as a selling point/advantage to your technology. I'm fine with this facial amalgam stuff. I think being anon is great! But maybe hyping the "Finally I can do all this illegal shit I've been held back from due to evil oversight" isn't the right choice?
Does modern CGI make use of machine learning like this? If not, what does it have in common aside from significant use of GPUs? What are the computationally intensive parts of CGI, and what are the heavily manual, labor-intensive parts of the CGI development workflow?
1. This post shows how GANs were used to replicate the Princess Leia scene in Rogue One. The scene in the movie required a team of special-effects artists, whereas this required only access to lots of images of Carrie Fisher, a GPU, and this technology: https://io9.gizmodo.com/this-video-uses-the-power-of-deepfak...
2. I wrote this popular post on this topic (first page on Google for "deepfakes") that explains how it works, and where it can go for commercial applications: https://hackernoon.com/exploring-deepfakes-20c9947c22d9
3. If you prefer video, I gave a humorous talk on the same recently. It is designed for lay people, and has lots of examples: https://www.youtube.com/watch?v=wajS0XHzfpU&feature=youtu.be...
Also, it's important to consider control when talking about CGI or other creative endeavors. Generating a perfectly realistic scene is useless without being able to control the details of the scene. So the completely end-to-end magic demos aren't nearly as useful as things that generate off of armatures or performance transfer.
> Keep in mind that these are stills. Once the motion starts, humans detect the not-quite-realness easily. Even the 2D video creations by AI aren’t stitched correctly, let alone getting all the details of an animated 3D person right.
I agree this isn't end-to-end video synthesis, but the results are still very impressive and realistic. I can easily imagine this type of technology becoming a problem, because video evidence will be devalued and come to mean nothing.
They basically just animate Obama's lips using an RNN and then composite the output onto a target video of Obama to change the appearance of what he is saying. Yes, they aren't generating all of Obama's head yet, but this result is impressive in itself. I also can't wait to see where GANs take us 10-20 years down the road. Don't forget they are still in their infancy, relatively speaking.
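To make the "composite the output onto a target video" step concrete, here's a toy sketch of alpha-blending a generated mouth region into a target frame. This is not the actual pipeline from the Obama paper; frames are plain nested lists of grayscale values, and all names are illustrative.

```python
# Toy sketch of the compositing step (not the lip-sync model itself):
# paste a generated mouth region into a target frame via a soft alpha mask.

def composite_mouth(target, mouth, alpha, top, left):
    """Alpha-blend `mouth` (with per-pixel `alpha` in [0, 1]) into
    `target` at offset (top, left); returns a new frame."""
    out = [row[:] for row in target]  # copy so the target frame is untouched
    for i, (mrow, arow) in enumerate(zip(mouth, alpha)):
        for j, (m, a) in enumerate(zip(mrow, arow)):
            y, x = top + i, left + j
            out[y][x] = a * m + (1 - a) * target[y][x]
    return out

# 4x4 dark frame, 2x2 bright generated mouth, fully opaque mask
frame = [[0.0] * 4 for _ in range(4)]
mouth = [[1.0, 1.0], [1.0, 1.0]]
mask = [[1.0, 1.0], [1.0, 1.0]]
result = composite_mouth(frame, mouth, mask, top=1, left=1)
```

In the real system the mask edges would be feathered so the pasted region blends with the target's lighting, which is where most of the remaining "uncanny" artifacts come from.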
The main problem with this tech is that the quality cannot be reliably controlled. Whatever it generates pretty much has to be passed through a human checker/curator; you can't just serve up generations to the audience without checking what they are. So, imo, this tech would be very useful for the concept-art stage of the creative process: quickly creating new things, or mixing up existing things, but only to create an intermediate artifact, which the artists would then use as inspiration for the final art.
This is also devoid of any form of higher-level reasoning. It doesn't know concepts a human artist would know, such as object permanence or intuitive physics. Something as simple as a falling ball should stretch a certain way, and when it hits the ground should squash another way. It also lacks artistic sensibilities like anticipation that have been laid out in the 12 principles of animation, and what's more, there's almost no way to communicate them to these models, short of creating a huge dataset that embodies those 12 principles and hoping, or rather praying, that the model learns them.
Take the Reuters feed, render as Fox News, then apply a Wes Anderson style transfer. Maybe exploring all those permutations will be more interesting than creating deceptive fakes?
"Research at NVIDIA: The First Interactive AI Rendered Virtual World
" - https://www.youtube.com/watch?v=ayPqjPekn7g
Or, a bit earlier (star wars?), not-dead versions of themselves.
Performance art itself is about to be upturned.
AI can already string bad poetry together; surely before long it can do a five-act play, vary the characters a bit, jumble up the situations, create realistic 3D models, and put the whole thing together into your very own Breaking Bad that caters to your particular biases.
It can then take your feedback and do spinoffs, parodies, and so forth forever. You'll never be able to get up.
Someone is gonna glue those pieces together and make it a business.
It is currently on tour...
"Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins. It allows the co-mingling of physical reality with virtual reality (VR) and human intelligence with artificial intelligence (AI). Individuals may find themselves, for different reasons, more in tune or involved with the hyperreal world and less with the physical real world."
Send that value to the "forever" public registry in the cloud, AKA some combo of Twitter, Facebook, GMail, FastMail, AliMail, etc.
To be fabricated, it would have to be done in something approaching real time. Not absolute safety, but a seemingly easy step in that direction.
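The commit-to-a-public-registry idea above can be sketched in a few lines: publish only a hash of the footage, so a forger would have to produce the fake before the digest was posted. This is a minimal illustration with a plain Python list standing in for the public registry; all function names are hypothetical.

```python
import hashlib
import time

def commit_record(video_bytes, registry):
    """Hash the footage and append the digest (not the footage itself) to a
    public, append-only registry. Anyone can later check that these exact
    bytes existed no later than the recorded time."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    entry = {"sha256": digest, "posted_at": time.time()}
    registry.append(entry)  # stand-in for posting to Twitter/FastMail/etc.
    return entry

def verify_record(video_bytes, entry):
    """True iff the bytes match the digest that was committed earlier."""
    return hashlib.sha256(video_bytes).hexdigest() == entry["sha256"]

registry = []
footage = b"raw camera bytes"
entry = commit_record(footage, registry)
```

Note this only bounds *when* the bytes existed; it says nothing about whether they depict reality, which is exactly the "not absolute safety" caveat above.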
Attribution doesn't offer any proof in itself - which has us back to square one.
Vast data-hoarding might be able to spot inconsistencies within supposed data: the equivalent of thousands of witnesses at a UN press conference proving that world leaders did not in fact take off their clothes and start dancing like strippers before the world.
But if every device produces similarly consistent data from its cams, including metadata, one could hypothetically get thousands of feeds from differing points of view displaying the infamous 22xx UN press conference lewd-dance incident, corroborated by a blend of camera device keys. In the end it would come down to tautologies of trust.
I don’t know how to solve that. Not even in principle, as even a hypothetical future A.I. jury, judge, lawyer, and cop would have the “what does ‘good evidence’ mean?” problem in addition to the incentive alignment problem that already makes many of us question the justice of our laws.
That doesn't sound like a high hurdle at all. As soon as the algorithm is known, anyone with money to burn will be able to fire up a bunch of AWS GPU instances and create authentic-looking photos of people doing something they never actually did.
If you have some relevant knowledge and want a job that lets you travel all over the place at no cost to yourself, now is the time to become a forensic expert with a specialty in detecting AI-doctored evidence. Yes, there are already people who make a living testifying in court that a document is typeset in a font that didn't exist when it was purportedly printed, etc.
Or their non-profitable mining rigs.
Source paper: https://arxiv.org/pdf/1812.04948.pdf
Explainer video for the paper: https://www.youtube.com/watch?v=kSLJriaOumA
- steer people's behavior with nearly unverifiable yet entirely plausible news;
- carry out illegal activities as a large, powerful corporation and plausibly deny everything with forged proofs no one can really verify (think about what happens inside megafactories);
- falsely accuse, with skyrocketing effectiveness, anyone who disturbs the big and powerful;
- essentially centralize society even further, because yes, we automate, but we do not automate with open tech, in a free market of n competing players, with core knowledge openly provided by public universities; instead it is done by VERY few super-giant transnational players, with universities reduced to Ford-model "production factories" for workers.
Imagine how easily we could invent a kind of "widespread illness" just to sell a dummy medicine that does nothing but cost big money, handing big money to its vendor. Imagine how easily we could depict a country, or even a small area of a nation, as awful just to move people around. All of this can be verified in theory, but if all the tools are made by the few big and powerful, and are closed black boxes, our effective "verification power" becomes REALLY limited, and so does our ability to communicate, since we are everywhere more and more "remote workers" with fewer physical social contacts and no truly free means of communication... We do not even need censorship; it suffices to suffocate unwanted news under a stream of other news and silently delete it after a certain amount of time. It's super easy to spread FUD against any unwanted news, etc.
Nothing new under the sun, of course, only at an unprecedented level, with too few who can do too much, without the need for the kind of support chain that ancient dictators still required.
The point of my comment is essentially this: try to imagine how easy forging false proofs will become for subjects with enough computational resources and skill. Add to that scenario the current trend toward fewer and fewer subjects having the knowledge and infrastructure to act as a "platform of our society"...
Hope it's clearer now.
If you go through the generated faces you can see that all of them have different backgrounds. These are not generated from scratch, only picked from memory to match the requirements.
Each face has totally different hair and hairstyle.
Have they published the training dataset?
The kids' faces especially are hard to believe as generated. Their backgrounds, clothes, hair, and facial expressions all seem very unique to each picture. I will go through the dataset to find the matches once they release it.
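One common way to do that kind of dataset match-hunting is a perceptual hash plus Hamming distance, rather than exact byte comparison. Here's a minimal average-hash sketch operating on images already decoded to 8x8 grayscale grids (the decoding/resizing step, e.g. via PIL, is left out; the synthetic "images" below are just for illustration).

```python
def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale grid: each bit records
    whether a pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes; small = near-duplicate."""
    return bin(a ^ b).count("1")

# Two near-identical 8x8 "images" and one very different one
img_a = [[(i + j) % 2 * 255 for j in range(8)] for i in range(8)]
img_b = [row[:] for row in img_a]
img_b[0][0] = 250                     # tiny perturbation
img_c = [[0] * 8 for _ in range(8)]   # flat black
```

Scanning a released training set would then just mean hashing every training image once and looking for generated faces whose hash lands within a few bits of any training hash.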
Neural networks basically learn to implement nearly arbitrary computations (up to a certain circuit depth) to produce the desired output, so you can also think of deep learning as program mining of a certain program space that is reachable by the adaptive functions in a neural network. Stephen Wolfram has written about that in his blog and talked about it in one of his podcasts. So it's basically magic much like evolution, and it will probably destroy us because it's simply too powerful.
Maybe an upcoming dating app will let you connect your profile pics with those of people who have swiped right. From there the app creates images of what your offspring would look like.
I really don't want to repeat all of this here, but in essence: (1) you'd need all (and I mean all, not most; every currently existing device would have to be replaced) devices that can capture video (and thus can sign/confirm what they captured) to be fully outside of user control; if 0.01% of devices are jailbroken, it fails. (2) You'd need all the manufacturers of such devices (which put the appropriate secrets in them) and the entire supply chain in between to be 100% trustworthy, since if a manufacturer gives 100 phones to a spy agency that will happily sign anything as "unadulterated", it fails. (3) You'd need to figure out a magic way to close the analog loophole, since with specialized, expensive optics you could project arbitrary pixels onto the camera sensor, so the camera would "believe" it was seeing those exact pixels in reality.
A blockchain would allow you to securely assert things like "this data was seen and signed by device #1234/(or approved by user id #456) no later than this moment", and not more than that. It can't really prove that this data reflects reality somehow; a physical device that has the technical ability to take a picture of my face and add a "it's unadulterated" tag to it can be used (with appropriate effort) to take a fake AI-generated picture and add the same "it's unadulterated" tag.
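The limitation described above can be shown with a plain MAC standing in for the device's attestation scheme (real devices would use asymmetric keys in secure hardware, but the argument is the same). All names here are hypothetical; the point is that the signature proves possession of the key, not the truth of the image.

```python
import hashlib
import hmac

DEVICE_KEY = b"secret-burned-in-at-the-factory"  # hypothetical per-device secret

def sign_capture(image_bytes, key=DEVICE_KEY):
    """Device-side: tag captured bytes as 'unadulterated'."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes, tag, key=DEVICE_KEY):
    """Verifier-side: check that the key holder signed these bytes."""
    return hmac.compare_digest(sign_capture(image_bytes, key), tag)

real = b"pixels straight off the sensor"
fake = b"pixels from a GAN"

tag = sign_capture(real)
# A compromised device (or a leaked key) signs AI-generated pixels
# just as convincingly as real sensor output:
fake_tag = sign_capture(fake)
```

Both `verify_capture(real, tag)` and `verify_capture(fake, fake_tag)` succeed, which is exactly why the scheme can't distinguish a genuine capture from a signed fake.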
I still think this is coming despite all of the reasonable limitations you listed. Our society will desire some form of authenticity, and I can imagine an alliance of companies coming together to provide it.
It was funny when the internet was making her say and do dumb stuff, and she took it all in pretty good spirit. I think it was the porn, especially the age-deflation stuff, that finally caused her to retire from public life.
An immediate use case that jumped to mind was DIY stock photos or even movies. But, with the above question still hanging, I'm not sure how comfortable I'd be using this.
(now I'm off on a tangent reading https://en.m.wikipedia.org/wiki/Deception_in_animals )
It won't stop at dating apps either, they'll be able to build all sorts of fake social media accounts that can generate new photo content over and over to back up a persona/legend and/or a narrative for all sorts of nefarious reasons.
You'll have OGAs, private groups like PIAs, hacker/carder groups, etc. all doing this. Hell, some 12-year-old sitting in his bedroom will be able to generate private-shoot content of a dozen women for whatever the most trending genres on pornhub are, and sell private sets for months or years without anyone catching on.
I need to start some sort of neo-amishesque group and run away from tech.
Also, might this destroy the usefulness of photographic evidence?
Not sure if I feel it's offensive