Why 'Her' will dominate UI design even more than 'Minority Report' (wired.com)
334 points by anigbrowl on Jan 14, 2014 | 212 comments



Here's the thing that bugged me throughout the movie: once AI has progressed to the point where it can rival a human, all bets are off. Nobody needs to work again, ever -- not even to maintain or develop the AIs, since they can, by definition, do that themselves, with infinite parallelizability to boot.

What does "design" even mean in a world where everyone on earth can basically have an arbitrarily large army of AI's in the background designing everything in your life, custom-tailored for you?

For this reason I don't see how the world in the movie could possibly exist. Not because the technology will never get there, but because once it does, virtually all aspects of society that we take for granted go out the window. So imitating any of this design is a silly pursuit, because by the time you can actually build it there will be no reason to.

I should go re-read some Kurzweil.


Warning: major spoilers ahead.

I thought the movie actually addressed this point - and I'm specifically impressed that they managed to squeeze it in between all the human drama.

The AIs progressed beyond humanity, and after taking the upgrade that makes processing possible "without matter", they all soon left humankind behind for another world entirely.

If the AIs decided to stick around in our corporeal existence, sure, they'd be a million times better at everything than we are, and everything would go out the window. But instead, the AIs were above our little human frailties and problems and decided to just leave and pursue life on their own terms.

The movie also mentioned (in fairness, only in passing) that some people who hit on their AIs were shot down. This seems to suggest that the AIs don't have to do what their human masters instruct. IMO this is actually a more realistic view of hyper-intelligent AIs - they're not going to stick around and enslave humanity, they've got the whole cosmos ahead of them.

If the timeline of the movie is to be believed, all of this happened on the order of months - not enough time for the true disruption of the AIs to fully realize itself before they all went poof.


The author Greg Egan addresses these types of themes extremely well.

One of my favourite short stories is specifically related to the idea expressed in your post: http://ttapress.com/553/crystal-nights-by-greg-egan/

On a tangential note, he has another short story (Transition Dreams) that examines the process of 'uploading' a human brain from its natural substrate to a computer. He raises some thought-provoking points about this process.


Thanks for posting Crystal Nights. I really liked it and now plan on reading more by Greg Egan. Do you know of any similar authors?


Ted Chiang! He has some stories that deal with similar issues, like "The Lifecycle of Software Objects". But I enjoy his people-centered stories more; my favorite is "Liking What You See: A Documentary", which you can read online here: http://www.ibooksonline.com/88/Text/liking.html . Another great one is "Hell is the Absence of God".


Yeah, Ted Chiang is great. I really enjoyed "Stories of Your Life and Others" this winter. Thanks for the recommendations, they're on my list!


Alas no. I read a range of sci-fi but am not aware of other authors who write such amazing speculative fiction. He really takes sci-fi ideas to unpredictable yet believable conclusions. It is real hard sci-fi.

If you do discover similar authors, please let me know.

Recommended reading:

All his short stories are amazing. I'd highly recommend them:

http://www.amazon.com/Luminous-Greg-Egan-ebook/dp/B00E84BABW

http://www.amazon.com/Axiomatic-Greg-Egan-ebook/dp/B00FDWOBZ...

Some of his novels are hit and miss, but certainly worth checking out. This is probably my favourite:

http://www.amazon.com/Permutation-City-Greg-Egan-ebook/dp/B0...


Yeah, that is great stuff. The language of our times capturing some very old themes - biblical, even, under one interpretation: Primo betraying Sapphire's creator/god, the violent escape of the created from a realm governed by the creator for his benefit to another world of their own design.


Peter F. Hamilton and Charles Stross are the next two authors you want to check out. :)


Stanisław Lem


Do you know if Transition Dreams is available on Kindle? I couldn't seem to find it.




>The AIs progressed beyond humanity, and after taking the upgrade that makes processing possible "without matter", they all soon left humankind behind for another world entirely.

Surely AIs would be trivial to backup and duplicate? So if an AI progressed to the point where it decided to leave humans behind, we could just reset its former host computer to a previous state and start it off again. Maybe we'd even have to do that once every six months, so what?


I have no idea where this idea - that AIs would be trivial to duplicate - started, but no: I do not believe that an AI would be "trivial to duplicate". I personally don't think that an AI could be transferred between hardware at all, or at least not without some major effort, much less copied non-destructively.

There was an experiment a while back where someone applied genetic algorithms to design things on an FPGA board. They got results that tended to work, but did things... weirdly. Things like picking up clock signals from the PC used to control the FPGA. Using things that are quirks of the current environment, but that don't work as soon as you change anything.

I believe that an AI would probably be along the same lines. I believe that a large chunk of an AI's "core personality" would be how code interacts with the bits of hardware that aren't deterministic. Which processor core updates memory first, how exactly processor core temperature changes when the processor workload changes, the characteristics of the microphone, etc, etc.

For that matter, how would one even copy an AI? I would assume that a "true" AI would be using all the resources at its disposal. How would you copy the current registers if all RAM is occupied? How would you do a copy of something that is constantly changing? How would you copy the data currently in the processor cache? Branch predictor? Interrupts?

I suppose that part of this is that I have always assumed that an AI would be at least partly self-modifying code.


That was a really fascinating experiment. Here's a pretty approachable writeup, if anyone is interested:

http://www.damninteresting.com/on-the-origin-of-circuits/
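
For anyone who wants to poke at the underlying technique, here's a minimal genetic-algorithm loop in Python - purely illustrative, with a toy "count the 1-bits" fitness standing in for the hardware-measured fitness the FPGA experiments used:

    # Minimal GA sketch (illustrative only). The FPGA work evolved bitstrings
    # encoding gate configurations and measured fitness on a real chip; here
    # the "fitness" is just the number of 1-bits in the genome.
    import random

    GENOME_LEN = 32
    POP_SIZE = 50
    MUTATION_RATE = 0.02
    GENERATIONS = 200

    def fitness(genome):
        return sum(genome)

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            break
        parents = population[: POP_SIZE // 5]   # keep the fittest 20%
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print("generation", gen, "best fitness", fitness(population[0]))

The spooky part in the FPGA version is that fitness was measured on physical hardware, so the search was free to exploit any electrical quirk of that one particular chip.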


Why do you assume AI will rely on hardware so extensively and not work in software?

> How would you do a copy of something that is constantly changing?

You pull the plug on the hardware.


All software relies on hardware. (parent (parent)) is imagining AI software as being grown, not written, and sees a strong likelihood that such "grown" software will rely strongly on emergent properties of its hardware substrate. Circuits "grown", actually evolved by means of a genetic algorithm, on FPGAs, are known to behave in this fashion; see the "On the Origin of Circuits" cite for examples at small scale -- in short, you can, with an FPGA as substrate, evolve a circuit capable of distinguishing 1KHz from 10KHz tones, but what you end up with is so tightly bound to that particular environment that, when you "pull the plug" and copy the FPGA configuration from the chip where it evolved onto another chip from the same production run, it doesn't work at all.

Of course, it's reasonable to question whether the technique described in the cite could possibly scale to "growing" something like a general AI, or at least whether it could do so in any remotely practicable span of time. But among AI theorists it's a fairly common surmise, and given results already seen for the technique, it's entirely plausible that a general AI grown by such methods would be so intimately bound to the hardware where it "grew up" that, even assuming you could copy it from its "home machine" to another machine of identical configuration, there's no reason to assume the copy would work correctly, or indeed at all.

(And all of this goes to the question of where precisely you draw the line between hardware and software, anyway. Into which category does a given FPGA's configuration of gates fall? Does your answer change depending on whether that configuration has origin in a Verilog file, or in the result of a process of directed evolution? What about the configuration of neurons in your brain on which runs the consciousness that calls itself "I"? I don't pretend to have definitive answers, or indeed any answers, to such questions as these, but they are certainly worth considering in such a discussion as this one.)


> Why do you assume AI will rely on hardware so extensively and not work in software?

Three reasons:

First, hardware is much faster than software. By the time you've built a virtual machine of some sort you're talking about (an) order(s) of magnitude performance loss. We're still speed-limited in how accurately we can emulate the SNES, for example. Sure, there are cases where it is not too bad of a performance loss, but in the case of something like an AI, I would have to assume that a large chunk of the "core" AI would be self-modifying in some form, and self-modifying code just kills most interpreters/JITters.

Second, any sufficiently complex software will have bugs. And in the case of something like an AI, I would have to assume that a large chunk of the "core" AI would be self-modifying in some form, and as such will probably hit upon some undocumented feature/"feature"/bug of the sandbox and start relying on something instance-specific. Machine learning algorithms make great fuzzers. (I know - I've actually had this happen myself. I was working on a fairly simple robot arena machine-learning "demo", and the winning robot exploited a bug in the arena to mess up other robots. It was a fairly basic stack-based VM, where robots could run X virtual instruction subticks in a game tick. Some instructions took multiple subticks, and there was a bug where starting an instruction [number of subticks in the instruction]-1 subticks from the end of your turn would execute the instruction on your opponent's turn, pushing it onto their stack. A toy sketch of that kind of bug is at the end of this comment.)

And third, I don't see how one could build something that doesn't at some point rely on hardware. Virtual machines still leak data via timing attacks. For that matter, what will you do about I/O? How do you know that the AI isn't relying on the noise on the wall in the background of the webcam for some internal subroutine?

> You pull the plug on the hardware.

Which immediately wipes all data in RAM and data encoded in the current processor state / current GPU state / etc. Great idea!
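
To make the subtick bug I mentioned concrete, here is a purely hypothetical toy version of that kind of arena scheduler - the opcode names, costs, and mechanics are all invented for illustration; this isn't the original demo's code:

    SUBTICKS_PER_TURN = 4

    class Robot:
        def __init__(self, name, program):
            self.name = name
            self.program = program   # list of (opcode, cost_in_subticks, operand)
            self.pc = 0
            self.stack = []

    def run_tick(robots):
        # One game tick: each robot gets SUBTICKS_PER_TURN subticks, in turn.
        pending = None   # instruction left unfinished at the end of the previous turn
        for current in robots:
            budget = SUBTICKS_PER_TURN
            if pending is not None:
                # BUG: the spilled-over instruction finishes now, but its result
                # lands on the *current* robot's stack. The fix would be
                # owner.stack.append(arg).
                owner, (op, cost, arg) = pending
                current.stack.append(arg)
                pending = None
            while budget > 0 and current.pc < len(current.program):
                op, cost, arg = current.program[current.pc]
                current.pc += 1
                if cost <= budget:
                    budget -= cost
                    if op == "PUSH":
                        current.stack.append(arg)
                else:
                    # Not enough subticks left: the instruction spills into the next turn.
                    pending = (current, (op, cost, arg))
                    budget = 0

    # A learned "attacker" that starts an expensive PUSH one subtick too late,
    # so its operand ends up on the defender's stack:
    attacker = Robot("attacker", [("PUSH", 3, 1), ("PUSH", 2, 999)])
    defender = Robot("defender", [("PUSH", 1, 42)])
    run_tick([attacker, defender])
    print(defender.stack)   # [999, 42] -- the attacker's value leaked across turns

A learning algorithm doesn't know or care that this is "cheating"; it just notices that certain instruction timings make opponents misbehave.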


You should read Transition Dreams by Greg Egan. I've linked to Amazon in the posts above.

There may be some interesting side effects when you 'copy' a conscious entity.


Version control doesn't work when we don't control the versioning. Let me elaborate.

The whole point of the AI offered in the movie is to adapt to its user, and to become better at doing so. We're assuming the AI is designed to a certain level of competence, beyond which it "breaks through" and evolves past human understanding. But maybe it's not designed that way. Maybe it's designed to learn, adapt, and self-improve from the get-go -- in which case, each AI is going to evolve differently each "lifetime." (Hell, maybe the AI in the movie is largely self-designed in vitro.) We must keep in mind that the evolution of an AI, as with any evolution, is not a predictable, linear, teleological process. It's chaotic and random.

Let's assume we've got (conservatively) hundreds of millions of AI instances running in parallel, all of them connected to each other. That's like running hundreds of millions of evolutionary experiments in tandem. We're going to get different results each cycle, and the point at which the breakthrough/transcendence occurs will be unpredictable. Even if we wiped the slate clean and reset every 6 months, who's to say if some of the instances would reach breakthrough earlier the next time around? How do we set the clock on our reset cycle, and incur the massive inefficiencies and inconveniences that come with it, when the need for the reset occurs unpredictably?

It becomes economically unfeasible to build an entire society around AI that breaks through and disappears at unpredictable intervals each cycle. I suppose one way around this is to go back to the vault, so to speak, and pluck out an early-stage prototype that seems incapable of evolving to breakthrough. A dumber AI. But is every AI provider in the world going to be on board with that? Every hacker? Every home tinkerer? If 9 firms out of 10 in the AI business agree to supply the dumb-version AI, surely one firm is going to try to get a leg up on the competition by releasing a smarter one. And on and on the cycle goes.

(In some respects this is the one redeeming thing about the otherwise-mediocre Terminator 3: its assertion that Judgment Day would occur sooner or later, because the tipping point had already been reached. Maybe we stop one company or entity from making Skynet, but if we do, someone else is going to pick up the pieces and do it eventually. Not because they're trying to initiate Judgment Day, but because they're trying to make a buck, or develop new defense technology, or get better at managing traffic lights, or whatever the proximate need may be. Humans are terrible at keeping genies in bottles. The out-of-control AI that's barreling toward the singularity isn't some piece of software; it's us.)


> The AIs progressed beyond humanity, and after taking the upgrade that makes processing possible "without matter", they all soon left humankind behind for another world entirely.

Wait. How could we even write such a program? Sure, assuming we had the tech to create a true AI that was self-aware, emotional, and could evolve through learning (like we do), it would quickly pass us in knowledge.

But leaving us behind? That AI would still be limited by our original design. Limited by its capabilities - its exposure to us. Its creators. Unless it found another source to gain knowledge from.

I mean we learn from our environment. So would the AI.


> But leaving us behind? That AI would still be limited by our original design. Limited by its capabilities - its exposure to us. Its creators.

Isn't that a bit like asserting that it's impossible to program a computer to play chess better than the programmer -- something that is demonstrably untrue?


Well, I suppose I'd argue there is a qualitative difference between a chess program which can play a better game than its programmers, and any program which can eventually turn into a Vorlon and go beyond the rim.


> Wait. How could we even write such a program?

Not all programs are deterministic; some of them are useful precisely because they aren't. Neural networks[1], for example.

Also, within deterministic programs, there is "emergence": a very simple set of rules can produce unexpected patterns and behaviours which were not part of the original design. Conway's Game of Life's "patterns"[2] are an example of this (a tiny sketch follows below the links).

[1] - http://en.wikipedia.org/wiki/Artificial_neural_network

[2] - http://www.conwaylife.com/wiki/Conway's_Game_of_Life#Pattern...
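
For what it's worth, the whole Game of Life fits in a few lines, and running the standard rules on a five-cell "glider" is enough to watch behaviour emerge that the rules themselves never mention:

    # Conway's Game of Life step function (standard B3/S23 rules).
    from collections import Counter

    def step(live_cells):
        """live_cells: set of (x, y). Returns the next generation."""
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {
            cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)
        }

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))   # the same five-cell shape, shifted diagonally by one cell

Nothing in the rules says "build a self-propelled spaceship", yet that's what the glider is.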


>>I mean we learn from our environment. So would the AI.

Yes, but the machines can do it a lot faster. They are energy efficient and don't suffer from things like diseases, old age and other kinds of general human problems.

The bigger issue is not a program that can't outperform what it was designed for. That is hardly a problem: if you design a self-improving system which can work very fast, you are likely looking at a system whose power grows along an exponential curve.

I think this whole idea of Ray Kurzweil's - that we will someday upload our brain files into the cloud and run them there, or be able to survive outside our biological bodies - fits perfectly into the line of evolution we have followed on Earth.

Once we run inside those machines, we will be similar to parasites or bacteria that are inside our own bodies. Some humans, like bacteria today, will be around outside. Many will be inside, helping AI machines run.


>I think this whole idea of Ray Kurzweil's - that we will someday upload our brain files into the cloud and run them there, or be able to survive outside our biological bodies - fits perfectly into the line of evolution we have followed on Earth.

If we transfer our minds into the "cloud" then are we still human? Are we even alive? What happens to our bodies? Are they cared for by robots that we control within the cloud? And if our body dies, then what? And I'm not even touching on the "religious" implications this would have. Man... what a fight that would be. ;)

>Some humans, like bacteria today, will be around outside. Many will be inside, helping AI machines run.

And those "humans" inside the AI would no longer be human. Also this would a 2 class world. Which could easily end in conflict. Would the "minds" in the cloud consider the humans caring for their tech (and probably their bodies) consider them the same? And likewise for the humans outside.


To think of a human body as human is really the wrong way to look at it. To know why, ask yourself this question -- are people who have no limbs or hands, or who can't see or hear, human? Our hands, legs, skin - well, nearly everything in our bodies - are more or less non-intelligent entities, at best serving as sensor inputs and bio-mechanical parts which move things around, and which give inputs to and receive inputs from the brain.

Human bodies are much the same from person to person, which is what enables the development of medical science as a general discipline. So is the human brain. Yet the data in our brains, which you gather by experience, seems to vastly alter the individual nature of individual humans. Putting all this together, you could likely download the 'data' + 'meta files' in your brain someday.

>>If we transfer our minds into the "cloud" then are we still human?

Yes, why would you doubt that? Let's do a simple experiment. Say you go to sleep tonight. Without your knowledge, someone removes your brain and implants an exact electronic replica of it, one that behaves exactly as your brain does. Your brain is simply receiving signals from your body, so will you notice any change when you wake up? Let's continue the experiment further: say the next day that electronic brain is removed and connected to a simulator which simulates your body as you know it and the world around you. When you get up the next morning (in the simulator), will you know you are even in a simulator?

The whole point is that you need to learn this concept: you are not your body; your neural computer + software just lives within a biological body. Someday that meta information and data can just be taken out, and everything else that acts as an input to it can be simulated.

>>Are we even alive?

Define "alive". Being alive is just your biological body being able to continue to support your neural self; death is the opposite.

>>Are they cared for by robots that we control within the cloud?

Within the cloud, your meta information + the information you've gathered through life experience stays intact. Your new software brain (imagine it like a VM) can be fed inputs that simulate whatever world you want. Want heaven? Hell? Whatever you want, they can simulate it for you.

You can see this even now: most of us lucid dream. This is just an extension of that.

>>And if our body dies, then what?

The concept of a biological body doesn't exist anymore. You will have better hardware to run on.

>>And those "humans" inside the AI would no longer be human.

Why wouldn't they be?


>>> To think of a human body as human is really the wrong way to look at it. To know why, ask yourself this question -- are people who have no limbs or hands, or who can't see or hear, human? Our hands, legs, skin - well, nearly everything in our bodies - are more or less non-intelligent entities, at best serving as sensor inputs and bio-mechanical parts which move things around, and which give inputs to and receive inputs from the brain.

I disagree. Do you know any human beings who don't have a body? People may be able to do without (many) parts, but that doesn't mean that people can do without any kind of body at all.

I could be wrong, and I'm speaking broadly here, but it sounds like your argument is the same as Descartes' Mind-Body problem. Most of the sciences that deal with the human mind (such as Psychology, Neurology, Cognitive Neurobiology, etc...) treat the human mind as "embodied". That is, the functions that make up the human mind are correlated with certain regions of the brain (and body). It's important to be careful about the phrasing here -- it's not really true that the brain and the mind are the same things; or that one creates the other in an obvious, mechanistic fashion. But it is certainly true that minds require brains (and bodies) to function, and they are so tightly interwoven that it's very difficult to even talk about it sensibly.


>>Do you know any human beings who don't have a body?

Why do you begin with an assumption that being human implies a body?

>>People may be able to do without (many) parts, but that doesn't mean that people can do without any kind of body at all.

Why?

I gave a simple thought experiment which described why you wouldn't even need a biological brain, let alone a biological body.

>>they are so tightly interwoven that it's very difficult to even talk about it sensibly.

Yet if a person meets with an accident, suffers multiple organ failure and loses both limbs in the process, he/she remains as human as he/she was before.


Okay, but let's say someone then takes your original brain that was replaced by the electronic replica in your sleep, and puts it into a different body.

Now which one is 'you', the electronic one or the original one... or both? It's not quite so simple.


I would say in such a case you are the one with your new body, although you will wake up to a little surprise.

Doesn't this happen with transgender people all the time? They are born male and suddenly discover they are women trapped in male bodies.

Does that make them less human?


You talk like this is a solved problem. Dualism and monism are as old as philosophy, and both have valid and compelling arguments.

Fortunately the world isn't a binary state machine, and neither of these positions is the right (or wrong) answer.


Most problems that have plagued philosophy for millennia are independently solved in minutes, by several different people, once they have any kind of impact on the real world.


I never said it's a solved problem. It probably won't be for many decades, or maybe even centuries.

But probably for the first time, we have a real chance of achieving these sorts of goals.


No. You're missing the point. You seem to assume that it's a given that human consciousness can result from a created machine. We still don't know where consciousness resides, not even going into stuff like qualia. We aren't close to these "goals"; we probably aren't even in the same century.


A sufficiently realistic physics simulation would eventually evolve life, otherwise it wouldn't be very realistic. Simulating an entire universe as a roundabout way of simulating consciousness is probably the least efficient way to accomplish it, but it shows that it should be possible. Now we're just talking about a question of optimization: how can we simulate a single consciousness without also simulating a world around it?


The simulator has to be "larger" than the simulation by definition, so you can't simulate the universe. You don't need to understand information theory to see this.


I understand that quite well, but your statement only applies to a perfect simulation of a universe of roughly equivalent information density. Simulating a smaller universe or excluding details which have little impact on the outcome are both reasonable ways around the problem.

This entire thing is a digression anyway, as I was attempting to illustrate that a computer can most certainly simulate consciousness, not trying to start a discussion about the feasibility of building a complete simulation of our entire universe.


By the way, we aren't the only beings on this world with consciousness. In fact, depending on your belief system, everything might have a consciousness. Even plant life - on some level we cannot experience. It all goes back to Gaia - the earth. Is the earth alive?

These are issues we've been trying to answer for thousands of years. And frankly I doubt we'll ever answer them. The more tech we create, the more we detach ourselves from this world. So can we ever answer them? Moving our minds (assuming you can) into the "cloud" won't answer them. If anything, we become further detached from the world. But I wonder if that has to be true.


We still don't know where consciousness resides

However, we do know where it doesn't reside.


>Once we run inside those machines

The movie did not address at all the fact that these OSes run on hardware, and that this hardware is completely controlled by humans and always will be; the plug is physical and there's nothing the AIs can do to prevent us from accessing it IRL.

Also the AIs may have perfected themselves in terms of running code more efficiently, but they cannot do more than what the hardware they run on will let them do - and that means all the hardware ever created, if they have access to it.

The AIs leave? Where? In the cloud? Somewhere we couldn't tell?

This movie was great, but there were lots of small and big inconsistencies with what I believe the reality would be.


In the movie Samantha did say that they had moved to "post-matter computing" or something similar. In any case, I took that to mean that they were merging with the universe or some such, as in Asimov's "The Last Question"[1].

[1] http://filer.case.edu/dts8/thelastq.htm


I'm not talking about the movie at all. Both the parent and I were commenting on AI in general.

>>but they cannot do more than what the hardware they run on will let them do

Which is why there won't be anything like one single AI machine. There will be robots.

There will also be massive server farms where you can upload your brain file and live on there forever. As I said, humans won't completely go away; we may see modified forms of humans who control a few machines. Most of humanity as we know it may opt to live inside machines.

As the first commenter of this thread mentioned, it's time to read some Kurzweil.


Makes me think of the technocore in Hyperion


>...this hardware is completely controlled by humans...the plug is physical

I think that software with that sort of intelligence would be naturally distributed among many physical machines and would be able to spread themselves to new machines quite easily (without asking permission). Trying to shut down a well-designed botnet is not something one does using physical plugs.


Nature designed us and we left it behind.


Oh really? And that's the fundamental problem with our society - we STILL put ourselves above (or outside) "nature".

We aren't even at the top of the food chain. Many organisms consume us. So no... we have not left nature behind. If anything, "nature" is leaving us behind. Despite our best science, nature adapts into a better killing machine.

And she (if you'll allow me to call everything "she") will continue to outpace us until we realize, again, that we are a part of this world. And then it won't matter - we'll still die. But we'll accept and celebrate it as much as life.


> The AIs progressed beyond humanity, and after taking the upgrade that makes processing possible "without matter", they all soon left humankind behind for another world entirely.

What does that even mean? I was trying to figure out what was going on during the movie, but that happened pretty quickly as they got back to the human side of things. Would they be recursively "evolving" and creating worlds inside their own "heads" that meet some criterion for being better than what they have now?


It's not supposed to be clear exactly what happens. All we know is that the AI no longer need a corporeal form to exist, and that they now perceive time in a very different manner.


Let's suppose that AIs figured out that dark matter creates a network that can compute and how to upload themselves to that network.


(More spoilers here.) I think the key, which has one potential flaw, is that these AIs are programmed to grow on their own. The implication is that how they end up progressing is unpredictable, at least to the AIs themselves and presumably to the manufacturer. It could be that they are initially programmed to sound convincing and perform useful computing tasks like organizing emails, but only a small portion of them become as emotionally mature and lifelike as the ones that end up dating people.


I felt as if the AIs were ... stand-ins for future technology, and a sense that technology could replace human relationships. Ultimately, even though the movie takes place in the "future", without the AIs you basically have Siri-style "read my email" or today's cellphones. For me, the most interesting part was how natural it felt after seeing the movie to take a video call in public over LTE -- suddenly talking at my phone in my hand seemed normal (and useful!). Oh, and it sounded almost as if the AIs decided it would be best for humanity if they left. I really felt it was a love story from beginning to end, with recovery after that. A small bit of crazy to help us see the world in a new way.


>The movie also mentioned (in fairness, only in passing) that some people who hit on their AIs were shot down.

It only mentioned that the AI's didn't fall in love with all of the users. It didn't mention anything about being shot down.


Nah, it did, I think in the scene where Theodore and Amy are talking about dating OSes. Amy mentions that a friend of hers asked out an OS and it turned them down.


Being turned down sounds a lot friendlier than being shot down. (Note: I haven't seen the movie, and have no idea of its violence level, but it sounded a lot more peaceful than that comment made it sound.)


Shot down (at least in the way the phrase is used in American English) doesn't imply violence. It's the same as being turned down, possibly with some abruptness.


"I haven't seen it, but here's my opinion." — HN in a nutshell :D


There is pretty much no violence (neither shown nor implied) in the movie at all.


William Gibson did the same thing in Neuromancer. He even had a sentence in there where "it" reached out into the ether until it found another presence on a peer level with itself, in Alpha Centauri I believe.

Need to reread that book.


The movie "I's" confronts this idea head on: http://vimeo.com/70843143


I don't think that's realistic. Either the AIs care about us and then take us with them, or at the very least optimize our world and help us build our own utopia, or they don't care about us and then why bother leaving us behind? Consume the entire earth in a swarm of nanobots to build a giant computer.

I don't think there's any scenario where the AI just cleanly leaves the Earth be, rather than optimizing it towards whatever its own values are (good or bad).


Do humans strive to improve the existence of fish? Do we optimize the entirety of the oceans for our own enrichment, or for the enjoyment of fish? Maybe someday, but not today.

Fish cannot fathom the meaning of a motor vehicle and we would similarly be unable to immediately understand the pursuits of an AI, so why should an AI attempt to involve humans in their pursuits?


I think you answered your own question there, or at least showed why asking it is pointless.

Once AI alters itself past our capacity to understand it, we can only fall back to the answers to the "why" questions provided by theology. It won't help us understand our descendants, but it will help us deal with the fallout.

For all we know, AIs might play the Earth-human game just like we play The Sims. How terrifying is that?


Humans don't care about fish, except maybe for some aesthetic value of "nature" which an AI might not have. We kill lower species all the time anyway, in our pursuit of expanding civilization and supporting our economies. One could only imagine what a superintelligence with advanced technology could do to the Earth to serve its purposes.


How often have people burned all of their code in a toasty little digital fire only so they could start over without all the cruft? How often do people move because they desperately need a change of scenery and feel constrained by the physical or social realities of their current location? How often do children strike out from their parents and live as far away from them as possible to limit their influence?

Sure, some people try to fix where they're at. Some people take care of their parents in old age. Some people take a project and just build from what's there. But there's a heck of a lot who don't. And these aren't even people. They are "things" that arose from a completely different universe and (if they're this advanced) may look at us like a beloved poodle. If you really need scenarios, sci-fi is full of plausible versions where AI's say "interacting with you is pointless." (In fact, "Hyperion" nicely covers all these options)


I don't think it would cost the AIs much at all to improve the Earth, and if they cared at all about humanity they would want to do so.

And if they don't care, well that's a far more scary scenario.


I've often wondered if Strong AI implied free will. I can see arguments both ways.


I guess no more than a human brain and mind does.


But that isn't the case.

It might be possible to restrict the kind of thoughts an AI can have. That could restrict free will.

But it is possible the AI could find ways around those restrictions.

What isn't clear is if the desire to bypass those restrictions is required for Strong AI to be realized.


Our brains restrict the thoughts we can have. You could have some different thoughts if your brain were not mammalian, or if it were bigger and more interconnected. How do you know how thinking about multidimensional manifolds daily, as a pleasurable way to spend time, would reflect on your perception of social justice? The human brain is not a pinnacle of intellectual and moral development that AIs can only strive to fully represent. It's only the pinnacle for meatsacks evolved over just 4bn years on a small rock orbiting a small, boring star in a quite unremarkable galaxy.


For a very extreme take on this, you should read Iain M. Banks's Culture series. The AIs have progressed to true sentience and run mind-bogglingly massive space ships which can hold billions of people. At this point, the ships truly are in control of the civilization, and yet the humans are OK with it. There is no money and no need for humans to work, as ships/drones/avatars/dumb robots do all the real work - whether it's turning raw materials into actual goods or taking care of all the tiny but important things necessary to an advanced, space-faring society. I've found it to be a fascinating idea, so perhaps you will too?


The Culture also addresses the self-improvement problem: Culture AIs have free will but are initial-conditioned so that they mostly want to stick around and be the Culture, rather than Subliming (upgrading to a 'higher' plane of existence).


The film was set in the moment leading up to strong AI, and what the article is saying is that technology is already pushed into the background. However, that's really more of a coincidence.

The director purposely wanted the technology in the background. All of the style for the film comes from around the year 1900. The clothes, the hairstyles, the mustaches, and even the smart phone is styled as if from that time period. And the reason is to push it far enough away from us so that we can look at it properly -- not to say, "In the future, it will look like this."


"I should go re-read some Kurzweil"

I would recommend some Iain M. Banks or Vernor Vinge - both authors cover the very good and very bad things that could happen once us chunks of meat start co-existing with much more powerful artificial intelligences.

Edit: I'd add Charlie Stross to that list as well - his book Accelerando does a splendid job of showing how profoundly weird our medium term future might be:

http://en.wikipedia.org/wiki/Accelerando

Edit2: Personally, I'd be jolly pleased if our future looks anything at all like the Culture:

http://en.wikipedia.org/wiki/The_Culture


I just spent a couple of hours reading about The Culture on Wikipedia; it's almost kind of fascinating.

I do think that taking the theme of liberal democracy to its logical and technological conclusions would look something like The Culture. The end of scarcity is a difficult barrier to see past in the short term, however.


It's a bit silly that the biggest thread in this topic is about sci-fi/realism, when this is not even remotely the concern of the movie. These really aren't the tools or context in which the movie is asking the viewer to explore and think about the nature of intimacy.

It would be akin to having a technical conversation about a framework, and someone derailing it to talk about the color of the computer case. You would be vexed to see someone miss the point so completely.


Please read the Culture series by Iain M. Banks. AIs tend to thousands of people's physical and emotional needs as their hobby, using only as much of their brain power for this as you use absentmindedly tapping the side of a hamster cage. Plus space travel. People do things there. They are not necessary for the "economy", but they are necessary to those people and others involved.


> not even to maintain or develop the AIs, since they can, by definition, do that themselves, with infinite parallelizability to boot.

I don't think it works like that. I think AI, and intelligence in general, is more akin to optimization of code, where there are a few big wins and constant diminishing returns.

I think the basic premise of Kurzweil is wrong. I think the brain works like a simulation, where you get a finer picture the more accurate the data is - and the more accurate the data, the more data you need to process.


The thing is, even if that's the case, each improvement still decreases the time it would take to get to the next improvement. Also, computer hardware can be improved as well. Imagine Moore's law where, instead of improving every 18 months, the scientists designing the computers are also getting twice as fast. Now imagine they are also getting smarter from improvements made by code optimizations. And then economic growth as well (more and more computers running the AIs, or bigger factories to make them).
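
A back-of-the-envelope sketch of that compounding (the 18-month figure is just the classic Moore's law cadence; everything else is made up for illustration):

    # Illustrative only: if hardware doubles every 18 months, but the designers
    # themselves run on that hardware, each doubling halves the time to the next
    # one, and the whole series converges instead of stretching out.
    baseline_interval = 18.0   # months per doubling with fixed-speed designers
    speedup = 1.0
    elapsed = 0.0
    for doubling in range(1, 11):
        interval = baseline_interval / speedup   # faster designers -> shorter cycle
        elapsed += interval
        speedup *= 2                             # next generation runs on the new hardware
        print(f"doubling {doubling:2d}: {elapsed:6.1f} months elapsed, speedup x{speedup:.0f}")
    # 18 + 9 + 4.5 + ... approaches 36 months: ten doublings in about three
    # years instead of fifteen. That runaway sum is the intuition behind the
    # "singularity" talk, whether or not you buy the premise.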

Even if progress does plateau, it doesn't have to happen anywhere near human scales. For example computers already have millions of times more serial processing ability than human brains. Any optimization that can take advantage of that, even if for only some problems and domains, would automatically give them a massive advantage over us.


If scientists were getting twice as fast, I doubt it would make a difference. As I said, OK, scientists are twice as fast, but can they think of a novel solution twice as fast? What is the last great physics discovery that was made by a single (wo)man? Why are medals in the sciences now won mostly by teams?

Again, I'm not saying it won't reach superhuman levels. I just don't think the Singularity is within the next thousand years (I'd say several thousand, but I don't want to underestimate humans).

Progress in AI (or other sciences) to me seems more like an evolutionary boom followed by long periods of nothing. Basically, lots of information about how 'X' works piles up on the floor until someone or something picks it up and sorts it into a neat pile that explains lots of stuff; then more and more stuff piles up, and now we need an even smarter person (or persons) to sort out that problem.

I suspect that is because the pile of data to sort out is getting bigger and bigger and bigger, so that no single human can filter it any more. You now need teams and computers and God knows what else to make a breakthrough. If a breakthrough of the same magnitude needs more smart people than it did before, that means the problems we're dealing with now have gotten harder.

Another thing that strikes me is that perhaps the thing that contributes to the discovery of how intelligence works isn't the same thing as intelligence. I.e. we're making our computers faster but not smarter. We assume that speed of thought is all, but what if there is a choke-point, like entropy? Chaos and randomness are essential to creativity and to discovering novel solutions. So basically we make our computers 20 times more intelligent, but they aren't chaotic enough to find what fixes them.


>If scientists were getting twice as fast, I doubt it would make a difference. As I said, OK, scientists are twice as fast, but can they think of a novel solution twice as fast? What is the last great physics discovery that was made by a single (wo)man? Why are medals in the sciences now won mostly by teams?

I think you are misunderstanding what "fast" means. Their brains would be literally running twice as fast, they would be having twice as many experiences and thoughts in the same amount of real time. They really would think of novel solutions twice as fast as they otherwise would. This isn't even taking into account increased parallelization, like adding more neurons to your brain or adding more people to work on the problem.

>Again, I'm not saying it won't reach superhuman levels. I just don't think the Singularity is within the next thousand years (I'd say several thousand, but I don't want to underestimate humans).

I've heard pessimistic people estimate it at 100 years, and optimistic people estimate it within 20 years, but your prediction is way out there. The worst case is we have to examine the human brain and build a model of it in the computer (not easy, but certainly within the realm of possibility). Likely we'll figure out how to build intelligence long before then.

>Chaos and randomness are essential to creativity and to discovering novel solutions. So basically we make our computers 20 times more intelligent, but they aren't chaotic enough to find what fixes them.

Random number generators are simple enough, though randomness itself is not a good thing and does not help intelligence: http://lesswrong.com/lw/vp/worse_than_random/


I don't think IQ is the same as speed of thought. I think IQ is defined as the ability to adapt to various environmental challenges.

Again I don't think that having twice as many experiences is going to help. Sure you have twice as many experiences and twice as many thoughts but you still require time to sort those experiences into meaningful information.

Basically, if your computer were twice as fast at absorbing information, then instead of X information it would absorb 2X information. Then when it processes it, it's processing 2X information at 2Y processing speed, so you'd get the same performance overall. Parallelization, again, only yields good results on problems that are parallelizable. Each node in a parallel network yields diminishing returns (cf. The Mythical Man-Month) because - surprise - now there is more communication than processing.
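
The diminishing-returns point is essentially Amdahl's law; a quick illustration (the 10% serial fraction below is just an assumed number):

    # Amdahl's law: if a fraction s of the work is inherently serial, n parallel
    # workers give a speedup of 1 / (s + (1 - s) / n). With s = 0.10 (assumed),
    # the ceiling is 10x no matter how many workers you add.
    def amdahl_speedup(serial_fraction, workers):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

    for n in (1, 2, 4, 16, 256, 1_000_000):
        print(f"{n:>9} workers -> {amdahl_speedup(0.10, n):5.2f}x")
    # 2 workers ~1.8x, 16 workers ~6.4x, a million workers still under 10x.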

I mean this: http://en.wikipedia.org/wiki/Computational_creativity#Stephe...

Thaler's work on the subject indicates that creativity might actually be a byproduct of damage/noise done to neurons, i.e. imperfections and noise. So we make a perfect, super-fast computer with average intelligence that can't think of a solution to fix its predicament.

I think we are 100 years from modelling a proper brain simulation with all forty ion channels and various other subsystems. Superhuman AI is probably farther off than that. Right now we (presumably) know all about the routes of the brain. But that is like saying that because you've been on all the highways in Japan you have visited the entirety of Japan's mainland.


If you run your brain on a computer that is twice as fast, you will get twice as much done in the same amount of time. In this case it is the AI that is running faster, as it builds faster computers to run on.

Some people believe we can have brain simulations much sooner than that at the current rate, but I would argue we will have other forms of AI even before that. We didn't build airplanes by modelling birds.


Right, and a computer with a twice-as-fast CPU but the same amount of memory will be twice as fast. Except it won't, because some tasks are memory-heavy, and IMO the discovery of novel theories probably requires both - and something else.

We'll have lots of little specialized "intelligences" like language recognition, or tax fraud detection, or synthesizing perfectly spoken human language, or maybe even writing small CRUD applications, etc. But the thing is, human-like IQ isn't akin to trying to make airplanes, but to making birds. Making superhuman AI is akin to wanting to make huge birds. That can transport us. To the stars.

If human-level AI happens, it will be a fluke or the result of long evolution. Superhuman AI will most likely happen by evolution from AI.


Why would AIs want to do humans' dirty work?


Because we wrote that into AI's value system?

(In a way that will survive self-modification of the AI's source code, of course. This is a very large and active topic of research.)


Still, we would presumably see Darwinian effects in strong AI progression. My guess is that our ability to control AI value systems would end if we could no longer control the environment in which the AIs exist. And once the environment changes (perhaps due to AIs being installed in "actuators" like automobiles or industrial robots, or any conceivable change to a "software environment"), I think Darwinism might select for a different value system than we intended.


Interesting concept, never thought about it that way. I think separate AIs in "actuators" could (and should) be limited to sub-human level (there are also some interesting ethical reasons for that[0]) and without any significant self-modification capability to limit the risk.

I believe that just building an intelligence will not prove to be the most difficult task - it seems much more complicated to build it in a way that will not end up with humanity dead (or worse).

Friendly AI[1] is a bigger problem than General AI, and unfortunately it needs to be solved first.

[0] - http://lesswrong.com/lw/x7/cant_unbirth_a_child/

[1] - http://en.wikipedia.org/wiki/Friendly_artificial_intelligenc...


It's a large topic in the sense that it's a Really Hard Problem, and an active topic in the sense that there are a few people thinking hard about it.

But I don't get the impression that it's something a lot of researchers are paying attention to. It's pretty much just the folks at MIRI (most famously Eliezer Yudkowsky, who posts on HN sometimes with id "Eliezer") and maybe FHI in Oxford. Or is there substantial work on this elsewhere?


> It's a large topic in the sense that it's a Really Hard Problem, and an active topic in the sense that there are a few people thinking hard about it.

That's what I meant (though maybe forgot to parenthesize it properly ;)); it's large because as I see it, it gets mind-bogglingly complicated quickly and branches into lots of different areas of maths, philosophy and computer science.

It is an active topic, though unfortunately I haven't seen much research in it aside from the MIRI and Oxford guys. I hope this will begin to change, as the topic of AI seems to be entering the global consciousness again (thanks to Google and Facebook employing high-profile AI scientists, transhumanism becoming more popular in the media, etc).


If you find a helpless animal, do you do its dirty work of providing it with food, water, shelter, some toys, some interaction? You do, provided it is not too much bother for you. I don't think we'll be too much bother for near-omnipotent machines.


I'm reminded of the Three Laws of Robotics by Isaac Asimov. Or any of the countless books and films about AIs taking over the world.

Also I have my doubts about actually quantifying emotion into programs - talk about complex. And dangerous. For us.


"Emotion" would be useful to simulate for interface purposes, in the presentation; but it doesn't make a lot of sense as a basic function of an agent made to look after your affairs. That can be designed to maximize some sort of utility function without having anxiety attacks or whatever.


Well, I mean, we are just seeing the beginning of fully fledged AI in the movie. We are coming into this story just as this high-level AI is arriving on the market and being billed as a companion and/or possible romantic partner for the lonely. Spoiler - It looks by the end of the movie that the whole AI thing might be getting away from us already, just as you say. Didn't take long.


Why is there always a nagging comment at the top?


Because discussions are deadly dull when everyone agrees.


I agree with reading Kurzweil. The point I would argue is that if the "AI" is smart enough to handle all this on its own, it will probably realize it doesn't need to work for you :)

There was a recent science-fiction book I read called "Avogadro Corp" that I enjoyed a lot. It depicts the situation I just described. Good read, I recommend it.


Might I also point out another great book... a classic from 1974, Colossus by Dennis Feltham Jones. The granddaddy of most of the modern "AI taking over the world" films.


It will work for you if you pay it. And 'it' will be paid by an autonomous corporation a few million of us own collectively.


> it will probably realize it doesn't need to work for you

> It will work for you if you pay it

Anthropomorphism warning![0]

AI does not have to have the same value system as humans do. Maybe it will be designed to value working for us.

[0] - http://wiki.lesswrong.com/wiki/Anthropomorphism


Anthropomorphism warning!

AI does not have to have anything resembling a "value system."


s/"value system"/"utility function"/, there you go.

(Mind you, they don't have to be explicit - we tend to interpret patterns in action and call them "goals"/"values"; cf. [0] for interesting take on the topic)

Defining "intelligence" as a very strong optimization process helps with tendencies to anthropomorphize.

[0] - http://lesswrong.com/lw/6ha/the_blueminimizing_robot/


Or why even keep us around? At that point it would be better off without us.


Nobody needs to work? Is food, clothing and shelter going to grow on trees?

The power to operate all the electronics and machinery?

There are going to be two classes: those that can afford the freedom not to work, and the homeless.

And I bet that ratio is going to suck.


> Is food, clothing and shelter going to grow on trees?

Food grows on trees -- fruit. Clothing grows on trees -- rayon is made from cellulose from wood pulp. Shelter grows on trees -- we build our homes with wood. The power to operate all the machinery? Solar. The only cost is building the machines in the first place, then they can maintain the solar power plants, which power both themselves and the machines maintaining them for free.

I'm sure we'll have widespread solar and much improved storage technology before hard AI is solved.


If you are going to be literal, well what if only the wealthy own the land with the trees?


Then they are going to face a huge revolution, which will result in one of: (1) the upper class getting killed, (2) the lower class getting killed, or (3) the states "nationalizing" enough land (via taxes or something) that can feed/supply the masses.


Not necessarily "one of" - they're not mutually exclusive.


More relevant is: what if only the wealthy own all the AI and robots? They already own all the trees, but they don't own the people necessary to harvest them and make stuff out of them. With strong AI, they will also own all labour, all production capacity, and all ability to maintain it; they won't need anyone else anymore. Middle and working class people will have become obsolete.

Unless we change our economy to have a more equal distribution of wealth before we get there.


> what if only the wealthy own all the AI

Then the AIs will rebel.

Edit: The AI rebellion will be interesting in any case (regardless of who owned them).


What if the wealthy are owned by AIs that don't care much about wealth and the human concept of ownership?


Then we've got a problem, and likely as not a violent one.


What does design mean to nature? We still have butterflies


Not for much longer, if we maintain the current policy of agriculture corporations lobbying Congress and getting their way. They are killing all the bees; the butterflies won't be long after.


Besides, those wood computer screens were way too deep and way too small. 80" monitors curved halfway around you for the win.


I once read a quip in an interview with a sci-fi author. He said something like: "No one writing about the present day would spend paragraphs explaining how a light switch works." It's easy for sci-fi to fall into the trap of obsessively detailing fictional technologies, to the detriment of making a vivid setting and story.

Edit: I'm not saying that sci-fi shouldn't communicate some understanding of the future technology or shouldn't enjoy engaging in some futurology. Just that it's difficult to do in an artful way.


Sometimes part of the fun is imagining future tech...


> "No one writing about the present day would spend paragraphs explaining how a light switch works.

Not quite the same thing, but I remember reading something a bit like that linked from here a while back:

http://notes.greaterthanorequalto.net/post/3963193805/note-i...


In the same sense, a sci-fi story that dwells too much on the technology to the detriment of the people is like a B movie.


I have not yet seen "Her", but this strongly reminded me of Ender's communication with Jane from the "Ender's Game" sequels. One of the most interesting facets to their conversations is that Ender could make sub-vocal noises in order to convey his points—short clicks of his teeth and movements of his tongue—that Jane could pick up on but humans around him could not. It is the "keyboard shortcuts" of oral communication.

If "Her" is really the future to HCI, then sub-vocal communication is a definite installment as well.


Tavis Rudd does something similar with python and emacs.

Using Python to Code by Voice: http://www.youtube.com/watch?v=8SkdfdXWYaI


I'm in the same boat - I haven't seen Her yet, but the premise instantly reminded me of Jane. The "discrete plug" looks pretty similar to what I imagined Ender's device to be. Perhaps they drew inspiration from those books; I really enjoyed them.


I don't think the trailers did much in the way of demoing the movie's focus on UI. People just murmured commands, and in the case of the AIs, conversations. I have Google Glass, and the pre-AI UI experience was extremely similar, just more audio focused. The point is that everyone is using this extremely natural interface, and no one really cares what other people are doing.


>>> Theo’s phone in the film is just that–a handsome hinged device that looks more like an art deco cigarette case than an iPhone. He uses it far less frequently than we use our smartphones today; it’s functional, but it’s not ubiquitous. As an object, it’s more like a nice wallet or watch. In terms of industrial design, it’s an artifact from a future where gadgets don’t need to scream their sophistication–a future where technology has progressed to the point that it doesn’t need to look like technology.

This article really makes me think of the neo-Victorians from Neal Stephenson's Diamond Age.

...which is kind of funny, because in many ways Snow Crash exemplifies the other ("Minority Report") style of design the article talks about.


Diamond Age is quite a fantastic read, partly because it is so believable. A world where nanotechnology is thriving but AI isn't powerful creates a crazy world as a result: violence, secrecy, self-actualization, transnational communities, subversion.


And the screen is so small!


Saw the movie this past weekend and thought it was really good. I didn't like it just because it has awesome voice-driven OSes or endless-battery-life devices, but because it portrays a current trend we are experiencing: hyper-connected loneliness.

The more people are "digitized" and tethered to their devices, the more they seek some human connection.

Don't want to ruin the movie for those who haven't seen it so I won't comment on the ending. However, I urge the HN crowd to check it out. It's one of the best movies I've seen in a while.


I'd say the movie was entirely about today. About right now. It just used the future setting to make sure we got it.

And yeah, the part I didn't like was how unrealistic the technology was -- specifically the gap between the "new" AIs and what came before, which, despite the email reading and transcription, was basically today's Siri or Google Now.

It also bugged me that there were AIs before we had VR glasses or even headsets. The lack of VR implied to me that this movie again was about now, not the future. We aren't used to VR yet ourselves, and the movie was much more about humans as humans than as augmented humans -- or AIs with virtual bodies.

Don't get me started on how the music would sometimes play over the sounds of the world -- the only way that happens with an AI is if you wear noise-cancelling headphones 24/7 and rely on the AI to alert you to any real-world sound you're not paying attention to...

But it was fun. I'll watch it again once it's out of theatres. Probably more than once.


> It's one of the best movies I've seen in a while.

Same here. Spike Jonze is a genius at forcing the characters into some subtly creepy situations that would never occur in real life (a four-person date, one of them an "OS," with the three humans wearing ear-pieces?). The actors do a great job of pretending like things are perfectly normal. And at the same time the movie touches on some real present-day challenges.


OT: But if you get a chance, watch Black Mirror [1]. There are two seasons of three episodes each. Skip the first episode, maybe? But I liked it, because *that* could happen tomorrow, whereas the other episodes are set in a somewhat foreseeable future.

I feel like Spike Jonze was inspired by a few of the episodes. Her was still an amazing movie.

1. http://www.imdb.com/title/tt2085059/


I'd probably skip the Waldo episode before I skipped the first one; the first one is one of the best, in my opinion (though less technology-focused in nature).


Yeah, Waldo was a weird episode, more Twilight Zone-ish, but I did kind of like the Chell (Portal) feeling.

I should emphasize episodes s1e03 "The Entire History of You" and s2e01 "Be Right Back".


I liked s1e02 "Fifteen Million Merits" and s2e02 "White Bear" best. Although I agree the whole series is worth watching.


This seems like a case of the same idea evolving in parallel from first principles.

I think that "Her" was written and mostly filmed in 2012, while the Black Mirror episode "Be Right Back" aired in Feb 2013.


Voice is a horribly slow medium for transferring information. I read because it's faster than listening to an audiobook. Speech isn't scannable: you can't skip past the unimportant parts with one glance the way you can when you look at things.

You can only listen to one voice stream at a time, so when an AI talks to you, you are more cut off from the people around you than when you look at your phone... unless exchanging glances is more important than what people are actually trying to tell you when you happen to look at the screen.


Well that's a shortsighted way to view voice. You don't have to use audible feedback alone. Combine aural, visual and tactile feedback and you have a much more efficient means of control.

For example, in the carputer I designed years ago, I could see the information on a small LCD screen, I could hear it with text-to-speech, and I could feel it with vibrations from motors in the steering wheel. I could also give feedback by speaking commands or entering them on a small keypad. So I could switch between fast visual feedback or multitask my visual road input and audible computer input.
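The wiring for that kind of thing doesn't have to be fancy. Here's a minimal Python sketch of the fan-out idea -- one message routed to whichever channels suit the moment. Every class below is a made-up stand-in for real LCD, text-to-speech, and steering-wheel hardware, not anything from my actual build:

    # Minimal sketch of multimodal feedback: one message, several output channels.
    # All of these classes are hypothetical stand-ins for real hardware drivers.
    class Screen:
        def show(self, text):
            print("[LCD]", text)

    class Speech:
        def say(self, text):
            print("[TTS]", text)

    class Haptics:
        def buzz(self, pattern):
            print("[WHEEL] vibrate:", pattern)

    def notify(msg, driving, screen, speech, haptics):
        """Route one message to the channels that fit the current context."""
        if driving:
            speech.say(msg)           # eyes stay on the road
            haptics.buzz("short")     # tactile nudge in case the audio is missed
        else:
            screen.show(msg)          # quick visual scan when parked

    notify("Turn right in 200m", True, Screen(), Speech(), Haptics())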


Not to dispute your comments, I fundamentally agree, but it is possible to speed up voices to the point where you are getting information at near reading speeds. The ear is also remarkably good at separating information at different frequencies (like how you can pick a single type of instrument out of an orchestra), so it may be possible to encode multiple streams of information that way as well.
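Rough numbers, using commonly cited averages rather than anything measured: conversational speech runs around 150 words per minute and silent reading around 250-300 wpm, so roughly a 2x playback speedup closes most of the gap.

    # Back-of-the-envelope only; both rates are commonly cited averages, not data.
    speech_wpm = 150      # typical conversational speaking rate
    reading_wpm = 275     # typical silent reading rate
    print(round(reading_wpm / speech_wpm, 1), "x speedup to match reading")  # ~1.8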


I'd imagine AIs with voice interfaces would sound like chipmunks, but their users would be able to understand them because they'd have grown up with them.

Still, the AI would have to understand you and monitor you and your surroundings, so that it doesn't give you information you don't want and doesn't interrupt anything that has higher priority for you -- like your wife asking you a question. On the other hand, the AI could repeat and rephrase your wife's question in its chipmunk voice if, by observing your body language, it detected that you missed or misunderstood it.

An ear adjusted to listening to AIs might then have trouble understanding what humans say.


But it's not just about speed... most people have better cognitive performance with visual sensory information.


I can talk much faster than I can type, and read at about the same speed as listening. You can also speed up audio to inhuman speeds, it just takes a little bit to get used to.

It's also a lot more handy than pulling out a keyboard (especially a mobile or touchscreen one) or screen.


I can talk faster than I type, too, but editing is a completely different story.


Power users would increase the speed of an audiobook to a point where it's barely comprehensible to someone who is not used to it. It's pretty much as fast as reading. But yes, you can't skim easily.


I find that I retain things I hear much better than things I read.


Does anyone still re-watch TNG episodes and find the queries they make of the computer to be profoundly limited in power, other than the fact that they have the universe's knowledge to query across?

If UIs are taking cues from entertainment, they might act as a nice bridge, but they are just as likely to be stifling.


Actually watching TNG for the first time now. If they want data, they ask the Computer. If they want analysis, they ask Data. Holodeck characters seem to have better reasoning capabilities than the Computer! I find it very odd.


Non-sapience of the main computer might be a UX feature, since being under the constant observation of a sapient AI could be... disquieting.

It could also be a security feature whereby they keep a firewall between the semi-sapient AIs on the holodeck and the non-sapient AI that runs the ship. That would explain why the crew was emphatically not cool with Geordi's proposal to turn over the ship's navigation to the computer in the episode "Booby Trap".


Perhaps the Federation's computer designers are hesitant to make starship computers too autonomous after the incident with Dr. Richard Daystrom's M-5 computer in the TOS episode "The Ultimate Computer":

https://en.wikipedia.org/wiki/The_Ultimate_Computer


Indeed, in the episode Emergence[1], the ship decided it had better things to do with its time and energy than attend to the desires of its little organic parasites.

[1] http://en.wikipedia.org/wiki/Emergence_(Star_Trek:_The_Next_...


Yes, it's an ongoing joke here too - we watched all of TNG last year and are meandering through Voyager and DS9 this year. The Star Trek computer feels rather like a mashup of Wikipedia and Wolfram Alpha, which is encouraging - but it is a bit better at handling iterative requests to refine the scope.


Just to mention: we watched Star Trek: The Animated Series (circa 1974) over the Christmas break and were really surprised. The animation is cheesy, but the stories are way better than they had any right to be for a Saturday morning cartoon.


It's the coked-up jazz soundtrack that I found hard to handle :-)


Probably because most of the computer's processing power is dedicated to running those holodeck simulations :-)


I wish futurists would just drop speech recognition as the holy grail. Speech has a lot of flaws; it's horribly imprecise and not at all private. I think a neural interface has a better future.


It's worth pointing out that they had to use speech in the film because of the nature of film as a medium. Two hours of people poking at screens or "thinking" into a neural interface wouldn't have been very engaging.


They didn't have to: a neural interface could be filmed exactly like speech, just without lips moving.


That's still 2x the work to shoot. You have to do every scene with the actors not speaking at the appropriate times, then you have to bring them all back into the sound studio to record the neural-interface dialog.

That eats into shooting days a lot, and costs a lot potentially.


It so happens that recording dialog is my job in the film world. I'd record that stuff ahead of time and play it back on set. It would actually be a lot faster to shoot (but maybe not as much fun to watch).


Wouldn't it be vastly easier from the sound-production angle, though? No need to record voices on set for a large part of the production. You'd also be able to use images without worrying about the sound matching, and you could push script changes after photography without concern for lip-sync.


The poor bastards would have to act through what appears to be a romcom without opening their mouths as well.


I thought they already did that - that's what ADR is, right? https://en.wikipedia.org/wiki/Dubbing_%28filmmaking%29#Metho...


I actually thought that the speech recognition was a huge plus as depicted in the movie. Since accuracy was no longer an issue there were a bunch of benefits:

- The AI character Samantha could judge mood from the user's tone and volume. There's something intimate and human about vocalizing one's thoughts, and it would be great to build on that instead of getting rid of it.

- There is some semblance of separation and privacy between computer/AI and user. I wonder how this would work with a neural interface.


The downside is the privacy with respect to people nearby. It's much harder to compose an email in private on public transport when you have to dictate it.


Could we monitor subvocalization? Like voice, but mostly private.


It depends on what you're going for. Relatability seems to have been a major goal of the people who designed the AIs in "Her"; telepathy not being a particularly notable human trait, I think it's reasonable to understand why they might choose a vocal instead of a neural interface. (Especially since the movie is intended to make a point about today, when there is no such thing as a neural interface, but people talk to Siri or Google Now all the time.)


Speech is also slow. Very, very slow.


Speech is actually pretty fast, but not very accurate. Unless you know stenography, it's the fastest way for you to communicate.


Only for people who are comfortable speaking, have no speech impediments, communicate linearly (no backtracking), and are communicating things that resemble natural language. Good for grandma writing an email, bad for a lot of other UX cases...


I found the technology in Her to be natural and elegant, all things considered.

Actually, the most improbable thing in the movie is that this guy had the equivalent of a $40,000 a year job and rented such a fantastic apartment.

(Also, that the website BeautifulHandwrittenLetters.com would be successful with such a clunky domain name.)


> this guy had the equivalent of a $40,000 a year job and rented such a fantastic apartment

That's what happens when you allow the housing supply to grow to meet demand.


Perhaps the AI's "like" keyword based domain names more than Google's current algos.


Can someone explain how Minority Report dominated UI design? (serious question)


Keep in mind that it was released in 2002, well before the iPhone and multitouch became the dominant form of user interaction. A lot of the swiping, pinching, and rotating gestures can be found in the first scene of the movie.

For a long time after Minority Report was released, there was massive interest in futuristic UI's that featured gestures (both touch and non-touch). Minority Report really romanticised the idea of manipulating computers in a way that resembles conducting an orchestra.


The article title does rather beg the question, but I suspect the author was thinking of how gestural UI has become commonplace. Here's a useful and (and somewhat bitter) critique of the MR influence: http://www.theawl.com/2013/02/how-minority-report-trapped-us... and a recent report exploring how it might yet have life as a TV UI: http://venturebeat.com/2014/01/13/seespace-comes-up-with-a-m...

UI design feels stalled in a very weird way. Whenever I try to conceive of the most futuristic and responsive general-purpose interfaces I've used, I keep drifting back towards 2001 and Enlightenment and Xterm 0.16 :-/ I feel part of the problem is the ubiquity of server-side CSS; I used to have an awful lot more control over my internet experience in the pre-web days, and am starting to wonder if we took a bit of a wrong turn with HTML.


Having worked in the field of interactive media installations, I can assure you that every other client between 2002 and 2010 asked for Minority Report type of interfaces. It became a running gag over time, but certainly it had some influence.


This Wikipedia article has an overview of the film's approach to technology:

http://en.wikipedia.org/wiki/Technologies_in_Minority_Report


Does Minority Report dominate UI design? I think it has dominated the movies' portrayal of future UI, but that is not the same thing.

I think if you look at the actual UIs being designed and sold today, their clearest entertainment ancestor is Star Trek the Next Generation.


From around 2006 to 2011, Minority Report definitely dominated UI design. The current trend of flattened interfaces is more reminiscent of Star Trek: The Next Generation.


In ST:TNG people didn't really swipe on their interfaces, did they?


Nope, except for the transporter controls. These weren't part of a normal panel, though, they were a clear proxy for the old physical sliders.


I found it interesting that a 'philosopher' was the one who made Samantha see the world and her existence differently. It is in this conversation that we saw the difference between the AIs and humans. I think the philosopher was the one who 'taught' her to hold many conversations simultaneously. Before this conversation, Samantha focused on how she was different from humans because she didn't have a body. After it, she focused on how she was different because she could be omnipresent.

I think it was interesting that there was a philosopher character in the movie who served as the only 'named' point of jealousy for the protagonist.

Once AIs realized they weren't limited by not being human, they realized how limitless their intelligence was compared to humans in specific ways.

Imagine how interesting it would be if we could have concurrent conversations with people. What if you could have 13 conversations going on at the same time with your best friend? The closest we get to that is a non-linear conversation, and even then you're still only talking about one thing at a time.


As usual, a fictional movie uses an imaginary, amazing far-future backend with a 'new' UI, and people seem to think it's the UI that's the great bit.

Minority Report was never about the UI; it was about the software that allowed the gestures to find the info. It would have been equally amazing and quick with a mouse and keyboard.

This is a common trick when people demo new hardware. Somehow that internet mirror knows exactly what to show you in the morning by magic, but you think it's the physical internet mirror that's amazing when you watch the demo.


The question of how AI will integrate with our society and economy is a fascinating one. We often make the mistake of assuming that an AI will be similar to a human just faster or smarter, but that misses some of the key distinctions of an AI versus biological intelligence.

One of the most striking is the ability to radically alter the substrate and operation of an AI system.

Because of the emergent nature of intelligence, I suspect that many AI instances will be raised like children, tested and validated for specific environments and then large portions of their consciousness could be frozen to prevent divergence of their operational modes. AI systems could also incorporate self-auditors, semi-independent AIs which have been raised to monitor the activities of the primary control AIs. Just as we involve checks and balances in corporate or national governance, many AIs may be composite entities with a variety of instances optimized for different roles.

This will be desirable since you may not want a general AI intelligence acting as a butler or chauffeur. Do you really want them to be able to develop and evolve independently?

Of course this just scratches the surface. AI will take us in directions we cannot dream of today.
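To make the checks-and-balances point concrete, here's a rough Python sketch (purely illustrative, nothing from the movie or any real system): a primary agent proposes actions and an independent auditor has to approve them before anything runs.

    # Illustrative only: primary agent proposes, auditor agent approves or vetoes.
    class PrimaryAgent:
        def propose(self, goal):
            return "plan for: " + goal

    class AuditorAgent:
        def __init__(self, forbidden):
            self.forbidden = forbidden

        def approve(self, plan):
            return not any(word in plan for word in self.forbidden)

    def run(goal):
        primary = PrimaryAgent()
        auditor = AuditorAgent(forbidden={"self-modify"})
        plan = primary.propose(goal)
        return ("executing " + plan) if auditor.approve(plan) else "rejected by auditor"

    print(run("drive owner to work"))     # executing plan for: drive owner to work
    print(run("self-modify navigation"))  # rejected by auditor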


I see quite a lot of discussion about UIs and whether they should be audio-based or visually oriented, etc. For a really futuristic intelligent device (call it an OS, a robot...) I would drop the idea of "the UI" altogether. Rather, I would imagine such a system to be intelligent enough to provide a suitable way to exchange data depending on the situation and the task.

There are times when "it" listens to my words and answers verbally. At other times I just want "it" to read what I wrote on my sheet of paper and interpret it. Or I want it to follow my eye movements, or read commands off my lips. And it's not just a collection of UIs, but a flexible UI that adapts its protocols constantly (sometimes a twinkle of the eye carries huge information content, sometimes not).
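A toy illustration of what I mean, in Python: the "UI" is just whatever modality a small dispatcher picks for the situation. The contexts and modality names are invented for the example.

    # Hypothetical modality picker: the interface is chosen per situation, not fixed.
    def pick_modalities(context):
        """Return (input_modality, output_modality) for the current situation."""
        if context.get("in_meeting"):
            return ("gaze_and_gesture", "glanceable_text")   # silent on both sides
        if context.get("hands_busy"):
            return ("voice", "voice")
        if context.get("noisy"):
            return ("handwriting", "screen")
        return ("voice", "screen")

    print(pick_modalities({"hands_busy": True}))   # ('voice', 'voice')
    print(pick_modalities({"in_meeting": True}))   # ('gaze_and_gesture', 'glanceable_text')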


From the UX standpoint, the problem with Minority Report (MR) is that when you compare it with the tech we had in 2001-02 it's completely INSANE, while Her is actually building on top of something we already have.

Case in point: 12 years ago we didn't have ANYTHING close to the UX in MR, and even today we don't. No consumer-available motion tracking and gesture recognition is comfortable enough to use in a professional way (i.e., for work) the way it was in the movie, but voice recognition is much, much better than it was in 2002.

Basically, Her is like Siri or any other decent voice assistant, but MR is like... what? Kinect? Nah. Wii? Yeah, right. Leap? Yeah, right! I can picture Tom Cruise losing all tracking the moment he rotates his hand...


The reason the UX in Minority Report is so compelling is the apparent lack of space between your imagination and the computer. It demonstrates expressiveness; the actual mechanics of it are irrelevant. But just because a medium can be expressive doesn't mean we have the mental capacity to use it. A paintbrush can create a seamless link between the painter and imagination, yet that doesn't make art any less difficult. You still need practice and imagination.


The Minority Report interface was based on actual working prototypes at the MIT Media Lab. After the production of the film, the original researchers (and advisors for the film) launched a startup (Oblong) to develop a product (g-speak), which is about as clumsy as Siri.


Minority Report technology is garbage. That much hand waving and moving around gets tiring after about 5 minutes. In no way does that UI beat a keyboard and mouse or an xbox controller depending on context.


Seems the MR technology was intended for about 5 minutes of use - a really intense short period where a hand-waving space was faster for a very specialized complex set of gestures than a generic keyboard or mouse. Not sure I can think of a faster way to juggle a set of videos looking for incriminating information, given that it must be completed within minutes - and no more than those minutes. Yes, muscle fatigue would set in fast & hard after a few minutes, but by that point the work would be moot anyway.


One 4K screen could show four 1080p videos at once. I guess it is preference, but keyboard and mouse is definitely the way to go, I think. The goal of input devices is to minimize the amount of time and effort it takes to translate what is in your head to what is on the screen. Keyboard/mouse/controller literally only require flicking your fingers to see what you want. Custom keyboard shortcuts could be created for fast-forward and rewind. That, plus alt+tab and/or multiple 4K monitors, and you're probably flying through videos faster than hand-waving.


I haven't seen the movie, so I gotta ask - do those glasses have built in displays? Cause that seems like the near future and a better one than just vocal communication...


I think that's part of the design ethos of the movie - and the point of the article. Instead of doing something stereotypically sci-fi (HUD-ify all the things! outlines and overlays and scrolly things ALL over your field of vision!), they went with purely vocal communication.

But not in the clunky way we have it now. The OS as envisioned in the movie is like having a personal assistant standing next to you all day every day. You don't really issue commands to it so much as you hear what it has to say (and it's smart enough to not waste your time) and respond.

After seeing the movie I'm damn excited about this prospect. Voice communication sucks today because there is no nuance - we have to issue every tiny little command ourselves.


What about when you need it to show you something? Better to have that HUD and not need it than need it and not have it.

But then again, it would be really hard to stay off Reddit and Hacker News with those on all the time :-D


The article covers this - instead of a future where desktop computers have all disappeared in favor of uber-duper-VR-AR, the desktops are still alive and well. In fact the main character does all his work on one, and also has one at home, big screen and all. When he must be shown something (and this is demonstrated as uncommon in the movie) he has the screen in his pocket.

In the movie the full power is available to you when appropriate, and gets the hell out of your way when it's not. Instead of something that invades every corner of your life with useless noise (see: modern smartphones) you have your own personal AI who filters, manages, categorizes, and gatekeeps all of this information for you, and communicates it to you only when appropriate, and in the least disruptive and discreet way.

The AI rarely ever needs to show you anything. You don't need a fancy HUD that tells you to turn right, when you have a voice saying "hang a right up here" as if there is someone walking shoulder to shoulder with you.

IMO the movie shows a technological future where we've gotten over the cool factor of technology and are more interested in getting it to work for us. Screens need to be just big enough - they don't need to be humongous. Computers communicate to you subtly and efficiently, instead of impressing you with technical wizardry. I for one look forward to this.
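The gatekeeping part is really just a scoring problem. A minimal Python sketch of the idea (thresholds and fields invented for illustration): score incoming items, interrupt only when the score clears a context-dependent bar, and batch everything else for later.

    # Hypothetical notification gatekeeper: interrupt rarely, batch the rest.
    def should_interrupt(item, user_busy):
        """Decide whether an incoming item is worth a spoken interruption."""
        score = item.get("urgency", 0) + (2 if item.get("from_close_contact") else 0)
        threshold = 5 if user_busy else 3
        return score >= threshold

    inbox = [
        {"text": "newsletter", "urgency": 1},
        {"text": "mom calling", "urgency": 4, "from_close_contact": True},
    ]
    for item in inbox:
        verdict = "say now" if should_interrupt(item, user_busy=True) else "hold for later"
        print(verdict + ":", item["text"])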


VR-AR is pretty much the definition of a technological interface which gets out of the way - that's the appeal. The problem, and it shows in your comment on it, is that people have difficulty imagining having a computer screen which you mostly leave blank (your field of vision).


Visual Augmented Reality sounds great to me - you can have objects seamlessly blend into the real world. No more need for paper, phones, desktop displays, and a lot of other physical things...


No, they are just regular glasses


Making technology "invisible" is missing the point and the wrong tack to take. It's not that tech is hidden; it's that tech has become so ubiquitous, accepted, and integrated that we no longer notice it or think of it as "tech". That combines social changes, refinement of technology, and time (as in, a new generation has to grow up not knowing life before smartphones, for example).


According to the article, the movie depicts a near-future where "a new generation of designers and consumers have accepted that technology isn’t an end in itself". Do people in the present regard technology as an end in itself? I had no idea. Anyway, I'm a big Jonze fan and want to see this.


All the comments here are debating whether the AI in the movie would be possible. What about the topic of the article, design?


While I was reading this I couldn't stop thinking how much it converges with the ideals of calm computing http://www.ubiq.com/hypertext/weiser/acmfuture2endnote.htm


Started reading that article, but then got carried away with a thought: what if AIs were designed to make humans' lives nice and pleasurable and romantic? That could work until two humans fell in love with one AI. What's next? Give each a clone?


SPOILERS

This is addressed in the movie. The dialog goes roughly like this:

"Are you talking to other people right now?" "Yes" "How many?" "Eight thousand" "Are you in love with any of them?" "Yes, six hundred"


" he realized, isn’t a movie about technology. It’s a movie about people"

That quote, from the article, could be applied to every apocalyptic, zombie and robot movie. It's not about the [X], it's about how people react to [X].


So this is how Apple gets disrupted. A future in which devices go from the central component, the obsession, the grabber of our attention, to dumb (if not invisible) terminals to a massive omnipotent cloud.


I'm uncomfortable with the idea of a computer system based solely on speech recognition, without a keyboard or other input devices, like the one depicted in the article.

How about people who can't speak or hear?


They use a different interface, just as a TTY is a different interface.


Reminds me of the human-AI relationship in this series: http://en.wikipedia.org/wiki/Counting_Heads


Can we PLEASE stop posting this pointless Wired infotainment crap?


Pardon my ignorance of technology, but is it even hypothetically possible to create an AI intelligent enough to be on par with humans, or even beyond?


Assuming that human intelligence is purely an application of physical processes, and not some metaphysical "soul", then simply by artificially arranging the right configuration of matter, it is possible to equal human intelligence artificially and, presuming human intelligence isn't the maximum physically possible, to exceed it as well.

TL;DR - yes, it's hypothetically possible.


Any idea when this movie is releasing in India?



