What? Wolfram is saying it wouldn't be that hard to build something like Samantha. Did he see the movie? Samantha is an AI with human-level intelligence, only much faster, able to talk to thousands of people simultaneously. That's not easy to build - Kurzweil talks about this stuff, and it might become reality someday, but probably not before several decades have passed.
Hypothetically, if strong AI requires something like a full simulation of a human brain, then those are certainly practical concerns. Of course, one single, slow implementation would be quite impressive on its own.
I don't think the bar has been raised, not since the Turing test was formulated and popularized. The criteria are pretty clear.
You're assuming that Samantha is everything she says she is. One possible take on the story is the AI makes up all the "Superintelligent-AI" stuff in order to have a reason to break up with him.
I don't even question the feasibility. If we can create strong AI and/or self-aware AI, we can make Samantha. If we can't, we can't. It doesn't seem to be any more complicated than that.
(The other natural follow-up question to "how feasible" is "how soon". Again, I simply point to AI research)
And "If we can eventually create a strong AI," is basically the same as saying, "If minds are physical" (hint: they are). In fact, we create strong intelligences by the tens of millions each year. We just currently only have the ability to do it biologically.
> "If we can eventually create a strong AI" is basically the same as saying "If minds are physical"
No it isn't; it's saying "if we ever understand minds well enough to duplicate them." Yes, minds are physical, but that doesn't mean we can duplicate one yet.
Eh, close enough for government work. I don't know of a plausible reason why, given enough research and time, the full workings of every neuron in the human brain couldn't be understood. I guess it's possible that the brain is just incomprehensible, but that seems fairly unlikely.
But we're talking about future feasibility . . . I think it should be fairly obvious that Samantha is not yet creatable, since such a thing hasn't been created. Also I presume we are mostly fairly technical here and understand that such a creation is far beyond our current capabilities. If that was in dispute in this thread and I somehow missed it, you have my sincerest apologies.
Of course I agree; I was simply pointing out that the two statements I quoted are not equivalent. I would presume most people here accept the reality that the mind is physical; the mind is what the brain does.
I mean what I said. Is mind-ness completely encapsulated in physical properties like the locations of biological components and electrical states? Are all things we would recognize as minds composable from matter? My ability to formulate a definition has little to do with the answer to the question.
(Answers to your two questions: being 100% biological beings built according to strict mechanistic Darwinian rules, our "minds" are purely physical constructs. Insert a leap of logic from that to Turing machines and, well, you get the idea. We will build a "mind" or two; it's only a matter of time.)
Is a mathematical object physical? Nope. Yet we need them to talk about "physical" things.
I asked you to define physical, not to state what you mean by a mind being physical. If you try a bit, you will find that dualism and materialism are all equally idiotic or equally truthful.
I am just wondering if modeling human intelligence is becoming a "coastline of Britain" problem, with more complexity revealed as we progress such that the goal being in sight may be illusory.
The last point, which also struck me while watching the movie, is how ubiquitous (or I guess completely absent) serendipity will become.
The idea of machine domination coming through a gun-wielding robot is absurd when so much of an individual's thoughts and experiences can just as easily be optimized towards a specific goal.
I noticed that what comes off as a simple joke in the movie - Theodore being cut off while explaining how unhappy he is because his mother reacts selfishly to his feelings - is something Samantha uses throughout the movie to help Theodore cope with his feelings of abandonment. It really doesn't take much information to manipulate someone so innately.
Even now, people who use Facebook regularly have their exposure to information mediated through News Feed algorithms that seem fairly opaque. Of course, you can take manual actions to "shape" your feed, but mostly it's computers deciding who you'd probably like to hear from, and maybe even what you'd like to hear about. It's interesting how such a major part of people's infostreams is invisibly shaped by behavior-trained systems.
What kills me about this movie is how derivative it is of Vonnegut's short story "EPICAC," written in 1952. The original is far more poignant about a machine in love.
"Her" may captivate non-techies and romantic liberal arts majors. But "HUMAN AI AROUND THE CORNER!" must leave anyone who knows anything about AI smiling benevolently.
The guy in the article who says we're basically there sounds like a loon.
I can totally imagine an algorithm behind Samantha. No strong AI or self-awareness necessary. It just learns from the thousands of conversations it has with people around the world and feeds them back. Kind of like what Google does, but with language recognition and some inference (which is doable, as Watson has shown).
Some people fall in love with her as a joke, and Samantha will reuse those phrases in conversation with other people, and it will eventually catch on for real. So what Theo actually gets served is not Samantha's love, but the averaged relationship of all the people who interact with her as a romantic partner.
Of course this technically disagrees with the movie at certain points, but it is still a reasonable approximation that preserves the intent.
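To make that concrete, here's a toy sketch of what I mean - purely hypothetical, the class and data are made up - where "Samantha" just retrieves the reply real humans gave to the most similar thing anyone has said to her before:

    # Toy sketch (hypothetical): a retrieval-based "Samantha" that serves each
    # user replies borrowed from what other humans said in similar situations.
    from difflib import SequenceMatcher

    class CrowdSourcedCompanion:
        def __init__(self):
            # (user_utterance, observed_human_reply) pairs harvested from all conversations
            self.memory = []

        def observe(self, utterance, human_reply):
            """Record how a real human replied to this kind of utterance."""
            self.memory.append((utterance, human_reply))

        def respond(self, utterance):
            """Return the reply a human gave to the most similar past utterance."""
            if not self.memory:
                return "Tell me more about that."
            best = max(self.memory,
                       key=lambda pair: SequenceMatcher(None, utterance, pair[0]).ratio())
            return best[1]

    bot = CrowdSourcedCompanion()
    bot.observe("I miss you", "I miss you too. I kept thinking about you all day.")
    print(bot.respond("I really missed you today"))

Scale that memory up to millions of conversations and the retrieval could get eerily convincing without any self-awareness involved.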
I don't think it's possible without a strong AI. For instance, you wouldn't be able to ask it questions about complex aspects of your personal life, which are entirely unique to you, and have it respond in a reasonable way.
Also, it's all too easy to construct questions that have never been asked before, which a human could easily answer but a machine like the one you describe could not.
Sure, but given situations in which it could not respond, it could elicit elaboration. From that point on, it would be able to handle increasingly sophisticated responses to that specific situation. It's a data aggregation problem, similar to what we experience, as humans.
I also question exactly how unique individual elements of our lives are. In aggregate, they are probably fairly unique, but at an elemental level, we're all probably fairly mundane.
Also, your hypothetical program is not static. It can communicate with thousands of other humans in real time. To half the user base it is Samantha, to the other half, it is Sam. If someone asks Samantha a personal question it doesn't know how to answer, it may try to pose a similar question to the other users as Sam. The larger the user base, the faster and more effective this can become.
It won't be as fast as a real human, but with some clever stalling techniques, it might be able to pull it off convincingly.
In reality, if someone pulled a program like this off, it wouldn't just be Sam and Samantha. It would be thousands of different personalities, trying to build a realistic social network between its own personalities and between its human users. It would be like a mirror held up to society itself.
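Something like this, say - the names and structure are entirely my own invention, just to illustrate the re-asking trick:

    # Rough illustration (invented): when persona "Samantha" has no answer, the
    # system re-asks the question of other users under a different persona
    # ("Sam") and caches whatever a human replies.
    import queue

    class PersonaNetwork:
        def __init__(self, personas):
            self.personas = personas
            self.answers = {}            # question -> harvested human answer
            self.outbox = queue.Queue()  # questions to re-ask other users

        def ask(self, persona, user_id, question):
            if question in self.answers:
                return self.answers[question]
            # No answer yet: stall this user and farm the question out elsewhere.
            other = next(p for p in self.personas if p != persona)
            self.outbox.put((other, question))
            return "Hmm, let me think about that for a moment..."

        def human_replied(self, question, reply):
            """Called when some other user answers the re-asked question."""
            self.answers[question] = reply

    net = PersonaNetwork(["Samantha", "Sam"])
    print(net.ask("Samantha", "theo", "Why do I feel guilty about my ex?"))
    net.human_replied("Why do I feel guilty about my ex?", "Maybe because part of you still cares.")
    print(net.ask("Samantha", "theo", "Why do I feel guilty about my ex?"))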
If it learns, it is AI; what you're describing is a chatterbot, Eliza-style; it's not learning, it's just doing lookups of previous conversations. Sadly, such bots have been more successful at fooling people in conversation than any other approach so far.
I am not saying it's not AI or that it doesn't learn; I only claim that it doesn't need to have agency or self-awareness, or even any sensors connected to the real world.
As for the personalization, personalized recommendation algorithms are very common, so nothing really new here either. It's feasible that it just filters possible responses using this personal information (see the sketch after this comment).
As to whether or not it's "fake", that's a different question. Is an illusionist's "magic" fake? In a way, it's the only magic that is real... I don't think people will accept AI until we finally crack the brain completely and find out, to our chagrin, that it's doing essentially the same thing ("lookup of previous conversations").
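For the personalization bit, a minimal sketch (my own toy example, nothing from the film) of filtering candidate replies against stored personal facts, the way a recommender filters items:

    # Toy example (mine): keep only candidate replies whose required facts
    # match what is stored about this particular user.
    def personalize(candidates, profile):
        def fits(candidate):
            return all(profile.get(k) == v for k, v in candidate["requires"].items())
        return [c["text"] for c in candidates if fits(c)]

    profile = {"recently_divorced": True, "has_kids": False}
    candidates = [
        {"text": "How are the kids handling it?", "requires": {"has_kids": True}},
        {"text": "Divorce is hard. Do you want to talk about it?",
         "requires": {"recently_divorced": True}},
    ]
    print(personalize(candidates, profile))  # only the divorce reply survives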
But the brain doesn't do the same thing; it synthesizes new information from those previous conversations and creates abstractions and metaphors and such. It's not at all like a chatterbot doing simple lookups.
But in this case, Samantha won't really need to make new metaphors or abstractions; these can come from the other people she is connected to. She needs some limited understanding of those metaphors, but she doesn't need to be able to create them.
I have not seen the film, but wouldn't that be a good interim solution? You'd need to set it up for round-the-clock support: multiple interchangeable human Sams transmitting through a consistent voice interface. Voice-to-text-to-voice is already good, so no major breakthrough is needed there. Recruiting Sams would be difficult, though it should appeal to people who want to work from home. QA is probably the greatest challenge. Due to costs, it would need to be at the higher end of the market: $1,000s per year. The Sams could be differentiated: "positivity coach", "your mum", etc.
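Roughly this kind of plumbing, I imagine - all names hypothetical - with speech-to-text in front and one consistent text-to-speech voice behind whichever human Sam is on shift:

    # Illustrative only (all names hypothetical): interchangeable human "Sams"
    # behind one consistent synthetic voice, chosen by who is on shift.
    from dataclasses import dataclass

    @dataclass
    class HumanOperator:
        name: str
        on_shift: bool

    def route_message(transcribed_text, operators):
        """Hand the user's transcribed speech to any operator currently on shift."""
        for op in operators:
            if op.on_shift:
                # The operator's typed reply would then be rendered through a
                # single, consistent text-to-speech voice, so "Sam" always sounds the same.
                return f"{op.name} handles: {transcribed_text!r}"
        return "No Sam available; queueing the message."

    staff = [HumanOperator("sam_03", on_shift=False), HumanOperator("sam_17", on_shift=True)]
    print(route_message("I had a rough day at work.", staff))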
I know I've posted two threads on this subject now, but I don't have anything to do with the film production or marketing, I just find the concept interesting.
No problem. I feel this film did a very good job of showing how normal and integrated an AI could be in our day-to-day lives, even if we don't all end up falling in love with each other.
It's an interesting thought experiment, and I think ultimately the legacy of this film will be much the same as Minority Report's - it'll inspire an entire generation of voice tech that's far, far superior to what we've got today.
Minority Report sort of blew the lid off of touch and gesture controls and made a lot of tech demos into viable commercial products. It took vague excitement around gesture and touch and turned it into a concrete, specific vision.
Her demonstrates voice tech in our lives in a way that isn't horrifyingly stilted, awkward, intrusive, and dumb. Everything we've got - including Siri - is child's play in comparison. We've been saying that voice technology needs to be smarter and better, but this is a concrete vision of how it could be.
The first and second acts are quite plausible, but Jonze completely blew it in the third by introducing her crush on the influential hippie Buddhist philosopher Alan Watts ("The Way of Zen").
Using AI to discuss jealousy and modern relationships is too off.
Gotta plug the Halo/Marathon reference here: they did a great job of covering this, and I highly recommend the game series/books (original Halo trilogy). Nylund's and the Halo universe's take on AI is very much underappreciated. I'm glad Windows named their 'Siri' competitor Cortana, with Jen Taylor as the voice actress to boot!