However, the biggest idea in the show was the relationship between god (Anthony Hopkins), the self (the robots), and consciousness as an extension of inner conversation.
I had to look up the idea:
That was almost certainly one of your spiritual guides talking to you. Don't give that treasure up; you don't need to be "religious" to continue it. You just need to accept your spiritual dimension (which we all have), which is usually much easier for us when we're kids. Once we grow up, we develop our ego further, which acts like a barrier between ourselves and our spiritual dimension (i.e. our guides). Btw, a substance like LSD or psilocybin can break down the ego temporarily and thus open you up further to experience your (natural) "link" to the "other side". This is the reason why a lot of people report experiences of a spiritual nature when they consume such substances (in the correct set and setting). A daily meditation practice is also a very good idea to keep it going.
Don't forget to read to the end...this isn't a 'spiritual' thing.
Still, it lives on. Not many incorrect ideas do. Perhaps it's because it is a more concise and elegant answer to several unresolved questions (why did self-awareness arise? why did religions change at that particular time?) for which we don't have any other good answers.
It's a brilliant idea that sadly happens to be wrong.
Wait, what? Sounds cool and I'd love a source on that.
Basically, until about 3,000 years ago, it was common for people to hear the gods talk. Somehow that changed, and it became far more rare, with only a few (prophets and such) claiming to actually hear the gods' voices.
The bicameral theory is that there are still such people today, schizophrenics, and it is one half of their brain talking to the other - but they perceive it as an external agency. And that somehow this was the common state of humanity in the past, but changed. We became self-aware - we recognized the internal voice as part of ourselves.
This does actually fit many historical facts well. Still, it is likely wrong for other reasons.
To me it seemed like they barely touched on this. I had high hopes for the show in that we'd get some interesting perspectives on what a world with AI might be like, the moral and ethical issues associated with sentient-seeming machines, etc. Some kind of "Humans"/"Äkta människor" meets "Black Mirror". But that never materialized. To make matters worse the slipshod storytelling and lackluster character development became a distraction such that it made it difficult to take the AI aspects all that seriously.
>> To me it seemed like they barely touched on this.
You may want to finish all of the first season then. It is the major theme.
I did, and I stand by my point. It felt like something they tacked on to the last episode or two as opposed to being woven throughout the entire season.
Was the eponymous theme park not enough?
If they were capable of that level of AI, they would have reached exponential growth shortly after; to suggest someone would use it to build theme parks is downright absurd.
Like you said - AI is a plot device in that show, just another form of "magic" that lets them ramble on about consciousness while pretending to be sciency.
Fun fact: PoI basically predicted Snowden and his revelations - the episode with NSA whistleblower aired before Snowden did his thing.
I tried PoI myself and couldn't get through it, because it was just so dull -- run-of-the-mill low budget procedural with that glossy network-TV feeling of Hollywood reality to it (e.g., everyone in the show looks like a model, except eggheads or villains, where they're allowed to cast someone unattractive).
Several people have told me it gets much better, but I just don't want to have to wade through all the mediocre episodes.
No idea how good the list is or how well it works.
What's the current primary use for machine learning? Determining consumer behavior? Guessing at which ads to display?
The character played by Hopkins tried his hardest to keep the technology within the park. That seems pretty odd at first, but his reasons for doing so are stated explicitly in the final episode.
Many people have tried to get the technology out, but they all seem to have failed so far.
I don't doubt that that would be a very real consideration, a pressure out there in the world that anyone developing AI would have to contend with.
I think the problem is with treating this like an inevitability. You are always going to deal with the idiosyncrasies of the world: personalities, motivations, contingent world circumstances.
And beyond that, I think we have to keep in mind that this is, after all, fiction. The writers may arbitrarily structure their world in a manner that puts a spotlight on things they feel are interesting. And, within the limits of that spotlight, they may consider things like artificial intelligence in a way that is sincere to the subset of futurist considerations they are interested in investigating. And perhaps that involves suppressing plausible forces we would expect to see, such as military and corporate interest in advanced AI tech.
Can't imagine the character played by A. Hopkins would even pick up a phone call from the military.
edit: In fact, half the time tech moves in the opposite direction. Going onto the internet to argue that potentially civilization-changing technology won't be used largely for entertainment, because it'll end up exclusively owned by the government, shows a profound lack of awareness.
We can say that the vast majority of living beings on Earth do not seem to seek any form of radical self-improvement beyond ordinary developmental learning and mastery of survival skills. There is no intrinsic reason that there must be an impulse to exceed oneself. Why would AI be different?
AIs, like humans, need a purpose. The best way of achieving that purpose is self-improvement, so any AI that does not self-improve to some extent will be replaced by another that will. Just like humans who don't care to self-improve eventually don't reproduce enough to spread their genes.
An alien AI, on the other hand, would most likely be incomprehensible to us, at least until we understood the aliens that created it.
It's almost as though it's science fiction...
Then there's Sci-Fi that's actually believable, like The Martian. I'm OK with this as well.
But then there's the pseudo-sciency kind that tries to look serious while actually being BS, like Interstellar, Arrival, etc. The main point of the science there is not to be a fictional setting but to give a feeling of realism through technobabble. I don't even mind cheap sci-fi drama movies; for example, I liked The Fountain. But when they try to pretend they are realistic, that's what breaks the immersion for me completely, because I start to critically evaluate the plot and it just falls apart.
I think you entirely missed the point of the films you mentioned.
Well, someone did the credits warp manually, at least: https://www.youtube.com/watch?v=HxFh1CJOrTU
The video in the article shows basically shellcode injection, but that's not timing-sensitive; it'd just take longer for a human. And, as seen above, similar things are possible for humans, just less convenient.
Wow, that just gave me an idea for a story. Humans found out they are mere pawns in a simulated reality and discovered a 'hack' to alter reality, but the 'hack' would take hundreds of years to complete. So generations of humans toiled away at completing the 'hack', passing the baton from generation to generation.
There could be so many possible storylines from this: corruption/destruction of reality, a dictator wanting to change the past, a cult hell-bent on changing reality, sorcerers practicing 'magic', or a lone protagonist who's on the verge of completing the hack after hundreds of years but suffers from ethical conflict and an existential crisis.
At the risk of sounding somewhat stupid: shouldn't this contain "Swans are white" for it to be a correct answer?
If instead we use abductive inference, we might seek the simplest and most likely explanation given our universe of observations. Sherlock Holmes was a big fan of abduction!
Much of real-world reasoning is abductive to a greater or lesser extent. There is a well-known joke about some motley band of engineers, logicians, mathematicians, statisticians, etc etc catching a train through the Highlands. They see a black sheep, the engineer says "look, all sheep in Scotland are black!", the statistician says "no, you can't say that – just that MOST sheep in Scotland are black", another says "no, we can only say that at least ONE sheep is black", another says "no, it's only black on at least one side", then the one you're stuck next to at the party says "you're all wrong, we can only say that at least one sheep in Scotland is black on at least one side at least some of the time". The last statement is fully deductive; the rest of them are abductive, and more-or-less useful.
As a gauge for how far we are from AI you can consider what sort of modeling capacity is required until an AI can ask, when presented with such a sequence: "What country is the swan from?" or, even more impressively: "Do you know where this took place and what country the swan's parents were from?" For the first question it would then abduce a color. Same for the second but perhaps it could include probabilities based on estimated number of each color and the genetics of swan color.
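A toy sketch of that "abduce a color from context" step in Python. The per-country color frequencies below are made up purely for illustration; a real system would estimate them from observation and, as suggested above, from the genetics of swan color:

    # Hypothetical P(color | country) tables -- illustrative numbers only,
    # not real swan statistics.
    COLOR_GIVEN_COUNTRY = {
        "Australia": {"black": 0.95, "white": 0.05},
        "England":   {"white": 0.99, "black": 0.01},
    }

    def abduce_color(country):
        """Pick the most plausible color given the country: the simplest
        explanation relative to our current universe of observations."""
        dist = COLOR_GIVEN_COUNTRY.get(country)
        if dist is None:
            # No usable context: ask a question instead of guessing blindly.
            return "What country is the swan from?"
        color = max(dist, key=dist.get)
        return f"{color} (p={dist[color]:.2f})"

    print(abduce_color("Australia"))  # black (p=0.95)
    print(abduce_color(None))         # What country is the swan from?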
This post is a rotation meant to provide a better sense of scale for the problem at hand.
Certainly! Synthesis rather than reformatting (or, more commonly, regurgitation). Analysis and abduction are more than just "put it in your own words". More useful too.
There is something of a rush on at the moment to generate chat-bots to replace FAQs. Every Slack/Fleep/Blern/Crank channel appears to have five or six memoisation bots. Seems to be largely a solved problem!
When we can start having bots that can be sensibly interrogated for a summary (or even a "hey, you've been away for several hours: here's the key points"), we can finally abandon the chatrooms and let the generative bots flood them with abductive content, and the precis bots can then ping you every couple of weeks when something important comes up.
"Am I in the United States around the first part of the 21st century?"
"Oh, how unfortunate - now I have to ask another question or you may think I'm not sentient."
$( <MM> <PROOF_ASST> THEOREM=whiteswans LOC_AFTER=
* Assume it is provable that ( l e. S /\ l e. W ) implies for all l ( l e. S /\ l e. W ), and assume that g e. S . Then it is provable that if ( l e. S /\ l e. W ) then g e. W .
h1::whiteswans.1 |- ( ( l e. S /\ l e. W ) -> A. l ( l e. S -> l e. W ) )
h2::whiteswans.2 |- g e. S
3:1:bnj1361 |- ( ( l e. S /\ l e. W ) -> S C_ W )
5:3:sseld |- ( ( l e. S /\ l e. W ) -> ( g e. S -> g e. W ) )
qed:2,5:mpi |- ( ( l e. S /\ l e. W ) -> g e. W )
$= ( cv wcel wa bnj1361 sseld mpi ) DGZAHMCHIZBGZAHOCHFNACONDACEJKL $.
$d S l
$d W l
But you're probably right, the answer would be 'white', at least until a black swan comes along and utterly fucks with her worldview. Humans prefer certainties and binaries, and eschew uncertainties, probabilities, and multiplicities. So they employ all sorts of cognitive errors to avoid these things. This is a problem, because the universe rarely comes in binaries or delivers enough information for real certainty. I would hope that machine consciousness would avoid these errors, as I think they are the foundations of some of our nastier tendencies.
I wonder how general that is. I'd like to believe it's more of a mindset thing. I've definitely seen people reasoning this way, but I also know some who handle uncertainty pretty well. I'd like to include myself in the second group; personally, I'm actually suspicious of anything that sounds binary in the real world, because it means I'm being fed some artificial boundaries.
It would be OK to deduce that the expected answer is white or something like that (taking human unreasonableness into account).
Lilly = Swan, Swan = White, Bernhard = Green, Greg = Swan.
Color of Swan or Greg = White
It's likely that it's white, but there is no way to know for sure.
Australian black swans are black, but their chicks are light grey :) Lilly could be a chick while Greg could be an adult Australian swan.
Then there's the coupling of the whole robotics and AI thing. In Asimov's stories they first built the robots, which got smarter and smarter, but this is not how reality worked. Robots are rather specialized, and most AI/automation we have today is about data, which is virtual.
They're all like: let's build a robot, then make it intelligent, but it should also move like a human, oh, and why not replace its internals with life-like organs?
It's like they ran through all of science and engineering in about 30 years of development and pretend it's mostly the work of one mastermind, which is ridiculous. The whole "technical" side of Westworld is trash.
It's more a philosophical story than a technical or psychological one, with a few deus ex machina moments to get the ball rolling.
I suspect this isn't an accident, given (some of) the same writers/producers are developing the Foundation series for HBO. There are MANY Asimovian themes scattered throughout the series, and even a few things I'm reasonably certain are direct references. (Ex: "Someday".)
> It's more a philosophical story than technical or psychological
And that's why I enjoyed it. My favorite science fiction always fits that description, especially if the big philosophical question comes around for technical reasons. (Such as in "The Cold Equations.") That, and the first season was very much a complete story. They could stop here and I'd be happy with it.
The series had its flaws, but I'm very optimistic if this is the shape of TV scifi to come. Even if it's all adaptations or of a derivative form.
But the rest feels a bit... meh.
The characters don't have any depth.
Bernard and Ford had the most, but the rest?
The Maeve storyline was utter crap, and the people around it were basically imbeciles, Maeve included.
William had something going for him, and when everything came together I was blown away, but only for a moment, because some of the storytelling puzzles were solved, not because he is a good character; his development is simply implausible.
Teddy is just... empty?
Dolores was okay, but since the whole story was about her uncovering her past, and with it her personality, it took until the end of the season for her to get some depth.
The scifi aspect is also minimal and basically a huge plothole, it's more fantasy to me.
The tech I can deal with, because they don't even try to explain it, and there was a nice moment where Maeve goes bonkers when confronted by the fact that she doesn't really have free will.
But my biggest gripe was that park security was a joke. It has a kind of Star Wars stupidity - "There's a problem down on the planet that could be hostile? Let's send the captain, first officer and chief medical officer". There's a problem in the park, so they send the head of security alone who conveniently can't get a signal back to base. And then there are the personnel who are wearing armour so ineffectual that they all drop like flies with a single bullet. You'd think that they might design weapons that were biometrically (or at least RFID-tagged to be realistic) linked to real people so they couldn't be stolen.
It seems like the entire park is manned by about 100 people. And until the final twist was revealed, I did wonder how the hell any of the chronology actually made sense - as in the starting town scene was being reset so often that it would be an insane clean-up job every night. Not to mention that there were parallel storylines where characters that had dependent stories seemed to be apart during resets e.g. Dolores and young William, Teddy and old William were being aired at the same time.
Asimov's Three Laws:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These do answer many of OP's points about goal-oriented behaviour.
I think that if the show spent time covering how the tech came to be, it would just dilute the philosophical concentration of the series, which, in my opinion, was extremely stimulating and appropriately delivered by television's standards. No, I do not believe that Mr. Ford developed sentient beings with the help of just one other person, but who's to say he didn't just fork some open-source framework (in his backstory, of course) and spend some ridiculous amount of inherited wealth to make it all come true?
I had my share of disappointment as the show progressed, but it definitely added something to the mix that I was not expecting.
That's not right is it? At least, not logical induction?
Supposing the second premise were 'Lily is female'; the answer to 'What gender is Greg' should obviously not be 'female'.
Inductive reasoning (induction) guesses the general case from particular instances.
I presume that you are thinking of deductive reasoning (deduction), which derives the particular instance from the general case.
Finally, abductive reasoning is usually the goal instead of simple induction, and it is technically what neural networks do: computing the simplest general case that best explains the particular instances they are trained on.
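A minimal sketch in Python of the induce-then-deduce loop on the swan facts, assuming (purely for illustration) that these statements are our entire universe of observations:

    # The whole universe of observations for this toy example.
    facts = {
        "Lily":     {"species": "swan", "color": "white"},
        "Bernhard": {"color": "green"},        # species unknown
        "Greg":     {"species": "swan"},       # color unknown
    }

    # Induction (particular -> general): every swan whose color we know
    # is white, so hypothesize the rule "swans are white".
    def induce_color(facts, species="swan"):
        colors = {f["color"] for f in facts.values()
                  if f.get("species") == species and "color" in f}
        return colors.pop() if len(colors) == 1 else None

    # Deduction (general -> particular): apply the hypothesized rule to Greg.
    rule = induce_color(facts)                       # "white"
    greg = rule if facts["Greg"].get("species") == "swan" else None
    print(greg)  # white -- probable given the rule, but the rule itself
                 # is only an inductive guess, never a certainty

The interesting part is that the deductive step is only as good as the inductively guessed rule feeding it; a single black swan flips the hypothesis.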
Using their flawed logic: since Lily and Greg are both swans, Bernhard must be a swan too, so Greg could be white or green.
You're right that the answer isn't correct, though not for the reason I gave. The test was a butchered version of the black swan problem. It shows that you can't generalize from prior examples. Even if you've seen a million swans, and they were all white, you haven't proven that all swans are white.
I haven't seen the show, so I don't know if they were intentionally going for that, or they screwed it up and honestly thought it was a proper example of inductive reasoning.
But induction is more like learning about the world by observing it, and making probable guesses. So it may be correct for a machine to observe that if one swan is white, the best guess for the color of another swan (in the absence of other info) is white.
I'm not familiar with that definition of induction, but I haven't studied ML at all.
For that reason, I find it difficult to have a completely unbiased opinion on our current state of progress.
But... while we seem to be making great progress, it seems like we're a long way off understanding how the human mind works.
AlphaGo was an amazing achievement, but I think it's unlikely that the human mind tackles Go in the same way.
It's obviously possible that there are multiple routes to a general human level intelligence. But I think it's still unclear if the way AI is currently being developed is one of them.
However, I think that the first AI humanity manages to build will be more or less a copy of a human mind, and only later will we learn how to construct minds "from scratch". Akin to how a beginning programmer will often scrape together bits from various sources to build their first program, and only later can produce original work.
This is a basic result in computer science: universal Turing machines can simulate any other Turing machine. If you accept strong AI, then you already accept the likelihood that the brain is reproducible via a Turing machine.
This is almost certainly true, which is what makes AlphaGo interesting to watch and study. The human mind, even one that has trained on Go for years on end, will still work with abstractions and ideas that do not relate to the game. AlphaGo and other computers lack this attribute, as any and all abstractions they may have learned relate entirely to the game.
Any ideas about the "human perception" of Go they may have gleaned from games in the initial training dataset have, I suspect, long been supplanted by novel notions gathered during the phase where the neural nets played against themselves. These phases are documented in the AlphaGo blog from DeepMind.
I suspect that we may reach "human level intelligence", but that this intelligence will not arise in the same way. That is to say, computers will at some point match us in most tests of intelligence, but the solutions they devise will be completely novel.
My initial reasoning was that the park doesn't get continually reset every day, nor does a reset necessarily happen overnight. It gets reset when a storyline finishes or a catastrophe happens in one location, e.g. the shootout in the starting town. It would make sense for guests to be given allocated windows for entry (note how the park is clearly not teeming with players; we meet perhaps 10 extras over the course of the entire series). Then that 'game' gets run, people play through the stories, and then a new cohort begins. This might take, for example, a week.
We see the transition from the church in the Maze town being totally covered in sand to being unearthed again. That clearly isn't an overnight (or even a week's) job.
Recall William had to get permission to launch an incendiary attack at the prison. It's possible that the control centre would deny the use of dynamite to a player if it was used in a location that would be frequently re-used.
Something else you pretty much just have to accept is that through all sorts of mayhem, the humans stay safe (well, until the end at least). There's some rather inconsistent hand-waving around guns and bullets, but one has to believe there are still ample opportunities for serious injury in some of the scenes we see.
[SPOILERS]Dolores kills a human outright by aiming at their head mid-season. Then it isn't mentioned again.[/SPOILERS]
I always assumed something in the suit lining made the guns faux-fire, and the suit responded by exploding a pocket of air or something. But then you can see The Guy In The Black Suit (I've forgotten his name) load his revolver by manually inserting bullets, so I have no idea how that'd work. It's probably one of the only things that bothered me about the series.
It doesn't especially bother me though. We know that TV and movies generally have a convention that trauma that would put someone in the ICU in the real world is brushed off as a flesh wound. And I'll accept technobabble about the bullets. I'm also happy to accept that maybe Westworld takes place in a culture where theme park risks on par with base jumping are considered fine and proper.
Edit: popular holiday destinations like Mexico are reasonably dangerous, maybe Westworld is on the other side of the border ;)
Especially since a lot of hosts "work" at night. :)
Yes, the hosts are busy, but it's definitely possible they could be recalled to a location for pickup (or whatever) when they're free.
Unless we achieve the ability to essentially run a MITM attack on the brain, to intercept commands from the brain and to provide it with sensory input, VR won't come close to being something genuinely deserving the name virtual reality.
There are scenarios where you are more constrained in real life (e.g. sitting in a vehicle of some sort) where, g-forces aside, vision, sound, and some fairly basic force feedback can probably get you to a fairly decent simulation. But I agree that anything involving running around and physically interacting with a 3D world is a lot more challenging.
Currently you can't even really walk around a room without being incredibly restricted in terms of room layout and furniture, and let's not forget the headset with cables attached.
Last but not least, even this hypothetical virtual reality is still just that: virtual. Simply the knowledge that something is merely virtual will always be somewhat of a disappointment, the same way a copy, no matter how perfect, is not quite as appealing as the original.
The concept of Westworld has an authenticity to it that virtual reality can never hope to reach. As humans, we are just weird that way.
But we're all critics, so YMMV.
It seems that sometimes a lower production budget yields a better story, forcing the show to rely instead on the viewer's imagination and interest in the subject. Especially if it's done with aplomb and intelligence.
I found the dialogue in Humans to be too "explanatory." I haven't seen a full episode of Westworld yet, but I imagine the same is true. Big budgets tend to do that.
It's probably due in part to it being an original with even higher production values, so you get actually good writers. Maybe also a higher-brow target audience.
Humans seems smarter about similar subject matter, but Westworld is just so much fun.
Fortunately there is no conflict and I can watch both!
Season 1 focuses on the next step, which is consciousness and choice, and the two critical theories used to explain their emergence in the show are bicameralism and memory. It doesn't dwell too much on the how and focuses more on the consequences, which is a fascinating journey.
The writers also appear to have used a variation on recursive plot, which is nice to see.
Thought experiment/short story that goes into this in depth: https://gist.github.com/deanmarano/142df7a8a824ab05fc777d8e0...
The crux of the story hinges on the magical spontaneous development of general intelligence, so it's pretty unconvincing as a specific plausible scenario IMO. But the general idea, that an AI may take unethical/unprecedented actions to maximize a harmless goal, is a good one.
What color is Greg? Answer: white
How the hell is that logical? Greg may be white, he may be black, or he may be any other color. You could guess Greg is likely white. The correct answer is "can't tell". But I'd expect an intelligent responder to reply with questions and complaints of unanswerability, as I have done here.
The cited example is taken from the bAbI papers, and is a case of inductive reasoning:
Inductive reasoning (as opposed to deductive reasoning or abductive reasoning) is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument is probable, based upon the evidence given.
In reality this is often the best you can do, as all you have to work on in the end is the input from your own sensors.
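A toy way to put a number on "probable, not certain" is Laplace's rule of succession: after seeing n swans, all of them white, P(next swan is white) is estimated as (n+1)/(n+2), which approaches but never reaches 1. A minimal sketch (the uniform prior it assumes is an illustrative simplification, not a claim about real swans):

    # Laplace's rule of succession: after observing k white swans out of
    # n total, estimate P(next swan is white) = (k + 1) / (n + 2).
    def p_next_white(white_seen, total_seen):
        return (white_seen + 1) / (total_seen + 2)

    for n in (1, 10, 1_000_000):
        print(n, p_next_white(n, n))
    # 1        0.666...
    # 10       0.916...
    # 1000000  0.999999...  -- very probable, still not proven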
We've been doing this with regard to various animals since forever. Every time a chimp learns sign language, an elephant cares about its mother, or a dolphin has sex for fun, everyone loses their minds falling all over themselves to "prove" that we are qualitatively different. That there's something else going on with us that is special.
I'm not passing judgment here. I do it too. It's extremely convenient for me to do so. If you start thinking of intelligence as a spectrum with some species closer to one end of it than others, it gets a lot harder to justify most of what we do to animals. And there's a dark place I don't want to go that suggests that some people are far enough down the spectrum that maybe you could justify doing bad things to them.
It gets really messy and ugly in both directions when you think about intelligence as a spectrum. At what point should an animal be considered smart enough to merit "human" rights? At what point should a human be considered so dumb that they don't?
We, as a society, are not ready to have that conversation. We lack the moral fortitude to do so, which is why I happily participate in this artificial segregation.
But we are going to be forced into dealing with it much sooner than we are ready to. I fully expect that within my lifetime, there will be Bladerunner scenarios with lifelike robots who are practically indistinguishable from actual people.
We live in a bubble here, where all the women are beautiful, all the men are above average, and all the children are FBI agents.
Human memory is far more corrupt and fallacious than we want to think it is. Weird social interactions with slightly (or very) dysfunctional people are far more common than we tend to think they are. Spend a day on the subway in NYC. Spend a day in my hometown in Texas (population: 498).
Many of these people could easily be simulated with a high degree of believability. The real hypocrisy here is that no one wants to believe that you, as an individual, could be simulated with believability. I'll go on the record and say that it would be trivial to simulate me. I'm not that special.
The problem we have with AI tests is that we are testing an AI's ability to be anyone. We're checking to see if an AI could be as good at impersonating absolutely anyone as one of the top .00001% of human character actors.
We aren't checking the lower bounds. Because that's extremely uncomfortable for us. We're maintaining goals and standards that are designed to make the tests fail.
Again, we are doing it for good reasons. We haven't yet solved the problem of how to treat each other when we know that we're only dealing with humans. We aren't ready to talk about bringing other entities into our world yet.
Bladerunner was prescient; Westworld is near future. We need to get our shit together because these issues are going to come up far sooner than we expect. And when we're talking about an entity that speaks to us in our own language, with our own idioms, with our own concepts of feelings and emotions--it's going to be a lot harder to maintain the pretense that we are somehow qualitatively different.
On the other hand, this could be really convenient for us. A moment of solidarity, if you will. We could create believable robot characters that we unite against and focus all our hate, violence, racism, and abuse towards. Maybe we all get along better after that.
But what does that say about us? And have we really solved our problems? I think that's the question Westworld is asking, similarly to the question some open-world games ask, like EVE Online: in a universe where everything is permissible, who are you?