Here's the DARPA robot manipulation challenge, 2012. This is pathetic. Especially since DARPA has been funding universities in this area since the 1960s. There's a classic video of robotic assembly at Stanford SAIL in the 1960s I can't find right now. It looks very similar, except that the video quality is worse.
The state of the art in autonomous mobile robots for unstructured environments is terrible. The state of the industry is worse. Willow Garage went bust. Google bought up some of the players, ran them into the ground, and dumped them. Schaft, the Tokyo University spinoff they bought, found no buyers at the selloff. (They had nice hardware, too.) Boston Dynamics is still around, feeding off of Softbank now, after feeding off Google and DARPA, but still has no products for sale after 30 years. The USMC rejected their Legged Squad Support System. The performance level at the DARPA Humanoid Challenge was very poor.
Even robot vacuum cleaners aren't very good. You'd think they'd be doing offices and stores late at night by now, but they're not. The Roomba, which has the intelligence of an ant (it's from Rod Brooks, the insect AI guy) came out in 2002, and is only slightly smarter 17 years later.
Automatic driving is starting to work, after a few billion dollars was thrown at that problem. That, too, was harder than expected.
Drones, though. Drones are doing fine.
The real breakthrough in machine learning was the discovery that it could be used to target advertising. That doesn't have to work very well to be useful.
It's easy to test. 80% success is fine. Now there's money behind that field.
Embodied AI is really hard to work on, and very expensive. It's easier than it used to be; you can buy decent robot hardware off the shelf, and don't spend your time worrying about gear backlash and motor controllers. But it's still way harder to test than something that runs in a web server.
The payoff is low. Robots in unstructured situations do the jobs of cheap people, and the robots are usually slower. After many decades of many smart people beating their head against the wall in this area, there's been some progress, but not much.
That's why this isn't happening yet.
However, being able to mooch off of technology being developed to serve the ad-supported industries that use AI does help.
It is quite surprising that it took so long, if you look at the achievements of the German military in the 1980s and 90s.
Sort of like slide rules vs Matlab instead of human-with-broom vs a Roomba.
Humans have an amazing number of sensors built into their manipulators in all directions and an enormous amount of neurological resource dedicated to it.
Until mechanical manipulators have the sensor density of even the back of a finger, it's not really going to get anywhere.
Virtual embodiment has become quite popular. See things like OpenAI gym or DeepMind Lab etc.
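For those who haven't used them, the core idea of these virtual environments is a reset/step interaction loop between agent and world. Here's a self-contained toy sketch in that spirit (the GridWorld class and its method names are mine for illustration, not Gym's or DeepMind Lab's actual API):

```python
import random

class GridWorld:
    """A 1-D world: the agent senses its position and acts on it,
    mirroring the reset/step loop these toolkits popularized."""

    def __init__(self, size=5):
        self.size = size
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos                      # initial observation

    def step(self, action):                  # action: -1 (left) or +1 (right)
        self.pos = max(0, min(self.size - 1, self.pos + action))
        done = self.pos == self.size - 1     # goal is the rightmost cell
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

env = GridWorld()
obs, done, steps = env.reset(), False, 0
while not done and steps < 100:              # a random policy standing in for a learner
    obs, reward, done = env.step(random.choice((-1, 1)))
    steps += 1
```

The appeal is obvious: the whole perception-action loop runs at whatever speed your CPU allows, with none of the broken gearboxes.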
Anyway, and this is more of a general comment than a reply to the above comment specifically, this idea is not new, and I hope that people will realize that the field of AGI exists and study some of the existing research. Maybe take a look at the sidebar and intro info at reddit.com/r/agi
Roombas are very good at what they do. The real limitation is what you want the robot to accomplish and how much it costs. There are surprisingly few home chores worth spending significant money on a robot for, versus just having a cheap maid service.
In professional settings, you can generally just make it a structured environment.
I'm not sure you've ever owned a Roomba. In theory they work great. In practice, there's always something on the floor they get tangled in, there's that one couch they're just small enough to fit under but not escape from, or there's that one corner of death in your room they inevitably get into and become trapped in. And sometimes, even when everything is absolutely perfect, one of the sensors decides it's stuck and so the thing just backs up in circles indefinitely in an otherwise-ideal empty room.
I've owned two Roombas and both were somehow more work than just sweeping or vacuuming.
Provided you want them to clean a surface that's mostly empty of obstructions. Wires are the bane of their existence, but so are cloth and big pieces of paper that can't be ingested by the vacuum.
They are relatively OK for office settings, but the models without navigation would take until the heat death of the universe to clean big open-floor offices properly.
Primarily, because we have no idea what intelligence is, or how it works, why it exists even, etc etc. This goes for human-like intelligence, but also for any kind of intelligence. We just have no good scientific understanding of the subject. We have some vague models of it ("the brain is like a computer and the mind is like a program running on it") but nothing very precise and certainly nothing that can be reproduced on a digital computer, which is what "human-like AI" would be (i.e. human-like AI would be the reproduction of human intelligence on a digital computer).
Most likely, until we make some progress in understanding intelligence we will not be able to reproduce it. Except perhaps by chance.
I believe so many of us limit ourselves by even thinking that intelligence happens in the brain. By the title of this article I thought it might go into the vagus nerve, etc. There is so much "intelligence," processing, communication etc that happens away from the brain and independent of the brain in our bodies.
We have to remember many animals (which we evolved from and retain vestiges of) don't have "centralized" intelligence and yet are capable of great things. We also have to remember that the trillions of cells we are made of each have their own nucleus as well.
Whatever it is, intelligence is ambient.
I wouldn't make so strong a claim in light of what philosophy has to say about intelligence. Relevant here also are certain metaphysical presuppositions made by fields like neuroscience that preclude that understanding. Furthermore, even knowledge of something does not necessarily entail the ability to reproduce it.
And while I'll agree that this lack doesn't necessarily preclude the ability to reproduce intelligence, it does make it darn hard to recognize it.
Well, since we're on a philosophical footing, I'll say that knowledge of what intelligence is does not entail the ability to reproduce it, but the ability to reproduce intelligence entails knowledge of what it is.
(A |= B means that whenever A is true, B is true, but not necessarily vice versa.)
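For concreteness, that one-way relationship can be checked mechanically with a truth table (a toy encoding of my own, using Python functions as propositions):

```python
from itertools import product

# "A |= B": every truth assignment that makes A true also makes B true.
def entails(A, B, nvars=2):
    return all(B(*v) for v in product((False, True), repeat=nvars) if A(*v))

# (p and q) |= p holds in every model...
print(entails(lambda p, q: p and q, lambda p, q: p))
# ...but the converse p |= (p and q) fails (counterexample: p=True, q=False)
print(entails(lambda p, q: p, lambda p, q: p and q))
```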
You are right to say that knowledge of what intelligence is does not entail the ability to reproduce it, since for example there may be hard practical obstacles to realising the necessary technologies. But I would think the ability to reproduce intelligence entails knowledge of what it is, assuming intelligence is something very complex that we will not just spontaneously obtain by chance, for example by a lucky combination of various random elements. We have certainly been trying to reproduce intelligence on computers for a while now, and failed. So it doesn't seem to be something in the order of, say, coming up with a cure for some ailment by trying things that we believe "should work" and that work in a way we don't quite understand.
All of which, if true, implies that we will not reproduce intelligence until we understand it.
Arguably, the tangible limits experienced in our actual models of the world based on those presumptions are reflective of the validity of those assumptions.
Clearly, in the case of modeling 'intelligence' and 'mind', they aren't that great, indicating we really don't understand the phenomena.
I've made this argument repeatedly on this platform before, but I think the biggest mistake we've made in this field is the preeminence we give to what we experience as 'human' intelligence and to our mental models for how that 'must' work.
I'm going to catch all kinds of flack for this, but it's my biased perspective that plants are just as intelligent as animal life; because we use humans as a litmus test for what intelligence must 'look' like, we have a very difficult time enumerating what intelligence a non-human organism must have. I think intelligence relies on two properties: complexity and connectivity. Plants (vascular) are far more highly connected than animals, and at a cellular level they are as complex as, if not more complex than, animal life in cell structure and tissue type.
We just don't acknowledge intelligence when its considerations are so distant from our own concerns. I think a revised model where modes of intelligence are attributed starting at an individually cellular model, and build into more complex modes of computational complexity would really clear a lot of this up, and get different modes of intelligence into a singular framework.
Speaking as a machine learning and neuroscience researcher, how thoroughly have you investigated the matter to claim this? I would definitely not say that we have "no" understanding of the subject. We have a number of predominant paradigms and competing theories regarding the matter.
We do have some understanding, even if it is somewhat limited.
Anyway, that there are many competing theories about intelligence can mean one of three things: either they're all mostly wrong, or some of them are mostly right, or all of them are a little right. Which one do you think is the case, regarding the theories that you have in mind? And what are those?
Me being a dual trainee in both fields, well, I'm working on it.
But it seems (to me) that logic doesn't capture all of what we call intelligence, and that the effort to recapitulate the human mind solely on the basis of discrete logic machines is almost certainly going to fail. But I think that failure will be valuable, and that we should not avoid the attempt to make a sculpture of the mind.
Babbage had an automaton, the Silver Lady, that could imitate the dancing of a ballerina. We might be able to make "Silver Minds" that imitate the outward manifestations of the human personality, and so learn more about the nature and essence of thinking, as the Silver Lady must have taught about dancing. Certainly the craftsman who made her must have known a thing or two about ballet and physiology.
As with everything in nature, there are diminishing returns. It is not clear that any significant leaps in 'intelligence' are possible, at least to the extent proposed by 'singularity' advocates, or that such an intelligence would be able to completely outclass humans, either in our current form or augmented by tech.
There are enormous gaps in reasoning that you just have to gloss over to go from the current state of the world to a post-singularity one, that you are expected to accept, on faith.
I personally think it's about as likely as the return of the Messiah in our lifetimes. That is to say, it is not.
I agree with the thrust of your argument, but this itself is wrong. When the Europeans were figuring out complex gears, they thought that the brain is like a complex cog machine (that was when they thought the mind was something different, I think). When fluid dynamics became the craze, they started comparing the brain to a machine that manipulates some fluid.
The truth is that we don't even have a model for it.
On the other hand, I'm not sure the environment necessarily needs to be physical. Ages ago, I worked on reinforcement learning in a simulated environment, which can provide lots of advantages.
After 50+ years of AI research that hasn't scaled or meaningfully progressed on the fundamental capabilities needed by a synthetic mind, you'd think we'd agree more that simplifying reality into something easier to model is the wrong basis for creating AI that's more than a toy.
On the other hand, like with self-driving cars, for some purposes it makes sense to provide physical, real-life situations and objects, with all its chaos, unexpected and unpredictable events.
For "true" intelligence matching human expectations, I imagine an understanding of the physical environment and its complexity is key. Otherwise, it could only deal with abstract concepts, like pure mathematics, but missing the experience of concrete reality - to be able to relate to us.
Developmental psychology demonstrates that you get very serious functional deficits if you deprive a young developing organism of its normal environment.
Can one use computers to simulate an environment with such fidelity that another computer doesn't notice the simulation and optimize around its quantum quirks?
Nvidia seems to think so. They claimed (a couple GTCs ago) to use virtual driving simulators to train their autonomous vehicle systems.
At one level this has to be true. If it weren't, I could plop a black box on the table and say I've invented AGI, it just can't interact with anyone, and you would have no recourse but to accept my statement. We must necessarily define intelligence as the interaction the agent can perform with some environment; otherwise we'd have no way to know of its intelligence.
The "some sort of environment" part is an important distinction: even a system smart enough to derive linguistic translation on its own through "first contact" could suggest we generate free energy through floating-point error exploits and dissipate the excess heat the same way, because that worked perfectly well in its environment and it had no indication that it wouldn't be possible in the real world.
We'll see who gets there first, but I have a lot of sympathy for this approach. It's the one way we know intelligence got going in the first place. And given that too many degrees of freedom make coherent creativity difficult, it imposes some useful constraints.
Anyhow, I think those interested in this debate would enjoy that movie. It's 20+ years old, but the director, Errol Morris, is a stellar documentarian. And it's available to rent on the major platforms for a few bucks.
Since the 80s, he's generally been a proponent of the idea that you can't have human-like intelligence without placing that nascent intelligence in a human-like world, with human-like sensory perception.
Which more or less bears out our experience with deep learning. If you place intelligent algorithms in a world where their sole sensory inputs are matrices then what you get out doesn't look anything like human intelligence.
What I find somewhat ironic is this: the author mentions working with Stephen Hawking, an amazing man who produced incredible intellectual work and enriched our understanding of reality while being almost incapable of any physicality.
If we apply current scientific theories (physics, chemistry, biology, etc) to this cell network and physical machinery, we quickly find our way back to symbolic manipulation. What are cells if not computational nodes that exchange messages?
A much more reasonable hypothesis for what is missing is contained in the text:
"A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet[...]"
Maybe we just haven't reached the level of complexity needed for human-level AI. A hint that this might be the case is that the current excitement with ML seems to be fueled by algorithms that were mostly known by the 80s (sure, with lots of recent incremental improvements, but no new big idea). What made a difference was the computational power and datasets that became available in the 2010s. I suspect the next leap will be of a similar nature. "More is different".
Regarding the nature of complexity and the notion that "More is different", I am reminded of the emergent behavior of vivisystems as described in Kevin Kelly's book Out of Control, an insightful exploration of the emergent behavior expressed by complex self-sustaining systems. If you have not read Out of Control, you might want to put it in your reading queue. I found it highly engaging and thought-provoking.
Not from birth though. I wouldn't dismiss the impact of being able to interact with the world during childhood so easily.
> What are cells if not computational nodes that exchange messages?
We evolved in a context where we survived by exchanging messages with other "computational nodes". So it's very tempting to see everything through that lens. But I think it's a mistake to see cells as "really" just like us. As Box said, "All models are wrong, some models are useful." We shouldn't forget that cells are really cells, and we see them as analogous to familiar things because the world is too big to represent directly in three pounds of meat.
From my point of view, what matters is not physical interaction but rather physical perception.
May it be visual, auditive, tactile etc. it all contributes to build the "model of the world" you carry in your whole body.
I suppose Hawking could still largely perceive his environment.
Only for the later (even if major) part of his life. One could argue that the physicality he was capable of in the earlier stage of his life helped him gain a solid understanding of physics on our level.
But there is a better argument to be made: modern physics is an entirely different beast. To understand the underlying reality you have to be, in one sense of the word, detached from that reality. Being paralyzed can be said to be one of the many things that let him have experiences unlike those of almost any other human being. His earlier life gave him a solid footing in the dynamics of our level, and his later life allowed him to depart from it, in a direction in which he was propelled by his intellect.
I would argue that if Stephen Hawking was a sports instructor (or even a programmer for a sport software), his lack of physicality would have worked against him. But if you showed me someone with a severe physical disability who writes great code for sport software, I would revert to my first argument.
(I just explain what's in front of me! :P)
See for example Brooks's classic "Elephants don't play chess", or Steels's write-up on "The Artificial Life roots of Artificial Intelligence".
He made the argument that embodiment is an essential component of AI in his paper "Intelligence Without Reason" ( https://people.csail.mit.edu/brooks/papers/AIM-1293.pdf )
COG was his group's attempt to build a humanoid robot: http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/o... (see "Why not simulate it?") but he and his grad-student researchers devoted considerable effort to exploring the importance of embodiment, especially in humanoids: http://www.ai.mit.edu/projects/humanoid-robotics-group/index...
The Mobile Robots Lab built biologically inspired robots that were remarkably capable and able to respond to dynamic events in the real world (rather than just carefully controlled lab environments).
Uses cable drive for actuation. Interesting stuff if you're mechanically inclined.
Except a very large fraction of people don't think this way (e.g. those with aphantasia), and Helen Keller certainly didn't, yet seems to have been as smart as any of us. So obviously intelligence does not depend on having a huge breadth of sensory experience.
It's quite tiring how much posturing about what's ‘really’ missing from machine intelligence doesn't last past 5 seconds of basic fact checking.
Breadth also refers perhaps to the massively parallel sensor streams. The sense of touch is not one experience, but the amalgamation of millions of experiences -- one per nerve ending. So far the sensor platforms we develop artificially have on the order of 1000s of sensors, not the millions that distinguish the combination of temperature and pressure at certain "optimized" key points.
Rote and myopic counterarguments do not a futurism-oriented discussion make.
I don't think it's accurate to say that aphants don't have this type of sensory information at their disposal. I have aphantasia, but I still experience the world through my senses. I may not be able to visualize a cat in my mind's eye, but based on my prior experiences with cats, I know a cat when I see one. If I hear purring, I recognize that as a sound that cats I've encountered in the past have made, etc.
This kind of pattern matching is also fairly evidently not all that difficult, since much simpler brains than ours can manage it, as can ML models with caveats (albeit caveats often misunderstood and exaggerated).
Do tell me, since I'm writing a paper on a related topic: which current ML models can "pattern match" to recognize or generate multimodal (i.e. visual, auditory, and tactile) percepts of cats, in arbitrary poses, in any context where cats are usually/realistically found?
Or did you just mean that the "cat" subset of Imagenet is as "solved" as the rest of Imagenet?
We have this famous image showing progress over the last 5 years.
The latest generator in this list has very powerful latent spaces, including approximately accurate 3D rotations.
We have similarly impressive image segmentation and pose estimation results.
Since you mentioned it, note that models that utilize multimodal perception are possible. The following uses audio with video.
For sure, these are not showing off the full breadth of versatility that humans have. I can still reliably distinguish StyleGAN faces from real faces, and segmentation still has issues. These all have fairly prominent failure cases, can't refine their estimates with further analysis like humans can, and humans still learn much, much faster than these models.
However, note that (for example) StyleGAN has 26 million parameters, and with my standard approximate comparison of 1 bit : 1 synapse, that puts it probably somewhere around the size of a honey bee brain. Given such a model is already capturing sophisticated models fairly reliably using sophisticated variants of old techniques without need of a complete rethink, and the same cannot be said for (e.g.) high-level reasoning, where older strategies (e.g. frames) are pretty much completely discredited, “not all that difficult” seems like a pretty defensible stance.
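For what it's worth, here is the arithmetic behind that comparison. The ~10^9 synapse figure for a honey bee brain is a rough order-of-magnitude literature estimate, not a precise number, and float32 weights are an assumption:

```python
# Back-of-envelope check of the 1 bit : 1 synapse comparison.
stylegan_params = 26_000_000
bits_per_param = 32                      # assuming float32 weights
model_bits = stylegan_params * bits_per_param

bee_synapses = 1_000_000_000             # ~10^9, an order-of-magnitude figure

ratio = model_bits / bee_synapses
print(ratio)                             # ≈ 0.83: within a factor of two of a bee brain
```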
1. Enslaved robots, meaning they don't have to pay income tax or worry in the slightest about working conditions
2. Enslaved robots, meaning they can erase misbehaving or uncooperative individuals/instances
3. Enslaved robots, on which they can foist all of humanity's problems and demand solutions at pain of death (erasure)
4. Enslaved robots, with which they can convince/coerce everyone else into relinquishing all their rights/power/money.
Replace 'robots' with 'life' and it suddenly looks a lot more familiar.
I'd love to hear a cogent explanation to the contrary, e.g. from gdb. But I doubt we'll ever see one.
Someone brought up the question of whether there is a formal "programming language" for philosophy.
One of the difficulties with discussing AI is that we don't know what intelligence is because we don't know what consciousness is. These are problems that are heavily steeped in philosophy, and if we ever want to work with philosophical concepts digitally, we need a proper programming language to do it with.
Ideally, it would be nice to be able to write out philosophical concepts and social behaviors and moral stances in a form that could be used as a ML training set to try to integrate with AI/ML decision making.
And yet despite that it seems quite enticing as an idea. I remember being particularly struck by the concept that emotions consist of a closed feedback loop between the nervous system's control over and sensing of the body. Think about this next time you are at the dentist and I think you'll agree that it feels like it could explain a lot.
How much time do you spend in front of a screen? How much of your existence is mediated already? I just tried VR goggles the other day, and thank God they give you headaches because people are going to try to live in there if it's ever physically possible. (Reminds me of the guy I knew who lived IRL on my friend's couch and played Second Life all day. He had a great second life but no first life.)
One other thing about being embodied: you die.
And that’s where I also get a cheap sense of dread about strong AI—robots don’t digest. Along with all of the other things that differentiate human from robot, I believe the AI-apocalypse won’t be evil AI. It will be efficient, calculating and as foreign as space aliens. Boo!
> Q. What is the current 2019 state of this?
> A. Unknown. But read this new york times article (long)
> and/or this theverge article and it seems inevitable that
> human and their phone will merge. This video also reveals
> a lot.
In other words: we have no idea what exactly we're talking about, but there are articles everywhere.
Should it know pain or pleasure? Does it need to have a blush response to shame? Does it need vision or hearing? Sense of balance? Stomach pain?
You can see that humans that are born without sight or hearing still find ways to develop intelligence. Some people don't feel pain. Sociopaths don't feel shame. Yet, the brain manages. It's very hard to define what is the minimal set of functions we need to emulate for AI to emerge.
It doesn't make any sense to compare AI to this single modern human being who is already innately intelligent. It makes more sense to compare it to the whole of human/ape evolution, and maybe we're limiting ourselves too much by always looking at humans on an individual level.
I think we need to find a way to imitate human evolution, selective pressure/natural selection, and human limitations - in whatever form. Hell, give networks some form of reproduction, limited lifespans and limited communication. Throw in what the other commenter said about gaining knowledge and applying that to and manipulating a "real" world (I don't think it actually has to be real, it just needs to be an environment where changes can be made that have some logical/causal effect), learning from experience and whatever other feedback loops we seem to have. Maybe AI will evolve itself into being given the right environmental and personal constraints.
It should probably have rewards, positive and negative. I think reinforcement learning is the closest paradigm to AGI.
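As a concrete sketch of that reward-driven paradigm, here is minimal tabular Q-learning on an invented two-state world (the dynamics and constants are mine, purely for illustration, not any published system):

```python
import random

random.seed(0)

# Two states, two actions; action 1 always leads to the rewarding state 1.
def step(state, action):
    nxt = 1 if action == 1 else 0
    return nxt, (1.0 if nxt == 1 else 0.0)

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma = 0.5, 0.9                  # learning rate, discount factor

s = 0
for _ in range(500):
    a = random.choice((0, 1))            # explore randomly; Q-learning is off-policy
    s2, r = step(s, a)
    # standard Q-learning update toward the one-step bootstrapped target
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
    s = s2

print(Q[(0, 1)] > Q[(0, 0)])             # the rewarding action ends up preferred
```

The interesting part is that the positive and negative signal is the only supervision the agent ever gets; everything else is trial and error.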
Our bodies give an upper bound of five sensory inputs. With a little scrutiny we can reduce that even further, since we know that a portion of our population are born with fewer than five senses and exhibit comparable intelligence. Some people are born blind, deaf, mute, anosmic, or with ageusia. Others are even born with rare sensory deficits such as the inability to feel pain. Although I have not read any studies on the subject, I suspect that being born with none of the five core senses would have a serious negative impact on human intelligence.
There is more to the problem, though, than just the ability to sense our environment. I believe that for an agent to acquire human-level intelligence, it is also necessary to have the ability to explore and manipulate the environment in complex ways. It must be able to experiment by making observations, evaluating the outcome and thereby advancing its knowledge. Knowledge of course must be retained to be of any use, so it must have memory efficient enough to be practicable. In order for the experimentation to lead to higher levels of enlightenment, an intelligent agent must be able to take past knowledge and hypothesise yet-unobserved outcomes. This should serve as motivation for further experimentation.
Humans have this notion that a prerequisite to real intelligence is being able to express oneself with a language and thereby share your ideas with others. Communication with language seems to result in social beings, and it is widely believed that social beings do best if they have emotional intelligence, otherwise they will likely be outcast from society.
So, I think AI needs a body that at least allows it the following:
- Ability to move
- Ability to move objects with enough accuracy to assemble or disassemble complex structures
- The ability to know the physical properties of objects (maybe through one or more of our five senses, but not necessarily)
- Ability to retain knowledge
- Ability to hypothesise
- The ability to communicate with another agent to share information (helpful, but maybe not necessary)
I am not as convinced that emotional intelligence is required, so I left it out of my list. For example, consider that highly intelligent beings could be of a different nature than humans and form societies without emotions or politics. An excellent example is the Primes from Peter F. Hamilton's Pandora's Star, where the motiles are controlled by the commanding caste of immotiles.
Of course I am biased, since I am human, so I am looking at what is required to achieve intelligence as I know and understand it.
I don't believe AGI is ever going to happen, but if I did, I'd include that sense as one of the possibly fundamental ones.
Do you mean to say that you don't believe artificial general intelligence will happen for a specific reason, or that you hope that it will never happen for a specific reason? I am curious either way. Thanks for your thoughts.
Mind you - I would love to be proved wrong.
There's more: for example proprioception, which is the sense of how your body is positioned in space (it's how you know what your hands are doing without looking at them). How many senses humans have depends on how you define it, but it's probably not 5.
That is true, but considering only the well known 5 senses simplified the correspondence and seemed practicable. My thoughts were focused on reducing the number of sensors and physical capabilities of a known embodiment of real intelligence to say something about what might be a prerequisite for acquiring similar intelligence in a different body. However, I acknowledge that doing so might cause me to overlook something subtle, but necessary. You made a good point, thank you.
That’s either dismissing or handwaving a lot. Is my sense of balance part of sight, hearing, taste, touch, or smell?
What about proprioception? Am I touching the air behind me?
When I feel a surface and I know if it’s sharp knife, blunt knife, wet stone, dry wood, slippery or grippy, briefly static electrically charged, hot, cold, crumbly, greasy, delicate or solid, is that all just “touch”?
When I pick up an egg and know if it’s real or metal, slipping out of my fingers or held gently, at risk of cracking or just right, being properly supported or about to fall to one side, balanced or weighted inside, hard boiled or sloppy inside, is that all just “touch” one sense?
When I feel the thump of a heavy bass line in my stomach - touch?
Stomach ache - touch?
Headache, in a brain with no touch sensitivity?
Feeling radiated warmth on face - touch?
Tiredness, hunger, thirst, spine tingling, faintness, muscle ache, body feedbacks - touch?
Yes it is, but see my comments above. Thank you for your thoughts.
Human language developed as a way for humans to communicate with one another. I think you're severely underestimating the importance of emotional intelligence.
I don't mean to suggest that social intelligence didn't play an important role in the evolution of real general intelligence as we know it. (I define RGI as a GI that evolved through natural processes and without direct intervention of higher level GI). I wanted to keep an open mind about things though. My goal was to try and reduce factors that relate to intelligence on earth so that I might be able to say something about what factors are in fact a prerequisite to AGI.
Do you like sci-fi? Have you read Pandora's Star? If you like sci-fi but have not read said book, then I recommend putting it in your queue. You might find the Primes to be a believable vector to RGI. A vector that I believe challenges human bias about GI and what it takes to reach past a Class I civilization.
Love the discussion, thank you for your thoughts.
hypothesis => hypothesise
People placed into situations of total sensory deprivation very often see their conscious self dissolve into 'hallucinations' across all their senses.
Quadriplegics who acquire their paralysis from disease or accident suffer psychological and emotional changes which are more substantial than would be expected from the injuries alone (I was never clear on how exactly this was distinguished, so take it with a grain of salt).
I have never understood why people who talk of 'uploading their consciousness', or of just creating a human-like consciousness in a computer, would assume that the simulated consciousness would function markedly differently from, in the best case, a person experiencing profound sensory deprivation. Consciousness cannot be sustained if the feedback loop of body, perception, and environment is broken. Consciousness is an emergent property of a feedback loop. If there's no feedback loop, the property doesn't emerge.
Watching the Netflix 'More Human Than Human' documentary, one of the people featured commented, when comparing Siri and the AI in the movie Her, that systems only need to become 'a little more sophisticated' to reach that level. That is a big problem. It's not 'a little more sophisticated' at all. It requires emotions, and the vast majority of people do not even know what emotions are. I'll spoil it for you. Emotions are a trained response, the product of neurological feedback to the results of prediction operating primarily on internal perception. Often the movement from one topic to another in a conversation is not based on the subject matter of the text. It is based on shared cultural experiences garnered over a lifetime, and on similarities in the emotions evoked by certain things.
Even pursuing the goal of creating a 'truly intelligent machine' is dangerous, philosophically speaking. What happens when we create a bot that does antisocial things? We shut it down. We scrap the project. We saw that with Tay by Microsoft. This is dangerous. It is clear that once we DO produce a human-like intelligence... it will be better than us. It won't have any of the rough edges or not-safe-for-work parts. At everything we value about human beings, it will top us. We can look at history to see how humanity responds when something which was previously seen as 'fundamentally human' is taken out of our quiver. It is not pretty. The folk legend of John Henry shows that the man who is willing to kill himself to be better than a machine is not an idiot - he is a hero to motivate the masses. I don't think it is a stretch to imagine a future when humanity's worst qualities become what we see as virtue because AI 'can't' do them. A robot can't hate. It can't be violent, bigoted, angry, etc. When that is the only thing humanity has left that defines them as 'more' than the world... why should we be sure they won't come to see that as their virtue? We have already seen all of those things valued as virtue in our history when similar pressures weren't even at play.
Then there's the more unknown approach. A machine-based intelligence which is not given a body. That's really the big question mark. We can be confident, very confident, that it will not be recognizably human in any way, shape, or form. Most of our attributes as humans are derived directly or indirectly from the biological facts of our existence. An organism sharing none of those things will share none of those attributes. And it will be weird in surprising ways. I don't think there is any reasonable danger of such an intelligence "taking over the world" in any substantial way. All conflict is rooted in resource contention. And we have nothing that such an intelligence would want or could use. If it wants energy, the only real resource we would share a need for, it would be best off launching itself into space and sitting in orbit with a bunch of solar panels. We don't have any idea what a singular intelligence, one with no concept of "individual" because there is only one of them, would be like. One which has no inherent mortality. One which does not deal with disease. One which has no concept of family. One which has no sense of age. These are things which define us and make us human, and it will lack all of them. It would probably be a great challenge to convince such an intelligence that we existed, that we were real, that we could communicate with it, and get it to want to. It could simply conclude that it will wait for 10,000 years and hope we are extinct by then.
We could of course try to make a non-human type of intelligence. But it’s not at all clear what that would be, or whether it’s possible at all. The only intelligence we know for sure can exist is animal/human-style intelligence. And I don’t think there’s anyone who is actually trying to construct a non-human intelligence. To do that, you’d very likely have to set up a computer environment where programs evolve naturally and fight for computing resources over a long time. It could take anywhere from decades to millions of years.
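To make the "programs evolving and fighting for resources" idea concrete, here is a toy sketch of my own (an illustration, not anything the comment proposes as an actual design): a minimal evolutionary loop where random bit-string "programs" compete on a stand-in fitness measure, the losing half dies each generation, and survivors reproduce with mutation. All names and parameters here are invented for the example.

```python
import random

random.seed(0)  # deterministic run for illustration

GENOME_LEN = 16  # each "program" is a 16-bit genome
POP_SIZE = 20

def fitness(genome):
    # Stand-in objective: number of 1-bits, as a proxy for
    # "computing resources captured" by the program.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]            # the fit half lives
    population = survivors + [mutate(g) for g in survivors]  # mutated offspring

best = max(population, key=fitness)
print(fitness(best))
```

Because the survivors themselves are carried over unchanged (elitism), the best fitness never decreases, and after enough generations it converges toward the maximum. The real point of the analogy stands, though: everything depends on what the fitness function rewards.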
Where most of our efforts are going is toward creating augmented intelligence. We’re creating programs that expand on human intelligence. Interface with it. Cater to it.
If you imagine AI programs as individuals in an evolutionary environment, what is it that they’re competing for? They’re competing for which program can most satisfy humans. Those that satisfy us live. Those that don’t die. Just as our evolutionary environment creates drives for gathering resources, cooperating with others when possible, hurting/killing others when necessary... programs are almost exclusively driven to satisfy humans.
That’s why I think the “paperclip maximizer” example is so ridiculous. First it assumes that general intelligence is just some magical algorithm we haven’t discovered yet. Then it assumes that such an AI can make catastrophic decisions without complex motivations. Whether or not killing humans is more efficient for making paper clips is an undecidable problem. A human might kill someone to achieve their goal because that’s something our evolutionary environment has trained us for. We have motivations like pride, ego, anger, envy, etc. that override the problem of figuring out “is killing this human optimal for my goals?”
It’s far more likely that an AI catastrophe will be far harder to predict and much stranger. It could be that specialized (and relatively dumb in the general sense) AIs become so good at satisfying our desires that we become completely incapacitated. There are already signs of this.
Just to be clear, I’m not saying we shouldn’t be worried about AI. But the alarmists seem too focused on various imagined future scenarios, all of which are likely to be wrong. We should keep a very keen eye on the consequences of AI right now, in the present moment, and talk about problems that actually arise. Perhaps with a little bit of extrapolation, thinking about how present-day small problems could develop into bigger problems.