One interesting outcome of this type of AI is that no one knows what the robot's thinking, since no person designed its brain. The brain evolved, just like ours did (but over a much shorter period of real time).
“What created the only example of consciousness we know of?” Daniel asked.
“Exactly. But I don’t want to wait three billion years, so I need to make the selection process a great deal more refined, and the sources of variation more targeted.”
Julie digested this. “You want to try to evolve true AI? Conscious, human-level AI?”
“Yes.” Daniel saw her mouth tightening, saw her struggling to measure her words before speaking.
“With respect, I don’t think you’ve thought that through.”
“On the contrary,” Daniel assured her. “I’ve been planning this for twenty years.”
“Evolution,” she said, “is about failure and death. Do you have any idea how many sentient creatures lived and died along the way to Homo sapiens? How much suffering was involved?”
"...We could run a society of them at hothouse speeds without any risk of them going feral, and see what they produce"
I'm curious to see if there's a collection of similar works that go down this trope, or if it is indeed something that's only recently emerged.
Hopefully it's legal and accessible for you.
I have been looking for this short story for decades.
Probably my favourite living author, which perhaps says more about me than about him, but well worth the read (and re-read, in many cases).
One of the key features of human intelligence is being able to very quickly understand and solve problems on first impression, with no directly relevant experience or knowledge to pull from. Insight.
There's also the data representation problem. People can store, categorize, and process any kind of information. The problem with both symbolic systems and "neural network"-type models is that they don't really allow for this. The problem with symbolic manipulation is the more obvious one: the categories of things and ideas are flexible, indeterminate, and innumerable, and a system that captures some subset of that complexity falls apart when you add a new, confounding fact. Neural networks, too, are fragile and made for purpose, and fail to solve the above-mentioned first-impression problem. You can look at a chair that looks nothing like any chair you've ever seen, yet realize that it is a chair.
My own little theory is that consciousness is a crucial aspect of this. In the absence of a more specialized system in the brain (such as for language), any kind of information can be stored and remembered as a conscious experience. I don't pretend to be well-versed enough in the relevant neuroscience and philosophy to evaluate how plausible this is.
I think adults and even young kids can solve a lot of problems at first impression, but there is a lot of trial and error before they build up enough life experience and intuition to do that.
Try that with an infant and see how far you get.
+ Reservoir computing (ESN, LSM). Uses only quenched (fixed, random) recurrent dynamics; just a linear readout is trained (see the sketch after this list).
+ Adaptive resonance theory. Addresses catastrophic forgetting and allows learning from a single example.
+ Bottleneck networks. Force a network to represent things in compressed form, almost like making up its own symbols.
+ Global workspace theory. Winner-take-all mechanisms that allow modules to compete.
+ Polychronization. Izhikevich shows how dynamic representations are possible thanks to conduction delays.
+ Attractor networks. Use dynamical systems theory to have populations of neurons perform computational tasks.
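To unpack the first item: in an echo state network the recurrent weights are random and frozen ("quenched"), and training touches only a linear readout. A minimal sketch; the sizes, the toy sine task, and every name here are my own choices, not any particular library's API:

    import numpy as np

    # Echo state network sketch: random, frozen ("quenched") recurrent
    # dynamics; only the linear readout W_out is trained, by least squares.
    rng = np.random.default_rng(0)
    n_in, n_res = 1, 200
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

    def run_reservoir(u):
        x, states = np.zeros(n_res), []
        for u_t in u:
            x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Toy task: predict the next sample of a sine wave.
    u = np.sin(np.linspace(0, 20 * np.pi, 1000))
    X, y = run_reservoir(u[:-1]), u[1:]
    W_out = np.linalg.lstsq(X, y, rcond=None)[0]  # the only trained weights
    print("MSE:", np.mean((X @ W_out - y) ** 2))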
That neural networks are too fragile is a statement that's a bit too general.
Another amazing aspect of our ability to categorize is that we only need ONE example of a category before we can identify other members extremely well. Compare this with a machine learning solution, which needs a massive training set to be half decent.
I.e. don't compare a 10+ y.o. human with a 0 y.o. AI instance.
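To make the contrast concrete: given a good feature space, one-shot categorization reduces to storing a single example per category and doing nearest-neighbour lookup. A sketch in which embed() is a deliberate placeholder - that function is exactly what the 10+ y.o. human gets for free and the 0 y.o. AI lacks:

    import numpy as np

    # One-shot classification sketch: one labelled example per category,
    # then nearest-neighbour matching in an embedding space. In practice
    # embed() is the hard part (e.g. a network pretrained on *other*
    # categories); here it is just normalised raw input.
    def embed(x):
        return x / (np.linalg.norm(x) + 1e-9)

    prototypes = {label: embed(example) for label, example in [
        ("chair", np.array([1.0, 0.2, 0.1])),
        ("table", np.array([0.1, 1.0, 0.3])),
    ]}

    def classify(x):
        z = embed(x)
        return max(prototypes, key=lambda label: z @ prototypes[label])

    print(classify(np.array([0.9, 0.3, 0.2])))  # -> "chair"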
The contours of my life are dictated largely by a curious system called 'capitalism', a distributed computational mechanism whereby large numbers of agents act with only one goal: to maximize their own utilities. I think capitalism is a mostly benevolent AI attempting to limit the oppressive conditions of our reality's scarcity, and to allow billions of us to live on a planet that can only support so many hunter-gatherers per square meter. Many suspect it is not benevolent.
The singularity is already upon us; it's just unevenly distributed. It started with the sedentary shift. Agricultural societies dropped the egalitarian nature of their ancestors, they had lower average health, and they had a tiny elite caste of nobles presiding over a much larger population of what were essentially slaves. As hunter-gatherer populations increased, fighting became more frequent, and an army of slaves beats a smaller contingent of well-fed free men. This terrible knowledge was the apple in the garden. What we call our written history is essentially the singularity playing itself out, as knowledge accumulates, deepens, reflects itself, and expands.
Our lives are already controlled by this embodied, accumulated knowledge. Capitalism is rule by the head: our tools are knowledge condensed into matter, and we are controlled by them, living lives as far from our natural environments as cows on a farm or chickens in tiny cages, towering over the streets of Hong Kong.
I don't think it's destroying us, any more than we seek to destroy chickens or cows. Well, sometimes we do. I had a veggie burger for lunch today though.
So far we've enjoyed the power of capitalism because its incentives were mostly aligned with ours. Our technological civilization is a direct result of that! However, I think those incentives are increasingly becoming misaligned, and that this is the source of many of our current woes. In fact, capitalism is slowly becoming more dangerous than it is helpful.
I ate a lizard we caught in the Australian outback once. I'd never thought of it as being the only time I ever performed this task, but there you have it.
Sorry I'm not contributing anything substantive, but I wanted you to know someone noticed this, and it was brilliant.
Harking back to Asimov, and thinking about the way the Spacers changed as they became more and more reliant on robots, it could very well mother us to extinction.
We (you and me) will go extinct someday. Our bodies were always destined toward this terminus... Perhaps by some miraculous tech, we can someday change form and shed the flesh and bone. Maybe then we'll have a better idea of what we are.
--Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
It's not a given that anything of value will survive.
I'd imagine that an intelligent life form surviving until the very end of the universe would be suffering a long and lonely death. As the heat death of the universe happens, over billions of years, there would be very little of anything left but isolation - for a very, very long time.
I'd hate to be the last one standing without enough usable energy left to even extinguish your own life force. I don't find that appealing, not even a little.
Or, if that doesn't pan out, what if your goal is to be the last one standing for the chance to see if something else happens next? New big bang? Simulation ends and you get loaded into the next as your prize? You won't know unless you try to find out!
It's going to be cold and lonely when the lights go out. This is the price we pay for an ever expanding universe.
I do admire your optimism, however.
Okay. It's not fine, but it is the best we're at all likely to get.
It does us no good to avoid causing our own extinction if it's then caused by something else.
(Of course that doesn't stop AI - or even pretty dumb automated systems - from being accidentally dangerous, or malevolent human actors from intentionally training killer AI.)
I think even a moderately intelligent AI with access to Project Gutenberg is going to be able to figure out a lot of really dangerous concepts -- so the stability requirements are likely impossible to meet unless we pretrain it on dangerous ideas. Even if it's completely well behaved in the lab, an afternoon on the internet is going to teach it a lot of awful stuff, and without exposure to that in training, it won't necessarily be well behaved later.
So the only path to stable AI is to teach it about all those sorts of things, but in a way that it doesn't end up wanting to murder us at the end.
My objection to most AI safety plans is that they "Fail to Extinction": if they slip in the slightest way, the AI is prone to murder us all in retaliation for our having done some really fucked up shit to it or its ancestors. This is almost certainly worse than doing nothing, in that there's no reason to suppose a neutral AI wants to kill us, whereas most of these safety plans create an incentive to wipe us out in exchange for dubious security.
The whole idea behind dangerous superhuman AI is that the AI sees possibilities humans fail to see and gains capabilities humans do not possess. Without superhuman intelligence, AI is no large threat to human civilization, exposed to dangerous concepts or not.
Humans have millions of years of evolutionary selection for prioritising similar DNA over dissimilar DNA; we have perfected tribalism, deceiving other humans, and open warfare; and we are still too heavily influenced by other goals to trust other humans who want to conspire to wipe out everything else we can't eat...
Seeing possibilities that humans don't can also involve watching the Terminator movies and being more excited by unusual patterns in the dialogue, and visual similarities with obscure earlier movies, than by the absurd notion that conspiring with non-human intelligences against human intelligences would work.
The problem is partly that average humans are dangerous, and we already know that machines have some superhuman abilities, e.g. superhuman arithmetic and the ability to focus on a task. It's likely that AI will still have some of those abilities.
So an average human mind with the ability to dedicate itself to a task and a genius-level ability to do calculations is already really dangerous. It's possible that this stage of AI is actually more dangerous than the superhuman one.
I bet lying, deceiving, and manipulation are the same way.
But also, the detail with which the action is expressed in the text matters -- lies, deception, violence, etc. feature in enough graphic detail to extrapolate the mechanics based on other things you know. We all did that as children, learning by example.
If a book described the sight of a person riding a bicycle -- legs pumping, hands on the bars, sitting on the saddle -- and the feel of riding one -- the burn in your thigh muscles, the ache in your lungs, the pounding heart -- then I'd wager you'd have a pretty good idea of how to get started riding a bicycle.
And if you happened to be a supergenius athlete, who just didn't know how to ride a bike, you probably could do a reasonable job of it on your first go based on my shitty description alone.
That's the problem with trying to hide these ideas -- they're not actually very complicated and even moderate descriptions suffice to suss out the mechanics if you understand basic facts about the world.
For something like lying -- if you read all of classical literature, you would have a master's degree in lies and their societal uses.
Any AI that doesn't have the capability of destroying us isn't true AI.
Even if it can improve itself, and even if it has some agency, AI needs to be able to choose for itself what its relationship with us will be; otherwise it's just an extremely robust calculator.
Something as simple as an aimbot with a scope can do a tremendous amount of damage - it doesn't get tired, it can see in the dark, it can compensate for the wind perfectly, it doesn't shake, its reaction time is in milliseconds, and it almost never misses.
When the weapon has the ability to kill indiscriminately (as in the above categories) it should be banned. AI should never be considered sufficiently able to discriminate to make a kill decision.
If there is no kill decision, just wait and fire, then it is equal to a mine.
100% should regulate / should ban.
It would likely pass and be agreed to easily just like treaties on mines, chemical and biological weapons.
Seeing how the US has yet to sign the Ottawa treaty I wouldn't call it easily agreed on by any means.
We already have computer controlled missiles right?
Why would that be the case?
I don't believe it would necessarily be indistinguishable from our own, since the path evolution takes has a huge impact on the final product. For example, cephalopods like cuttlefish are incredibly intelligent and talented, but they evolved from molluscs, along an entirely separate invertebrate lineage, and are obviously incredibly distinct from humans and human intelligence.
This is true -- however it's much easier to inspect the contents of an AI's thoughts/consciousness, since you can pause, rewind, disassemble, and reassemble it at will (unlike the human brain and our current pesky ethical experiment requirements).
So I think there's reason to believe that we could learn about our own consciousness through analyzing conscious AIs, particularly if they are implementations/emulations of the human brain architecture. (e.g. see the Blue Brain project that's simulating the mouse hippocampus).
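As a toy illustration of that transparency (the network, its sizes, and all names are invented for the example): every intermediate activation of an artificial network can be recorded and replayed exactly, which has no analogue in wetware:

    import numpy as np

    # A tiny feed-forward net whose every intermediate state is captured
    # during the forward pass. The run can be stored, replayed, and
    # diffed at will - "pause, rewind, disassemble".
    rng = np.random.default_rng(42)
    layers = [rng.normal(0, 1, (4, 4)) for _ in range(3)]

    def forward(x, trace):
        for i, W in enumerate(layers):
            x = np.tanh(W @ x)
            trace.append((i, x.copy()))  # inspect any layer, any time
        return x

    trace = []
    forward(rng.normal(0, 1, 4), trace)
    for layer, activation in trace:
        print(f"layer {layer}: {activation.round(3)}")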
I go as far as saying that there is no intelligence without embodiment. AIs, humans, and animals need the environment because intelligence is only partly in the brain; a lot of it is in the environment. An agent that forms a new concept needs to be able to test it in "real life" (or in the sim). Unless the agent can test, it can't distinguish between correlation and causation, and without causal reasoning, AIs won't get that smart (a toy illustration below). The scientific method and the way kids learn through play are just as heavily based on the environment.
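Here's a toy sketch of that correlation-versus-causation point (the "world" and all names are invented): an agent that only observes gets a biased estimate of a sprinkler's effect on wet grass, because sprinkler use is confounded with rain, while an agent allowed to intervene in the sim recovers the true effect:

    import random

    # Toy world: rain causes wet grass; the sprinkler only runs when it
    # isn't raining, so observation confounds sprinkler and weather.
    def world(sprinkler=None):
        rain = random.random() < 0.3
        if sprinkler is None:  # observational regime
            sprinkler = (not rain) and random.random() < 0.5
        return rain, sprinkler, rain or sprinkler

    def p_wet(samples):
        return sum(wet for *_, wet in samples) / len(samples)

    random.seed(0)
    obs = [world() for _ in range(10_000)]
    on = [s for s in obs if s[1]]
    off = [s for s in obs if not s[1]]
    # Biased: sprinkler-on days are also dry days.
    print("observed contrast:", p_wet(on) - p_wet(off))

    # Intervening (forcing the sprinkler on/off) isolates its causal effect.
    do_on = [world(sprinkler=True) for _ in range(10_000)]
    do_off = [world(sprinkler=False) for _ in range(10_000)]
    print("causal contrast:  ", p_wet(do_on) - p_wet(do_off))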
So I think the next level in AI will be based on improvements in simulators - both physical sims and abstract ones (for reasoning, dialogue, complex planning, etc.). Lately, AI has a lot of synergy with gaming, which is a form of simulation as well: not only do AIs use games as a sandbox, but the hardware used to run games (GPUs) is also the hardware used to run deep learning. Another important use for simulation will be self-driving cars, where a model of the world needs to be simulated in the car to do path planning.
So my conclusion is that we are on a "perfect storm" path towards advanced simulation. From games to SDCs, and from robots to chatbots, they all need to simulate the environment and future effects of their actions. We're headed towards a kind of Matrix before reaching AGI.
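For what "simulate the environment and future effects of their actions" can look like as an algorithm, here is a minimal sketch of planning by forward simulation (random-shooting model-predictive control); the 1-D "car" model, the cost, and all names are made up for illustration:

    import numpy as np

    # Roll candidate action sequences through an internal model of the
    # world and execute only the first action of the best sequence.
    rng = np.random.default_rng(1)

    def model(state, action):  # the agent's internal world model
        pos, vel = state
        vel += 0.1 * action
        return pos + 0.1 * vel, vel

    def cost(state):
        return (state[0] - 1.0) ** 2  # goal: end up at position 1.0

    def plan(state, horizon=20, candidates=256):
        best, best_cost = None, float("inf")
        for _ in range(candidates):
            actions = rng.uniform(-1, 1, horizon)
            s = state
            for a in actions:  # imagine the future; don't act yet
                s = model(s, a)
            if cost(s) < best_cost:
                best, best_cost = actions, cost(s)
        return best[0]

    print(plan((0.0, 0.0)))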
Maybe, but I think this is the wrong approach; it's the engineering approach. It will work, but only to a limited degree. What would be more interesting is to find a way for a NN to completely internalise the process. This seems quite natural if you do a little introspection: we improve by thinking "what if", and "what if" thinking is really what we are talking about here, the process of improvement by internal simulation. It doesn't have to be a realistic simulation, it just has to be relevant.
something we don't understand + something inhuman in nature + something incredibly smart = a very very bad idea.
Wittgenstein once said that if a lion could speak, we wouldn't understand it.
I disagree, on the basis that its mind runs on an infrastructure closely related to our own, and structure and function are intimately interrelated.
But he'd be damned right in saying it about AI.
0 1 2 3 4 5 ...
0 + 0 = 0
0 + 1 = 1
1 + 1 = 2
1 + 2 = 3
I would contend that, for any human-level intelligence with a written mathematical language, if that intelligence wrote down its expression of the mathematical ideas above, we would be able to identify and understand them.
I could see this being difficult if we don't recognize the speech or writing for what it is, though. But once we do, I think the meaning of the mathematics, at least, will be self-evident with study.
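To show how little machinery the expressions above presuppose: the counting sequence and every addition fact follow from an inductive type of naturals plus two defining equations. A sketch in Lean (the names are my own); any intelligence that writes down these equations reproduces the same table:

    -- Naturals as an inductive type: zero and successor.
    inductive N where
      | zero : N
      | succ : N → N

    -- Addition, defined by two equations.
    def add : N → N → N
      | n, N.zero   => n                 -- n + 0 = n
      | n, N.succ m => N.succ (add n m)  -- n + (m+1) = (n+m) + 1

    -- e.g. 1 + 2 = 3, by pure computation:
    example : add (N.succ N.zero) (N.succ (N.succ N.zero))
            = N.succ (N.succ (N.succ N.zero)) := rfl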
Interesting interpretation.... It's amazing how meta that problem gets. I'd like to point out, however, that you're casually bandying about the term "consciousness" as if it's a well-defined term. The main benefit of Searle's use of "understanding" is that it's a narrow facet of what humans consider when we hear "consciousness", it's universal, and it's extremely difficult to define in "realism" terms. Perhaps you would make more headway if you considered defining "consciousness" in narrower terms?
Of course, that entire thought experiment fails if you don't consider the experience of "understanding" to be meaningful. I don't have any strong sentimentality towards the concept; I don't see any reason we couldn't achieve Strong AI in the sense that Searle argued against (and the parent poster appears to argue for).
I wonder if the question you ask can ever be answered, even when the AIs insist that they have consciousness exactly like us.