Most of the points in the essay (there are more than three) seem to me to be wrong or off target. I started typing a point-by-point response but it turned out quite long. If someone has the impression that the essay has some decisive strong point, could you point it out so I can respond to just that?
EDIT: as a demonstration, I will deal with the first point in the essay. It says: "super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely". Actually, getting to strong AI through math (rather than through mimicking humans) sounds more probable to me. We already have formalisms that can compute the most accurate possible prediction and the most efficient possible way to optimize a utility function (inferring the right physical laws in the process) if given tons of computing power; for example, look up Solomonoff induction or Marcus Hutter's AIXI. These count as superintelligences, or at least superweapons that can destroy the world. Stross's argument does not demonstrate the unlikelihood of someone implementing a fast approximation to AIXI tomorrow.
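For a concrete (if very loose) illustration of the Solomonoff idea: weight every hypothesis by 2^-length, keep the ones consistent with the observed data, and predict by weighted vote. Real Solomonoff induction quantifies over all programs and is uncomputable; the toy below restricts the hypothesis class to repeating bit patterns, and all names in it are my own invention, not anything from the essay:

```python
# Toy Solomonoff-style predictor. Hypotheses are repeating binary
# patterns (a stand-in for "programs"), weighted by 2^-length, so
# shorter explanations count for more -- the Occam prior in miniature.
from itertools import product

def predict_next(bits, max_len=8):
    """Return P(next bit = 1) under a 2^-length prior over
    repeating patterns consistent with the observed bits."""
    weight = {0: 0.0, 1: 0.0}
    for k in range(1, max_len + 1):
        for pattern in product('01', repeat=k):
            generated = [int(pattern[i % k]) for i in range(len(bits) + 1)]
            if generated[:len(bits)] == list(bits):   # hypothesis survives the data
                weight[generated[-1]] += 2.0 ** -k    # shorter = more probable
    total = weight[0] + weight[1]
    return weight[1] / total if total else 0.5

print(predict_next([0, 1, 0, 1, 0, 1]))  # small: the short pattern "01" predicts 0 next
```

With no data at all it returns 0.5, as it should; as evidence accumulates, the shortest consistent pattern dominates the posterior.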
I agree with you, but I think I can respond in summary instead of point-by-point:
"First: super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely."
The arguments that follow don't say anything about why it's impossible or even prohibitively difficult to do, they only provide reasons why people wouldn't want to try. There are, however, motivations beyond those he takes into consideration.
Uploading:
Again, he doesn't make any arguments as to why uploading is not achievable, he just talks about the very hard ethical questions that arise when dealing with uploaded intelligences. That didn't stop us from inventing nuclear weapons and a host of other ethically challenging technologies, so why would it stop us from inventing uploading?
One particular statement he makes in this section I do want to address specifically:
"Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death."
This is so obviously untrue that I don't know how any intelligent person could use it in a serious argument. Religions that believe in immortal souls will simply maintain that the uploaded copy is just that: a copy, a soulless simulation of a real person. How can you possibly prove this one way or another? "Soul" is a purely religious concept, beyond any temporal means of observation or measurement, and therefore not subject to empirical study. I still think you would see plenty of religious people oppose uploading, but for different reasons.
He finishes by discussing the possibility that our entire universe is already a simulation being run in the "real" world, and admits that nobody can prove this one way or another, at least not anytime soon.
"Religions that believe in immortal souls will simply maintain that the uploaded copy is just that: a copy, a soulless simulation of a real person."
This might be true of some religious institutions but not of all. The concept of the 'Soul' is actually derived from Aristotelianism (there is no mention of it as such in the Bible) and many sects (particularly Catholics) believe that in order for something to be intelligent it needs to have a soul, that is, an immaterial aspect. If an uploaded person, or an AI, or a space alien or whatever is demonstrated to be intelligent, it would by definition have a 'soul.'
You would still get lots of folks who refused to accept the personhood of uploaded persons, and those folks might actually dominate, but the actual theological consequences would be different (and far more complex) than you imagine.
"The concept of the 'Soul' is actually derived from Aristotelianism (there is no mention of it as such in the Bible)"
Wrong, and wrong. The notion of the "soul" in Greek philosophy predates Aristotle; it's found in Plato's Phaedo, for example. And there are several words in the Hebrew bible that are usually translated as "soul" (nefesh and ruach, which correspond roughly to psyche and pneuma, respectively).
"and there are several words in the Hebrew bible that are usually translated as "soul""
I stand corrected on this.
"The notion of the "soul" in Greek philosophy predates Aristotle."
True, but my point is that certain sects of Christianity borrow heavily from Greek philosophy when trying to intellectualize what a 'soul' actually is. Some tend to follow the Aristotelian conception rather than the Platonic (they are distinct.) I'm not trying to say that Aristotle invented the concept of the soul if that's what you're thinking.
Very interesting. I see a few possible lines of argument that sects who believe this (must have a soul to be intelligent) could take. The cruder argument would be a variation of "no true Scotsman": "It's not really intelligent, it's just doing a very good job of reproducing the behavior of an intelligent being." There are a lot of issues with that argument, so a better approach might be, "When it becomes intelligent, God endows it with a distinct, unique, and individual soul." Another possible approach would be, "The original and the copy share the same immortal soul," but that would be incompatible with some concepts of the afterlife. There's also a version heavy on FUD: "When it becomes intelligent, it is possessed by an evil soul."
Yes, religions will almost always maintain a position that retains their previous "correctness", but if you could make a "soulless simulation" that is indistinguishable from the "real thing", it'd make a lot of people scratch their heads and say "wait, so what role does the soul play here?" ... it would seem to make the soul redundant. (Of course, that feeling of the soul's lack of necessity has already happened for most that have any knowledge of modern neuroscience.)
There's also the position in Peter F. Hamilton's Night's Dawn series that all sentient minds have souls, regardless of how they were created or whether there's a similar mind somewhere out there. And presumably Christians can always say "God can figure it out" with regard to where exactly the boundaries should be.
Religion is a fine art of emotional manipulation, but it has a weak spot: it requires an intact emotional (limbic) system. What if our artificial counterparts forgo that?
Many philosophers and theologians have made completely rational (emotion-free) arguments in favor of religion (sometimes a particular religion, sometimes religion in general).
That being said, I'll accept your fallacious assumption for a moment and turn your question around: what if an emotional system is necessary for self-aware, volitional intelligence? Our emotions are inseparably intertwined with the other aspects of our minds, and it might very well be that they are a fundamental aspect of consciousness.
"Many philosophers and theologians have made completely rational (emotion-free) arguments in favor of religion (sometimes a particular religion, sometimes religion in general)."
Many philosophers and theologians have attempted to make completely rational arguments in favor of the existence of God. My liberal arts theme in undergrad was philosophy so I've read a fair amount, but I have yet to see a compelling argument. Could you please make some suggestions?
Just because you didn't find them compelling doesn't mean they are somehow flawed. There are whole schools of philosophy that are mutually exclusive and yet are subscribed to each by many very intelligent people who are unable to decisively prove each other wrong. Having studied so much philosophy, you of all people should understand that there is a huge difference between a rational, consistent argument and a proof.
EDIT: Proving or disproving the existence of God is, in general, a fool's game. One could probably prove that certain types of deities are impossible, but it's impossible to prove that no form of deity is possible. On the other hand, proving the existence of God would almost certainly require a well-documented divine intervention, and even then many people would (rightfully) remain skeptical.
Also, my previous post didn't say anything about proving the existence of God: I said that they made "arguments in favor of religion," which is not the same thing. First, not all religions involve deities. Even if you restrict the discussion to religions involving deities, it is possible to argue that it is desirable or advantageous to believe in God without first proving that God exists. Furthermore, even some atheists argue that religion in general or some religions in particular have had a net positive impact on humanity.
Good points here, although I must note that your appeal to authority ("many intelligent people") does not lend credence to any of these arguments. For many people, they are no more true than any random syntactically correct sentence. And in any case, I feel this thread is diverting to a ... religious argument.
p.s. "...but it's impossible to prove that no form of deity is possible" - could you point to a proof of that?
It was not an appeal to authority, nor was it intended to support any of those arguments. It was intended to underscore the point that a particular philosophical argument is not automatically invalid simply because some people, no matter how intelligent and rational, do not find it convincing. In effect, it was a rejection of gp's appeal to his own authority as a student of philosophy.
As for your latter point: if you disprove the possibility of deities with a particular quality or set of qualities, I can then describe a deity which lacks those qualities and is therefore outside the set of deities you have proven impossible. For example, if you can somehow prove that there has never been divine intervention of any form (which I highly doubt is possible to prove), then I could respond by positing a deity that does not intervene in the temporal universe at all, but instead rules only in the afterlife. Since the afterlife is beyond temporal measurement, it is by definition beyond empirical observation, and there is therefore no way you can disprove the existence of an afterlife. By extension, there is no way you can disprove the existence of a deity which rules the afterlife.
Thanks for your time. See, I am an eliminativist myself, so I doubt things like the afterlife and spirit are accessible to us humans. I might accept forms of non-intervening deities if it were confined to setting the initial conditions of the big bang. Point is, arguing against it or for it is like arguing about the weather.
Well, if you accept that it's impossible to prove that we aren't in a simulation - then it's possible we are in a simulation, and that would make the being responsible for simulating us a deity of sorts.
a) The arguments I've heard about religious faith attempt to explain metaphysics by resorting to metaphysics, which, to me, is weak circular logic; but, fair enough, I don't have the means to disprove them either. My critical attitude was towards religion, the social construct which has historically been used to manipulate human groups through religious faith. Then again, maybe religion has been evolutionarily advantageous, although it hasn't been around long enough to be selected out of the population.
b) You are right. AFAIK, religious experiences involve the parietal and temporal lobes, so it's not just the emotional system. Agreed that it's an important (irreplaceable?) part of our self-awareness. However, the fact that non-religious people can act ethically in a harmonious society indicates that it might be redundant. Still, religious beliefs arise in a brain, so there will probably be a way to turn them off (probably not without side effects).
Don't they have to? Isn't everyone irrational to some extent? I mean it may be as little or minor as believing one economic/political theory over another in the absence of clear evidence. Or betting on string theory over another explanation.
Cognomancers, at this point, will attempt to restate what they mean by "irrational" without using it or any close synonyms. They might also try to figure out if religiosity is correlated with greater levels of whatever they mean by "irrationality," or not.
Making a random decision in the absence of empirical evidence is rational thinking. Making it in the presence of overwhelming evidence or counter-evidence is irrational.
CS is an interesting personality. On one hand, he writes fiction about the singularity, AI and interstellar travel, and is clearly enthusiastic about them. On the other hand, in his essays he states that humanity will probably never leave the solar system, and that a singularity will likely never happen.
Also interesting is that most of his arguments seem to be economic ones. In short, "it doesn't seem useful so the required investments to develop it will never be made". Not so much that it is impossible physically.
He might be right, he might be wrong, only time will tell. Personally I don't think it is very useful to make predictions about this (which is kind of the same stance that he takes "it's unwise to live on the assumption that they're coming down the pipeline within my lifetime...").
I always kick the tyres and try to pick holes in an idea before I take it out for a long road trip. That way I'm less likely to have to call the AA ...
Actually, getting to strong AI through math (rather than through mimicking humans) sounds more probable to me. We already have formalisms that can compute the most accurate possible prediction and the most efficient possible way to optimize a utility function (inferring the right physical laws in the process) if given tons of computing power; for example, look up Solomonoff induction or Marcus Hutter's AIXI. These count as superintelligences, or at least superweapons that can destroy the world. Stross's argument does not demonstrate the unlikelihood of someone implementing a fast approximation to AIXI tomorrow.
While that'd be a nice thing to have, it doesn't seem anything like an AI in the usual sense... i.e. the sense in which AIs have personhood and their own thoughts and feelings and goals and so forth.
Such things may not have personhood but they do have internal reasoning, can be built to have real-world goals, can solve any "intelligence-bound" problem (like math or physics) better than any human, and can cause something like the Singularity to occur (which the essay was arguing against in the first place).
It seems that conceptualizing AIs as human-like is the #1 top mistake that leads people to think AIs are impossible (or, alternatively, safe once they're built). It's more enlightening to think of an AI as a cold mathematical machine that uses computational power to achieve whatever goal the original programmer specified, possibly increasing its own power in the process. It will happily eat your babies if it needs more atoms. Its lack of personhood won't save you.
Eliezer Yudkowsky has a very good synonym for "AI" that doesn't trigger anthropomorphism: "Optimization Process". An example is Darwinian Evolution. And it is quite obvious that Evolution isn't alive or sentient, let alone human-like.
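The "optimization process" framing can be made concrete with a toy: blind variation plus selection pressure toward whatever objective got written down, with nothing sentient anywhere in the loop. This is my own illustrative sketch, not any published AI design:

```python
import random

# A minimal "optimization process": random mutation plus selection.
# It "pursues" whatever goal the programmer specified, with no
# understanding, feelings, or personhood involved.
def optimize(objective, dim=5, steps=2000, seed=0):
    rng = random.Random(seed)                      # fixed seed: deterministic run
    x = [rng.uniform(-5, 5) for _ in range(dim)]   # random starting point
    best = objective(x)
    for _ in range(steps):
        cand = [xi + rng.gauss(0, 0.3) for xi in x]  # blind variation
        score = objective(cand)
        if score > best:                             # selection: keep improvements
            x, best = cand, score
    return x, best

# The goal is just a function; the process climbs it mechanically.
goal = lambda v: -sum((vi - 2.0) ** 2 for vi in v)   # maximized when every vi = 2
solution, score = optimize(goal)
print([round(vi, 2) for vi in solution])             # coordinates near 2.0
```

Swap in a different `goal` and the same mindless loop chases that instead; that substitutability is exactly why anthropomorphizing the process misleads.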
Good old evolution. Humans are just one of its fruits, yet we think so highly of ourselves.
If a superhuman AI exists, exactly what evolutionary niche are humans going to fill? Because there's a word for species which don't have an evolutionary niche: extinct.
"It seems that conceptualizing AIs as human-like is the #1 top mistake that leads people to think AIs are impossible (or, alternatively, safe once they're built). It's more enlightening to think of an AI as a cold mathematical machine"
But really, even humans are just cold mathematical machines when you get below a certain layer.
The anti-human-equivalent AI argument was really strange. When you cut out the irrelevant evolutionary stuff and avoid the segue into ethics, he is arguing that we won't have human-level AI in machines because humans don't want to design machines that behave like humans. But in fact that is exactly what many people want to do. Very incoherent argument.
Exactly. And I really don't think Vinge meant "AI having the same goals and attitudes as a human" by "human-equivalent AI". Rather, it's pretty clear in context he's talking about intelligence.
Exactly what I wanted to post. There is a lot of totally unsubstantiated and downright incorrect stuff in this.
Very surprising coming from a SciFi author: most people who write science-themed SciFi tend to perform exhaustive research into the existing science. Disappointing...
Your rebuttal can be used as is to argue that an oracle to the Halting Problem is a superintelligence, and that Stross's argument doesn't demonstrate the unlikelihood of someone implementing a fast approximation to it tomorrow.
You're right, but this doesn't kill my argument, because not all instances of the halting problem are difficult. It's quite possible that some path to godhood (compared to a puny human) passes only through "easy" instances of the halting problem, efficiently solvable by some simple algorithm. In fact this is how we humans got to our current state of science and progress. Our math capabilities are evolutionarily new and inefficient: some of us can't even multiply 32-bit numbers!
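To make "easy instances" concrete: bounded simulation settles every program that halts quickly and punts on the rest, so a checker can be useful without touching the undecidable core. A toy sketch of my own (all names hypothetical; no claim this scales):

```python
# Many halting instances are easy: just run the program for a while.
# We model programs as Python generator functions; each yield is one
# step. The checker answers True ("halts") or None ("unknown") --
# the general problem stays undecidable, but fast halters are trivial.

def halts_within(f, steps=10_000):
    """True if f() finishes within the step budget, None otherwise."""
    gen = f()
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True          # halted within budget: an easy "yes"
    return None                  # budget exhausted: genuinely unknown

def countdown():                 # halts after 100 steps
    n = 100
    while n > 0:
        n -= 1
        yield

def forever():                   # never halts
    while True:
        yield

print(halts_within(countdown))   # True
print(halts_within(forever))     # None -- this instance is hard for us
```

The parallel to the thread's point: a system that only ever meets `countdown`-shaped instances never needs the oracle at all.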
Sure it's possible, but you have no evidence for it. Consider what your argument reduces to:
- CS says the path to super-intelligent AI lies through human-equivalent AI (actually, he says Vinge's program says that, and he explicitly criticizes it)
- but I can find an uncomputable perfectly ideal math function
- and I can hypothesize that maybe it'll be possible to approximate it cheaply
- and I can hypothesize that easy instances of it are good enough
- and I will now view human progress as a way of approximating that function, and submit that only easy instances were used.
This isn't merely a red herring, it's a red herring full of contradictions. If easy instances are enough, what's the relevance of the provably ideal uncomputable problem, besides rhetoric? If humans used easy instances, and they're the only example we know of that achieved I, how is that not weak evidence that AI through easy instances will pass through human-like I?
> If easy instances are enough, what's the relevance of the provably ideal uncomputable problem, besides rhetoric?
It shows that there's a path to AI that's gated on solving formal math problems instead of imitating wishy-washy emotional humans. This point may be obvious to you, but apparently it wasn't obvious to Stross.
> If humans used easy instances, and they're the only example we know of that achieved I, how is that not weak evidence that AI through easy instances will pass through human-like I?
Humans are also the only animals that can play chess, but chess-playing software didn't pass through a human-like stage where it got nervous, forgetful, etc. My intuition says math can't be that radically different from chess (both involve search on trees with a complicated evaluation function), and we're only a handful of new insights away from having computers decisively beat humans at math. And at that moment, by implication from my previous point, they might just also beat us at everything else :-) Remember that, on an absolute scale, humans are about as bad at math as you can be and still build a civilization (otherwise civilization would've happened at a lower level of development), and also that we probably suck at math because math is a conscious task (see Moravec's paradox).
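The chess point is really just "tree search plus an evaluation function". Here's the shape of that idea in miniature, on the trivial "21 game" (players alternately add 1-3 to a counter; whoever says 21 wins) instead of chess. This is my own toy, not how real engines work, but the search skeleton is the same:

```python
# Negamax search over the 21-game: pure tree search, no nerves,
# no forgetting. Exact values here since the game is tiny; chess
# engines cut the tree off and substitute a heuristic evaluation.
from functools import lru_cache

TARGET = 21

@lru_cache(maxsize=None)
def negamax(total):
    """Value for the player to move: +1 = win, -1 = loss."""
    if total == TARGET:
        return -1                    # previous player just said 21 and won
    return max(-negamax(total + k)
               for k in (1, 2, 3) if total + k <= TARGET)

def best_move(total):
    """Pick the increment whose resulting position is worst for the opponent."""
    return max((k for k in (1, 2, 3) if total + k <= TARGET),
               key=lambda k: -negamax(total + k))

print(negamax(0))      # 1: first player wins with perfect play
print(best_move(0))    # 1: move to total 1, a losing position for the opponent
```

The losing totals are 1, 5, 9, 13, 17 (one more than a multiple of 4), and the search rediscovers that strategy mechanically, with no human-like intermediate stage.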
"Enhancements to primate evolutionary fitness are not much use to a machine." Said who?
"I strongly suspect that the hardest part of mind uploading won't be the mind part, but the body and its interactions with its surroundings." That's actually the easy part: the motor and sensory systems, even in humans, are largely deciphered.
"Uploading implicitly refutes the doctrine of the existence of an immortal soul." I thought this was refuted centuries ago, and anyway, it's a red herring.
"our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it" Is the "real world" something stable? The same human brains performed great in both stone-age cave living and modern megacity life.
And so on and so on. The basic argument is "It is definitely possible, but we may have reservations about it, ergo it's impossible".
a) Faulty generalization. Walking, sensing, thinking: evolution gave them to us, and the machine would benefit from having these capacities, unless it lived in an alternate universe.
b) Stephen Hawking
c) False: the soul is not a corporeal thing [for those who believe in it], whatever the corpus.
d) We are limited by what we cannot perceive through our brains. We are "dragged back" by our inability to perceive higher intelligence, not because we're too lazy to adapt.