The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim:
Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.
Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.
---
If only the people building the AIs were this stupid!
> Systems of this sort cannot be modelled mathematically
There's an amazingly naive confidence required to say in absolute terms that something cannot be done. Essentially every great human achievement was considered impossible at one point.
“The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’ becomes apparent: most people have nothing to say to each other! By 2005, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.” - Paul Krugman in 1998
In the authors' defense, that exact point is addressed just after they state their initial propositions. Their discussion is much deeper than the comment above gives it credit for.
I will be nitpicky here. They say they will offer two specific reasons, but they don't. They write one reason, which has two predicates. Together the two points make up one reason; they are not reasons in themselves, and only together do they form an argument.
Obviously this is not a substantial critique of their argument. But it is quite telling that even in their description, which should be the best edited and most polished part of their book, they write sloppy things like this.
Click on the link. See the section labeled "Book Description". The first paragraph following that label ends "It offers two specific reasons for this claim:"
And true enough, two numbered points follow. But they are not "two specific reasons"; together they are one reason.
Let me illustrate what I mean by that:
Let's say I'm accused of being an axe murderer. In my defence I can offer the following argument:
1; I have proof that I was far away a few hours before the murder.
2; the distance between where I was and where the murder happened is such that I couldn't possibly have arrived in time.
This is not two reasons. This is a two-sentence argument, and one reason. Let's see an example with two reasons:
1; I was far away from the crime scene, see proof...
2; I don't have arms, thus even if I had been there I couldn't possibly have wielded an axe.
These are two separate reasons.
Why is this thing important? For two reasons:
Before I part with my money, and a substantial chunk of my free time, to read their argument, I have to ascertain that it is worth the investment. To do that, they have to signal in various ways that they are serious people and not bozos. An impeccable book description would count in their favour in that department.
The second reason is even more important: if they truly had two independent reasons supporting their thesis, that would make the argument stronger. Even if they made a mistake in one, the other might still stand.
As it now stands (based on the description and their first-chapter excerpt), this is how their argument goes:
their thesis: artificial intelligence that could equal or exceed human intelligence is impossible.
1; The only way to engineer such technology is to create a software emulation of the human neurocognitive system.
2; To create a software emulation of the behaviour of a system, we would need to create a mathematical model of this system that enables prediction of the system's behaviour.
3; It is impossible to build mathematical models of this sort for complex systems.
These points form a chain. If any one of these points is false, their argument fails. Thus they don't have two reasons; they have only one.
And I have serious doubts about every single one of those statements on its own, which makes their argument very shaky.
So it's not like mathematicians have ever tried to model complex dynamical systems? My entire PhD in complexity science and computational neuroscience must therefore be a joke, and the equations all figments of my imagination.
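To make that concrete, here's a minimal sketch (my own toy illustration, not anything from the book) of numerically integrating the Lorenz system, a textbook chaotic dynamical system, in plain Python. Whether models like this capture everything relevant about a brain is exactly what's in dispute; that complex dynamical systems can be modelled mathematically and run on a computer is not.

```python
# Toy sketch: integrating the Lorenz system, a classic chaotic dynamical
# system, with a fixed-step 4th-order Runge-Kutta scheme. Parameter values
# are the standard chaotic ones (sigma=10, rho=28, beta=8/3).

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz ODEs: (dx/dt, dy/dt, dz/dt)."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """One classical Runge-Kutta step of size dt."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(
        s + dt / 6.0 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

state = (1.0, 1.0, 1.0)
dt = 0.01
for step in range(5000):   # 50 time units of simulated trajectory
    state = rk4_step(lorenz, state, dt)
print(state)               # a point on the chaotic attractor
```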
> Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.
The book's author has apparently never walked around a Walmart at 11pm on a Saturday. I'm pretty sure some general Python or shell scripts would win in intelligence.
10,000 years from now, a supreme intelligence notes that superintelligence is a capability of a complex dynamic system: the intergalactic metaswarm; and that systems of this sort cannot be modelled organically by biological processes in a way that allows them to operate within a skin sack of wet meat.
Intelligence is situated. We may not be able to replicate human-like intelligence, since it presupposes a human being, but that doesn't mean we cannot come up with a different kind of intelligence which is functionally similar, as with flight.
One could make the argument that it is impossible to build a human-level AI that uses less mass and less energy than a human brain. I don't see how you can make an argument that it is flatly impossible given larger mass and energy budgets.
All of these books and articles providing proof-positive that we are special snowflakes in the universe, and that no machine could possibly keep up with our specialness, feel to me like the very serious scientists who claimed that heavier-than-air flight was impossible.
More seriously, though: a biological example of an AGI would just be a regular old intelligence. You are (presumably) human and intelligent, and therefore have instantiated the intelligence algorithm on a biological substrate. You might argue that we are not actually general intelligences, especially when not caffeinated. I think this is a bit of a cop-out, and we all know what we're talking about here :)
Just as birds proved that heavier-than-air flight was physically possible, humans prove that you can make matter perform as though it is intelligent.
I'm reminded of an excellent series of short pseudo-documentaries that explained life on Earth for aliens, and in that narrative they were trying to discover if there was intelligent life on Earth. The conclusion was that the dominant life form was Corporations, and individual humans were more like ants or worker bees. Then they decided the corps must be intelligent because they were terraforming Earth to make it warmer.
They are machines made of humans. Their point is about AGI working fully without human intervention. Of course, even pen and paper make you more intelligent than you are without them.
Automation is an example where it is shifting. A clear example of this is self-checkout at grocery stores, or the use of robotics at warehouses such as Amazon's.
We will be better justified in noting a shift when the machines begin to take on management responsibilities. These will presumably be programmed on behalf of the board or stockholders, and will dictate the activity of the remaining human employees.
A machine is not just its "management responsibilities". A machine is all of its parts. By shifting, I mean that some of the "parts" were people and are now automated or rely on robots.
Does it really make sense to say that a machine is not having its parts changed because its "management system" has remained the same?
The table of contents and the summary suggest extraordinarily weak arguments, from "one of the most widely cited contemporary philosophers". Maybe full brain simulation is impossible, but on the other hand (1) maybe it's not and (2) maybe there are other systems that can exhibit intelligence, other than one specific primate organ.
Has anyone actually read the book? Is the argument just poorly characterized in the summary?
I'm going through it; I got it recently. The arguments are stronger than what the TOC and excerpt reflect. While I'm still not convinced by their position (and don't feel like I will be), it is clearly good support for thinking through all those questions, so I'm enjoying it. That's why I shared it here: it is more about what I get from it than about the philosophical position.
Note:
The entire book is predicated on having nailed down a definition of human intelligence.
Even the Philosophers are still out on that one, and will take great delight in frustrating you with counterexamples that prove you haven't hit the epistemological or ontological neighborhood yet.
Any time you see a Mathematician, physicist, computer scientist, AI researcher, what have you, try to justify something like this, please… please remember the massive asterisk and footnote reading: "according to my hackneyed definition of something that isn't even widely accepted as settled, and which serves mostly to make this argument concrete enough to be had at all, so that my field can be applied to it."
The map is not the territory, and YMMV. Philosophically speaking, that means we're realistically talking about an issue of "whether your definition is actually correct." Also note that everyone can be in consensus on what everyone considers the dictionary definition, but the way life actually turns out can be completely different.
Reality, as it were, cares not one lick what you think about it. Much like a teenager being told not to go out like that.
It's not the Artificial Intelligences I fear, but the Artificial Stupidity. And that's existed since the 17th century (at least). See also: Banks, Phone Companies, Government (esp. at local levels).
As far as I can tell, they don't deny that a full atom-for-atom simulation of the human brain would work. They just argue that you can't simulate a human brain with abstract mathematical models (e.g. ANNs).
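For anyone unsure what "abstract mathematical model (e.g. ANNs)" means here: an ANN in this sense is just a parameterized function fitted to data, as in the toy sketch below (my own illustration of the term, not the book's example). The dispute is whether models of this kind can scale up to brain-like behaviour, not whether they exist.

```python
import numpy as np

# Toy ANN: one hidden layer, trained by gradient descent to fit XOR.
# This is the kind of "abstract mathematical model" the book contrasts
# with an atom-for-atom simulation: a fitted function, not physics.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (mean squared error)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    # gradient descent update
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approximately [0, 1, 1, 0]
```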
On page xi: "A2. The only way to engineer such technology is to create a software emulation of the human neurocognitive system. (Alternative strategies designed to bring about an AGI without emulating human intelligence are considered and rejected in section 3.3.3 and chapter 12.)"
This claim seems central to the book's thesis and, at least to me, very nonobvious/unintuitive so it's odd that its support is apparently relegated to one sub-sub-chapter near the beginning and one chapter near the end of the book. The introduction doesn't really even attempt to sketch a suggestion of an argument for this.
Most of the atoms would be useless in a simulation: energy transfer and production, waste management, the immune system, neurotransmitter recycling, non-cognition-related protein and RNA expression...
I agree, but I'd also be curious to hear why the authors think that is not the case (unclear to me, but I didn't read very far as it is not very accessible to me as a non-philosopher).
> Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.
I wonder whether their argument also shows that systems of this sort cannot be modelled in a way that allows them to operate in a blob of cells inside a mammal's skull, but I'm not going to pay to find out.
I'm not sure I'd want to bother with a book written by authors who are convinced that something is impossible. It's a very tenuous intellectual foothold, which is usually intended for a gullible audience.
Here is a different thing which is wrong. There is a tacit premise that intelligence equal to or in excess of human intelligence is required in order to subdue humanity ("Rule the World"). But that doesn't appear to be necessarily so, because less intelligent humans rule over more intelligent humans.
The world is not currently ruled by the most intelligent, though those who rule make use of intelligent people, and intelligent people are sometimes well rewarded for assisting power.
You are not powerful because you are intelligent. You are because you got privileged and allowed to acquire power, you made ethical and moral choices allowing for it and you are fine exploiting others intelligence, power or resources (by coercion or profit sharing).
> [the human brain] cannot be modelled mathematically in a way that allows [it] to operate inside a computer.
This would be one of the most significant discoveries made in the last century. For me, a layman in the relevant fields and subfields, to entertain the possibility I think I would need to be reading a news article about, like, a precocious Stanford grad student and his prepublication causing controversy in academic circles.
While likely correct (confirmation bias), I'd add that none of this prevents poorly designed systems from having potentially lethal (at scale) unintended consequences.
We (developers) need to be far better at second-order/second-level thinking.
Everything is possible, including AGI and simulating the universe. If some mathematicians and physicists think these things are impossible, that is up to them; time will tell.
People who rule the world are already mindless machines. Or at least machine parts of mindless mega-machines.
Like a machine, they follow very strict rules about what they can say, do, and think. Over time, and through constant repetition, they become incapable of doing anything other than performing to that spec.
The biggest problem the world faces is that this robot army has lost touch with its humanity, and it has become very hard to dislodge them from the loops they are trapped in.
Yeah, I think the willingness of humans to reduce themselves to a stereotype -- politically, professionally, or socially -- in order to advance themselves (or to just fit in) is a much greater danger to civilization than an all-powerful AI rising up and turning all us humans into powerless minions of a robot overlord.
Stuart Russell started to make this point in his recent book "Human Compatible", I think, but never quite got there. It's a lovely intro to the state of modern AI, but to my eye the book fell short of actually prescribing HOW we might shape AI to be a positive force for humankind, rather than letting it grow mindlessly into a problem child shaped only by obvious stereotypes and stratifications.
Hopefully it can learn from its parent what NOT to do...
I'm curious as to why you think it's worth buying this book only to form an opinion. To me, such a book could just be a long article posted for free on the net.
People have already posted various fragments, and other people are not impressed. That alone is a pretty decent decision-making process.
Many similar negative books about AI have preceded this one over the years, perhaps starting with Hubert Dreyfus' "What Computers Can't Do" in 1972.
In my experience, few naysayer arguments prove to be the game-stoppers they imagine. Alternative paths around those obstacles always seem to arise, like the ability to learn from millions of examples -- something never imagined by Dreyfus in the days of 1 kHz computers and magnetic-drum storage.
The super intelligent AI creates an even smarter AI, which replaces the parent, and so on and so forth, forcing the AI into an endless series of reproduction.