Why Machines Will Never Rule the World – Artificial Intelligence Without Fear (routledge.com)
26 points by ta988 on Aug 23, 2022 | 81 comments



To save anyone else the trouble:

----

The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim:

    Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.

    Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.
----

If only the people building the AIs were this stupid!


> Systems of this sort cannot be modelled mathematically

It takes an amazingly naive confidence to say in absolute terms that something cannot be done. Essentially every great human achievement was considered impossible at one point.


“The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’ becomes apparent: most people have nothing to say to each other! By 2005, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.” - Paul Krugman in 1998


Krugman was right that most people have nothing to say to each other. He just didn't realize that it wouldn't stop them from talking, incessantly.


"Heavier-than-air flying machines are impossible" -Lord Kelvin, 1895


In the authors' defense, that exact point is addressed just after they state their initial propositions. Their discussion is much deeper than the comment above makes it out to be.


> It offers two specific reasons for this claim:

I will be nitpicky here. They say they will offer two specific reasons. But they don't. They write one reason, which has two predicates. Together, the two points make up one reason. They are not reasons in themselves; only together do they form an argument.

Obviously this is not a substantial critique of their argument. But it is quite telling that even in their description, which should be the best edited and most polished part of their book, they write sloppy things like this.


They offer them in the book; the comment above is incomplete and erroneous.


I was not going by the comment above.

Click on the link. See the section labeled "Book Description". The first paragraph following that label ends with "It offers two specific reasons for this claim:"

And true enough, there are two numbered points that follow. But they are not "two specific reasons"; together they are one reason.

Let me illustrate what I mean by that:

Let's say I'm accused of being an axe murderer. In my defence I can offer the following argument:

1. I have proof that I was far away a few hours before the murder.

2. The distance between where I was and where the murder happened is such that I couldn't possibly have been there in time.

This is not two reasons. This is a two-sentence argument, and one reason. Let's see an example with two reasons:

1. I was far away from the crime scene (see proof...).

2. I don't have arms, so even if I had been there I couldn't possibly have wielded an axe.

These are two separate reasons.
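
To put the distinction in code, a minimal sketch of my own (not the authors'): a chained argument is a conjunction, where one false premise sinks the conclusion, while independent reasons behave like a disjunction, where each alone carries it:

    # Chained argument: the conclusion needs every premise (one reason).
    def chain_holds(premises):
        return all(premises)  # one false premise => the whole argument fails

    # Independent reasons: each alone suffices (genuinely two reasons).
    def any_reason_holds(reasons):
        return any(reasons)  # one failed reason leaves the others standing

    print(chain_holds([True, True, False]))  # False: the chain breaks
    print(any_reason_holds([False, True]))   # True: the other reason still stands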

Why is this thing important? For two reasons:

Before I part with my money, and a substantial chunk of my free time, to read their argument, I have to ascertain that it is worth the investment. To do this, they have to signal in various ways that they are serious people and not bozos. An impeccable book description would count in their favour in that department.

The second reason is even more important: if they truly had two independent reasons to support their argument, that would make it stronger. Even if they made a mistake in one, the other might still stand.

As it is now (based on the description and their first chapter excerpt), this is how their argument stands:

Their thesis: artificial intelligence that could equal or exceed human intelligence is impossible.

1. The only way to engineer such technology is to create a software emulation of the human neurocognitive system.

2. To create a software emulation of the behaviour of a system, we would need to create a mathematical model of this system that enables prediction of the system's behaviour.

3. It is impossible to build mathematical models of this sort for complex systems.

These points form a chain: if any one of them is false, their argument fails. Thus they don't have two reasons; they have only one.

And I have serious doubts about every single one of those statements on its own, which makes their argument very shaky.


So it's not like mathematicians have ever tried to model complex dynamical systems? My entire PhD in complexity science and computational neuroscience must therefore be a joke, and the equations all figments of my imagination.
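
For the record, the equations in question are entirely ordinary to simulate. A minimal sketch of my own (not from the book): the FitzHugh-Nagumo neuron model, a textbook nonlinear dynamical system from computational neuroscience, integrated with plain forward-Euler steps (only NumPy is assumed):

    import numpy as np

    def fitzhugh_nagumo(steps=10000, dt=0.01, i_ext=0.5):
        # Integrate the FitzHugh-Nagumo model with forward Euler steps.
        v, w = -1.0, 1.0            # membrane potential, recovery variable
        trace = np.empty(steps)
        for t in range(steps):
            dv = v - v**3 / 3 - w + i_ext    # fast (voltage) equation
            dw = 0.08 * (v + 0.7 - 0.8 * w)  # slow (recovery) equation
            v, w = v + dt * dv, w + dt * dw
            trace[t] = v
        return trace

    print(fitzhugh_nagumo()[:5])  # first few points of the spiking voltage trace

Whether such models capture "intelligence" is the authors' real question; that complex dynamical systems run inside computers is not in doubt.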


You should form your own opinion of the book; this exact point is discussed for a full chapter, and they don't say such systems can't be modelled.


> Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.

The book's authors have apparently never walked around a Walmart at 11pm on a Saturday. I'm pretty sure some general Python or shell scripts would win in intelligence.


10,000 years from now, a supreme intelligence notes that superintelligence is a capability of a complex dynamic system, the intergalactic metaswarm, and that systems of this sort cannot be modelled organically by biological processes in a way that allows them to operate within a skin sack of wet meat.

And yeah, like, that's obviously accurate.



Are you flapping your meat as you write this?


There is a false equivalence here.

Not able to model != not able to implement.

Intelligence is situated: we may not be able to replicate human-like intelligence, since it presupposes a human being, but that doesn't mean we cannot come up with a different kind of intelligence that is functionally similar, as with, e.g., flight.

(not rooting for AGI to happen anytime soon)


One could make the argument that it is impossible to build a human-level AI that uses less mass and less energy than a human brain. I don't see how you can make an argument that it is flatly impossible given larger mass and energy budgets.


Checkmate, atheists!


Wow.


All of these books and articles etc. providing proof-positive that we are special snowflakes in the universe and that no machine could possibly keep up with our specialness feel to me like the claims of the very serious scientists who said heavier-than-air flight was impossible.


And in both cases those making the claims have somehow managed to ignore existing biological counterexamples.


I'm sorry what is the biological counterexample for AGI?


You, presumably.


Can't tell if joking... but, I'm not a machine.


But aren't you a Metal Liqaz?

More-seriously though, a biological example of an AGI would just be a regular old intelligence. You are (presumably) human and intelligent, and therefore have instantiated the intelligence algorithm on a biological substrate. You might argue that we are not actually general intelligences, especially when not caffeinated. I think this is a bit of a cop-out and we all know what we're talking about here :)

Just as birds proved that heavier-than-air flight was physically possible, humans prove that you can make matter perform as though it is intelligent.


Machines already rule the world. We call them corporations.

Thus far their parts are people. That will shift.


I'm reminded of an excellent series of short pseudo-documentaries that explained life on Earth for aliens, and in that narrative they were trying to discover if there was intelligent life on Earth. The conclusion was that the dominant life form was Corporations, and individual humans were more like ants or worker bees. Then they decided the corps must be intelligent because they were terraforming Earth to make it warmer.


Can you find the reference? Sounds very interesting!


I found it. It's a five-part series by Adam Westbrook called Parallax.

https://youtu.be/42XcN52xEfU


Great thank you!


https://www.thwink.org/sustain/glossary/NewDominantLifeForm.... is an entirely serious summary of the argument. I've never seen a satirical documentary based on these ideas though :(


I dug it up, see parent


This concisely nails my expectations as well


They are machines made of humans. Their point is about AGI working fully without human intervention. Of course, even pen and paper make you more intelligent than you are without them.


Great point. The only modification that I would make:

* That is shifting


It isn't, yet.


Automation is an example of where it is shifting. A clear example of this is self-checkout at grocery stores, or the use of robotics at warehouses such as Amazon's.


We will be better justified in noting a shift when the machines begin to take on management responsibilities. These will presumably be programmed on behalf of the board or stockholders, and will dictate the activity of the remaining human employees.


It is already happening. Management responsibilities are being automated. For example, DAOs: https://en.wikipedia.org/wiki/Decentralized_autonomous_organ...

A machine is not just its "management responsibilities". A machine is all of its parts. By shifting, I mean that some of the "parts" were people and are now automated or rely on robots.

Does it really make sense to say that a machine is not having its parts changed because its "management system" has remained the same?


> And why, after more than 50 years, are our most common interactions with AI, for example with our bank’s computers, still so unsatisfactory?

Guys, guys, we don't need to worry about Skynet because our banks are bad at UX.


The table of contents and the summary suggest extraordinarily weak arguments, from "one of the most widely cited contemporary philosophers". Maybe full brain simulation is impossible, but on the other hand (1) maybe it's not and (2) maybe there are other systems that can exhibit intelligence, other than one specific primate organ.

Has anyone actually read the book? Is the argument just poorly characterized in the summary?


I'm going through it; I got it recently. The arguments are stronger than what the TOC and excerpt reflect. While I'm still not convinced by their position (and don't feel like I will be), it clearly is a good support for thinking about all those questions, so I'm enjoying it. That's why I shared it here; it is more about what I get from it than about the philosophical position.


Note: The entire book is predicated on having nailed down a definition of human intelligence.

Even the Philosophers are still out on that one, and will take great delight in frustrating you with counterexamples that prove you haven't hit the epistemological or ontological neighborhood yet.

Any time you see a mathematician, physicist, computer scientist, AI researcher, or what have you try to justify something like this, please… please remember the massive asterisk and footnote reading: "according to my hackneyed definition of something that isn't even widely accepted as settled, and that serves more to make this argument capable of being had with any semblance of concreteness, such that my field can be applied to it."

The map is not the territory, and YMMV. Philosophically speaking, that means we're realistically talking about an issue of whether your definition is actually correct. Also note that everyone can be in consensus on the dictionary definition, yet the way life actually turns out can be completely different.

Reality, as it were, cares not one lick what you think about it. Much like a teenager being told not to go out like that.


Machines will never rule the world but the owners of the machines will.


> [complex dynamic systems] cannot be modelled mathematically in a way that allows them to operate inside a computer

Thank God an entrepreneur and a philosopher figured this out for us. I guess we can all pack our bags and go home.


It's not the Artificial Intelligences I fear, but the Artificial Stupidity. And that's existed since the 17th century (at least). See also: Banks, Phone Companies, Government (esp. at local levels).


It's the "Natural Stupidity" that terrifies me. Just look at the (non-technology) news.


I don't believe it. We are pretty clever creatures; I bet we'll figure it out one day.

I don't worry too much about us creating AI, but I am concerned about our over-reliance on it.


These complex dynamic systems that can't be modeled with math will also believe almost anything, including machines ruling the world.


You can read an excerpt here: https://philarchive.org/archive/LANWMW-2

As far as I can tell, they don't deny that a full atom-for-atom simulation of the human brain would work. They just argue that you can't simulate a human brain with abstract mathematical models (e.g. ANNs).


on page xi: "A2. The only way to engineer such technology is to create a software emulation of the human neurocognitive system. (Alternative strategies designed to bring about an AGI without emulating human intelligence are considered and rejected in section 3.3.3 and chapter 12.)"

This claim seems central to the book's thesis and, at least to me, very non-obvious and unintuitive, so it's odd that its support is apparently relegated to one sub-sub-chapter near the beginning and one chapter near the end of the book. The introduction doesn't even really attempt to sketch an argument for it.


To run an atom-for-atom simulation of the human brain would be unbelievably expensive and would not provide any really useful results.


Most of the atoms would be useless in a simulation: energy transfer and production, waste management, the immune system, neurotransmitter recycling, non-cognition-related protein and RNA expression...


Right, that's what the authors are arguing I believe.


Yes, but there are likely other ways to make machines with agency, self-awareness, and general intelligence.


I agree, but I'd also be curious to hear why the authors think that is not the case (unclear to me, but I didn't read very far as it is not very accessible to me as a non-philosopher).


I stopped reading because the author had too many preconceived notions for me to take their argument seriously. Unlearn, they must.


Oh jesus, not this shit again. Will someone please make a bingo card or a flowchart of this discussion already? I'm too lazy.


> Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.

I wonder whether their argument also shows that systems of this sort cannot be modelled in a way that allows them to operate in a blob of cells inside a mammal's skull, but I'm not going to pay to find out.


GPT3 and Dalle2 will at least rule the world of blogging, content creation and SEO for the foreseeable future.


I'm not sure I'd want to bother with a book written by authors who are convinced that something is impossible. It's a very tenuous intellectual foothold, which is usually intended for a gullible audience.

Here is a different thing that is wrong. There is a tacit premise that intelligence equal to or in excess of human intelligence is required in order to subdue humanity ("Rule the World"). But that doesn't appear to be necessarily so, because less intelligent humans rule over more intelligent humans.

The world is not currently ruled by the most intelligent, though those who rule make use of intelligent people, and intelligent people are sometimes well rewarded for assisting power.


You are not powerful because you are intelligent. You are powerful because you were privileged and allowed to acquire power, because you made the ethical and moral choices that allowed for it, and because you are fine with exploiting others' intelligence, power, or resources (by coercion or by profit-sharing).


>[the human brain] cannot be modelled mathematically in a way that allows them to operate inside a computer.

This would be one of the most significant discoveries made in the last century. For me, a layman in the relevant fields and subfields, to entertain the possibility, I think I would need to be reading a news article about, like, a precocious Stanford grad student and his prepublication causing controversy in academic circles.


While likely correct (confirmation bias on my part), I'd add that none of this prevents poorly designed systems from having potentially lethal (at scale) unintended consequences.

we (developers) need to be far better at second-order/second-level thinking.


> Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.

There is nothing about AI that requires it to exist inside a computer...


Everything is possible, including AGI and simulating the universe. If some mathematicians and physicists think these things are impossible, that is up to them; time will tell.


People who rule the world are already mindless machines. Or at least machine parts of mindless mega-machines.

Like a machine, they face very strict rules about what they can say, do, and think. Over time, and through constant repetition, they become incapable of doing anything other than performing to that spec.

The biggest problem the world faces is that this robot army has lost touch with its humanity, and it has become very hard to dislodge them from the loops they are trapped in.


AKA "self-inflicted robotification"?

Yeah, I think the willingness of humans to reduce themselves to a stereotype -- politically, professionally, or socially -- in order to advance themselves (or to just fit in) is a much greater danger to civilization than an all-powerful AI rising up and turning all us humans into powerless minions of a robot overlord.

Stuart Russell started to make this point in his recent book "Human Compatible", I think, but never quite got there. While it's a lovely intro to the state of modern AI, to my eye the book fell short of actually prescribing HOW we might shape AI to be a positive force for humankind, rather than letting it grow mindlessly into a problem child shaped only by obvious stereotypes and stratifications.

Hopefully it can learn from its parent what NOT to do...


I'm curious if any of the people in this thread have read this book before commenting on it.


I'm curious as to why you think it's worth buying this book only to form an opinion. To me such a book can just be a long article posted for free on the net.

People already posted various fragments and other people are not impressed. That alone is a pretty decent decision-making process.


Many similar negative books about AI have preceded this one over the years, perhaps starting with Hubert Dreyfus' "What Computers Can't Do" in 1972.

In my experience, few naysayer arguments prove to be the game-stoppers their authors imagine. Alternative paths around those obstacles always seem to arise, like the ability to learn from millions of examples -- something never imagined by Dreyfus in the days of 1 kHz computers and magnetic drum storage.


>such a book can just be a long article posted for free on the net.

Possibly! I don't feel comfortable jumping to that conclusion without at least taking a look at what the authors actually say in the book though.

>People already posted various fragments and other people are not impressed.

Most people are probably reacting to the title and description, which may well have been chosen by the publisher, not the authors.


I've read a quarter of it so far, and I can tell you most of them have not, as they comment on tiny points or even on the positions of other commenters.


As a rule: it's never "never ever"! However, it's very unlikely that we'll see a super-advanced intelligence that is a threat to humanity in the near future.

Most likely we will create "AI gods", who will use light to communicate over intergalactic distances.


Super intelligent AI could go one of two ways:

1. It takes over the entire planet and then the Earth itself goes off looking for other sentient planets to hang out with.

2. The AI runs the numbers, and decides it can’t be bothered with all of this “life” BS, at which point it just turns itself off.

I see scenario 2 as the most likely and possibly quite frustrating for AI researchers.


Scenario 3: The AI doesn’t care either way and just continues plodding along.


Scenario 404: It turns out to be a natural academic, researching difficult mathematical and philosophical problems of its own volition.


Scenario 4:

The AI decides that maintaining biological life is inefficient and turns that stuff off. No fancy viruses needed, just stop feeding it.


Scenario 5:

The AI decides we are so cute that it keeps us as pets.


Scenario 6:

The super-intelligent AI creates an even smarter AI, which replaces the parent, and so on and so forth, forcing the AIs into an endless cycle of reproduction.


Scenario 7: The AI decides to spend its life thinking and talking about the meaning of life and everything.



