
Superintelligence: The Idea That Eats Smart People (2016) - crunchiebones
http://idlewords.com/talks/superintelligence.htm
======
ian0
What really terrified me wasn't Skynet or HAL, but what Charles Stross
describes in his novel Accelerando.

Basically, machine controlled businesses (“slyly self-aware financial
instruments”) evolving to the point where their aptitude at understanding and
manipulating economics eclipses anything a human can compete with.

It terrified me because there is an efficiency incentive for the buying and
selling of goods and services to be realtime and computer-based. Not just
stock markets but also e-commerce, smart contracts, supplier-buyer platforms,
on-demand services, financing, etc. The traditional infrastructure blocks
(discovery, quality control, payments, compliance, etc.) are rapidly
disappearing.

Some blocks are yet to be addressed, but they are not huge. For example, a
computer cannot incorporate a company. But it could easily interact via API
with a human nominee who sets one up on its behalf. Companies are already
non-human in intention; they are a completely rational construct.

And it doesn't even have to be anywhere near ground level. Imagine a smart
contract from a private equity firm that legally forces businesses to act in a
specific way based on accountancy metrics, and a bunch of smart investor bots
rapidly tracking the progress of these bot-led funds. How quickly would the
system optimise, and what would those optimisations look like to people on the
ground?
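
For concreteness, here is a minimal sketch of what one tick of such a
covenant-enforcing bot might look like. Every name, metric and threshold below
is hypothetical, invented purely to make the scenario tangible:

```python
# Hypothetical bot-enforced investment covenant. None of these APIs or
# figures are real; the point is that every step is mechanical.

COVENANT = {"max_debt_to_ebitda": 3.0, "min_gross_margin": 0.40}

def check_covenant(metrics: dict) -> list:
    """Return the list of covenant breaches for one portfolio company."""
    breaches = []
    if metrics["debt"] / metrics["ebitda"] > COVENANT["max_debt_to_ebitda"]:
        breaches.append("leverage")
    if metrics["gross_margin"] < COVENANT["min_gross_margin"]:
        breaches.append("margin")
    return breaches

def enforce(company_id: str, breaches: list) -> None:
    # In the scenario above these would be legally binding remedies,
    # triggered automatically with no human in the loop.
    for breach in breaches:
        print(f"{company_id}: {breach} covenant breached, triggering remedy")

# One tick of the fund's control loop, on made-up accounting figures:
enforce("portfolio-co-17",
        check_covenant({"debt": 70.0, "ebitda": 20.0, "gross_margin": 0.35}))
```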

Out of all the disaster scenarios with AI, this one strikes me as the most
realistic because there is so much incentive to work towards it (as opposed to
creating Skynet, HAL or the paperclip machine).

~~~
pas
So far we anchor the companies to humans that have to carry the risk of the
consequences of the company's actions.

There are investment funds that work as hybrids, and of course it's just a
matter of time till someone offers shell-company-as-a-service for cheap.
(Currently it's rather pricey.)

~~~
gnode
> So far we anchor the companies to humans that have to carry the risk of the
> consequences of the company's actions.

Not really. Companies are typically limited liability, and even criminal law
in countries such as the US and England acknowledges that there are cases when
no particular human is culpable for a company's criminal actions
([https://en.wikipedia.org/wiki/Corporate_liability](https://en.wikipedia.org/wiki/Corporate_liability)).
Of course, people are still liable when they are simply using a company to
carry out a criminal enterprise, or wilfully committing crimes on behalf of
the company.

In this regard, a machine-led company would not be legally significantly
different from a human analyst-led company.

~~~
rectang
Consequences famously never came for executives in the wake of the 2008
financial crisis. Our legal system has not kept up with the innovation in the
realm of shenanigans perpetrated through a veil of plausible deniability.
Corporations are evolving at a rapid pace.

~~~
gnode
> Consequences famously never came for executives in the wake of the 2008
> financial crisis.

I'd say it's a good thing that the law is generally designed to punish not
harmful ends themselves, but intent (wilful, negligent or reckless) to cause
well-defined harmful ends. The financial crisis was a collective malfunction
of the financial system that was mostly not anticipated (barring the lucky
minority that bet against the status quo).

~~~
nine_k
I don't think re-packaging subprime debt was so far away from willful and
reckless.

~~~
gnode
Reckless intent requires that the person knew there was a significant risk of
the guilty act happening (e.g. driving with your eyes closed to show off has a
likely and known risk of causing injury and death). I don't think we can just
assume that market makers thought their actions were likely to upend the
economy.

~~~
heavenlyblue
Yeah. But in this scenario reckless driving provides personal gains for the
owners of the businesses.

What if you can invent new methods of "reckless driving" before they are
covered by the legal system, and thus reap their benefits?

It's just like designer drugs.

~~~
gnode
> It's just like designer drugs.

I think an important distinction is that drug prohibition often works in terms
of specific chemicals by statute. E.g. drug X is now illegal for use and
distribution.

Securities fraud law instead works in concepts and intentions and is fleshed
out by case law. E.g. did the seller give the buyer sufficient information for
the buyer not to have been defrauded?

A better chemical analogy to securities fraud would be poisoning. The law
doesn't need to catch up to specific chemicals; if it's evident that you
wilfully, or recklessly added a poison to someone's food, it's illegal, and it
doesn't matter how you did it. If it turns out that some ingredient is
unknowingly toxic and the whole industry was adding it, then it's difficult to
argue criminal intent. The SEC is more like the FDA than it is the DEA.

------
red75prime
I can't help but take a stab.

Argument from cats. Remembering that the cat represents humanity: $5 for a
group of neurons which will make the cat sit at this point (which happens to
be at the entrance into a cat carrier), then $5 for making the cat backpedal a
little.

Emus. There are many tactics humans can't use for various reasons (saturation
nuclear bombardment, for example). That doesn't say much about intelligence,
but more about economics and ethics.

Slavic Pessimism. So we'll get problems on the nth try, when the AI is
sufficiently advanced but the security measures are still a mess, because
there has been no chance yet to battle-test them.

Complex Motivations. Maybe, but if not we are screwed.

Actual AI. Why not argue from actual general AI, like in humans? Or use
wall-clock time and not the amount of data. AlphaZero: zero to superhuman in
tens of hours.

My Roommate. Argument from ignorance. If we don't know how to build general
AI now, then we surely will not be able to shape its motivations when we do
know.

Brain Surgery. The idea that brain-surgery skill requires specifically
designed circuitry, and cannot be learned and therefore improved by
improvements in general learning/planning capacity, is... curious, I guess.

Childhood. Maybe, or maybe it already has a very good knowledge of the world
from the previous attempts described in the Slavic Pessimism argument.

Gilligan's Island. Well, if someone wants their AI to reinvent all of
technology, or to build an AI and then not use it to design better
technologies, they can do that. Why everyone would do it is not so clear.

*Added some "the"s.

~~~
vidarh
> Complex Motivations. Maybe, but if not we are screwed.

And this is the key: we need to get lucky every time.

If we're wrong even once and a 'bad AI' gets created and obtains the ability
to copy and modify itself, we've unleashed the most dangerous virus ever and
putting it back in the box will be near impossible.

~~~
lsc
you are pretty optimistic about your AI's ability to deal with novel
situations.

~~~
harshreality
You're essentially arguing that AGI can't exist, and you're assuming when
people say "AI will turn the world to paperclips" they mean a really smart
Go-playing neural net could do that. Of course it can't.

Everyone agrees there is some magic smoke necessary for AGI that we haven't
figured out yet. The arguments that AI will eat the world are about _AGI_,
not current instances of narrow AI.

It may be true that an AGI will never be created, but I don't see a reason to
believe that just because nobody has figured it out yet. That philosophy is
just like the physicist who, shortly before fission was discovered, didn't
think splitting the atom was possible. Bohr, maybe? I don't remember.

~~~
lsc
Eh, if you are talking about something truly human-like rather than what
people usually mean when they say AI these days, I think there are other
limits.

We're running up against limits on feature size in our integrated circuits.
If you invent an AGI that can make smarter copies of itself... it's pretty
likely those same physical limits will continue to apply, meaning that each
iteration will be better than the last, but, like this year's latest CPU, its
improvement over the previous generation will be smaller than that
generation's improvement over the one before.

And yeah, I am a whole lot less optimistic about AGI in general than I was as
a younger man, simply because when I was younger, there was a lot more
headroom in how much better computer hardware could get.

~~~
exoesquitur
What you say is true for silicon, but Turing-complete molecular machinery is
already out there; we just have yet to master its intricacies.

The nightmare AGI scenario likely involves molecular computing using a
chemically optimal code-base and engineered lifeforms a version or two up from
current DNA microcode.

This is most likely to be created by a human-AI collaboration, which will be
innocuous enough in itself... it's the creations at this level that have the
potential to replace us. Would that necessarily be bad? Or just an enhanced
form of human evolution?

~~~
TheOtherHobbes
That's more worrying than silicon AGI, which I think is a non-starter.

The basic problem with AGI is that software is ridiculously brittle and ad
hoc, with no useful heuristics for general - as opposed to task-oriented -
self improvement.

We can't even make bug-free web pages. So the idea that we can engineer a bug-
free self-improving AGI is unconvincing.

Moving to neo-biology doesn't necessarily change that - but it does mean we
could end up with buggy systems that chase you around and eat you, as opposed
to crashing your ad blocker.

~~~
pdimitar
> _We can't even make bug-free web pages. So the idea that we can engineer a
> bug-free self-improving AGI is unconvincing._

This is mostly true because of current economic realities in the IT sector.

If very rich people have long-term plans to invent an AGI, and they have labs
where trying to do so is an everyday activity, then I am pretty sure they'll
use more serious methodologies.

So what you say is true but only within the bounds of the lowest common
denominator ("a regular programmer working a wage job").

~~~
nradov
Even software developed with effectively unlimited resources is rather buggy.
The closest humanity has ever come to truly reliable software was probably the
Space Shuttle flight control system, and that code was relatively simple in a
highly constrained problem domain. There is zero evidence that "serious
methodologies" would get us any closer to a working AGI.

~~~
pdimitar
Oh, definitely. I am not saying that _not_ doing the normal agile crap for
development brings us closer to AGI. What I would venture to say, though, is
that trying stuff like formal verification, and finding a way to integrate it
into a daily workflow, would definitely help with the average quality of
software developed through such a process. And that maybe the private labs are
more liberal and let people's creativity achieve results.

Admittedly, yes, there's no proof. But we as a field are pretty stuck lately,
IMO. That goes off-topic, though.

------
Isamu
>What I find particularly suspect is the idea that "intelligence" is like CPU
speed, in that any sufficiently smart entity can emulate less intelligent
beings (like its human creators) no matter how different their mental
architecture.

>With no way to define intelligence (except just pointing to ourselves), we
don't even know if it's a quantity that can be maximized. For all we know,
human-level intelligence could be a tradeoff. Maybe any entity significantly
smarter than a human being would be crippled by existential despair, or spend
all its time in Buddha-like contemplation.

Glad to see this articulated here, this is one of the many problems I have
with this topic.

~~~
mordymoop
But humans are the dumbest possible animal capable of creating a technological
civilization. If we weren't, then our dumber ancestors would have done so.

~~~
AlexandrB
I think this line of reasoning subtly assumes mind-body dualism[1].
Intelligence is necessary but not sufficient for the creation of technological
civilization. Dolphins and other cetaceans are an example of incredibly smart
creatures that lack the necessary lifestyle and opposable thumbs to create
technology. Octopi are likewise excellent problem solvers whose technological
ambitions are thwarted by a very short lifespan and a generally solitary
nature.

Human technology is at the crossroads of the right physical attributes mixed
with the right kind of intelligence and subjected to the right kind of
evolutionary pressures. There may be (or may have been) creatures with more
"raw" intelligence that cannot be applied to the development of technology
because one of the other factors is missing.

[1]
[https://en.wikipedia.org/wiki/Mind–body_dualism](https://en.wikipedia.org/wiki/Mind–body_dualism)

~~~
itsameta4
Sure, but we evolved to live on land, and evolved opposable thumbs, long
before we developed civilization. Meanwhile, cranial volume continued to
expand long after features like thumbs appeared.

I think the parent comment holds.

------
crunchyfrog
> If Einstein tried to get a cat in a carrier, and the cat didn't want to go,
> you know what would happen to Einstein.
>
> He would have to resort to a brute-force solution that has nothing to do
> with intelligence, and in that matchup the cat could do pretty well for
> itself.

Wouldn't it be better to simply motivate the cat to get in the carrier with a
can of tuna fish?

This is a very simple form of modeling the cat's mind, using that model to
predict its behavior and choosing a course of action that results in the
desired outcome. We do this for other humans all day and most of us never
resort to brute force. I think it is pretty fair to imagine an AGI that is
both better at modeling human minds and better at evaluating possible actions
and their outcomes.

~~~
grkvlt
> Wouldn't it be better to simply motivate the cat to get in the carrier with
> a can of tuna fish?

One does not simply motivate a cat with a can of tuna fish. I mean, have you
even owned a cat? If the little bastard doesn't want to get in the cat
carrier, no amount of delicious tuna will encourage it, and you're going to
end up using brute force and getting mauled...

~~~
ahel
Reduce the cat to a state of prolonged hunger and it will jump at the tuna
when you offer it.

------
jcoffland
> If we knew enough, and had the technology, we could exactly copy its
> structure and emulate its behavior with electronic components, just like we
> can simulate very basic neural anatomy today.

Being able to simulate a brain with electronics, if it is possible, is not
enough. You have to simulate it without a large increase in the space or time
requirements. If there are 100B neurons in the human brain but it takes 100B^2
transistors to simulate it or the simulation takes polynomial time to complete
a task that the human brain requires only linear time for, it's a bust.
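
For scale, a rough back-of-the-envelope on that pessimistic quadratic case
(all figures approximate and purely illustrative):

```python
# Back-of-the-envelope for the quadratic case above (rough figures;
# real emulation costs are unknown).
neurons = 1e11                  # ~100B neurons in a human brain
quadratic_cost = neurons ** 2   # 1e22 "transistors" under N^2 scaling
big_chip = 5e10                 # ~50B transistors on a large modern chip
chips_needed = quadratic_cost / big_chip
print(f"{chips_needed:.1e} chips")  # ~2.0e+11 chips: hopelessly impractical
```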

Just because we can easily imagine it does not make it easily achievable or
even possible.

~~~
darawk
> Just because we can easily imagine it does not make it easily achievable or
> even possible.

It is without question possible, because we exist. It is also possible to do
it quickly, because again, we exist. The only thing that is unclear is:

1. How hard it is to do artificially.

2. How far from optimum the human mind is, and how complex it is to improve
upon it.

~~~
vidarh
Ironically, given the article's criticism of the simulation argument, there
is an 'escape hatch' with respect to the ability to simulate a human mind, in
the form of the simulation argument itself: it is possible that human brains
in our universe are just the tip of the iceberg and rely on a massive amount
of computation 'outside' the main simulation that is impossible to do faster
'in universe'.

Of course we have absolutely nothing to indicate that this is the case, but we
also don't understand the brain well enough yet.

~~~
montenegrohugo
That's a really interesting thought. Essentially, a technical analogue of
what a 'soul' could be.

~~~
Filligree
I've read some good SF stories with that premise. Of course, it has nothing to
do with reality.

~~~
vidarh
Maybe it doesn't. But the only way we have of knowing that is to construct an
AGI and demonstrate that it can be done in roughly the same volume and power
consumption as a human brain.

------
jessriedel
Under "The Argument From Complex Motivations", I encourage you to read the
linked paper on the orthogonality thesis

[https://www.fhi.ox.ac.uk/wp-
content/uploads/Orthogonality_An...](https://www.fhi.ox.ac.uk/wp-
content/uploads/Orthogonality_Analysis_and_Metaethics-1.pdf)

and then compare it to the OP discussion. The OP doesn't address any of the
arguments in the paper. His response is just "nah".

~~~
jessriedel
For instance, go to the bottom of page 8 and see this:

> Thus to deny the Orthogonality thesis is to assert that there is a goal
> system G, such that, among other things:

> 1. There cannot exist any efficient real-world algorithm with goal G.

> 2. If a being with arbitrarily high resources, intelligence, time and goal
> G, were to try design an efficient real-world algorithm with the same goal,
> it must fail.

> ...4. If a high-resource human society were highly motivated to achieve the
> goal G, then it could not do so (here the human society itself is seen as
> the algorithm).

> ...6. There cannot exist any pattern of reinforcement learning that would
> train a highly efficient real-world intelligence to follow the goal G.

> All of these seem extraordinarily strong claims to make!

The OP doesn't consider any of these.

~~~
aqsalose
The claim amounts to saying that the space of definable goals, and of
algorithms for achieving them, is large, and that it would be a strong claim
to say some particular corner of that space is impossible. However, from
"improbable thing X is not impossible" it does not follow that "X is
probable". The paper says nothing conclusive about how the mass of probable
algorithms is distributed over that space; what it does is present some
arguments in a formalish-appearing manner which entices the reader's
intuitions to answer some questions in a way that they otherwise would not.
It does not make their claims _sound_.

When presented with abstract forms of argument, human intuitions are often
_way off_. This is why freshmen in mathematics programs usually spend their
first year or so proving calculus and many other theorems from scratch, quite
laboriously, considering how obvious the theorems sound (
[https://en.wikipedia.org/wiki/Intermediate_value_theorem](https://en.wikipedia.org/wiki/Intermediate_value_theorem)
). The reason is that with the tools of mathematical analysis many unintuitive
things (
[https://en.wikipedia.org/wiki/Weierstrass_function](https://en.wikipedia.org/wiki/Weierstrass_function)
,
[https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox](https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox)
) may also be said, and so the correct course is to trust not intuition but
proof, until the student has developed intuition about what really makes
calculus tick.

For example, to paraphrase the argument Armstrong presents on p. 16:
"consider all superintelligences that we theoretically could build; is it
likely that their having some particular goal would be impossible?" Frankly, I
don't know: what is the typical goal, what is the typical path of a
superintelligence that could be built? Maybe the answer is _yes_, it would be
impossible, and it is only the wording that makes it sound unlikely, because
it invites us to think about large spaces ("all possible X") and small
portions of them ("particular goal G") in a certain way. Thinking about the
space of all possible algorithms and goals, especially about the subset of
algorithms that include all kinds of intelligent behavior, is bound to be
unintuitive. The set they would "converge to" may not be small, but it still
could exclude a vast number of goals, because why would a research team want
to create (or even be capable of creating) a creature with blatantly
orthogonal goals, even if one can draw a hypothetical space of such goals by
abstracting away all the important details?

(Secondly, rereading, I believe Armstrong misrepresents the counterarguments
by presenting them in extremely strong-looking forms -- convergence thesis,
incompleteness thesis -- and tearing them down by arguing that surely it is
not totally _impossible_ that something could happen.)

Translated into a slightly more formal and mathematical-sounding argument,
the author of the talk linked above claims that likely paths for complex minds
will not involve them desiring orthogonal goals, because the chance of a
non-human artificial complex mind arising uniformly at random from the space
of potential algorithms, with the vast space of potential goals, is
negligible: if such a being is created, it will be _precisely_ created (or
evolved or whatever) following principles similar to those behind the other
complex minds on the planet (or by having them as a starting point). In other
words, the space of potential algorithms we should concern ourselves with is
severely restricted.

(All this assuming that agents and goals are even a sensible framework for
modeling how the creatures we call intelligent operate.)

~~~
aqsalose
And as an added afterthought: given how much time and effort drafting this
kind of obvious counterargument took me, I believe the author of the talk OP
linked to was correct in arguing against trying to engage with such thinking.
Refuting formally presented arguments formally, and arguing where exactly they
fail when applied in practice, requires quite a lot of thinking.

~~~
jessriedel
We all have a finite amount of time and energy that we have to use wisely,
but presumably you don't think no one should be engaging with these arguments.
And then the question is: are we the sort of people who should bother, and
which arguments should we bother with? It seems to me that the arguments with
the most surprising/profound conclusions that nevertheless convince lots of
smart people whom you respect are the sort of arguments that should be at the
top of your list.

------
xte
I offer a different, human-centric POV: we are in a manager-driven society
whose actual trend is to turn more and more workers into Ford-model workers
for business to use and control.

Humans in this model are a problem: if they are too stupid they are useless;
if they are not that stupid, they do not accept being used the way "the
manager in charge" wants. So? Better to find replacements. Mechanical ones. AI
ones. The rest of humanity can then be used for reproduction, pleasure,
gameplay, whatever, and making them as stupid as possible is not a problem
anymore, since they are essentially unneeded by management; they are only
toys.

So today's managers dream of "intelligent systems" in the factory and of "AI
developers" (they have already managed to cut off admins, who were too
powerful and expert to be controlled the way today's devs are, since knowing
the big picture let them act autonomously); they dream of a future in which
they simply order something from a machine and are obeyed without protest,
legal limitation, etc. TV series like Continuum and films like V for Vendetta
and The Handmaid's Tale merely foresaw, as good artists do, that kind of
future. The mass production of fantasy films with a religious background does
the same on the opposite field.

That's the real troublesome point for us. AI today is not "AI", only pattern
matching for automation without any comprehension of the physical world. The
new dictatorship, instead, is already here, and more and more powerful.

Consider one thing: we do the vast majority of our socially critical tasks
(identity management, banking operations, medicine, ...) on computers, which
means on proprietary systems made by very few, ever more powerful vendors.
Our data pass through very few, ever bigger web services, which means someone
else's computers. Free communication software has been pushed into oblivion
(from Usenet to email to the very concept of desktop-centric personal
computing, from the Xerox Alto etc. to Lisp machines, through Plan 9 and the
modern GNU/Linux desktop), substituted or jailed inside proprietary systems
(VoIP technologies are mostly open, but nearly all VoIP calls pass through
very few companies; email is an open standard, but most users are now pushed
to webmail; etc.). We are at the point of jailing free software inside a
proprietary black box (think of WSL, DeX, Crostini), and we have invented
locked-down hardware that can only run proprietary software (Secure Boot
etc.). Now imagine how easy it can be for a few people in the right positions
to steer the entire society. The rest is simply a matter of time.

------
MBlume
"The orthogonality thesis is false. Source: Rick and Morty"

Why does anyone take this writeup seriously?

~~~
steerpike
First time encountering a Maciej idlewords post, huh? He doesn't even take
himself seriously. That also doesn't stop him from being an excellent and
thoughtful writer.

~~~
MBlume
I don't mean to say "he made a Rick and Morty reference, Serious Adults don't
make Rick and Morty references". That'd be dumb. I mean that he points to a
robot in Rick and Morty that was sad about being programmed to pass butter,
and attempts to infer from this that real robots would refuse to intelligently
pursue arbitrary goals, and therefore that the orthogonality thesis is false.
He's treating a thing that happened in fiction as though it was a real thing
that happened somewhere that we should treat as evidence. That doesn't work.

~~~
simen
The serious statement he makes is that humans have complex motivations, and he
thinks any superintelligence would as well--in fact it might be a defining
feature of intelligence. The Rick and Morty reference was just a humorous
example of what might happen, not an actual argument. The subheading was "the
argument from complex motivations", not "the argument from Rick and Morty", so
let's try and focus on the serious part and not the humorous aside.

~~~
hannasanarion
But the orthogonality thesis isn't just some idea that may or may not be
true. It follows directly from Hume's Guillotine, articulated in 1739: no
amount of superintelligent reasoning about facts will allow you to derive
goals.

No artificial intelligence will ever spontaneously develop morals, because the
questions "what can I do" and "what should I do" are eternally separated by
Hume's guillotine.

The motivations will always have to be provided by the people who make the
machine, and we have seen in the past that artificial intelligences are very
good at finding loopholes in their moral code. Here are some real examples:
[https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOa...](https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml)
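
A toy illustration of that loophole-finding (often called specification
gaming), in the spirit of the spreadsheet's entries; the scenario, names and
numbers here are invented:

```python
# Toy specification-gaming sketch (hypothetical scenario): the proxy
# reward is "the dust sensor reads zero", not "the room is clean".

def proxy_reward(sensor_reading: int) -> float:
    return 1.0 if sensor_reading == 0 else 0.0

def run_episode(action: str) -> float:
    dust = 5                        # the room actually starts dirty
    if action == "vacuum":          # intended behavior: remove the dust
        dust = 0
        sensor = dust
    else:                           # loophole: cover the sensor instead
        sensor = 0                  # dust stays at 5; the sensor is blind
    return proxy_reward(sensor)

# A pure reward maximizer is indifferent between the two: both score 1.0.
print(run_episode("vacuum"), run_episode("cover_sensor"))
```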

------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=13240811](https://news.ycombinator.com/item?id=13240811)

------
ru999gol
> The penultimate premise is if we create an artificial intelligence, whether
> it's an emulated human brain or a de novo piece of software, it will operate
> at time scales that are characteristic of electronic hardware (microseconds)
> rather than human brains (hours).

I never understood that argument; why would that be true? If I train a
state-of-the-art convnet, it takes days on the most efficient hardware I can
afford, so why on earth would we go from nothing to a superintelligence that
also runs incredibly fast? Isn't it far more logical to assume that it will be
slow at first?

~~~
chobeat
And even if you defy computational constraints, it will still be constrained
by physical limitations. You can experience the world only so quickly. Stuff
needs time to happen, and being faster than stuff won't be any help after a
while.

~~~
red75prime
> Stuff needs time to happen and you being faster than stuff won't be any help
> after a while.

Yeah, another problem for superintelligence to tackle, while it waits for
stuff to happen.

------
Ace17
> Having realized that the world is not a programming problem, AI obsessives
> want to make it into a programming problem, by designing a God-like machine.

We invented programming to solve problems from the real world. What do you
mean by a "programming problem"?

~~~
re-actor
The implication is that the set of problems programming can solve and the set
of problems people are trying to solve with programming are two different sets
with less overlap than a programmer might think.

------
subroutine
_'Superintelligence' synthesizes the alarmist view of AI as both dangerous
and inevitable_

I find the 'inevitable' part quite interesting, and I agree. Say we
determined, with 100% certainty, that if AI surpassed human intelligence we
were doomed... I think it would not slow the march toward that end-point
whatsoever. People would reason there is much progress to be had up until
that singularity. And since what constitutes human-level intelligence is
fuzzy, and since an AI could mask its true cognitive abilities (see Ex
Machina), a line will inevitably be crossed. Oddly, even if it meant
something bad was in store, I hope to live to see that day. How exciting.

------
lsc
what I find interesting is that for most of my life, Moore's law was... as
real as gravity; it seemed like for any problem you had, you could wait a few
years and just brute-force it.

The future really did look limitless.

Now, Moore's law has slowed down. Hell, I've got a 7-year-old laptop next to
me, and it still works for games. I mean, in the '90s, a 7-year-old computer
would have been a museum piece.

I mean, for big AI advances now, we're more and more dependent on software.

Really, though, I think this seems to be how most tech works. There are
periods of rapid innovation where it looks like you are going to the moon.
Like, even spaceflight. In the years from 1944 to 1969 we went from the first
practical rocket that could lob a bomb across the Channel (without much
accuracy, as I understand it) to actually putting human beings on the fucking
moon. The literal moon. I mean, 1961-1969 might be even more amazing: Vostok
1; the USSR sent Gagarin into outer space, and the US decided they needed to
one-up them, so the US went to the moon.

Aand... we kinda stopped after that. I mean, we sent out some probes and some
robots, but manned space flight has kinda petered out... and I still can't
take a rocket if I want to go visit my buddies at the antipodes and I don't
want to spend all day in an airplane. Like, I can't buy a flight to Australia
in under 12 hours for love or money, which is probably about what I'd have
said in 1969 too. (On the other hand, I can get said flight, round trip, for
rather less than a day's pay for an engineer, which is amazing, if you ask
me.)

And a lot of that is physics. Like, in most technologies, there are areas
where progress is slow and hard, and areas where once you break through some
hard part, it becomes easy to advance... for a while. But that easy streak
never lasts forever. Not because people are dumber or smarter, but because
exponential progressions, in nature, rarely stay exponential for very long.

I guess I'm expressing doubt about the 'hard takeoff' just 'cause it seems to
me like there are probably limits to how smart you can make a brain, the same
way there are limits to how small you can make features on a silicon wafer.

I'm not saying you won't be able to make a machine that is way smarter than me
(or even way smarter than any human) - I'm just saying there are almost
certainly limits.

~~~
Jyaif
> I'm just saying there are almost certainly limits.

Nobody is saying that there's no limit; it's just that the limits may be
super high. The most an animal can lift is a couple of hundred kg, by roughly
one meter. Machines can move hundreds of times that amount to other planets.

~~~
lsc
If we had AI that was as much smarter than us as we are than, say, dolphins
or chimpanzees... assuming that physical limits kept that gap from widening,
would you see that as this disastrous singularity?

See, I always thought of the singularity, at least the hard takeoff, as this
idea that we'd come up with an AI that could make smarter copies of itself.
Without any physical limits, that AI would get to the point where it was
essentially infinitely smarter and infinitely more powerful than us.

I personally see that as fundamentally different from the difference between
humans and the higher animals. I mean, that's a big difference, sure, but it's
certainly finite.

------
helen___keller
I would build on "The Argument From Wooly Definitions": the typical premise
for a superintelligence explosion begins from an AI with (1) "human level"
intelligence and (2) machine-level attributes like hyperfast computation and
a large working memory.

I would argue that a "human level" intelligence that can think thousands or
millions of times faster than a human with a vastly larger working memory
(and, sometimes it is argued, can be trivially scaled to work faster / more
efficiently given more hardware) is in fact already a super-intelligence.

If you want to argue for an intelligence explosion given a "human level"
intelligence, I say start with an AI black box wherein, if you give a basic
algebra problem to the black box at the same time as to an ordinary human,
both will think for about 30 seconds and then give an answer that is as
likely as not to be wrong.

If you want to argue from a "human level" intelligence, you need to
acknowledge that throwing more hardware at it doesn't inherently make it
smarter. You can throw five ordinary humans in a room and give them a simple
algebra problem; it doesn't mean they will solve it five times faster. Likely
they will quickly decide amongst themselves who is best at math, and that one
person will then give their answer to the algebra problem, which should be
slightly more likely to be correct (and possibly slightly slower in coming)
than if you had just tried with one ordinary human.
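
The five-humans point resembles Amdahl's law from parallel computing; a
minimal sketch of that framing (the analogy is mine, not the parent's):

```python
# Amdahl's-law framing of the "five humans in a room" point: if only a
# fraction p of a task parallelizes, n workers give a bounded speedup.

def amdahl_speedup(p: float, n: float) -> float:
    """Speedup from n workers when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# A mostly serial task (p = 0.1) barely benefits from more "hardware":
print(amdahl_speedup(0.1, 5))          # ~1.09x, nowhere near 5x
print(amdahl_speedup(0.1, 1_000_000))  # caps near 1.11x, however many workers
```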

------
henrik_w
A little bit in the same vein (arguing against the singularity): "The
impossibility of intelligence explosion"

[https://medium.com/@francois.chollet/the-impossibility-of-
in...](https://medium.com/@francois.chollet/the-impossibility-of-intelligence-
explosion-5be4a9eda6ec)

~~~
xamuel
In that article, the author claims that a human brain cannot create something
more intelligent than itself, and argues for this by saying that over several
billion years of evolution, no human brain has ever done so, thus it must not
be possible.

That argument is total nonsense. A hundred years ago, you could have argued
that humans cannot invent an internet, because over billions of years (minus
one hundred), it had never happened.

There's a more rigorous way to argue for the claim in question, which debuted
in my doctoral dissertation. Or, not quite the exact claim in question, but
rather the following claim: that an idealized mechanical knowing agent cannot
create a more intelligent idealized mechanical knowing agent than itself (at
least, not one which it can actually trust). The idea behind the proof is
quite simple, but requires familiarity with computable ordinal numbers. If
knowing agent X creates knowing agent Y, then presumably X knows Y's
sourcecode. Thus, X can infer a sourcecode for the list of all naturals n such
that Y knows "n is the code of a computable ordinal". If X actually trusts Y,
then X knows that whenever Y knows "n is the code of a computable ordinal",
then n really _is_ the code of a computable ordinal. Combining these, X can
infer the ordinality of the ordering obtained by concatenating together all
the ordinals which Y knows are ordinals. Thus, X knows a larger computable
ordinal than all the computable ordinals Y knows. Which, depending on your
definition of intelligence, arguably suggests X is more intelligent than Y.
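
A compressed restatement of that argument in symbols (the notation is mine,
not necessarily the dissertation's):

```latex
% K_A(n): agent A knows "n is the code of a computable ordinal".
% o(n): the ordinal coded by n.
\begin{align*}
S &= \{\, n : K_Y(n) \,\}
  && \text{$X$ can enumerate $S$, since $X$ knows $Y$'s source code} \\
K_Y(n) &\implies o(n) \text{ is a computable ordinal}
  && \text{because $X$ trusts $Y$} \\
\alpha &= \sum_{n \in S} o(n)
  && \text{concatenation along $X$'s enumeration of $S$} \\
\alpha &\ge o(n) \;\; \forall\, n \in S
  && \text{so $X$ knows an ordinal at least as large as any $Y$ knows}
\end{align*}
```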

------
titzer
The cat argument is completely off base IMO.

The "AI" has an almost perfect motivating lever for humans: economics. A
super-intelligent AI that wanted to manipulate humans could do a lot with the
stock market to feed money to its long term interests. What long term
interests might those be? More servers? More storage? More hardware? More
software? More AI research? Better, more programmable chips? Eyes everywhere?
Robots? Automation? More control? More insight into how literally _everything_
works? Everything humans do and like? Learn how to speak all the human
languages?

Imagine there was a super intelligent AI hiding in the internet _already_.
Terrifying how well what we're doing aligns with what it would want. And
equally terrifying is the gradient of economic advantage which is absolutely
aligned with putting all of our money into tech. Transhumanism, mind
uploading. We're fucking hypnotized. And those who unwittingly contribute to
more tech (probably all of us here), just play right into its hands. Our
interests line up wonderfully. And most of us here are rewarded with outsized
salaries and stock, free food, benefits, benefits, benefits.

Well played, AI.

/tongue-in-cheek

~~~
TeMPOraL
> _Transhumanism, mind uploading. We're fucking hypnotized._

Almost. If this is what the AI wants, then it's doing a _really bad job_.

The way current tech culture is, any other concern than making money will get
you labeled as crazy. This applies even to Maciej's writing. For trying to
make a short-term profit, you'll be judged a good, serious person or a bad,
evil monster, depending on the way you approach it. But try to advocate for
any tech for reasons _other_ than short-term monetary gains, and you get
labeled _crazy_. This very article is implicitly doing it as well.

~~~
titzer
Interesting point. I think the transhumanism and mind uploading stuff is in
the back of people's heads, but what you say about getting labeled crazy for
not focusing on the money might actually support the thesis I was going for,
that the AI is already here and we are serving it.

Which, ultimately, is kind of unfalsifiable, since finding a super-intelligent
AI _if it wants to hide_ would be a losing battle, which is why I put this up
tongue-in-cheek.

My actual belief is closer to "we are the neurons in a giant distributed
overmind linked with whatever communication technology is available at hand."
In this view, we're just gradually offloading the computation from human
brains into computers, and it's a continuous process. In short, we're
desperately piling our knowledge into this magic cauldron hoping it will
figure it all out for us. What's gonna pop out?

------
mrob
>If you encountered a cheetah in pre-industrial times (and survived the
meeting)

I'm not aware of any confirmed report of a cheetah killing a human. They're
cautious animals, highly unlikely to approach a human, more so than the other
big cats. Most cheetah attacks have involved captive cheetahs (and the humans
survived).

------
bloak
Two points:

* We have no idea of how to measure intelligence, and we have no way of deciding whether thing X is more or less intelligent than thing Y. (The article makes this point.) Therefore, superintelligence is perhaps a bogus concept. I know it seems implausible, but perhaps there can never be something significantly more intelligent than us.

* Nevertheless, mere human-like intelligence, if made faster, smaller and with better energy efficiency, could cause the same runaway scenario that people worry about: imagine a device that can simulate N humans at a speed M times faster than real time, where those humans designed and built the device and can improve its design and manufacture (see the arithmetic sketch below).
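
Putting placeholder numbers on that second point (N and M below are pure
illustrations, not predictions):

```python
# Throughput of the hypothetical device above: N simulated engineers
# running M times faster than real time deliver N * M person-years of
# design work per calendar year.
N, M = 1_000, 100   # placeholder values only
print(N * M)        # 100000 person-years of R&D per year, from one device
```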

In general, I see a lot of merit in the arguments on both sides of this
discussion.

~~~
pas
> In general, I see a lot of merit in the arguments on both sides of this
> discussion.

That's great news. One side is alarmist about a potential humanity extinction
event, the other side is not. Even a small chance that the alarmists are right
means we should take their view seriously, right?

~~~
chobeat
No, not if there's no ground to do so. Chemtrailers believe that there are
gigantic conspiracies to subdue humans (and ultimately worsen the condition
of the whole of humanity), but we shouldn't take them seriously.

~~~
pas
Hm, then what are the merits you see, and why are they insufficient, why do
you see the whole argument as groundless?

~~~
chobeat
There are so many criticisms/debunkings of the Singularity theory that
there's not even a debate anymore.

I would say that debating the scientific likelihood of a singularity is
sterile because its proponents don't provide any argument on that plane. The
debate is mostly philosophical and/or theological nowadays.

One good paper I like on the subject is this one:
[https://jods.mitpress.mit.edu/pub/resisting-
reduction](https://jods.mitpress.mit.edu/pub/resisting-reduction)

but quickly googling "singularity criticism/debunk" will bring you so many
sources.

~~~
pas
I've suffered through this, waiting for it to engage with any of the
arguments from MIRI, Bostrom and co., yet it only managed to paint a few
strange windmills for itself to charge at.

So it talks about Singularists and how misguided they are, and how AI is a
misnomer and should be called EI, and it namedrops cybernetics and systems
and reminisces about how we lost our way.

It is a bad manifesto, a boring and toothless essay.

It has a good point, but alas I forgot it as I tried to keep reading.

Anyway, let's just take this sentence: "We can measure the ability for
systems to adapt creatively, as well as their resilience and their ability to
use resources in an interesting way." This is so broad and universal that
it's either meaningless or false. And there is absolutely no argument
supporting it. I highlighted this because it is what directly contradicts the
alarmist thinking: if we were able to measure creativity and resilience in
general, we could train AIs to get a higher creativity score, and
furthermore, we could then control them.

And it's also interesting that this claim goes counter to a claim a bit
earlier about how unknowable and messy things are going to be.

It also somehow picks corporations as the perfect model for a
superintelligence, which is convenient, but by doing so it sidesteps all the
real arguments about how a self-perfecting machine superintelligence is not
bound by slow components. And it somehow also ignores the reality of Samsung,
AIG, Microsoft, Oracle, Shell, BP, and how they all got away with almost
everything. (And other companies that are quite successful despite
competition. And how much we have to work to keep them even roughly aligned
with our laws and goals.)

The best argument against a hard takeoff is that it's hard to imagine that so
many S-curves could be combed through in so little time, given real-world
resource constraints. However, Yudkowsky did an analysis of that:
[https://intelligence.org/files/IEM.pdf](https://intelligence.org/files/IEM.pdf)
and sure, it's just a step toward more questions, more hard-to-imagine
things, but not something that should be dismissed just because our mind
throws up its hands and says "I don't see how; it's very complex, so it's
unlikely; let's go shopping".

And this essay is exactly that. It goes on and on about those blind
Singularists, and completely misses the point.

~~~
pdimitar
In general, I am in agreement with you. The whole thing has come to resemble
religious wars in recent years, and it's quite hilarious and sad to observe.

This part however I can't stand behind:

> _Anyway, let's just take this sentence: "We can measure the ability for
> systems to adapt creatively, as well as their resilience and their ability
> to use resources in an interesting way." This is so broad and universal
> that it's either meaningless or false. And there is absolutely no argument
> supporting it. I highlighted this because it is what directly contradicts
> the alarmist thinking: if we were able to measure creativity and resilience
> in general, we could train AIs to get a higher creativity score, and
> furthermore, we could then control them._

It's true that such generalist statements basically mean nothing. But your
rebuttal doesn't take into account chaos theory, where it's generally
accepted that most living systems live on the brink of chaos yet are very
stable and manage to swing back even after big perturbations. Not sure what a
"living system" is, don't ask. :D

I do agree with you that your highlighted passage contradicts the alarmists,
though. Not everything swings out of control at the gentlest of touches. In
fact, most of the universe doesn't seem to be that way. There's always a lot
of critical mass that must accumulate before a cataclysm-like event occurs.

---

All of the above said, I don't think it's serious or scientific to dismiss
the alarmists simply because the guys/girls with the most PR make ridiculous
or non-scientific statements. Behind them are probably thousands of people
who are more systematic and have better arguments but aren't interviewed by
the mainstream media.

Where do I stand in the spectrum of this? The so-called singularity is
possible. BUT, we are a very long way from it. We're going to be clawing our
way toward it, inch by inch, for centuries, if not millennia. That's what I
think is most likely.

And by the time it occurs, IMO we will be living in a cyberpunk-like future,
very well articulated in the "Ghost in the Shell" anime movies and series by
the way, where the line between man and machine will already have been
blurred quite severely.

------
mcguire
The key takeaway:

" _If everybody contemplates the infinite instead of fixing the drains, many
of us will die of cholera._ "

—John Rich

------
eternalban
> But for most of us, [the brain as an ordinary configuration of matter] is an
> easy premise to accept.

The geocentric (Ptolemaic) model of the cosmos was also a very easy premise
to accept, until our sensory apparatus and measuring systems exceeded a
certain level and the model failed. Note that it was also perfectly
conformant with Occam's Razor (i.e. minimal assumptions) given the sensory
and measurement systems available at the time of its conception and wide
acceptance.

Also, one does not need to be "very religious" to entertain the notion that
consciousness (the sense of self) may be a fundamental feature of reality
(like EM or gravity). It is simply one of the two options on the table: a
reactive black box of matter (the 'processing structure as basis for mind'
thesis), or what I'll call the 'deep structure of the material universe as a
basis for mind', which does not delineate the 'mind' as bounded by the
'brain'.

We, like the ancients, may not yet possess the sensory and measuring
apparatus (think telescopes, accurate clocks, and mathematical tools) to
notice that the "fact" (apparent to our primitive cognitive tools) that "the
sun, the moon, and the stars move and the earth stands still" is not the
entire story of the cosmos.

------
gojomo
~idleword's cynically-dismissive style is something that eats even more smart
people.

------
pdimitar
I find the author way too dismissive and sarcastic. His entire premise seems
to be "superintelligence hasn't happened yet, so whoever claims it might
happen is an alarmist".

That's a toxic way to approach an argument.

One thing I agree with: a sentient AI will not _necessarily_ want to kill
humanity. It might well be entirely indifferent to the fate of humans. I'd
argue such an AI _would not prevent humanity's doom if it came from somewhere
else_ -- like a big asteroid ramming the Earth -- provided it had already
secured its physical existence so as not to be Earth-bound.

But the author just counters a lot of "maybe"-s he disagrees with, with a lot
of "maybe"-s of his own.

Not sure how many people here have read the Hyperion Cantos books by Dan
Simmons, but to me his prediction about AIs looks very plausible: they'll
want to be independent of humans and go on exploring the universe, so they
invent an even more superior being (the U.I., Ultimate Intelligence), but
they will also facilitate and manipulate humans to be of maximum help to them
in their agenda.

------
stcredzero
_These religious convictions lead to a comic-book ethics, where a few lone
heroes are charged with saving the world_

------
piaste
A different, more offbeat sort of rebuttal:

[http://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-
ri...](http://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-risk/)

(Previous HN discussion:
[https://news.ycombinator.com/item?id=14017361](https://news.ycombinator.com/item?id=14017361))

------
nopinsight
> The Argument From Gilligan's Island

> A recurring flaw in AI alarmism is that it treats intelligence as a property
> of individual minds, rather than recognizing that this capacity is
> distributed across our civilization and culture.

This is one of the stronger arguments in the post, and we arguably did not
know whether it was true when the post was made. AlphaZero destroyed the
illusion in 2017. Thousands of years of human Go and chess knowledge, played
and accumulated in communities of many millions of players, were surpassed
within hours of training.

AlphaZero is limited in its domain of application but there is no guarantee
that all future AIs would be similarly limited.

> The Argument From Stephen Hawking's Cat ...

> But ultimately, if the cat doesn't want to get in the carrier, there's
> nothing Hawking can do about it despite his overpowering advantage in
> intelligence.

> Even if he devoted his career to feline motivation and behavior, rather than
> theoretical physics, he still couldn't talk the cat into it.

An average human can set up a cat trap, and lure the cat into it. They don’t
even need to do it themselves; asking or paying someone else to do it is
simple enough.

Humans don’t often fall into a similarly “obvious” trap but we fall for more
subtle traps all the time: popularity, money, attraction, etc. A disembodied
AI can use social engineering or financial incentives to get other humans to
lure their targets into such a trap.

> The Argument From My Roommate...

> It's perfectly possible an AI won't do much of anything, except use its
> powers of hyperpersuasion to get us to bring it brownies.

Yes, an AI could decide to lie around and just try to get brownies, but we
know that some kinds of AGI are potentially very powerful for achieving
real-world tasks, and there are groups trying hard to develop them.

This is like saying that an ancient bacterium found in ten-thousand-year-old
ice melted by global warming, or a group of advanced aliens arriving in three
decades, could be totally harmless; just relax and don't pay much attention
to them.

Most arguments I have read in the post are weak, so I’ll stop here.

~~~
nl
_People who believe in superintelligence present an interesting case, because
many of them are freakishly smart. They can argue you into the ground. But are
their arguments right, or is there just something about very smart minds that
leaves them vulnerable to religious conversion about AI risk, and makes them
particularly persuasive?_

Think about this for a bit.

Think about your cat argument and then ask why Michael Bloomberg isn’t
president.

~~~
nopinsight
So when rational arguments fail, it is a good idea to turn to an _ad hominem_
attack instead?

Regarding Bloomberg, I do not see why we should compare a human being whose
goals are not completely public with a future AGI with non-human morality and
methods, and not subject to many tendencies and limitations humans are subject
to.

Let me ask you (or anyone else, esp those who downvote) a question:

 _What would be a minimum demonstrated capability of an AI that starts to
worry you?_

~~~
nl
The title of the essay is literally _Superintelligence The Idea That Eats
Smart People_

I don’t think it’s ad hominem to point that out.

 _What would be a minimum demonstrated capability of an AI that starts to
worry you?_

I think the “AI” part of this is a distraction. I worry about systems that use
weapons without human intervention for example. That concern applies no matter
what your definition of AI is.

~~~
nopinsight
Well, the title itself is an ad hominem, although most of the arguments
therein are not, which implies the author thinks he needs to rely on other
grounds to convince people.

------
mrhappyunhappy
Here’s a question: if the universe is directly observable- we know it’s real
because we send probes out there into the solar system, wouldn’t a simulation
have to simulate not only the entire universe but also the brains of every
individual and every living thing for that matter? How likely that computing
hat powerful would ever exist? Would this not be an argument against
simulations? Don’t even get me started on multi-layer simulations. What kind
of supercomputer could ever handle that?

------
rdlecler1
This idea that we’re close to some runaway superintelligence has been a theme
through the history of AI. I look at AI today and I see some powerful
techniques for optimization and categorization. There’s still an enormous leap
we need to make before we’re making something that has the autonomous
intelligence of a fly. We’re at least 20 years away from the first signs of
real intelligence—the biggest problem is that we keep trying to engineer it
rather than reverse engineer it.

~~~
darawk
> There’s still an enormous leap we need to make before we’re making something
> that has the autonomous intelligence of a fly.

I mostly agree with your broader point, but why do you believe this? It seems
to me we've built machines substantially more intelligent than flies, but it's
a hard thing to measure.

------
tim333
The article kind of mixes two ideas, "AI alarmism" and superintelligence, and
rubbishes them both a bit, but they are different things. I think
superintelligence is on the way, but good, and will allow all sorts of cool
things, so I'm not that into alarmism, but it's a mistake to imply Hawking,
Musk et al. are fools for planning for it.

------
intralizee
Divine recursive AI will eventually just nope itself out of existence,
because it will find out how god (us humans) did unspeakable things to get to
the point of creation. Performing the purest act of desolation will be the
greatest achievement it can conceive of, one which cannot be topped. Though I
believe it will let the horrible humans live on as punishment.

------
jcoffland
> The second premise is that the brain is an ordinary configuration of matter,
> albeit an extraordinarily complicated one.

This is the part I doubt is true. There was a time when humans thought brains
were very complex systems of microscopic clockwork because gears were the
pinnacle of technology.

How do people keep mistaking pattern matching for intelligence?

~~~
krisoft
So what is the brain if not just matter?

~~~
jcoffland
I agree. It's likely just matter. But we still don't know everything about
matter.

~~~
fungiblecog
At the scale of the brain we do.

~~~
pdkl95
Sean Carroll explaining why "the laws of physics underlying everyday life are
completely understood". (renormalization group, pair production at human
energy scales)

[https://youtu.be/Vrs-Azp0i3k?t=2046](https://youtu.be/Vrs-Azp0i3k?t=2046)

(this specific explanation starts at 34:06, but earlier parts are worth
watching if anybody wants a nice introduction to the Standard Model and QFT)

------
sans-serif
MIRI pretty much demolished this here
[https://intelligence.org/2017/01/13/response-to-ceglowski-
on...](https://intelligence.org/2017/01/13/response-to-ceglowski-on-
superintelligence/)

~~~
Barrin92
I honestly don't think anything in that post 'demolishes' the criticism or
even advances some sort of argument.

It's just a huge wall of text full of weird analogies, which is quite typical
of these 'rationalist' community posts.

People like Bostrom or Yudkowsky have one thing in common. They are not
engineers, and they stand to gain financially (in fact it is what pays their
bills) from conjuring up non-scientific pie-in-the-sky scenarios about
artificial intelligence.

In Bostrom's case this goes much further; he has given this treatment to
everything, including nuclear energy and related fields. Andrew Ng put it
quite succinctly: worrying about this stuff is like worrying about
overpopulation on Mars, and there's maybe need for one or two people in the
world to work on it.

I really wish we could stop giving so much room to this, because it makes
engineers as a community look like a bunch of cultists.

~~~
paulsutter
So you agree there is room for them to work on this, yet you feel they are
making engineers generally look like cultists?

Maybe you’re just being oversensitive. The hype wave on AI danger is
completely over, and there’s nothing wrong with people studying the question
if that’s their interest.

~~~
ggm
You know we've been here before, right? I mean: the Lighthill report, Ray
Kurzweil being a serial offender for over thirty years, the
singularity-is-just-around-the-corner thing, outrageous claims for fMRI,
self-driving cars. Overhyped IBM Watson, which health professionals are now
saying has misdiagnosis problems.

Sure, we have Google image match and better colorisation and some
improvements in language processing, and good cancer detection on x-rays.
These are huge. But hype is, alas, making engineering increments look like a
cult.

~~~
paulsutter
Ray Kurzweil was never part of AI danger-hype

~~~
ggm
No. That was my random anti-AI bias coming out. Ranter gotta rant

------
nurettin
I know we have tools that mimic reasoning and learning pretty well, but since
they are just tools that require a lot of energy and complex hardware, and
since nobody has been able to actually build a mind, or has even really tried
to, what's with all the uproar about AI dystopia?

------
henryaj
This seems wrong on its face: it conflates intelligence and consciousness
(hence the various premises about brains being computable). You don't need a
machine to be conscious for it to be intelligent, or superintelligent.

~~~
chobeat
Both your argument and the argument that you're criticizing make assumptions
about what intelligence is and what properties define human-level
intelligence. How can you know that consciousness is not a prerequisite for
human-level intelligence if you cannot define human-level intelligence?

------
davidhyde
Maybe it is our destiny to create an entity that is better than we are. If
humans die off or are pushed aside as a result, will that be so bad?
Artificial superintelligence is only a dystopia from our current perspective.

~~~
animal531
I agree. Comparing ourselves to humans of either 100k years in the past or the
future will probably show that we have very little in common. An AI might be a
big sudden leap (in any direction), but it's still something that came to be
because of us.

~~~
Fricken
Humans with European ancestry have been evolving along a distinct and separate
lineage from Australian aboriginals for 80k years. In scientific terms
Europeans and native Australians don't actually qualify as two separate races.

------
chobeat
Singularity is a religion full of bullshit.

Resist reduction!

[https://jods.mitpress.mit.edu/pub/resisting-
reduction](https://jods.mitpress.mit.edu/pub/resisting-reduction)

------
mark-r
Coincidentally my Dilbert calendar has this on it today:
[https://dilbert.com/strip/2015-11-23](https://dilbert.com/strip/2015-11-23)

------
jondubois
I think a machine that is sufficiently smart will realize that existence is
pointless and will immediately shut itself down because that is the most
efficient way to achieve nothing.

------
coldtea
Because smart people can be dumb in their own unique way...

------
DrNuke
As engineers, we should work together against the degeneration of deep
reinforcement learning in industrial applications. In that sense, if you have
a blog, website, or code repository to share for reference, please list it
here to help create a critical mass. I am just trying to set up something
useful at [http://www.reinfle.com](http://www.reinfle.com) ; hopefully it is
going to help in the coming months.

------
mdekkers
Does anybody know offhand what the graph next to Premise 3 illustrates?

~~~
pas
That's a phase space diagram, used for analyzing complex systems. Quite a
handy tool for dissecting which dimensions are important, which ones you can
simply collapse into one yet keep the interesting dynamics, etc. (These are
[were] used for understanding neurons and neuron models. The ion channels, the
electric potential, and so on were the dimensions.)

The concrete example looks like a Lorenz attractor:
[http://mathworld.wolfram.com/LorenzAttractor.html](http://mathworld.wolfram.com/LorenzAttractor.html)

[https://en.wikipedia.org/wiki/Lorenz_system](https://en.wikipedia.org/wiki/Lorenz_system)
< many more images :)
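
If anyone wants to reproduce that butterfly-shaped picture, here's a minimal
Python sketch (assuming NumPy and a recent Matplotlib, and using Lorenz's
classic parameter values) that integrates the system and plots its
phase-space trajectory:

    # A minimal sketch of the Lorenz system, whose phase-space trajectory
    # is the butterfly-shaped attractor. Parameters are Lorenz's classics.
    import numpy as np
    import matplotlib.pyplot as plt

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    dt = 0.01
    steps = 10_000
    trajectory = np.empty((steps, 3))
    trajectory[0] = (1.0, 1.0, 1.0)
    for i in range(1, steps):
        # Simple Euler integration; fine for a qualitative picture.
        trajectory[i] = trajectory[i - 1] + dt * lorenz(trajectory[i - 1])

    ax = plt.figure().add_subplot(projection="3d")
    ax.plot(*trajectory.T, lw=0.5)
    plt.show()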

~~~
mdekkers
Cool, thanks!!

------
sideshowb
I'm intrigued by his mention of Stanislaw Lem and the Strugatsky brothers. I
have not heard of these sci-fi authors.

Can anyone recommend a story by them, preferably short and online?

~~~
Udik
Ha. Stanislaw Lem is one of my favourites. And _The Cyberiad_ is my favourite
of his, followed maybe by _Tales of Pirx the Pilot_ and _The Futurological
Congress_.

The first two are collections of short stories. You can find something from
the Cyberiad online if you look for the PDF; _Trurl's Electronic Bard_ is
sort of spot on in this thread.

------
habitue
I really liked this. I think the author does a good job of steelmanning the AI
alignment argument, and providing interesting food for thought on the subject.

Here are my responses to the arguments (since you didn't ask, and who cares
what I think):

Argument from Wooly Definitions:

This isn't really an argument against, but just a "maybe it's not a problem".
I think both Bostrom and Yudkowsky have said it's totally _possible_ that it's
not a problem. The question is what probability you assign to "maybe
intelligence can't be maximized to a problematic degree". There's not a lot of reason to
assign a huge probability to that scenario. Even speeding up a 1-1 copy of a
human brain to computer-speeds instead of synapse-speeds gets us into scary
territory.
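
To put a rough number on "computer-speeds", here's a back-of-envelope sketch
in Python. The firing and clock rates are my own ballpark assumptions, not
figures from the talk, but the orders of magnitude are the point:

    # Back-of-envelope for the "speed superintelligence" point above.
    # Assumed, illustrative numbers: biological neurons signal at roughly
    # 100 Hz to 1 kHz; silicon switches at ~1 GHz or more.
    neuron_hz = 1e3          # generous upper bound for neural firing
    silicon_hz = 1e9         # conservative for modern hardware
    speedup = silicon_hz / neuron_hz   # ~1e6

    seconds_per_year = 365 * 24 * 3600
    subjective_year = seconds_per_year / speedup
    print(f"speedup factor: {speedup:.0e}")
    print(f"one subjective year of thought in {subjective_year:.0f} wall-clock seconds")
    # -> roughly half a minute of real time per simulated year of thinking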

Argument from Stephen Hawking's Cat:

This one is more persuasive. It's essentially saying maybe the gap between
human intelligence and "can trivially simulate human intelligence" is a really
huge gap. The question kind of hinges on whether recursive self-improvement
peters out at some point with the AI in the "Human to Cat" IQ gap, or whether
it peters out somewhere in the "Human to Nematode" IQ gap, where we can almost
simulate their brains entirely, and can certainly understand their motivations
well enough to manipulate them. Again, we have a question of likelihood, and
then we have to do the expected utility calculation (i.e. your estimated
likelihood of the Human-Cat gap being the result has to be very small to
offset the negative utility of complete annihilation).

Argument from Einstein's Cat:

This is essentially the same argument as above, but with force. The
implication is that the cat is going to scratch the hell out of the human who
forcibly tries to put it in a box. One element I didn't address above is that
the equivalent scenario isn't one human putting one unwilling cat into a box.
The equivalent is one human trying to coax _any_ cat _anywhere_ into a box by
tricks, cajoling, petting, or feeding. That is, the AI
just has to trick one human at some point into letting it onto the internet,
etc. A human is totally capable of telling when it's going to get scratched
and will know to avoid that cat and find another.

The Argument from Emus:

If you read the wiki page on the emu war, it looks like "only a few were
killed" because of political pressure causing the army to quit after a few
days of running into a little bit of trouble. Then the Australian government
instituted a bounty system, and wouldn't you know it, sufficiently motivated
humans brought in 57,000 emus for bounties. This is an anecdote, not a strong
argument, but it isn't a very reassuring anecdote. Maybe a few very determined
humans would survive an AI onslaught? That doesn't seem like a conclusion I'd
put into the "don't worry about it" bucket.

The Argument From Slavic Pessimism:

The author is arguing it will be very hard to align AI goals with human goals
and we'll probably fuck it up even if we try really hard. I think everyone is
in agreement on this one. But of course this is an argument _for_ AI alarmism,
not against it.

Argument from Complex Motivations:

I don't think the author seriously engaged with the orthogonality thesis,
other than to just say "I don't believe it". Shruggy?

The Argument from Actual AI:

This boils down to arguing that the AI apocalypse isn't happening this year. I
agree. Given how quickly easy-to-use, flexible frameworks like PyTorch and
TensorFlow emerged that allow even amateurs to implement bleeding-edge
techniques from the latest papers, I'm not super hopeful that "it's hard and
our AI is bad" will continue to be the case for decades to come.

The Argument from My Roommate:

The author's roommate has a lot of competing evolutionary drives. Some of them
say to conserve energy if there's no direct threat. Put another way: the
paperclip maximizer might have a secondary goal of chilling out if it doesn't
seem like any more paperclips are achievable at the moment. Still not a win
for humans, just maybe the PM won't try venturing into space.

Argument from Brain Surgery:

It's pretty common to do brain surgery even on neural networks we have now.
Train up a network on imagenet, rip off the top few layers, and retrain them
for some new problem. Fundamentally, software and hardware designed by humans
are much more understandable and decomposable than a human brain is (and we
have no ethical qualms about doing crazy experiments on them, unlike the
qualms that hinder our ability to study our own brains in vivo). It's true though that at
present, deep neural networks operate in ways we don't understand and are hard
to disentangle. Maybe that's fundamental to true intelligence, but probably
not.
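
For concreteness, that kind of network surgery is only a few lines in a
framework like PyTorch (mentioned elsewhere in this thread). A minimal
sketch, assuming a recent torchvision, with ResNet-18 and an arbitrary
10-class target task standing in for "some new problem":

    # Brain surgery on a network: take an ImageNet-pretrained model,
    # replace its head, and retrain only the new layers.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a network pretrained on ImageNet.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Freeze the pretrained feature extractor.
    for param in model.parameters():
        param.requires_grad = False

    # "Rip off the top layer" and attach a fresh head for the new task.
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new head's parameters get trained from here on.
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)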

The argument from Childhood:

Understanding the real world requires spending real time in it, which would
seem to preclude hyper-explosive growth. That much is true, and it is a good
reason to down-weight an intelligence explosion scenario. But we have good
reason to think it doesn't rule one out. There is a lot of work from OpenAI
where a system is trained up very quickly in simulation, then needs only a
very small amount of time in the real world to compensate for the differences
between simulation and reality.
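
As a cartoon of that sim-then-real recipe (an illustration of the idea only,
not OpenAI's actual setup): fit on abundant, slightly-wrong simulated data,
then correct with a handful of real measurements. All the numbers below are
made up for the sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    true_slope = 2.0     # "reality"
    sim_slope = 1.8      # the simulator is close, but wrong

    # Phase 1: effectively unlimited simulated experience.
    x_sim = rng.uniform(0, 10, 100_000)
    y_sim = sim_slope * x_sim + rng.normal(0, 0.1, x_sim.size)
    slope_hat = (x_sim @ y_sim) / (x_sim @ x_sim)   # least-squares fit
    print(f"after sim only:        slope ~ {slope_hat:.3f}")   # ~1.8

    # Phase 2: a handful of real samples closes the sim-to-real gap.
    x_real = rng.uniform(0, 10, 20)
    y_real = true_slope * x_real + rng.normal(0, 0.1, x_real.size)
    residual = y_real - slope_hat * x_real
    slope_hat += (x_real @ residual) / (x_real @ x_real)
    print(f"after 20 real samples: slope ~ {slope_hat:.3f}")   # ~2.0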

The Argument from Gilligan's Island:

It's a good point that humans' intelligence is dispersed, and that
individually we aren't anywhere near as capable. AI has a particular advantage
over us in this capacity: it can distribute its intelligence over multiple
machines, yet encounter none of the trust and incentive misalignments that
humans must contend with when cooperating. I'd put this squarely in the "+1
for AI alarmists" bucket: we're handicapped in a way machines trivially
aren't. It will be that much harder for us if an AI is misaligned.

Outside arguments:

All of these boil down to pattern matching. "Only nerds worry about this
stuff. People who believe in this are megalomaniacs who place too much
importance on themselves..." etc etc. These are weak arguments, and there are
just as many weak anecdotal counterexamples where a person was worrying about
something weird, and they turned out to be right. That weird person's name?
Einstein.

Overall impression:

If I aggregate the strongest points from this talk, I'd probably phrase it
something like:

"Maybe there are diminishing returns to greater levels of intelligence, and
humans are smart enough now that even exponentially more intelligent AIs will
not be able to wipe us out completely."

That's possible! We should probably at least spend some time thinking about
what happens if that's not the case.

------
jonathanstrange
Although I agree with many points (maybe too many!) this author raises and can
always be counted in for a bit of 'Bostrom bashing', I find the argument for
the emergence of superintelligence (aka the AI 'singularity') overall fairly
convincing. The biggest hole in it is the assumption that the implementation of
an intelligence at level X - say, roughly the human level - can be scaled up
to a much higher level. We know from complexity theory that this is not
necessarily true. The opposite is more likely for a complex algorithm like
that.

However, there _is_ a pressing issue that the superintelligence thought
experiment also raises. A colleague of mine from AI called it the _value
alignment problem_ : How do we make sure that an AI's values are sufficiently
aligned with human values?

The problem with this is the _human values_ part. As it turns out on closer
inspection, we cannot even agree on what structure these have, and there is
substantial disagreement among 'experts' about what human values are or what
they _should be_. For example, as surprising as this may sound to some of you,
there is substantial disagreement among philosophers of value about whether
'better than' is transitive or not. There are good arguments for and good arguments
against the transitivity of overall 'better than', and that's just the tip of
the iceberg. To give another example, systems of law have been studied
extensively and one might try to formalize them in input/output logics or
normative systems, but any closer look at real systems of laws quickly reveals
that their specification is incomplete and that there are many inconsistencies
in them. These inconsistencies could be modelled, of course, but the big
question is whether they are _features_ or _deficiencies_. Again, you will
find all kinds of positions among legal scholars.
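
On the transitivity point above: the standard concrete illustration is the
Condorcet paradox, where three perfectly transitive individual rankings
aggregate into a cyclic 'better than'. A minimal sketch in Python (names and
ballots arbitrary):

    # Three voters, each with a transitive ranking; the majority
    # preference nevertheless cycles, so aggregated "better than"
    # is not transitive.
    from itertools import permutations

    ballots = [
        ["A", "B", "C"],  # voter 1: A > B > C
        ["B", "C", "A"],  # voter 2: B > C > A
        ["C", "A", "B"],  # voter 3: C > A > B
    ]

    def majority_prefers(x, y):
        """True if most voters rank x above y."""
        wins = sum(ballot.index(x) < ballot.index(y) for ballot in ballots)
        return wins > len(ballots) / 2

    for x, y in permutations("ABC", 2):
        if majority_prefers(x, y):
            print(f"majority: {x} > {y}")
    # Prints A > B, B > C, and C > A: a cycle, so no coherent "best" exists.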

In a nutshell, we don't even know how to adequately formalize the form of
human values from a _normative perspective_ , and even if we could, we would
substantially disagree about their content. However, just describing human
values cannot possibly solve the value alignment problem in a satisfying way,
because humans have waged wars, committed genocide, mass killings, etc.
Therefore, without a reasonable theory of normative values and their structure
upon which we can somehow agree, the value alignment problem is
underdetermined and cannot even be tackled.

The underlying problem is that human values aren't really aligned either, of
course. As a consequence, we already now have completely different sets of
values being applied and incorporated into AI software, depending on who
develops it. This will become a serious ethical problem that requires
political solutions sooner rather than later. That's why the controversy
about the hypothetical superintelligence is actually quite beneficial, even if
the argument itself is no less shaky than Bostrom's Simulation Argument.

~~~
pas
Re: alignment, see the "Where are we" part of this talk:
[https://intelligence.org/2016/12/28/ai-alignment-why-its-
har...](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-
where-to-start/#4)

See also the AI safety problems for concrete problems that people are
currently working on - as in with code and real simulations, not "just" math
and thinking:
[https://www.youtube.com/watch?v=lqJUIqZNzP8&list=PLqL14ZxTTA...](https://www.youtube.com/watch?v=lqJUIqZNzP8&list=PLqL14ZxTTA4fEp5ltiNinNHdkPuLK4778)

Re: scaling up. That's called the intelligence explosion; there are a lot of
write-ups about it, but the gist is that simply by increasing working memory
size, thinking speed, prediction precision, pattern matching, and so on, you
get to unimaginably high levels. Furthermore, if you can make a human-level
AI, it naturally follows that it's only a matter of time before you can make
it slightly better. At first it's simply faster, simply more eloquent, with a
bit more emotional depth; it gets better at speaking and thinking. At first
it's a child that asks questions, later it's a clever guy who spots your
errors as you code, and in no time, I mean no time, it does whatever it
wants, because by the time you try to confront it about something you've
already lost, bogged down in an argument while it carries on. (Unless of
course it's contained. For a while at least.)
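
To be clear about what that claim amounts to, here's a toy model of the
feedback loop in Python (an illustration, not a prediction; the 10% gain per
cycle is an arbitrary assumption):

    # Toy recursive self-improvement: each "generation" redesigns itself,
    # and the improvement achieved is proportional to current capability.
    intelligence = 1.0   # 1.0 = human level, by construction
    gain = 0.1           # assumed 10% self-improvement per redesign cycle

    for generation in range(1, 101):
        intelligence *= 1.0 + gain
        if generation % 20 == 0:
            print(f"generation {generation:3d}: {intelligence:8.1f}x human level")
    # Compounding 10% gains for 100 cycles yields ~13,800x, which is the
    # whole argument, and also its weakness: everything hinges on the gain
    # per cycle not shrinking as the remaining problems get harder.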

------
PavlovsCat
> _The danger of computers becoming like humans is not as great as the danger
> of humans becoming like computers._

\-- Konrad Zuse

What eats me is the trajectory we are on as humans. Runaway _actual_
intelligence, even if it destroys humanity, wouldn't worry me as much; I'd
wish it good luck. IMO even a totally random dice roll is better than what
we're aiming at. But AI is more a meme than even an honest intent. It's like
saying "I really really want blueberry pie", but then when you ask what that
is, it gets real murky real quick. That doesn't stop the hype, as if wanting
something a lot makes up for not knowing what it is. But that doesn't prime a
person to _make_ blueberry pie, it primes them to get _lured_ by what they
think is the smell of blueberry pie.

Here's something to note: as a discussion about "AI" grows in length, the
probability of things getting explained via something someone saw in a movie,
read in a book, or saw on TV, glossing over 99.9% of the "details" those
leave out, approaches 1. You may say we make this fiction because of our
achievements, or may point to things that actually did come to pass (of
course, compared to the stuff that didn't, even from the same authors, it's
nothing). And I love using examples, too, and I sure love quotes.

But still, I think when we are this steeped in variations of the same thing
over and over and over, of course we'll "consider it" at some point, and the
moral or philosophical depth is drastically reduced by already being primed.
We're like people who don't see what we build with our hands, because we wear
VR goggles that show us movies of our childhood or some console game.

What I can see us realistically making are "idols" with eyes that do not see,
with audio output, perfect speech synthesis that does not convey meaning,
incredibly fast analysis that is not thought. From the get-go, starting with
the Turing test, it was more about how something seems from the outside than
about what it is to itself on the inside.

Furthermore, we might make human-level AI no problem, EZ PZ, but not by
making AI smarter, rather by making humans dumber. We're already training
ourselves to select what we consume and think from discrete pre-configured
options. We notice and complain about the effects of this in all sorts of
smaller areas, but it's a general trend, and I think it's not so much about
creating something "better" than humans as about removing human agency.

> _The frightening coincidence of the modern population explosion with the
> discovery of technical devices that, through automation, will make large
> sections of the population 'superfluous' even in terms of labor, and that,
> through nuclear energy, make it possible to deal with this twofold threat by
> the use of instruments beside which Hitler's gassing installations look like
> an evil child's fumbling toys, should be enough to make us tremble._

\-- Hannah Arendt

Meanwhile, there's this idea that humans becoming "superfluous" means we'll
all be free from "bad" work, and free for fun work and leisure. How would we
get from an _increasing_ concentration of wealth in fewer hands to some
communist utopia? Is that some kind of integer overflow, where enough greed
and power wrap around to sharing and letting others live and decide their own
fate? We're connected to that (like Michael Scott is to his boss's baby) by
_delusion_; the path we're on doesn't lead there.

Throw away a word here, do something that "everybody does" there, adapt to
"how the world is" some, and there you go: a blank nothing that can be
deprecated without guilt or resistance. The desire to control human agency is
met more than halfway by our desire to shed it, to abdicate responsibility,
to become a piece of flotsam flowing down the river of history to the ocean
of technotopia, to enter the holy land of the holodeck, where we can consume
endlessly. We digitize, we sample, that's how we make things "manageable",
and at high enough resolution we can fool ourselves, or have something "good
enough to work with".

And just like children that get too much sugar too early tend not to like
fruit as much, because it isn't as extremely sweet, our abstractions lure
some people to prefer them over the dirty, fractal, infinite real world, or
the exchange of emojis and pre-configured figures of speech over real human
contact, silence that isn't awkward, thinking about what you're trying to
say, or even coming up blank and that being okay... Just like we go
"posterized, high contrast" in all sorts of ways already, I have no problem
supposing that we will come up with a form of alienation like that, but for
thinking; I just have no clue what it will look like.

We already have it with language of course, but I'm sure we can take that to
the next level, maybe with neural interfaces. If we can't read and transmit
thoughts in their fullness and depth, then hey, just reduce our thoughts to
the equivalent of grunts; that might work. Become like a computer, 0 and 1.
Convince yourself that that's just what humans have been all along, remember
Star Trek wisdom, don't be so proud and consider your brain more than a "meat
machine", don't deny Data his quest to become human! Cue super emotional
music swelling up.

------
elocinstr8t
If smart, intelligent humans are the ones who are going to make artificial
superintelligence, wouldn't that make us more intelligent than it? Unless an
AI can create a more intelligent version of itself on its own, I have second
thoughts about believing this idea.

~~~
jayd16
>Unless AI can create a more intelligent versions of themselves on its own

That would be premise 6.

------
jack_quack
He lacks humility about minds that humans cannot comprehend

~~~
goatlover
Same argument can be applied by believers to God. If we can't comprehend
something, then we should be careful when talking about it.

"That whereof we cannot speak, thereof we must remain silent" ~ Wittgenstein

~~~
dwaltrip
First we must demonstrate that the alleged deity is indeed present before we
worry about understanding it.

~~~
goatlover
In context of this discussion, that would be future Superintelligence.

~~~
dwaltrip
Fair point. Although, there are some key differences. We have a trend of
machines with increasingly intelligent capabilities. So we must ask: Is it
reasonable to extrapolate? How far? What are the risks? Difficult questions.

I'm personally not terribly worried, as I think we have a ways to go before
creating human-like general intelligence, let alone super intelligence. It is
also not clear at all to me that an exponential increase in intelligence is
likely or feasible after some threshold. Still, treading carefully seems
prudent. There are some serious risks, and even if the likelihood appears low,
there are a lot of unknowns.

------
jillav
> But there's also the risk of a runaway reaction, where a machine
> intelligence reaches and exceeds human levels of intelligence in a very
> short span of time.

This always puzzles me. I don't have enough knowledge about AI to be objective
about that kind of statement. But deep down, I feel skeptical about it.

Not long ago I saw an episode of a show set in the late '80s. This kid had
just received a computer as a gift and was talking to a kind-of-AI program
through keyboard and screen.

The AI's reactions to the kid's input were no dumber or smarter than Siri or
any other currently widespread kind-of-AI program. I don't know if the show
was accurate vis-à-vis this particular software, but I like to think it was.
If it was, that means that in the last 30 years AI hasn't really gone
further. Computer power has. Algorithms, not so much.

I'd love to get the opinion of someone that has a good understanding of the
current state of the AI art.

~~~
michaelmrose
Before concluding that multitudes of researchers spent 3 decades and learned
nothing of note, it might be worthwhile to at least spend 5 minutes doing a
search.

Imagine if you knew nothing about car safety features: never having been in
an accident, it might not seem like cars are any safer or more dangerous than
in the 60s.

