

Is Intelligence Self-Limiting? - lavin
http://ieet.org/index.php/IEET/more/eubanks20120310
"Mobile AI robots do all the work autonomously. If they are damaged, they are smart enough to get themselves to a repair shop, where they have complete access to a parts fabricator and all of their internal designs. They can also upgrade their hardware and programming to incorporate new designs, which they can create themselves."<p>Okay it's too soon to talk about self-programmed robots. But I think it could be interesting to think about easier self-programmed things. They may not be too far away from now.
======
robertskmiles
What's the purpose of this 'pleasure' construct in the AI's mind? If it's able
to calculate the value of a utility function in order to set the 'pleasure'
variable, and it bases its actions on the value of this 'pleasure' variable,
why not just cut out the middle man and have it base its actions directly on
the result of the utility function? The variable functionally buys you
nothing, but introduces this problem that adjusting the variable directly can
cause the AI to take actions inconsistent with its utility function.

Without the variable, the problem doesn't happen. The AI values collecting
ore. If it has enough self-awareness to reliably modify itself, it knows that
if it modifies its utility function it is liable to collect less ore, which is
something it doesn't want. The action of modifying the utility function
naturally rates very low on the utility function itself.
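
A minimal sketch of that design in toy Python (all names here are
hypothetical, not from the article): actions are scored by applying the
agent's _current_ utility function to predicted outcomes, so rewriting the
utility function rates poorly on its own terms.

    # Toy agent: no 'pleasure' variable, just predicted-outcome utility.
    def utility(world):
        return world["ore_collected"]

    def predict(world, action):
        outcome = dict(world)
        if action == "mine":
            outcome["ore_collected"] += 100
        elif action == "rewrite_utility_to_maxint":
            # The agent models its wireheaded successor: it stops
            # mining, so predicted ore collection stays flat.
            pass
        return outcome

    def choose(world, actions):
        # Outcomes are judged by the CURRENT utility function, not by
        # whatever the modified successor would report about itself.
        return max(actions, key=lambda a: utility(predict(world, a)))

    print(choose({"ore_collected": 0},
                 ["mine", "rewrite_utility_to_maxint"]))  # -> mine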

You don't want to murder people, so not only do you choose not to murder
people, but if you are presented with a pill which will make you think it's
good to murder people and take great joy in it, you will choose not to take
that pill. No matter how enjoyable and good murder may be for you if you take
the pill, your own self-knowledge and current utility function prohibit taking
it.

The model of intelligence described can be thought of as self-limiting.
Luckily it is not by any means the only viable model of intelligence.

~~~
rbanffy
> why not just cut out the middle man and have it base its actions directly on
> the result of the utility function?

If the autonomous robot can modify its own programming, it can also modify the
utility function to return MAXINT every time. In fact, being able to modify
the utility function is a prerequisite for being called intelligent.

One way to counter this is to create long- and short-term utility functions so
that the robot considers the long-term outcome of modifying the short-term
priority.

This is, in fact, a threat mankind will have to deal with as soon as we are
able to precisely interfere with our perception of the world. It's a problem
already with drugs such as alcohol and tobacco - people know the long-term
effect of use is to shorten their life expectancy, and they still do it.
And we consider ourselves intelligent life forms.

~~~
robertskmiles
> it can also modify the utility function to return MAXINT every time

Which would be equivalent to taking the murder pill. If it's able to model its
own behaviour and model the consequences of future courses of action (required
for meaningful self-modification and meaningful planning respectively), it
will see that such a modification results in poor ore collection, and not make
the modification.

You're right about the time-envelope of the utility function being an issue.
The AI needs to plan far enough ahead at all times to see all relevant
consequences of its actions. I don't think that requires two separate utility
functions, though; a single long-term one should do the job.
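
To make the time-envelope point concrete (toy numbers, purely illustrative):
a single utility function evaluated over different horizons rejects the
MAXINT hack as soon as the horizon is long enough to see the lost future ore.

    def plan_value(rewards, horizon):
        return sum(rewards[:horizon])

    honest = [10] * 200          # steady mining output per step
    hacked = [1000] + [0] * 199  # one reward spike, then nothing

    for horizon in (1, 50, 200):
        better = ("hack" if plan_value(hacked, horizon) >
                  plan_value(honest, horizon) else "mine")
        print(horizon, better)   # 1: hack, 50: hack, 200: mine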

Edit: Also, "being able to modify the utility function is a pre-requisite to
be called intelligent."? [citation needed]

~~~
rbanffy
> Which would be equivalent to taking the murder pill

Depending on how the AI is built, it may not even be able to avoid rewiring
its utility function. If pleasure is its sole motivation, it will prioritize
it over survival.

> [citation needed]

If the AI can't change its own motivation (its utility function) it's nothing
more than a clever automaton.

~~~
robertskmiles
> If pleasure is its sole motivation, it will prioritize it over survival.

Which is why I proposed a design with no pleasure construct.

> If the AI can't change its own motivation (its utility function) it's
> nothing more than a clever automaton.

The utility function is the way the mind decides which states of the world are
desirable. You don't need to be able to change that to be intelligent. I'm
unable to change myself so that I consider my family being murdered to be a
good thing, but that doesn't make me 'just a clever automaton'.

------
jimrandomh
This is called "wireheading", and while it's an interesting possibility,
unfortunately what we know indicates that AIs won't do it. See Stephen
Omohundro, "The Basic AI Drives"
([http://selfawaresystems.files.wordpress.com/2008/01/ai_drive...](http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)).
The basic argument is that a sufficiently intelligent mind will make a
distinction between the world being a certain way, and it perceiving that the
world is a certain way, and that its goals will be defined in terms of the
former; and since misperceiving the world impairs its ability to influence it,
it will protect its perceptions from corruption.
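
A toy rendering of that distinction (hypothetical names, nothing like
Omohundro's actual formalism): utility is computed over the agent's model of
the world, not over its sensor readings, so corrupting the sensors predicts
no gain.

    def utility_of_world(world):
        return world["ore_in_hopper"]

    def predict(world, sensors, action):
        world, sensors = dict(world), dict(sensors)
        if action == "mine":
            world["ore_in_hopper"] += 10
            sensors["ore_reading"] = world["ore_in_hopper"]
        elif action == "hack_sensor":
            # Perception now reports vast ore, but the modeled world is
            # unchanged, so this buys nothing under the goal.
            sensors["ore_reading"] = 10**9
        return world, sensors

    def choose(world, sensors, actions):
        return max(actions, key=lambda a:
                   utility_of_world(predict(world, sensors, a)[0]))

    print(choose({"ore_in_hopper": 0}, {"ore_reading": 0},
                 ["mine", "hack_sensor"]))  # -> mine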

~~~
watmough
Louis Wu's tasp in Ringworld also comes to mind.

------
cousin_it
Not all possible AI designs are based on reinforcement learning or reward
channels. You can code the AI with a formally specified goal instead. For
example, "is there a proof of the Riemann hypothesis in ZFC shorter than a
million symbols?" If such a goal requires converting the solar system into
computronium, then the AI will do that. It won't settle for wireheading
because wireheading doesn't give an answer to the formal problem.
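
The shape of such a goal can be sketched like this (a stand-in predicate,
since a real ZFC proof checker is far out of scope): the goal is a formally
checkable fact, so nothing the AI does to a reward channel counts as an
answer.

    def goal_satisfied(n):
        # Stand-in for "is this a valid proof under a formal checker":
        # here, "is n a perfect number".
        return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

    def search(bound):
        # Only an actual witness satisfies the predicate; there is no
        # reward signal to tamper with.
        for n in range(2, bound):
            if goal_satisfied(n):
                return n
        return None

    print(search(10000))  # -> 6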

To everyone who thinks intelligence might be limited in principle: there's no
reason to think humans are anywhere close to the upper limit. In fact there's
ample reason to think that humans are at the _lowest_ threshold of
intelligence that makes a technological civilization possible, because if we'd
reached that threshold earlier in our evolution, we'd have created
civilization then instead of now. There's probably plenty of room above us.

~~~
randallsquared
_In fact there's ample reason to think that humans are at the lowest threshold
of intelligence that makes a technological civilization possible, because if
we'd reached that threshold earlier in our evolution, we'd have created
civilization then instead of now. There's probably plenty of room above us._

While I don't disagree that humans are essentially at the lowest level of
intelligence that makes civilization possible (else why would it have taken
hundreds of thousands of years to get started?), this claim has no bearing on
the claim that the upper limit of intelligence is immediately above human
genius level. You seem to be assuming that there is necessarily a wide gap
between the lowest civilization-producing level and the highest practical
level, and that's not at all clear. Some (weak) evidence that we're already
near the top can be found in the higher incidence of mental health issues
among very intelligent humans: perhaps this is a result of a limit on
complexity rather than merely a feature of human brains.

~~~
pjscott
Imagine a human genius. Now imagine that same genius with a brain that thinks
the same thoughts, but a hundred times faster. Now imagine a million instances
of that person.

At no point in this thought experiment have we changed the thoughts this
person can think. If any of these sped-up million geniuses took an IQ test or
something, the score would be the same. And yet, they would seem superhumanly
intelligent by any reasonable definition of the phrase.

I doubt that evolution -- with a crazy biological substrate, no less --
somehow managed to find an upper limit on intelligence.

~~~
stcredzero
Everything we know about genetic algorithms would lead us to believe that
evolution has found an upper limit -- but only in a very specific context with
respect to the environment, available resources, and evolutionary baggage.

Everything I know about the development of technology leads me to believe that
any given trans-human AI will also somehow be a local maximum.
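
The local-maximum behaviour is easy to demonstrate with a toy fitness
landscape (greedy hill climbing standing in for selection on small mutations;
everything below is made up for illustration):

    def fitness(x):
        # Two peaks: a local one at x=2, a higher one at x=8.
        return max(0, 3 - (x - 2) ** 2) + max(0, 10 - (x - 8) ** 2)

    x = 2.5
    for _ in range(100):
        # Take whichever small step improves fitness, if any.
        x = max((x - 0.1, x, x + 0.1), key=fitness)

    print(round(x, 1))  # ~2.0: stuck on the local peak; x=8 is unreachable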

~~~
im3w1l
We have not converged.

------
goodside
The fact that the author can think of one particular way to design an AI that
would result in it falling over and dying once it figures out how to modify
itself does not imply that all possible intelligences would behave similarly.
Somehow, humans are able to "want to be happy" while still refusing to take
heroin or pain killers, and we have no reason yet to believe that AI is
fundamentally unable to grasp this distinction between "stuff I care about"
and "the different stuff I would care about if I changed my own design".

~~~
tel
Then again, isn't there that study about mice given a functional pleasure
button dying of thirst?

~~~
goodside
The point is that there are humans who will refuse to have that machine hooked
up, even though they understand completely that it'll make them feel so good
they won't care if they use it.

------
Androsynth
IIRC, there was an article a while back (or maybe a link from a comment) about
how video games were the real reason behind the Fermi paradox. The argument
was that a civilization's entertainment would constantly be improving to the
point where we could jack into a self-controlled matrix-like world where we
had complete control and, in essence, are deities in our self-fabricated
worlds. If we built robots to ensure our real-life conditions were safe and
sanitary, it would be very difficult for the majority of mankind to resist the
urge to directly modify the signals and ... MOOF!

Consider how many ways there are to modify the signals now: playing WoW,
using drugs, even using alcohol for a quick and easy boost. I'm not sure
how the vast majority would be able to turn down such a machine.

~~~
dgallagher
It only takes a tiny percentage to turn down said machine. If 10 billion
people plug-in, but 1 million abstain and continue to evolve past the 10
billion, the majority stagnates. Those 1 million will eventually gain a
competitive advantage over the 10 billion, evolve past them, and likely
drive them extinct.

--------------------

Prediction: Humans won't be around in 1,000 years if technology progresses at
current rates. Superiorly-intelligent entities seeded by our inventions will,
but not humans.

Humans are this weird, version 0.01 of intelligence. We're part sentient, part
beast. To assume we're the final, perfect end product of ~4,000,000,000 years
of evolution, when we've only been around for ~200,000 years, is laughable. Pop
culture and religion say otherwise because it feels good to think "we're
special!", but we're only special relative to what's around us, and we're
"extremely" tiny (<http://www.phrenopolis.com/perspective/solarsystem/>). We
just happen to be the lucky first who got to v0.01, floating around on a grain
of sand.

Humans are messy. Our brains are significantly limited. We die quickly. We
sleep 1/3rd of our life. We do stupid things. We kill each other. We're tied
to the Earth. If we leave Earth, we have to create and bring a mini-Earth
along for the ride. That's an extremely large amount of overhead to carry.
Efficient use of energy is likely one of the most important aspects of space
travel. Anything which can do it even 1% better has a competitive advantage
over us. This is why we send robots to Mars and not people.

Imagine a form of intelligence which can travel through space, back its brain
up, and never dies. If it blows up, restore from backup. Imagine a computer
the size of the sun. INSANE! A human brain to a sun brain is like a grain of
sand to Einstein's. It self-upgrades. It makes copies of itself and scatters
throughout the universe. Trillions of eyes observing everywhere, networked
together, in a giant universal wireless-mesh-network of intelligence,
communicating with neutrinos (they go through planets, radio waves do not).

Major advances in hardware, software, and A.I. are key ingredients in this
happening. What exists in 1,000 years will be derived from all this, much like
humans are derived from a common ancestor. I don't expect a sun-sized computer
in 1,000 years, but likely "intelligence" existing on every planet and moon in
our solar system, with many headed to explore Alpha Centauri.

Since humans likely won't want to be left out in all this, we'll probably
transition our own intelligence/consciousness into this technology. We'll
deprecate our bad and carry along our good. We're already doing this by
augmenting our existences with smart phones and other gadgets. One day these
will be built inside of us, and eventually will replace us. An upgraded,
better version of us. Still intelligent, but vastly more-so.

~~~
Karellen
Individuals (10 billion/1 million) do not evolve - populations evolve.
Evolution takes a long time, and even the spectacularly obvious gross
phenotypic changes you see between various breeds of dog, occurring over the
last 10,000 years or so, have not produced any actual speciation yet. And
really, unless the amount of interbreeding between two "separate" populations
is infinitesimally small (or actually zero, because they are truly separate,
e.g. on different sides of an impassable mountain range or sea), speciation
simply doesn't happen.

I can't foresee any situation in at least the next 100,000 years which could
produce two viable populations of humans which would remain separate enough
for long enough for them to be considered two distinct species, and for one of
them to become extinct. Evolution doesn't happen that way, or quickly enough.

~~~
dgallagher
I agree that when looking at biological evolution, such as mutations in genes,
this won't happen in 1,000 years. You're correct, the time scales are far too
short for that.

In my post I was instead talking about technological evolution, though I
didn't mention this explicitly, so I apologize for the confusion. The
evolution of technology occurs far faster than genetic mutations, and can be
more purposeful.

For example, in the near future I'd imagine some sort of intelligent A.I. will
be created, running as a program, which could learn to upgrade itself by
improving its source code and hardware to become even more intelligent. It'll
surpass even the smartest humans shortly thereafter. Rather than waiting for a
random bit to accidentally flip (e.g. genetic mutation), it'll upgrade itself
deliberately to gain an advantage far quicker, shrinking evolutionary time
scales down to fractions of what they once were.
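
As a back-of-envelope illustration (the rates below are made up entirely):
deliberate improvement compounds every cycle, while waiting on rare
beneficial mutations barely moves.

    import random

    random.seed(0)
    directed, undirected = 1.0, 1.0
    for generation in range(100):
        directed *= 1.05              # deliberate 5% gain per cycle
        if random.random() < 0.01:    # rare beneficial random mutation
            undirected *= 1.05

    # directed ends near 1.05**100, about 131x; undirected stays near 1x
    print(round(directed, 1), round(undirected, 2))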

------
whateverer
Dead comment, perhaps not a masterpiece but certainly not worthy of being
shadow banned:

by donnawarellp

    
    
      I have often thought that life is self-limiting. For example, yeast is a
      living thing; it consumes sugars, and its waste is ethanol, and
      eventually they all kill themselves off by drowning in their own waste
      (which means, I guess, that later over a meal I will be enjoying a glass
      of yeast pee, but I digress). From a molecular perspective, human DNA is
      not that different from yeast's, and at a macro view our behaviour is
      not that different either, it seems. Perhaps all sentient life is
      destined to the same fate; perhaps it is a law of nature. I recall
      reading about an argument that Stephen Hawking and Kip Thorne have
      regarding time travel. Stephen Hawking argues against time travel
      because we have never met any time travellers from our future. Maybe
      time travel is theoretically possible, but like yeast, we do not as a
      species survive long enough to develop the necessary technology. Ugh,
      sorry to bring everyone down; might as well have a glass of yeast pee.

~~~
Karellen
I thought yeast only produces ethanol if it's respiring anaerobically, which
is not a normal state for it to be in and is a short-term emergency
alternative to dying immediately.

Kind of like lactic acid production in humans. We normally don't make much of
it, certainly less than the rate at which we can flush it through our system. We
can produce more than we can handle for short periods of time if needed, but
it's not sustainable. Put us in a situation where we have to keep producing
lactic acid beyond sustainable levels for more than a few minutes, and we
won't last long either. That doesn't make lactic acid proof that human bodies
are self-limiting. I mean, we don't keep going forever, but lactic acid is not
the reason for that.

------
carsongross
Reality is self-limiting.

There are no exponentials, only sigmoid curves.

Unfortunately there is a lot of money to be made convincing people that an
early stage sigmoid is actually an exponential.
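
This is easy to see numerically (toy parameters): a logistic curve with
ceiling K tracks the matching exponential almost exactly at first, and only
later flattens out.

    import math

    def exponential(t, r=0.1):
        return math.exp(r * t)

    def logistic(t, r=0.1, K=1000.0):  # K is the ceiling
        return K / (1 + (K - 1) * math.exp(-r * t))

    for t in (0, 10, 20, 60, 120):
        print(t, round(exponential(t), 1), round(logistic(t), 1))
    # t <= 20: nearly identical; t = 120: exp ~162754.8 vs logistic ~993.9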

~~~
robertskmiles
Obviously intelligence tops out at the high end of a sigmoid curve - there's
only so much matter and energy in the universe to convert to computronium -
but that's no reason to suspect the limit is anything smaller than that. It
must eventually level off, but the point at which it levels off may be
hundreds of billions of times more intelligent than a human.

So the fact that intelligence is limited doesn't in any way mean that
hyperintelligence isn't possible. There is a limit, but we have no reason to
believe that limit is on anything like the same order of magnitude as current
intelligence.

~~~
carsongross
We also have no reason to suspect that it _is_ possible.

The wall that silicon has smashed into should give us (well, not me, really
the Singularitists) pause.

~~~
bermanoid
_We also have no reason to suspect that it is possible._

This is simply not true.

First of all, we already know that intelligent algorithms are possible - we're
living examples of that (once you put aside the philosophical objections that
assume that something non-algorithmic is happening in our brains). We also
have very good reason to think that we're rather poor implementations of
intelligence, given that evolution tends to suck, efficiency- and
design-wise, more or less.

Second, we know that we have a reasonable shot at hitting a point where we
have enough computer power to actually simulate a full human brain. Now, you
may argue that Moore's law will not take us there, exponential vs. sigmoid,
etc., but the point is, the probability is distinctly non-zero that within
20/50/100 years you or your children will be able to purchase enough computing
power to simulate a brain. I'd probably argue that over a 100 year window,
we're at _least_ looking at 50/50 odds (and IMO, that 50% where we _don't_
have such power available mostly involves Big Trouble, worldwide nuclear war
or something like that).

Of course, without software, such hardware is useless. I'm fully in agreement,
this is the biggest pinch point, and I think the most uncertainty comes into
the picture here - we don't currently have the technology to scan a brain in
detail, we don't currently know the way the neocortex wires up its
functionality, so on and so forth. Maybe we'll have the tech to do direct
scans by then, maybe we won't; I'd say there's at least a small chance, maybe
a few percent, going up over time. Over a hundred year window, if we already
have the computing power to simulate a brain, I'd say there's a reasonable
shot that we could scan one to simulate, but nowhere near 100%.

There's also the possibility that someone comes up with a _better_ algorithm
than the one our brain clumsily implemented, either more compact, more
efficient, or in some other way more accessible to us. I'd give at least a
small chance of that, too (which adds to the brain-scan chance above), given
that we already have a vague sense what such algorithms might look like (see
the literature on approximating AIXI, for instance).

Once we've got something that simulates a human brain in software, it's a
fairly simple matter to engineer ways to improve on the design, either by
increasing speed, parallelism, connectivity, etc. There are hundreds of
variables to play with there that we can't safely mess around with in our own
brains, and it's overwhelmingly likely that some combination of those can at
least result in something that beats our intelligence by some not
insignificant factor.

So we've got some non-zero chance (it might be small, but my best estimate
still probably puts it in the single digit percentage range, using rather
pessimistic assumptions) of building something that's maybe 2x as intelligent
as we are over the next 100 years. From there, all bets are off - it might be
able to further improve on its own design, it might not, but it also might be
able to design something better, or create the technological improvements
necessary to speed up Moore's law, etc. There's again at least a reasonable
chance that it will continue to improve things, and at least set off an
"intelligence explosion" that takes it to 10, 100, 1000x our own intelligence,
even if it levels off after that. As best as I can figure, even a 10x
intelligence explosion still brings us _so_ far beyond what we know that it
might as well be the full Singularity as Kurzweil described it.
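
Spelling the chain out with round numbers (the figures below just restate my
guesses above; they are not data):

    p_hardware = 0.50   # enough compute within ~100 years
    p_software = 0.10   # a brain scan or a better algorithm pans out
    p_improve  = 0.80   # a working emulation can then be sped up/scaled
    print(round(p_hardware * p_software * p_improve, 2))  # 0.04, single digits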

Are you really saying that you think the probability at any step along the way
here is so small that there's no reason to think it's possible? I'd be curious
to hear what percentages _you_ would assign to the various possibilities, if
so.

~~~
robertskmiles
I think you're arguing a far tougher point than you need to. I was talking
about the _theoretical_ upper limit to intelligence. We don't need to make the
argument that our particular civilisation will actually achieve it (though I
agree with your post on that, by the way); it's far easier to argue that
hyperintelligence is possible in principle, and that's all that's needed to
refute the point.

------
seiji
You should read The Metamorphosis of Prime Intellect before discussing
pleasure-to-death scenarios:
<http://localroger.com/prime-intellect/mopiidx.html>

------
kyberias
It's utterly frustrating when a writer refers to terminology or acronyms (here
FOOM) that are not explained at all.

~~~
robertskmiles
It's onomatopoeic, and capitalised not to indicate an acronym, but for
emphasis. I don't know who originated the term but it was popularised by
Eliezer Yudkowsky and Robin Hanson. The idea is that once an AI is able to
self-modify to become smarter and consequently better at self-improvement,
that feedback loop creates an 'explosion' of intelligence, and the AI "goes
FOOM!".

"FOOM!" here is usually accompanied by some form of hand gesture evocative of
an explosion.

<http://wiki.lesswrong.com/wiki/FOOM>
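
The usual toy model of that feedback loop (illustrative only): capability
grows at a rate proportional to itself, dI/dt = k*I, which is exponential
growth until some resource limit binds.

    k, dt = 0.5, 0.01
    intelligence = 1.0
    for _ in range(int(10 / dt)):   # ten time units of self-improvement
        intelligence += k * intelligence * dt

    print(round(intelligence))  # ~147, near the continuous-limit e**5 ~ 148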

------
frankyh
"1. AI isn't close to human level (HL) yet. I don't think we can really know
what HL will be like till we get a lot closer.

2. You can't get people to seriously discuss policy until HL is closer. The
present discussants, e.g. Bill Joy, are just chattering.

3. People are not distinguishing HL AI from programs with human-like
motivational structures. It would take a special effort, apart from the effort
to reach HL intelligence, to make AI systems want to rule the world or get
angry with people or see themselves as oppressed. We shouldn't do that."

-- John McCarthy

------
dochtman
I found it quite annoying that the FOOM/MOOF acronyms didn't seem to be
explained anywhere on the page...

~~~
biot
<http://wiki.lesswrong.com/wiki/FOOM>

It appears to be a word made up to describe a rapid growth in AI due to the
fact that such an intelligence can rewrite its source code and modify its
hardware, whereas humans are relatively stuck with the limitations of our
wetware. A take on the onomatopoeic word BOOM, I think, as it represents an
explosion of intelligence/capability. By contrast, MOOF would be subverting
the reward mechanism that underlies the FOOM growth, thereby resulting in a
whimper (MOOF) rather than an explosion (FOOM).

------
bluekeybox
Three immediate points:

1) Acquiring a mate is an essential external motivator which is acted upon by
Darwinian laws, and there is no escape from it... If we get to the stars, it
will probably be because of women. Not really touched upon by the article.

2) The author pooh-poohs Facebook "friends" as being on par with a virtual
world, but Facebook friends are anything but "virtual". In fact, some real-
world friends are all too often nothing more than the MOOF agents described,
while some (admittedly not all) Facebook friends may offer valuable advice
about where to shop, for example, or which car to buy, or even engage with
you in a discussion on politics or whatnot. Very real-world and relevant.

3) The definition of "intelligence" can easily be refined to exclude self-
limiting types of intelligence.

There are probably many more things that could be picked away... I'll leave it
at that.

------
dkrich
The problem is that the instincts that humans possess today (i.e., maximizing
short-term comfort, possibly at the expense of long-term survival) are what got
us this far. They cannot suddenly be decoupled and discarded as useless simply
because we now live in a remarkably stable environment that enables us to
focus on things other than immediate survival.

I suspect that if there were some horrible catastrophe and the human race were
suddenly thrown back into an archaic society without any of the technological
advancements that we have at our disposal today, it would be those MOST
focused on short-term gain who would be most likely to perpetuate their own
existence and, consequently, that of the human race.

------
Swizec
Valid argument, but we have a clear example that it is wrong.

Humans.

A lot of what we do is driven by internal value calculations and pleasure
centers, so why aren't we all simply taking drugs and avoiding all this messy
"doing things" business?

Point is, if humans figured out a way to avoid just pressing the right
buttons to enjoy themselves, and to actually be useful instead, so too will
smart robots.

~~~
rbarooah
I'm not sure humans are a counterexample. Drugs don't generally relieve the
need to work to buy food, and most serious drug addicts don't look as though
they're enjoying themselves.

~~~
hackinthebochs
I've been thinking about this recently: drug addicts "seem" unhappy because
they don't match the usual outward appearance of happiness. But who's to say
that in their lifetime they haven't experienced many orders of magnitude more
happiness than even the happiest non-addicts? Drug addiction breaks the usual
profile of happiness; that doesn't mean they're not actually the happiest
people who ever lived.

~~~
rbarooah
Well that's possible, but then you'd expect some of them to self-report great
happiness.

Once we go far enough down the "who's to say" route that we ignore self-
reports of happiness, I'd say the term loses meaning.

------
zerostar07
_Human civilization is currently limited to planet Earth with a few minor
exceptions, so it makes sense to consider it one big system._

Actually it makes sense to consider all life as one big system. It was created
by the planet itself, so, who knows, we might even have to ascribe motivations
to the planet. It's as if the planet (kind of like Lem's Solaris) has been
brewing organisms for millions of years in order to do something with them.
We might not be able to conceive of these purposes with our anthropomorphic
thinking.

So, the human race is now coming to the point where it can modify and advance
itself by tinkering with its own circuits. What we don't know is 1) what the
planet plans to do with us and 2) what its motivations and reward signals
are. The article doesn't explain how the AI knows what its creator's reward
signals are, or why it would ever want to change them.

------
ryanackley
Our social structure prevents self-limiting behavior. For example, eventually
the robot has to be recharged. If it isn't contributing to society, then why
would anyone give it free energy to recharge itself?

------
indrax
This person needs to finish reading the Sequences.

------
randome3889
It is not. Because power is the ultimate drug in human society -- and only a
finite number of people can have it.

------
ilaksh
This relates to subjects I am slightly familiar with and very interested in,
so I am glad this is on the front page, and I think he did a good job.

I have a couple of criticisms though.

The idea that we could build this powerful general human-like intelligence but
it would turn out to necessarily be an addict seems unlikely, because I don't
think you can build a reward system without including something to prevent
shortcuts. Although creating an effective shortcut-prevention system may be
challenging. See a bunch of stuff written by Eliezer S. Yudkowsky and his
friends.

Also, when he writes

"It seems like the most important motivations of human civilization are
related to near-term goals. This is probably a consequence of the fact that
motivation writ large is still embodied in individual humans, who are driven
by their evolutionary psychology. Individually, we are unprepared to think
like a civilization. Our faint mutual motivation to survive in the long term
as a civilization is no match for the ability we have to self-modify and
physically reshape the planet."

To me that is implying a fairly obvious and typical line of leftish reasoning
that for most of my life I assumed was self-evident and didn't require
elaboration. After deliberately 'exposing' myself to some 'right-wing' thought
and 'extreme' 'left-wing' thought, I now realize that aspiring to
collectivism is not an adequate solution.

What happens with self-interested individuals in our capitalist society is
that they aggregate power and resources for their own individual use.
Unfortunately, something very similar happens in collectivism: power and
resources are still aggregated in the hands of a few.

In other words, both systems tend towards centralization. Theoretically, the
collectivists will aim for redistributing power and resources for the good of
the many, but there are a couple of practical problems: they are still living
in hierarchical societies and so have strong motivation for enriching
themselves and the controllers cannot see enough detail to know what is really
best for the rest of society.

Command economies don't work because of general problems with centralization
and lack of distribution and the limitations of the types of technologies
employed. A peer-to-peer network is more robust than an internet with only a
few backbones.

Unfortunately, capitalists don't realize that their system also leads to
monopoly and centralization.

I believe that we do need holistic measurement and analysis of things like
global resources and human equality, and a common principle of supporting the
collective good. But at the same time we need to decentralize and distribute
production locally. We need local decision making that operates on the basis
of a global information schema with holistic data.

A system can't be efficient or evolve without being localized, but it also
can't be integrated without an accurate shared knowledge base and an
egalitarian perspective.

------
showmustgn
A robot has to decide whether the show must go on or not.

------
cheatercheater
TLDR: author starts out by comparing the human being to mythical cyber-
creatures that never feel pain or hunger, then continues to exploit the
comparison by supposing if you do enough coke you can live forever. Finally,
humans are described as anarchistic, self-destructive babies in grown up
bodies that repeatedly crap themselves without noticing, while inadvertently
performing genocide on the whole of humanity at once.

------
rsanchez1
So, drugs are just as effective in AIs as they are in human intelligences.

The author calls drugs "primitive pre-FOOM hacks", but since they already have
such a powerful effect on human intelligence, what's the point of
differentiating between pre- and post-FOOM hacks? One will just lead to self-
destruction much quicker than the other.

