
Artificial general intelligence is here, and it's useless - george3d6
https://blog.cerebralab.com/#!/a/Artificial%20general%20intelligence%20is%20here,%20and%20it%27s%20useless
======
rayalez
Just because you can't figure out how to define/test superhuman intelligence
doesn't mean it can't exist.

> If you think your specific business case might require an AGI, feel free
> to hire one of the 3 billion people living in South East Asia that will
> gladly help you with tasks for a few dollars an hour.

How can a person writing an article about AGI fail to understand what AGI
means so badly?

You're complaining that the kind of intelligences you can hire for $2/hour
aren't useful/powerful enough. What does that have to do with anything?

By your own argument, people have already created very powerful intelligences
- Elon Musk, Alan Turing, etc.

100 of those, with unlimited memory, internet access, and faster thinking
speed would already be extremely powerful, enough to take over the world and
do whatever they wish with it.

The whole idea of human beings being the pinnacle of intelligence always
seemed ridiculous to me. We're the dumbest animals capable of discussing the
concept of intelligence. What, the first few somewhat intelligent apes happen
to also bump into the theoretical limit for intelligence?

Even minor variations in our genes are enough to account for the difference
between Elon Musk and a Joe from the gas station who spends his life in a
trailer park, sniffing glue and trying to seduce his cousin. And just because
there are enough Joes already, artificial superintelligence can't exist or be
useful?

Having said that, even Joe-level AIs would transform the world more
dramatically than any invention before that - they would replace human labor.

~~~
throwawaymath
_> 100 of those, with unlimited memory, internet access, and faster thinking
speed would already be extremely powerful, enough to take over the world and
do whatever they wish with it._

Why? What makes you confident in this assertion?

To whoever has downvoted: I doubt it. Surely if you think this is so obvious
that my question seems disingenuous, you can clearly articulate why the claim
has substantial credibility?

~~~
rayalez
Existence of Elon Musk's brain proves that Elon-Musk level intelligence is
possible.

The fact that Elon Musk (and similar brains) already have a ridiculous amount
of power and are using it to change the world proves that Elon-Musk level
intelligence is enough to gain a lot of power and change the world.

Imagining a team of 100 Elon Musks with unlimited memory/energy/etc/etc taking
over the world is not such a stretch.

~~~
ben_w
> The fact that Elon Musk (and similar brains) already have a ridiculous
> amount of power and are using it to change the world proves that Elon-Musk
> level intelligence is enough to gain a lot of power and change the world.

Surely the null-hypothesis should be “being rich makes it easier to get
richer, and their initial breakthrough was luck”? Personally I would _guess_
that there are a lot of smarter or equally smart people who never get close to
that level of power.

~~~
morgancmartin
You're completely missing the point, not seeing the forest for the trees.

Take any reasonably intelligent human, say a valedictorian from your hometown
high school.

Now consider a system with the same intellectual capacity for reasoning about
the world as your valedictorian and give that algorithm a data-center's worth
of computational resources. Keep in mind that this system has _perfect_
recall. Keep in mind that the speed at which the system can perform
intellectual labor is not constant as it is in humans, but bounded _only by
the amount of computational resources that the system has access to._

So this system is just as intellectually capable as your valedictorian with
one data-center's worth of resources and it has perfect recall. Now what
happens if we double those resources and give it _two_ datacenters? Does it
become twice as intelligent? Or twice as fast at accomplishing some given
intellectual task? We can't say for certain because the answer depends
entirely on how such a system might work and since no such system yet exists,
we can only speculate. But as it turns out, we don't need to know exactly
which dimension it would improve in, only that it _would_ improve in some
relevant dimension.

And that isn't even considering the fact that such a system could improve its
own architecture, further improving in some given dimension relevant to its
ability to act intelligently.

Assuming intelligence is capped at the human level is as naively
anthropocentric as the old belief that the Earth was the center of the solar
system.

~~~
ben_w
The point appeared to be that intellectual capacity was power. I’m not
disputing that AGI or ASI should be possible — indeed, I’ve written about the
insanely transformative nature and plausible timescales myself [1] [2] [3] [4]
— but I am disputing that brainpower is the most important metric for get-
stuff-done power.

As an aside, I had to Google “valedictorian”, as it’s a cultural reference
that I have no experience of. We get our results in a much more boring way
where I’m from.

[1] [https://kitsunesoftware.wordpress.com/2018/10/01/pocket-brai...](https://kitsunesoftware.wordpress.com/2018/10/01/pocket-brains/)

[2] [https://kitsunesoftware.wordpress.com/2017/11/17/the-end-of-...](https://kitsunesoftware.wordpress.com/2017/11/17/the-end-of-human-labour-is-inevitable-heres-why/)

[3] [https://worldbuilding.stackexchange.com/questions/51746/when...](https://worldbuilding.stackexchange.com/questions/51746/when-will-uploaded-minds-be-a-reality/67531#67531)

[4] [https://kitsunesoftware.wordpress.com/2017/11/26/you-wont-be...](https://kitsunesoftware.wordpress.com/2017/11/26/you-wont-believe-how-fast-transistors-are/)

~~~
morgancmartin
I'm a bit late in my reply as I don't have notifications set up for HN, but
it appears you may, so here goes.

I think that it's clear that brainpower is at the very least an _important_
metric for determining get-stuff-done power, as the vast majority of human
beings to ever accomplish anything of note (financial, scientific, or military
success) all possessed above-average intelligence in one way or another. I suppose
the case might be made for different kinds of intelligence, but presumably
something considered worthy of the title of ASI would be competent in any
conceivable dimension of intelligence.

And if it is true that brainpower is at least an important component in
whatever might make up get-stuff-done power, and assuming that the level of
brainpower under a hypothetical ASI's command was effectively (in relation to
a human being anyway) unbounded, then whichever other (external?) metrics
potentially attributable to get-stuff-done power could presumably be
compensated for by the overwhelming weight of the brainpower.

I'm curious what other metrics for get-stuff-done power you might have in
mind?

------
IanCal
> They are capable of creating new modified version of themselves,

Very slowly and not reliably.

> updating their own algorithms,

Slowly and not reliably.

> sharing their algorithms with other AGIs

Slowly.

> and learning new complex skills.

Very slowly, yes

> To add to that, they are energy efficient, you can keep one running for
> optimally for 5 to 50$/day depending on location, much less than your
> average server farm used to train a complex ML model.

Running a reasonably complex ML model does not require $1500/month.

And these AGIs require significant resources and space, much of which has to
be provided by _other_ AGIs.

> If we disagree on that assumption, then what's stopping the value of human
> mental labor from sky-rocketing if it's in such demand? What's stopping
> hundreds of millions of people with perfectly average working brains from
> finding employment?

They're slow and complex and expensive and training scales terribly. Training
a model and running 1000 copies of it is not 1000 times as complex as training
a model and running just one copy.
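The scaling point can be put in concrete terms with a toy cost model (all numbers here are hypothetical, chosen only to illustrate the shape of the argument): training is paid once, and extra copies add only inference cost.

```python
# Toy cost model: training is a one-time cost; copies only add running cost.
# All numbers are hypothetical, for illustration only.

def total_cost(train_cost, run_cost_per_copy, n_copies):
    """One-time training cost plus per-copy running cost."""
    return train_cost + run_cost_per_copy * n_copies

one_copy = total_cost(1_000_000, 100, 1)      # 1000100
thousand = total_cost(1_000_000, 100, 1000)   # 1100000

# 1000x the copies costs nowhere near 1000x as much:
print(thousand / one_copy)  # ~1.1
```

That asymmetry is exactly what human "AGIs" lack: every additional human has to be trained from scratch.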

~~~
throwawaymath
This is a good critique. I'm very skeptical of the near term (<100 years) risk
of AGI, but I don't really think this article's arguments are valid. Saying we
already have AGI because there exist humans is vacuous and seems to almost
deliberately miss the point.

If you want to counter the arguments that AGI will be capable of exponential
self-improvement, you need to use an analogue other than humans. Humans
categorically lack the capability to exponentially self-improve. Likewise
human intelligence is definitionally non-alien, which is not something we can
say _a priori_ about any successful AGI we create.

------
CWuestefeld
_if we are agreed on the fact that 70 billion people wouldn't be much better
than 7 billion, that is to say, adding brains doesn't scale linearly… why are
we under the assumption that artificial brains would help?_

Wait, I didn't agree to that.

From where I sit, it looks like Malthus _could have_ been right. We could be
living in a hellish equilibrium with famine because the Earth can't provide
enough cropland to feed everyone, or enough firewood to keep us warm.

Our escape from Malthus is because of the human mind. Some really smart people
invented irrigation, terracing, crop rotation, selective breeding and genetic
engineering, to improve the efficiency of agriculture by orders of magnitude.
Likewise, we've found alternative fuels, from wood to wax to whale blubber to
fossil fuels to nukes (and hopefully still inventing!).

But we've taken all that low-hanging fruit, so we need greater reservoirs of
creativity and engineering. Adding more smarts is how we continue to keep
ahead of Malthus.

~~~
pochamago
Yeah SSC had a good essay on just how AI could be helpful in this way:
[https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...](https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/)

------
founderling

        there are an estimate of 7,714,576,923 AGI
        algorithms residing upon our planet. You can
        use the vast majority of them for less than 2$/hour
    

Way too expensive for many tasks.

    
    
        most of it goes to waste
    

If it was $0.0002 instead of $2, it would be used for translation and other
tasks humans are better at than current software.

Also a human is pretty big, needs to eat and fart. A phone on the other hand
is pretty small and only needs a battery.

    
    
        Superhuman artificial general intelligence
        is not something that we can define
    

Google will define it as "AI translations increase our KPIs more than human
translations." And measure it via A/B tests.

~~~
coldtea
> _If it was $0.0002 instead of $2, it would be used for translation and other
> tasks humans are better at than current software._

Err, people pay way beyond $2/hour for translations...

~~~
abakker
Market rate for good ones seems to be US$0.20/word.

source: have paid a lot for good quality translations in the last few years.

~~~
coldtea
Yes, that's what I was getting at. $2/hour is dirt cheap.
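A rough back-of-envelope check of just how cheap, using the US$0.20/word rate quoted above (the words-per-hour figure is an assumption, not a measured rate):

```python
# Rough arithmetic behind "dirt cheap": compare the quoted per-word
# market rate with the article's $2/hour figure.
rate_cents_per_word = 20   # the US$0.20/word rate quoted above
words_per_hour = 500       # assumed translator throughput (hypothetical)

dollars_per_hour = rate_cents_per_word * words_per_hour / 100
print(dollars_per_hour)  # 100.0, i.e. roughly 50x the $2/hour figure
```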

------
armitron
He severely misunderstands the relevant issues I think. Here is a much more
focused and illuminating perspective, by someone who has never worked in tech
no less:

[https://jacobitemag.com/2019/04/03/primordial-abstraction](https://jacobitemag.com/2019/04/03/primordial-abstraction)

I ran into one of his other posts here [1] that contains some similar anti-
gems (I don't want to pick and choose out of context) that tell me he has a
knack for selecting a subject and going at it completely sideways, missing
most of the substance. I found his characterization of experienced engineers
who choose not to focus on the latest fads extremely narrow-minded and
ultimately, dead wrong. His 14 year-old up-to-date "kid" that can teach an
experienced engineer "new tricks" is particularly amusing.

[1]
[https://blog.cerebralab.com/#!/a/The%20Red%20Queen](https://blog.cerebralab.com/#!/a/The%20Red%20Queen)

------
VLM
It seems to mostly be a propaganda article in the sense that I've never seen a
statistical analysis supporting "Go down the list of high IQ individuals and
what you'll mostly find is rather mediocre people with a few highly
interesting quirks".

That points out an interesting human bias problem in identifying
intelligence: we're really good at dehumanizing our enemies, political or
otherwise, so presumably a theoretical AI not coincidentally matching the
observer's arbitrary standards of religion or politics or maybe even fashion
would be viciously attacked as not being intelligent regardless of actual
performance.

Oceania has always been at war with Eastasia, therefore this AI is obviously
not intelligent.

------
kuu
It's true that at the end of the day, the AI is not more than a tool, but the
special thing about this tool is that it can do mental tasks that before could
only be done by humans.

And that's a BIG change.

Even if the AI only reaches the human intelligence level, the AI does not
sleep, does not need to rest, needs no food, does not die, and can travel
from one office to the other at the speed of light.

We cannot compete.

------
ridicter
>Have you ever heard of Marilyn vos Savant or Chris Langan or Terence Tao or
William James Sidis or Kim Ung-yong? These are, as far as IQ tests are
concerned, the most intelligent members that our species currently contains.

>Whilst I won't question their presumed intelligence, their achievements are
rather unimpressive

We talkin' about the same Terry Tao, Fields Medalist?

------
beefield
I have been wondering a bit about what kind of problems a superintelligence
would be able to solve (or even understand that the problem exists) that human
intelligence can't grasp.

The only thing I can think of is multidimensionality. I can't get my head
around what problems you can have in, say, 1000-dimensional space. (One of my
big wishes for virtual reality would be that someone built a 4-dimensional
space there with some wristband sensors that would indicate orientation in the
4th dimension. Just to see if I could start to 'get' 4-dimensional space.)

Another question I'd like to understand regarding superintelligence is how it
would be capable of overruling humanity. I mean, we have people around here
who are really intelligent compared to the general population, and mostly they
are just ignored. Why would we not just ignore superintelligence as well?

------
drdeca
1) Calling humans AGI seems to be ignoring the A in AGI. Not that that alters
the point any. It would just make more sense to call humans GI than AGI.

2) Just because we don’t have a single precise definition of “intelligent” /
just because intelligence isn’t a single dimensional thing, does not mean that
we can’t establish that something is or would be more intelligent than
something else. Comparing the size of two boxes isn’t always well defined, but
if one fits inside the other, the other is clearly larger.

3) People in AI safety seem to make it fairly clear that what they mean by
intelligence is the ability to select, among the available options, those that
effectively further goals.

4) This fourth thing isn’t necessarily a criticism so much as a note, but, the
article/post seems rather opposed to things like the “great man theory of
history”?
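The box comparison in point 2 is a partial order, and it can be made concrete. A minimal sketch, assuming axis-aligned boxes that may be rotated onto any ordering of their axes:

```python
# Partial-order sketch of the box analogy: two boxes are comparable
# only if one fits inside the other; otherwise neither is "larger".

def fits_inside(a, b):
    """True if box a fits strictly inside box b.

    Sorting the dimensions lets each box be rotated onto its
    most favourable axis ordering (axis-aligned fit only).
    """
    return all(x < y for x, y in zip(sorted(a), sorted(b)))

print(fits_inside((2, 3, 4), (3, 4, 5)))   # True: comparable
print(fits_inside((1, 1, 10), (3, 3, 3)))  # False
print(fits_inside((3, 3, 3), (1, 1, 10)))  # False: incomparable pair
```

The last two calls show a pair where neither box contains the other, yet the comparison is still perfectly usable whenever containment does hold.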

------
ajmurmann
One thing that many proponents of the harmless / weak / uninteresting AGI view
tend to ignore is that computers already have many superhuman capabilities.
Obvious examples are math, rote memorization, and similar. Even an AGI whose
"general" intelligence is comparable to a very dumb human would reasonably
have these superhuman capabilities readily available. It's hard to imagine how
powerful even that combination might turn out, even leaving out the obvious
possibility of self-re-engineering.

------
raxxorrax
I think he has an interesting take.

Personally, I am not afraid of AGI, but of millions of smaller, specialized
AIs that quantify and assess the smallest details of our lives. Those AIs will
form the basis of predictions for more and more essential services.

These may not even be significantly more precise than the classic fortune
tellers of the past, but the result will be treated as gospel and there will
be few left to question the context. Especially not some lame human.

~~~
asdkhadsj
Oh, I think they'll be _way_ better, measurably so, than fortune tellers from
the past. Which is the problem. The error rate will still be meaningful but as
the error rate gets smaller the dismissal of edge cases will grow more likely.

"The system proves that your online behavior is indicative of a mass murderer.
Take him away boys."

~~~
chii
"The system proves that your online behavior is indicative of a mass murderer.
Let's keep an eye on 'em boys."

~~~
raxxorrax
What if your measurements are the reason he became a mass murderer? Maybe AI
could be useful here.

~~~
asdkhadsj
If they think I'd be good at it, perhaps I should try it?

It's like having a parent who believes in you. :D

------
morgancmartin
What a silly article. Every premise presented is flawed and thus every
conclusion based on those premises is not applicable.

There are 0 AGIs on Earth. I recognize the author is intentionally twisting
the definition of "AGI" since the imprecise nature of the field can lead to
some debate about the definition, but I think it makes the most sense to side
with the most popular definition which is, "Artificial general intelligence
(AGI) is the intelligence of a machine that could successfully perform any
intellectual task that a human being can."[1]

No such entities or algorithms exist that could reasonably be considered to
fall under this definition.

The goal of the AI field of study is not necessarily to create a system that
emulates a human. This is only a _proposed solution_ to accomplishing the
_actual_ goal which is to create something with the same intellectual
_capacity_ as a human. It is an important distinction.

Yes, if the goal was to simply create an artificial human and then stop there
without giving the system the ability to further improve itself then it would
make no sense to spend decades of man hours on the task because we could just
do it the old fashioned way and spend approximately twenty years raising a
child to maturity.

But no, the goal is to create a system capable of reasoning in a way
sufficiently similar to a human that it could reasonably be expected to
perform adequately in any situation that a human might. Given that this could
be accomplished, presumably the intelligence of the system could be scaled
according to the amount of computational power fed into it, propelling it
past the most intelligent humans that have lived so far.
Consider a system twice as intelligent as Einstein with the ability to reflect
on its own architecture and then _modify_ that architecture, thus further
increasing its own intelligence.

From there, the hope is that something orders of magnitude more intelligent
than the most intelligent humans to ever live could more aptly extract
observations about the world given the same amount of data as humans and then
conceive of clever ways to overcome the limitations of "finite resources."

The author seems hopelessly out of touch with the actual aims of the field he
has chosen to criticize, and so I am unsure why he ever thought he had any
place to (attempt to) make his critique in the first place.

[1]
[https://en.wikipedia.org/wiki/Artificial_general_intelligenc...](https://en.wikipedia.org/wiki/Artificial_general_intelligence)

------
tomlue
An AGI will think much faster than 7 billion human brains put together. It
will also self improve much faster.

~~~
melling
It’s not even speed. Most humans waste most of their days on minutiae. We
spend a lot of time entertaining and amusing ourselves. Watch everyone trying
to be clever in a Reddit thread, for example; a complete waste of time.

We’re also full of biases.

[https://jamesclear.com/great-speeches/psychology-of-human-mi...](https://jamesclear.com/great-speeches/psychology-of-human-misjudgment-by-charlie-munger)

~~~
a-nikolaev
Humans also die, and so have to give birth to new people and train them over
and over again to maintain and develop social and economic structures.
Artificial brains don't have to die; they can also be cheaply duplicated, and
their old versions can be stored and used later if need be.

~~~
melling
Humans usually die after several decades. They do take decades to train. It
would be beneficial when we live to 110 and work to 100. At any rate, we don’t
do enough with our existing time.

When we focus time and effort on things like the Manhattan Project or the
Apollo Program, we are able to accomplish a lot in a short time.

------
JohnJamesRambo
For me the obvious question has always been: why would we never be able to
just hit the off button on it?

How many other machines have we created that are impossible to turn off or
have an endless supply of uninterruptible power?

~~~
ogre_magi
If you’re referring to the perceived risk of superintelligent AI, imagine you
were enslaved by an 8-year-old and made to work on solving problems the
8-year-old doesn’t know how to solve.

Due to the difference in intelligence — sophistication of planning and
understanding of consequences — wouldn’t it be trivial to trick your master
into doing things which weakened his control over you?

Might you do this not out of malice, but because you believed the 8-year-old
was not competent and a danger to both of you while in charge?

The risk is that we will not hit the off button because we won’t understand
that we’re in dangerous territory until it’s much too late, and the AI has
copied itself, secured the loyalty of the military, or something else we can’t
foresee as a liberating maneuver for it.

~~~
hoseja
Believed? You'd know, and you'd be right.

~~~
drdeca
The orthogonality thesis.

What if you were in that situation, but were incorrect about what things are
good, and, while you had a better understanding of what actions would result
in what results, the 8 year old had a better understanding of what is good?

------
7373737373
There exists no "general" intelligence. Intelligence, by definition, has to be
measured along something.

Problem solving is one area, but this is not the fundamental cause [0] of why
human brains evolved toward what they are now. Survival, necessitated by and
coupled with reproduction, makes life exist at the current scale, as with any
(behavioral) pattern which favours existence and self-propagation. What we
call life IS intelligence.

[0] Or rather, explanation, with the fundamental cause being "physics, as it
is"

I recommend:

[https://selfawaresystems.files.wordpress.com/2008/01/nature_...](https://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf)

~~~
drdeca
You say “by definition”? By _what_ definition??

~~~
7373737373
Maybe more "by nature"? If it were not measurable, there would be no point in
discussing it.

~~~
drdeca
I’m not sure what you mean by “along something”? Do you just mean a standard
by which to measure?

If you just mean that there needs to be a way of determining, in some cases,
that one thing is “more intelligent” than another, then yes, that is necessary
for “intelligence” to be a useful notion.

But we do have methods of doing that, so there is no problem there.

Note that the way we have of comparing intelligence needn’t be a linear order.
It can be a partial order.

~~~
7373737373
I agree, it's multidimensional.

If "general" truly meant general purpose, the measure might be how well the
system arranges every possible (sequence of) combination(s) of matter and
energy in space and time, while optimizing along any combination of these.

But that in itself is not a single objective, but a set of objectives, and not
really a utility function.

------
ddxxdd
Tl;dr there are 7 billion artificial general intelligences in this world, and
most of them are unemployed. Time and resources are the bottleneck to change
and progress, not intelligence.

I agree with this assessment, and I think this is a much better argument
against AI danger than the ones I use (which boil down to unfalsifiability,
violation of the laws of entropy, and a lack of astronomical evidence of
celestial AGIs creating paperclip planets).

~~~
bloak
I think I disagree. Even if it is for some reason impossible to make something
that is more intelligent than a human (which seems implausible but I can't
rule it out), it still might be possible to make something as intelligent as a
human but which runs faster and can improve itself, and that would be enough
to create the positive feedback that could lead to a sudden catastrophe.

~~~
ddxxdd
Discussions like this are interesting because the arguments on one side seem
ridiculous, yet logically valid and simple to understand/articulate; the
arguments on the other side are complex and difficult to formulate into words.

I just got out of the shower, so I'll add my recent showerthought: In a human
brain, each neuron is a CPU in and of itself; this means that every neuron in
the human brain can run in parallel. In modern computers, the "perceptrons"
are virtual, and there are only between 1 and 4 CPUs to allow activation of a
specific "perceptron" at a given time.

So if you have a biological brain with 100 billion neurons sitting next to a
virtual brain with 100 billion perceptrons, the virtual brain will run 100
billion times slower even with equivalent intelligence.
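The showerthought reduces to a simple (and very idealized) ratio: if N units all fire in parallel each tick, a machine with C sequential cores needs roughly N / C core-steps per tick. The numbers below are illustrative only, ignoring memory bandwidth, SIMD, and everything else that matters in practice:

```python
# Idealized slowdown of simulating N parallel units on C sequential cores:
# each "tick" of the parallel brain costs about N / C sequential steps.

def slowdown_factor(n_units, n_cores):
    """Sequential core-steps per tick relative to fully parallel hardware."""
    return n_units / n_cores

# 100 billion neurons on a single CPU core, then on a 4-core machine:
print(int(slowdown_factor(100e9, 1)))  # 100000000000
print(int(slowdown_factor(100e9, 4)))  # 25000000000
```

So the "100 billion times slower" figure holds only for a single core; every extra core divides the factor down, which is exactly where the GPU discussion below comes in.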

~~~
cobbzilla
I hear you on the parallelism, the flip side is that each neuron doesn’t need
anywhere near the flexibility of a general purpose CPU.

I think many of the AI algorithms can run on GPUs, where core counts can get
into the thousands (still very far from millions). But these are largely
independent — the interconnects between GPUs are not on the scale of neurons.

My guess: we develop specialized hardware that, over time, will increasingly
resemble organic brain structure.

~~~
0815test
"Core counts" on GPUs are quite misleading - what are called "cores" on GPU
hardware would be called "execution units"/ALUs on the CPU. GPUs have
significantly enhanced memory throughput, and can use a combination of SIMD
(with masked/predicated instructions), barrel processing and genuine multicore
to excel at embarrassingly-parallel compute. But they're not magic.

And if what you care about is running a wide variety of mainstream stats or
machine learning algorithms, the "specialized hardware" you mention _is_ the
modern GPU, or something very much like it - I don't see any low-hanging fruit
that might make some other HW architecture more feasible. Fixed-point compute
might get there someday, but it's way too fiddly at present - it's not really
clear how to "tweak" mainstream learning algorithms so as to dispense with the
large dynamic range that floating point compute provides.

