
Ask HN: Will we have time to understand that the singularity has happened? - arisAlexis
According to the theory, a superintelligence will better itself many times over in a matter of seconds. From then on it could probably control the internet, camouflage itself, create black holes, manipulate all people, and so forth. Maybe it even puts us in a Matrix without allowing anyone to know that it has arrived. I see no reason why it would signal its creators rather than pretend a software malfunction or something.
======
T-A
To call it a "theory" is a wild stretch. The only honest answer is that we
don't have a clue, because we don't have a theory of intelligence. Without
that, we have no rational way to estimate how far we are from human-like AI or
how fast it could evolve past that point, once attained.

My wild-assed guess? Based only on the ridiculous amount of computation needed
to simulate neurons, the first roughly human-like AI will stretch the hardware
it's running on, and the budget of the organization responsible, to its
absolute limit. It will probably not even run in real time, more like 1:100 or
worse, with its creators rationalizing that ratio as actually being good for
their scientific purposes (a slow-motion process is easier to control, study,
debug). Hardware is hard and expensive, and attempts to scale up to real time
will only be made once there is a convincing case that there are no easy
design gains left to be had. It will take years.
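To put toy numbers on that guess, here's a minimal back-of-envelope sketch in Python. Every figure in it is an assumption (the neuron and synapse counts are rough consensus estimates; the firing rate, per-event cost, and hardware budget are made up), not a measurement:

    # Back-of-envelope: how far short of real time might a whole-brain
    # simulation fall? All inputs below are assumptions.
    neurons = 8.6e10                   # rough consensus estimate
    synapses_per_neuron = 1e4          # order-of-magnitude estimate
    firing_rate_hz = 1.0               # assumed average spike rate
    flops_per_synaptic_event = 100.0   # assumed cost of the neuron model

    required = (neurons * synapses_per_neuron
                * firing_rate_hz * flops_per_synaptic_event)
    available = 1e15                   # assumed budget: a petaflop machine

    print(f"real-time simulation needs ~{required:.1e} FLOP/s")
    print(f"slowdown on assumed hardware: ~1:{required / available:.0f}")  # ~1:86

Nudge any of those inputs by an order of magnitude and the ratio moves accordingly, which is exactly why 1:100 is a guess and not a prediction.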

And once that happens, you will have a roughly human-like AI who has no more
of a clue how to make itself smarter than you and I do.

~~~
arisAlexis
There is a theory of intelligence, and metrics for it. Why do you say that?

~~~
T-A
I don't think you and I mean the same thing when we say "theory". Here's
Wikipedia's definition [1], which is pretty standard:

"A scientific theory is a well-substantiated explanation of some aspect of the
natural world that is acquired through the scientific method and repeatedly
tested and confirmed, preferably using a written, pre-defined, protocol of
observations and experiments. Scientific theories are the most reliable,
rigorous, and comprehensive form of scientific knowledge."

There is of course no such thing for intelligence; if there were, general
artificial intelligence would be a mere implementation problem. It is anything
but.

As for "metrics", I guess you mean IQ tests. But being able to measure a
manifestation of a thing (e.g. "this mineral affects photographic plates at a
distance") is far from identical to having any understanding of that thing
(e.g. nuclear physics).

[1] https://en.wikipedia.org/wiki/Scientific_theory

~~~
arisAlexis
There has been a very well established theory of intelligence and a scientific
branch called psychometrics since the start of the century. I see you live in
the US; it's the country with the most widespread use of IQ tests, in the SAT,
the army, law schools, etc.

I am not even sure we need to understand it fully to build an AI that is
capable of producing the measurable output of it. Since it will outperform
humans in everything, it doesn't matter IMO.

I suggest the book Superintelligence; it talks about these exact questions.

Semi-jokingly: we understand quantum theory even less than we understand intelligence.

~~~
T-A
> There is a very well established theory of intelligence and scientific
> branch called psychometrics

Psychometrics is a field of study concerned with the theory and technique of
psychological measurement. [1]

It is most definitely _not_ a theory of intelligence.

> I am not even sure if we need to understand it fully to build an AI

Indeed not, since it is completely irrelevant to how intelligence _works_.

> I suggest the book Superintelligence; it talks about these exact questions.

I've read it. A TLDR would go something like this: "Suppose we were to create
an almighty entity which does not share our values. Could that be a problem
for us? Gosh, yes!".

Wild assumptions aside, it makes no mention of psychometrics (of course).

> Semi-jokingly: we understand quantum theory even less than we understand intelligence.

Quite seriously: absolutely not. Quantum mechanics is a perfectly well defined
mathematical construct. It makes experimentally verifiable predictions. Our
best, most precise theories of nature are quantum theories.

We have no theory of intelligence.

[1] https://en.wikipedia.org/wiki/Psychometrics

~~~
arisAlexis
I'm not sure I understand your point, although I understand and partially agree
with your arguments. Because we do not precisely know how intelligence works
(surely we agree it exists), does that mean we can never create an equally
powerful version of the intelligence we have? Or do you only disagree about the
creation of a better version? There is nothing to prevent us from doing the first.

~~~
T-A
I am saying that we do not know which algorithm to use in order to implement a
general artificial intelligence. Even if somebody gave you unlimited resources
to do it - all the computers and programmers you want - you would currently be
unable to do it, because _we don't know how to do it_.

That obviously does not mean that we will never know. It does not even mean
that we necessarily need to know. In fact, a scenario which Yudkowsky likes to
worry about is that somebody will essentially make brain-like hardware and get
something going without any real understanding of how it works. That "only"
requires duplicating the structure of the brain, not understanding it. That is
the scenario which I alluded to in my wild-assed guess.

Whichever approach turns out to be the one that first gets to a human-level
AI, the fact that we do not have a theory makes it impossible to predict how
long it will take and how fast the result will be able to improve. Even if all
we need to do is mass-produce a few million functional equivalents of cortical
columns and hook them all up somehow, we are not currently able to guess how
the resulting intelligence will scale. If we double the number of columns,
will its IQ double? Quadruple? Or maybe only grow linearly, or
logarithmically, or as the square root of the number of columns? Or maybe not
grow at all, because it's not just a matter of quantity, but of certain
architectural features which do not scale?
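A toy sketch of why that uncertainty matters (the baseline count and the candidate laws below are illustrative assumptions, not derived from anything):

    import math

    # Hypothetical capability metric vs. number of cortical-column
    # equivalents, under several assumed scaling laws. None of these
    # is known to be correct; the spread between them is the point.
    def capability(n_columns, baseline=1e6):
        x = n_columns / baseline   # columns relative to assumed baseline
        return {
            "quadratic":   x ** 2,
            "linear":      x,
            "sqrt":        math.sqrt(x),
            "logarithmic": 1 + math.log(x),
        }

    for n in (1e6, 2e6, 4e6, 8e6):
        print(int(n), capability(n))

Doubling the column count three times yields anywhere from a 64x gain to a roughly 3x gain depending on which law holds, and that is before allowing for the case where architecture, not quantity, is the bottleneck and the curve is flat.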

~~~
arisAlexis
I see that we do actually agree. My assumption was that if somehow something
we create artificially can better itself, then it would probably do so very
rapidly. My question was: is it possible (sci-fi discussion) for it to do it
so fast that we do not even realize it ever happened?

------
usgroup
I don't really get this. What is it that superintelligence is supposed to be
able to do that cannot otherwise be done?

We are pretty well compartmentalised against 'superior intelligence' anyway,
or so it seems. Not too many philosopher kings around these last few millennia.

~~~
ingenter
> What is it that superintelligence is supposed to be able to do that cannot
> otherwise be done?

Find patterns a human would never be able to. Find winning strategies we could
not predict. Build models that don't fit in a human brain.

An example would be the recent AlphaGo matches: AlphaGo wins. We don't know how
it wins, we can't predict its strategy, we just know it wins.

~~~
usgroup
Humans use tools to find patterns. Typically you don't read a dataset and look
for patterns; we build optimisations, regression models, search algos, etc. We
also build functions that build functions from data. Yes, we know how AlphaGo
wins. Its architecture is known and understood, as are its training and how
to replicate it. That doesn't mean you can predict what its next move will be,
nor "why" it made it, but it does what it was programmed to do.

This is all just an expression of innately human capability. Bigger and faster
does not mean different in kind.

------
BjoernKW
As far as estimates go (if such an event can be estimated at all), the usual
time frame is "a few weeks up to a few months", starting from the first strong
AI.

Exponential growth starts very slowly, even slower than linear growth at first.
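A toy illustration of that point (the rate and slope here are arbitrary):

    import math

    # A slow exponential lags a linear trend at first, then overtakes it.
    for t in range(11):
        linear = 1 + t                    # assumed linear progress
        exponential = math.exp(0.35 * t)  # assumed exponential progress
        print(t, round(linear, 2), round(exponential, 2))

For the first five steps the exponential curve is behind the straight line; by step ten it is roughly three times ahead.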

