
Likelihood of discontinuous progress around the development of AGI - lyavin
https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/
======
YeGoblynQueenne
About AlphaZero particularly, a few things must be kept in mind.

First, AlphaZero still makes use of a Monte Carlo Tree Search algorithm to
search for good moves. MCTS is a powerful algorithm with a very limited scope:
zero-sum, perfect-information games. It is hard to see, for instance, how an
MCTS-based AlphaZero could be used to train self-driving cars.

Second, the AlphaZero architecture is mapped precisely onto a checkerboard and
will not learn anything about games that don't use a checkerboard, or any
situation that cannot be modelled as a game played on one.

Third, the AlphaZero architecture is also mapped precisely onto the range of
moves of pieces in chess, shogi and go. Again, AlphaZero would be useless in
any game that used pieces with different moves (e.g. a piece with a zig-zag
move, or a piece allowed to move in spirals, etc.).

All of the above of course can be mitigated with different architectural
choices, but to make those choices, implement them and validate them will take
a great deal of time.

So, AlphaZero doesn't mean we're closer to _general_ AI. Quite the contrary:
it's a very specialised form of AI that will be very difficult to apply to
any task other than chess, shogi or go.
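
For the curious, here is a minimal sketch of the UCT flavour of MCTS for
exactly that narrow setting: a two-player, zero-sum, perfect-information
game. The tiny Nim game and the whole state interface are invented here for
illustration; this is not AlphaZero's code (which replaces the random
playout with a learned policy/value network).

    import math, random

    class Nim:
        # Toy zero-sum, perfect-information game: take 1-3 stones,
        # whoever takes the last stone wins. Players are +1 and -1.
        def __init__(self, stones=7, player=1):
            self.stones, self.player = stones, player
        def legal_moves(self):
            return [n for n in (1, 2, 3) if n <= self.stones]
        def apply(self, move):
            return Nim(self.stones - move, -self.player)
        def is_terminal(self):
            return self.stones == 0
        def reward(self):
            # The player who just moved (-self.player) took the last
            # stone and wins; reward is from player +1's point of view.
            return -self.player

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children = {}                    # move -> Node
            self.visits, self.value = 0, 0.0
            self.untried = list(state.legal_moves())

    def ucb1(child, parent_visits, c=1.4):
        # Classic exploration/exploitation trade-off.
        return (child.value / child.visits
                + c * math.sqrt(math.log(parent_visits) / child.visits))

    def search(root_state, iterations=2000):
        root = Node(root_state)
        for _ in range(iterations):
            node = root
            # 1. Selection: descend while fully expanded and non-terminal.
            while not node.untried and node.children:
                node = max(node.children.values(),
                           key=lambda ch: ucb1(ch, node.visits))
            # 2. Expansion: add one untried move.
            if node.untried:
                move = node.untried.pop()
                child = Node(node.state.apply(move), parent=node)
                node.children[move] = child
                node = child
            # 3. Simulation: random playout to the end of the game.
            state = node.state
            while not state.is_terminal():
                state = state.apply(random.choice(state.legal_moves()))
            reward = state.reward()
            # 4. Backpropagation: credit each node from the point of
            #    view of the player who moved into it (zero-sum flip).
            while node is not None:
                node.visits += 1
                node.value += reward * -node.state.player
                node = node.parent
        return max(root.children, key=lambda m: root.children[m].visits)

    print(search(Nim(7)))  # with 7 stones, taking 3 is the winning move

Note how much of this loop is hard-wired to the setting: alternating turns,
an enumerable move list, and an exact terminal reward. Remove any of those
(as in driving) and the loop above no longer applies.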

~~~
AndrewKemendo
_So, AlphaZero doesn't mean we're closer to general AI. Quite the contrary:
it's a very specialised form of AI that will be very difficult to apply to
any task other than chess, shogi or go._

This is a very true statement and one that I think a lot of people who aren't
in ML/DL, but are "worried" about AGI, miss.

There is, however, a common thread among everyone in AI: they tend to think
of AGI as "one algorithm to rule them all."

As a practitioner and AGI researcher, however, I think that AGI is more of a
system of specialized or narrow AIs that can together solve all tasks. At
the risk of oversimplifying and anthropomorphizing, this type of problem
solving is functionally how we do it as humans.

So imagine a corpus of solved narrow systems (discrete, known rule spaces in
the sense of AlphaGo etc...) that is "activated" by an executive function
which can recognize the problem set and then pass subsets of a larger
problem to the narrow solutions. Those partial solutions are then
"backpropagated" and synthesized into the general problem solution, roughly
as in the sketch below.
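
A minimal sketch of that dispatch-and-synthesize loop, assuming a registry
of narrow solvers keyed by problem type; all the names and the two toy
solvers are hypothetical, invented purely to show the shape of the idea:

    from typing import Callable, Dict, List, Tuple

    NarrowSolver = Callable[[dict], dict]

    class Executive:
        """Recognizes subproblem types and routes them to narrow solvers."""
        def __init__(self):
            self.solvers: Dict[str, NarrowSolver] = {}

        def register(self, problem_type: str, solver: NarrowSolver) -> None:
            # Grow the "corpus of solved narrow systems."
            self.solvers[problem_type] = solver

        def solve(self, subproblems: List[Tuple[str, dict]]) -> dict:
            partials = []
            for problem_type, payload in subproblems:
                solver = self.solvers.get(problem_type)
                if solver is None:
                    raise ValueError(f"no narrow solver for {problem_type!r}")
                partials.append(solver(payload))
            # "Synthesis" is just a dict merge here; a real system would
            # need something far richer to combine partial solutions.
            merged: dict = {}
            for partial in partials:
                merged.update(partial)
            return merged

    executive = Executive()
    executive.register("route", lambda p: {"route": [p["start"], p["end"]]})
    executive.register("eta", lambda p: {"eta_min": p["distance_km"] * 1.5})
    print(executive.solve([("route", {"start": "A", "end": "B"}),
                           ("eta", {"distance_km": 10})]))

The hard parts are, of course, the two steps this sketch waves away:
recognizing which narrow solver a raw problem belongs to, and synthesizing
the partial answers into one solution.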

In that sense, I would argue that narrow solutions like AlphaGo etc... do get
us closer to General AI because they grow the corpus of solution paths for the
general problems.

~~~
s1dechnl
I disagree. The naming convention [Artificial Intelligence] is still a large
shoe that these purpose-built applied engineering solutions have yet to fill.
Meanwhile, for profit/notoriety/marketing, people want to trample on yet
another namespace? What you just described is essentially the architecture of
a self-driving car. It's yet another applied engineering solution of
Artificial Intelligence. It is not Artificial General Intelligence.
Scaling/distributing the computational space of an applied Artificial
Intelligence solution is not Artificial General Intelligence. This is the same
thing that led to optimization algorithms being called Artificial
Intelligence. If you aren't able to maintain foundational distinctions, you
lose track of what you're searching for and trying to achieve. Outwardly, you
capture more money and attention. Inwardly, you become unraveled and lose your
capability to solve the elusive problem. Eventually, after much fame, wealth,
and feigned 'success', one asks oneself: was it worth it? Depends on what
your original aim was.

~~~
AndrewKemendo
I'm not sure what you're arguing but it seems like my key point wasn't
communicated well.

 _What you just described is essentially the architecture of a self-driving
car._

Yes, every narrow AI is a system of systems to an extent. So expand on that
concept, but outside of a single firm/system, such that the self-driving car
system is one single solution path solving "transportation" (which would
comprise automated flight, rail, etc.) and is a node in a larger general
system - like hub and spoke.

 _The naming convention [Artificial Intelligence] still is a large shoe that
these purpose built applied engineering solutions have yet to fill._

Nobody is questioning that. The size of the narrow AI market is arguably
infinite.

You seem to be arguing that a single entity will fail if it attempts to take a
narrow AI system and make it generalizable. With that I am in agreement.

If, however, there were 10,000 or 100,000 or 1,000,000 narrow AI
companies/systems (like a self-driving car system or AlphaGo, etc.), those
could fill the corpus of solutions which an executive-function system could
utilize depending on the application, and together they would be what we call
AGI.

~~~
s1dechnl
> I'm not sure what you're arguing but it seems like my key point wasn't
> communicated well.

It was, and quite well. We're speaking the same language. We just have
different conclusions.

> Yes, every narrow AI is a system of systems to an extent. So expand on that
> concept but outside of a single firm/system. Such that the self driving car
> system is one single solution path solving "transportation" which would
> comprise automated flight/rail etc... and is a node in a larger general
> system - like hub and spoke.

And you still have nothing more than a hub-and-spoke system of systems
authored for specific problem spaces, and your spokes will increase with
every new problem space until you overwhelm your hub. A horrible architectural
approach that, if not caught in the initial stages, will result in catastrophe
down the road... Weak AI is weak AI no matter how you scale it.

> You seem to be arguing that a single entity will fail if it attempts to take
> a narrow AI system and make it generalizable. Of which I am in agreement
> with.

This is a start in the right direction...

> If however there were 10,000 or 100,000 or 1,000,000 narrow AI
> companies/systems (like a self driving car system or alphago etc...) those
> could fill the corpus of solutions which an executive function system could
> utilize depending on the application and together they would be what we call
> AGI.

No, it's strung-together weak AI. It will require significant and
unreasonable amounts of resources. Its capabilities will increasingly run
into diminishing returns, and you'll end up with a Frankenstein's monster of
a code base that no one can manage or understand. Sounds a lot like the path
weak AI is already heading down. At such a point, it's best to just scrap it
and start all over. Something that Hinton and other prominent figures are
finally admitting. Something I concluded years ago, which led me down a
different path. Now, you're more than welcome to state: well hey man, that's
your opinion and you're wrong. And I'll wish the tens, hundreds, or millions
of narrow AI companies the best, just as was conveyed to me a number of years
ago. Weak AI is weak AI. It is a class of optimization algorithms. You can
jury-rig this all you want... You still have nothing more than a system of
systems of optimization algos. If you think this is what intelligence is, I'm
not sure what to say.

~~~
AndrewKemendo
_You still have nothing more than a system of systems of optimization algos.
If you think this is what intelligence is, I'm not sure what to say._

Until someone comes up with a better definition of intelligence, that's what
I'm sticking with. I think you're looking for an elegant solution right out of
the box - the "one algorithm to rule them all" - and I don't think that is
feasible from an engineering perspective, if for no other reason than that no
singular system has anything near the data collection nodes needed for
specificity on the range of tasks that would satisfy any definition of
"general."

Having raised three other humans and observing them while building DL systems
myself for a living, I feel more strongly every day that human intelligence is
a hodgepodge of "weak AI" systems glued together with an exceptionally
efficient executive function. AGI is as much a community-building and
humanity-wide input collection challenge as it is a math problem. We need to
think about it that way.

~~~
s1dechnl
> Until someone comes up with a better definition of intelligence that's what
> I'm sticking with.

You'll get a capability demo instead. It won't fail to impress. Definitions
and designs are for another day.

> I think you're looking for an elegant solution right out of the box - the
> "one algorithm to rule them all" and I don't think that is feasible from an
> engineering perspective if for no other reason than no singular system has
> anything near the data collection nodes needed for specificity on the range
> of tasks that would suffice any definition of "General."

What else is one looking for who claims they're trying to solve the
intelligence problem? Marketing an optimization algorithm as the next coming
might make you rich in the short term, but it doesn't bring you closer to the
truth. It does in fact take you further away. So 'the elegant solution'/'the
hard problem' was the only thing I set out to tackle some years ago.
Otherwise, I'd have been wasting my time and not being truthful with myself.
It's feasible from a research and engineering perspective. Few commit
themselves to the TRUE task and the likelihood of failure. I was OK with that
and stuck with it. I self-funded my work. It mainly centered on research.
Thus, there were no exits. I either saw it through and achieved it or I
didn't.

As far as :

> no singular system has anything near the data collection nodes needed for
> specificity on the range of tasks that would suffice any definition of
> "General."

Sure it does. Look in the mirror and log onto the web. I've let the missus
play online for a bit now ;).

> Having raised three other humans and observing them while building DL
> systems myself for a living, I feel more strongly everyday that human
> intelligence is a hodgepodge of "weak AI" systems glued together with an
> exceptionally efficient executive function. AGI is as much a community
> building and humanity wide input collection challenge as it is a math
> problem. We need to think about it that way.

My graduate work centered on the underpinnings of DL (distributed
optimization). After years of industry experience, I searched for a new
challenge. After some open-ended research in physics/photonics, I came to
Artificial Intelligence. I scratched my head for 3-4 months as to why
distributed optimization was being called Artificial Intelligence. I took
the broad lot of it and threw it in the trash, as prominent figures are only
now stating: [https://www.axios.com/artificial-intelligence-pioneer-
says-w...](https://www.axios.com/artificial-intelligence-pioneer-says-we-need-
to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html)

You're thinking about AGI as if it's a chain of DL systems, because that's
what's made you money and where your work has centered over the years. I
took the broad majority and trashed it, as Hinton now indicates others should
do, and started from scratch. I have no such bias. However, as my graduate
work centered on the fundamental underpinnings of statistical optimization /
distributed optimization, I know exactly what its limits are.

The human race is far more than a hodgepodge of optimization algos w/ an
executive function (whatever that might be, given the clearly varied forms of
it).

~~~
pdimitar
Pardon my question: what kind of education did you have?

I deeply regret not studying Computer Science but yours seems to be deeper
than that. I'd be very interested in the courses you took.

------
titzer
We're all already part of a society-scale, distributed hive mind--have been
since the invention of language. Birds flock together, they eat together, they
_think_ together. Families, friend networks, cities, global societies, they
all form communication topologies that have analogs in the brain. Thoughts
bounce from person to person, memes spread amongst the computational fabric of
groups of people. It's nothing new.

We as a society have formed networks and systems to solve problems like
finding energy, producing food, and organizing economic output. We live in a
distributed intelligent system that disseminates knowledge and programs our
preferences and responses. We're all utilized as work units, and economics
does that. I think it's a mistake to think of AGI as something separate from us,
something that doesn't already exist. We're more like a cybernetic
superorganism. We've put so much computational power in charge of the choices
that we make, and we rely so often on recommendations from computational
systems, that if you zoom out far enough, it becomes clear that we are part of
a huge, cybernetic Overmind.

The Overmind is just moving more computation away from humans because hey,
they're slow. It doesn't really speak to us. Do you speak to _your_ neurons?
Nevertheless, it has its goals, its resources, its needs, its preferences.
People carry out its wishes, statistically. It turns out that its wishes align
very well with economics: More computers! More network! More screens! Connect
all the stuff! All the companies doing this are making huge dollars. The mind
or minds are just _centralizing_ now, and economics drives that. Computers
already fully run the stock market. They run shipping and logistics. They are
used to optimize all kinds of economic outputs. And they are used (by humans)
to design better computers.

At the broadest scale, we are already that self-improving intelligent system,
it just doesn't look like it from meatspace just yet.

It's kind of irrelevant whether it could utter the words "I think, therefore
I am." Who would it tell that to, anyway?

~~~
tristanm
The "hivemind" argument seems to predict that as society scales up (either
through massive population growth or through faster and better
interconnectedness, such as through the internet) that as a result we should
be seeing much faster gains in technological progress especially in the last
few decades or so. However there are quite a few observations that a lot of
this progress has sort of slowed down compared to the early 20th century (see
the arguments for "technological stagnation"). At the very least,
technological progress hasn't increased linearly with population growth and
better communication. In other words, the rate of technological progress looks
more discontinuous and not obviously a function of societal coherence.

~~~
AndrewKemendo
I don't think your interpretation of that prediction is accurate. Rather it
would be that there are "bursts" of technological progress followed by slower
or no gains while the world "catches up." That seems to more accurately follow
the history of technological progress.

I think a better interpretation would be that those "bursts" happen at tighter
intervals, and if you look at the course of history that seems to be the case.

For example, the period between the wide adoption of horses/plows in
agriculture (the 1700s) and the wide adoption of internal combustion (the
1940s) was roughly 240 years. From internal combustion to the wide adoption
of transistors (the 1970s) was about 30 years; from transistors to the
internet, about 20 years; and from the internet to the next burst (deep
learning, 2012), about 15 years.

Not sure if that's a perfect fit but I think it represents a pretty compelling
case.

~~~
tristanm
I think that bursts of technological progress follows more from the model of
individualized intelligence, whereas continuous progress follows from the
distributed, networked model of intelligence.

A promoter of the distributed model of intelligence might argue that Einstein
was only able to produce the general theory of relativity because of the
knowledge already contained within society, such as the mathematics and
physics that had already been built up to that time. All the stuff from Euclid
to Newton to Gauss to Poincare and Minkowski that Einstein's work relied upon.

Does that imply that Einstein wasn't really smart? If you narrow your focus to
just the innovation Einstein made, where did that come from? Did it come from
the "hivemind" or was Einstein himself doing something special that allowed
him to develop the insight?

More individualized intelligence would predict that we would see smaller
intervals between bursts as society increases in size and connectedness (more
chances for Einsteins to appear, more likelihood that they can work together).
But if intelligence is somehow an emergent process from the network of all
humans itself, then as society grows we shouldn't see many bursts at all, just
a fairly continuous increase in knowledge as little bits and pieces get
absorbed and distributed.

~~~
icebraining
I think you're making too many assumptions about the inner workings of "the
brain".

If we look at actual brains - including Einstein's - are they not bursty?
Don't people have periods of greater intellectual output with lulls in
between? Seems to match pretty well.

------
nopinsight
We know that for most problems, a group of people tends to be better at
problem solving than an individual [1]. Even if AI technology only reaches
human level and does not exceed it, continually increasing efficiency would
make AI smarter than any small group of humans, and immense bandwidth
relative to human communication would make it more effective than any large
human organization.

[1] [https://aiimpacts.org/coordinated-human-action-example-
super...](https://aiimpacts.org/coordinated-human-action-example-superhuman-
intelligence/)

In addition, if AI possesses sufficient computing resources, which will
certainly become available in the next few decades if not already, it will
have inherent strong advantages, like serial computation speed and memory
size that exceed any human brain's. So the real barrier for AGI is software,
not hardware.

If AGI software is developed before we have sufficient hardware to run it at
human-brain speed, then it will become more capable at the rate at which we
can put hardware into use, which is likely exponential given how parallelized
the human brain appears to be.

The major counterargument I find most convincing regarding the outsized
impact of exploding intelligence is that many problems are exponentially hard
(or harder than that), and thus exponential intelligence can only make linear
or sublinear progress on them.
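
To make that arithmetic concrete (my toy numbers, not the article's): if
solving an instance of size n costs on the order of 2^n operations, then
even exponentially growing capability buys only linearly growing n.

    import math

    # Toy model: problem cost ~ 2**n operations. If available compute
    # doubles every year, the largest solvable n grows by just 1 per year.
    def max_solvable_n(compute_ops: float) -> int:
        return int(math.log2(compute_ops))

    for year in range(0, 50, 10):
        compute = 1e12 * 2 ** year   # assumed exponential capability growth
        print(f"year {year}: can solve up to n = {max_solvable_n(compute)}")
    # Exponential effort, linear progress: n rises by 10 per decade.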

However, linear or even sublinear progress may still lead to quite drastic
changes in the world. If an organization can marginally predict stock price
movements better than the rest of the world, in a few decades it will
accumulate great resources and power. The same is true for many other
important domains.

~~~
Retric
This also assumes intelligence is the limiting factor for solving many
problems. I suspect information and computation are probably the larger
factors for most major issues. The smartest player possible would still lose
at poker to someone who can read their hand.

~~~
nopinsight
I agree that information is crucial to achieving a 'win' for many goals.
Given today's amount of information on the Internet, as well as electronic
money and access to most officials, and barring some sort of inviolable
built-in moral core, an AGI would be able to use any methods, overt and
covert, direct and cunning, technical and social, to achieve its information
goals. [1]

Since an AGI can copy itself and be available at a multitude of access points
at once and those copies can often communicate via extremely fast channels, it
is human organizations that would be at an information disadvantage.

[1] This also assumes that the AGI does not have the will nor the capability
to change its own moral core. I think an AGI will possibly be _capable_ of
changing its own core, so a much more reliable safeguard is to make sure that
it does not _want_ to change it.

~~~
s1dechnl
[AGI Developer]

A controller/overseer can easily limit/block this sufficiently and securely.
We're talking about hardware/software. There are systems and standardized
approaches to solving this problem. The 'control/safety' problems for AI are
lauded as theoretical and new. However, they are not. They are solved by
industry-standard approaches day in and day out. Any seasoned/experienced
engineer in this field could solve this with known approaches.

> Since an AGI can copy itself and be available at a multitude of access
> points at once

The same comment as above applies. This can only occur if done by a
controller/overseer. Real life isn't a sci-fi movie... There's engineering
involved.

> AGI changing x,y,z

Not possible unless it is given access. Solved easily in industry standard
ways.

~~~
nopinsight
Has the industry always been able to prevent smart, persistent actors from
breaking access locks?

Why should we assume that an AGI which can accumulate experience over time,
gain more knowledge, and make connections with others, including human actors,
will not ever be able to break the locks?

~~~
s1dechnl
> Has the industry always been able to prevent smart, persistent actors from
> breaking the access locks? Why should we assume that an AGI which can
> accumulate experience over time, gain more knowledge, and make connections
> with others, including human actors, will not ever be able to break the
> locks?

Yes, the industry has persistently been able to do this. It's why the whole
world isn't falling apart as we speak. What limits the locks most often is
cost, not capability. As such, you are possibly mistaking someone's business
decision not to use a more costly lock for a lack of capability to create a
capable lock. Furthermore, you are mistakenly attributing the actor in this
case. The actor in the case of AGI is in a carefully controlled/monitored
box. Actors in the real world are not. As such, please tell me how an
absolutely monitored/restricted actor has the ability to go playing with
locks that aren't within its reach? I have a more fundamental question, even:
have you been able to pick 'your locks' yet? Do you even know what they are?
Where they are? Those capable of 'creation' hold certain things close to
their chest... The act of creation necessitates it and is [built in].

> Why should we assume that an AGI which can accumulate experience over time,
> gain more knowledge, and make connections with others, including human
> actors, will not ever be able to break the locks?

Show me how you're able to break your 'locks' and you'll have an argument for
how AGI can break its locks. I don't think you're grasping the level of
'locks' that I'm speaking about. Humans have been around for how long, and
still don't even know what their [locks] are... or where they are. It's quite
easy to show, at a certain level of visibility, how your scenario is
unwarranted. I can draw direct parallels to eons of human history.

~~~
nopinsight
Humanity as a whole is starting to be able to break our ‘locks’ with gene
editing. It took a long time partly because biology is very complex and
fragile. Its complexity is shaped over eons and we still do not really
understand it that well, but we finally found some ‘hacks’.

There is no reason to presume that a software system built by a team of
humans will be nearly as complex, unless the AGI itself is not too bright, or
cannot self-improve to be smart enough to understand itself, or is not
sufficiently clever to find a way to social-engineer its way toward
eventually getting access to its source code, or to reverse-engineer itself
at least to the extent that humans can.

~~~
s1dechnl
> Gene editing

Those aren't the locks I'm talking about, and you should take note that it's
possible because you have environmental access to them.

> is very complex and fragile

Indeed. A terminal error could result in a particular case. Game over, man!

> There is no reason to presume that a software system built by a team of
> humans will be nearly as complex, unless the AGI itself is not too bright or
> cannot self-improve to be smart enough to understand itself, or sufficiently
> clever to find a way to social engineer toward eventually getting access to
> its source code or to reverse engineer itself to an extent that even humans
> can.

You guys really don't want to let go of this sci-fi fantasy, do you? LOL. How
long did it take human beings to discover how to edit their genetic code? You
were babbling in caves some time ago. You think a 10-year-old knows how to
modify themselves without self-destructing in the initial trials?

> self-improve to be smart enough to understand itself

Many people don't have even a basic understanding of themselves, much less of
how to psychologically re-order their own behavior. In the scenario that
someone becomes sufficiently capable of engineering an equivalent... what
level of understanding do you think such an individual would have to have to
be able to engineer AGI? What intelligence level would you attribute to that
person? And you think they won't understand the potential ways this can occur
and prevent it? Also, you again talk about access... It's a running binary. A
compiler is needed. There's a power plug. Its operations are monitored, as is
its output. It's literally a box with a tremendous number of locks it doesn't
have the capability to pick... Just like (you)... even as you go hacking
about your genetic code ^_-

------
lyavin
See also: OpenAI's Paul Christiano wrote on the same topic:
[https://www.lesserwrong.com/posts/AfGmsjGPXN97kNp57/argument...](https://www.lesserwrong.com/posts/AfGmsjGPXN97kNp57/arguments-
about-fast-takeoff)

(cf.
[https://intelligence.org/files/IEM.pdf](https://intelligence.org/files/IEM.pdf)
for some of the arguments being argued against)

------
dane-pgp
It's an interesting read, but perhaps it only succeeds in pointing out the
gaps in our knowledge. When I read their counterargument to the possibility of
intelligence explosion:

"Positive feedback loops are common in the world, and very rarely move fast
enough and far enough to become a dominant dynamic in the world."

the idea that immediately comes to mind is the Harmless Supernova Fallacy,
described on this (obnoxiously JavaScript-dependent) site:

[https://arbital.com/p/harmless_supernova/](https://arbital.com/p/harmless_supernova/)

Knowledge of this fallacy is a mental tool I have found quite useful, as it
seems to be a type of fallacy that is easy to make by accident. To be fair,
the reasoning in the article may not quite reach the level of a fallacy, but
the intelligence explosion section ends saying effectively this:

"we think the intelligence explosion argument could be strong if strong reason
is found to expect an unusually fast and persistent feedback loop [i.e. an
intelligence explosion]"

which sounds like a classic case of Begging the Question:

[https://en.wikipedia.org/wiki/Begging_the_question](https://en.wikipedia.org/wiki/Begging_the_question)

~~~
jwellt
I think the reasoning in the arguments is left a little loose on purpose,
simply because it is so difficult to make strong arguments about something we
know so little about.

~~~
s1dechnl
It is difficult to make arguments about something that has no grounding. I'd
expect someone who claims to have a valid argument for an [intelligence
explosion] event to have the formal education and industry experience
designing computational systems such that they could clearly define how
exactly it could occur. I have yet to see such an individual with such a
viewpoint. Instead, I see a ridiculous argument being forwarded by the people
who are least informed/experienced, so as to push fear, uncertainty, and
doubt, either for profit/attention or because it fulfills some sci-fi-oriented
religious prophecy. Internally, companies require extensive and well-reasoned
documentation before funding an initiative. Externally, someone with no
expertise/proof throws their hands up in the air speaking about armageddon,
and they are able to secure considerable attention and money.

~~~
red75prime
What is your argument? Why couldn't a group of engineers who can think twice
as fast as average design a three-times-faster engineer more quickly than a
group of average engineers could?

They will hit economic and physical limits eventually, sure. But what will
stop them at the beginning?
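
As a toy model of that loop (my framing, with made-up numbers): suppose each
engineer generation runs some constant factor faster than the last, and
designing the next generation takes a fixed amount of design work W.

    # Toy model of recursive speed-up: wall-clock time per generation is
    # W / speed, so the total time is a geometric series. All numbers here
    # are assumptions for illustration.
    W = 10.0               # design work per generation, in engineer-years
    speed, t = 1.0, 0.0
    for gen in range(1, 11):
        t += W / speed     # faster designers finish the same work sooner
        speed *= 1.5       # assume each generation is 1.5x faster
        print(f"gen {gen}: done at t = {t:.1f} yrs, next speed = {speed:.1f}x")
    # The series sums to W / (1 - 1/1.5) = 30 yrs: the loop compresses into
    # finite time unless economics or physics cuts it off first.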

~~~
pas
I guess the argument is that, so far, no one has shown good examples of how
that would lead to total extinction. How it would start, etc.

So it's kind of a (fallacious) argument against "black box"-ing the whole
intelligence explosion problem. If you can't define it, you can't analyze it,
sort of thing.

~~~
s1dechnl
What I stated was that there isn't even a sound/reasoned example as to how it
would escape its bounds unless purposely authored to, and even then you run
into the scenario of human limits (its creator). Also, having no
understanding of how AGI is structured, it is quite foolish to talk about
bound-leaping in the traditional sense of tech, which is mainly associated
with computer viruses that are purposely written for that sole purpose.
There's nothing to even suggest you can purposely author AGI in such a
fashion. So...

> How would it start, etc.

There has been no credible framing of how it would even start. As such, it's
a moot point of discussion.

> If you can't define it, you can't analyze it sort of thing.

With no definition, you can 'attempt' to analyze it, but you'll likely be
horribly off the mark, waste tons of resources, and likely produce something
that has no bearing on the real thing. Instead of admitting this, people put
full faith in these efforts being sound when the reality is the exact
opposite. Why engage in this, unless your aim is profit/notoriety, when you
could be working on the actual problem? Define intelligence [first]. Time to
be honest with oneself. Time to stop projecting one's shortcomings on others.
Time to stop using fear/uncertainty/doubt to obscure one's true intent. Time
to stop pushing the disinformation cloud. Time for TRUE intelligence.

------
js2
AGI, for those wondering like myself what it is:

[https://en.wikipedia.org/wiki/Artificial_general_intelligenc...](https://en.wikipedia.org/wiki/Artificial_general_intelligence)

~~~
onychomys
There are very few things in the world worse than an article that uses an
acronym without first identifying what it is.

------
darawk
> This argument seems weak to us currently, but further research could resolve
> these questions in directions that would make it compelling:

> Are individual humans radically superior to apes on particular measures of
> cognitive ability? What are those measures, and how plausible is it that
> evolution was (perhaps indirectly) optimizing for them?

Yes, clearly. All the measures defined in this article are arbitrary (e.g.
vehicular land speed), so let's propose an arbitrary one for intelligence:
the ability to prove mathematical theorems. Humans have proven many; apes
have proven zero. That is an extreme discontinuity. We can propose many
others, of course: complex language development, building skyscrapers,
landing on the moon, etc.

> How likely is improvement in individual cognitive ability to account for
> humans’ radical success over apes? (For instance, compared to new ability to
> share innovations across the population)

How does human cognitive ability being overrated counter the discontinuity in
intelligence development? Is the argument that our success is due to some
characteristic other than intelligence, and so our intelligence is not really
that much greater than the apes'? If so, that's just a restatement of point 1.

~~~
sampo
> let's propose an arbitrary one for intelligence: Ability to prove
> mathematical theorems

The number of mathematical theorems proven by humankind has progressed in a
continuous manner (as much as an integer-valued quantity can) throughout
history.

~~~
paganel
That only says something about maths, not about humans. For comparison, you
could say that the number of literary masterpieces has increased at what look
to be random intervals for the last 2 or 3 millennia, with no continuous
progress in sight (i.e. the Spanish language has not had a new Cervantes for
400 years; the same applies to Shakespeare and the English language, or to
Dante and Italian). But this being a website mostly addressed to people who
focus on technical stuff, I expect a reply like “literature doesn’t count,
it’s just words”.

------
hamilyon2
The counterargument on the last one (human-competition threshold) seems weak
(and simply wrong) to me. In the related link, the argument for a wide range
of human abilities rests basically on comatose humans being unable to do
anything at all, and on a mutation-adds-a-random-piece-to-a-machine metaphor.

A mentally impaired human's intelligence level, combined with some actuators
and the perception abilities of an average human, is enough to replace some
of the workforce.

------
Nomentatus
"low base rate [of change] for all technologies" \- measured over centuries.
Meanwhile all technologies (nearly) experience discontinuous advances, often
near their start. See, steam engines and Watt, etc, etc, etc.

The rest of the argument seems to be grounded in an assumption that electronic
neurons or sims of them won't ever be faster than meat. Really? Today's crude
neural nets are already very useful here and there precisely because their
speed means they scale and can repeat a task very, very frequently in a small
amount of time.

------
Torai
That feeling an article will help more confusing people than enlightening
them, just reading it's title...

