
Can we rule out near-term AGI? [video] - pshaw
https://www.youtube.com/watch?v=YHCSNsLKHfM
======
marvin
Just for posterity, to see if I was completely bonkers in 2018: I believe that
it is possible to realize AGI today, using currently available hardware, if we
just knew enough of the principles to create the right software.

Novel computational concepts have often been demonstrated on very old
hardware once the tricks required to make them work were fully known. More
powerful hardware was often required to pioneer the technology, and the
proof-of-concept on older hardware is often too slow and clunky to have been
a compelling product. But it's often been physically possible for longer than
people realize.

I've never made a "long game" statement like this before, so it'll be
interesting to revisit this comment in 2038 or 2048, if it still exists then,
and see what I thought back in 2018.

~~~
Florin_Andrei
I think we're still lacking the conceptual framework, the grand scheme if you
will.

AGI probably requires some kind of hierarchy of integration, and current AI is
only the bottom level. We probably need to build a heck of a lot of levels on
top of that, each one doing more complex integration, and likely some
horizontal structure as well, with various blocks coordinating each other.

~~~
ilaksh
There are many hierarchical AI systems though. For example, Hierarchical
Hidden Markov Models.

Pretty sure DeepMind made an RL system a bit like that.

Also reminds me of Ogma AI's technology.

~~~
Florin_Andrei
Well, a scooter and the Falcon Heavy are both "vehicles", but that doesn't
mean the scooter will ever do what the rocket does.

------
simonh
He says general AI is on the same spectrum as the AI technologies we have now,
but is qualitatively different. I'm sorry but that's a contradiction. If it's
on the same spectrum, then it's just a quantitative measure of where on the
spectrum it lies. If it's qualitatively different, it's on another axis;
another quality is in play.

His definition is also rubbish. Being useful at economically valuable work has
nothing necessarily to do with intelligence. Writing implements are vital in
pretty much all economic activities; before keyboards came along, many of
those activities couldn't have been done at all without them.

Deep Learning is great, it's a revolution, but it's a fairly narrow
technology. It solves one type of task fantastically well, it just happens
that solving this task is applicable in many different problem domains, but
it's still only one technique. At no point did he show how to draw a line from
Deep Learning to General AI in any recognisable form. It just looks like a
hook to get you to hear his pitch.

It's a great pitch, but it's not about AGI.

~~~
wycs
>He says general AI is on the same spectrum as the AI technologies we have
now, but is qualitatively different.

No it is not. The basic premise of fixed-wing aircraft was the same from the
Wright brothers to modern jets. Yet the Wright brothers' Flyer was useless and
a modern jet is not.

We have agents that can act in environments. His claim is that getting these
agents to human-level intelligence is a matter of compute and architectural
advancements that are not qualitatively different from what we have now. This
just does not strike me as an absurd claim. We have systems that can learn
reasonably robustly. We should accord significant probability to the claim
that higher-level reasoning and perception can be learned with these same
tools given enough computing power.

He claims we cannot "rule out" near-term AGI. Let's define "rule out" as
assigning a probability of 1% or lower. I think he's given pretty good reasons
to raise our probability to somewhere between 2% and 10%. For myself, 10-20%
seems a reasonable range.

~~~
spuz
> No it is not.

What claim are you responding to here? Simonh said:

> He says general AI is on the same spectrum as the AI technologies we have
> now, but is qualitatively different. I'm sorry but that's a contradiction.

Which I agree with. How can two qualitatively different things be on the same
spectrum? You later say yourself:

> His claim is that getting these agents to human-level intelligence is a
> matter of compute and architectural advancements that are not qualitatively
> different from what we have now.

Which seems to be the opposite of what simonh said and it's confusing to say
the least.

~~~
wycs
You are right. I don't think I read his comment very carefully before
replying.

------
gugagore
I think what I found most lacking in this video is that data did not play a
role in the overview. The presenter discusses that a ton of computation is
needed to do deep learning, but doesn't explain why. And really, it's because
the models are big and the training data is even bigger. So computation
improves and helps you deal with bigger models and bigger data, but where does
the data come from?

The big question to me isn't whether computation can scale, which this video
makes me believe it will. It's whether the data will scale. In RL domains with
good simulators, such as Go and the Atari games, data doesn't seem to be an
issue. The in-hand robot manipulation work also makes heavy use of simulators
to reduce the amount of real-world time needed to collect data. But I don't
see an argument for how we will get high-fidelity simulators to train these
agents in.

I do love the in-hand robot manipulation work, because it's one of the few
efforts showing that results from simulation can be applied to real robotic
systems. And while I hope, for the sake of robotics, that we can get better
and better simulators, it's surprising not to see that as the central focus in
conversations about getting AGI to emerge from gradient descent on neural
networks.

~~~
gdb
(I gave the talk.)

We are already starting to see the nature of data changing. Unsupervised
learning is starting to work — see [https://blog.openai.com/language-
unsupervised/](https://blog.openai.com/language-unsupervised/) which learns
from 7,000 books and then sets state-of-the-art across almost all relevant NLP
datasets. With reinforcement learning, as you point out, you turn simulator
compute into data. So even with today's models, it seems that the data
bottleneck is much less significant than even two years ago.

The harder bottleneck is transfer. In most cases, we train a model on one
domain at a time, and it can't use that knowledge for a new related task. To
scale to the real world, we'll need to construct models that have "world
knowledge" and are able to apply it to new situations.

Fortunately, we have lots of ideas about how this might work (e.g. using
generative models to learn a world model, or applying energy-based models like
[https://blog.openai.com/learning-concepts-with-energy-
functi...](https://blog.openai.com/learning-concepts-with-energy-functions/)).
The main limitation right now: the ideas are very computationally expensive.
So we'll need engineers and researchers to help us to continue scaling our
supercomputing clusters and build working systems to test our ideas.

~~~
iotb
What are your thoughts on starting over completely from scratch, as Geoffrey
Hinton has suggested? What are you doing as a group to attract and bring on
such individuals? Does this occupy any portion of your efforts at OpenAI?

If you were given a demo of an AI system that uses a completely
new/revolutionary approach towards various different problems with success,
how open would you be to rethinking your position on 'Optimization
techniques'?

Modeling seems like a stop-gap for getting over the limitations of Weak AI. As
I recall, this is what knowledge-based expert systems tried in times past and
failed at, because it's nothing but a glorified masking of the underlying
problem with limited, human-inputted rule sets. I don't agree with Yann LeCun
that the way forward to AGI is modeling. I feel it's the best solution people
worked up against the limitations of Weak AI, which were broadly and publicly
acknowledged in 2017 and early 2018.

> The main limitation right now: the ideas are very computationally expensive.

This is because the fundamental core set of algorithms being used by the
industry is fundamentally flawed yet favorable to big data/cloud computing...
a quite lucrative business model for currently entrenched tech companies.
It's why they spend so much effort ensuring the broad range of AI techniques
fundamentally stays the way it is... because if it does, it means boatloads
of money for them.

> So we'll need engineers and researchers to help us to continue scaling our
> supercomputing clusters and build working systems to test our ideas.

When you're attempting to resolve something and you are shown YoY that it
isn't being resolved and requires even more massive amounts of compute, it
means you're doing something wrong. It would be better to take a step back
and re-evaluate your approach fundamentally. Again, what is your willingness
to do so if shown something far more novel?

------
blueadept111
Yes, we can rule out near-term AGI, because we can also rule out far-term AGI,
at least in the way AGI is defined in this talk. You can't isolate the
"economically beneficial" aspects of intelligence. Emulating human-like
intelligence means emulating the primitive parts of the brain as well,
including lust, fear, suspicion, hate, envy, etc... these are inseparable
building blocks of human intelligence. Unless you can model those (and a lot
else besides), you don't have AGI, at least not one that can (for example)
read a novel and understand what the heck is going on and why.

~~~
isseu
Are lust, fear, hate requirements for intelligence? They are part of human
intelligence, for sure. I feel the problem is that we don't have a good
definition of intelligence.

~~~
byteface
Yes. We learn through our emotions and use them for heuristics. They are
measures of pleasure/stress against access to Maslow's needs. This drives
instincts and behaviours. It also gives us values. When I 'think' or act I use
schemas, but I don't knowingly use a GAN or a leaky ReLU. I personally learn
in terms of semantic logic, emotions and metaphors. My GAN is the physical
world, society, the dialogical self and a theory of mind. He never mentioned
the amygdala or the angular gyrus, or biomimicking the brain, or creating a
society of independent machines. Which we could do, but aren't even trying to
my knowledge? I mean there's Sophia (a fancy puppet) but not much else.

We get told to use the term .agi despite the public calling it .ai, as that's
just automation. But this feels like we're now allowed to call it .ai again?
It was presented as: given these advances in automation, we can't rule out
arriving at apparent consciousness. But with no line drawn between the two.

We do have a definition for intelligence: applied knowledge.

However, here's another thought. Several times in my life I knowingly pressed
self-destruct. I quit a job without one to go to despite having a mortgage and
kids. I sold all my possessions to travel. I've dumped girls I liked to be
free. I've faced off against bigger adversaries. I've played devil's advocate
with my boss. I've taken drugs despite knowing the risks, etc... And I
benefitted somehow (maybe not in real terms) from all of them. None of these
things seem like intelligent things to do. They were not about helping the
world but about self-discovery and freedom. We cannot program this lack of
logic. This perforating of the paper tape (electric ant). It's emergent
behaviour based on the state of the world and my subjective interpretation of
my place in it. Call it existential, call it experiential, call it a bucket
list. Whatever.

.agi would need to fail like us, to be like us. Feel an emotional response
from that failure. And learn. Those feelings could be wrong, misguided. We
knowingly embrace failure, as anything is better than a static state, i.e.
people voting for Trump because Hillary offered less change.

We also have multiple brains. Body/brain. Adrenaline, serotonin. When music
plays, my body seems to respond before my brain intellectually engages. So we
need to consider the physiological as well as the psychological. We have more
than 2,000 emotions and feelings (based on a list of adjectives). But that
probably only scratches the surface. What about 'hangry'? Then learning to
recognise and regulate it.

diff(current perception of world state, perception of success at creating a
new desired world state (Maslow)) = stress || pleasure.

Even then, how do you measure the 'success'? I.e. I have friends with
depression, and they don't measure their lives by happiness alone. I feel
depression is actually a normal response to a sick world, and that people who
aren't a bit put out are more messed up. If we created an intelligence that
wasn't happy, would we be satisfied? Or would we call it 'broken' and medicate
it like we do with people?

Finally, I don't think they can all learn off each other. They need to be
individual. Language would seem an inefficient data transfer method to a
machine. But we individuate ourselves against society. Machines assimilating
knowledge won't be individuals. More swarm-like. We would need to use
constraints, which may seem counterproductive, so harder to realise.

Wow. I wrote more than I intended there. But yes. Emotions are required IMO.
Even the bad ones. Sublimation is an important factor in intelligence.

~~~
dfischer
I really enjoyed reading this. Thank you. It relates to some thoughts that
have been percolating. I’m actually giving a small internal talk on a few of
these ideas.

Thanks!

------
oliveshell
I’m not too convinced by this guy’s argument: as evidence, he presents the
progress made by deep learning/CNNs in the past few years. He then rightly
acknowledges the difficulty of getting machines to do abstraction and
reasoning, noting that we have ideas about how to approach these things but
that they require much more computing power than we have now.

...Then he basically asserts that we can extrapolate the near-term
availability of tons more compute power from Moore’s Law, which is where he
lost me.

We’re already running into the limits of physical law in trying to move
semiconductor fabrication to smaller and smaller processes, and there are very
real and interesting challenges to be overcome before, I think, we can resume
anything close to the exponential growth we’ve enjoyed over the last 40 years.

This guy may well think a lot about these difficulties, but not mentioning
them at all made his argument sound incredibly naïve to me.

~~~
wycs
>...Then he basically asserts that we can extrapolate the near-term
availability of tons more compute power from Moore’s Law, which is where he
lost me.

That's not what he's asserting. Even with Moore's law dead, OpenAI claims
there is significant room with ASICs, analog computing, and simply throwing
more money at the problem. There is a ton of low-hanging fruit in non-von
Neumann architectures. We should expect it to be plucked, as we have a huge
use case which is potentially limitlessly profitable.

~~~
iotb
Making an optimization algorithm more efficient doesn't get you AGI; this
should be clear by now. As for fielding such systems, one quick way to
destroy humanity to a degree is to turn everything into a glorified
optimization problem, which will no doubt be turned against people to
maximize profit.

~~~
red75prime
If you accelerate AIXI (an optimization algorithm) [0], you get (real-time)
AGI.

[0]: [https://en.wikipedia.org/wiki/AIXI](https://en.wikipedia.org/wiki/AIXI)
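
For reference, here's a sketch of AIXI's action rule in Hutter's standard
formulation (my addition, not something from the talk): the agent does
expectimax planning against a Solomonoff mixture over all computable
environments,

    a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
          (r_t + \cdots + r_m)
          \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over environment programs,
\ell(q) is the length of q, and m is the horizon. The incomputable part is
exactly the mixture over all programs, which is why only approximations
(e.g. MC-AIXI-CTW) can actually be run.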

~~~
iotb
If only this weren't a fundamentally flawed theory that isn't scalable, based
on computational complexity and information theory.

~~~
red75prime
Approximations to AIXI are computable.

------
mckoss
It does seem he is conflating "progress" with "investment". Yes, the world has
been spending exponentially more compute on training networks each year since
2012. The marvel is that neural architectures are scaling to more complex
problems without much architectural change. But this is not an argument that
AI is getting more efficient or productive over time, and hence not an
argument that we can expect exponential performance improvements (like
Moore's law).

------
tomiplaz
The talk's title has very little to do with the actual talk. The talk is about
progress in narrow AI in the last 6 years or so. While fascinating, it's still
only progress in narrow AI. To make artificial intelligence general, one would
have to somehow define a fitness function that itself is general. But how does
one do that? How does one say to a machine: "Go out there and do whatever you
think is most valuable"?

If some kind of goal has to be defined, it seems it will always be a narrow
AI, where some outside entity defines what its goal is, instead of the machine
itself coming to a conclusion about what it should do in a general sense. Even
if that machine is able to recognize the instrumental goals for reaching the
final goal (and act accordingly), it still feels like a non-general
intelligence,
like connecting the dots based on the available input and processing, just to
come closer to that final goal. If no final goal was given, I presume such a
machine would do nothing: it would not randomly inspect the environment around
itself and contemplate upon it; there would be no curiosity, no actions of any
kind to find out anything about its environment and set its own goals based on
observation.

It seems that for AGI to come, some kind of spontaneous emergence would have
to occur, possibly by coming up with some revolutionary algorithm for
information processing implemented inside an extremely capable computer
(something that biological evolution has already yielded).

It is interesting, humbling and a bit depressing to apply the same reasoning
to us, humans. We are relatively limited in terms of reason; it's just that
this is not obvious to us, just like it is not obvious that the Earth is
round, for example.

~~~
tlb
Having novel, unique goals is not necessary for AGI. Normal people don't have
novel goals -- they mostly want love, comfort, and respect. An AGI that had as
its goal "gain people's respect" would exhibit an unboundedly interesting
range of behavior.

~~~
visarga
Well said. Humans (and all living things) have the same ultimate goal: life.
They need to keep themselves alive somehow and keep their genes alive by
reproduction. That single goal has blossomed into what we are today. If we
train AI with an evolutionary algorithm, and let it fend for its needs
(compute, repair, energy), then it could learn the will to life that we have,
because all variants that don't have it will be quickly selected out of
existence.

I think AGI could happen with today's technology if we only knew the priors
nature found with its multi-billion-year search. We already know some of these
priors: in vision, spatial translation and rotation invariance; in the
temporal domain (speech), time translation invariance; in reasoning,
permutation invariance (if you represent the objects and their relations in
another order, the conclusion should be unchanged). With such priors we got to
where we are today in AI. We need a few more to reach human level.
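
To make the vision prior concrete, here's a toy sketch (my own illustration,
not from the talk): a convolution bakes translation equivariance into the
architecture, so a shifted input just produces a shifted output, rather than
that symmetry being something the network has to learn from data:

    import numpy as np

    def conv1d(x, k):
        # 'valid' cross-correlation of signal x with kernel k
        n = len(x) - len(k) + 1
        return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

    x = np.random.randn(16)
    k = np.array([1.0, -2.0, 1.0])

    out_then_shift = np.roll(conv1d(x, k), 3)   # convolve, then shift by 3
    shift_then_out = conv1d(np.roll(x, 3), k)   # shift by 3, then convolve

    # Away from the wrap-around boundary the two agree: the symmetry is
    # built into the operation rather than learned.
    print(np.allclose(out_then_shift[4:-4], shift_then_out[4:-4]))  # True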

------
gambler
_> "Prior to 2012, AI was a field of broken promises."_

I just love how these DNN researchers love to bash prior work as over-hyped,
while hyping their own research through the roof.

AI researchers did some amazing stuff in the 60s and 80s, considering the
hardware limitations they had to work under.

 _> "AT the core, it's just one simple idea of a neural network."_

Not really. The first neural networks were built in the 50s and didn't produce
any particularly interesting results. Most of the results in the video are a
product of fiddling with network architectures, plus throwing more and more
hardware at the problem.

Also, none of the architectures/algorithms used by deep learning today are
more general than, say, pure MCTS. You adapt the problem to the architecture,
or architecture to the problem, but the actual system does not adapt itself.

~~~
habitue
So, they didn't have backprop and automatic differentiation in the 50s. That's
pretty fundamental, and not just "fiddling with architectures".
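
For anyone unfamiliar, here's a minimal sketch of reverse-mode automatic
differentiation, the mechanism behind backprop (my own toy illustration; real
systems process the graph in topological order rather than recursing once per
path):

    class Var:
        # scalar that records (parent, local derivative) pairs
        def __init__(self, value, parents=()):
            self.value = value
            self.parents = parents
            self.grad = 0.0

        def __add__(self, other):
            return Var(self.value + other.value,
                       ((self, 1.0), (other, 1.0)))

        def __mul__(self, other):
            return Var(self.value * other.value,
                       ((self, other.value), (other, self.value)))

        def backward(self, seed=1.0):
            # chain rule: accumulate this path's contribution, then recurse
            self.grad += seed
            for parent, local in self.parents:
                parent.backward(seed * local)

    x, y = Var(2.0), Var(3.0)
    z = x * y + x          # z = xy + x
    z.backward()
    print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0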

~~~
Cybiote
This statement is fairly inaccurate. If you check Peter Madderom's 1966
thesis, you'll see that it states the earliest work on automatic
differentiation was done in the 1950s. It's just that back then, it was called
_analytic_ differentiation. You can see many of the key ideas already existed
back then, including research into specializations for efficiently applying
the chain rule.

[https://academic.oup.com/comjnl/article/7/4/290/354207](https://academic.oup.com/comjnl/article/7/4/290/354207)

~~~
habitue
Ah, you're right on AD. But backprop was invented in the 80s.

------
emtel
I think a lot of people who are emphatic that AGI is a long way off are saying
so out of an allergic reaction to the hype, rather than due to sound
reasoning.

(And let me be clear that none of this is an argument that AGI is _near_. I'm
saying that confidence that it is far is unfounded.)

First, there are many cases in science where experts were totally blindsided
by breakthroughs. The discovery of controlled fission is probably the most
famous example. This shouldn't be surprising - the reason that a breakthrough
is a breakthrough is because it adds fundamentally new knowledge. You could
only have predicted the breakthrough if you somehow knew that this unknown
knowledge was there, waiting to be found. But if you knew that, you'd probably
be the one to make the breakthrough in the first place.

Second, most claims about the impossibility of near-term AGI are totally
unscientific. By that, I mean that they aren't based on a successful theory of
falsifiable predictions. What we'd want, in order to have any confidence, is a
theory that can make testable predictions about what will and won't happen in
the short term. Then, if those predictions turn out to be true, we can gain
confidence in the theory. But this isn't what we get. What we get is people
saying "We have no idea how to do x, y, and z, therefore it won't happen in
the next 50 years". I don't see any evidence that people were able to predict
even the incremental progress we've seen say, two years out. The fact is that
when someone says "it'll take 50 years" that's just sort of a gut feeling, and
people will almost certainly be making that same prediction the year before it
actually happens.

Third, I think people have too narrow a view about what they imagine AGI might
look like. People tend to envision something like HAL, that passes Turing
tests, can explain chains of reasoning, and has comprehensible motivations.
Let's consider the case of abstract reasoning, which is something thought to
be very difficult. We tried and failed for decades to build vision systems
based on methods of abstract reasoning, e.g. "detect edges, compose edges into
shapes, build a set of spatial relationships between those shapes, etc". But
humans don't use abstract methods in their visual cortex, they use something a
lot more like a DNN. The mistake is in thinking that because the mechanism of
successful machine vision resembles human vision, therefore the mechanism of
successful machine reasoning must resemble human reasoning. But it's quite
possible that we'll simply train a DNN by brute force to evaluate causal
relationships by "magic", i.e. in a way that doesn't show any evidence of the
sort of step-by-step reasoning humans use. You can already see this happening
- when a human learns to play breakout, they start by forming an abstract
conception of the causal relationships in the game. This allows a human to
learn really really fast. But with a DNN, we just brute force it. It never
develops what we would consider "understanding" of the game, it just _wins_.

Sorry the third point was so long, let me summarize: We think some things are
hard because we don't know how to do them the way that we think humans do
them. But that doesn't serve as evidence that there isn't an easy way to do
them that is just waiting to be discovered.

~~~
roh0sun
Worth considering these two points.

Intelligence is not one-dimensional, and neither is evolution =>
[https://backchannel.com/the-myth-of-a-superhuman-
ai-59282b68...](https://backchannel.com/the-myth-of-a-superhuman-
ai-59282b686c62)

Silicon-based intelligent machines might not be energy efficient =>
[https://aeon.co/essays/intelligent-machines-might-want-to-
be...](https://aeon.co/essays/intelligent-machines-might-want-to-become-
biological-again)

~~~
seagreen
Why do people keep reposting that first article? I encourage you to reread it
with a critical eye.

A choice quote:

 _In contradistinction to this orthodoxy, I find the following five heresies
to have more evidence to support them._

 _Intelligence is not a single dimension, so “smarter than humans” is a
meaningless concept._

 _Humans do not have general purpose minds, and neither will AIs._

This is _awfully_ similar to "On the Impossibility of Supersized Machines"
([https://arxiv.org/pdf/1703.10987.pdf](https://arxiv.org/pdf/1703.10987.pdf)).

EDIT: Re: energy efficiency: the problem is that humans are too energy
efficient. Your brain can keep functioning after 3 days of running across the
Savanna without food, which is (a) awesome, and (b) not really helpful
nowadays. The cost of this is that you can only usefully use a little energy
each day, say 4 or 5 burgers at most. AGI prototypes will usefully slurp in
power measured in number of reactors.

------
iotb
Did Artificial General Intelligence get redefined again towards something more
short-term? So, first AI is hijacked and hyped 50 ways to Sunday. Then came
the apocalyptic narratives/control-problem hype to secure funding for
non-profit pursuits and research. Then, when it was realized that narrative
couldn't go on forever, AGI was hijacked, added to everyone's charter, and a
claim was made that it would be made 'safe'. Now, AGI's definition is getting
redefined to the latest Weak AI techniques that can do impressive things on
insane amounts of compute hardware. How can you ever achieve AGI if this is
the framing/belief system the major apparatus of funding/work centers on?
Where is the true pursuit of this problem? Where are the completely new
approaches?

One cannot rule out something unless they've spent a concerted amount of time
dedicated solely to trying to understand it. If there is no fundamental
understanding of human intelligence, what is anyone frankly talking about? Or
doing?

I have yet to hear a cohesive understanding of human intelligence from the
various AI groups. I have yet to hear a sound body of individuals properly
frame a pursuit of AGI. So, what is everyone pursuing? There seems to be no
grand vision or lead wrangling in all of these scattered add-on techniques to
NNs. I do see a lot of groups working on Weak AI, or chipping away at AGI-like
feature sets with AI techniques while making claims about AGI. Everyone has
become so obsessed with iterating that they fail to grasp the proper
longer-term technique for resolving a problem like AGI.

Absent from the discussion are conversations on neuroscience and the
scientific investigation of intelligence. There's more sound progress being
made in the public sector on concepts like AGI than in the private sector,
mainly because the public sector knows how to become entrenched, scope, and
target an unproven long-term goal and project.

The hype, as far as I see it, is clearly distinguishable from the science.
Without honest and sound scientific inquiry, claims in any direction are
without support. Everyone's attempting to skip the science and pursue
engineering in the dark with flashy public exhibitions, mainly because of
funding. You can't exit such a room and make sound claims about AGI. If a
group claims they are pursuing AGI, I expect almost all of their work to be
scientific research pursuing an understanding.

That being said, it appears no one is interested in funding or backing such an
endeavour. Everyone states they want to back/invest in such a group on paper,
but when it comes down to it the money isn't there; they are obviously
targeting shorter-term goals/payouts, and/or frankly don't know what type of
pursuit or group of individuals is required. No one wants to take the time to
understand what such a group would look like. No one wants to make a truly
longer-term bet. This is why things have been spiraling in circles for years.

So, as it has been stated time and time again: AGI will come, and it will come
from left field. There are individuals who truly care to pursue and develop
AGI, and they're willing to sacrifice everything to achieve it. If no funding
is available, they'll fund themselves. If groups won't accept them because
they aren't obsessed with deep learning or don't have a PhD (clearly the
makeup that only results in convoluted Weak AI), they'll start groups
themselves.

Passion + capability + lifelong pursuit is how all of the great discoveries
have come to us. The mainstream seemingly never understands such individuals,
supports them, or believes them until after they've proven themselves. No
pivots. No populist iterations. A fully entrenched dedication towards
achieving something until it's done.

So, no... you can't rule out AGI in the near term, because there is no
spotlight on the individuals or groups with the capability to develop it on
such time horizons, and the thinking frankly just isn't there in celebrated
groups with funding. Everyone's in the dark, and it's an active choice and
mindset which causes this.

Geoffrey Hinton says start all over... Yann LeCun raises red flags. No one
listens. No one acts. Everyone wants a piece of the company that develops the
next trillion-dollar 'Google'-like product space centered on AGI, but no one
wants to spend the time to consider what such a company would be, what human
intelligence is, or who is looking at it in a new way from scratch, as some of
the most important people in AI have urged. So, you see... this is why the
unexpected happens. It is unexpected because no one spends the time or
resources necessary to cultivate the understanding to expect its coming.

------
cvaidya1986
Nope I’ll build it.

------
tim333
>Can we rule out near-term AGI?

In keeping with Betteridge's law, no, not really. Hardware capabilities are
getting there, as evidenced by computers trashing us at Go and the like, and
with thousands [0] of the best and brightest going into AI research, who's to
know when someone is going to find working algorithms?

[0] [https://www.linkedin.com/pulse/global-ai-talent-pool-
going-2...](https://www.linkedin.com/pulse/global-ai-talent-pool-
going-2018-jean-fran%C3%A7ois-gagn%C3%A9/)

~~~
zackmorris
Not sure why you're getting down voted for this since I was going to say the
same thing. My feeling is that AGI is 10 years away, certainly no more than
20. That's coming from a pure computing power and modeling perspective, for
example just by putting an AI in a simulated environment and letting it brute
force the search space of what we're requesting of it with 10,000 times more
computing power than anything today. Finding a way for an AI to focus its
attention at each level of its capability tree in order to recruit learned
abilities, and then replay scenarios multiple times before attempting
something in real life, are some of the few remaining hurdles and not
particularly challenging IMHO.

The real problem though (as I see it) is that the vast majority of the best
and brightest minds in our society get lost to the demands of daily living.
I've likely lost any shot I had at contributing in areas that will advance the
state of the art since I graduated college 20 years ago. I think I'm hardly
the exception. Without some kind of exit, winning the internet lottery
basically like Elon Musk, we'll all likely see AGI come to be sometime in our
lifetimes but without having had a hand in it.

And that worries me, because if only the winners make AI, it will come into
being without the human experience of losing. I sense dark times looming,
should AI become self-aware in a world that still has hierarchy, that still
subverts the dignity of others for personal gain. I think a prerequisite to
AGI that helps humanity is for us to get past our artificial scarcity view of
reality. We might need something a little more like Star Trek where we're free
of money and minutia, where self-actualization is a human right.

~~~
iotb
Entrenched mindsets don't like having the flaws in their views highlighted.
It's one of humanity's most serious flaws. As far as AGI being 10-20 years
away based purely on compute power: you can't make this statement accurately
unless you have a firm understanding of the underlying algorithms that power
human intelligence and, by extension, AGI. From there, you also need a formal
education and deep industry experience with hardware to know what its
capabilities are today, what they will be roadmap-wise in the future, and how
to most efficiently map an AGI algorithm to them. I'd say that 0.1% of people
have this understanding, and nobody is listening to them.

> The real problem though (as I see it) is that the vast majority of the best
> and brightest minds in our society get lost to the demands of daily living.
> I've likely lost any shot I had at contributing in areas that will advance
> the state of the art since I graduated college 20 years ago. I think I'm
> hardly the exception. Without some kind of exit, winning the internet
> lottery basically like Elon Musk, we'll all likely see AGI come to be
> sometime in our lifetimes but without having had a hand in it.

They don't get lost so much as they become trapped, for reasons due to
systematic and flawed optimization structures found throughout society. All is
not lost if one breaks out long enough to realize they can make certain
pursuits if they are willing to make a sacrifice. The bigger the pursuit, the
bigger the required sacrifice. Not many people are willing to do that in the
valley when you have a quarter-of-a-million-dollar paycheck staring you in the
face. You could of course decide to sacrifice everything one given day, and
you'd easily have 5 years of runway if you saved your money properly.
Obviously, VC capital won't fund you. Obviously universities aren't the way to
go, given the obsession with Weak AI. Obviously no AI group will hire you
unless you have a PhD and/or are obsessed with Weak AI. Obviously you might
not even want this, as it will cloud your mind. So, clearly, the way to make
ground-breaking progress is to walk off your job, fund a stretch of research
yourself, and be willing to sacrifice everything. Quite the sacrifice? People
will laugh at you. What happens if you fail? Socially, per the mainstream
trend, you'll fall behind. If you have a partner, this will be even more
difficult, as the trend is to get rich quick, get promoted to management, buy
a million-dollar home, have kids, and stay locked in a lucrative position at a
company. And what of your pride? Indeed... And therein is the true pursuit of
AGI.

The winners are pushing fundamentally flawed AI techniques because they
require massive amounts of data and compute, which is their primary business
model. They won't succeed because they are optimizing a business model that is
at the end of its cycle, not optimizing the pursuit of AGI.

AGI is coming, and it is completely out of the scope of the current winners.
If a person desires to pursue and develop AGI, they'd have to be bold enough
to sacrifice everything... It's how all of the true discoveries have been made
for all of time and science. Nothing has changed, but for reasons due
primarily to money, when the historical lessons are far enough off, people
attempt to re-tell/re-invent the wheel in their favor... only to be reminded:
nothing has changed.

The individual discoverers change over time, however, for they learn from
history.

~~~
zackmorris
Well, what I'm saying is that we can derive what your first paragraph demands
purely with computing power. What we need are computers with roughly 100
billion cores, each at least capable of simulating a neuron (maybe an Intel
80286 running Erlang or similar), and a simple mesh network (more like the
web) that's optimized for connectivity instead of speed. This is on the order
of 100,000*100,000,000,000 = 1e16 transistors, or about 7 orders of magnitude
more than an Intel i7's billion transistors. It would also be running at at
least 1 MHz instead of the 100 or 1000 Hz of the human brain, so we can
probably subtract a few orders of magnitude there. I think 10,000 times faster
than today is reasonable, or about 2 decades of Moore's law applied to video
cards.
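
As a sanity check on those figures (my arithmetic only, using the numbers as
given above):

    import math

    neurons = 100e9                  # ~100 billion cores, one per neuron
    transistors_per_core = 100e3     # rough 80286-class core
    total = neurons * transistors_per_core
    i7 = 1e9                         # ~1 billion transistors in an i7

    print(f"{total:.0e} transistors")                             # 1e+16
    print(round(math.log10(total / i7)), "orders of magnitude over an i7")

    # A 10,000x speedup at one doubling every ~18-24 months:
    print(f"{math.log2(10_000):.1f} doublings")   # ~13.3, i.e. roughly two
                                                  # decades of Moore's law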

Then we feed it scans of human minds doing various tasks and have it try
combinations (via genetic algorithms etc) until it begins to simulate what's
happening in our imaginations. I'm arguing that we can do all that with an
undergrad level of education and understanding. Studying the results and
deriving an equation for consciousness (like Copernicus and planetary orbits)
is certainly beyond the abilities of most people, but hey, at least we'll have
AGI to help us.

Totally agree about the rest of what you said though. AGI <-> sacrifice. We
have all the weight of the world's 7 billion minds working towards survival
and making a few old guys rich. It's like going to work every day to earn a
paycheck, knowing you will die someday. Why aren't we all working on inventing
immortality? As I see it, that's what AGI is, and that seems to scare people,
forcing them to confront their most deeply held beliefs about the meaning of
life, religion, etc.

~~~
iotb
You're focusing on an aspect of neurons of which there isn't even an accurate
understanding, and attempting to make a direct mapping to computer hardware.
This is framing without understanding, and you should be able to see clearly
why you can't make analyses or forward projections based on it.

Video cards operate on a pretty limited scope of computing that might not even
be compatible with the neuron's fundamental algorithm. The only thing SIMD has
proven favorable towards is basic mathematical operations with low divergence,
which is why optimization-algorithm-based NNs function so well on them.

This is the entrapment many people in the industry fall for. The first step
towards AGI is admitting you have zero understanding of what it is. If one
doesn't do this and simply projects their schooling/thinking and tries to go
from there, you end up with a far lesser accomplishment.

You can't back-derive aspects of this problem. You have to take your gloves
off, study the biology from the bottom up, and spend the majority of your
time in the theoretical/test space. Not many are willing to do this even in
the highest-ranking universities (which is why I didn't pursue a PhD).

There is far too little motivation for true understanding in this world, which
is why the majority of the world's resources and efforts are spent on circling
the same old time-tested wagons: creating problems, then creating a business
model to solve them. We are only fooling ourselves in these mindless
endeavors. When you break free long enough, you see it for what it is and also
see the paths towards more fundamental pursuits. Such pursuits aren't socially
celebrated or rewarded. So, you're pretty much on your own.

> As I see it, that's what AGI is, and that seems to scare people, forcing
> them to confront their most deeply held beliefs about the meaning of life,
> religion, etc.

One thing about this interesting Universe is that when a thing's time has
come, it comes. It points to a higher order of things. There's great reason
and purpose to address these problems now, and it's why AGI isn't far off. If
you look at various media/designs, society is already beckoning for it.

~~~
zackmorris
You know, I find myself agreeing with pretty much everything you've said
(especially limitations of SIMD regarding neurons etc). I'm kind of borrowing
from Kurzweil with the brute force stuff, but at the same time I think there
is truth to the idea that basic evolution can solve any problem, given enough
time or computing power.

I guess what I'm getting at, without quite realizing it until just now, is
that AI can be applied to ANY problem, even the problem of how to create an
AGI. That's where I think we're most likely to see exponential gains in even
just the next 5-10 years.

For a concrete example of this, I read Koza's Genetic Programming III back
when it came out. The most fascinating parts of the book for me were the
chapters where he revisited genetic algorithm experiments done in previous
decades, but with orders of magnitude more computing power at hand, so that
they could run the same experiment repeatedly. They were able to test meta
aspects of evolution and begin to come up with best practices for deriving
evolution tuning parameters, which reminded me of tuning neural net
hyperparameters (still a bit of an art).
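
For flavor, here's a minimal toy GA in the same spirit (entirely my own
illustration, nothing to do with Koza's actual experiments; the bit-string
problem, POP_SIZE, MUT_RATE and the selection scheme are made up, and they're
exactly the kind of tuning parameters those meta-experiments explored):

    import random

    TARGET_LEN = 20                       # toy objective: maximize ones
    POP_SIZE, GENERATIONS, MUT_RATE = 50, 200, 0.02

    def fitness(ind):
        return sum(ind)                   # count of 1 bits

    def mutate(ind):
        return [bit ^ (random.random() < MUT_RATE) for bit in ind]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break
        elite = pop[:POP_SIZE // 5]       # truncation selection: top 20%
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(POP_SIZE - len(elite))]

    print("best fitness", fitness(pop[0]), "at generation", gen)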

Thanks for the insight on higher-order meaning. I've felt something similar
lately, seeing the web and the exponential growth of technology as some kind
of meta-organism recruiting all of our minds/computers/corporations.

