
Where will artificial general intelligence come from? - nshr
https://docs.google.com/presentation/d/119VW6ueBGLQXsw-jGMboGP2-WuOnyMAOYLgd44SL6xM/edit#slide=id.p
======
karpathy
I gave this talk a while ago to a small group of attendees. It was not
recorded (I saw some ask below). It's based on a document I wrote a while ago
called "You suck at writing AI" (never published). The basic argument was that
people are comically inadequate at writing complex code. You can't write the
code to detect a cat in an image and the correct thing to do is to give up,
write down an objective that measures the desiderata and pay with compute to
search a function space for solutions. In the same vein, the idea of writing
an AGI and all of its cognitive machinery is preposterous and the correct
thing to do is to give up, think about the objective and search the program
space for solutions. Unfortunately, the mindset of decomposition by function
(see Brooks ref), which has worked so well for us in so many areas of
scientific inquiry, is just about the most misleading mindset when it comes to
AGI.

~~~
olegkikin
Fortunately you don't have to write an AGI yourself. We have a very powerful
tool called evolution that can do all the heavy lifting for us; we just have
to set up the environment and the goals. I'm pretty sure we could create an
AGI given enough computational power and time - we're basically hardware-
limited.
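
For what it's worth, the outer loop being described is easy to sketch. A toy
version in Python (the bit-counting `fitness` is a hypothetical stand-in for
"the environment and the goals", nothing more):

    import random

    def fitness(genome):
        # Toy objective: count the 1-bits. A real setup would score
        # behavior inside some environment instead.
        return sum(genome)

    def evolve(pop_size=100, genome_len=32, generations=200, mut_rate=0.02):
        pop = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]                 # selection
            children = [[bit ^ (random.random() < mut_rate) for bit in p]
                        for p in survivors]                 # mutation
            pop = survivors + children
        return max(pop, key=fitness)

    print(fitness(evolve()))  # climbs toward genome_len over generations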

~~~
goatlover
Nature had a planet and several billion years. Of course the goal wasn't AGI,
it was survival, and horseshoe crabs have done a pretty good job of that. So
have beetles, with their large number of species. How would you select for
only AGI? It would be like selecting for only the greatest eyes (mantis
shrimp), but doing it over the entire tree of life. You still need a way to
narrow down to the best eyes.

~~~
lgas
Survival is not a goal, it's just the score keeping system.

~~~
akvadrako
What, oh wise one, is the goal?

~~~
LukaEn
Nothing. Evolution came about randomly, not by intent, so it has no goal.

~~~
akvadrako
That's pretty meaningless - in that case the word "goal" has no meaning at
all, because everything is just as random as evolution (which isn't very
random, but whatever).

~~~
visarga
Evolution is not random. It requires self-replication in order to transmit
and evolve genes. Everything else is much more random.

------
rdlecler1
Background: I did AI and Philosophy of Mind in undergrad, an MSc focused on
ALife, then a PhD at Yale under a Macarthur Fellow who developed the
theoretical framework for the 'evolution of Evolvability' where I worked on
computational evolutionary biology. I can say we're not going to blindly brute
force our way forward, but instead we'll need to reverse engineer nature's
core algorithms to generate hard AI. Every time a major advance is made in AI
the computational neuroscientists say: "Why didn't you talk to us 15 years
ago? We could have told you that!" Those ingredients will be embodiment,
evolution, genetics (genotype-phenotype encoding), neurogenesis (gene
regulatory networks directing phenotypic development from a single cell to a
multicellular neural network), and ecology (evolving in adversarial and
cooperative environments). And we'll need a lot of theoretical work in how to
represent nature's algorithms in code. For example my PhD work just focused on
how to use evolutionary algorithms to evolve simple gene regulatory networks
and how that leads to properties of modularity in the genotype-phenotype map.
That alone is life's work but a necessary ingredient. I don't expect to see
this solved in my lifetime given how we're attacking the problem (head on)
today. And until then we're going to continue to run into these dark winters
of AI.
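
To give a flavor of what "evolving gene regulatory networks" can mean in code,
here is a heavily simplified sketch (the dynamics and parameters are
illustrative assumptions, not the models from that research): the genotype is
a gene-interaction matrix, "development" iterates its dynamics from a uniform
initial state to a settled expression pattern (the phenotype), and selection
acts on that phenotype.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 8                                  # number of genes
    TARGET = rng.random(N)                 # desired expression pattern

    def develop(W, steps=20):
        # Iterate a toy regulatory dynamic: expression levels settle
        # under the interaction matrix W (the "genotype").
        x = np.full(N, 0.5)
        for _ in range(steps):
            x = 1.0 / (1.0 + np.exp(-W @ x))   # sigmoidal regulation
        return x                               # the "phenotype"

    def fitness(W):
        return -np.sum((develop(W) - TARGET) ** 2)

    pop = [rng.normal(0, 1, (N, N)) for _ in range(50)]
    for generation in range(200):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:25]                       # selection on phenotypes
        pop = elite + [w + rng.normal(0, 0.1, (N, N)) for w in elite]
    print(fitness(pop[0]))                     # approaches 0 from below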

~~~
mythrwy
Background: Some guy on the Internet with an unrelated Bachelor's degree he
didn't study that hard for, who has picked up a little Python and JavaScript
somewhere along the way.

I know of no biological forms that have evolved wheels. But wheels have turned
out to be a hell of a lot faster than fins or legs. I see no reason
"intelligence" has to follow the pathways or limitations of biology or
neurology at all. Although certainly it may be a place to look for some ideas.

~~~
MarkMc
I would also point to heavier-than-air flight. For thousands of years people
tried to mimic the flapping wings of birds, but in the end it was the
unnatural propeller and fixed-wing design that gave us flight.

~~~
serf
Fixed-wing flight isn't unknown to the natural world.[0]

Flagella use propeller motion for propulsion.[1]

Lots of animals use jets.[2]

I'd consider the combustion engine to be the most novel 'human' design with
regards to aircraft propulsion -- or the _way_ we achieve jet propulsion;
that's pretty unique.

[0]:
[https://en.wikipedia.org/wiki/List_of_soaring_birds](https://en.wikipedia.org/wiki/List_of_soaring_birds)

[1]:
[https://en.wikipedia.org/wiki/Flagellum](https://en.wikipedia.org/wiki/Flagellum)

[2]:
[https://en.wikipedia.org/wiki/Jet_propulsion](https://en.wikipedia.org/wiki/Jet_propulsion)

------
CuriouslyC
Artificial general intelligence won't be invented; it will emerge, just as
general intelligence did in the wild. AGI is just going to be a hierarchical
arrangement of specialized tools.

The first step is highly specialized AI tools for very specific problem
domains.

The second step is AI tools that use other AI tools as components but address
slightly larger problem domains.

The third and successive steps are recursions of the second step.

Additionally, we won't be able to tell right away when we've crossed the
threshold. We can't even say for sure where "intelligence" stops in animals.
We used to think we were the line, but now the bar has been pushed down to
include primates, cetaceans, a number of birds, possibly some members of the
bear family, etc. The reality is that it is a gradient and there is no clear
line.
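
A rough sketch of that recursion in Python, with entirely hypothetical tool
names; the point is only that a composite tool has the same type as its
components, so the composition can recurse indefinitely:

    from typing import Callable, Dict

    Tool = Callable[[str], str]

    # Step 1: highly specialized tools (stubs standing in for real models).
    def detect_objects(image_path: str) -> str:
        return f"objects in {image_path}"

    def transcribe_audio(audio_path: str) -> str:
        return f"transcript of {audio_path}"

    # Step 2: a tool built from other tools, covering a wider domain.
    def describe_scene(paths: str) -> str:
        image, audio = paths.split(",")
        return detect_objects(image) + " + " + transcribe_audio(audio)

    # Step 3+: the same move again - a router over sub-tools is itself
    # just another Tool that a still-higher layer could call.
    REGISTRY: Dict[str, Tool] = {
        "vision": detect_objects,
        "speech": transcribe_audio,
        "scene": describe_scene,
    }

    def agent(task: str, payload: str) -> str:
        return REGISTRY[task](payload)

    print(agent("scene", "cat.jpg,meow.wav"))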

~~~
orthoganol
It's interesting how people come out of the woodwork with their personal
theories on AGI. Do you/we even really know how general intelligence works,
or even how it emerged - incrementally, or in a dramatic mutation more
recently? Last I checked there wasn't a scientific consensus on either topic.
To then come out headstrong and say "AGI will be like X" always makes these
AGI conversations a tad farcical.

~~~
CuriouslyC
Intelligence clearly isn't a one-off/recent thing, since we observe remarkably
intelligent behavior from cephalopods, which are vastly distant in the tree of
life and not recent from an evolutionary perspective. We also know from
_many_ animal and human studies that intelligence is clearly not a binary
attribute.

The fact that we don't know how higher-order intelligence works in general is
exactly why it will be emergent rather than designed.

You shouldn't worry so much about consensus, but instead use your senses and
your brain to make up your own mind. That is the approach that gave us the
Enlightenment.

~~~
orthoganol
Ok, but when I say AGI and "recently emerged" I am talking about general human
intelligence, not mollusks. You could easily argue that we will one day find
out how higher-order intelligence emerged; some researchers already have
models for it, as any college anthropology textbook will show. They may not be
right, but it's not out of the realm of possibility that it emerged recently,
via new structure(s) ("design") arising over a relatively short time period.
So the claim "since we don't know how it happened, it must be like X" is
flawed.

You know what French Enlightenment thinkers were also against? Making
headstrong claims (as an authority) without empirical evidence on your side :)

~~~
CuriouslyC
I think "what humans are" is not a useful definition of intelligence. Not only
is it not useful, but it's likely to lead us down a blind alley in intelligent
systems research. Kind of like if we assumed when trying to build a flying
machine that the only way it could work is if it flapped its wings.

Human intelligence definitely did not arise "in one day" and any
anthropologist positing such a theory hasn't looked at the last 30 years of
research in statistical genetics. Intelligence is an incredibly complex trait
resulting from the interaction of many, many genes.

------
eb3c90
I'm working on a technology that I think might enable either IA (intelligence
augmentation) or AI. Basically, intelligences manage their own programs and
the computational resources allocated to those programs, so I'm looking at
doing that with markets.

With IA the user acts as feedback to the market about what is good or bad.
Ideally it would act as an external brain lobe. More information on my
approach is on this blog
[https://improvingautonomy.wordpress.com/2017/07/25/why-
study...](https://improvingautonomy.wordpress.com/2017/07/25/why-study-
resource-allocation/)

~~~
eb3c90
This might be a better link. It explains how we might get to IA. TL;DR: it's a
mix of machine learning with different parameters and inputs/outputs, plus
language translation into programs, with the economy acting as the force
guiding this evolving set of programs.

[https://improvingautonomy.wordpress.com/2017/08/22/a-possibl...](https://improvingautonomy.wordpress.com/2017/08/22/a-possible-
path-to-intelligence-augmentation/)

~~~
stephengillie
Interesting idea. Instead of thinking of computing as an authoritarian schema,
it could be a community schema, using a kanban or currency system to
communicate resource needs between units.

It's also similar to negotiating memory overcommitment in virtualization.
VMware's driver in each VM "inflates a memory balloon" to communicate host
memory constraints to the client VMs. This is often done when VMs have been
allocated 125%-300% of the host's physical RAM, and it forces clients to swap
more.

~~~
eb3c90
Yep - while people have to worry about the memory/CPU usage of the programs
inside a computer, we are probably going to be stuck with just narrow AI.
General intelligence needs the ability to trade off resources between
different programs doing different things (these may be learning different
things, processing data or doing other computational tasks).

Also, we get malware because we expect the user to be a good, knowledgeable,
authoritarian manager of a system who never runs a bad program and can get
rid of a bad one when it appears. That just isn't realistic.
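
As a toy illustration of the market idea (every name and number here is
hypothetical): programs carry a "usefulness" score, the market converts
scores into compute shares, and feedback reprices them - so a bad program
gets priced out rather than needing an all-knowing admin to kill it.

    # Programs bid for compute from a fixed budget in proportion to
    # how useful they have recently been.
    programs = {"vision": 1.0, "planner": 1.0, "scraper": 1.0}
    BUDGET = 100.0   # units of compute per tick

    def allocate(usefulness):
        total = sum(usefulness.values())
        return {name: BUDGET * u / total for name, u in usefulness.items()}

    def tick(usefulness, feedback):
        # feedback > 1 rewards a program, < 1 penalizes it; a misbehaving
        # program loses purchasing power instead of requiring vigilance.
        for name, f in feedback.items():
            usefulness[name] *= f
        return allocate(usefulness)

    print(tick(programs, {"vision": 1.2, "planner": 1.0, "scraper": 0.5}))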

------
NumberCruncher
If you take a look at the evolution of the most advanced non-artificial
general intelligence, i.e. human intelligence, it is strongly connected to the
evolution of communication. It is a question of efficiency whether you learn
through your own experience and failures or through the experience and
failures of others. This teaching/learning process was boosted by the use of
pictures, spoken language, and hand-written and printed books. This is why I
believe artificial general intelligence will be taught by another artificial
general intelligence, and this evolution will be somehow connected to language
processing. As far as I know, Google tries to train its AIs through human
input, e.g. to recognize animals drawn by humans. I consider that one of the
first steps in the right direction.

~~~
viewtransform
I would argue that intelligence is connected to the evolution of sensing at a
distance. Vision in particular allowed life to evaluate the state of the
environment at a distance and allowed for the evolution of strategies to
predict and respond to the environment in real time. The progression in
intelligence from sponge to amphibian to mammal is related to the evolution of
finer sensing of the environment at a distance: vision, smell, sound etc.

~~~
naikrovek
'Situational awareness' is maybe a better phrase. Predicting the future was
probably next. Knowing that sunset is soon or that rain is coming requires
situational awareness, long term memory, short-term memory, and all kinds of
other stuff.

A realistic set of stimuli, a LOT of artificial neurons, and a lot of time
will probably get there, eventually.

------
Will_Do
That was really interesting.

I'm curious why he's so pessimistic about the brain-simulation approach. Yes,
it's the boring and obvious approach, but it also seems the most direct.

Also found this quote interesting

> Might have to make it illegal to evolve AI strains or an upper bound of
> computation per person and closely track all computational resources on
> earth.

~~~
visarga
The brain is slow and redundant. It has to be like that because it is not
produced in a factory - it is created by self-replication, which imposes
strict limits and requirements on the type of brain that can be built. AI
neurons, on the other hand, are perfect - they never get old or tired and
always remember. A neural net like ResNet-152 is capable of doing essentially
what a third of the brain does (vision). We can achieve superhuman results in
vision with far fewer neurons, and faster. This is the kind of logic that
makes brain emulation a far-flung possibility compared to current-day deep
neural nets.

That, and the fact that the brain-simulation guys don't have anything to show
for it. There are no human-level tasks that have been replicated by this
approach yet.
yet.

~~~
CuriouslyC
The brain is slow in "cycles"/second, but the amount of computation done by
each cycle isn't directly comparable to that done by a computer.

Forgetting isn't a bug, it's a feature. Forgetting is basically like
dimensionality reduction on input data - we extract the principal
components/exemplars, remember a weighting, and trash the redundancy. Training
a ML model is a lot faster on smaller data, and the same is true for us.
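
The analogy can be made concrete with a standard dimensionality-reduction
sketch (plain numpy SVD; an illustration of the principle, not a model of
memory):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 50))     # 1000 noisy "experiences"

    # "Forget" by keeping only the top-k principal directions plus the
    # per-experience weights; the redundancy is discarded.
    k = 5
    mean = data.mean(axis=0)
    U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
    weights, exemplars = U[:, :k] * S[:k], Vt[:k]

    # "Recall" reconstructs approximate experiences from the summary.
    recalled = weights @ exemplars + mean
    print(data.size, "->", weights.size + exemplars.size, "numbers kept")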

Don't compare ANNs with the brain on raw speed alone. Time isn't the only
factor; power consumption and heat production matter too, and if you include
them the brain comes out way ahead.

People with an engineering background almost universally underestimate how
freaking awesome biology is. Our brains are self-constructing, self-
replicating, self-repairing (mostly), hyper-efficient pattern-recognition
systems. The more we learn about them the more awesome we realize they are.
Don't be so arrogant as to assume a few hundred years of engineering will
universally eclipse hundreds of millions of years of evolution.

~~~
mythrwy
Evolution creeps along.

Engineering ability however appears to be growing exponentially. It took
millions of years for the brain to reach the place it is now but it only took
a few thousand years to get to the moon and create the Internet. I wouldn't
count out the power of engineering because biology is complex. Particularly
given the ever more powerful tools of computation and communication that have
only recently (historically speaking, not in lifespans of JavaScript
frameworks) come online.

~~~
CuriouslyC
Two points:

\- The first part of a sigmoidal curve looks exponential.

\- Evolution is _massively_ parallel.

------
chanakya
What is the best book/reference to understand _why_ there seems to be general
agreement that AGI/"broad" AI will happen? TFA compares the relative
likelihood of the various approaches, but says nothing about the absolute
likelihood of any of them. Are there signs of AGI we can see today? Is there
an argument/data which links the huge improvements we're seeing in narrow AI
to the likelihood of AGI?

~~~
ktRolster
The best argument I've heard is that we can use a computer to model any
physical process; the brain is a physical process, therefore we can use a
computer to model the brain.

If you think that there is some process in the brain that would be
theoretically impossible for computers to model, that would be an interesting
topic of discussion.

~~~
chanakya
The brain is a physical entity, yes, so in theory we should be able to model
it, assuming we know all the laws it works on with enough precision. This is a
big if, but even if that's granted, is there anything which indicates that
this is imminent?

~~~
fasquoika
I think it depends on how you define "imminent". If we're talking a hundred
years, well, a lot can happen in that time. We didn't even have computers a
hundred years ago, and now they can do certain things that are considered
particularly "human", like have a fairly coherent conversation

~~~
gradys
I'd be interested to see an example of an AI having a fairly coherent
conversation. Most of the impressive-seeming examples of this are based on
response ranking: when a message from a human comes in, the system ranks the
responses in its repository of human-written responses to find one that fits.

Because the responses are human-written, they can seem locally coherent, but
since the model generally isn't tracking any state, the conversation never
really goes anywhere. Also, if you try to talk to one of these response
rankers about something it doesn't have any canned responses for, obviously it
doesn't work.

Natural language generation on the other hand, where the model writes each
response character-by-character or word-by-word, has the potential to do
something much more interesting, but the state of the art there isn't quite at
the level of "fairly coherent" AFAIK.
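
A minimal response ranker, to make the mechanism concrete - toy bag-of-words
cosine ranking over a handful of canned responses (real systems use learned
rankers, but the shape is the same):

    from collections import Counter
    import math

    CANNED = {
        "hi there": "hello, how are you?",
        "what's the weather like": "it's sunny today.",
        "tell me a joke": "why did the chicken cross the road?",
    }

    def vec(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def reply(message):
        # Rank stored prompts against the message. No state is tracked,
        # which is exactly why such bots never "go anywhere".
        best = max(CANNED, key=lambda p: cosine(vec(message), vec(p)))
        return CANNED[best]

    print(reply("how is the weather"))   # -> "it's sunny today."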

------
FullMtlAlcoholc
However it originates, it will need a body to experience sensations firsthand,
not pre-recorded or simulated data. Perhaps connected IoT devices will be
sufficient.

Also, AGI will not be invented. It will arise as an emergent phenomenon, and
it may already have achieved what we call consciousness.

Somewhat off topic: Another phenomenon that people should be on the watch for
is "Articial Out-telligence", a phrase coined by Eric Weinstein. [0] It
describes strategies used by organisms with no known brain to get more
intelligent creatures to do its bidding, wittingly or unwittingly. The
cordyceps fungus, toxoplasma gondii, and pollinating plants that need insects
to spread their seed are examples of how an organism with no known
neurological network can "outsmart" more advanced organisms.

One scenario involving AI: a system developed to maximize each individual
user's time on a site/app by using online data about that person to find
their particular addictions.

[0]
[https://www.youtube.com/watch?v=Wu8s0tp9yzY](https://www.youtube.com/watch?v=Wu8s0tp9yzY)

------
indifferentalex
Not on their radar, or at least not on their slides: Natural Language
Processing-based, rule-based, brute-force artificial intelligence (which could
be augmented with sensors/motors that allow interaction with the external
world). A Vulcan-like (Star Trek) AI - what do you think? It might be easier
to simulate the entire brain; on the other hand, this might be doable and
could bridge the gap to general AI.

~~~
nopinsight
Rule-based NLP has been tried for several decades and has (very) limited
success in the real world. Current systems based on deep learning beat it for
most complex tasks. DeepL, which was on the HN front page a few days ago, is
the latest example:
[https://news.ycombinator.com/item?id=15122764](https://news.ycombinator.com/item?id=15122764)

~~~
orthoganol
You're going to have to elaborate on "complex tasks". I would argue that the
majority of successful, money-generating software built on NLP/NLU, i.e. the
majority of the industry, is "rule-based" (used in a general sense to mean
non-DL): personal assistants, search, chatbots, etc.

------
aabajian
These arguments about AGI all seem to overlook that our computational model is
still very Turing-constrained. It's a clock-based, sequential model where
calculations happen one after another in time. Even with multi-core and
distributed computing, you're still bottlenecked by the final integration step
(two cores sharing the result of their calculation). There is _no_ central
place in our brains where thoughts begin and end. A CPU's clock and ALU are
simply not analogous to the human mind. As far as we know, human intelligence is a
constant, dynamic interaction between all neurons in our brains, any one of
which is capable of originating a signal. I personally think we _will_ develop
AGI, but with a different computational model. I don't know enough about
quantum computing to even comment on it, but I do have a background in
medicine (MD) and computer science (MSCS).

------
ThomPete
Intelligence is emerging just like it did with humans. It's not a thing, so it
won't come from somewhere. As always, solving the small problems will
eventually allow solutions to emerge that we weren't aware of, and that might
turn into human-like - or, more probably, technology-like - intelligence far
surpassing humans.

I always find it fascinating that we have no problem accepting that we became
intelligent over time and out of nowhere (unless you are religious, which is a
whole other discussion), and we even have no issue imagining that life and
intelligence could have arisen elsewhere in the universe. But the idea of a
non-carbon-based intelligence is a big debate, as if it's somehow unimaginable
that AI could emerge by human hand while we have no problem entertaining the
idea that our intelligence is somehow a unique snowflake.

~~~
AsyncAwait
I think the problem is not that we cannot accept evolving general AI by
solving much smaller problems first; rather, we're very impatient and don't
want to wait for the evolution to take place.

~~~
ThomPete
Yes, impatience is probably one of the most underestimated issues when it
comes to humans and progress in general.

------
WheelsAtLarge
I'm waiting for the day when there is an IA OS. Basically, there would be a
natural-language processor that determines which sub-AI app to run. It's not
true general AI, but if it's done broadly enough it will seem like it.

~~~
visarga
Google and FB have the most to gain from a capable conversational agent. The
moment such a system appears, it will start replacing the old interfaces, and
pretty soon eat the G-FB pie. If they are not on top of that wave, they'll
lose.

The current state of the art in dialogue AI is an agent that can reason based
on images, documents or tables of data. There is a lot of research into
attention and memory-augmented neural nets. I put my bet on graph-based neural
nets, which can better represent objects and relations in reasoning tasks.
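
One round of graph message passing, as a sketch of why such nets suit
relational tasks (numpy only; the weights are random here where a trained net
would learn them):

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy relational structure: 4 objects; A[i, j] = 1 if i relates to j.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    X = rng.normal(size=(4, 8))    # per-object feature vectors
    W = rng.normal(size=(8, 8))    # transform (learned in practice)

    def message_pass(A, X, W):
        # Each object updates its state from the mean of its neighbors'
        # states - relations stay explicit instead of being flattened
        # into one big feature vector.
        deg = A.sum(axis=1, keepdims=True)
        return np.tanh((A / deg) @ X @ W)

    H = message_pass(A, X, W)      # one step of relational reasoning
    print(H.shape)                 # (4, 8): updated object states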

------
fourfaces
I know where it will not come from. It will not come from the mainstream AI
community. They are married to and madly in love with deep learning. Deep
learning, the supervised kind, is a red herring.

AGI will require a revolutionary breakthrough, most likely from a maverick,
probably a lone wolf rebel, who is used to thinking outside the box.

------
Torai
So, when we say AGI, what do we mean? Is it about creating a new intelligent
"being", or about mimicking what we perceive as human intelligence inside some
hardware? I guess it's the first one.

And I guess AGI would be just one intelligent being, because there is no need
for more: they would communicate and share intelligence, so de facto there
would be only one.

Can all human intelligence also be understood as only one in some sense, given
that an isolated human without access to culture wouldn't be more than a
surviving animal?

And when defining intelligence's ingredients, isn't some sort of "motivation"
that drives someone to get better at something necessary? Humans have genetic
(survival), social, personal... motivations. How does that translate to AGI -
what could its motivation be?

------
exratione
I feel that those who argue for any approach other than running human brain
emulations, and then reverse engineering or speculatively modifying them, as
the most likely way to get to AGI have a pretty steep hill to climb in order
to justify that point of view.

Nothing else that is going on now, on the agenda, or even foreseeable offers a
plausible, definitive plan to get to AGI. Brain emulation, by contrast, is
clearly going to achieve that goal fairly shortly after the maps are good
enough and the computational capacity is large enough, and the experimentation
that follows is a far more reliable way to determine the underpinnings of
intelligence than present efforts at de novo construction.

~~~
visarga
I disagree. It's too expensive to run a low-level brain sim. In the meantime,
deep-learning-based AI has achieved superhuman or close-to-human results in
many tasks, such as image recognition, voice recognition, translation, car
driving and Go.

The AGI will be a reinforcement learning agent, as it will need to perceive
and act in the physical world. Thus the path to AGI is the path of RL. The
most essential piece in RL will be the development of environment simulators.
AlphaGo's environment was a trivial simulator - simple rules in a simple world
- but we need real-world simulators for AI agents to learn to act in.
Fortunately, simulation is almost the same as gaming, and there is huge
interest in it both for humans and for AI, so it will be developed fast.

So instead of simulating the brain, simulate the world (imperfectly) and run
deep neural net based RL to learn to act on top of it.
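
The shape of that RL loop is simple even when the environment is not. A
minimal tabular sketch, with a toy one-dimensional chain standing in for the
"world simulator" (everything here is illustrative):

    import random

    # Toy "world simulator": a 1-D chain; reaching the right end pays off.
    N_STATES, GOAL = 6, 5

    def step(state, action):              # action: 0 = left, 1 = right
        nxt = max(0, min(GOAL, state + (1 if action else -1)))
        return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

    for episode in range(500):
        s, done = 0, False
        while not done:
            if random.random() < eps:     # explore
                a = random.randrange(2)
            else:                         # exploit current estimates
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s2, r, done = step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    print([round(max(q), 2) for q in Q])  # values rise toward the goal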

~~~
grwthckrmstr
"I disagree. It's too expensive to run a low level brain sim."

Interesting. Could you tell me Why is it too expensive?

If it wasn't expensive, would that change things drastically, and make brain
sim a viable option?

~~~
visarga
The brain has 10^14 (100 trillion) synapses. Current-day neural nets barely
reach a hundred million parameters, with very few exceptions. Then, besides
compute, there is data movement - currently the bottleneck in AI is moving
data around, not computing. Imagine the interconnect for a brain-sized neural
net.
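
Rough numbers behind that claim (assuming, purely for illustration, 4 bytes
per synapse weight):

    synapses_brain = 1e14      # ~100 trillion synapses
    params_big_net = 1e8       # "a hundred million" parameters (2017)

    print(synapses_brain / params_big_net)         # ~1,000,000x gap
    print(synapses_brain * 4 / 1e12, "TB")         # ~400 TB of weights alone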

------
markan
The presentation briefly mentioned simulating the brain, but I think what's
more likely to succeed is mimicking the mind at a high level of abstraction
(i.e. a level we can study with introspective or even linguistic methods
rather than neuroscience). There's some precedent for this with projects like
Soar and ACT-R (and even some recent interest from mathematicians [1]). IMHO
this kind of methodology could be pushed much further.

[1] [https://arxiv.org/abs/1309.4501](https://arxiv.org/abs/1309.4501)

------
kowdermeister
Would someone be so kind as to translate / explain the math on slides 53 and
54 into simplish English?

What are the symbols (burst pipe, µ) representing on slide 55?

And why are the exclamation marks there on the next one?

~~~
letlambda
Consider every action that can be taken at this moment. For each possible
action, consider every possible future (out to infinity), weighted by its
likelihood.

There are exclamation marks because some of these terms present minor
practical problems. The whole "all possibilities out to the end of forever"
part is easier said than done.
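
If (and this is a guess) the slides follow Legg and Hutter's
universal-intelligence formulation, the "burst pipe" would be the Greek letter
Upsilon and µ an environment, along these lines:

    V^{\pi}_{\mu} = \mathbb{E}\left[ \sum_{t=1}^{\infty} r_t \,\middle|\, \pi, \mu \right]
    \qquad
    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

Read: V is how well policy π does in world µ (the expected total reward over
every possible future), and intelligence Υ averages that over all computable
environments E, weighted toward simple ones by Kolmogorov complexity K. The
sum over all of E and the expectation over all futures are exactly the terms
that earn the exclamation marks.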

------
scottlocklin
Speculating on where AGI will come from is sort of like speculating where
Faster than Light travel will come from. Except FTL has some vaguely plausible
physics behind it, and AGI-wise, we really have no idea what the "I" in AGI
means.

The mere fact that biological neural networks are rate-encoded might turn out
to be the one crucial thing that's practically impossible to simulate on a
von Neumann computer.

My vote: "we have no idea; probably not in my lifetime."

~~~
FLUX-YOU
Since you likely can't prove it, the existence of AGI will be a marketing
exercise.

------
segmondy
AGI will come from one or two people working by themselves, outside of
academia, in no more than 100k lines of code.

------
bryananderson
When he says "artificial life", is he referring to reinforcement learning?

~~~
letlambda
[http://www.alife.org/](http://www.alife.org/)

Artificial Life is a field with very fuzzy boundaries. Roughly, computer
systems that look like biological or ecological systems.

From an ALife perspective, life evolves to function in its ecology. The
problem is not building an AGI, it's building an ecology in which AGI will
emerge.

Oh... and hopefully, also one in which ruthlessly destroying other intelligent
agents isn't a good survival strategy.

------
temp-defualt
is there a link to the video of this talk ???

------
psadri
Do we consider simpler brains to exhibit general intelligence (e.g. a crow's)?
Is it a more tractable problem to replicate crow-level AI first, before
tackling humans?

~~~
AsyncAwait
We can't even model worms at the moment, so a crow might be far off still.

------
subru
Oh you humans. Genetic engineering, coupled with advances in
digital/consciousness interfaces will yield spontaneously appearing brains
with an API.

Good luck.

------
throwaway00100
Where will the philosopher's stone come from?

------
novaleaf
What is the point of slides without the underlying presentation? Slides are
glorified notes, not presentations or papers.

~~~
ilaksh
Well these slides seemed to give all of the most relevant details.

------
Torai
If this is a talk, is there any video of it?

------
bluetwo
Assuming a lot of people here are working on an AI or ML problem for work or
fun, what are you working on?

------
cerealbad
can a data center be shrunk down to the size of a consumer product within the
next 50 years?

will we all own one and store massive amounts of information for purely
selfish or inane reasons?

yes/yes - ai comes out of that.

no/no - we hit computing plateaus and ai becomes dm (decision maker), and we
all own a pdm.

------
yahyaheee
Really interesting slides - would love to see a talk or a more in-depth
write-up!

------
bra-ket
Artificial intelligence will come from understanding natural intelligence.

~~~
viewtransform
We generally accept that we 'know' something when the model used to explain
the system is simpler than the system itself.

The brain is a very high-dimensional, non-linear dynamical system. The number
of neurons in the brain is on the order of the number of trees in the Amazon
rain forest, and the number of synapses on the order of the number of leaves
on those trees.
[https://youtu.be/8FHBh_OmdsM?t=1165](https://youtu.be/8FHBh_OmdsM?t=1165)

We do not have the mathematical tools to understand such systems in general.
What if reductionism doesn't work and the best model of natural intelligence
is as complicated as the system itself? Can we say we understand natural
intelligence?

It could be the case, in the distant future, that we evolve an artificial
intelligence purely as a computation that is capable of understanding us,
while we are not capable of understanding it.

------
aaronsnoswell
Can someone explain where the gif image on slide 69 comes from?

------
evc123
What are "something(s) not on our radar"?

------
jboggan
Intelligence is an emergent property of self-replicating systems. I would file
that under "something else" since that seems so different from all the
approaches listed here.

------
staunch
IMHO: If an AGI from the future came back to 2017, it could almost certainly
create a new AGI from scratch on current hardware.

What would it type into its terminal?

~~~
syrrim
We, humans, are general intelligences, and we are incapable of creating AGIs
on modern hardware. What makes you think artificial variants would be any more
capable?

~~~
taneq
'Incapable' the way humans from the 1800s were 'incapable' of building
heavier-than-air flying machines? Every human invention in history was
'impossible' until we pulled it off.

------
sabujp
need video

------
nether
Goldman Sachs

------
wonderwonder
I think a very interesting aspect of general AI is that, while it is an
incredibly complex technology, it is not unrealistic that it could be created
for the first time in someone's home office. Unlike many other earth-changing
technologies, there is nothing a massive corporation has that a home tinkerer
does not (besides the obvious: money and many engineers).

With the rise of cloud computing and open source, everything I need I have
instant access to; all that's lacking is the core software, which can be
written (though of course that's not a trivial task).

While unlikely, it is still quite amazing that in a few years an AI could
awaken 3 doors down at my neighbor Bob's house. No idea what happens after
that, hopefully Bob was a fan of the 3 laws and has a couple more up his
sleeve.

~~~
klochner

         [X] Open Source Tools
         [ ] Massive Data Sets
         [ ] $Millions in Computing Resources
    

I'd put it roughly on par with finding a general cure for cancer. While
unlikely, it would be quite amazing if the cure for one of the largest causes
of death could be found three doors down, with lab supplies from Amazon and a
handful of mice.

~~~
reasonattlm
There is a general cure for cancer.

Look up interdiction of telomere lengthening. Different groups are looking
into how to sabotage telomerase and ALT mechanisms. If both can be achieved,
then any cancer can be shut down. Those are the only ways to lengthen
telomeres, and cancers cannot live without them.

Finding an ALT drug candidate is as simple as running assays on the drug
libraries; the assay hasn't existed for long, which is why this hasn't been
done yet in any major way. The SENS Research Foundation raised $70k last year
to run a preliminary scan of a few thousand compounds. That's about what it
costs these days.

So not quite garage science yet, but getting close.

~~~
jcranmer
You're vastly underestimating the difficulty in curing cancer. It is very easy
to kill cancer cells. The problem is that we want to target cancer cells and
_only_ cancer cells. There have been several attempts to kill cancer cells by
withholding key ingredients in necessary metabolic pathways, only to find that
the cancer cells do a better job of scavenging those than the non-cancer
cells.

