
There’s No Fire Alarm for Artificial General Intelligence (2017) - DarkCow
https://intelligence.org/2017/10/13/fire-alarm/
======
jkhdigital
A few things here immediately reminded me of an article I recently read about
how, regardless of whether SARS-CoV-2 was "created" in a lab, the knowledge
and tools for that kind of genetic engineering have reached the point where
doing so would have been no great feat.

> Progress is driven by peak knowledge, not average knowledge.

The cutting-edge researchers are already building on their work from two years
ago, and in many cases that work has not even been fully digested by other
leading figures in the field, much less industry commentators or science
journalists. Anyone who is moving the needle of peak knowledge in modern
applied sciences is by definition an outlier, and it will take everyone else a
while to figure out what they actually accomplished.

> The future uses different tools, and can therefore easily do things that are
> very hard now, or do with difficulty things that are impossible now.

The second point seems extremely salient for genetic engineering. From what I
have read, processes that required the full resources and expertise of a
cutting-edge lab 10 years ago can now be performed by grad student technicians
in a few hours with kits manufactured by biotech startups.

------
jbay808
There are a lot of people in this thread who seem to be very confident and
skeptical of the author's position. Maybe this would be a good opportunity to
try the exercise he challenges in the article?

Please post your nomination for the least impressive accomplishment that you
are very confident cannot be done in the next two years.

We can check back in 2022 and see how we did!

~~~
memexy
Proving theorems. I will nominate Cauchy's Residue Theorem. It has already
been formalized so a neural network or any generally intelligent agent should
be able to do the same:
[https://www.cl.cam.ac.uk/~wl302/publications/itp16.pdf](https://www.cl.cam.ac.uk/~wl302/publications/itp16.pdf).
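
For reference, the standard textbook statement being formalized (my wording,
not copied from the paper): if f is holomorphic on an open set containing a
positively oriented simple closed contour γ and its interior, except for
isolated singularities a_1, ..., a_n inside γ, then

$$\oint_\gamma f(z)\,dz = 2\pi i \sum_{k=1}^{n} \operatorname{Res}(f, a_k).$$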

Complex analysis in general is one of the more fun parts of math and I doubt
it's getting automated any time soon. It also has a lot of applications in
signal processing ([https://www.quora.com/What-is-the-application-of-complex-
ana...](https://www.quora.com/What-is-the-application-of-complex-analysis-in-
engineering?share=1)), so any automation improvements in complex analysis can
be carried over to those fields.

More generally, I don't think AI is the scary monster people make it out to
be. It's another tool in the toolbox for automation and intelligence
augmentation. I don't fear hammers so I don't see why people fear AI and the
benefits of extra automation that AI enables.

~~~
pgt
Depending on who's holding the hammer, I do fear the hammer.

~~~
memexy
Why would you fear the hammer and not the person holding the hammer?

Addressing the problem of a person coming at you with a hammer is a much more
pertinent issue than worrying about self-aware and malevolent hammers.

------
jp555
“Intelligence is situational — there is no such thing as general intelligence.
Your brain is one piece in a broader system which includes your body, your
environment, other humans, and culture as a whole. […] Currently, our
environment, not our brain, is acting as the bottleneck to our intelligence.”

“Human intelligence is largely externalized, contained not in our brain but in
our civilization. We are our tools — our brains are modules in a cognitive
system much larger than ourselves. A system that is already self-improving,
and has been for a long time.”

“Recursively self-improving systems, because of contingent bottlenecks,
diminishing returns, and counter-reactions […], cannot achieve exponential
progress in practice. Empirically, they tend to display linear or sigmoidal
improvement.”

“Recursive intelligence expansion is already happening — at the level of our
civilization. It will keep happening in the age of AI, and it progresses at a
roughly linear pace.“

François Chollet

[https://medium.com/@francois.chollet/the-impossibility-of-
in...](https://medium.com/@francois.chollet/the-impossibility-of-intelligence-
explosion-5be4a9eda6ec)

~~~
Eliezer
Here's an extensive reply to Chollet's essay, also by the author of "There's
No Fire Alarm":

[https://intelligence.org/2017/12/06/chollet/](https://intelligence.org/2017/12/06/chollet/)

> ...some systems function very well in a broad variety of structured low-
> entropy environments. E.g. the human brain functions much better than other
> primate brains in an extremely broad set of environments, including many
> that natural selection did not explicitly optimize for. We remain functional
> on the Moon, because the Moon has enough in common with the Earth on a
> sufficiently deep meta-level that, for example, induction on past experience
> goes on functioning there.

>> The intelligence of an octopus is specialized in the problem of being an
>> octopus. The intelligence of a human is specialized in the problem of being
>> human.

> The problem that a human solves is much more general than the problem an
> octopus solves, which is why we can walk on the Moon and the octopus can’t.

>> Recursively self-improving systems, because of contingent bottlenecks,
>> diminishing returns, and counter-reactions arising from the broader context
>> in which they exist, cannot achieve exponential progress in practice.
>> Empirically, they tend to display linear or sigmoidal improvement.

> Falsified by a graph of world GDP on almost any timescale.

~~~
jkhdigital
Still plenty of time for GDP to turn into a sigmoid

~~~
pjscott
Sure, and if we're very lucky it'll eventually go quadratic as our species
expands out into the universe in an ever-widening sphere. :-) Few things can
grow exponentially forever, but a lot of things can grow exponentially for
long enough to have huge consequences -- and that's usually what we care
about.
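
For what it's worth, the two are genuinely hard to tell apart early on: a
logistic (sigmoid) curve with growth rate r and carrying capacity K is
approximately exponential while far below its ceiling,

$$x(t) = \frac{K}{1 + (K/x_0 - 1)\,e^{-rt}} \approx x_0 e^{rt} \quad \text{while } x(t) \ll K,$$

so exponential-looking growth today doesn't settle whether we're on a true
exponential or the early leg of a sigmoid.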

------
ajuc
If you told people in the 90s that at some point in the future AIs would be
able to:

\- drive a car in traffic in almost all circumstances more safely than the
majority of human drivers, based only on video input

\- take a video and replace the faces with faces from another video, well
enough that more than 90% of people are fooled

\- have 95%+ accurate OCR and speech recognition

\- predict people's preferences regarding music/movies/books better than any
human could

\- generate short press articles on an arbitrary subject that appear to most
readers to be written by a human being

\- win a game of chess/go/starcraft/whatever against the best human players

and asked them how far from that point it would be until we have general AI,
they would most likely say less than a decade.

But now that we are there, we devalue these accomplishments because we know
how to do them, and we still don't know how to do general AI.

~~~
SCdF
> drive a car in traffic in almost all circumstances more safely than the
> majority of human drivers, based only on video input

Has that happened? Don't self-driving cars use a huge range of sensors, and
are they actually legal / widely deployed anywhere at all? Are any for sale? I
don't mean improved cruise control, I mean "OK computer, take me to work".

> take a video and replace the faces with faces from another video, well
> enough that more than 90% of people are fooled

Are you talking about deepfakes? I haven't seen an example that was anything
other than creepy, and really really obvious. Do you think I'm just really
sensitive to this kind of thing, or have I not seen the really good examples?

> predict people's preferences regarding music/movies/books better than any
> human could

Why do you think that's happened? Spotify regularly recommends me things I
don't like. I've also never had a dedicated human butler who recommends me
music, so it's hard to compare, but I don't think Spotify is doing any better
than last.fm was doing a decade ago (if I had to have an opinion I'd say it's
worse).

\---

I don't want to sound rude or like I'm jumping down your throat, but I don't
really feel like we live in the future you're describing, at least not today.

~~~
leereeves
> haven't seen an example that was anything other than creepy, and really
> really obvious. Do you think I'm just really sensitive to this kind of
> thing, or have I not seen the really good examples?

Here's a really good example (the video, not the audio, which the creators say
was intentionally degraded):

[https://www.youtube.com/watch?v=l82PxsKHxYc](https://www.youtube.com/watch?v=l82PxsKHxYc)

~~~
SCdF
So my positive take is that this is the best (or worst depending on your
perspective) I've seen for sure.

The negative take is that this still looks really fake to me. His eyes are
dead, his head whips around weirdly, his neck undulates and flickers, his face
doesn't appear attached to his head, etc. Also, due to how Obama looks (e.g.
no hair), he may be easier to fake than other people.

Also, this isn't exactly what the OP was talking about, though it's similar.
They were talking about replacing faces, whereas this is (I presume) actually
100% Obama video reconstituted from his various speeches. So, to be clear, my
reaction was to the various videos I've seen of _that_, none of which have
been remotely convincing.

~~~
nimithryn
Not to mention that there's way more video of Obama speaking than most people

------
nopinsight
Our civilization, and our dominance over other animals, largely exists
because of human intelligence. An AGI could be a great partner toward
unprecedented prosperity, but an AGI not _fully_ aligned with human values may
also spur a major catastrophe.

Two promising approaches toward AI Safety I have encountered:

* Make sure the AI is not too certain of itself and will always defer to humans as the final judge. [https://www.ted.com/talks/stuart_russell_3_principles_for_cr...](https://www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai?language=en)

* Get the AI to learn and internalize human values. A possible technique is Inverse Reinforcement Learning: [https://thegradient.pub/learning-from-humans-what-is-inverse...](https://thegradient.pub/learning-from-humans-what-is-inverse-reinforcement-learning/)

Both are being developed by Stuart Russell among others. See his book here:
[https://en.wikipedia.org/wiki/Human_Compatible](https://en.wikipedia.org/wiki/Human_Compatible).
The latter approach was also discussed on several occasions by Ilya Sutskever.
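
For intuition only, here is a minimal sketch of that second idea in the
maximum-entropy IRL style (Ziebart et al.) on a toy 5-state chain MDP.
Everything below - the dynamics, the "expert" demos, the hyperparameters - is
an illustrative assumption, not something taken from the linked article:

```python
import numpy as np

# Hypothetical toy MDP: a 5-state chain; action 0 moves left, action 1 right.
n_states, n_actions, horizon = 5, 2, 8
P = np.zeros((n_states, n_actions, n_states))  # P[s, a, s'] transition probs
for s in range(n_states):
    P[s, 0, max(s - 1, 0)] = 1.0
    P[s, 1, min(s + 1, n_states - 1)] = 1.0
features = np.eye(n_states)  # one-hot state features

# "Expert" demonstrations: walk right from state 0, then stay at the end.
demos = [[min(t, n_states - 1) for t in range(horizon)]] * 20
expert_svf = np.zeros(n_states)  # empirical state-visitation frequencies
for traj in demos:
    for s in traj:
        expert_svf[s] += 1.0 / len(demos)

w = np.zeros(n_states)  # reward weights to learn: r(s) = features[s] @ w
for _ in range(200):
    r = features @ w
    # Soft (max-ent) value iteration, stationary approximation.
    V = np.zeros(n_states)
    for _ in range(horizon):
        Q = r[:, None] + P @ V             # shape (S, A)
        V = np.log(np.exp(Q).sum(axis=1))  # soft backup over actions
    policy = np.exp(Q - V[:, None])        # stochastic soft-optimal policy
    # Expected state-visitation frequencies under that policy.
    d = np.zeros(n_states); d[0] = 1.0     # all trajectories start at state 0
    svf = d.copy()
    for _ in range(horizon - 1):
        d = np.einsum('s,sa,sat->t', d, policy, P)
        svf += d
    w += 0.1 * (expert_svf - svf)  # gradient of the max-ent log-likelihood

print(np.round(w, 2))  # learned reward should peak at the rightmost state
```

The point is just that the learner never sees a reward function; it infers one
that makes the demonstrated behavior look near-optimal, which is the property
that matters for value learning.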

Many believe AGI will not be realized for at least a few decades, but no one
can really be certain it will not be developed before then.

Since several different approaches together are better than one, bright
minds, including those outside the field, should propose more ideas for such
a significant and unique problem of our time.

~~~
taberiand
I'm not sure what the unique existential threat that AGI poses is - humans are
more than capable of spurring major catastrophe all by themselves.

I say we take a chance; if it doesn't work out well we are probably going to
stuff it all up without the help of an AGI anyway, and at least something
intelligent will remain if it decides to kill all humans.

~~~
nopinsight
From the article: would you say we shouldn’t prepare for anything if we
suspect that a horde of advanced alien spaceships may be landing on Earth in
a few decades (and possibly sooner)?

~~~
taberiand
My point is that in thirty years, due to humankind's inability to work
together on a common goal of self-preservation and equality, we'll most
likely simply be further down the path of the probable collapse of
civilization.

If we knew aliens were coming, we'd probably be best off hoping for their
benevolence and mercy and preparing as best we could to deserve it.

~~~
nopinsight
I respectfully disagree. You may wish to check out “Enlightenment Now” by
Steven Pinker for a more optimistic perspective on human progress.

Aliens may not share our values. What we perceive as good may not be so for
them, nor might they care if we are “good”.

------
cosmodisk
"Subsequent manipulations showed that a lone student will respond 75% of the
time; while a student accompanied by two actors told to feign apathy will
respond only 10% of the time"

My own anecdotal data confirms this: a colleague crashed his bike just
meters from our office. Pretty much the entire company stood there watching
him bleeding and screaming on the pavement. I called the ambulance, as it
didn't seem anyone else was considering doing it at all. I'm sure every
single one of them would have called the ambulance had they been there on
their own. On a different occasion, a friend pulled a few nearly drowned
drunk guests out of the water during a wedding, while the rest stood on the
shore watching the whole situation.

------
Animats
The decisive moment for AI will come when a program can run a corporation
better than humans. That's not an unreasonable near-term achievement. A
corporation is a system for maximizing a well-defined metric. Maximizing a
well-defined metric is what machine learning systems do well.
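
As a toy illustration of that last point (entirely hypothetical, not from the
parent comment): an epsilon-greedy bandit picking among candidate prices to
maximize one well-defined metric, average revenue per visitor.

```python
import random

random.seed(0)
prices = [6.0, 12.0, 18.0]   # candidate "business decisions"
counts = [0, 0, 0]
mean_rev = [0.0, 0.0, 0.0]   # running mean revenue observed per price

def purchase_probability(price):
    # Stand-in for real customer behavior: a simple linear demand curve.
    return max(0.0, 1.0 - price / 25.0)

for _ in range(10_000):
    if random.random() < 0.1:    # explore occasionally
        arm = random.randrange(len(prices))
    else:                        # otherwise exploit the best mean so far
        arm = max(range(len(prices)), key=lambda i: mean_rev[i])
    sale = random.random() < purchase_probability(prices[arm])
    counts[arm] += 1
    reward = prices[arm] if sale else 0.0
    mean_rev[arm] += (reward - mean_rev[arm]) / counts[arm]  # incremental mean

best = max(range(len(prices)), key=lambda i: mean_rev[i])
print(prices[best], round(mean_rev[best], 2))  # converges on the middle price
```

Once the metric is fixed and measurable, the optimization loop is mechanical;
the hard part of the scenario above is wiring real corporate signals into
that reward.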

If a system listened in on all communication within a company - traffic,
sentiment analysis, who responds to whom and how fast, and what customers,
customer service, and sales are all saying and doing - it would generate more
data than a CEO could ever process. Somewhere in there are indicators of
what's working and what isn't. That may be the next phase after processing
customer data, which has kind of been mined out.

If this starts working, and companies run by algorithms start outperforming
ones run by humans, stockholders will put their money into the companies that
perform better. The machines will be in charge.

This is perhaps the destiny of the corporation.

~~~
jotm
I've been saying AI won't be Skynet launching the nukes; it will be a
corporation maximizing productivity with complete disregard for human needs,
heh.

~~~
WalterBright
In a free market, a corporation has to please its customers in order to
maximize profit.

For example, Apple.

~~~
Smoosh
> in order to maximize profit

I'm wondering where this idea comes from (I'm not criticizing you
specifically, but that maxim).

Do no investors value stability, longevity, ethical behavior etc?

~~~
WalterBright
I tend to invest in companies whose products and behavior I like. It's not
terribly surprising that they've done well - pleasing customers is good
business. I've dumped stock in companies that began a business model of suing
their customers to make money - and those companies (again unsurprisingly)
tilted down.

Buy companies that customers love, sell companies that customers do business
with only because they have to.

------
jonnypotty
Artificial General Intelligence isn't a goal, it's a fear. We reduce
everything that we try to program into computers to simple rules because
complexity is too hard for us to think about. ML has allowed us to move up a
level of complexity, but we've had to give up understanding the mechanics of
the machine as a consequence.

AGI using machine learning techniques needs a function to optimise for. If
someone can explain to me how you'd even, in theory, go about teaching an AI
about EVERYTHING rather than one specific thing, I'll take it seriously. We
think that because the game of Go displays chaotic and complex patterns and
is difficult to predict, it somehow 'models' the real world. What it actually
shows is the bewildering complexity the universe is capable of given only
very simple rules. The actual complexity of the world is on a different
scale.

Go has an aim; you can measure whether you're getting better at it. General
intelligence does not have a defined goal (understand everything, perhaps?).
Even in theory, how do you tell if you are closer to this goal or not? In a
game of Go you 'nearly' won or lost. How do you get close to AGI and know
that you were closer than the last time you tried? Modeling the state of a
game of Go in a computer is trivial. What is the model you use to teach a
computer about the world or the universe?

Imo not only is our software nowhere close to AGI, neither is our hardware or
our ideas.

That said, having a fire alarm to tell us the Terminator is coming seems like
a great idea in theory. As long as the sprinklers spray something that melts
metal.

------
square_usual
> The breakthrough didn’t crack 80%, so three cheers for wide credibility
> intervals with error margin, but I expect the predictor might be feeling
> slightly more nervous now with one year left to go.

Good call on that by the author; according to this summary paper [1], a model
reached ~79-93% accuracy on various Winograd data sets.

[1]
[https://arxiv.org/pdf/2004.13831.pdf](https://arxiv.org/pdf/2004.13831.pdf)

------
MattGaiser
The best AI minds are still working on AI consistently recognizing stop signs.

~~~
Smaug123
The article does directly address this, at considerable length. I'll quote
just the subsection headings from that section, each of which is followed in
the text by many paragraphs of explanation.

One: As Stuart Russell observed, if you get radio signals from space and spot
a spaceship there with your telescopes and you know the aliens are landing in
thirty years, you still start thinking about that today.

Two: History shows that for the general public, and even for scientists not in
a key inner circle, and even for scientists in that key circle, it is very
often the case that key technological developments still seem decades away,
five years before they show up.

Three: Progress is driven by peak knowledge, not average knowledge.

Four: The future uses different tools, and can therefore easily do things that
are very hard now, or do with difficulty things that are impossible now.

Five: Okay, let’s be blunt here. I don’t think most of the discourse about AGI
being far away (or that it’s near) is being generated by models of future
progress in machine learning. I don’t think we’re looking at wrong models; I
think we’re looking at no models.

------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=15470920](https://news.ycombinator.com/item?id=15470920)

------
mD5pPxMcS6fVWKE
General artificial intelligence is nowhere on the horizon, but in its lesser
form - deep learning - it is already happening and already hurting a lot of
people. The problem is that its development is concentrated in the hands of a
handful of companies, and they use it to their competitive advantage. Smaller
firms just can't compete. You need 1) algorithms - these are more or less
public; 2) a lot of hardware - this is very expensive; and 3) a lot of
(labeled) data - this is just not available. You can't get access to the
training set that Google collects when billions of people solve reCAPTCHAs,
or what the NSA gathers listening to billions of conversations, or what
Amazon learns from what people buy and what people search for.

------
m0zg
That's because there won't be AGI in my or my children's lifetime. We are
successfully solving some perceptual problems now, but cognition is so far
out of reach that nobody has even started working on it yet.

~~~
sacred_numbers
We went from discovering fission in 1938 to building the bomb in 1945. I think
it's entirely possible that AGI won't happen in the next century or maybe even
ever, but history is filled with people who say "this can't happen" and are
proven horribly wrong.

~~~
m0zg
Yes, but we haven't invented much else in this area since then. Computing is
fundamentally the same today as it was in the 60's. Sure, transistors are
smaller, there's more memory, and clocks are in gigahertz, but it's the same
paradigm. And this paradigm is not going to take us to AGI even if Moore's
law hadn't been dead for the last decade. Which it has.

~~~
jinpan
Have you heard of this thing called quantum computing?

~~~
catalogia
Are quantum computers even relevant to AGI? I'm under the impression that the
belief that human brains are quantum computers is considered fringe in
computer science and particularly in neuroscience, despite having a handful
of high-profile proponents (notably Penrose).

~~~
shahbaby
People have a common tendency to connect one thing they don't understand with
another they also don't understand.

~~~
umvi
"If only AI were powered by quantum computers, then AGI would emerge."

------
umvi
You don't need a fire alarm for something that doesn't exist and is complete
science fiction at this point. We would be better off building asteroid
defences as long as we are talking about far-fetched threats to humanity.

~~~
jbay808
I mean, general intelligence exists and is not science fiction. I assume I'm
responding to one. What should be the first sign that it's no longer science
fiction, and it's acceptable to start making a plan?

~~~
andrewflnr
That's actually a bit of an interesting question. General intelligence and
human-like intelligence are not necessarily the same, and I'm not 100%
convinced there's overlap. We can solve lots of the problems it occurs to us
to try to solve, but that alone doesn't prove that we're "general
intelligences". There are probably categories of problems we can't solve or
even properly conceive, just like every other specialized intelligence. In
short, be careful with your definitions. :)

~~~
ilaksh
Good point and interesting idea, but it's well established in common usage and
the field of AGI that the term is meant to refer to human-like intelligence.
So that's the default understanding, and you would need to qualify the term to
mean something more general as you are describing.

~~~
andrewflnr
Are these the same people in the field who worry about runaway self-
improvement and paperclip optimization? If so, then someone is being sloppy
with their definitions, because those are not properties of human-like
intelligence.

~~~
jbay808
Human-like in capability, not in goals. The ability to do AI research and the
ability to craft paperclips are both human abilities. If the AI has
human-like goals, then there's very little need to worry.

------
mellosouls
Needs an editor tbh, far too long and waffly.

I'm guessing from the title there may be a useful point to it, but it could do
with some tough love to unearth it.

~~~
typon
"progress is closer than you might imagine"

~~~
umvi
"but it also may be much, much further away than you might imagine. That's the
problem when you don't understand something. You don't know what you don't
know."

