Ask HN: Will We Get Close to a General AI in 2017? - Mister_Y
======
Quarrelsome
No. Stop reading reddit.com/r/futurology or that _awful_ article by
waitbutwhy. Sure, it's a possibility, but we're still taking baby steps and
building tiny tools, pastiches of intelligence as opposed to genuine
intelligence or consciousness.

People who ask questions like this often don't consider that it remains
eminently possible that AGI is impossible for us to build. Also remember that
anything an AI can do in the future, a human + an AI can probably do better.
Right now, at least, they're just tools we use, and they will remain so for
the foreseeable future.

~~~
sambe
What is the strongest argument for impossibility being a possibility?

~~~
arethuza
There are arguments, notably in "The Emperor's New Mind", that assert that
consciousness must be based on quantum mechanics in some ill-defined way, and
that "algorithmic" computers therefore can't be conscious.

[https://en.wikipedia.org/wiki/The_Emperor's_New_Mind](https://en.wikipedia.org/wiki/The_Emperor's_New_Mind)

NB I didn't find this argument particularly strong, but it's a long time since
I read it. The brain would have to be doing something spectacularly strange
for it to be impossible for us to emulate in one way or another.

~~~
sambe
Out of context it does sound rather fuzzy and weak. It's not even intuitive to
me that consciousness has to come before intelligence - maybe the two are
independent. I should probably read the book, though.

More generally, I don't think invoking quantum effects is an especially strong
argument for impossibility in the general case. We already use quantum effects
to some extent and make progress on quantum computation. It may be evidence of
greater difficulty, but not of impossibility. Unless there is a strong case
for some particular quantum effect being unharness-able.

~~~
k_vi
"It may be evidence of greater difficulty, but not of impossibility"

------
simonh
We don't even have a general outline of a theoretical approach to designing a
general purpose intelligence, let alone implementing one. Until we do, any
speculation about a time horizon for implementation is a pure guess. How are
those guesses working out so far?

1960s - Herbert Simon predicts "Machines will be capable, within 20 years, of
doing any work a man can do."

1993 - Vernor Vinge predicts super-intelligent AIs 'within 30 years'.

2011 - Ray Kurzweil predicts the singularity (enabled by super-intelligent
AIs) will occur by 2045, 34 years after the prediction was made.

So the distance into the future before we achieve strong AI, and hence the
singularity, is, according to its most optimistic proponents, receding by more
than 1 year per year: Vinge's 1993 prediction pointed at roughly 2023, while
Kurzweil's 2011 prediction pointed at 2045, so the target moved 22 years
further out over 18 years of elapsed time.

I am not in any way denying the achievability of strong AI. I do believe it
will happen. I just don't think we currently have any idea how or when. If
pushed to it, I'd say probably more than another 100 years from now but I
don't know how much more.

~~~
singularity2001
You should become famous for that law. Eat that, Kurzweil. You found
exponential laws. We found a neglinear counterlaw ;)

------
pps43
The key point is self-learning, or the ability of an AI to build an AI that's
better, if only a little.

This is different from, say, AlphaGo playing against itself to train its
neural network - we want AI 1.0 to _write_ AI 2.0, not just tweak some
coefficients in 1.0.

At the moment, all automatically generated code is _less_ complex than the
source code of the code generator itself. There can be more of it in terms of
lines of code, but it's usually pretty repetitive.
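
A minimal sketch of the distinction, assuming a toy game and a simple hill-
climbing update (hypothetical, not AlphaGo's actual method): self-play only
nudges the numbers inside a fixed program; it never emits a new program.

    import random

    def play(weights_a, weights_b):
        """Toy 'game': whichever weight vector scores higher wins."""
        score_a = sum(w * random.random() for w in weights_a)
        score_b = sum(w * random.random() for w in weights_b)
        return 1 if score_a >= score_b else -1

    weights = [random.random() for _ in range(4)]  # the only thing that ever changes
    for generation in range(1000):
        challenger = [w + random.gauss(0, 0.1) for w in weights]  # tweak coefficients
        if play(challenger, weights) > 0:                         # self-play match
            weights = challenger                                  # keep the better coefficients

The program above stays fixed for all 1000 generations; "AI 1.0 writing AI
2.0" would mean it emitted a new, more capable program, which nothing here
does.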

------
hacker_9
If you read the research, there is lots of incremental progress being made.
Mainly with pixels - classifying them into objects, matching object locations
to text, attempting to predict future pixel values etc. But this stuff is very
'surface level', not even close to the way our brains effortlessly interpret
light - classify objects, detect depth, account for lighting, complete objects
we can't see, invoke a feeling of the material we are looking at, invoke past
memories, detect threats, and so on - every single millisecond.
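
For concreteness, here's roughly what the "surface level" progress looks like
in practice: labelling the objects in an image with an off-the-shelf
pretrained network. This is a sketch assuming PyTorch/torchvision; the image
path is hypothetical.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Standard ImageNet preprocessing for a pretrained classifier.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(pretrained=True)  # weights learned on ImageNet
    model.eval()

    image = Image.open("photo.jpg").convert("RGB")  # hypothetical input image
    batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    print(logits.argmax(dim=1).item())              # index into ImageNet's 1000 classes

A single class label per image - no depth, lighting, memory, or threat
detection anywhere in sight.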

This doesn't even begin to get into the core of AGI, which is the 'thinking'
component. Given this amazing mass of data, how do we then make the machine
work towards its goals? Is this just a neural network? Is it a billion neural
networks? Too many variables to tell.

And even then, if every action it takes is a reaction to the environment, does
it not have free will? Do we have free will? Is 'consciousness' somehow the
key to free will?

But anyway if you listen to Musk or Hawking, doomsday AI is just round the
corner.

~~~
dasboth
There is research directly addressing "transfer learning" where the goal is to
have more general learning agents. I don't know how advanced it is, but it's
certainly not advanced enough to become anything close to an AGI by 2018.
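
As a rough illustration of the transfer-learning idea (a sketch assuming
PyTorch/torchvision; the five-class target task is hypothetical): reuse the
features a network learned on one task and retrain only a small head for a
new one.

    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(pretrained=True)   # features learned on ImageNet
    for param in model.parameters():
        param.requires_grad = False            # freeze the transferred knowledge

    num_new_classes = 5                        # hypothetical target task
    model.fc = nn.Linear(model.fc.in_features, num_new_classes)  # new trainable head
    # training now only updates model.fc; everything else is "transferred"

Knowledge moves between tasks, which is useful, but the agent still doesn't
choose what to learn next - a long way from a "general learning agent".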

Ultimately I think there are far more pressing issues in AI around ethics,
bias, or security. It's great that there are philosophers who can sit around
and worry about a possible distant future (and I think as thought exercises
they're fascinating), but it's not what most people in the field should be
concerned with.

------
shpx
If anyone who thinks yes wants to bet $1000, I'll do 1:10 odds.

[https://longbets.org](https://longbets.org)

~~~
natch
For 2017?

From the site:

"minimum bet period is two years"

~~~
shpx
I'll extend my bet to 2018.

I'd extend it much more because the real odds of AGI are closer to 1:100, but
any longer time scale would be a bad return on my money.

~~~
enraged_camel
I always thought betting money (especially with a site like long bets) was
about holding people accountable to their claims, rather than the expectation
of actual return. :)

~~~
natch
I think it's more about bragging and joining an exclusive club (example: I
have this opinion, and I have a pile of money sitting around that I can
dedicate to promoting it amongst a small group of other people I want to
impress). The stated purpose of holding people accountable, I have never
bought.

People who are really working on the ideas discussed (as opposed to just
talking about them and positioning themselves as "futurists") are accountable
in a much more profound way, in that they build their entire lives around
working toward some goal, with all the risk and opportunity cost that entails.
The Long Bets participants are just playing around, not real players. They may
have a little skin in the game, but it's faux skin.

------
ragebol
No. This [0] is 4-5 years old and I don't think much progress has been made in
getting a computer to classify that image as 'funny' and explain why. And
if/when it could, I doubt we'd call it intelligent. And this is just computer
vision, to say nothing of other branches of AI.

[0] [http://karpathy.github.io/2012/10/22/state-of-computer-vision/](http://karpathy.github.io/2012/10/22/state-of-computer-vision/)

~~~
freehunter
Hell, I know a lot of _humans_ who wouldn't understand why that's funny. And
some of them have 4-year college degrees.

Many responses in this thread are along the lines of "computers will never be
as smart as Carl Sagan", ignoring that most intelligence on this planet could
never dream of being half as smart as the genius we're using to define AI.
Let's start by getting a computer as smart as a border collie, one of the more
intelligent dogs.

Saying a computer isn't smart because it can't laugh at a highly complex
visual joke is entirely the wrong way to define AI, and it's no surprise we
haven't achieved it. It took humans millions of years to get this smart, and
we've only been working on AI for a very, very small amount of time.

------
edgarvm
Better automation != General AI

~~~
sharemywin
I'd argue "minion level AI" >> General AI.

------
tener
Closer, but not close!

~~~
MichaelMoser123
It has a tendency to be just 15 years away - the deadline keeps on moving,
though.

I think that the main obstacle is language - language is terribly ambiguous,
and it's very difficult to deal with these ambiguities in a program.

Hofstadter [1] says that analogies are the core of thinking and that many of
the allusions in language can be thought of as analogies; however, this does
not seem to be the main focus of inquiry right now.

[1] [https://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinking/dp/0465018475](https://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinking/dp/0465018475) (my review & summary is here: [http://mosermichael.github.io/cstuff/all/blogg/2013/10/15/post-1.html](http://mosermichael.github.io/cstuff/all/blogg/2013/10/15/post-1.html))

~~~
hacker_9
> language is terribly ambiguous

I never get this argument. Sure, maybe it is from the point of view of the
computer, but we humans use it just fine.

~~~
freehunter
Much of our comedy stems from how ambiguous our language is. How many
petabytes on the Internet are wasted on comments correcting someone's use of
language? How many hours do we spend as kids in classrooms learning all of the
context around our languages? And we still get it wrong often enough to be
corrected on the Internet and made fun of on comedy TV shows. We're certainly
_not_ using it just fine; we're using it in spite of all its shortcomings.

Do you know how often I'm driving while my wife is navigating and I ask "turn
left here?" and she says "right"? Now, is she saying "correct, you should turn
left" or is she saying "don't turn left, turn right"?

Outside of a dog, a book is man’s best friend. Inside of a dog, it’s too dark
to read. –Groucho Marx

I haven’t slept for ten days, because that would be too long. –Mitch Hedberg

~~~
hacker_9
> Much of our comedy stems from how ambiguous our language is.

Yes, but we laugh because we understand there is an ambiguity, not because we
don't see it.

> How many petabytes on the Internet are wasted with comments correcting
> someone's use of language?

People are pedantic on the net, and besides, not everyone on the web is a
native English speaker. Spoken conversations don't have people correcting your
grammar every 5 seconds.

> wife is navigating and I ask "turn left here?" and she says "right"

Well, in this case she is being deliberately ambiguous. So you either tell by
her tone and inflection, or rely on previous memory. And of course you have
the ability to ask her.

> Outside of a dog, a book is man’s best friend. Inside of a dog, it’s too
> dark to read. –Groucho Marx

> I haven’t slept for ten days, because that would be too long. –Mitch Hedberg

Again, I see and understand the ambiguity. I'm not sitting here dumbfounded; I
'get' that they turned the words back on themselves to mean something else.

------
onion2k
No.

------
AnimalMuppet
My own personal pet theory (guaranteed right or your money back): We won't
have AGI until we have something that can dream.

Will we get close in 2017? No. Not if my pet theory is right, and not if it's
wrong.

------
spiderfarmer
Define "General AI". An AI that can decide by itself which model it should use
to make sense of any given dataset?

~~~
arethuza
"Artificial general intelligence (AGI) is the intelligence of a machine that
could successfully perform any intellectual task that a human being can."

[https://en.wikipedia.org/wiki/Artificial_general_intelligenc...](https://en.wikipedia.org/wiki/Artificial_general_intelligence)

~~~
lordnacho
There are plenty of tasks that very few human beings can perform. And thus
there are sets of tasks that nobody can perform.

What then?

------
Buttons840
Will Google release an AI that can play StarCraft on the same level as humans
in 2017?

General AI will have to wait until after that.

------
richardboegli
No

------
skilesare
Yes. Depends on what you mean by close though.

~~~
AnimalMuppet
Well, however close we get, you can define "close" to be "that close", and say
that, yes, we did in fact get close in 2017. I'm not sure that's useful,
however.

------
ArkyBeagle
No. That which can be done is no longer considered AI.

~~~
baboun
Lies.

------
rl3
No, I don't think so. We'll inch closer, but I doubt we're anywhere near AGI
on the path of software and algorithms running on traditional networked
computing architectures.

That isn't to say the resources don't exist to create AGI. It's possible they
were available a long time ago. If you were to ask some omnipotent future
superintelligence for a way humans could have bootstrapped AGI in the year
2005 using the available technology of the day, it could probably come up with
an answer. Maybe even further back than that, or maybe even present day
wouldn't suffice—who knows.

Trying to emulate biological architectures on silicon can be grossly
inefficient, and may actually be harder from a design perspective. It is the
attempt to formalize and adapt something created by an optimization process
that spanned millions of years, a process that had zero regard for how easy
its creation would be to understand or otherwise reverse engineer.

At the same time, algorithms vastly more efficient than the human brain's
remain a possibility. They need not include the large amounts of evolutionary
baggage that humans have.

Approaching AGI as a raw optimization problem may yield better results.
However, not formally specifying or understanding the underlying mechanisms is
a massive safety issue in the long run.

By the same token, ditching silicon entirely may be a vastly quicker path.
Throwing ethics out the window and experimenting with large quantities of lab-
grown neural tissue might be one way. Creating a synthetic biological
computing substrate another. It's not hard to imagine something like copying
human neural tissue's design, but using materials capable of latencies an
order of magnitude lower, or significantly higher degrees of
interconnectivity.

Looking at the problem strictly from the perspective of space, it's funny to
think that we're unable to recreate the functionality of some tissue contained
within a space of less than one cubic foot, even though we have seemingly
endless _acres_ of computing power to do it with (and that's excluding the
brains of the thousands of scientists and engineers working on AI). Even if you
stacked up _just_ the microprocessors in question, they would occupy a cubic
volume far, far greater than a single human brain—each containing billions of
transistors, and each operating at latencies far lower than the brain. Despite
all this, the human brain requires far lower amounts of energy.

The reason we don't have AGI yet is that it simply takes a lot of time and
effort to invent, regardless of whether it's ultimately possible with today's
technology. Of course, as other commenters have suggested, ruling out the
possibility that the human brain somehow has seemingly magical quantum
properties that render its recreation an impossibility (on silicon at least)
may be unwise.

------
jdimov11
The term AGI suffers from a greatly exacerbated version of the same problem
that AI suffers from. The problem, mind you, has NOTHING to do with science or
technology - it is purely a naming problem.

The term "Artificial Intelligence" is a contradiction - intelligence can NOT
be artificial. Intelligence is the ability of a being to get what it wants. It
is always organic, as it originates in desire.

Just stop calling it "Artificial Intelligence" and enjoy the wonderful
progress that we are making towards getting our machines to help us achieve
what we want.

(To be clear, I'm not saying stop calling it "artificial". I'm saying stop
calling it "intelligence", because it is not, and never will be. Using the
word "intelligence" in the context of machine automation sets entirely
unreasonable expectations and inhibits progress.)

~~~
agd
You seem to be contradicting several well known dictionaries with your
definition of intelligence.

[https://www.merriam-webster.com/dictionary/intelligence](https://www.merriam-webster.com/dictionary/intelligence)

[http://www.dictionary.com/browse/intelligence](http://www.dictionary.com/browse/intelligence)

[https://en.oxforddictionaries.com/definition/intelligence](https://en.oxforddictionaries.com/definition/intelligence)

~~~
jdimov11
All of these definitions are rather lousy and not particularly useful (except
possibly for 1.b in Webster, but it needs clarification).

~~~
freehunter
Yeah, it's all the dictionaries that are wrong... surely you must be right.

~~~
jdimov11
Right... How dare I question your sacred Anglo-Saxon dictionaries, which are
the authoritative source of All Truth. Anathema.

