
Today’s dominant approach to A.I. has not worked out - cohaagen
https://www.nytimes.com/2018/05/18/opinion/artificial-intelligence-challenges.html
======
sullyj3
I feel like this article would be warranted if ML results had stalled and not
achieved anything impressive in a few years. But we had AlphaZero recently,
and Duplex is pretty impressive. There's no indication that cool new stuff
isn't forthcoming in the near future. It's entirely possible that the current
tradition will prove to not be up to the task of building an AGI by itself,
and we'll need to invent new techniques. But in the absence of better ideas,
continuing to iterate on ML and neural techniques seems like a good approach.

~~~
new299
Duplex is a demo, right?

If AlphaZero (which doesn't impact users directly) and Duplex (which isn't
released yet) are the best recent examples I can understand why there's
negative press appearing.

~~~
jdietrich
Duplex's underlying text-to-speech technology research (WaveNet) has produced
several papers and is now in public beta. It represents a huge advance in
text-to-speech fidelity, using a remarkably straightforward algorithm.

[https://arxiv.org/pdf/1609.03499.pdf](https://arxiv.org/pdf/1609.03499.pdf)

[https://www.isca-speech.org/archive/Interspeech_2017/pdfs/1452.PDF](https://www.isca-speech.org/archive/Interspeech_2017/pdfs/1452.PDF)

[https://arxiv.org/pdf/1712.05884.pdf](https://arxiv.org/pdf/1712.05884.pdf)

[https://cloud.google.com/text-to-speech/](https://cloud.google.com/text-to-speech/)
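
For a sense of how straightforward: WaveNet's core is a stack of dilated causal convolutions, where the dilation doubles each layer so the receptive field grows exponentially. A minimal PyTorch sketch of just that trick (sizes invented; the paper's gated activations, residual/skip connections and output head are omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of WaveNet's central idea: stacked 1-D convolutions whose
# dilation doubles each layer, so the receptive field grows
# exponentially while each layer stays cheap.
class DilatedCausalStack(nn.Module):
    def __init__(self, channels=32, layers=8):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=2, dilation=2**i)
            for i in range(layers))

    def forward(self, x):              # x: (batch, channels, time)
        for conv in self.convs:
            pad = conv.dilation[0]     # left-pad so no future samples leak in
            x = torch.relu(conv(F.pad(x, (pad, 0))))
        return x

net = DilatedCausalStack()
audio = torch.randn(1, 32, 16000)      # one second of (fake) features
print(net(audio).shape)                # torch.Size([1, 32, 16000])
```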

~~~
new299
The first paper is almost 2 years old, and text-to-speech seems to be a
relatively small component of Duplex.

------
Jagat
The first quote that came to my mind when I read this was Frederick Jelinek's:

"Every time I fire a linguist, the performance of the speech recognizer goes
up".

Hi NYT writers, "language is infinitely complex" and "statistical approach to
ML isn't working. Let's replace it with logic/rules based AI" don't belong in
the same article.

~~~
stochastic_monk
At the same time, the shallow “understanding” of state-of-the-art NLP methods
is still not what we’re looking for. As we blow through records, we need to
carefully state our shortcomings so that we can look for the next step.

It’s critical to ask how we’re supposed to get to something better.

Or does it not concern you that these methods work by modeling words by their
context, not their content?

------
angry_octet
This is a topsy-turvy argument, ignoring the abject failure of the old-school
AI approaches in the Chomsky and MIT tradition and the stunning success of ML
statistical approaches. One wonders how little background research the NYT did
before they chose these authors.

~~~
goatlover
Stunning success in creating HAL 9000s or image classifiers? Because
traditional AI was aiming for generalized intelligence and not just success in
narrow domains. Minsky's critique of modern AI (or rather the entire history
of the field so far) is the lack of progress on common sense.

So an ML model can learn to recognize cat pictures better than humans. Great,
but what does it know about cats beyond being able to correctly pick out
pictures with cats in them?

~~~
ben_w
I strongly dislike the phrase “common sense”. I have yet to encounter even one
example of “common sense” which accurately represents the world.

“Things fall when you let go of them” «Except flying things, things already on
the ground, and probably other examples too.» “But that’s not what I meant,
_it’s common sense that’s not what I meant_.” «Then it’s a tautology, things
that fall when you let go of them, fall when you let go of them. The real rule
(or rather, good enough at our scale) is Newtonian mechanics.»

Or similar conversations about being in two places at once.

When it comes to cats, what does common sense tell you — What cats do? How
cats respond to things? How they interact physically with their environment?
That’s all statistically learnable.

~~~
YeGoblynQueenne
>> What cats do? How cats respond to things? How they interact physically with
their environment? That’s all statistically learnable.

Not at all. At least not in practice and certainly not for any significant
subset of the things that "cats do" or the ways they "respond to things".

You can certainly collect some examples of all the behaviours you describe
above, but there are so many of them you are never going to have enough to
model a cat's behaviour with anything like significant accuracy.

~~~
ben_w
> never going to have enough

“Hey Siri show me cat videos”

“Here are some videos I found of ‘cat’ on the web:” «Link to YouTube page
saying “About 79,900,000 results”»

~~~
YeGoblynQueenne
The parent discusses:

>> What cats do? How cats respond to things? How they interact physically with
their environment?

Not "how does a cat look on youtube".

~~~
ben_w
You’re the second person who’s tried to correct my interpretation of my own
posts this week. What’s that quote about losing grandparents? :)

Anyhow, my point was that there _is_ plenty of data about _the sort of cat
behaviour which humans collectively find notable_.

~~~
YeGoblynQueenne
I see what you mean- you're saying that you can find 79 million cat videos on
YouTube, so you have 79 million examples of cat behaviours.

That might sound like a lot - but it's still not nearly enough to model a
cat's behaviour. To convince yourself that this is the case, subtract 79
million from infinity. The number remaining is the number of cat behaviours
that a model trained on 79 million youtube videos would never have seen and
therefore not know how to deal with.

See, the point is not how much data you have- it's how much data you're
missing. If the amount of data you have is a tiny part of the whole, then you
can't model the whole very well.

It's already hard enough to train an image classifier to recognise still
images (video frames) of cats. You're proposing to train some kind of model
(it wouldn't be a classifier anymore) to recognise -and reason about- not only
the likeness of a cat, but the relation of a cat with its environment; with
_arbitrary_ environments and arbitrary entities in those environments. And the
cat is interacting with those arbitrary entities in the arbitrary environments
in arbitrary ways.

Seriously, you're looking at an extravagantly large number that nothing we
have right now can handle.

What _is_ the quote about grandfathers? ~.^

~~~
ben_w
The quote is “to lose one is unfortunate, to lose two looks like
carelessness.” One person misunderstanding me I can ignore, two in the same
way in a short window is definitely a sign I communicated poorly.

Why do you believe there is infinite cat behaviour? Why would they evolve
that?

Even if they did, the point of learning is to reduce a probability
distribution from “everything is equally likely, from this cat pawing at a toy
to pushing a pen in exactly the right way to forge my signature, from hunting
for a mouse to mugging an old lady for a voting card and using it to cast a
fraudulent vote in her name for the Natural Law Party at the next election” so
the probability distribution — your expectations — fits in a finite brain and
matches all one has seen (70 million videos only need to be 36 seconds on
average to be a lifetime of nothing but cats).

That being the case, all one really needs for a _”common sense” understanding
of cats_ is _the set of things humans are not surprised by cats doing_.

As I said before, I don’t like the phrase “common sense” because it’s such a
bad model for reality. That being the case, it doesn’t matter what a cat would
do when, say, elected governor of a small town — _common sense_ , right or
wrong, would say “eat, sleep, meow” or similar. Probably varies by person,
given how many complain that “people just don’t have any common sense these
days”.

Edit: why do you think it’s hard to train a classifier to recognise cats?
Google did that the unnecessarily hard way six years ago, now we have GANs
that imagine into existence cat pictures, as a student project to help apply
for a PhD:
[https://ajolicoeur.wordpress.com/cats/](https://ajolicoeur.wordpress.com/cats/)

~~~
YeGoblynQueenne
>> Why do you believe there is infinite cat behaviour? Why would they evolve
that?

An infinity of behaviours is not a distinct ability that has evolved to
fulfill some purpose. Rather, it's the result of the animal interacting with
its environment. The number of possible such interactions is infinite - or,
well, _most likely_ infinite.

An infinite number of combinations can arise from very simple processes- for
example, an automaton that generates strings in the aⁿbⁿ grammar (n a's
followed by n b's) can go on for ever. There is no reason to assume that a
complex mind in a complex environment will ever run out of combinations of
mind-states and world-states. Accordingly, there is no reason to believe we
will ever be able to collect examples of all those combinations, and represent
them in computer memory.
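
To make the aⁿbⁿ point concrete, a minimal sketch in Python (illustration only): the generating process is a few lines, yet the set of strings it produces is unbounded.

```python
# Minimal sketch: a trivially simple generator whose output set is
# unbounded. The process is finite; the set of strings is not.
def anbn():
    n = 1
    while True:              # never runs out of new strings
        yield "a" * n + "b" * n
        n += 1

strings = anbn()
for _ in range(4):
    print(next(strings))     # ab, aabb, aaabbb, aaaabbbb
```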

Edit: I'm not talking about a cat being elected governor here. Just ordinary
real-world behaviours, like all the ways a cat may chase a mouse, say. Try to
observe a cat and systematise its behaviour and see how well you can do. Then
try to do it with a computer.

With a computer, it should be easier, right?

* * * *

I wouldn't say that the point of _learning_ is to reduce infinity to something
manageable. I think it's more like animal minds, like ours, have some kind of
ability to pick out what is relevant to a learning task from the infinity of
available experiences (incidentally, that's the subject of my PhD thesis :).

However, even as our minds are able to perform this one simple trick, we have
no clue how we do it and can therefore not yet reproduce it with our machines.
The result is the current state of the art in machine learning: data hungry
algorithms that require loads and loads of computing power to reach top
performance. This reliance on large datasets and compute limits progress: so
far we've seen results only in situations where there is sufficient data and
computing power and always in restricted domains (cat videos, vs cats in the
wild). In problems where either there is not enough data, or the data is not
sufficient because the domain is too large and too unconstrained, like natural
language or modelling individual behaviour, progress has been much slower.

In short, modern machine learning substitutes quantity for quality, which has
proven successful in the short term but looks to be self-limiting in the long
run (even before the time when we're all dead). Eventually, we'll need to find
an alternative or progress will stall.

* * * *

>> Edit: why do you think it’s hard to train a classifier to recognise cats?

Yep, that's a good example of what I'm talking about.

The model in the link was trained on 9304 examples and it shows- you can see
the smudges and deformation in the high-res image (and the last one, sent by
another person). I can't find the original dataset, but the results look very
homogeneous, so they're basically just reproducing the training examples
faithfully without generalising well- in other words, overfitting. Which makes
sense: 9304 examples are maybe OK for a school project etc, but nowhere near
enough for a real-world application.

Not that I can see a real-world application for generating faces of cats, but
the point is that if you just want to train a small model to see how this sort
of thing works, then you can certainly do it with a few examples; but if you
want something useful that approaches state-of-the-art performance, then you
need access to a lot more data and a lot more computing power.

I think you're underestimating how hard it is to make machine learning
algorithms work well. It is worth reading announcements in the lay press and
claims in scientific papers with a critical, even strongly skeptical attitude.
Just because Google says that deep learning is the bee's knees, don't just
accept it as fact. Try to repeat their feats on your own. See how far you get.

I'm assuming you haven't, otherwise you wouldn't be asking that question :)

~~~
ben_w
This is getting too long and detailed to use my mobile to keep replying in as
much detail as it deserves. :)

I get the impression that either (1) you have a _very_ different definition of
“common sense” to me, or (2) you are no longer talking about it. Does this
seem like a fair representation? If so, can you explicitly describe what you
mean by “common sense”?

As for reproducing results: limited experience of simple things only. My full-
time job has gone from software to full-time carer for a parent with
Alzheimer’s, so I don’t have time for anything more complex than e.g. {train
scikit-learn to read digits from scratch, then read all the digits in
Shakuntala Devi’s number, then calculate the answer to her famous question}
and timing it as faster than the human visual system takes to go from a number
appearing to conscious awareness of it.

You know, fun toy examples for whiteboard interviews.
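
For anyone curious, a minimal sketch of that kind of toy experiment (assumptions: scikit-learn’s bundled 8x8 digits dataset stands in for the actual digits, and a small SVM stands in for whatever model one might pick):

```python
import time
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                     # 1,797 labelled 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

clf = SVC(gamma=0.001)                     # a small support-vector classifier
clf.fit(X_train, y_train)                  # "learning to read" the digits

start = time.perf_counter()
pred = clf.predict(X_test[:1])             # read a single digit
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"predicted {pred[0]} in {elapsed_ms:.2f} ms; "
      f"test accuracy {clf.score(X_test, y_test):.3f}")
```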

Mainly I’m keeping up to date with the “Two Minute Papers” YouTube channel.
Hopefully I’ll be able to apply to _start_ a PhD when my family can take over
care duties for me…

Quick edit: I think your definition of intelligence is equivalent to mine.
Please elaborate why you disagree?

~~~
YeGoblynQueenne
Sorry for the comment size! I always use HN from a PC and I tend to forget
that's probably not the most common use.

You're right, I'm not talking about common sense. It was another commenter who
mentioned it. I interjected that it's very hard to collect enough examples of
"What cats do? How cats respond to things? How they interact physically with
their environment?" to build a good model of cat behaviour.

I'll be honest and say I have no idea what is "common sense" in the context of
cat behaviour. Not to mention that it's very difficult to agree on a
definition of "common sense". Despite that, I think you'll find there's
general agreement that machine learning models don't have anything that could
be recognised as "common sense". One reason for that is that it's extremely
difficult to collect training examples of "common sense", exactly because it's
so very hard to define it.

Apologies if that was too much of a sidetrack from what you wanted to discuss!

I actually don't have a definition of intelligence :) I'm working off an
assumption that there is such a thing, that it's one process or one set of
processes and that we may be able to reproduce it on computers, at some point
in the future. But not in my life time.

The great advantage of doing a PhD is that you have plenty of time to read up
and experiment to your heart's content. I hope it all goes well for you and
you can soon start your studies.

I'm sorry to hear about your parent. You both have my sympathy. Hang in there.

~~~
ben_w
Thanks! I think we’re basically in agreement then, as all of my responses were
predicated on the incorrect belief that you were using common sense as an
argument against AI.

I certainly agree that humans can accurately extrapolate — for example what a
cat is likely to do next — with what seems like less data than any current
machine learning system.

I have my suspicions as to why, but to keep this short I’ll only say
“catastrophic forgetting”, and separately that the normal approach in ML seems
to be like teaching kids “by asking them random questions from the set of
things we expect them to know at 18”, to almost-quote one of the podcasts I
listen to.

------
scarface74
When the Google defenders brought up Apple's original iPhone demo that only
worked if you followed the happy path and compared it to this staged demo, I
found the comparison wanting.

The iPhone crashing when you did X was a simple debugging exercise. Making a
chatbot that can understand the variety of human speech, translate it to text,
understand it and respond intelligently is a much harder problem.

~~~
joshuamorton
Not particularly. Speech to text is mostly solved at this point. When
constrained to a small set of domains, so is intent recognition and
canonicalization. That's certainly the hardest part, but we are still able to
do it within specific domains.

Once you have a canonical request, generating a response has been solved for a
while. Really, I think it's more likely that you consider the problems where
you understand how one would debug them to be simpler ;)
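
As a concrete (and entirely hypothetical) illustration of what "constrained to a small set of domains" buys you, here's a toy canonicalizer for a single appointment-booking domain; real systems use trained classifiers, but the constrained-domain shape is the same:

```python
import re

# Hypothetical toy canonicalizer for one narrow domain. Maps a
# transcript to a canonical request: {intent, text}.
INTENT_PATTERNS = {
    "book_appointment": re.compile(
        r"\b(book|make|schedule)\b.*\b(appointment|reservation|table)\b"),
    "cancel_appointment": re.compile(
        r"\b(cancel|drop)\b.*\b(appointment|reservation)\b"),
}

def canonicalize(transcript: str) -> dict:
    text = transcript.lower()
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(text):
            return {"intent": intent, "text": text}
    return {"intent": "unknown", "text": text}

print(canonicalize("Hi, I'd like to make an appointment for a haircut"))
# {'intent': 'book_appointment', 'text': "hi, i'd like to make an appointment for a haircut"}
```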

~~~
realusername
> Not particularly. Speech to text is mostly solved at this point

There's still no software capable of understanding me properly in French right
now; speech to text still has a long way to go.

~~~
megaman22
Is that just a question of expending the resources on that domain, though?
Untold man-hours and compute cycles have been spent on making English speech-
to-text mostly work. I would be surprised if even a fraction of that energy
has been devoted to other languages.

~~~
realusername
The issue with French, and why it works so badly, is the assumptions people
made when designing speech-to-text by starting from English. The core issue is
that spoken French and written French are two completely and widely separate
languages, with a much greater difference between them than in English. So the
current approach of using books and mapping words to them just isn't working
well.

What I mean is the current approach is pretty limited.

~~~
dpark
> _The core issue is that spoken French and written French are two completely
> and widely separate languages_

In what way?

~~~
realusername
Written French is largely codified by the French Academy, which is very
conservative (so the written language has not evolved much in the past 100
years), whereas the spoken language evolved independently. It's a bit like
slang in English if you like, but taken to a whole new level; the two parts of
the language don't have much in common nowadays. Tenses, pronouns, grammar,
sentence construction and words are all different.

~~~
dpark
It’s hard for me to believe that they have diverged that much. I’d expect that
the one would drastically influence the other. But then I don’t know French.

------
gok
> The crux of the problem is that the field of artificial intelligence has not
> come to grips with the infinite complexity of language.

I stopped reading at that point. The field has grappled with the infinite
complexity of language since before it was called "artificial intelligence."

~~~
Animats
Machine translation is getting pretty good, simply from crunching on enough
text. The complexity of language looks finite. Translation between all the
European languages works fairly well. Asian languages, not so much yet.

Strong AI still doesn't work. "Common sense" remains hard. Unstructured
manipulation is still not very good. But legged locomotion is much better, as
is vision processing.

Combining machine learning and geometry has promise. Look at how Waymo does
automatic driving. (Not Uber or Tesla; they have no clue how to do it safely,
as their crash record demonstrates.)

We're way ahead of where things were in the "AI winter", 1985-2005. This time
the startups make money and do useful things. Progress will continue because
there is revenue. AI used to be tiny - about 20-50 people at MIT, CMU, and
Stanford. Now it's at least a thousand times bigger. Progress will be made by
brute force.

(Me: MSCS, Stanford, 1985. I met most of the greats of classical logic-based
AI. Trying to hammer the real world into predicate calculus just doesn't work.
The expert systems guys were in denial big-time about this.)

~~~
seanmcdirmid
> We're way ahead of where things were in the "AI winter", 1985-2005.

That was the second AI winter. The first AI winter was in the early/mid 70s.

It is not inconceivable that a third AI winter will happen eventually.

~~~
sgt101
I think you've got your timing off there. Money started to flow straight
after Gulf War 1 in 1992/3, because several folks had built components of the
logic systems that were used for organising the logistics (Tate: HTNs;
Winston: schedulers, although I am hazy about this). What I am clear about is
that there was money to be had for AI research after that.

~~~
DonHopkins
Sorry to drop in here and interrupt off topic, but you recently wrote a great
comment that I missed the first time around, and now can't reply to, and don't
know how else to get in touch (consider this "moving along"). It's off-topic
but at least vaguely AI related. ;)

[https://news.ycombinator.com/item?id=14983831](https://news.ycombinator.com/item?id=14983831)

That is some fascinating stuff I've never heard before. I would love to
discuss it more and ask you some questions! Maybe you could make a posting
about the Turing Institute or Lighthill Debate? (There's a Wikipedia page
about Turing but it's kind of dry and lacking in scandal and palace intrigue.)

I worked at the Turing Institute in 1992, and it was an amazing place with
many great people, including Arthur van Hoff who developed
GoodNeWS/HyperNeWS/HyperLook! Unfortunately I never got the chance to meet
Donald Michie (maybe he dropped by but I didn't recognize him), but I know
from his reputation what a great guy he was. I did hear a funny story about
him:

Donald Michie once overheard his secretary telling someone on the phone how to
pronounce his name (in a Scottish accent): "It's Donald, as in Duck, and
Michie, as in Mouse." He was so pissed he refused to speak to her for a month!
;)

For what it's worth, that's a great way to remember how his name is
pronounced!

When I was there in '92 it was being (mis)run by some upper crust Tory
gentleman who specialized in bailing out failing companies. He was so uptight
he got pissed off I'd written the address as "North Hangover St." on the white
board! Sheez. Instead of putting him in charge, they should have just called
Old Glory:

[https://www.youtube.com/watch?v=KXnL7sdElno](https://www.youtube.com/watch?v=KXnL7sdElno)

I've been writing an article (still in draft) about the work I did at the
Turing Institute (HyperLook):

[https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-goodnews-99f411e58ce4](https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-goodnews-99f411e58ce4)

Fun times! Anyway sorry to interrupt.

------
namuol
> Just as you can make infinitely many arithmetic equations by combining a few
> mathematical symbols and following a small set of rules, you can make
> infinitely many sentences by combining a modest set of words and a modest
> set of rules. A genuine, human-level A.I. will need to be able to cope with
> all of those possible sentences, not just a small fragment of them.

The issue is less about understanding language and much more about creating
models of "the world" through observations, and then applying those models to
perform tasks or answer questions.

Parsing language is actually pretty easy; "knowing" how it relates to a
generalized model of "the world" is the hard part.

> No matter how much data you have and how many patterns you discern, your
> data will never match the creativity of human beings or the fluidity of the
> real world.

This is _firmly_ in the Opinion section of the NY Times...

What Google seemed to do with Duplex might look like a baby step, but it
doesn't take much imagination to recognize the very real possibility for so-
called "genuine, human-level A.I." to -- gradually -- emerge.

~~~
azinman2
Uhh, it does. Being able to have a fun date night conversation has little in
common with making an appointment for a hair cut. Duplex is a codified set of
rules — essentially a set of blanks that need to get filled in and an elegant
way of pulling together a bunch of components to do so. That’s nothing like
free-form conversation, which requires world knowledge, perspective (gained from
experience), personality, creativity, emotional connection, etc etc.
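
To make the "set of blanks" description concrete, a hedged sketch (all slot names invented for illustration) of the frame-with-slots pattern such systems use:

```python
# Hypothetical sketch of the "blanks" pattern: a frame whose slots get
# filled from user utterances; the system asks about the first empty
# slot until the frame is complete.
PROMPTS = {
    "service": "What service would you like?",
    "date": "What day works for you?",
    "time": "What time would you like?",
    "name": "What name should I put it under?",
}

def next_question(frame: dict) -> str:
    for slot in PROMPTS:
        if frame.get(slot) is None:
            return PROMPTS[slot]        # ask about the first empty blank
    return "You're all set!"            # every blank is filled

frame = {"service": "haircut", "date": "Tuesday", "time": None, "name": None}
print(next_question(frame))             # -> "What time would you like?"
```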

There’s nothing new going on in AI vs the machine learning “pragmatic
downgrade” of the 90s, just the accuracy is higher.

~~~
Retric
Parsing the language at some level is required for translation beyond simple
1:1 word mapping. Understanding is really a separate task that sits outside of
language and would be just as hard with a very limited language designed to be
easy to decode.

~~~
uryga
Correct me if I'm wrong – are you saying that one doesn't need to understand
something to be able to translate it? Because that'd be a strange argument to
make, looking at the state of Google Translate.

~~~
robkop
> looking at the state of Google Translate

Last time I checked, Google Translate was a seq2seq-based model with symbol-
level embeddings.

The way symbol embeddings are generated doesn't lead to any real understanding
of the sentence/word/language. It only really leads to understanding what is
normally found around that symbol in different situations. You could also very
easily argue that the seq2seq model doesn't provide understanding either; it
only learns to encode the general meaning of the sequence in a fixed,
compressed format.
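
For concreteness, a minimal PyTorch sketch (sizes arbitrary, illustration only) of the bottleneck being described: however long the input, the encoder's output is one fixed-length vector.

```python
import torch
import torch.nn as nn

# Symbol embeddings feed an RNN encoder whose final hidden state is a
# single fixed-length vector, regardless of input length.
vocab_size, embed_dim, hidden_dim = 1000, 64, 128
embedding = nn.Embedding(vocab_size, embed_dim)
encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)

tokens = torch.randint(0, vocab_size, (1, 17))   # a 17-token "sentence"
_, h_final = encoder(embedding(tokens))
print(h_final.shape)  # torch.Size([1, 1, 128]): whole sequence, lossily compressed
```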

It's possible to argue that being able to compress a sequence into a fixed-
length vector requires understanding, but I would argue that understanding
requires more than just being able to do lossy compression on a sequence. In
my view entity modelling and related problems are much closer to achieving
some level of understanding. They at least are able to use context to figure
out what parts of the sequence have what meaning and the relationship between
separate parts of the sequence.

I'd be interested to hear your view of how Google Translate has understanding.

~~~
uryga
I didn't mean to say GT has understanding, quite the opposite ;) I meant to
use it as an example of how translating without understanding doesn't go too
well in many cases.

I think I might have misunderstood your original comment though, as we seem to
mostly be on the same side of the issue. That being said, could you expand on
this?:

> the seq2seq model doesn't provide understanding, it only learns to encode
> the general meaning of the sequence

Maybe I'm nitpicking, but "encoding the general meaning" sounds a lot like a
form of "understanding", and I wouldn't say seq2seq does any of that. (That's
getting pretty philosophical though...)

------
zerostar07
It was nice when nerdy news was only written in nerdy magazines read by a
minority of people. Nowadays the spotlight is on tech, but the coverage is not
geeky; it's always opinionated, dystopian and as bad as politics. They ruin
the fun before it even begins.

------
mindcrime
This article has some serious problems, as others have already noted. But one
point I haven't noticed anyone making is this: the article falls into the trap
expressed in the old saw "once it works, it isn't called AI anymore".

That is, AI researchers achieve a powerful result, and as soon as it's
achieved, it's immediately (or nearly immediately) dismissed as not being an
interesting AI result, and the bar is moved forward yet again.

One can almost picture AI researchers as Sisyphus, pushing the rock to the top
of the hill, only to have it roll back down on them, over and over and over
and over and over and over...

------
kevincrazykid
If it's working well enough within the constraint, couldn't we just brute-
force the entirety of human language, thereby removing the constraint at some
future point?

~~~
vertexFarm
I feel like there would almost definitely be unexpected difficulties with that
approach, possibly fundamental ones. Human language is crazy.

------
paulydavis
Why have it talk to people taking a hair appointment? It is vastly simpler to
have autonomous agents on both sides talk to each other.

~~~
defertoreptar
It's simpler to have hundreds of thousands of small businesses, from all walks
of life, sign up for high-tech appointment scheduling software?

~~~
mlazos
I feel like a large number of businesses already use software for scheduling,
but it’s a matter of having an API that can be exposed to customers, which
isn’t really supported. So it might be easier than you’re letting on.

~~~
jhall1468
A large number of medium to large businesses do. The overwhelming majority of
small locally owned businesses don't. They still represent the overwhelming
majority of sales.

Furthermore, big companies all implement their own software or purchase a
variety of systems, some of which have APIs; all have different APIs and none
have open APIs.

A system that can speak, universally, to every salon (or even say... 60%) is
actually considerably less complex than the universal API you're calling
"easier".

------
tshadley
700,000,000 years ago to 300,000 years ago: all of nervous system life gets by
on "pattern detection" and "curve fitting" (Judea Pearl's term for deep
learning).

Homo neanderthalensis or thereabouts to modern day: language, reason, cause and
effect.

> Today’s dominant approach to A.I. has not worked out.

But "curve fitting" biological neural networks needed 2500 times more
architectural exploration and enhancement to get to rudimentary language
capacity than rudimentary language architectures needed to achieve all of
human knowledge. That suggests we're doing the right thing in growing and
enhancing deep neural network architectures now. Evolution suggests the next
step to language and reason on the right architecture is comparatively easy.

~~~
tacon
>Evolution suggests the next step to language and reason on the right
architecture is comparatively easy.

<Something> suggesting the next step to language and reason is the right path
and easy is the rhyming history of AI for sixty years.

"This time it's different."

~~~
tshadley
> <Something> suggesting the next step to language and reason is the right
> path and easy is the rhyming history of AI for sixty years.

If evolution can be taken as a successful search through the space of what is
possible, language and reason are on the path from curve fitting. But calling
this "easy" is far from my point. If evolution is any sort of guide, deep
neural network architectures are where the real work is needed before language
understanding is possible (contra Pearl, Marcus).

------
xvilka
Why not both? It is like the fight between engineers of digital electronics
and analogue electronics. I believe practical applications should use a hybrid
approach.

~~~
mindcrime
_I believe practical applications should use a hybrid approach._

FWIW, I 100% agree with this. At least in the short-term. I think any "real"
AGI will use elements of symbolic processing and sub-symbolic processing
expressed as probabilistic / statistical pattern matching using something like
ANN's.

The question I have is whether or not, in the end, all of what we call
"symbolic processing" can eventually be expressed as operations at that ANN
level (that is, can the "separate" domains actually be unified)? Given what we
know (or think we know) about how the human brain works, I lean towards the
answer being "yes", but I don't necessarily think we have to have everything
reduced to the ANN level before we can achieve useful AIs. Arguably that last
sentence was silly, because we already have "useful" AI, depending on your
definitions of "useful" and/or "AI". :-)

------
nicodjimenez
AGI is overrated; we already have humans for that. Machines should focus on
the types of problems that humans cannot already solve easily or cheaply.

------
sixhobbits
Someone sent me this article on IM. My (unedited) response:

nothing worse than someone who both doesn't understand wtf he's talking about
_and_ is very critical

either are ok on their own

>> He‘s wrong then?

well if either author can provide an article from 10-20 years ago, saying "in
10-20 years, we'll invent a single machine that can reach almost human-level
translation proficiency, drive a car in complex environments, beat the best
human player at logical games including chess and Go, beat the best human
player at language/knowledge based games like Jeopardy, hold a lengthy
conversation and answer questions, be able to accurately describe and caption
images and video, and outperform humans at trading the stock market and
detecting fraud but I won't be impressed because all of those things are
simple" then it might have some weight

"The dream of artificial intelligence was supposed to be grander than this —
to help revolutionize medicine, say, or to produce trustworthy robot helpers
for the home."

robots: we already have the vacuum cleaners. Actual flexible domestic work is
coming soon, the moment the military gets bored and releases some knowledge
and/or people have finished making money from helping amazon pack boxes and
starts making money from normal people instead:
[https://www.youtube.com/watch?v=rVlhMGQgDkY](https://www.youtube.com/watch?v=rVlhMGQgDkY)

medicine: same machines already being used to detect cancer and heart
conditions more accurately than humans, as well as folding proteins to create
new medicines. Not sure what more he wants..

"If machine learning and big data can’t get us any further than a restaurant
reservation" because you know, the current state of machine learning has been
around for, what, 12 minutes now and we're still stuck on making restaurant
reservations. Def time to call it quits and start over

". But in open-ended conversations about complex issues, such hedges will
eventually get irritating, if not outright baffling." — weird that it's pretty
difficult to get computers to understand a very ambiguous, illogical and
inefficient form of communication. Luckily humans are so much smarter than
computers that we can easily process 1-billion+ logical inferences a second
and talk to them in an efficient and logical way.

>> Okay you‘ve convinced me

but i've only just started

~~~
neffy
Oh, you never ran into Hugo de Garis?

IIRC (it was back in 1997 when I heard him talk), he was promising human-level
"artilects" by about 2015, and super-human intelligence by 2025.

The problem that I have observed is that the popular press always ignores the
100 experts who say "nope, not going to happen" in favour of the one
"visionary" who'll promise whatever they want to hear, as long as somebody
else is paying for dinner.

~~~
tgb
If de Garis is (still) predicting super-human intelligence in under 10 years,
then it sounds like he _is_ impressed.

------
tree_of_item
"Not worked out"? It seems like it's working pretty well to me. What a silly
article.

------
ppod
Zombie Searle

------
pixl97
>The crux of the problem is that the field of artificial intelligence has not
come to grips with the infinite complexity of language

Or, put another way, the problem space reality presents is unimaginably large:
so large that it took 4 billion years to achieve human-level intelligence. That
said, humans have been working on the AI problem for a pretty short period of
time, and we have made some pretty good strides at reproducing intelligence.

~~~
lostcolony
You know, it's kind of funny. We have no idea if the current path is trending
toward a local optimum or a global one.

We are completely aware of the problem of finding local optima in our AI
(especially ML) algorithms, but little thought has been given to whether our AI
evolution has been trending toward one or not. We're pouring vast research
resources along a few pathways, and hoping that our hill climbing leads to a
global maximum (or at least a local one sufficiently high as to be 'human-level
intelligence')
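
A toy sketch of that worry: greedy hill climbing on a two-peaked landscape converges wherever it starts climbing, and started near the small peak it never sees the tall one.

```python
# Toy illustration of the local-vs-global worry: greedy hill climbing
# on a 1-D landscape with two peaks.
def f(x):
    # two humps: a small one near x=-2, a tall one near x=3
    return -0.5 * (x + 2) ** 2 + 2 if x < 0.5 else -0.5 * (x - 3) ** 2 + 5

def hill_climb(x, step=0.1, iters=1000):
    for _ in range(iters):
        best = max([x - step, x, x + step], key=f)
        if best == x:
            break            # no uphill neighbour: a (possibly local) optimum
        x = best
    return x

print(round(hill_climb(-4.0), 1))  # -2.0: stuck on the small peak
print(round(hill_climb(1.0), 1))   # 3.0: found the tall peak
```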

~~~
jhall1468
> but little thought has been given to whether our AI evolution has been
> trending toward one or not.

Wait, you think people aren't thinking about that all the time? They
absolutely are. And there are very few obvious ways to determine whether this
is a local or a global optimum. Resources are limited and everyone wants to
follow the current best results, because that's necessary for business
success.

~~~
lostcolony
One would expect a competitive advantage, then, in funding research to try
something else. Exclusively following the current best results just means
equivalency, and provides no competitive edge.

------
aje403
Ernest Davis from NYU, right? Next bit of news we’ll hear is “Yann LeCun
starts a fist fight with a fellow professor.”

------
KasianFranks
This group does not know what they are talking about. Mark my words here. It's
about advanced distance calculations and engineered feature attributes.

------
KasianFranks
No it’s not. A complete definition of intellect (let alone consciousness) has
never been produced. When it is, then progress will be made.

~~~
zerostar07
Why would we need that (and why do you think it is even possible)? We didn't
have a definition of spacetime until we had one. Our current definitions of
intellect and consciousness may be completely off, like the concept of
absolute time.

