
Artificial Intelligence Is Stuck - shazad
https://www.nytimes.com/2017/07/29/opinion/sunday/artificial-intelligence-is-stuck-heres-how-to-move-it-forward.html
======
tim333
I'd dispute that AI research is stuck or that his proposed answer along the
lines of "An international A.I. mission focused on teaching machines to read"
is a good one.

Research seems to be cracking along, with AlphaGo, self-driving cars and the
like. Recently DeepMind has been doing interesting stuff with dreams[1],
imagination[2] and body movement[3], the last one being a little reminiscent
of his daughter inventing a way to exit her chair, as mentioned in the article.

Re government intervention: it's not something like CERN where you need
billions in capital, and it's not an area where a big government project is
likely to be the best use of capital.

[1] [https://www.bloomberg.com/news/articles/2016-11-17/google-
de...](https://www.bloomberg.com/news/articles/2016-11-17/google-deepmind-
gives-computer-dreams-to-improve-learning) [2]
[http://www.wired.co.uk/article/googles-deepmind-creates-
an-a...](http://www.wired.co.uk/article/googles-deepmind-creates-an-ai-with-
imagination) [3] [https://www.theverge.com/tldr/2017/7/10/15946542/deepmind-
pa...](https://www.theverge.com/tldr/2017/7/10/15946542/deepmind-parkour-
agent-reinforcement-learning)

~~~
deepnotderp
There's a LOT of non-DeepMind research as well, some of it published _before_
DeepMind's.

Don't get me wrong, DeepMind puts out a _lot_ of great work, but do look at
other research labs too, especially the ones that aren't as well known.
DeepMind markets its research really well, but there are plenty of other labs
doing good work.

~~~
nefitty
Any tips on how to find the more obscure stuff?
[http://kurzweilai.net](http://kurzweilai.net) occasionally bubbles up really
crazy stuff, but perhaps there is a journal or other curated resource you know
of.

~~~
deepnotderp
I'd recommend following Miles Brundage on Twitter, dude goes through a mind
boggling number of papers and uncovers loads of hidden gems.

------
Animats
I've been down this road. I used to be interested in "embodiment" as a path to
AI, but in the sense of things that could move around in the physical world,
not fall down, and not bump into stuff. Low-end mammal level AI was the goal -
mouse level. I made some progress in that area [1], hit the limits of
available simulators, spent several years on improving physics engine
technology, and eventually sold that off to a game middleware company.

Boston Dynamics took that much further. But it's the same approach I used -
analysis of the dynamics, not AI. It's a complicated problem in dynamics, but
it barely needs AI at all. Boston Dynamics, unfortunately, demonstrated that
even if you spend $120 million, you're not at a minimum viable product that
sells yet. Really cool legged robot prototypes, though.

This was all before machine learning took off. For a while I was looking at
adaptive model-based control, which is a lot like machine learning. Machine
learning seems to be getting good at what the front end sections of the visual
and auditory cortexes do. This is real progress. But a whole organism is still
out of reach.

There's a business case for focusing on language skills, but in a way, it's a
distraction. The mammals all have close DNA compatibility, but only humans do
language much. If we can get into the entry-level mammal range of AI, we
should be getting close. I once said something like that to Rod Brooks when he
was promoting Cog (a robot humanoid head with a lot of compute power), and he
said "I don't want to go down in history as the man who developed the world's
greatest robot mouse."

Reverse engineering biology is going very slowly. See "openworm.org", which is
an effort to develop a good computerized model of the C. elegans nematode that
runs in simulation. C. elegans has 302 neurons, the wiring diagram is known,
and it still doesn't work. This shows how little we really know about nervous
systems.

[1] [https://www.youtube.com/watch?v=kc5n0iTw-
NU](https://www.youtube.com/watch?v=kc5n0iTw-NU)
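
To make the connectome point concrete, here is a toy leaky integrate-and-fire
loop over an invented wiring matrix (every weight and constant below is made
up; the wiring alone pins down none of them, which is exactly the problem):

    import numpy as np

    np.random.seed(0)
    N = 302                                   # C. elegans neuron count
    W = np.random.normal(0, 0.5, (N, N))      # synaptic weights: unknown in reality
    W *= np.random.random((N, N)) < 0.05      # sparse, connectome-like wiring

    v = np.zeros(N)                           # membrane potentials
    spikes = np.zeros(N)
    for t in range(1000):
        I = W @ spikes                        # input from last step's spikes
        v = 0.9 * v + I + 0.1 * np.random.normal(size=N)  # leak + input + noise
        spikes = (v > 1.0).astype(float)      # fire above threshold
        v[spikes > 0] = 0.0                   # reset after firing

The leak rate, thresholds, signs and strengths are all free parameters; the
wiring diagram constrains almost none of the resulting behavior.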

~~~
deafcalculus
What's the killer app for mouse level AI?

~~~
Kraxenbichler
With the right priors - self-driving cars, for example.

------
rwnspace
I sense the hand of an editor. Particularly regarding the title.

Embodiment seems to be a branch with low-hanging fruit when it comes to
advancing AGI. I think the economic structural problems are important, but
it's possible to over-egg the details, and for some lab to stumble on an
experimental paradigm with features we didn't realise were implicated a
priori. When it comes to other AIs, the idea that we are stuck for
pragmatic/practical reasons is a little silly.

I'm no expert, just a person with an armchair (and too much time on my
hands), but I suspect that idealising the feature-space we work with can hide
as many things as it reveals - it may turn out that the computational problems
are so large because we are mostly attempting to solve them ex nihilo. That
is, embedding in an environment plays as much of a role in the process of
intelligence as neuronal structure does; genes and evolution provide a mode
for translating environmental computation into neuronal computation. The vast
scope of what we don't know about the role of glial cells in cognition (and
the little that we do) makes me doubt that complex structures of binary
mechanisms will be sufficient. But again, that's just my speculation, and
perhaps lack of education.

~~~
visarga
> Embodiment seems to be a branch with low hanging fruit, when it comes to
> advancing AGI.

If it were "low hanging" it would have been picked already. Reinforcement
learning with AI agents is hard, especially in a dynamic environment with many
types of objects.

I think the path towards AGI is to do simulation coupled with deep learning.
Simulation would open the door to predicting non-trivial effects that cannot
be learned by example because they are so rare that there are no training
examples. We can generate artificial training examples to cover all the rare
cases.
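
A toy sketch of that idea (the "simulator" and every number here are invented
for illustration):

    import random

    def simulate(force_rare=False):
        """Stand-in simulator: returns (sensor_reading, label). The failure
        mode occurs ~0.1% of the time in the wild, so real logs barely contain it."""
        failure = force_rare or random.random() < 0.001
        reading = random.gauss(5.0, 1.0) if failure else random.gauss(0.0, 1.0)
        return reading, int(failure)

    # Collected "in the wild": almost no positive examples to learn from.
    wild = [simulate() for _ in range(10000)]
    # The simulator can be forced into the rare regime, giving a balanced set.
    synthetic = [simulate(force_rare=True) for _ in range(5000)]
    train_set = wild + synthetic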

~~~
rwnspace
I'm sorry if 'low-hanging' comes off as disrespectful - I'm just guessing that
aspects of embodiment, once understood, will be capable of fairly trivial
description and will reap large consequent rewards. I remember your u/n from
other posts; we seem to have similar interests, but you are vastly more educated
in the engineering of AI. What's your background, if you don't mind sharing here?

I suspect that non-contingent aspects of cognition remain that simulation and
deep learning don't necessarily capture, though they might well be sufficient.
I'm not smart enough to be sure, and I'm stretching for a description: a child
self-reared to adulthood in the wild won't display what we usually consider
essential facets of 'humanlike' levels of intelligence or competence. And we're
hardly trying to build a caveman.

They lack whatever is crucial in socialisation -- the ability to make subtle
differentiations between other agents' actions and motivations seems to endow
self-awareness, as well as abstractions for successfully handling novel objects
and ordering perceptual relevance. Successful generality to our degree seems to
be better 'outsourced' than hard-coded into solo agents, at least in the
natural examples. Though I understand that's not strictly necessary; perhaps
there are good reasons for it. I feel like the first AGI will actually look a
lot more like "multiple similarly 'perspected' AIs interacting with one another
leads to each carrying the G in AGI". Essentially I'm suggesting it's hard to
have generality and relevance at our proficiency (or better) without a 'culture'.

What I'm thinking seems to boil down to inserting some of Piaget's ideas into
the philosophy of AI, which might be a bit much, and I'm open to charges of
bullshit.

~~~
visarga
I am just a hobbyist in machine learning. The current situation in AI is that
we can only do limited aspects of perception, like vision and hearing, at a
superficial level. We can recognize objects but we can't recognize relations
between objects nearly as well. There is no global scene understanding yet.
With text, we can do syntax and translation, but we can't do reasoning except
in trivial cases. There is no power of abstraction yet, no transferring of
knowledge between domains, which is essential for advanced intelligence.

So, before we have an embodied agent, we need to solve the reasoning and
abstraction part, and my money is on graph signal processing (a kind of neural
net) and simulators (also implemented as neural nets). We need to move from
simple object recognition to reasoning and simulation on graphs of objects and
relations.
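
For what it's worth, a minimal sketch of the graph flavor of that idea: one
round of message passing over a toy scene graph (all shapes and weights are
invented; in a real system the W matrices would be learned):

    import numpy as np

    x = np.random.randn(3, 4)        # 3 objects, 4-d features (position, size, ...)
    A = np.array([[0., 1., 0.],      # adjacency: which objects are related
                  [1., 0., 1.],
                  [0., 1., 0.]])

    W_msg = np.random.randn(4, 4)    # message transform (learned in practice)
    W_upd = np.random.randn(8, 4)    # update transform (learned in practice)

    # One message-passing step: each object aggregates its neighbors' features,
    # then updates its own representation from [self, messages].
    messages = A @ (x @ W_msg)
    x_new = np.tanh(np.concatenate([x, messages], axis=1) @ W_upd)

Stacking a few such steps is what lets information about relations, not just
objects, flow through the network.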

------
pgodzin
> Not long ago, for example, while sitting with me in a cafe, my 3-year-old
> daughter spontaneously realized that she could climb out of her chair in a
> new way: backward, by sliding through the gap between the back and the seat
> of the chair. My daughter had never seen anyone else disembark in quite this
> way; she invented it on her own — and without the benefit of trial and
> error, or the need for terabytes of labeled data.

I really hate the constant comparisons of AIs to babies. The author's 3-year-old
daughter has had 3 YEARS of sensory data obtained through moving and
trying to fit through things. That is terabytes' worth of data! Given that much
experience, I would expect an AI to be able to generalize like that too.

~~~
zebrafish
Yes, terabytes... probably petabytes or more. And all of it is unlabeled data.

If you fed video and sensory data to a deep net for 3 years and somehow were
able to come up with an objective function that modeled "survival", I still
highly doubt that anything would come out that remotely resembles human
intelligence. There's no way that I'm aware of to label reality in real time.

~~~
cbames89
One way to label is by expectation: if the outcome is what you expected, it
gets one label; if it isn't, it gets another. DeepMind has had some success
with this approach.
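
As a sketch of the mechanics (the `model` and `env` interfaces here are
hypothetical stand-ins, not any particular library):

    def label_by_expectation(model, env, state, action, tol=0.1):
        """Self-supervised labeling: compare the model's prediction against
        what actually happened. No human annotator in the loop."""
        predicted = model.predict(state, action)   # hypothetical predictive model
        actual = env.step(state, action)           # hypothetical environment
        surprise = abs(predicted - actual)
        return "expected" if surprise < tol else "surprising"

The "surprising" cases are exactly the ones worth training on further.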

------
backpropaganda
An agent can be intelligent without learning how to read human language.
Look around, most organisms in our world communicate using extremely simple
binary language or don't communicate verbally at all. Yet, they are
intelligent enough to do very complicated tasks which current robots fail to
do. Intelligence is an easier problem than language, and thus should be solved
before language.

What a relief to read a refreshing take on the real progress of AI. Yes, it's
stuck, and that's the real problem of AI: we haven't been able to do anything
significant beyond perception. However, unlike the author, I don't think the
solution is to nationalize AI research (we're not close enough for that), but
to fund more non-deep-learning research for 5-10 years; then we might see some
progress in non-perception tasks.

~~~
observation
> Look around, most organisms in our world communicate using extremely simple
> binary language or don't communicate verbally at all.

Yes.

> Yet, they are intelligent enough to do very complicated tasks which current
> robots fail to do.

True.

> Intelligence is an easier problem than language, and thus should be solved
> before language.

Wrong.

This is the classic mistake everybody makes, including people in Computer
Science.

Because if that were so our robots would _already_ be clambering backwards
through chairs (per the metaphor in the article).

You have to think of deep evolutionary history. It took centuries to come up
with advanced mathematics, so in some sense (strange as it is to humans) that
isn't that hard. Same with language: it only took tens of thousands of years.

For Nature to learn how to develop a nervous system capable of flexibly
interacting with the environment, culminating in our brains, took hundreds of
millions of years.

This isn't a claim that we have to wait that long to re-engineer such powers,
but it is to point out that the possibility space for developing a nervous
system was much larger than the space for those same organisms to learn
language...

tldr; Walking is hard.

We have been conflating what is easy for us with what is objectively easy,
because we don't appreciate the Deep Time that Nature has been working with. I
suspect we will develop EMs (brain emulations, a sort of shortcut) before we
understand what we are doing, but I hope that is wrong.

~~~
mannykannot
If I am following you correctly, you are arguing that walking is a harder
problem than language because it took much longer to evolve.

This seems to assume that a facility for language and advanced mathematics is
independent of the existence of a nervous system capable of flexibly
interacting with the environment, but it seems plausible, indeed probable,
that language, consciousness and math depend heavily on the prior neural
infrastructure, and their development was the most recent step in a process
that has been going on since the evolution of the first synapse.

On the other hand, I am skeptical of the somewhat popular view that the key to
generalized AI is to make robots that interact more thoroughly with their
environment, and that they will then find their own way to language and
consciousness. Partly, this is because I do not think that if you
intentionally pursue the robotic goal, you will necessarily create the sort of
infrastructure that is generalized enough to be the basis for the emergence of
language.

~~~
observation
> If I am following you correctly, you are arguing that walking is a harder
> problem than language because it took much longer to evolve.

This is a thorny subject. So I am saying that, in some objective way, walking
is harder than language because Nature took millions/billions of years to
traverse the solution space. Then, once a huge number of preconditions
existed, we had the development of language.

I am not saying that this means if it takes 10 years to develop language with
some artificial means that it will take 100,000 years to develop walking.

What I am pointing to is that we ought to appreciate that if even blind
natural selection took that long, then the possibility space for developing a
nervous system must be much larger than we have anticipated.

As evidence of this: consider how (at least in popular culture, but also in
comp sci in the old days) we developed chess-playing computers, and it was
broadly assumed that breakthroughs in getting robots to walk and talk would
soon follow. That did not happen. It was a natural assumption, but it was
wrong.

> This seems to assume that a facility for language and advanced mathematics
> is independent of the existence of a nervous system capable of flexibly
> interacting with the environment, but it seems plausible, indeed probable,
> that language, consciousness and math depend heavily on the prior neural
> infrastructure, and their development was the most recent step in a process
> that has been going on since the evolution of the first synapse.

I don't know the answer to that. On different days I think one or the other is
true. On Day #1 I think Nature obviously required walking before talking, but
we could develop them in a different order, just as we didn't need to develop
better horses to produce cars. On Day #2 I think to myself that there's a
deeper sense in which you really do require walking before talking, because
otherwise why didn't Nature develop biological microlife that evolved
communication ability long before it developed legs? So...

> On the other hand, I am skeptical of the somewhat popular view that the key
> to generalized AI is to make robots that interact more thoroughly with their
> environment, and that they will then find their own way to language and
> consciousness.

We cannot be certain consciousness or intelligence are high-probability events
once you have life. We could be like those French automata makers who made
such exquisite mechanical toys for the aristocracy but ultimately got nowhere,
whereas the English inventors meddling with water and steam power really
kicked off a revolution.

Who in Silicon Valley is genuinely looking at the fundamentals of A-Life or
AI? OpenAI? MIRI? Stanford? DARPA?

------
fauigerzigerk
I can't follow the author's logic. First he complains about the limited
breadth of AI approaches (bottom-up), and then he makes the case for more
central coordination of research efforts.

Unlike applied physics or medicine, AI doesn't require massive capital
investment like building a particle accelerator or running clinical trials
over years.

So if we already suffer from a lack of diversity, why should we ape the
organizational structure of those fields?

~~~
ethbro
The author's general problem is that he seems ignorant of (or to be willfully
omitting) the expert-systems period in AI (realized and popular in the '80s,
with academic foundations laid in the '60s and '70s).

I agree with the article that GP AI is likely to ultimately be a fusion of
bottom-up and top-down systems, and that expert systems seem to be getting
short shrift after their earlier failures, while neural networks are possibly
receiving overly optimistic expectations.

To be fair, I believe this is the author:
[https://en.m.wikipedia.org/wiki/Gary_Marcus](https://en.m.wikipedia.org/wiki/Gary_Marcus)
, and he appears to have a cognitive neuroscience background as opposed to
computational AI. So I wouldn't be surprised if he actually was unaware of
1960s-80s CS AI research.

~~~
candiodari
Examples of top-down algorithms, in my opinion (since bottom-up and top-down
are debatable concepts in many concrete situations), include:

\- Genetic algorithms

\- Q learning

In the sense that they learn general behavior first and then learn ever more
little "tricks" to be used in particular situations. Both are more effective
when combined with ANNs. But when they start they're only aware of very high
level goals.
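
A minimal tabular Q-learning loop, to make the "high-level goals only" point
concrete (the environment here is a random stand-in; the agent is only ever
given the scalar reward):

    import random

    n_states, n_actions = 16, 4
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, eps = 0.1, 0.99, 0.1      # learning rate, discount, exploration

    def env_step(state, action):
        """Stand-in environment: returns (next_state, reward)."""
        return random.randrange(n_states), random.random()

    state = 0
    for _ in range(10000):
        if random.random() < eps:           # explore occasionally...
            action = random.randrange(n_actions)
        else:                               # ...otherwise exploit current estimates
            action = max(range(n_actions), key=lambda a: Q[state][a])
        nxt, reward = env_step(state, action)
        # The only top-down signal is the reward; all the little "tricks"
        # accumulate in the Q table.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt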

That said, I also have kids, and while they're bigger now, I would argue the
idea that humans work top-down from the very beginning doesn't survive caring
for a toddler for a few hours. (Babies can't really move, so they don't make
particularly stupid decisions. Toddlers and up to teenagers make idiotic
decisions that make sense from particular perspectives. For instance, they
exhibit extreme short-term decision making, like taking a huge risk of falling
down just to get a little piece of candy.)

Top-down decision making isn't just eventually emergent behavior, it's learned
behavior. Telling a toddler that to get candy he should go to the store, get
flour, sugar and ... and follow this recipe doesn't work. They get distracted
after 30 seconds. It's not that they're trying to fail; their mind just
doesn't let them focus beyond a certain (short) amount of time. Adults have
the same limit, just with a longer time, but they have learned to compensate
for it, for instance using TODO lists or project plans.

~~~
ethbro
I've always had the suspicion that memoization (or lack thereof) is the
primary reason really young children do and think a lot of the ways they do.

As adults, it seems like we don't actually experience every sensation of the
world anymore. Most of the time it's already high level categorized (e.g.
"apple") by the time it hits our conscious mind.

~~~
candiodari
I agree; it's obvious that human intelligence is >99.9% a behavior-copying
algorithm. What we call rational thought is in reality restricted to conscious
exercise, and it is a learned skill. A trick, nothing more, and especially not
a core part of our behavior. It is not that different, at a low level, from
learning to juggle balls. As for rational behavior: firstly, most people just
don't have it at all; and secondly, even the people who do occasionally behave
rationally manage it only when things are happening slowly enough and they're
putting constant effort into maintaining that rational behavior, constantly
second-guessing themselves and going back in memory every few minutes to
evaluate their own actions and formulate a plan.

And if you've been to the third world (or just a large poor part of a large
western city), you'll know this is true: billions of people have never learned
to act rationally, and those who do are few and far between. You can do a
thought exercise with these people and figure out with them what the rational
action is, and the vast majority will simply act the way they were going to
anyway.

------
unix1
In 1988, Hubert L. Dreyfus and Stuart E. Dreyfus released a paperback version
of their previously published book "Mind over Machine", in which they mostly
spend their time debunking the myth that expert systems and rule-based
programs are ever going to have "intelligence" on par with the human brain.

The book is an interesting read in itself, but what I found remarkable is that
in the 1988 release they added a "preface to paperback edition" in which they
used a couple of pages to give their views on artificial neural networks,
which (though not new) was gaining some steam at the time. The conclusions
they reached are as relevant now as they were 3 decades ago.

There have been no new breakthroughs in this area. Most of the research being
done is in application of what we have known for decades in specific areas,
with minor insights into tweaks and uses of combinations of algorithms to
better solve specific problems. The big differences between then and now are:
(1) technology is more accessible - data is easier to collect, store and
output via many input/output methods; and (2) the hardware is significantly
faster - we can now go through more data, make algorithms run faster, and
appear to perform better.

This inevitably brought a lot of hype, including many predicting human-like
artificial intelligence not too far away. But maybe those with experience in
the field in the '60s and '70s in the USA and Japan can draw a parallel between
what's happening now and what has happened a few times in the past in this area:

\- companies perform neat promising demos with unrealistic implicit or
explicit promises

\- investors pour money in

\- media hype ensues

\- after a while - no new breakthroughs: still can't turn an ANN or expert
system into a human brain

\- outcome is improvements in limited use cases

\- hype dies down, but we can repeat the cycle after improvements in hardware

Edit: formatting

~~~
candiodari
> There have been no new breakthroughs in this area. Most of the research
> being done is in application of what we have known for decades in specific
> areas, with minor insights into tweaks and uses of combinations ...

There are 2 huge problems with that:

1) Nobody is trying to "embody" an intelligence with any sort of research
project behind it. Nobody's even trying to create an artificial individual
using neural networks. There are several obvious ways to do this, so that's
not really the problem.

Therefore I claim that your implied conclusion, that it isn't possible with
neural networks, is somewhere between premature and wrong.

2) What if the difference between an ANN and our brain is a difference of
scale and... nothing more? We still do not have the hardware scale to get
anywhere near the human brain, and just so we're clear, the differences are
still huge.

Human neocortex (which is roughly what decides on actions to take): 100
billion neurons

Human cortex (which is everything that directs a human action directly; the
neocortex decides to throw the spear and picks the target, the cortex aims,
directs muscle forces, moves the body and compensates for any disturbance,
like uneven terrain): another 20 billion neurons.

Various neurons on the muscles and in the central nervous system directly: a
few million (mostly on the heart and womb - yes, also in men, who do have a
womb; it's just shriveled and inactive). They're extremely critical, but don't
change the count very much.

AlphaGo: 19x19x48 units, times 4 I think. About 70,000 neurons, and that does
sound like the correct order of magnitude for recent large-scale networks.

A human neuron takes inputs from ~10000 other neurons, on average. A state-of-
the-art ANN neuron takes input from ~100, and since it's Google and they've
got datacenters, AlphaGo was ~400.

So the state-of-the-art networks we have are on par with animal intelligence
at the level of a lobster, ant or honeybee. I think it is wholly unremarkable
and understandable that these networks do not exhibit human-level AGI.

What is remarkable is what they can do. They can identify species from
pictures better than human specialists (and orders of magnitude better than
ordinary humans). They can speak. They can answer questions about a text. They
can... etc.

Give it a few orders of magnitude and there will be nothing these networks
don't beat humans on.

~~~
mehh
Hmm, is this not the same argument that classical AI gave: a few more rules
and it will be alive!

------
bhickey
The author is just spouting off on a topic he doesn't understand. It's just a
rehashing of Chomsky's hatred of statistical NLP. He pulls off the neat trick
of approximating knowledge of artificial intelligence by hoodwinking the New
York Times, but he doesn’t have insight into the topic he's talking about.

~~~
jesperlang
Can you describe what you mean by "he doesn't understand"? His background is
in psychology and neuroscience, so his view of intelligence is probably quite
different from that of a person coming from computer science.

For me he puts words to something I've felt recently: that what we're doing is
cool and all, but it just doesn't feel like the right way to approach the
problem. We're pouring loads of data and computing power into something that
produces results that look intelligent, but digging deeper reveals nothing
resembling what a neuroscientist would call intelligence.

~~~
bhickey
I'm riffing on his writing --

> Even Google Translate, which pulls off the neat trick of approximating
> translations by statistically associating sentences across languages,
> doesn’t understand a word of what it is translating.

This is just another incarnation of "AI is the thing we haven't done." He's
parroting Chomsky's disdain for statistical models and John Searle's
fundamental misunderstanding of AI. For the former, Norvig has a fair rundown
of Chomsky's complaints
([http://norvig.com/chomsky.html](http://norvig.com/chomsky.html)).

> bears no resemblance to what a neuroscientist would call intelligent

TensorFlow gets results. The neuroscientist can claim it's a P-zombie, but
they need to point to some criteria for accepting something as intelligence.
Otherwise we're just moving goalposts.

~~~
mannykannot
>> Even Google Translate, which pulls off the neat trick of approximating
translations by statistically associating sentences across languages, doesn’t
understand a word of what it is translating.

>This is just another incarnation of "AI is the thing we haven't done."

I don't think so - it appears to be an objectively correct assessment of the
current state of the art.

> Otherwise we're just moving goalposts.

The first movement of the goalposts was to call '80s technology AI. Now they
are drifting back to where they started.

On the other hand, I am surprised by the claim that AI is stuck; my outsider's
impression is that progress has accelerated. Perhaps the impression of being
stuck comes from more people realizing how difficult a problem it is.

~~~
ballenf
>>> Even Google Translate, which pulls off the neat trick of approximating
translations by statistically associating sentences across languages, doesn’t
understand a word of what it is translating.

>> This is just another incarnation of "AI is the thing we haven't done."

> I don't think so - it appears to be an objectively correct assessment of the
current state of the art.

How deep an understanding is required to meet the threshold? The skepticism
feels like "no true Scotsman" applied to the definition of _understanding_.

I observe the following in young children when exposed to a new word:

0\. First exposure to totally new word used in a sentence with more familiar
words.

1\. Brief pause

2\. Mimic pronunciation 1-2 times

3\. Process for minutes, hours, or days.

4\. Use the word in a less than 100% correct way

5a. Maybe hear the phrase repeated back with the error "corrected" (hello
internet)

5b. Maybe hear more usage of the word in passing from others (with varying
degrees of "correctness")

6\. Recurse for life.

At what point did the person _understand_ the word? How is AI translation
substantially different?

I'm not sure _I_ understand any word in a way that would satisfy AI skeptics.

~~~
mannykannot
This is a discussion of the current state of affairs, not, for example, a
Searle-like claim that understanding cannot and will never be achieved. To
substantiate a claim of 'no true Scotsman', I think you have to present an
actual case where you think a machine has achieved understanding but is being
unreasonably dismissed.

Ironically, your last sentence has 'no true Scotsman'-like reasoning, along
the lines of 'no true AI sceptic would fairly evaluate a claim of machine
understanding.'

BTW, I am not a skeptic of the potential of AI, though I am skeptical of some
claims being made.

------
bitL
AI is currently about giving us an equivalent of a pocket knife for some tasks
that are viewed as being in the "intelligence domain". That's all. Nobody
really thinks we are anywhere close to general intelligence. Or, per the
famous saying that computers are "bicycles for the mind", current AI is about
adding a small electric motor to those bicycles to make them easier for
everyone.

~~~
kobeya
> Nobody really thinks we are anywhere close to general intelligence

The kind of people that attend AGI conferences do.

------
the8472
> Even the trendy technique of “deep learning,” which uses artificial neural
> networks to discern complex statistical correlations in huge amounts of
> data, often comes up short.

That doesn't seem very surprising given their limited complexity compared to,
say, a fly's brain. Artificial NNs manage to work because they are highly
specialized for a specific task.

------
rdlecler1
The problem we've always had with AI is that most people were trying to
engineer it rather than reverse-engineer it. Every time there was a major
advance, the computational neuroscientists would say: "we knew that, you
should have come talked to us 15 years ago." There's some work out there on
this, but it's more basic research on how to use developmental, genetic and
evolutionary algorithms to grow neural networks. Most AI researchers try to
skip this step, but it's what's holding back progress.

~~~
observation
What is a good book on genetic algorithms?

~~~
rdlecler1
It's been about 10 years since I left the field. I believe Rodney Brooks did
some work on this. Eggenberger was also doing some interesting work about 15
years ago--not sure where that is today. The computational demands for
evolutionary & genetic algorithms are significant but there's no free lunch.

------
unityByFreedom
> An international A.I. mission focused on teaching machines to read could
> genuinely change the world for the better — the more so if it made A.I. a
> public good, rather than the property of a privileged few.

> author: Gary Marcus is a professor of psychology and neural science at New
> York University.

Not sure what he has in mind. There are already a lot of smart people building
Q&A systems. We need tests to establish whether a system can read; once you
have those, you can throw a competition up on Kaggle with a big purse.

~~~
nopinsight
No systems can really understand what they read or translate yet. They are
basically sophisticated pattern matching systems.

Check out Winograd Schema:
[https://en.wikipedia.org/wiki/Winograd_Schema_Challenge](https://en.wikipedia.org/wiki/Winograd_Schema_Challenge)

Overview by an expert:
[http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/...](http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html)

An example: The city councilmen refused the demonstrators a permit because
they [feared/advocated] violence.

When you switch between "feared" and "advocated", the referent of 'they'
changes. There are many more examples like this.

The best performance in the first round of the 2016 challenge was 58% by a
neural network based system. Random guessing would yield 44% (some questions
had more than 2 choices). Human performance was 90.89% with a standard
deviation of 7.6%.

Here are the challenge problems used in the first round:
[http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/...](http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/PDPChallenge2016.xml)

Human Subject Test Performance:
[http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/...](http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS2016SubjectTests.pdf)

~~~
PeterisP
It's important to note that Winograd Schemas don't really test whether the
system understands _those_ sentences; they essentially test whether the system
has appropriate "common sense" knowledge/experience about how our world and
society work, i.e., whether the system can mine whatever _other_ data sources
are usable to find out about the topic.

To give the proper answer in the example you use, a human (or a system) needs
to know how such permits are issued and what the common reasons for refusing
them are. As such, a sufficiently sophisticated pattern-matching system is
perfectly capable of answering such questions - there's a simple pattern
difference: fearing violence causes you to refuse permits, but advocating
violence causes you to get refused. It's worth thinking about where humans
learn this. For Winograd schemas like putting a trophy in a suitcase, it's the
basic childhood experience of putting stuff in boxes that we all share, but a
machine won't (unless it's raised as a child-robot). For schemas like this
one, it's an understanding of how our society works, learned by participating
in it for years, which we all share, but a machine won't (unless we allow
machines to participate in our society). I.e., it's not so much a measure of
_intelligence_ as a measure of shared background experiences. A human from a
hunter-gatherer tribe wouldn't be able to answer the councilman-permit schema,
but that doesn't mean he/she isn't intelligent.

The difficulty there is caused mainly by the need to have domain-specific
knowledge in a wide range of domains - we will perceive systems as "dumb"
unless they share the background knowledge that most humans have gained by
being part of our society and basic schooling, and since machines won't do
that (yet), we're looking for "unnatural" ways of instilling common-sense
knowledge without the direct experimentation and participation that we rely on.

------
unityByFreedom
Click-baity. AI tech isn't stuck. There are many forthcoming breakthroughs,
particularly in medicine, which should really benefit humanity. Radiology is
poised to let CNNs make radiologists a lot more efficient. We just need to
build the labeled datasets.

If we invest heavily in some AI tech, let it be to produce huge medical
datasets. The software and hardware are ready. We're only lacking sufficient
data to make more diagnoses with super-human accuracy.
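
To underline the "software is ready" point, the model side really is commodity
now; a minimal sketch (the architecture and shapes are illustrative, and the
`images`/`labels` variables stand in for exactly the dataset this comment says
is missing):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(256, 256, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. benign vs. suspicious
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(images, labels, epochs=10)  # the expensive part is labeling, not fitting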

~~~
pdimitar
_> There are many forthcoming breakthroughs... We just need to build..._

Not sure how you don't see the irony. This has probably been said thousands of
times for many scientific areas throughout history. Example:

There are forthcoming breakthroughs in humanity being an interstellar
civilization. We just need to build faster-than-light engines and terraforming
equipment. Nothing major, right?

~~~
unityByFreedom
> There are forthcoming breakthroughs in humanity being an interstellar
> civilization. We just need to build faster-than-light engines and
> terraforming equipment. Nothing major, right?

Building a dataset is easy and not something you would compare to faster-than-
light engines. Believe it or not, some major breakthroughs _are_ held back by
simple lack of funding, and lack of awareness.

To make a dataset you need to pay radiologists to label enough data for the
system to do its job well. This could be thousands, or hundreds of thousands
of images. It is _technically speaking_ very doable, but also very expensive.
Then there are data privacy issues stopping you from sharing data. These are
social issues, not engineering issues.

~~~
pdimitar
Your reply, while overall correct, still underestimates the problem of "how do
we invent AI?". Datasets shouldn't need to be "tuned" or "refined" (terms I
see in practically every "AI" article); if they need to be remade, then the
consumer is not only not AI, it's not even a clever NN implementation.

Forgive my cynicism if you can, but in my eyes you guys just support what
might make you money one day (or already does) and thus aren't objective.
You're like the parents that are completely blind to their child's defects due
to paternal / maternal hormones.

There's no AI on this planet. There are not even _beginnings_ of an AI. Deep
learning is practically a statistically biased classification algorithm and
not much else.

To me the term AI is being abused. I want AI to exist, but I am seeing every
indication that the area is falling victim to capitalistic interests and this
won't change anytime soon.

~~~
unityByFreedom
> Your reply, while overall correct, still underestimates the problem of "how
> do we invent AI?"

I'm not talking about building a real AI.

I actually agree with you that we're nowhere near developing that. Not sure
where you got any other idea from me.

I'm saying there are some machine learning problems that could be served by
some simple data entry. This could save lives, including yours and mine, via
advanced cancer detection [1]

You're right that since I studied data science, I'm incentivized to advertise
its usefulness. But, I studied data science because I believe it is a growing
part of our future.

You can try it yourself too. There are many tutorials online. Making use of
machine learning gets easier every year.

[1] [http://money.cnn.com/2015/03/12/technology/enlitic-
technolog...](http://money.cnn.com/2015/03/12/technology/enlitic-
technology/index.html)

~~~
pdimitar
I would gladly take the time, if only I had some. :(

Thank you for your kind answer.

------
dwighttk
Saved you a click: huge government program to teach ai to read

~~~
dwighttk
well, someone broke off the clickbait from the headline... It used to be:
"Artificial Intelligence is Stuck. Here’s How to Move It Forward."

------
ksec
Is the author arguing there will be another AI winter?

I have a simple theory (I am not sure if there is a proper term for it) that
addresses all of the problems mentioned above.

Whenever a technology is capable of producing some form of economic value, it
will continue to improve and tackle whatever hurdle or barrier you think it
has.

In all of the previous AI eras, research was funded by governments or large
companies like IBM. But none of it ever made an impact, or profits worth more
than what was invested. Expert systems never caught on.

And this is why everyone is excited: for the first time ever we have AI
(machine learning) producing useful results in a MUCH MORE cost-effective way.
And these savings mean companies are investing back into AI research for
further improvement and benefits. AI research has created a self-sustaining
cycle that we know, at least for the next 5-10 years, will not lack funding.

------
Dzugaru
"Stuck"? Cmon, imagenet winning deep nets are 5 (five) years old. Give it
time.

------
teabee89
I agree that AI is hyped, but I believe the problem is that we absolutely
don't care about 50 years of neuroscience research. We know so little about
the brain, but already much more than back when "artificial neurons" were
first modeled. The only company that I believe is on the right track is
Numenta: they focus on reverse-engineering the neocortex, and they have a
living theory that is updated every time there is a research breakthrough.

------
CuriouslyC
AI definitely isn't stuck, unless you define it solely as creating artificial
general intelligence. The problem there is that we don't understand general
intelligence very well at all.

Of course the fixed graphical models we use have their own problems. For
instance, we can't even effectively model a neural network with a variable
number of inputs.

~~~
yorwba
I'm not sure why you think neural networks can't handle variable numbers of
inputs. Recurrent networks that ingest whole sequences have been around for a
long time, and other structures have their own network topologies. Support for
things other than classic RNNs is more limited, but e.g. TensorFlow Fold
([https://github.com/tensorflow/fold](https://github.com/tensorflow/fold)) was
specifically designed for that.
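
A bare-bones illustration of the variable-length point (weights are random
here; a trained network would learn them):

    import numpy as np

    def rnn_encode(sequence, W_x, W_h, b):
        """Plain RNN: folds a sequence of ANY length into one fixed-size vector."""
        h = np.zeros(W_h.shape[0])
        for x in sequence:
            h = np.tanh(W_x @ x + W_h @ h + b)
        return h

    d_in, d_h = 4, 8
    W_x, W_h, b = np.random.randn(d_h, d_in), np.random.randn(d_h, d_h), np.zeros(d_h)
    short = [np.random.randn(d_in) for _ in range(3)]
    long_ = [np.random.randn(d_in) for _ in range(300)]
    # Both inputs map to the same 8-d representation despite their lengths.
    assert rnn_encode(short, W_x, W_h, b).shape == rnn_encode(long_, W_x, W_h, b).shape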

~~~
CuriouslyC
RNNs are a good model for things that are naturally sequential with limited
state transfer. They are not so elegant for things with no defined ordering
and a large amount of shared state.

~~~
yorwba
Can you give an example of a problem where you have "no defined ordering and a
large amount of shared state"? What kind of model is typically used in that
domain?

~~~
CuriouslyC
One problem I've been considering is triangle surface meshes. The data is
variable in size, with no defined start or end point, and points distant on
the surface may share a high amount of mutual information (through symmetry,
etc.).

One approach I've thought about is applying kernel methods. You can compose
kernels, so they scale up cleanly regardless of variations in the input
dimension. The sum or product of kernels between each node in the input graph
and some basis set is itself a kernel. If your kernels describe covariance
between observations (i.e. Gaussian processes) then additional input
dimensions have a constraining effect, rather than causing evidence inflation
for larger inputs as a typical neural network might.
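
One concrete reading of the basis-set idea, as a sketch (the basis points,
kernel width and sizes are all invented): sum each vertex's kernel responses
against a fixed basis, and the output size no longer depends on the mesh size.

    import numpy as np

    def rbf(a, b, gamma=1.0):
        """RBF kernel matrix between two point sets a (n,3) and b (m,3)."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    basis = np.random.randn(16, 3)          # fixed reference points

    def mesh_feature(vertices):
        """Sum of kernels against the basis: a fixed 16-d descriptor,
        regardless of how many vertices the mesh has."""
        return rbf(vertices, basis).sum(axis=0)

    small_mesh = np.random.randn(100, 3)
    big_mesh = np.random.randn(5000, 3)
    assert mesh_feature(small_mesh).shape == mesh_feature(big_mesh).shape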

------
peterburkimsher
The article is calling on us to teach AI to read words and phrases and look
for the meaning, not just a statistical correlation.

I put dictionary data into Pingtype English to try to parse phrases instead of
just words, e.g. "pick [something] up". The purpose is to do word-for-word
translation to Chinese as an educational tool. It's not perfect, but the
dictionary is editable. You can contact me if you want to discuss new ways of
extending the features (e.g. data from UrbanDictionary, movie subtitles, etc.).

[http://pingtype.github.io/english.html](http://pingtype.github.io/english.html)

I also want to correct the author: CERN does not have billions of dollars of
funding. There are only about 5,000 staff, and the other 10,000 people working
there are funded by universities elsewhere, which send them to CERN to do the
research.

~~~
lispm
The CERN budget is roughly a billion Euro per year.

~~~
homarp
some numbers from [https://press.cern/facts-and-figures/budget-
overview](https://press.cern/facts-and-figures/budget-overview)

in 2016, 1127.2 million CHF or 1,163,675,662.22 USD

~~~
giardini
A billion dollars' research into AI would be more beneficial than another
billion spent on CERN.

The Standard Model already covers well over 99% of known physics. Other money
is being wasted paying students and professors to study string theory without
any experiments being possible.

Let's develop some true AI and let it close the gap. Two birds, one stone.
[And maybe we can find out how we do analogies, at the same time!]

------
asketak
Most of the AI progress in recent years is just tuning pattern-recognition
algorithms. We cannot expect these algorithms to produce results like humans,
because humans don't get their information just from perceiving the world;
their patterns of thinking are also vastly dependent on the underlying
structure of the brain, which has developed over millions of years of
evolution.

If there is a cliff, toddlers are scared of being near it. They definitely
don't have the ability to "imagine = simulate" the consequences of falling
over the cliff. The fear is in the structure of the neurons of the brain.

If you feed a classifier algorithm images of black dogs and white swans and
then ask it to classify a black swan, both classifying it as a dog (because of
color) and as a swan (because of shape) are right. The difference is only in
bias: which features you prefer.
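
The black-swan point can be shown in a few lines: the "right" answer is
entirely determined by how the distance weights the two features (the features
and numbers here are invented):

    import numpy as np

    # Toy features: [darkness, neck-length]
    dogs = np.array([[0.9, 0.1], [0.8, 0.2]])    # black, dog-shaped
    swans = np.array([[0.1, 0.9], [0.2, 0.8]])   # white, swan-shaped
    black_swan = np.array([0.9, 0.9])

    def nearest(query, weights):
        """1-nearest-neighbour under a weighted distance; `weights` IS the bias."""
        d = lambda pts: (((pts - query) * weights) ** 2).sum(axis=1).min()
        return "dog" if d(dogs) < d(swans) else "swan"

    print(nearest(black_swan, np.array([1.0, 0.1])))  # color-biased -> "dog"
    print(nearest(black_swan, np.array([0.1, 1.0])))  # shape-biased -> "swan"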

~~~
scardine
You must watch the documentary "The Secret Life of Babies". There is a scene
where they put several babies in a "cliff" situation (with glass at the same
level to prevent them from falling); they will confidently march onto the
glass. They do several other experiments that show babies are fearless.

[https://www.netflix.com/title/80009352](https://www.netflix.com/title/80009352)

~~~
giardini
Babies are not born afraid of the cliff (i.e., it isn't hardwired into the
infant brain); they develop fear of it at about 9 months' age, once their
depth perception has developed.

------
wwarner
It's great to read someone challenging the hype, but this analysis is a bit
too negative. I'd point out that AlphaGo did something a little bit like the
author's toddler learning to squeeze through the back of her chair: AG learned
its new moves by playing against itself. I still wonder how they avoided
overfitting here. Imagine twins who loved to play and learn about go, played
constantly against one another, and in the process discovered new plays they
both believe to be unbeatable. You'd expect some of those new moves to be weak
when actually tested against other players. But AG's new moves really

Admittedly, games with simple scores are the only scenarios where this really
kicks in. But then again, the stock market could fit this model.

------
shahbaby
This article is actually pretty accurate in that it identifies that neither
academia nor industry is well suited to solving AGI.

Suppose that a real solution to AGI will take 10 years to develop, with
minimal milestone achievements along the way. In other words, until you have
the complete system figured out, it'll be hard to show results.

In academia, most people are ultimately focused on getting their paper
published.

In industry, most people are ultimately focused on making a profit.

In both cases, people would get off track long before they reached the full
solution.

Lastly, the principles by which a real AGI operates are likely so abstract
that everyone reading this will be long dead by the time humans stumble upon
them.

The only way we can shortcut this process is by looking at the solution (i.e.
the way Numenta is doing it).

------
giardini
No need to involve other nations: the US military is moving ahead on its own:

[http://www.insidesources.com/nsa-chief-without-ai-cyber-
is-a...](http://www.insidesources.com/nsa-chief-without-ai-cyber-is-a-losing-
strategy/)

From the 2016 article:

"Artificial intelligence will play a big role in the future of U.S. strategy
in cyberspace, according to National Security Agency Director Adm. Michael
Rogers, who told Congress Tuesday that relying primarily on human intelligence
'is a losing strategy.'”

The lineage of military intelligence systems using AI is (necessarily,
historically) heavily biased toward language-based AGI ("old AI") rather than
neural networks. The NNs are there, of course, but IMO the impressive work is
in the AGI.

------
LukeB42
What do you do when this software discovers it's rewarding to compete with you
for physical space?

The problem with AGI is that it'll implicitly have to model the entities it
interacts with, and that may present two challenges:

1) Developing robust strategies for managing an AGI discovering a greater
reward response from defecting than for cooperating with people / developing
strategies for managing scenarios in which a quorum of AGIs discover it's
rewarding to collude to the detriment of humans / cellular life.

2) The tractability of maintaining one language model per entity across
channels.

The actual implementation could be done by plugging a handful of related
techniques we've developed over the past couple of years together though.

------
yters
Why does everyone assume human intelligence is computable? It seems we should
be checking that assumption at this point, since we've made so little
progress, and a definitive answer would be much more valuable than this
ongoing speculation.

~~~
JacksonGariety
Philosophers have been saying this for 60 years; Hubert Dreyfus's 1972 book
"What Computers Can't Do" is a notable example.

~~~
yters
It's alright, but I don't recall the book articulating actions computers
cannot do. It seemed to still leave open the possibility that we can automate
all human work. What we need is a precise task that humans can do with ease
but we can prove is impossible for any computational device whatsoever.

~~~
JacksonGariety
My understanding was that he thinks we can potentially automate all human
work, just not with computers. A precise task isn't necessary: AI researchers
are simply mistaken about the nature of intelligence.

~~~
yters
Dreyfus doesn't know his comp sci. Everything automatable is computable. The
interesting question is whether human intelligence is automatable.

~~~
JacksonGariety
>everything automatable is computable

why do you suppose this?

~~~
yters
Everything physical is computable.

~~~
JacksonGariety
So you have to prove that all human work is physical when some of it doesn't
seem to be physical at all (creative work).

~~~
yters
Yes the non-physical work is not automatable.

------
shireboy
I'm not sure about "stuck", or that a huge international affair like ITER is a
good solution. We could have maybe AI with our maybe fusion for $70 billion in
40 years ;)

But watching my 1yo learn to toddle around and navigate does show just how
limited current AI is. With tons of training and battery, we can coax a
computer into barely doing what my 1yo does on a belly full of Cheerios and a
few hours of trial and error.

There's lots of great stuff and some terrifying stuff happening in AI and I
don't doubt more to come, but watching kids learn puts it in perspective for
me.

------
baalimago
We humans don't know how to learn. We don't know how learning works. We
simply work, work, work until we know whatever we set out to know; we don't
learn how we learned it, but are happy that we simply know it and leave it at
that.

Therefore teaching someone/something else how to learn will be almost
inherently impossible, because we don't understand it ourselves (yet?)

And if we do learn how to learn, why would we need an AI to do it for us?

~~~
icebraining
We barely know which regions of the brain recognize shapes, let alone exactly
how they do so, yet we've built machines that can do it.

~~~
kronos29296
Take any NP-hard problem in its simplest form. Whatever half-assed heuristic
you use, chances are the solution it finds is optimal (if the heuristic is
even half good). The more complex the problem, the more the heuristics fall
apart. For the smaller instances we can simply brute-force it. The same thing
happens here: compared to something like face recognition, recognizing simple
shapes is much simpler, so even the not-so-great techniques (comparatively)
work well enough. They fall apart when we use the same methods for the complex
stuff.

A great example is a greedy algorithm. It works for some problems but not for
others. Take a simple enough problem and you get an optimal solution; push the
algorithm to its limits and you don't even get a good one. You don't have to
understand how the best algorithms for a task work to come up with a greedy
algorithm.
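
The classic concrete example is coin change (a sketch; the coin systems are
the standard textbook ones):

    def greedy_change(amount, coins):
        """Always take the largest coin that still fits."""
        used = []
        for c in sorted(coins, reverse=True):
            while amount >= c:
                amount -= c
                used.append(c)
        return used

    # Simple enough problem: with US-style coins, greedy is optimal.
    print(greedy_change(30, [25, 10, 5, 1]))  # [25, 5]            (optimal)
    # Push it: remove the 5 and greedy is badly beaten by [10, 10, 10].
    print(greedy_change(30, [25, 10, 1]))     # [25, 1, 1, 1, 1, 1]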

------
jsemrau
We (humanity) have made huge progress in understanding images, in terms of
both content and the emotions of the people in them. ImageNet is truly a gift
to the world. However, that has brought us only a small but important step
forward. Clearly expectation has to catch up to reality. Still, all these
solutions are quickly becoming more accessible to the layman, bringing another
boost to operational efficiency for companies worldwide.

~~~
unityByFreedom
> Clearly expectation has to catch up to reality.

Woah, you feel expectations are behind reality? I feel there's a lot of news
lately predicting AGI.

~~~
psyc
I actually do feel that expectations are behind reality, at least amongst
those who are just barely too smart for their own good. I still see comments
daily on HN or Reddit that promote the narrative that there is no AGI, people
only work on ML, and all ML is a narrow party trick. And I think that is a
_terrible_ characterization of what, e.g., the computational neuroscientists
are doing. Peruse some of the research happening at MIT and Stanford right
now, and I don't see how anyone can cling to the "it's just ML" canned
response.

~~~
pdimitar
Maybe because, gods forbid, people judge by real-world results and not by the
words of a bunch of narrow specialists patting themselves on the back?

The author's points still stand. Robots do fall over trying to open doors and
they don't invent new ways to climb a chair. This is a fact. The terrible
characterization you speak of is well-founded in observable reality. That is a
fact as well.

~~~
psyc
If this is your attitude towards long-term academic research, there's little
hope I could expand your awareness of what the state of the art in AI actually
is. I'm reminded of a point Yudkowsky made 10 years ago:

[http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_...](http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_know/)

~~~
pdimitar
I am sure you misunderstand me, but I gladly take the blame for it. I am all
for people doing experiments just for the heck of it _and being paid for it_
-- we as a race need a lot more leisure and discovery time. We're being
robbed of leisure and discovery time more and more with each passing year; we
always owe somebody money, there's always something else that is urgent to do,
and in the end we never get to just slack for a year or two, especially after
a burnout -- something that was deemed very normal even only 50 years ago.
This is an awful period of human history and one that I am sure will be
remembered with a great deal of shame one day. But let me not digress too much...

I am 100% behind science, experimentation, and even silly / goofy discoveries
whose usefulness might come centuries later (or never; I am okay with that).
Please don't get me wrong. _We need much more of that as a race_.

I will also immediately agree that I am oblivious to what is happening in the
AI area. But can you blame my cynicism? Everybody, their dog, and its butler
are now claiming to do "AI innovation", and in the end 99% of them just swallow
investment dollars, figure out a lucrative exit, and some even _repeat_ that a
year or two later. Naturally, people get worn out and start making snarky
remarks when they hear the now-meaningless term "AI" -- I am one of them, and
I don't feel bad about it. I believe the sarcastic attitude is well justified.

Everybody keeps praising certain, _very specifically tuned_, NNs when they do
certain very specific tasks. Fine. I will grant you that I can't code the
algorithms needed to surpass human doctors in recognizing latent cancer or any
kind of early signs of a dangerous disease. This is true. But the current way
of doing things is like "input a heckton of data, go to lunch, expect magic
when you return". It definitely feels like it, even if I know that it's not
factually true.

NNs show bias. Seems like nobody cares; they're like "yeah, we know it's a
problem, we'll get to it", and yet there are NNs that very likely already deny
black families loans due to the inherent bias in the datasets they've been
fed. The concept of a truly explainable AI seems to be very new _when it
should have been there right from the start and shouldn't ever have been
missing_; what are you people even thinking?! A driverless car makes a strange
decision and what, "the NN worked perfectly"?! Bah.

To me, "AI" advocates are very content to deny very real issues that exist
_RIGHT NOW_ and that makes me cynical about that branch of science since you
guys always seem to try and sprint into the future while blindfolding yourself
about things that need attention here and now.

I admit I got off on a tangent. In any case, these are my collective thoughts
on the topic.

------
adamnemecek
We are using the wrong computational paradigm. We have to abandon bits and go
back to analog computing, in the form of analog photonic computing that gives
you fast calculus. This is painfully obvious in the case of neural networks,
which run faster on an analog computer and are also easier to program.

~~~
sgt101
I think you are out of date. Rectification is pretty good at removing a
lot of the vanishing-gradient issues that NNs used to face, and the
overwhelming power of modern digital computers (50k cores is common) makes
this all moot as far as I can see.
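
A minimal numpy sketch of what rectification buys you (illustrative
numbers only, not a benchmark): push a gradient back through 20 stacked
activations, and the sigmoid chain vanishes while the ReLU chain passes
the gradient through untouched for every active unit.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    depth = 20

    # Sigmoid chain: each layer multiplies the gradient by sigmoid'(z) <= 0.25.
    z = rng.normal(size=1000)
    grad = np.ones_like(z)
    for _ in range(depth):
        s = sigmoid(z)
        grad *= s * (1.0 - s)
        z = s
    print("sigmoid, mean |grad|:", np.abs(grad).mean())  # ~1e-13, vanished

    # ReLU chain: the local derivative is exactly 1 wherever the unit is
    # active, so the gradient survives undiminished.
    z = rng.normal(size=1000)
    grad = np.ones_like(z)
    for _ in range(depth):
        grad *= (z > 0).astype(float)
        z = np.maximum(z, 0.0)
    print("relu, mean |grad|:", np.abs(grad).mean())     # ~0.5, intact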

~~~
adamnemecek
Nope, they aren't even in the same category. E.g., notice that there is a
CPU size limit because of heat. Do you know what doesn't overheat? A
photonic computer. You could build a CPU the size of a house. Also, how
does the number of cores constitute "overwhelming power"?

~~~
sgt101
:o) Well, due to the wavelength of light, a photonic CPU has to be 50-100
times the size of a current-gen electronic one.

When I were a young 'un we had one core, it ran at 25 MHz, and about 130
of us shared it. Now I have 50,000 cores that run at 2 GHz and five people
share them. Things aren't quite directly comparable, but the speed-up is
at least 100,000x. I am overwhelmed by this: things that would have taken
1,000 days (approximately 3 years) can be achieved in ten or twenty
minutes. In reality, the use of these infrastructures has enabled (in
neural-net land) the development of techniques that improve performance by
several more orders of magnitude, so things that would have taken several
years are now done in a minute or so. I believe there is plenty more
headroom to be had.
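
A back-of-the-envelope check of that speed-up, using the same numbers as
above (my arithmetic, nothing rigorous):

    # 1,000 days versus ten-to-twenty minutes:
    days_in_minutes = 1000 * 24 * 60    # 1,440,000 minutes
    print(days_in_minutes / 15)         # ~96,000x, i.e. roughly 100,000x

    # Raw per-person clock cycles tell the same story, only more so:
    then = 25e6 / 130                   # one 25 MHz core shared by ~130 people
    now = 50_000 * 2e9 / 5              # 50k cores at 2 GHz shared by 5
    print(now / then)                   # ~1e8, so 100,000x is conservative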

~~~
adamnemecek
Where is that estimate coming from? I mean, yeah, there has been
progress, but why do you think photonic would be even better?

~~~
sgt101
1989 -> 2017.

Photonics are promising QC technologies - especially phonons - but we are
a long way off!

------
pinouchon
Reddit discussion:
[https://www.reddit.com/r/artificial/comments/6qcx6t/research...](https://www.reddit.com/r/artificial/comments/6qcx6t/research_labs_in_academia_or_big_tech_companies/)

------
4bpp
So a three-year-old finding an unanticipated way to slip out of her chair
is evidence that she is smart, but a neural net finding an unanticipated
common pattern in all the school-bus images in its training set is
evidence to the contrary?

------
ilaksh
The international resource mission is called open access to papers and
open-source AI software. See things like TensorFlow, OpenAI, etc.

------
desireco42
I didn't read this article, for the simple reason that the NY Times is
not the place to gain any insight about AI like this. They are an
article-churning machine that serves political propaganda and useful local
news and analysis. Anything science-based is not really their thing.

------
mike_hearn
The article is riddled with errors that undermine its own thesis.

It starts badly:

 _> Artificial Intelligence is colossally hyped these days, but the dirty
little secret is that it still has a long, long way to go_

This is not a secret, let alone a dirty one. Even 5 minutes' casual
research into the state of AI will reveal what it can do and what it
can't.

It says:

 _> Such systems can neither comprehend what is going on in complex visual
scenes (“Who is chasing whom and why?”) nor follow simple instructions (“Read
this story and summarize what it means”)._

In fact, comprehension of (very) simple stories is now more or less a
solved problem. I wrote about performance on the bAbI tests here:

[https://blog.plan99.net/the-science-of-westworld-
ec624585e47](https://blog.plan99.net/the-science-of-westworld-ec624585e47)
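
To show the scale of "very simple", here's a toy illustration of a
bAbI-style "single supporting fact" task. The sketch only conveys the task
format; the post above is about neural models, not a hand-coded baseline
like this one.

    # Track the last-seen location of each person; later facts win.
    story = [
        ("Mary", "moved to the", "bathroom"),
        ("John", "went to the", "hallway"),
        ("Mary", "travelled to the", "office"),
    ]
    last_location = {}
    for person, _verb, place in story:
        last_location[person] = place

    print("Where is Mary?", last_location["Mary"])  # -> office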

Summarisation of stories is also something with good recent results:

[https://research.googleblog.com/2016/08/text-
summarization-w...](https://research.googleblog.com/2016/08/text-
summarization-with-tensorflow.html)
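
To make the task concrete, here is a toy _extractive_ baseline of my own:
score each sentence by the average document frequency of its words and
keep the best one. The linked post describes a learned seq2seq model that
writes new headline text, which this is not.

    import re
    from collections import Counter

    doc = ("The bus broke down on Main Street. Passengers waited an hour. "
           "A replacement bus arrived and the route resumed.")

    sentences = [s.strip() for s in doc.split(".") if s.strip()]
    freq = Counter(re.findall(r"[a-z]+", doc.lower()))

    def score(sentence):
        # Average document-frequency of the sentence's words.
        words = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)

    # "Summary" = the single most representative sentence.
    print(max(sentences, key=score))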

Summarisation of arbitrary video is harder, but given that object
detection and path extraction already work well, it doesn't seem
implausible that we'll see good research results in video summarisation
within a few years. Extrapolating from what is happening to hypothesised
explanations of why is a lot harder, but it's not hard to imagine it
becoming possible given the direction research is going.

 _> My daughter had never seen anyone else disembark in quite this way; she
invented it on her own. Presumably, my daughter relied on an implicit theory
of how her body moves, along with an implicit theory of physics — how one
complex object travels through the aperture of another. I challenge any robot
to do the same._

Challenge accepted:

[https://www.youtube.com/watch?v=gbYiKMisbME](https://www.youtube.com/watch?v=gbYiKMisbME)

And for the imagination component:

[http://www.wired.co.uk/article/googles-deepmind-creates-
an-a...](http://www.wired.co.uk/article/googles-deepmind-creates-an-ai-with-
imagination)

 _> To get computers to think like humans, we need a new A.I. paradigm_

That's not clear at all, given recent research. It is an odd statement
from someone who has worked in AI. But then, as the author is not a
computer scientist, perhaps it's not that odd.

Modern neural networks are so similar to how humans think that psychological
techniques are being used to understand and "debug" them:

[https://deepmind.com/blog/cognitive-
psychology/](https://deepmind.com/blog/cognitive-psychology/)

I'm not sure how "think like humans" can be easily defined, but using
strategies developed to understand human thinking on robots seems like a good
starting point. Making mistakes similar to what you'd expect humans to make is
also a good sign.

 _> But it is no use when it comes to top-down knowledge. If my daughter sees
her reflection in a bowl of water, she knows the image is illusory; she knows
she is not actually in the bowl_

She does now. But it takes time for babies to learn how to interpret mirrors.

[http://www.thoughtfulparent.com/2009/10/child-psychology-
cla...](http://www.thoughtfulparent.com/2009/10/child-psychology-classics-
mirror-test.html)

Most animals never learn this, though a few very intelligent species do.

I don't see any obvious theoretical reason why image recognition engines
shouldn't be able to understand mirrors, given sufficient research.

 _> Corporate labs like those of Google and Facebook have the resources to
tackle big questions, but in a world of quarterly reports and bottom lines,
they tend to concentrate on narrow problems like optimizing advertisement
placement or automatically screening videos for offensive content._

Another bizarre statement given the author's background. Google and
Facebook have been investing massively in very long-term AI research,
building many things of no direct commercial value along the way, like AIs
that play games. I don't see Google's public AI research focusing on the
cited problems, although it would not surprise me if there are parallel
efforts to apply research breakthroughs in these areas.

 _> An international A.I. mission focused on teaching machines to read could
genuinely change the world for the better — the more so if it made A.I. a
public good, rather than the property of a privileged few._

And here we have it, ladies and gentlemen... the reason the article is so
filled with factually false and logically dubious statements. It is an
advocacy piece for new social policy: a vast new government research
investment in academia, in which, presumably, Mr Marcus would like to be
employed (rather than at Uber).

Besides, even this last paragraph is disingenuous. There does not seem to
be any risk of AI becoming "the property of the few". The large corporate
research labs are doing fantastically well at publishing research papers
and making the results of their work publicly available and useful;
indeed, given the relative quality of corporate vs academic open-source
releases, I'd say they're doing better than academia is. It's hard to
imagine universities producing something as robust and well documented as
TensorFlow.

------
m0dbit
- Pursuing low-hanging applied-solutions fruit under the guise of a grand
mission statement of AGI (which is now all the rage) results in one
becoming stuck. You can maybe fool investors and laypeople with such
madness, but you can't fool yourself or the matter at hand.

- Not staffing or structuring as though you understand or respect what
General Intelligence is results in a narrow and specialized mindset among
your employee base, which produces narrow and specialized solutions.

It's called a local minimum. It's where you land when you don't focus on the
bigger picture.

> How to move forward? There are several techniques for that. I don't see
> them being used, which either means they don't understand they're stuck,
> or they know they're stuck and don't care. Why would the latter mindset
> be willfully chosen?

Current models and methods for training AI require huge data sets and
computational power to be effective. Who currently maintains such
resources? Who's fueling and molding the perception and direction of
current efforts? See the conflict of interest?

Furthermore, given how convoluted the approaches and math are, the field
lends itself to specialized individuals... PhDs. A match made in heaven
that allows the market to be narrowed and segmented to a specialized group
of people. The problem with this is: it results in narrow and weak AI.

> Not knowing you're stuck: enough people have made sound arguments. You
> either grasp them and change or, given how comfortable you are, stay the
> course. It could be, even with a PhD and clout, that you're just not
> intelligent enough to grasp the sound arguments... But if this is the
> case, do you really think you're going to solve general intelligence?

Those that (truly) seek to move forward have been moving forward with (AGI).

The following will just get left behind with the new wave:

- Those attempting to preserve old business models with a fresh top-layer
coating of the new.

- Specialists who refuse to respect anything beyond their group's chosen
methods, and who thus disrespect the scope of AGI.

- Well-funded groups that exclusively hire from a narrow scope and a
narrow, specialized focus.

- VC groups that only invest in low-hanging-fruit applied-engineering
ventures.

- VC groups that don't give funding to those in (true) pursuit of it.

It's the same as it's always been. You can maybe fool yourself and others.
However, you can't fool the laws of nature and the universe.

Enough people have spoken. Enough hints have been given. Enough people
have taken and borrowed concepts from the small fry and called them their
own, only to find themselves lost in what those concepts meant.

Enough time has elapsed. If you're not acting and steering your resources in
accordance with the new, you just get left behind grasping the old.

Same as it's always been... (True) disruption.

It's on the horizon. It's coming. So, keep your eyes peeled.

------
Kenji
_Some of the best image-recognition systems, for example, can successfully
distinguish dog breeds, yet remain capable of major blunders, like mistaking a
simple pattern of yellow and black stripes for a school bus._

That's exactly the problem: robots lack sanity checks because they lack
real understanding. If you cannot recognize an object that is far away,
you are instantly aware of your inability to identify it. A computer just
runs its code over it and outputs complete garbage, and this nonsense then
enters the system and does who knows what damage.

Plausibility checks are incredibly complex! If you are in central Europe,
not in a zoo, and you see a leopard-fur pattern, it's probably not the
living animal! And so on.

~~~
GuB-42
Image recognition systems just recognize images. They essentially do the
first pass of what your brain can do.

You too can mistake yellow and black stripes for a school bus, or see an
actual leopard in Poland. It's when you put what you've seen in context
that you rule out the idea. And if you really want to see something in a
picture, you will, especially with faces.

It is no different with computers. Train your algorithm to see school
buses exclusively and it will see school buses everywhere. Conversely, you
can also teach it context, for example by taking account of the webpage
hosting the image.

Computer algorithms usually have a confidence rating too. They can tell
you "definitely a school bus (99%)" or "looks vaguely like a school bus
(30%), but it may also be a wasp (10%)", so they can be aware of their own
limits. In fact, confidence scores are often a key part of machine
learning.
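
For example, a classifier's softmax output is a distribution, so the
surrounding code can refuse to act on a low-confidence guess instead of
letting garbage flow downstream. A minimal sketch with made-up scores:

    import numpy as np

    def softmax(logits):
        e = np.exp(logits - logits.max())  # shift for numerical stability
        return e / e.sum()

    labels = ["school bus", "wasp", "yellow-striped pattern"]
    logits = np.array([1.2, 0.4, 0.9])     # made-up model outputs
    probs = softmax(logits)

    top = int(probs.argmax())
    if probs[top] < 0.9:                   # plausibility / sanity threshold
        print("Not sure:", dict(zip(labels, probs.round(2))))
    else:
        print("Confident:", labels[top])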

------
lngnmn
Pattern recognition and feedback loops aren't enough. Artificial
Intelligence requires Artificial Emotions.

~~~
phreeza
What makes you think emotions are anything other than pattern matching
and feedback loops?

~~~
lngnmn
Emotions are "learned heuristics".

------
cubano
My first impulse is to think "oh another socialist rant from the NYT"...

And so is my second.

