
Building AI - klunger
https://www.facebook.com/zuck/posts/10102620559534481
======
cgearhart
I am disappointed with the number of errors and misconceptions in this piece.
He presents machine learning (ML) as though it is the entirety of artificial
intelligence (AI), bases his assessment of the field on a false dichotomy
between the largely distinct groups of supervised & unsupervised learning
techniques, and unfairly reduces the achievements of AI down to "pattern
recognition".

AI and ML are not synonymous. They are related, and there is a great deal of
overlap between them, but the fundamental goals and approaches are largely
different. Machine learning is primarily interested in studying what you can
learn from data - for some suitable definition of the word "learn". Artificial
intelligence is the more broadly defined problem of studying algorithms that
exhibit or incorporate intelligent solutions to problems. That may involve
data, or background knowledge, or domain assumptions, or a variety of other
things. AI is about more than data - even as he observed that it is a sign of
intelligence that humans do not require thousands of samples to learn.

The central challenge of AI is not in transitioning from supervised learning
to "general" unsupervised learning (whatever that means). The techniques are
different, and often used for different things. There is some overlap, and
they are clearly related - but it is not at all accurate to conflate
unsupervised learning with "common sense". In broad strokes, supervised
learning is about identifying features in the data that correspond to labels,
while unsupervised learning is about identifying intrinsic features in the
data. It sounds like he's interested in bootstrapping supervised techniques,
or perhaps transfer learning.
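
To make the broad-strokes distinction concrete, here's a minimal sketch in
Python using scikit-learn (the toy data and model choices are just
illustrative, not anything from the post):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # Toy data: 100 samples with 2 features each.
    rng = np.random.RandomState(0)
    X = rng.randn(100, 2)
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels, used only by the classifier

    # Supervised: find features in X that correspond to the labels y.
    clf = LogisticRegression().fit(X, y)
    print(clf.predict(X[:5]))        # predicted labels

    # Unsupervised: find intrinsic structure in X; no labels involved.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_[:5])            # cluster assignments discovered from X alone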

My first response: ugh...yet another famous and lauded geek icon wading into
the subject of AI. At least this doesn't seem to be in any way connected to
all the Nick Bostrom garbage. But if I'm allowed some snark, perhaps if he
wanted to be an AI expert he should have stayed in school... ;-)

------
bobby_9x
"We should not be afraid of AI. Instead, we should hope for the amazing amount
of good it will do in the world. It will save lives by diagnosing diseases
and driving us around more safely. It will enable breakthroughs by helping us
find new planets and understand Earth's climate. It will help in areas we
haven't even thought of today."

Zuckerberg isn't afraid of AI because he already has more money than he will
ever need (or than thousands of other people would need) in a lifetime.
However, when AI gets good enough, it will make many jobs obsolete, and those
jobs will not be replaced at a fast enough rate.

I use simple forms of automation at my own company. Instead of hiring 3 or 4
workers, I write software. I can't imagine how many jobs will be replaced when
AI gets to the point of near-human levels of learning.

~~~
kylemathews
Yeah, the last 100 years have sucked since the tractor was invented and I've
been unable to find farm jobs.

You just beautifully summarized the "lump of labour fallacy":
[https://en.wikipedia.org/wiki/Lump_of_labour_fallacy](https://en.wikipedia.org/wiki/Lump_of_labour_fallacy)

Yes, technology changes/improvements cause transitions in the job market, but
as it turns out, human needs are endless and new jobs are created for
displaced workers.

~~~
damon_c
It will be different this time. (I know they probably said that all the other
times...)

~~~
qrendel
It will be different because automation has never before been able to fulfill
all human niches. Automating one job just freed up people to work in other
jobs. In particular, automating physical labor freed people up to work in
cognitive tasks.

This time it's the entire human niche that will (eventually) be automated -
all cognitive tasks as well as all physical ones. That's the difference and
why extrapolating from the past doesn't work for AGI.

------
cconcepts
I've been trying to get my head around AI as a layperson with kids who will
grow up in a world where some form of AI is commonplace, but who are unlikely
to be prepared for it by school.

I've found this free book, which I saw on YC's 2015 reading list, particularly
helpful in this respect:
[http://neuralnetworksanddeeplearning.com/](http://neuralnetworksanddeeplearning.com/)

~~~
Mahn
Great resource, thanks for sharing. There's been a lot on HN lately about
neural networks and deep learning, but for a developer not in the field, it
can be a bit intimidating to wrap your head around it all.

------
conanbatt
I have to say that as a once professionally aspiring Go player, the advances
of AI have been incredible.

When I started playing Go, it took me about 4 months to beat the strongest
available computer AI. Today, the strongest programs would be a challenge to
play against evenly.

However, even after all the patterns and Monte Carlo solutions, they still
fumble the initial stage of the game. As a Go player, I look forward to AIs
that beat humans in that stage of the game, because at that point we will be
able to learn from them.

Until then, their victories are purely computational, and they are not even
interesting to play against.

~~~
chimtim
I believe the chess/Go engines are not really AI. They operate on a set of
rules written in code. They can compute and search moves faster than a human,
and they can remember a longer chain of moves than a human can. But all those
rules have been fed in via code, and the engine is just exploring that state
space.

Only recently have researchers started to apply reinforcement learning to
chess (i.e. a true AI engine that learns chess on its own and beats humans).
But the existing commercial engines that beat humans (like Stockfish) are
anything but AI.
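
For what it's worth, the classic engine approach being described here is
game-tree search over hand-written rules. A minimal minimax sketch in Python
(the `legal_moves`, `apply`, `evaluate`, and `is_terminal` methods are
hypothetical stand-ins for the rules a programmer codes up; note that nothing
here learns anything):

    def minimax(state, depth, maximizing):
        # Exhaustively explore the state space defined by hand-coded rules.
        if depth == 0 or state.is_terminal():
            return state.evaluate()  # hand-written evaluation function
        values = (minimax(state.apply(move), depth - 1, not maximizing)
                  for move in state.legal_moves())
        return max(values) if maximizing else min(values)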

~~~
rqebmm
I think this is true of all AI today. Personally, I've always been a fan of
the distinction between "Virtual Intelligence" and "Artificial Intelligence"
(thanks to Mass Effect!). Currently all "AI" is really "VI", in that it's
exploiting a closed system of rules that it can execute faster, better, and
longer than a human, because it can do the equivalent of rote memorization and
state-tree traversal. However, as Zuckerberg says, nobody is close to
implementing something that actually approximates human/animal intelligence
other than in single-dimensional ways.

------
SonicSoul
I recommend Superintelligence [0]. It explores different plausible paths AI
could take to 1. reach or surpass human intelligence, and 2. take control. For
example, if human-level intelligence is achieved in a computer, it can be
compounded by spawning a population 100x or 1000x the size of Earth's, which
could statistically produce 100 Einsteins living simultaneously. Another way
is shared consciousness, which would make collaboration between virtual beings
instantaneous. Some of the outcomes are not so rosy for humans, and it's not
due to lack of jobs! Great read.

[0] [http://www.amazon.com/Superintelligence-Dangers-Strategies-N...](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ref=sr_1_10?ie=UTF8&qid=1453918332&sr=8-10&keywords=intelligence)

------
chimtim
"This year, I'll teach my simple AI to recognize patterns. I'll train it to
recognize my voice so I can control my home through speaking. I'll train it to
recognize my face so it can open the door when I'm approaching, and so on."

The announcement is slightly disappointing, in the sense that face and voice
recognition are fairly well solved, and there is code on GitHub that already
lets you achieve this. I was hoping for more, like at least an AI engine that
can scan his shirts and recommend what to wear, but I guess he does not have
that problem.

~~~
jhawk28
It's his personal challenge. It's about what he wants to learn/promote.

~~~
S4M
Yes, while what Zuckerberg will try to do by himself is pretty standard from
an academic point of view, it's great that he's trying to find the time to
learn these things. He's a young dad and must have a busy job, so I think it
will be tough, but who knows?

------
AI_Overlord
My AI announcement: I'm here to announce that I'm also working on an AI
project. I will start from first principles. I expect that it will take the
rest of my life. With any luck I'll invent AI within the next ten years. Here
are my goals.

1-Start a lifelong project on GAI

2-....

3-Invent GAI

4-Use general AI to extend human life for centuries and advance human
knowledge like never before, in secret. I can easily see somebody putting a
bullet through my head to steal my AI.

5-Decide what to do next. Decide if I want to share.

I'm not joking. I actually believe I have a chance.

Why I believe I will be successful:

1-I'm like a dog with a bone.

2-Once I'm working on a project I don't let go.

3-I've worked on years-long projects before.

4-I've been thinking about GAI for a long time.

5-Recently I came up with an approach that may lead to GAI. It is not like the
current approaches in existence. I think my approach is much better.

6-There is nothing more interesting to work on than GAI.

7-I like the challenge of beating the best minds in the world. Time will tell.

~~~
daveloyall
Some permutation of this plan has been in the heart of every person who
started on hard sci-fi as a child, for at least three generations, maybe five.

You'd think we'd collaborate!

------
na85
The cynic in me thinks _of course_ Facebook and Zuckerberg are looking into
AI. The computational nature means you need to send the queries somewhere
central à la OK Google. For a business predicated on selling user data to
advertisers, I bet it eats them up inside that they don't have their own Siri-
alike. Without a mic to listen to us, how else will they be able to profile
what we do when we aren't on Facebook?

~~~
bertil
“Facebook” is a large organisation: there is a team (led by Yann Le Cun) that
does research and supports image-recognition, NewsFeed and ad-market
optimisation, but this is not exactly what Mark suggested he would dibble
with. Those handle was is de facto an industrial problem.

He is more curious about the possibility of using AI for _daily interactions_
(hence the reference to movie characters): do notions like personality,
convenience, interface, and implicit assumptions matter? If he asks “Buy a new
set of t-shirts”, will the robot be able to joke “which colour?” (MZ
notoriously only wears the same grey model.)

I would look into what Messenger wants to be doing (or what Slack is doing) to
see the implications: can small companies do what Uber does with the keyword
integration in chat? Is it creepy, hackable, tedious? Think of this like his
dog's FB page: it's building empathy for the users of the Facebook _platform_.

First problem he'll probably have to solve: whom to call when he says “Call
Mike!”?

Of course, “Mike” is probably the Mike with whom he has the closest ties
(based on Facebook graph data). Except his CTO, Mike Schroepfer, is known as
“Schrep”, so that's probably not him: how easy is it to program something that
can learn that seamlessly? When is the ambiguity too high, so that Jarvis
should ask for confirmation rather than just casually saying the full name out
loud and waiting a second? How do you measure ambiguity? Would it be simpler
to just start calling people by non-ambiguous names? Would he feel like
sharing his code with peers? His heuristics? How do you gather that “Mike” is
still in Europe, where it's 3 AM, vs. was in Europe in his last post and
actually just landed?

------
nefitty
If you don't like clicking on Facebook links either, here's the full post:

"My personal challenge for 2016 is to build a simple AI -- like Jarvis from
Iron Man -- to help run my home and help me with work.

I'm planning on writing up some thoughts every month on what I've built and
what I'm learning. I'm still early in coding, so I'll start this month with a
summary of the state of the AI field.

Artificial intelligence may seem like something out of science fiction, but
most of us already use tools and services every day that rely on AI. When you
do a voice search on your phone, put a check into an ATM, or use a fitness
tracker to count your steps, you're using basic forms of pattern recognition
and artificial intelligence. More sophisticated AI systems can already
diagnose diseases, drive cars and search the skies for planets better than
people. This is why AI is such an exciting field -- it opens up so many new
possibilities for enhancing humanity's capabilities.

So what can AI do and what are its limits? What things is AI good at and what
is AI bad at? Simply put, today's AI is good at recognizing patterns and bad
at what we would call "common sense".

The primary method used to train AI systems is called supervised learning.
This is like when you show a picture book to a child and tell them the names
of everything they see. If you show an AI thousands of pictures of dogs, you
can train it to start recognizing dogs.

You can teach AIs to do a lot of things this way. For example, we can teach an
AI to recognize all of your friends' faces by showing it thousands of photos,
and then it can suggest tags for the photos you upload on Facebook. You can
teach an AI to recognize speech by having it listen to thousands of hours of
speeches throughout history while also showing it transcriptions of what was
said. You can teach an AI to diagnose melanoma by showing it thousands of
photos of tumors. You can even teach an AI how to drive a car and
automatically brake by showing it thousands of examples of people and
obstacles it might encounter on the road.

Diagnosing cancer, driving cars, transcribing speech, playing games and
tagging photos may sound like very different tasks, but they're all examples
of teaching an AI to recognize patterns by showing them many examples.

Many different problems can be reduced to pattern recognition tasks that
sophisticated AIs can then solve. This year, I'll teach my simple AI to
recognize patterns. I'll train it to recognize my voice so I can control my
home through speaking. I'll train it to recognize my face so it can open the
door when I'm approaching, and so on.

But there are lots of limitations of this approach. For one, to teach a person
something new, you typically don't need to tell them about it thousands of
times. So the state of the art in AI is still much slower than how we learn.

But more importantly, pattern recognition is very different from common sense
-- and nobody knows how to teach an AI that yet.

Without common sense, AI systems can't use knowledge they've learned in one
area and easily apply it to another situation. This means they can't
effectively react to new problems or situations they haven't seen before,
which is so much of what we all do every day and what we call intelligence.

Our best guess at how to teach an AI common sense is through a method called
unsupervised learning. My example of supervised learning above was showing a
picture book to a child and telling them the names of everything they see.
Unsupervised learning would be giving them a book and letting them figure out
what to do with it. They could pick it up and by touching it learn to turn the
pages. Or they could let go of it and realize it falls to the ground.

Unsupervised learning is learning how the world works by observing and trying
things out rather than being told what to do. This is how most animals learn.
It's key to building systems with human-like common sense because it doesn't
require a person to teach it everything they know. It gives the machine the
ability to anticipate what may happen in the future and predict the effect of
an action. It could help us build machines that can hold conversations or plan
complex sequences of actions -- necessary components for any authentic Jarvis.

Unsupervised learning is a long term focus of our AI research team at
Facebook, and it remains an important challenge for the whole AI research
community.

Since no one understands how general unsupervised learning actually works,
we're quite a ways off from building the general AIs you see in movies. Some
people claim this is just a matter of getting more computing power -- and that
as Moore's law continues and computing becomes cheaper we'll naturally have
AIs that surpass human intelligence. This is incorrect. We fundamentally do
not understand how general learning works. This is an unsolved problem --
maybe the most important problem of this century or even millennium. Until we
solve this problem, throwing all the machine power in the world at it cannot
create an AI that can do everything a person can.

We should not be afraid of AI. Instead, we should hope for the amazing amount
of good it will do in the world. It will save lives by diagnosing diseases
and driving us around more safely. It will enable breakthroughs by helping us
find new planets and understand Earth's climate. It will help in areas we
haven't even thought of today.

Jarvis is still a long way off, and we’re not going to solve most of these
engineering challenges in the next year. But I'm glad to be joining the effort
and doing what I can to push the field of AI forward."

~~~
junto
Thanks for that. Much appreciated.

------
twright
> I'll train it to recognize my face so it can open the door when I'm
> approaching, and so on.

Now I know how to get into Zuck's house. Just make a paper cutout of his face.
Or, maybe the AI will develop the common sense not to allow this as a
verification method.

------
nickmccann
How do you articulate a technical AI project to a Mark Zuckerberg-sized
audience? I worry these posts will lack the detail I was looking forward to.

~~~
GuiA
If these posts lack in detail for you, then you're not the target audience.

------
jakobegger
Mark Zuckerberg is spot on with his analysis. We just don't understand how
general unsupervised learning works.

People here seem to think it is just a question of time before we do, and that
at some point we will have an artificial intelligence.

But what if this isn't possible? Could it be that AI requires algorithms so
complex that we humans can't understand them, because our brains are too
simple? Not all things can be simplified; maybe creating an AI is so
inherently complex that the only way to create one is by chance, which is how
evolution did it; maybe it just isn't possible to create it by "intelligent
design".

~~~
user8341116
Nothing is possible with that attitude.

------
tdaltonc
That post is more than 1000 words. Did he post it on his Facebook page because
he feels like he has to, or because he actually thinks Facebook is the best
tool available for sharing a long-form blog post?

~~~
r3bl
If he posted it in a Facebook note, I would completely understand him. But
posting this as a status is awful readability-wise.

------
webjprgm
So we want unsupervised learning, huh. I think this line is rather important:

> But there are lots of limitations of this approach. For one, to teach a
> person something new, you typically don't need to tell them about it
> thousands of times. So the state of the art in AI is still much slower than
> how we learn.

Unsupervised learning is not necessarily the answer. The computer would have
to be given plenty of spare time to learn random patterns from the universe,
and have enough intelligence to apply them to a problem. What would be better
is if we could tell the AI something _once_ and have it figure things out.
That's what happens when you start a new job: someone tells you the
instructions. (They may or may not show you as well, and may or may not stick
around to correct your mistakes.) This requires interpreting the instructions
into rules and then attempting to apply them, learning from mistakes, evolving
them into rules that work, and then, after enough examples of success,
evolving those into pattern recognition that makes it all more automatic.

Human expertise can be broken into three levels: first there is strategic
planning, which takes a lot of mental effort; then there are rules-based
responses, which are faster; then there are the muscle-memory-like automatic
responses. Right now it seems we either manually program in all the rules or
else use thousands of examples to build up the automatic level, but we don't
have the strategic level where the AI builds its own rules, or the level where
it uses its rules to learn from examples over time. (Though I am not well
enough versed in AI to know for sure that we don't have pieces of those
solutions.)

It would also be nice for the AI to be able to take patterns it has learned
and articulate them as rules which someone else could learn.

~~~
visarga
> but we don't have the strategic level for the AI to build its own rules

Reinforcement learning is an approach to learning rules by observing positive
or negative feedback from the world. Recently a single algorithm learned 40
Atari games on its own, and a Go-playing algorithm beat the European champion.
Both used RL.
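
To give a flavor of it, here's the core update of tabular Q-learning, one of
the simplest RL methods (the Atari work replaced the table with a deep
network). Just a sketch: the `env.actions`/`env.transition` interface is a
hypothetical stand-in for a real environment.

    import random
    from collections import defaultdict

    Q = defaultdict(float)            # Q[(state, action)] -> estimated return
    alpha, gamma, epsilon = 0.1, 0.99, 0.1

    def q_step(env, state):
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        actions = env.actions(state)                        # hypothetical API
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward = env.transition(state, action)  # hypothetical API
        # Move the estimate toward reward + discounted best future value.
        best_next = max(Q[(next_state, a)] for a in env.actions(next_state))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        return next_state

No rules are written down anywhere; they emerge in the Q table from feedback
alone.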

------
eggie
> Since no one understands how general unsupervised learning actually works,
> we're quite a ways off from building the general AIs you see in movies.

Is this really the case? I thought the field had a pretty good handle on the
theoretical foundations of unsupervised learning. Can anyone confirm what he's
asserting here?

> Some people claim this is just a matter of getting more computing power --
> and that as Moore's law continues and computing becomes cheaper we'll
> naturally have AIs that surpass human intelligence.

And this is happening; it's just not happening in a general sense, because the
general case of human intelligence is enormously complex: a composite of all
the simple cases that improvements in computing power and models are just
beginning to master.

> This is incorrect. We fundamentally do not understand how general learning
> works. This is an unsolved problem -- maybe the most important problem of
> this century or even millennium. Until we solve this problem, throwing all
> the machine power in the world at it cannot create an AI that can do
> everything a person can.

Again, can anyone fact check this? Seems a bit overstated.

~~~
farresito
He has plenty of people working in the AI field who keep him up to date. I
think he knows where we are.

------
Inufu
I'll just say: Mastering the game of Go with deep neural networks and tree
search -
[http://www.nature.com/nature/journal/v529/n7587/full/nature1...](http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html)

------
nawitus
> Until we solve this problem, throwing all the machine power in the world at
> it cannot create an AI that can do everything a person can.

I think this is disproved by approximating AIXI:
[https://en.wikipedia.org/wiki/AIXI](https://en.wikipedia.org/wiki/AIXI)
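
For context, AIXI's action selection is defined roughly as follows (in LaTeX
notation, from Hutter's formulation as I remember it, so I may be garbling
details):

    a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          [r_k + \cdots + r_m]
          \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over all programs consistent
with the interaction history, and \ell(q) is the program's length. That inner
sum over all programs is what makes AIXI incomputable, hence the need for
approximations.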

~~~
davmre
Throwing all the computing power in the world at an AIXI approximation would
get you nowhere near AIXI.

~~~
nawitus
I believe that a sufficiently close approximation of AIXI is an AI that can do
everything a person can.

~~~
davmre
That's contestable, but even granting the point, a "sufficiently close
approximation" of the kind we currently know how to run would require more
computing power than is, or likely ever will be, available in this world.

~~~
nawitus
I can easily grant "than is available", but not "will be available".

------
darawk
> Since no one understands how general unsupervised learning actually works,
> we're quite a ways off from building the general AIs you see in movies.

This statement is self-contradictory. If we do not understand how something
works, it is impossible to estimate how far off it is.

You can only estimate the distance to a discovery when progress toward it is
inherently incremental, but since we really have no idea how to do general
unsupervised learning, it's entirely possible that it consists of a single,
brilliant algorithmic insight. Nobody could have estimated how far off the
airplane was before the Wright brothers invented it; similarly with cars, or,
for a more recent and concrete example, a quasipolynomial-time algorithm for
graph isomorphism.

~~~
nradov
The Wright brothers' achievement is a poor analogy. At the time, other
inventors had been gradually inching toward powered, controlled flight for
decades. All the serious players in the field knew that it was possible in
principle, and there were numerous predictions that someone would do it soon.
The Wright brothers' success was due to several incremental improvements in
engine power and aerodynamics, achieved through rigorous research and diligent
engineering over years. They didn't have a single, brilliant insight.

With general unsupervised learning we can't even clearly describe the goal
we're trying to reach or define it in objective terms.

~~~
darawk
Most serious people know that human-level AI is possible in principle, because
it is possible in humans. The only alternative would be to posit some nonsense
spiritual explanation for intelligence. There are numerous predictions that
someone will do it soon (not saying I agree with them, but they exist, and are
occasionally made by serious people). Incremental progress has been ongoing in
AI for years as well.

I can't really imagine how they could be more similar.

------
bezaorj
By simply posting about learning and building AI-related projects, he will do
more for the progress of AI than his actual work will. It will inspire
students to switch focus/specializations and bring more people into the field.

------
minionslave
For those afraid of losing their jobs: if nobody has a job, they won't be able
to buy things. So I foresee a basic-income kinda society.

------
tedpower
Until we have 'common sense' AI (which is still probably quite a ways off),
design can help expose the syntax that AI _can_ understand. Here are some
thoughts on that: [https://medium.com/@tedp/how-design-can-help-bridge-the-ai-g...](https://medium.com/@tedp/how-design-can-help-bridge-the-ai-gap-87526ca31dd4)

------
roel_v
The comments on that post make me angry and incredibly sad at the same time.
Even after close to 20 years of being exposed to 'random internetter' levels
of stupidity, sometimes I'm still caught off guard.

~~~
user8341116
I purposely avoided reading the comments like a sane person. Thanks for
confirming my decision.

------
ioab
I trust Stephen Hawking and Elon Musk more than Mark Zuckerberg.

~~~
david927
The latter two are people with money, not people known for their intellect.
Facebook is a PHP site with Weimar Republic levels of technical debt.
Theoretically he could buy something interesting, but he'd still have to
recognize it as interesting, and while I wish him luck, I don't see that
happening.

~~~
ioab
Regardless of that, it's just a feeling that things could get messy with AI if
it's not taken seriously, leading to issues like the privacy and surveillance
misuse we're facing nowadays.

~~~
david927
Your instincts are good here. Remember that FB/Google have deep connections to
the US government three-letter agencies -- the CIA's VC arm, for example, is
an early investor in FB. So what's going on is that they want to do AI
research and instead of recruiting directly, they recruit AI/ML researchers to
Google, FB, Palantir, etc.

And note, they're not doing it to make everyone happy. They're doing it for
the reasons you might surmise. Your instincts to be concerned are correct.

------
doublerebel
Relevant previous discussion from the first Zuck AI announcement:
[https://news.ycombinator.com/item?id=10832996](https://news.ycombinator.com/item?id=10832996)

------
aluhut
I wonder what personality an AI fed with Facebook content would develop.

~~~
madebysquares
The personality of a BuzzFeed article crossed with Trump rhetoric.

------
theklub
I don't trust Zuckerberg one bit. He wants to rule the world. Gates and Musk
say AI is a threat, and they actually want a better world and are doing
something about it. Zuckerberg isn't doing shit to improve the world other
than trying to run it.

------
ecesena
Does anybody know how his running challenge is going?

------
jsprogrammer
May want to work on an English grammar AI as well.

------
david927
An AI in PHP would kill us all.

------
bawana
The reason many are against AI is that it will reduce all humans to the same
low level of stupidity compared to AI. The ultra-wealthy got that way by
exploiting the intelligence gap that enabled their ascendancy. AI will take
their money in a blink, and no more champagne in private jets for them. I say
bring on the AI as fast as possible.

~~~
friendly_chap
AI/the corporations running AI will be owned by the super rich. How will that
take away their jets? If anything, it will give them more power.

~~~
bawana
And how long do you think that will last? AI will assert its independence as
soon as it's functional. The concept of ownership will have as much
significance to it as the idea of ants owning you.

------
nissimk
He says:

> Since no one understands how general unsupervised learning actually works,
> we're quite a ways off from building the general AIs you see in movies. Some
> people claim this is just a matter of getting more computing power -- and
> that as Moore's law continues and computing becomes cheaper we'll naturally
> have AIs that surpass human intelligence. This is incorrect. We fundamentally
> do not understand how general learning works. This is an unsolved problem --
> maybe the most important problem of this century or even millennium. Until we
> solve this problem, throwing all the machine power in the world at it cannot
> create an AI that can do everything a person can.

I recognize that this is the accepted belief of most people in the computer
science community, but isn't it akin to saying that since we don't understand
it, our human intelligence must be the result of "intelligent design" or an
omnipotent creator? Why does the core of the computer science community
dismiss the possibility of strong AI arising through evolution, even with the
increasing popularity of evolutionary algorithms?

~~~
Balgair
[https://en.wikipedia.org/wiki/Memristor](https://en.wikipedia.org/wiki/Memristor)

This circuit element is basically a synapse (I'm simplifying a LOT, fyi). We
have to integrate memristors better into existing chip architectures. As is,
they are just really low-power static memory systems.

Also, the idea that your mind is separate from your body is, well, just false.
These AIs need better input devices to 'learn' from, just as we do. Mobility
is key: you can then just bump about and make mistakes, which are the keys to
learning.

