
Geoff Hinton Wants Computers to Think More Like Brains - Osiris30
https://www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/
======
gambler
_> With self-driving cars, I think people kind of accept that now. That even
if you don’t quite know how a self-driving car does it all, if it has a lot
fewer accidents than a person-driven car then it’s a good thing._

Unless there is a condition in the environment that causes a systemic failure
involving a large number of self-driving cars at the same time. This happens
all the time with other types of software. People shouldn't assume that
machine-learned models are magically different and ignore the common-sense
rules we apply to safety-critical applications. In your daily job, would you
be willing to bet your entire career on a black-box piece of code that seems
reliable because it worked most of the time in the past?

Also, recall the flash crash of the stock market. Imagine something like
that, but with lots of cars on a highway. If you don't understand "how a
self-driving car does it", you are unlikely to install appropriate safeguards.

PS: Please don't insult your intelligence by replying that we don't understand
how people drive either, so any opaque self-driving software is therefore okay.

~~~
chongli
_systemic failure involving large number of self-driving cars at the same
time_

This happens to human drivers every year. Snow, freezing rain, fog: these
adverse weather conditions can lead to large numbers of collisions at the same
time. Expecting self-driving cars to never have a problem during rare
environmental corner cases is to expect them to be vastly superior to human
drivers all the time. If you're waiting for that day before you consent to
self-driving cars sharing the road with you, you'll be waiting a long time.

~~~
gambler
_> these adverse weather conditions can lead to large numbers of collisions at
the same time._

Cars sliding out of control on a snowy road is not a _systemic_ failure.

If you have an entire fleet of self-driving cars using the same (or similar)
technology, the exact same condition could cause all of them to misbehave in
the exact same way. This could happen simultaneously with a large number of
cars. Or it could be something that propagates in a cascading manner. (A car
brakes, turns, and spins out of control. The next car detects that, brakes,
turns, and spins out of control, and so on.)
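
As a toy sketch of that failure mode (purely illustrative; every number below
is invented), consider a platoon of cars all running the same controller with
the same latent defect:

    # Hypothetical sketch: identical software means a single environmental
    # trigger raises the failure odds fleet-wide, and each failure also
    # stresses the car behind it. Not any real system's logic.
    import random

    N_CARS = 10
    GLARE = True  # one environmental trigger shared by every car

    def controller_ok(glare: bool) -> bool:
        # The same defect in every car: under glare, the controller
        # misreads the road 30% of the time (an invented number).
        return not (glare and random.random() < 0.3)

    def run_platoon() -> list:
        states, upstream_failed = [], False
        for _ in range(N_CARS):
            ok = controller_ok(GLARE)
            # Cascade: a car spinning out ahead forces an emergency
            # maneuver that the same defective controller must also handle.
            if upstream_failed:
                ok = ok and controller_ok(GLARE)
            states.append(ok)
            upstream_failed = not ok
        return states

    print(run_platoon())  # e.g. [True, False, False, True, ...]

With human drivers, defects are at least uncorrelated across the fleet; here
one bug and one trigger hit every car at once.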

If you have no clear idea how your driving software works, how are you going
to make sure this doesn't happen?

People have common sense, variability and adaptive learning to prevent things
like that. Moreover, our traffic system, our cars and our driver education
have been co-evolving, prioritizing safety, for the last 100 years. (Heck,
some of it started with horses and carriages.) The complexity and size of the
system have been gradually scaled up for just as long, and it was designed and
refined to be used by humans.

It's astounding how many AI enthusiasts fail to see these issues and
differences. A traffic system isn't a board game.

~~~
johnwyles
> If you have no clear idea how your driving software works, how are you going
> to make sure this doesn't happen?

A lot of people don't know how typing "www.google.com" into their browser
works, but DNS "just works" for them. There is software in nearly every part
of our lives that could systemically go wrong in catastrophic ways; we do our
best to plan for and prevent such failures.

> People have common sense, variability and adaptive learning to prevent
> things like that. Moreover, our traffic system, our cars and our driver
> education have been co-evolving for safety for the last 100 years. (Heck,
> some of it started with horses and carriages.) The complexity and size of
> the system have been gradually scaled up for just as long, and it's designed
> and refined to be used by humans.

Isn't that incredible? The complexity goes up: we adapt. As self-driving cars
are introduced, we _adapt_ and _evolve_ and incorporate these changes into
technology, the driving "environment", security, safety measures, and laws
and regulations.

> Traffic system isn't a board game.

I haven't seen any self-driving effort say "ahhhh! this driving stuff is so
easy!" Of course it comes with huge difficulties, an enormous amount of
complexity, a baffling amount of data, a ton of edge cases, and by now a
wealth of tribal knowledge on the subject.

When considering these things, think of the systemic benefits as well. There
was a story about one of the Google self-driving cars in Mountain View not
taking a turn when its passengers expected it to. It turned out that the
vehicle's IR sensors had seen that a jogger behind a wall of hedges was about
to enter the intersection. Systemically rolling that behavior out across a
fleet of cars would save untold numbers of injuries and lives.

We adapt.

~~~
apta
> Turns out that the IR sensors on the vehicle saw that a jogger behind a wall
> of hedges was about to enter the intersection to cross while jogging.

Some cars today already have IR sensors (night vision systems) installed, and
include pedestrian detection as well. Cars don't have to be fully
self-driving to deliver a large increase in safety and accident prevention.

------
YeGoblynQueenne
>> People can’t explain how they work, for most of the things they do. When
you hire somebody, the decision is based on all sorts of things you can
quantify, and then all sorts of gut feelings. People have no idea how they do
that. If you ask them to explain their decision, you are forcing them to make
up a story.

Manager: Why did you delete the production database, in the middle of a
workday no less?

DBA: My decision was based on all sorts of things you can quantify, and then
all sorts of gut feelings. I have no idea how I do that. If you ask me to
explain my decision, you are forcing me to make up a story.

Manager: But, you crashed our client's entire business for a whole day!

DBA: I can't explain how I work.

(Somehow, I don't see that flying.)

~~~
smallnamespace
Nobody has perfect self knowledge, and yet in society we demand people explain
and justify themselves, so some level of confabulation is unavoidable.

I guess pointing that out while explaining yourself is a bad idea though.

------
laythea
"You should regulate them based on how they perform. You run the experiments
to see if the thing’s biased, or if it is likely to kill fewer people than a
person. With self-driving cars, I think people kind of accept that now"

I don't. In my opinion, we are all getting carried away with ourselves.

Google or not, I challenge anyone who makes a statement including the phrase
"...more like brains" to define that statement accurately. In response, you
will likely hear all sorts of things that _sound_ like we know what we are
doing and are very clever, but don't hold up.

If we can't even ask the questions properly, I doubt we will get deliberately
valuable answers. Sure, we may luck out, with AI researchers stumbling across
random combinations (layered AI) that do produce useful results haphazardly,
but I would never rely on it.

E.g., I rely on software running in an aircraft because:

a) it works by design, not by trial and error;
b) it is verified;
c) we know how it works.

Unless you have (c), you cannot achieve (b) with any degree of certainty.
Experimenting until confident is _not_ good enough for me for anything
important.

Brains fail at times. Computers should not make mistakes.

Even if you knew how a brain works, and even if it's possible to replicate
it, please don't make computers like this. Software fails enough on its own
without any help, even when we supposedly understand it.

With this AI stuff, even if we knew where we were going, we wouldn't know how
to get there. And even if we did get there (wherever that is), in the long
run, AI is not going to be used for good.

As always with humanity, we will find ways to use AI to extract value out of
_other people_. This is what drives business.

No thanks, but it is interesting for fluff. A bit too voodoo for me.

~~~
modeless
> Brains fail at times. Computers should not make mistakes.

I don't think this is a reasonable position to take. We are going to have to
accept that if we want computers to do the things brains can do, they will
also make mistakes, as brains sometimes do. It's not reasonable to insist
that the error rate be exactly zero. Besides, if you consider the computer as
a complete system, including the hardware and the human-computer interface,
it's clear that no complete system can ever be error-free, even if the CPU
executes its instructions faultlessly.

As long as it's as good as or better than a human, it's good enough to be
useful. Even with "fallible" AI techniques, we should be able to build
systems that are much more reliable than humans.

~~~
laythea
"...if we want computers to do the things brains can do..."

That's my point though. I don't want computers to do what my brain does,
mainly because I have no idea what my brain does, nor does anyone. Sure, some
people think we do, but we don't.

It also seems silly to try to engineer something that we know may fail (by
design) on the chance that it may haphazardly produce better results (even if
it does so most of the time). That may be acceptable for things like search
results, where the outcome is not critically important.

Please excuse the example, but I would be happier passing away in an aircraft
accident knowing that, although the system was designed not to fail, we had
at least done our best to understand the systems in place. That would be my
definition of "sh&t happens".

If I passed away because the aircraft "AI" took an untested path of execution
(there are too many to verify, unlike most regular "dumb" software), I would
be less happy.

Maybe there is a balance: e.g., verifiable software for the important stuff,
AI fluffery for the less so. But in that case, we should not get carried away
(e.g. self-driving cars). Remember, planes have been able to fly themselves
almost completely for a long time, and we still have two pilots for a reason.

Computers are a human's tool, not a replacement.

~~~
modeless
> I don't want computers to do what my brain does

OK, that's a valid position to take, and I'm sure some people agree, but I and
many other people do want computers to do what our brains do. And it is pretty
likely to happen.

> I would be happier passing away in an aircraft accident knowing it was [not
> AI]

If I was in an aircraft accident caused by human pilot error while pilot AIs
with 1/100 the error rate of humans were blocked from deployment because they
don't meet your "zero errors" requirement, I would not be happy.

~~~
laythea
"...I and many other people do want computers to do _what our brains do_ "
(emphasis, mine).

I hear you, but my point is "what do our brains do?"

In the case of driving, if the answer is "drive a car", then that is a bit
like "Brexit means Brexit" :) The definition of the thing is the outcome of
the thing.

Instead, if you can decompose the question into a more useful form, one that
can drive a specification of software, then progress can be made. However,
all that gives you, in that case, is a description of the outcome, not the
process (and an incomplete one, due to the complexity).

It is at this point where my problem with all this AI stuff starts :)

The lack of understanding of the process, rather than the outcome, is why I
call it voodoo; anyone who claims otherwise is trying to impress, in my
opinion.

~~~
Jarwain
Generally, our brain is composed of a bunch of different subsystems and
networks that accomplish varying tasks or objectives and communicate the
results with each other. There's a lot of data sharing going on, as well as
reinforcement of known successes, thanks to one subsystem known as the reward
center.

While we don't understand how our brain works down to the bits and bytes, our
knowledge and understanding of how our brain functions is growing. A fuller
understanding of how our neurotransmitters work on a chemical/biological
level, and an effective way to model it, will likely result in interesting
advances in the field of AI.
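
As a very loose analogy in code (my own sketch; the subsystem names and
numbers are invented, and real neural circuits work nothing like this):
several "subsystems" each propose an action, and a shared "reward center"
reinforces whichever subsystem's proposal paid off:

    # Hypothetical toy: reward-modulated trust among competing subsystems.
    import random

    weights = {"vision": 1.0, "memory": 1.0, "habit": 1.0}  # trust per subsystem

    def step(reliable: str) -> None:
        # Each subsystem "proposes"; act on one, proportionally to trust.
        chosen = random.choices(list(weights), weights=weights.values())[0]
        reward = 1.0 if chosen == reliable else -0.2
        # "Reward center": reinforce or weaken the subsystem that acted.
        weights[chosen] = max(0.1, weights[chosen] + 0.1 * reward)

    for _ in range(500):
        step("vision")  # pretend vision is the reliable subsystem here

    print(weights)  # "vision" should end up with by far the largest weight

Crude as it is, that reinforce-what-worked loop is the part that maps most
directly onto reinforcement learning in computer science.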

I chat with my neuroscience friend a lot, and the overlap between how he
describes the brain working and general computer science concepts is
surprising.

------
AndrewKemendo
There's a logical inconsistency here:

Statement 1: "I’ve always been worried about potential misuses in lethal
autonomous weapons. I think there should be something like a Geneva Convention
banning them, like there is for chemical weapons."

Statement 2: "You should regulate them (AI) based on how they perform. You run
the experiments to see if the thing’s biased, or if it is likely to kill fewer
people than a person."

The intention of utilizing AI in weapons systems is to reduce human error in
their application. So is it unethical to implement an "AI" in a weapons system
that kills fewer people than a person would?
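
Note that Statement 2's "run the experiments" criterion is, mechanically, just
a hypothesis test on incident rates. A minimal sketch of what that could look
like (a plain two-proportion z-test; every number below is invented):

    # Hypothetical sketch of "regulate based on how they perform":
    # a one-sided two-proportion z-test on incident rates.
    from math import sqrt, erf

    def z_test(bad_ai, n_ai, bad_human, n_human):
        # One-sided: is the AI's incident rate lower than the human rate?
        p_ai, p_h = bad_ai / n_ai, bad_human / n_human
        pooled = (bad_ai + bad_human) / (n_ai + n_human)
        se = sqrt(pooled * (1 - pooled) * (1 / n_ai + 1 / n_human))
        z = (p_ai - p_h) / se
        return z, 0.5 * (1 + erf(z / sqrt(2)))  # p-value = P(Z <= z)

    # Invented numbers: 12 incidents in 1e6 AI trials vs 40 in 1e6 human trials.
    z, p = z_test(12, 1_000_000, 40, 1_000_000)
    print(f"z = {z:.2f}, one-sided p = {p:.5f}")  # a small p favors the AI

The test itself is indifferent to what the system actually does, which is
exactly the inconsistency: the same acceptance criterion that clears a
self-driving car would also clear a weapons system.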

Separately, I am curious why Hinton is not talking more about state
representation and prediction, as almost uniformly across DL/RL researchers
this is agreed upon as the next step forward. Hinton certainly is working with
Richard Sutton on his RL programs at U Alberta. Sutton believes that the next
step in AI systems is state representation and prediction and gave a quick
talk on it recently [1].

[1]
[https://www.youtube.com/watch?v=6-Uiq8-wKrg](https://www.youtube.com/watch?v=6-Uiq8-wKrg)

~~~
articwombat
Statement 2 is not about weapons systems. It is referencing things like
self-driving cars. You can see that if you include the next sentence:

"You should regulate them based on how they perform. You run the experiments
to see if the thing’s biased, or if it is likely to kill fewer people than a
person. With self-driving cars, I think people kind of accept that now."

~~~
AndrewKemendo
I realize that; however, it's a logical claim about the criteria for
implementing an automated system.

If the logic can be applied to one system, it should apply to all systems.

------
thrwthrw93223
Side comment: Hacker News is known for its ‘author here’ moments in threads.

Has Professor Hinton ever made a known appearance on HN?

~~~
visarga
I know Hinton has participated in a few AMA sessions on Reddit.

------
xpuente
The big question here is: how do brains think?

~~~
paraschopra
Active Inference is one possible hypothesis unifying different ideas on how
brains think.

------
sifoobar
We do have some kind of shared understanding when it comes to humans, though,
especially within a culture and even more so within a limited context like
driving a car.

We don't generally expect someone to freak out and start treating a car in
sunlight as a house because they've seen a lot of houses in sunlight lately.

The difference between a human and ML is that a human has a clue; demanding
perfect introspection is missing the point.

This borders on dishonesty, if you ask me; he should know better.

------
criddell
> The brain is solving a very different problem from most of our neural nets.
> You’ve got roughly 100 trillion synapses.

Hinton doesn't mention microtubules at all. Do any of the neural net or other
connected neuron models incorporate the quantum vibration effects inside the
neuron's microtubules?

~~~
aaaaaaaaaab
I thought quantum microtubules were still considered crackpottery.

~~~
lenticular
There are definitely a few non-crackpots interested in quantum microtubules,
but it seems very far from mainstream.

It's conceivable that quantum effects could matter in some way on very small
scales, and that doesn't really have anything to do with quantum mysticism.

------
0xdeadbeefbabe
> GH: No, there's not going to be an AI winter, because it drives your
> cellphone. In the old AI winters, AI wasn't actually part of your everyday
> life. Now it is.

Oh good; although that predicament about publishing AI papers seems pretty
bad.

~~~
thanatropism
There's something interesting about AI winters: they begin with researchers
claiming incremental science (say, heuristic tree search) as Whoa, Artificial
Intelligence, and end up with some pretty complex technology totally taken
for granted (car routing apps). In this cycle, we're in love with computer
vision; at the other end of the tunnel we might be stuck with omnipresent
facial recognition and nothing at all of Jeff Hawkins's and Ray Kurzweil's
dreams.

------
rdlecler1
We place too much emphasis on learning and not enough on designing AI network
topology. No matter how hard you try, you're not going to get a chimpanzee to
write novels. Chimps simply have a different brain architecture.

------
KasianFranks
One symbol: computational neuroscience

------
mistrial9
As a programmer, I find your faith in computers amusing.

------
guicho271828
So... like human brains that chose a particular president?

------
FigmentEngine
Brains don't think; minds do.

~~~
anonytrary
What does this even mean?

~~~
darkpuma
Presumably he's a dualist.

