
What does it mean for a machine to “understand”? - stablemap
https://medium.com/@tdietterich/what-does-it-mean-for-a-machine-to-understand-555485f3ad40
======
nutanc
This is a good, balanced article that gets a lot of things right. We should
take a forgiving approach when we talk about AI systems. And as the author
points out, the problem is not that AI systems don't have understanding yet.
The problem is the hype, which leads many to believe that we are close to
building systems which can understand us.

That said, I have a small problem with the examples presented to argue that
machines already understand us :)

The article says: "For example, when I tell Siri “Call Carol” and it dials the
correct number, you will have a hard time convincing me that Siri did not
understand my request."

Let me take a shot at explaining why Siri did not "understand" your request.

Siri was waiting for a command and executed the one that best matched: make a
phone call.

It did not understand what you meant because it did not take the whole
environment into consideration. What if Carol was just in the other room? A
human would maybe just shout "hey Carol, Thomas is asking for you", instead of
making a phone call.

If listening to a request and executing a command is understanding, then
computers have been understanding us for a long time. Even without the latest
advances in AI.

~~~
netsharc
So the next version of Siri can locate Carol's phone in the next room and will
just beep her phone to tell her to see you. Of course that's still not
understanding.

The classic analogue is of course the Chinese room argument:
[https://en.m.wikipedia.org/wiki/Chinese_room](https://en.m.wikipedia.org/wiki/Chinese_room)

~~~
TheOtherHobbes
Which is an absolutely textbook example of begging the question.

If you could make a machine pass the Turing test it might be intelligent - but
no one has, it's debatable whether it's even possible, and it's even more
debatable whether, hype notwithstanding, the Turing test is even a good test
of human-equivalent intelligence, because it ignores side channels that are
fundamental to human communication, including tone of voice, posture, and
facial expression.

(Yes, people communicate over email/SMS. But no one communicates over
email/SMS without an implied social context that hugely limits and simplifies
the content of any conversation.)

It's not the "call Carol" problem that needs to be solved. It's the
"understand the entire world context well enough to know _how_ to call Carol
without being told" problem - which includes being able to research
information that isn't already available, and also includes edge cases like
'We went to Carol's funeral last week', 'Carol had her phone stolen
yesterday', 'Carol is flying to Australia and won't be receiving messages for
another 12 hours', and 'Carol prefers FaceTime to WhatsApp'.

And so on.

Ultimately your toy machine has to show evidence that it understands the
entire world and can learn about it like a human can - which includes being
able to do original research that isn't a simple literal Google search, parse
humour, understand emotional responses and common cultural references, and
follow standard social protocols.

That's a _much_ harder problem than having a vaguely plausible limited text-
only conversation, whether it's in Chinese, English, or Swahili.

~~~
Nasrudith
I would call the moving goalposts a subtle sign of a win as well: as machines
get closer, the previously "unthinkable" tasks need ever more qualifiers, and
adding or removing those qualifiers makes the test easier or harder. To be a
smartass: anything can pass "exchange text messages with the comatose" (that
demands nothing), and nothing can reliably "prove" itself a god by, say,
resurrecting your dead relatives and teleporting them to you - that would be
obviously useful but impossible, as it isn't something text messages can do.

------
js8
I have a straightforward definition of "understand". To understand means to be
able to give a (representative) example of an (intensionally) given set.
Though it is harder than it seems, as it usually means solving a constraint
satisfaction problem.

For example, take the classical AI knowledge-base fragment, "a bird is an
animal that flies". If I ask for an example of a bird and the system says
"eagle", it exhibits some understanding. We can then probe further and ask for
a bird which is not an eagle. If it says "bat" or "balloon", it shows that it
still doesn't understand birds quite right.
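
As a minimal sketch of that probing in Python (the knowledge base, its tags,
and the brute-force search are invented for illustration; a real system would
need a proper constraint solver):

    # Toy knowledge base: each entity is tagged with properties. The
    # entities and tags are illustrative, not a real ontology.
    KB = {
        "eagle":   {"animal", "flies"},
        "penguin": {"animal"},           # a bird, but it doesn't fly
        "bat":     {"animal", "flies"},  # flies, but isn't a bird
        "balloon": {"flies"},            # flies, but isn't an animal
    }

    def give_example(constraints, exclude=()):
        """Brute-force constraint satisfaction: return an entity whose
        properties cover all the constraints."""
        for entity, props in KB.items():
            if constraints <= props and entity not in exclude:
                return entity
        return None  # nonsensical description: no example exists

    # "a bird is an animal that flies" -> ask for an example of a bird
    print(give_example({"animal", "flies"}))                    # eagle
    # a bird which is not an eagle -> "bat": exposes the weak model
    print(give_example({"animal", "flies"}, exclude={"eagle"}))
    # an impossible description yields no example at all
    print(give_example({"animal", "flies", "inanimate"}))       # None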

In particular, if the description is nonsensical and thus impossible to
understand, we cannot give any examples.

This idea was inspired by a study where people were asked to tell nonsensical
sentences from profound ones describing a certain situation. The profound
ones are those for which you can create a concrete instance of the situation.

~~~
mjburgess
Then animals do not understand.

You've rigged this up to operationalize it for current digital machines.

"Understanding", "Intelligence", etc. is a feature of animals in their
environment. We need to begin there; and that _is_ what we are talking about.

We "understand" how to drive as a dog "understands" how to play fetch.
Understanding is not ever going to be a trivial rule that some digital system
may instantiate.

It will always require direct causal contact with an environment. In my view,
"understanding" is "competent play in a changing environment" -- i.e., the
ability to modify the environment _as it changes_ in accordance with your
goals.

This rough definition is inspired by work on the role of the neocortex in
animals, on animal learning, and on the role of consciousness therein.
Roughly: consciousness is "perceptual and cognitive intelligence grappling
with environmental change".

~~~
js8
> Then animals do not understand.

I am agnostic regarding that, as I don't think there is any evidence that they
do not attempt to build models that are consistent representations of reality.

I am assuming, based on my own experience, they also have this "internal
lightbulb" going on when they think they have built the correct model. But
whether they are actually cognizant of it (self-aware), I have no idea. (I
guess what I am saying is that understanding and self-awareness are two
different things.)

~~~
mjburgess
I'm not even talking about self-awareness. I'd be happy to raise the bar to
that level when (if) we have mouse-level AI.

However, the bar is way below that at the moment, and what clears it is
masquerading as "intelligence".

Current machine learning (i.e., merely statistical) approaches to AI, which do
not explicitly aim to dynamically model environments/goals/behaviour/etc.,
aren't even meeting an extremely minimal notion of intelligence.

We have at the moment "smart rocks". Electrical current "tumbles down" a
"digital mountain" and we call its path "smart" because it has useful
outcomes. Equally, a rock rolling down a hill finds an optimal path -- it
ain't "smart".

We should look at what the rock does when you start adapting its environment:
e.g., create a little dip in the mountainside; it gets trapped. A mouse
doesn't get trapped in a dip, it continues to explore -- why?

Because animal behaviour is inherently exploratory of the environment. A mouse
doesn't "solve" a maze, it intelligently navigates it -- so that when
unexpected change occurs, it isn't "broken".
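
As a toy illustration of the dip (under the loose analogy that the rock is
greedy descent and the mouse adds random exploration; the landscape function
is invented for the demo):

    import random

    def height(x):
        """A hillside sloping down to the left, with a small dip carved
        near x = 3 (the 'little dip in the mountainside')."""
        return x - 2.0 if 2.8 < x < 3.2 else x

    def rock(x, steps=1000, dx=0.05):
        # Pure greedy descent: always roll to the lowest neighbour.
        for _ in range(steps):
            x = min((x - dx, x, x + dx), key=height)
        return x

    def mouse(x, steps=1000, dx=0.05):
        # Greedy descent plus occasional random exploration.
        for _ in range(steps):
            if random.random() < 0.2:
                x += random.uniform(-1.0, 1.0)  # wander, even uphill
            x = min((x - dx, x, x + dx), key=height)
        return x

    print(rock(5.0))   # stuck at ~2.85, inside the dip
    print(mouse(5.0))  # typically far below, having wandered past the dip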

At the moment, all AI systems _radically_ break when such changes occur --
because they are statistically trained on mere data. They aren't dynamically
building models. They aren't in an environment. They're just rocks rolling
down a hill.

------
cjfd
On the one hand the quote by Edsger Dijkstra comes to mind. "The question of
whether machines can think is about as relevant as the question of whether
submarines can swim." We are hardwired to attribute great significance to what
happens both in our own head and that of other people.

On the other hand, machines still perform actions that one could call
'stupid'. When AlphaGo was losing the fourth game against Lee Sedol it would
play 'stupid' moves. These were, for instance, trivial threats that any
somewhat accomplished amateur go player would recognize in an instant and
answer correctly.

Humans, and also animals, have a hierarchy in their understanding of things.
This maps onto brain structure too: evolution has added layers to the brain
while keeping the existing structure. In this layered structure the lower
parts are faster and more accurate, but not as sophisticated.

Stupidity arises from a lack of layeredness: when the goal of winning the game
is thwarted, the top layer doesn't have anything useful to do anymore and
falls back on the layer behind it. For AlphaGo, pretty much the only layer
behind its very strong go engine is the rules of go. So even when it is
losing it will never play an illegal move, but it will do otherwise trivially
stupid things. For humans there is a layer between these things that prevents
them from doing useless stuff. For living entities this is essential for
survival: you can forget your dentist appointment, but it is not possible to
forget to let your heart beat.

It seems this problem could be mended by putting layers between the top-level
algorithm and the most basic hardware level, such that stupid stuff is
preempted.
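
As a hedged sketch of that mending (the game, the scoring, and the "obvious
refutation" check below are invented stand-ins, not AlphaGo's actual
architecture):

    import random

    # Toy layered move selection; moves are just 0..9.
    LEGAL_MOVES = list(range(10))

    def evaluate(position, move):
        return random.uniform(-1, 1)  # stand-in for a strong engine's value net

    def obviously_refuted(position, move):
        return move % 3 == 0          # stand-in for a cheap amateur-level check

    def engine_move(position):
        """Top layer: best move by evaluation, or None once every move
        looks lost and 'winning' no longer discriminates."""
        scores = {m: evaluate(position, m) for m in LEGAL_MOVES}
        best = max(scores, key=scores.get)
        return best if scores[best] > -0.9 else None

    def choose_move(position):
        move = engine_move(position)
        if move is not None and not obviously_refuted(position, move):
            return move
        # Middle layer: even with the goal thwarted, rule out moves any
        # amateur would answer in an instant.
        sane = [m for m in LEGAL_MOVES if not obviously_refuted(position, m)]
        if sane:
            return random.choice(sane)
        # Bottom layer: the rules of the game -- any legal move at all.
        return random.choice(LEGAL_MOVES)

    print(choose_move(position=None))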

~~~
AnIdiotOnTheNet
> When AlphaGo was losing the fourth game against Lee Sedol it would play
> 'stupid' moves. These were, for instance, trivial threats that any somewhat
> accomplished amateur go player would recognize in an instant and answer
> correctly.

I think this behavior is less 'stupid' than it appears. When human beings play
Go, the points matter even to the loser, and everyone goes home when it is
over. There is life outside of Go. To AlphaGo, Go is its entire universe.
Part of the way it was trained was competing against other instances of
itself, a sort of Thunderdome where the loser doesn't get to continue
existing and doesn't contribute to future generations. To AlphaGo, defeat is
death. The behavior we observe when losing is nigh-certain has a human
equivalent: we call it desperation. AlphaGo is trying moves that can only
possibly work if the opponent makes a catastrophic blunder, which is
incredibly unlikely, but it's the only shot it has.

------
modeless
> When I ask Google “Who did IBM’s Deep Blue system defeat?” and it gives me
> an infobox with the answer “Kasparov” in big letters, it has correctly
> understood my question. Of course this understanding is limited. If I follow
> up my question to Google with “When?”, it gives me the dictionary definition
> of “when” — it doesn’t interpret my question as part of a dialogue.

Google Search doesn't, but Google Assistant does. I posed the exact queries
suggested by the article, and the second query of simply the word "when" did
give the correct answer (May 11, 1997).
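
For what it's worth, the mechanism the follow-up needs is just a bit of
dialogue state. A toy sketch (not how Assistant actually works; the one-entry
fact table hard-codes the article's example):

    # Tiny fact table hard-coding the article's example query.
    FACTS = {
        "who did ibm's deep blue system defeat?":
            {"answer": "Kasparov", "when": "May 11, 1997"},
    }

    class Dialogue:
        def __init__(self):
            self.context = None  # the last resolved question, if any

        def ask(self, utterance):
            q = utterance.strip().lower()
            if q in FACTS:
                self.context = FACTS[q]          # remember for follow-ups
                return self.context["answer"]
            if q.rstrip("?") == "when" and self.context:
                # Interpret "When?" against the previous question
                # instead of falling back to a dictionary definition.
                return self.context["when"]
            return "definition of '%s'" % q      # the Search-like fallback

    d = Dialogue()
    print(d.ask("Who did IBM's Deep Blue system defeat?"))  # Kasparov
    print(d.ask("When?"))                                   # May 11, 1997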

~~~
baddox
That example seems pretty unrelated to what I would think of as
“understanding.” That’s more just a feature request for Siri.

It’s like saying “my calculator lets me type ’1 + 2 =’ and gives me the answer
‘3,’ so it seems to understand that question, but when I look at the
calculator I see there’s no ‘sqrt’ button that would show me the square root
of 3.”

The fact that my basic calculator doesn’t have a “sqrt” button is pretty
irrelevant to how well it “understands” how to add two numbers together.

~~~
taneq
Your basic calculator still has a concept of context, though. If you go '1 + 2
=' then it will give you '3', and if you press '/ 2 =' then it will give you
'1.5'. It 'remembers what you were talking about' within its very limited
scope.

I think what they were trying to get at is that understanding is stateful.
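
As a sketch, that "very limited scope" of state fits in a few lines (a toy
accumulator calculator, invented here for illustration):

    class Calculator:
        """Keeps a running value, so '/ 2 =' applies to the previous
        answer -- a tiny, very limited form of conversational state."""

        def __init__(self):
            self.value = 0.0

        def press(self, op, operand):
            if op == "+":
                self.value += operand
            elif op == "-":
                self.value -= operand
            elif op == "*":
                self.value *= operand
            elif op == "/":
                self.value /= operand
            return self.value

    c = Calculator()
    print(c.press("+", 1))  # 1.0
    print(c.press("+", 2))  # 3.0 -- 'remembers' the 1
    print(c.press("/", 2))  # 1.5 -- 'remembers what you were talking about'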

------
BoppreH
I don't remember where I first saw it, but the best definition of
"understanding" I've seen is "being able to encode and compress".

For example, imagine a system whose input is the picture of a human face in
RAW format. If the system runs the picture through JPEG compression, for
example, and returns something substantially smaller, it has shown some
understanding of the input (color, spatial repetition, etc.).
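
That version of the idea is easy to try with off-the-shelf lossless
compression (a crude stand-in for the JPEG example; the byte patterns below
are invented for the demo):

    import os
    import zlib

    # A 'picture' with structure (a repeating 256-byte pattern) vs pure noise.
    structured = bytes(range(256)) * 256   # 65536 bytes, highly regular
    noise = os.urandom(65536)              # 65536 bytes, no structure at all

    # zlib models byte-level repetition: where there is structure to
    # capture, the encoding shrinks; where there is none, it can't.
    print(len(zlib.compress(structured)))  # a few hundred bytes
    print(len(zlib.compress(noise)))       # slightly over 65536: no savings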

A more advanced system, with more understanding, may recognize it as a human
face, and convert it to a template like the ones used for facial recognition.
It doesn't care about individual pixels anymore, or the lighting, just general
features of faces. It understands faces.

An even more advanced system may recognize the specific person and compress
the whole thing to a few bits.

I would say that an OCR scanner understands the alphabet and how text is laid
out, and GPT-2 understands the relationship between words and how text is
written. And a physics simulator understands basic physics, because it can
approximately compress a sequence of object movements into only initial
conditions and small corrections.

Lossy compression makes this concept non-trivial to measure, but it's still
worlds away from the usual philosophical arguments.

------
stared
> Speaking as a psychologist, I’m flabbergasted by claims that the decisions
> of algorithms are opaque while the decisions of people are transparent. I’ve
> spent half my life at it and I still have limited success understanding
> human decisions. - Jean-François Bonnefon’s tweet (as quoted in
> [https://p.migdal.pl/2019/07/15/human-machine-learning-motivation.html](https://p.migdal.pl/2019/07/15/human-machine-learning-motivation.html))

~~~
gus_massa
The advantage of humans is that we have a built-in bullshit generator.

If someone asks why you like ice cream, you can tell a nice story about the
hot summers of your childhood, but the reality is that sugar and fat are very
useful.

If the autopilot of a Tesla hits someone, the error report is "Fatal error
0xDEADBEEF: coefficient 742 > 812".

If a person hits someone, the explanation is "It was dark and near a curve. I
was texting, which is totally safe. I got distracted by a reindeer nearby. And
I sneezed and was thinking about reaching for a handkerchief".

------
Nasrudith
To be gadflyish: do humans even truly understand, or do they just claim they
do because they have the observations roughly encoded from what they have
been taught? Teachings which themselves often include unfounded assumptions
or outright superstition.

Human understanding has been wrong often enough, missing enough crucial
context to be dangerously, hilariously wrong, even amongst the "experts" of
the day who came closest.

This isn't some epistemological nihilism, but a reminder that understanding is
incomplete for everyone, and just because a given intelligence's subset
doesn't match our assumptions doesn't mean it is wrong - although it also
isn't always right.

------
ilaksh
I think getting near human level in NLP understanding means being able to
visualize and combine all of the dynamic systems that language represents.
It's obvious that you can get pretty far just by processing a lot of text,
but there is a limit. Some information about the way things work just is not
encoded very well in text the way it is in video input. So you need to be
able to do a sort of physics simulation, for starters. Except it can't just
be physics, because there are a lot of patterns that occur that you need to
be able to call up and manipulate or combine that are not just plain physics.
These patterns are not represented in text.

There are projects doing video and text understanding. I think the trick to
efficient generalization is to have the representations properly factored out
somehow. Maybe things like capsule networks will help. Although my guess is
that, to get a really componentized, efficient kind of understanding, neural
networks are not going to be the most effective way.

------
avmich
The proposal in the article is to define "understanding" and work towards
testable satisfaction of the definition.

This sounds a bit like studying for a test. What if we made a definition and
then worked successfully to reach the state where, according to this
definition, the system "understands"? Can we expect to be satisfied with the
result in general, outside of the definition?

The definition of understanding could be tricky, as history suggests. Other
than "to understand is to translate into a form which is suitable for some
use", there could be many definitions. The article itself gives examples of
chess playing and truck driving, which were once considered good indicators
yet failed to satisfy us in some ways.

Maybe we should just keep redefining "understanding" as well as we can today,
changing it if needed, and work on creating a system that is "good", not
necessarily one that "passes the test"?

------
YeGoblynQueenne
OK, wow, the old guard sure knows how to write sensibly. This is a great
article.

But I have to disagree with this (because of course I do):

>> For example, when I tell Siri “Call Carol” and it dials the correct number,
you will have a hard time convincing me that Siri did not understand my
request.

That is a very common-sense and down-to-earth non-definition of intelligence:
how can an entity that is answering a question correctly not "understand" the
question?

I am going to quote Richard Feynman who encountered an example of this "how":

 _After a lot of investigation, I finally figured out that the students had
memorized everything, but they didn’t know what anything meant. When they
heard “light that is reflected from a medium with an index,” they didn’t know
that it meant a material such as water. They didn’t know that the “direction
of the light” is the direction in which you see something when you’re looking
at it, and so on. Everything was entirely memorized, yet nothing had been
translated into meaningful words. So if I asked, “What is Brewster’s Angle?”
I’m going into the computer with the right keywords. But if I say, “Look at
the water,” nothing happens – they don’t have anything under “Look at the
water”!_

[https://v.cx/2010/04/feynman-brazil-education](https://v.cx/2010/04/feynman-brazil-education)

In this (in?)famous passage Feynman is arguing that the students of physics he
met in Brazil didn't know physics, even though they had memorised physics
textbooks.

Feynman doesn't talk about "understanding"; rather, he talks about "knowing" a
subject. But his is also a very straightforward definition of knowing: you
can tell that someone doesn't know a subject if you ask them many questions
from different angles and find that they can only answer the questions asked
from one single angle.

So if I follow up "Siri, call Carol" with "Siri, what is a call" and Siri
answers by calling Carol, I know that Siri doesn't know what a call is,
probably doesn't know what a Carol is, or what a call-Carol is, and so that
Siri doesn't have any understanding from a very common-sense point of view.

Not sure if this goes beyond the Chinese room argument though. Perhaps I'm
just on a different side of it than Thomas Dietterich.

------
visarga
Does AlphaGo 'understand' Go?

I think the key ingredient is 'being in the game': that means having a body,
being in an environment, with a purpose. Humans are by default playing this
game called 'life'; we have to understand, otherwise we perish, or our genes
perish.

It's not about symbolic vs connectionist, or qualia, or self consciousness.
It's about being in the world, acting and observing the effects of actions,
and having something to win or lose as a consequence of acting. This doesn't
happen when training a neural net to recognise objects in images or doing
translation. It's just a static dataset, a 'dead' world.

Until now, AI has had a hard time simulating agents or creating real robotic
bodies - it's expensive, the system learns slowly, and it's unstable. But
progress happens. Until our AI agents get real hands and feet and a purpose,
they can't be in the world and develop true understanding; they are more like
subsystems of the brain than the whole brain. We need to close the loop with
the environment for true understanding.

~~~
julvo
If the agent 'understood' Go, we'd expect it to adapt to a round board easily.
Humans probably would. (An argument from Gary Marcus.)

~~~
visarga
Even a simple scaling down of the board from 19x19 to 9x9 has a huge effect on
strategy. A circular board would probably produce something that doesn't look
like Go and would confuse trained humans as well.

------
boyadjian
To understand means to classify, to model.

------
RaiseProfits
You should direct the question to the computer if you want a meaningful
answer.

------
basicplus2
Self-consciousness is required for understanding and intelligence.

~~~
sadness2
Self-consciousness can be equally stratified and broken down functionally.
It's not a boolean, either.

------
igammarays
I'm with John Searle on the Chinese room [1], i.e. that a machine cannot be
said to "understand" language even if it is able to pass the Turing Test.
That is because when we say "understand", we are referring to a particular
kind of human experience (qualia?) that a machine simply doesn't seem to
have, but animals, for example, do.

[1]
[https://en.wikipedia.org/wiki/Chinese_room](https://en.wikipedia.org/wiki/Chinese_room)

~~~
msla
I can say that _you_ don't have qualia and you can't prove me wrong.

Does that seem _dangerous_ to anyone else?

I also don't see any distinction between "qualia" and "soul" other than
spelling, but perhaps it's because _I_ don't have one.

Finally, I have this question for Searle: Say you understand English. Does any
specific neuron in your brain understand English? No, the larger system of
neurons+neuronal connections does, so why doesn't the system of grad
student+book understand Chinese?

~~~
baddox
Dangerous? No. To me it just seems to mean that “qualia” is not a particularly
useful concept, particularly when discussing the capabilities of computer
software.

~~~
goatlover
The issue is that we're using a word based on human mental activity and social
agreement and then applying it to a computational process in a machine, which
likely leaves out the part of human experience that makes up the word
"understand".

It’s more accurate to say the Chinese room computes results which humans
recognize as successful translation from English to Chinese. The understanding
is all on the side interpreting the output.

~~~
baddox
Then what is the point in asking if a machine “understands” English and
Chinese? It sounds like the question would be either completely untestable, or
the answer would just be “no” by definition because we’re defining
“understanding” to be “based on human mental activity.” It just doesn’t seem
like a useful question if the answer can not be determined by a test such as
the Chinese room thought experiment.

~~~
goatlover
Well, if we asked this about Data from Star Trek, then the answer would have
to be yes, or mostly yes (Data does struggle to make sense of some human
behavior on the show). So then the question is what gives Data an
understanding that the Chinese Room lacks?

Data participates in human society and he has a human-like body. Data also has
subjective experiences, as evidenced by his dream sequences in one episode.
Whereas the Chinese Room is just following a bunch of rules for translation.
But Data doesn't merely translate from one set of symbols to another given a
large set of rules. He learns by interacting with people and through his
experiences as an android. From that we could say understanding is the result
of an embodied social activity, which the Chinese Room completely lacks.
Whatever the Chinese room is said to be doing, it's not the same as
understanding language.

Another way to put it is that language isn't equivalent to symbol
manipulation, even though it makes use of symbols, at least since the written
word was invented.

------
friendlybus
I don't think it's possible for machines to understand. Numbers are
meaningless; our human actions give them a useful function. All of the
meaning a computer appears to provide is the preassigned values of layers and
layers of programming work done by humans. Even today, AI involves a lot of
human tagging and categorization that makes it useful.

The idea that a new self-sustaining meaning generation can arise out of the
interlocking mechanisms of a computer is an interesting one. As we watch
self-driving-car CEOs describe some of the most advanced systems we have,
which need to be run in controlled environments and balk at the infinite
complexity of real life, are we really building computer systems that are
anything more than an incredibly sophisticated loop?

~~~
prvnsmpth
Well, what does it mean for humans to "understand"? Don't humans understand
things by altering the state and connections of neurons in the brain? You
could make the argument that the brain is also an "incredibly sophisticated
loop".

My point is that humans are also highly-sophisticated, biological machines, so
if you say machines cannot "understand", you are making the same claim for
humans as well.

~~~
friendlybus
Humans also squirt fluids around in their brains. The brain-as-machine is one
of many ways to think about humans. Humans can conceive of, and move past,
thoughts or concepts that would cause a machine to crash. I think more ideas
describe human brains than "simply machines", though that idea is useful in
places.

Making a claim about what a human is in the absolute is more about what you
fill the unknown with than about the nature of a human.

Understanding is the difficult question. I would argue the understanding
people want out of machines is the ability to generate, use, and self-manage
tools, such that the machine knows the tool's place or context under a human
value, story, or intent, and adapts to the implications of that higher order.
That, in the most exaggerated sense, would be perceived as a machine that
understands, but of course people mean different things when they say that.

~~~
russdill
Are you equating "machine" here with "sequentially programmed computer"?
Because computers, and neural networks specifically, have gone far beyond
that.

~~~
friendlybus
I get how ML works. I don't mean loops as in a for(int i) loop, but the
concept of a loop itself, a circle. A self-driving car with ML
decision-making is still bounded by some rules we will be forced to
compromise on. Some people at MIT are focusing on deaths per mile driven as a
safety metric to determine whether we can replace humans with AI cars and
when that might happen.

But given the constraints of ML/AI, you will eventually have a bounded
container where an AI car can operate and where it can't. The car will be
tasked with looping through that environment from job to job, then back to
recharge at its base station. For all the sophistication of getting the car
on the road and working, it won't really be making up its own story through
the world, nor will it understand the greater context of its actions. The
pattern recognition in CV is great, but it is fed by humans, so the meaning
that a tree should be avoided was initially put in by a programmer, even if
the car in the moment chooses to avoid the tree by itself. The car is
crunching meaningless numbers like a pipe directs water.

So when people say a machine "understands something", it can't ever really be
true, because none of our machines know what is going on in the world; they
only know what numbers they see and how to behave when those numbers change.
At the very bottom it's electricity looping through logic gates, and that
same principle is repeated all the way up to a car that loops through its
environment and comes back.

If all the humans left the planet, the car wouldn't be described as
understanding the world; it would be seen as a generic device sitting in a
garage somewhere, waiting for orders from a human. If you fill the earth with
aliens, the CV breaks, never having seen aliens before; the roads get changed
over time by nature; the high-detail mapping it relies on fails. The car's
"understanding" only exists as an outcome of electric impulses. It doesn't
understand and never could. We are building more and more sophisticated
loops, and I'm glad, but to think computers can understand is a doomed
project. They will never "get" the values, intents, and stories we put in
them. Computers will forever be a labour of love, not able to grow into
understanding what we mean them to be.

~~~
gus_massa
> _A self-driving car with ML decision-making is still bounded by some rules
> we will be forced to compromise on._

The atoms in the molecules in the neurons of your brain are bound by the laws
of physics. They can't disobey them; they are as free as the coefficients of
the ML tables.

> _If all the humans left the planet, the car wouldn't be described as
> understanding the world; it would be seen as a generic device sitting in a
> garage somewhere, waiting for orders from a human._

Unless some car has set up an alarm to go pick you up from work at 5pm. You
are not there, but it goes anyway. After some time (an hour?) it gives up and
returns home to charge and wait for the next day. The waiting time depends on
the weather (whether it is cold or rainy), on the battery charge, and perhaps
on the congestion of the roads.

Once per year they go to the robot mechanic for the annual service. They also
go when a tire or something else breaks. They can call the autonomous crane
in case it is needed. While being repaired, they call a replacement and send
it all your info and schedule, so you would not miss your appointments (in
case you were still there).

The car also negotiates the insurance automatically with the company's web
service, and pays the registration fees. Your autonomous house pays the
electricity bills. Until your bank account is empty.

If you have some money in a good investment fund, this can last for a long
time, until your car is too old and decides to retire and buy a replacement.

We are still very far from this scenario, but it is not so difficult to
imagine that a bunch of small features compose nicely.

Somewhat related:
[https://en.wikipedia.org/wiki/Hachikō](https://en.wikipedia.org/wiki/Hachikō)

~~~
friendlybus
I like your hachiko characterization of a patient and loyal car. That level of
automation would be nice, I find cars to be a chore sometimes.

What I'm trying to get at is deeper. I guess it's a question of philosophical
form. Can you grow a software package to the point of transcending a looped
format? Usually a program has our goals and desires established in the coding
process, and we may throw in some qualitative checking functions. We then
compile it into a binary form that runs on a CPU that has a clock. That CPU
always runs, and the human-relevant meaning in the code, like function names
and the human interpretation of images, video, maps, and sound, evaporates,
leaving only streams of binary. The binary flows through logic gates that act
like plumbing tools. The tools can check their own output and proceed down
different qualitative paths.

ML as a form grinds out the problem of optimizing the path through those logic
gates against qualitative checks. Then we store the working model and loop it
at runtime.

So why can you give a human the idea 'the cars you make can be sold and get
you laid' and the human will change their entire career, living location, and
lifestyle to suit a better economic output, but the program cannot reason out
or create a form that is not a loop?

If we give a car body sensors and body 'brains', it can synthesize many
different perspectives at once. Tactile door handles could give
fingerprint/heartbeat/temperature readings of the human driver, as one tiny
example. You could program in assumptions about what a high-temperature human
needs and wants. You could give the car every kind of imaging sensor, air
quality sensors, moisture sensors. You could track and synthesize all that
data across time and evolve a sense for when it's going to rain, like ants
have, or whatever. It could 'feel' the world. But it would still be that
sheepdog waiting at home.

Could it anticipate your needs? Only as a historical projection, plus whatever
you program in. Can you infer human intent, thought, or value from sensors?
Computer vision applied to human faces or voices? I don't think so.

Humans use different forms of language to transmit intent, values, stories,
feelings. The idea that we could have a language, or sensor-based inference,
with which we talk to the car, and which will perceive and adapt to the
conflict our own mind is wrestling with and seeking to solve, is a difficult
one. Google's automated hairdressing appointment booker is cool; it extends
the breadth of voice commands a computer can respond to without having to
understand what the words mean, or the conflicts implied in understanding
their meaning, but only how they should be plumbed around as electric bits.

I guess the endless hope is that if we just have a large enough quantity of
processed information, we can build a machine that you can interface with,
that will solve the problem, and whose innards you don't need to know how to
work. Which always seems like a plausible goal until it isn't. Web apps
break, the internet behaves unpredictably, washing machines require cleaning
and soap/washing knowledge, cars break. Stuff that can be ignored usually can
be ignored because we pay others to fix the problems quietly - shifting the
burden of understanding how to deal with looped quantitative machines onto
the capitalist/currency system, another quantitative system.

The challenge of allowing a human to ignore the new loops, whose structure one
must otherwise learn, and thus be able to say the machine 'understands' me
instead of my having to understand it, is a forever-doomed hope that we
nonetheless benefit from trying to solve.

~~~
russdill
Just bring it down to basics. Our brains operate on neurons, neurons operate
on physics, and physics can be fully simulated by computers.

This looping, CPU, "programming it in", and app framing is not the direction
machine learning is going in. That's not how deep learning and neural
networks work. You can integrate them with an app and do looping, yes, but
you can also just connect them to each other. No looping, no "programming it
in", no apps.

~~~
friendlybus
Our brains don't work solely on neurons. There was a neuroscience video with
3-4 prominent people in the field dispensing with the idea that brains are
computers. Brains squirt fluids around; much is unknown. There are plenty of
things about existence that physics does not capture.

Frankly, I'm not saying ML is programmed in, only that the initial conditions
are, and that's where the meaning is. We have hired a lot of low-income
earners to classify images for image recognition, which is the outsourcing of
the discernment of meaning from the CPU to the human. These kinds of broad
discussions don't go anywhere here; I should go somewhere more philosophical.

~~~
gus_massa
Fluids in the brain carry a few hormones from here to there, and the
distribution of neurotransmitters has an important role, but they act very
slowly. If we ever have a 100% bug-compatible model of the neurons of the
brain, adding the flow of fluids will be easy. They can be modeled with a
very small number of slowly changing variables.

I don't expect ever to see a 100% bug-compatible model of the brain. I expect
to see some system that does a somewhat similar calculation and produces
results that look similar to the results in the brain. Something like the eye
and a camera.

