
The Stupidity of Computers - mgunes
http://nplusonemag.com/the-stupidity-of-computers
======
tlarkworthy
The Never-Ending Language Learner (NELL) is able to build ontologies largely
unsupervised, with 87% accuracy, from a base set of seed facts and access to
the internet, running indefinitely. [1]

Whilst it is unable to associate one symbol with multiple nouns, I think these
are more engineering issues than anything. The overall architecture of NELL
can be made smarter with horizontally scalable knowledge-inference additions.
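
For a rough flavor of the bootstrapping loop (a toy sketch I made up to
illustrate the idea; NELL's actual implementation couples many extractors
and confidence estimates):

    # Toy sketch of NELL-style coupled bootstrapping (illustrative only).
    corpus = [
        "Paris is a city in France",
        "London is a city in England",
        "Berlin is a city in Germany",
        "Rust is a programming language",
    ]
    cities = {"Paris"}              # seed fact for the category "city"
    patterns = set()

    for _ in range(2):              # the "never-ending" loop, truncated
        # 1. From known cities, learn the textual contexts they appear in.
        for sentence in corpus:
            words = sentence.split()
            if words[0] in cities:
                patterns.add(" ".join(words[1:4]))  # e.g. "is a city"
        # 2. Apply the learned contexts to promote new candidate facts.
        for sentence in corpus:
            words = sentence.split()
            if " ".join(words[1:4]) in patterns:
                cities.add(words[0])

    print(cities)                   # {'Paris', 'London', 'Berlin'}

One over-general pattern admitted in an early round poisons every later
round; that's exactly where accumulated errors come from.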

I think articles like this are going to be out of date fairly soon (if not
already out of date privately).

[1] [http://en.wikipedia.org/wiki/Never-Ending_Language_Learning](http://en.wikipedia.org/wiki/Never-Ending_Language_Learning)

~~~
3rd3
> Accumulated errors, such as the deduction that Internet cookies were a kind
> of baked good, led NELL to deduce from the phrases "I deleted my Internet
> cookies" and "I deleted my files" that "computer files" also belonged in the
> baked goods category.

That is hilarious!

By the way, there is another interesting never-ending learner which is based
on random images from the internet: [http://www.neil-kb.com/](http://www.neil-kb.com/)

~~~
adwf
Yes, 87% is quite impressive, but that remaining 13% is all-important. The
errors sadly propagate (as above); more importantly, the errors also tend to
strike randomly.

E.g. a human might not be expected to know much medical terminology, and
hence could easily get the names/locations/functions of muscles and organs
mixed up, whereas a computer might be brilliant at that but then suddenly not
know what a door knob is (door handle? knob of butter? etc.).

So ultimately, to make a computer seem intelligent, the problem is twofold:
getting a high percentage of right answers, whilst also getting the "right"
wrong answers.

After all, we don't criticize a human too much for not knowing something like
the exact geographic location of a small African nation. But we definitely
would if they thought it was on Mars! It's exactly the same with computers;
they're just far more likely to make wildly incorrect mistakes than
understandably incorrect ones. So finding a way to mitigate computer mistakes,
to make them milder, is every bit as important as eliminating them entirely -
a potentially impossible task.

~~~
blueblob
If you think about the way that humans learn, we get formal education that the
computer doesn't get: we are corrected for our mistakes by other people. This
work did not originally have that. The newer versions of it do get corrected
by people, but that doesn't mean it couldn't instead be corrected by more
instances of itself operating over different datasets via a consensus
algorithm. The algorithm will also probably get flak for not being
unsupervised, when human learning is actually partially supervised too.

One could argue that humans have a similar problem of errors propagating.
Areas that we feel strongly about can bias us against learning in fields such
as religion, politics, and, of course, programming language design.

I think it is possible that humans would give similarly poor responses in
areas that we don't talk about often.

~~~
adwf
I agree absolutely. I believe that future AIs will be educated in much the
same way we educate children. Perhaps less education will be needed - possibly
we could start them off at a higher age bracket, for example. But ultimately
I'm certain the first "strong" AIs will be educated/supervised to some
degree.

My point was more that perhaps we're a bit too focused on the wrong metric for
success. The current criteria set is {true positives, true negatives, false
positives, false negatives}, and we try to optimise for high or low degrees of
one or the other in order to determine whether a particular approach is
successful.

What is then overlooked is that perhaps we don't need a near-perfect positive
rate, but instead an acceptably-incorrect false positive or false negative
rate, where the answer may be wrong but not too far wrong. Much like a human
might pin a country like India in the wrong place on the map, but wouldn't
ever put it in the middle of the Indian Ocean.

In summation: perhaps the key for computers to appear intelligent is not to be
perfectly correct, but to be not too disastrously incorrect.

~~~
blueblob
Perhaps you can extend the F1-score[1] to be an F1,w-score that weights errors
based on some measure of distance from correctness.

[1] [http://en.wikipedia.org/wiki/F1_score](http://en.wikipedia.org/wiki/F1_score)
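
As a toy sketch of what I mean (my own formulation, not an established
metric), errors could count fractionally, scaled by some domain-specific
distance from the right answer:

    # Hypothetical distance-weighted accuracy: a near-miss costs a
    # fraction of an error, a wild miss costs a full one.
    def weighted_score(pairs, distance, max_dist):
        # pairs: (predicted, actual); distance: how wrong a prediction is
        tp = sum(1 for pred, true in pairs if pred == true)
        err = sum(min(distance(pred, true), max_dist) / max_dist
                  for pred, true in pairs if pred != true)
        return tp / (tp + err) if (tp + err) else 0.0

    # E.g. map guesses scored by km from the true location, capped at
    # 5000 km: India-slightly-off barely hurts, while India-in-the-ocean
    # counts as a full error.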

~~~
adwf
Sounds like a good idea. Finding that distance heuristic will be a challenge
though!

------
kijin
As an Aspie, I like the fact that my computer is "stupid".

It understands my commands literally and executes them exactly the way I
specified them, down to the last typo. It doesn't try to second-guess my
intentions, read my body language, or do any of the thousand other things that
neurotypical people do to drive me crazy. After a full, stressful day of
interacting with people, interacting with the "stupid" Terminal is a breath of
fresh air. I'm sure a lot of other autistic people like computers for the same
reason.

If my computer ever began to interpret my words and actions like an actual
specimen of _Homo sapiens_ does, I'd probably throw it on the ground and
destroy it with a jackhammer. When I buy an Intel processor, it's because I
want it to crunch numbers for me, not because I want a clone of that thing in
the movie "Her".

------
dmunoz
> Amazon also had your entire purchasing history, and its servers could
> instantly compute recommendations you would be likely to accept.

I wish. I have long been surprised at how poor Amazon recommendations are
given that they have a large pool of actionable data: things I have actually
purchased. What else could be a stronger signal?

I have even informed them which purchases not to use as a basis for
recommendations, and flagged recommendations that are not interesting (at
least for those made on my page while signed in; there's no way to do so with
the emails they send).

I preorder plenty of technical books, so Amazon could easily suck more money
out of me if they kept me informed of upcoming books by authors I have
purchased from, or in the specific areas I purchase in. Instead, I seem to get
weekly emails about the latest "popular" books in a very general area (e.g.
programming).

~~~
adwf
Yeah, I have a long purchasing history of sci-fi novels. But then I buy a book
about UX and another about value investing, and suddenly my recommendations
are all replaced by Warren Buffett and web design books...

Sci-fi novels are an area where I need a large amount of discoverability to
find new authors - recommendations are very useful. Not so much when I break
my trend and buy a stock market textbook.

~~~
waveman2
From personal experience I suggest you never buy a children's book on your own
account.

There doesn't seem to be a way to tell Amazon "Hey this purchase was a one-
off, don't use it for recommendations".

~~~
gaius
Sure there is: in the top left there is a link for "<your name>'s Amazon";
click that, then in the toolbar in the middle click "Improve your
recommendations". You'll get a list of all your purchases, with a checkbox by
each one to disregard it.

------
tim333
Good article, but I disagree with his basic conclusion. To illustrate what I
mean, a couple of quotes:

"... will increase the hold that formal ontologies have on us. They will be
constructed by governments, by corporations, and by us in unequal measure,..."

"We will define and regiment our lives, including our social lives and our
perceptions of ourselves, in ways that are conducive to what a computer can
“understand.” Their dumbness will become ours."

I think the opposite is in fact happening. Academically, for example, there
used to be a fixed system - in the UK you did GCSEs, A levels, a bachelor's
degree, etc. - which was something like a formal ontology constructed by
government: physics shall be divided from chemistry, and both shall be graded
from A to E. Now we have all kinds of new forms of education, like online
courses, Wikipedia and so on, so you can pretty much find an educational form
to fit what you want to do. We're moving from a single formal, approved
ontology to many competing ones where you can choose.

I'm not sure where posting on Hacker News fits into this. Maybe it is
regimenting our lives, including our social lives and our perceptions of
ourselves, in ways that are conducive to what a computer can understand!

------
david927
I've never heard of the magazine n+1, but if this is typical of it, I'm deeply
impressed. In particular, some of the author's conclusions are brilliant.

His only failure is when he tries to prognosticate. Ontologies are reductive
only when they're not contextual, for example.

~~~
mjn
I generally like the magazine. It isn't mainly about technology, though, if
that's what you're looking for: it's more of a
literature/theory/politics/arts/culture magazine. Articles mainly about
science/technology are maybe one per issue on average, though usually pretty
good ones (with some misses).

Another new-ish magazine that I mentally place in vaguely the same genre is
The New Inquiry: [http://thenewinquiry.com](http://thenewinquiry.com)

~~~
dclara
Mainly not about tech? It's such a perfect summary of what's happened in the
history of information processing. I'd like to see more of this kind.

~~~
mjn
This article is; I just meant that the magazine overall is not mainly about
tech. Here's their latest ToC, for example: [http://nplusonemag.com/print-issue-18/](http://nplusonemag.com/print-issue-18/)

~~~
dclara
It's unbelievable. Even in the tech world, I haven't seen reporting of such
good quality so far. Why is it not more popular? Maybe because it's a paid
service.

Thank you so much. Please introduce more of it to us later.

------
danso
> _Consider how difficult it is to get a computer to do anything. To take a
> simple example, let’s say we would like to ask a computer to find the most
> commonly occurring word on a web page, perhaps as a hint to what the page
> might be about._

Um, has the OP ever tried getting a _human_ to easily perform such a task?
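
To be fair, the mechanical half really is a few lines; the hard part is every
judgment call a human makes without noticing (does markup count? do stop words
like "the" count?). A crude sketch:

    from collections import Counter
    import re
    from urllib.request import urlopen

    # Naive "most common word on a page"; each line below embeds a
    # human judgment call about what a "word" even is.
    html = urlopen("http://example.com").read().decode("utf-8", "replace")
    text = re.sub(r"<[^>]+>", " ", html)          # crude tag stripping
    words = re.findall(r"[a-z']+", text.lower())
    print(Counter(words).most_common(1))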

~~~
brudgers
Pointing out the problems with the proposition just illustrates the author's
point about the semantic gulf. It's relatively easy for a human to not only
see the problem but to point it out snarkily. But short of the singularity, we
may not see a computer doing so in the near future.

Abandoning meta-snark despite its pleasures: what is interesting is the way in
which the semi-absurd example implied a relevant one, which I now realize is
why I wasn't bothered by the semi-absurdity. I didn't really take the words
literally.

What was evoked seems to have been the idea of how unlikely it would be that a
computer could read the pseudocode and determine that it finds the most
frequent _words_ on a page, without being given the answer ahead of time.

~~~
danso
I wasn't (only) meaning to be snarky, but the proposition he makes is itself a
bit of snark and, more to the point, begs the question. If you're assuming
that, given a non-trivial webpage, all educated adult humans will agree as to
what it's about... well, how can the computational scientist argue against
that? If anything, the author himself should be keenly aware of how humans
fail at general interpretation, even after 12 years of education
(K-12)... unless he's never had a comments section for his articles.

~~~
girvo
Meta: your comment finally solidified for me the proper meaning and usage of
"begs the question". I'd sort of understood it, but hadn't seen it used in
modern text like that.

------
bad_alloc
I always wonder why computers should be intelligent. In essence they can add
numbers pretty quickly. If you interpret the result of many such additions in
a certain way you get a text, a video or an operating system. Wouldn't it make
more sense to build a machine for thinking instead of reusing one we built for
calculating?

~~~
mjn
That's one of the core high-level AI debates going back decades: whether a
fundamentally different hardware substrate is needed for intelligent machines,
or whether it's instead more of a 'software' problem. Biologically-inspired AI
tends to want to build things that look more like networks of neurons as the
hardware substrate (among other things), while some other branches of AI find
existing computing hardware to be generally fine as a substrate.

Imo it comes down to more of a pragmatic question than a philosophical one,
though. Unless we actually discover a super-Turing computational system, any
machine will be doing some kind of computation that in principle could be
performed by any other Turing-equivalent machine. Of course not all Turing-
equivalent machines are equally easy to build everything on top of, which is
where the "equivalent in principle" part falls short. But alternate hardware
models (assuming not super-Turing ones) don't in themselves get you anything
that in some inherent or philosophical sense _can't_ be done by a pile of
NAND gates.
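
To make that last point concrete: NAND is functionally complete, so any
Boolean function can be composed from it alone. A quick illustration:

    # Standard gates, and then XOR, built from nothing but NAND.
    def nand(a, b): return not (a and b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor_(a, b): return and_(or_(a, b), nand(a, b))

    # Sanity check: the XOR truth table falls out correctly.
    for a in (False, True):
        for b in (False, True):
            assert xor_(a, b) == (a != b)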

------
sp332
ELIZA was actually an AI platform which made it possible to write all kinds
of AI bots. The "doctor" program was a tiny example, like a hello-world for a
platform that was capable of much more. Here's a description of how ELIZA
worked:
[https://csee.umbc.edu/courses/331/papers/eliza.html](https://csee.umbc.edu/courses/331/papers/eliza.html)
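
The core trick of the doctor script was keyword matching plus pronoun
reflection. A toy sketch of the idea (illustrative only; the real ELIZA had a
whole script language for defining these rules):

    import re

    # Match a keyword pattern, reflect the pronouns, echo it back.
    REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(fragment):
        return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

    def respond(line):
        m = re.search(r"\bi am (.*)", line, re.IGNORECASE)
        if m:
            return "Why do you say you are %s?" % reflect(m.group(1))
        m = re.search(r"\bmy (.*)", line, re.IGNORECASE)
        if m:
            return "Tell me more about your %s." % reflect(m.group(1))
        return "Please go on."

    print(respond("I am worried about my exams"))
    # -> Why do you say you are worried about your exams?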

------
shangxiao
If you're like me and did not previously know about SHRDLU, then I highly
recommend watching this demonstration:

[http://www.youtube.com/watch?v=bo4RvYJYOzI](http://www.youtube.com/watch?v=bo4RvYJYOzI)

~~~
wazoox
I learned about it from a 1977 magazine about robots when I was 6 (I probably
still have it lying around somewhere), and I've been fascinated by computers
ever since, and a great admirer of Terry Winograd.

------
skywhopper
Humans have enough trouble understanding human language, especially in textual
form. Context, body language, facial expression, shared experiences, cultural
memes, power relationships, and internal motivations all play a role in how
language must be interpreted. Ambiguity, emotion, connotation, and double-
meanings all play critical roles. It's highly unlikely (I would say
impossible) that computers can ever be made to fully understand human
language.

To the extent that humans and computers are able to communicate with each
other, it will be via a cooperatively developed human-computer language that
will be influenced as much by the computers as by the humans.

That shared compromise is true today via programming languages and the stilted
way we must interact with Google Voice Search and Siri. And while those
contours will change over the forthcoming decades, it will continue to be true
for all time.

As humans augment themselves with digital computing power and as computer
technology itself evolves, things will become very different, but the
fundamental disconnect will always be there.

~~~
Houshalter
A human-computer language is an interesting idea, but it defeats much of the
purpose of natural language: the convenience of being able to talk to a
computer normally, or to process large amounts of text data that wasn't
intended to be read by a computer. And there is so much data to learn from
written in natural language, whereas there would be very little in the new
language.

------
dsego
What about this?

Building Brains to Understand the World's Data (google tech talk):

[https://www.youtube.com/watch?v=4y43qwS8fl4](https://www.youtube.com/watch?v=4y43qwS8fl4)

The result is [http://www.groksolutions.com/](http://www.groksolutions.com/)

------
arocks
Surprising that the article doesn't talk about the modern-day equivalent of
Ask Jeeves:

[http://www.wolframalpha.com/input/?i=How+old+is+President+Cl...](http://www.wolframalpha.com/input/?i=How+old+is+President+Clinton%3F)

------
utopkara
Some coconut farmers use trained monkeys to harvest crops. I am sure a monkey
would make a weak opponent in a fencing match, but they are pretty good at
planning and motor skills, in addition to being great climbers. In this sense,
the computers (and AI) of our time are very useful. Furthermore, computers and
programs are super flexible, and they do not have the hard-coded limits to
their capabilities that humans and monkeys have. As humans, we might be the
all-powerful masters for now, but compared to the improving AI we'll soon be
what SHRDLU is to Watson.

------
yetanotherphd
Is there anything that grasshoppers eat that isn't kosher? Maybe Watson is
much smarter than we think.

~~~
michaelwww
My first experience with AI was playing against a chess computer in the 1980s
and losing. I was young, didn't understand computers, and wasn't a very good
chess player, but after I kept losing I had this eerie feeling about the
machine, like it was an actual sentient being. I've been aware of the power of
anthropomorphism ever since, and have noticed the readiness of humans to
ascribe feelings and thoughts to things that don't actually have them. The
computer is the medium and tool for capturing the intelligence and data of the
programmers behind it, so I think the right question is to ask whether Watson
is self-aware. There is no evidence to support that, and it is doubtful it
will happen any time soon, but that's almost irrelevant. The combination of
the user, the computer, the programmers behind them, and the world's data
working together is a new kind of intelligence, and it is almost frightening
in its power.

------
agentultra
The stupidity of computers is starting to get frustrating. As a software
developer working on a distributed storage system, I've profiled my placement
algorithm and, after some thought, decided there is a better strategy based on
a common use-case scenario. I create a branch in my repository and try out my
changes. In the meantime, my colleagues have made changes to the development
branch I diverged from. My computer can't run the differential of the
profiling function against my diverged branches and tell me, from a simple
query, whether my changes validate my assumptions (nor whether the combined
changes would continue to validate them before I merge... it couldn't even
construct the profiling function for me).

Ontologies seem like just the tip of the iceberg.

It seems more likely to me that, rather than building a general AI, we will be
able to map and simulate a human brain within a computer.

------
jplur
Well-written article. I wrote a long comment about my views, sat for a bit,
re-read the article, then deleted my banter because it wasn't adding anything
interesting. But I do want to say these themes have been on my mind lately,
and I appreciate seeing them discussed so well.

------
evunveot
_A reductive ontology of the world emerges, containing aspects both obvious
and dubious._

Now I'm contemplating what an opposite-of-reductive ontology would look like
(if that even makes semantic sense). An ontology that enriches rather than
simplifies. It's hard to think about.

------
TrainedMonkey
What I got out of the article is that computers are good at doing what they
are programmed to do. The problem is not bad computers, but rather the fact
that we do not know how to program a computer for general intelligence.

~~~
skywhopper
There are real, absolute limits on what you can program a set of transistors
and logic gates to do. Changing those limits will require changing what a
"computer" is. At which point, we'll probably call it something else.

~~~
Falling3
What are these limits and how do you know they exist?

~~~
gtirloni
Can we create digital logic circuitry that embodies ALL the functionality of a
neuron, including its interconnections (and here you have to compensate for
circuitry limitations that neurons don't have), in the same size? I think you
can start from there to understand current technology's limitations.

I agree we will likely not be calling these things "computers", if we ever
invent them.

~~~
skywhopper
Even if you remove the size restriction and the physical limitations of
manufactured circuitry, we lack a perfectly defined model of how a neuron
actually works. We have plenty of knowledge about neurons, but nowhere near
enough to simulate them realistically. You'd need to be able to account for
every possible state a neuron could ever be in.

But a neuron is more than just an input-output state machine; it's affected by
levels of oxygen, glucose, and any number of hormones, proteins, and other
chemicals in the bloodstream. An adult human's neurons are each individually
shaped by their entire existence up to that point. Alcohol consumption, sun
exposure, antidepressant medication, hydration levels, exercise levels. It all
affects how they work.

And that's just one neuron. Simulating the brain as a solution to this problem
is, I think, out of the question.

~~~
Houshalter
> An adult human's neurons are each individually shaped by their entire
> existence up to that point. Alcohol consumption, sun exposure, antidepressant
> medication, hydration levels, exercise levels. It all affects how they work.

Possibly, but why on Earth would you want to simulate that? Just simulate what
the neuron is supposed to be doing or would be doing under ideal
circumstances.
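
Something like a textbook leaky integrate-and-fire model, say: a deliberately
idealized neuron that ignores all the chemistry above.

    # Minimal leaky integrate-and-fire neuron (a standard idealization,
    # nothing like a real neuron's biochemistry).
    def simulate(inputs, dt=1.0, tau=10.0, threshold=1.0):
        v, spikes = 0.0, []
        for t, current in enumerate(inputs):
            v += dt * (-v / tau + current)  # leak toward rest, add input
            if v >= threshold:              # fire and reset
                spikes.append(t)
                v = 0.0
        return spikes

    print(simulate([0.15] * 100))  # regular spiking under constant drive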

~~~
gnaritas
> Just simulate what the neuron is supposed to be doing or would be doing
> under ideal circumstances.

We still don't know that either.

------
dchichkov
I'd like to place a bet, but unfortunately there's no point in placing a bet
stating that by a certain date AIs will beat median human intelligence by huge
margins. It would be like placing a bet on a prediction of any other
extinction event: impossible to collect on, even if the prediction is right.

On the other hand, just for the fun of it, a couple of other predictions.
Here: "true, human-level AIs will not be developed until 8TB RAM sticks become
a commodity". Another: "a true, high-fidelity, multi-sensory brain-computer
interface will never be built".

~~~
Houshalter
It is possible to bet on the apocalypse, though it's not very practical:
[http://lesswrong.com/lw/ie/the_apocalypse_bet/](http://lesswrong.com/lw/ie/the_apocalypse_bet/)

------
jotm
Computers aren't stupid; they're just only really good at math. And trying to
understand human language using math is hard.

~~~
enupten
Techically they are terrible at Math too. All they are good at is doing as
they're told.

------
colund
My smart, faraway, and very old relative once said during the dot-com boom:
"computers are just tools, nothing else; people seem to see something magical
in computers that is just not there". She was a very clever old lady. Still, I
love their logical magic and hope computers will never cease to amaze me.

------
tmikaeld
Hmm...

Wall of text.. ok, challenge accepted!

 _read_

 _scroll_

 _read_

 _scroll_

 _read_

 _scroll_

 _read_

MORE!?...

 _scroll_

NO MORE, I GET IT!

...computers are stupid but at least they don't give up!

~~~
shawabawa3
Yep... really interesting article but just too long

~~~
timje1
After all, who has five minutes to spare these days to enrich their knowledge
of the world around us?

Everyone knows that if the information is important enough, the author will
find a way to fit it into 140 characters. Well, 110, with 30 characters' worth
of tags.

