
Computer Pseudoscience - jxramos
https://www.city-journal.org/danger-of-artificial-intelligence
======
richk449
> a computer can be programmed to detect instances of the word “betrayal” in
> scanned texts, but it lacks the concept of betrayal. Therefore, if a
> computer scans a story about betrayal that happens not to use the actual
> word “betrayal,” it will fail to detect the story’s theme.

Seems sorta odd to write a book today dismissing 1960s AI technology.

~~~
AstralStorm
It still has the same problem: the lack of symbolic, functional, or operational
AI is still biting hard.

We need inferences and at least sets of labels as output to make sense of what
the ML is doing, not simple matches or probabilities.

------
bostonpete
The author seems to use the fact that a computer's intelligence can be
deconstructed to 0's and 1's as evidence that the computer can't understand
what it's processing. Couldn't the processing in the human brain be similarly
deconstructed (albeit not to 1's and 0's)?

~~~
hackinthebochs
Absolutely. That brains can be deconstructed into the presence and absence of
ion gradients over space and time, and these ion gradients do not understand,
does not demonstrate that brains cannot achieve understanding. Using an
analogous argument against AI ever being more than dumb pattern matching is
fallacious and a serious failure of imagination.

~~~
anon1m0us
Aren't brains also "dumb pattern match"ers?

I see people talking all the time about how machines can't be conscious, but
yet, animals are just machines themselves, so what gives?

It's religious I think. People just want to be special. Not only as
individuals, but also as a species.

~~~
AstralStorm
They're apparently not. We can hypothesize without having seen a thing, using
previous information.

This is different from outputting a single random value. People typically fall
back on generic or vague terms rather than producing a mismatch, which is very
different from what ML does.

We can also output that we're not certain. And we can use surrounding context
in a way current ML cannot.

~~~
anon1m0us
Those all still boil down to yes or no. Have you seen it? Yes or no? More than
50% sure? Yes or no? More than 90% sure? Yes or no?

The neuron either reaches a threshold to fire, or it does not. On or off.

~~~
AstralStorm
That's not how biology works: the spiking intervals matter, as do when they
occur, the chemical state of the neuron at each moment, its shape and
composition, and its myelination.

In comparison, DNNs have abacus-level simplicity.

~~~
anon1m0us
Just because something is more complex than we currently can mathematically
model, does not mean we cannot mathematically model it.

If we can mathematically model it, then we can use 1's and 0's to do so. What
you are suggesting is that how we think and observe the world with our brains
is not mathematically possible. If that's the case, then Math is wrong, or at
best: an approximation to the universe.

Implication: Consciousness exists in the gap between the universe and math.

------
russdill
I love how nearly every pitfall listed for AI applies equally to humans, who
cheerfully see the face of a God in a slice of toast and confuse correlation
and causation as easily as they breathe air.

------
ratsbane
This is exactly one of the things that makes ML, AI, or whatever so
interesting. Sometimes it _does_ find correlations which a human might miss
because the human thinks there is no way those two things are linked - but in
fact they are.

"For example, a correlation may exist between changes in temperature in an
obscure Australian town and price changes in the U.S. stock market. A person
would know that the two events have no connection."

~~~
tlb
If you're willing to entertain that many hypotheses, you need a huge amount of
data to not be fooled by noise. For instance, there might be 1B hypotheses of
the form "temperature in some town is correlated with the price of some
stock". You'd need p<0.000000001 to not have false positives. Even the stock
market doesn't have enough data for that.
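To make the multiple-comparisons point concrete, here is a small simulation (the series count, sample size, and threshold are illustrative, not from the comment): test many pure-noise "temperature" series against one pure-noise "stock return" series, and a sizable number still clear the usual single-test significance bar by chance.

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n_days = 250      # roughly one trading year of daily data
n_series = 2000   # hypothetical "town temperature" series, all pure noise

# Stand-in for daily stock returns: also pure noise.
target = [random.gauss(0, 1) for _ in range(n_days)]

# Count noise series whose |r| clears a threshold that looks "significant"
# for a single test (|r| > 0.124 corresponds to about p < 0.05 at n = 250).
hits = sum(
    1
    for _ in range(n_series)
    if abs(corr([random.gauss(0, 1) for _ in range(n_days)], target)) > 0.124
)
print(hits)  # roughly 5% of 2000, i.e. on the order of 100 spurious "correlations"
```

With a billion hypotheses instead of two thousand, the same ~5% false-positive rate yields tens of millions of spurious matches, which is why the per-test bar has to drop to something like p < 1e-9.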

~~~
emmelaich
My objection is to the absoluteness of the phrase

> A person would know that the two events have no connection.

When a negative statement like this is presented as absolute and categorical
it's very easy to disagree with it.

------
OrderlyTiamat
If the book is as "technical" as this article, then I don't think I'll be
reading it. The sole argument, it seems, is that computers lack the nebulously
defined "concept".

If you don't define it, then you might as well be saying it lacks a soul and
can therefore never be truly intelligent. What does understanding a concept
mean?

Because AI researchers have been working to get closer to this understanding
for years, and to simply say "it isn't there yet" falls a bit flat. It's like
saying climbing a tree isn't flying, while ignoring the attempts to fly with
kites, wingsuits, or parachutes: we might not have a plane yet, but we are
making progress.

