
Artificial Intelligence: What’s to Fear? - the-enemy
https://www.the-american-interest.com/2019/10/08/artificial-intelligence-whats-to-fear/
======
jacknews
"The sea makes people feel as if they are in a whirl of action ... But this is
wrong thinking. We say waves “cause” a ship to break up ... but the wave
itself is passive and without the power to act."

Nope, the wave is energy transmitted, from the sun, to the wind, to the sea
... to the ship.

In the same way a mind is a wave, constrained and channeled by its neuronal
substrate and by external stimuli.

The article is nonsense to me; there is no such thing as passive. The
universe is all active, seething.

~~~
ozim
I think the main point is that a "wave" does not have any agenda or any
purpose. By themselves, atoms cannot "decide" to cause a chain reaction. All
of this is just "programmed".

This kind of topic is taken on by Stanislaw Lem in
[https://en.wikipedia.org/wiki/The_Invincible](https://en.wikipedia.org/wiki/The_Invincible)

Really liked this book.

~~~
ozim
To add to my previous comment on "What's to Fear": if the author read that
book, he would notice there is still a lot to fear. A synthetic life form,
without even mimicking humans, could be dangerous. The same goes for waves in
the ocean: even if those waves have no agenda to kill us all, at force 10 on
the Beaufort scale you had better stay away from them.

------
MarkMMullin
Grrrr. This could have been interesting if Umberto Eco were arguing semiosis,
instead of this poor exercise in semantics. As a concrete example, consider
"The AI therapy app is not a mind. It cannot emote. It cannot feel the warmth
behind its praise or the sense of urgency behind its criticism. It is
passive." Yah, no. I can also do sentiment analysis on the user's comments,
and the behavior of the system can be driven by a goal to change that
sentiment analysis, so I've got me an active participant in a feedback loop.
Yes, there is no there there, it's just a numbers game, but this article is
still complete twaddle.
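That feedback loop fits in a few lines. This is a made-up toy (the keyword lists, response strings, and function names are all invented for illustration, not any real therapy app), but it shows how a goal of moving a sentiment score can drive behavior with no feelings involved:

```python
# Toy sketch: keyword "sentiment analysis" whose score drives the
# system's next reply -- an active participant in a feedback loop,
# even though it's just counting words.

POSITIVE = {"great", "thanks", "better", "good"}
NEGATIVE = {"bad", "worse", "useless", "angry"}

def sentiment(text):
    """Crude score: positive-word count minus negative-word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def respond(user_text):
    # Goal: nudge the measured sentiment upward on the next turn.
    if sentiment(user_text) < 0:
        return "I hear you. Let's try a different approach."
    return "Glad that's working. Keep going."

print(respond("this app is useless and I'm angry"))
```

Run the user's next message back through `sentiment` and you have the closed loop: score, respond, re-score.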

------
bko
> Even lovemaking is at risk, as artificially intelligent robots stand poised
> to enter the market and provide sexual services and romantic intimacy.

This perfectly encapsulates why I'm so skeptical of AI alarmists. They tend to
boil the human experience down to such coarse terms that it undermines their
legitimate concerns.

> AI is a collection of passive silicon and metal. It does not speak. It has
> no mind. Like an ocean wave, it forms part of an ordered sequence of passive
> realities moved by an agency not its own

No, AI is a series of numbers. I just wrote about this misconception the other
day [0]. It would be great if people stopped anthropomorphizing AI just
because it has the word "intelligence" in it and humans are said to be
intelligent.

[https://medium.com/ml-everything/artificial-general-intellig...](https://medium.com/ml-everything/artificial-general-intelligence-as-a-modern-day-spiritual-conjuring-963dd80377ce)

~~~
mellosouls
It's not clear to me from your post or essay whether you are saying the
current state of AI is absurdly overhyped and is not even remotely close to
producing actual intelligence (I agree strongly); or if you are claiming it's
not possible with computers _even in principle_ because "numbers", in which
case I don't understand the justification at all, as it implies a mystical
quality to human intelligence you rightly dismiss in the current hype.

~~~
bko
Definitely your first point.

As for the second point, I don't think there is a mystical quality to human
intelligence. I guess it's mystical in the sense that no one really
understands consciousness just yet. But I think something else is going on
other than the brain being a rote, inefficient chemical cruncher. I find the
whole AGI, singularity, and AI hysteria reductive.

~~~
mellosouls
Yeah, I'd agree with most of that; consciousness (as opposed to cognition) is
definitely fascinatingly different to ponder.

------
windsurfer
This article completely misses what I see as the biggest problem with
intelligence in machines: Corporations can use them to be unaccountable.
Insider trading? No, that was our "algorithm". Racist policies? Oops, sorry
that was just our bot. Inhumane practices? Sorry, we are still "working" on
our artificial intelligence since you know it's very difficult.

Where most people see mysterious advanced help in day-to-day tasks, some
corporations see obedient, unaccountable, and un-auditable workers. They are
already starting to use them to perform their dirty deeds, hiding behind the
black-box nature of these complex algorithms.

~~~
bko
> Corporations can use them to be unaccountable. Insider trading? No, that was
> our "algorithm". Racist policies? Oops, sorry that was just our bot.

I would say the opposite. Insider trading? No, here are the inputs to our
algorithm. Racist policies? No, here are the inputs to our algorithm.

~~~
johnchristopher
"Our" algorithm? I was under the impression that AIs were generating their own
algorithms.

~~~
azdacha
In a sense you are building an algorithm, but by different means from the
usual ones. It is built through compiling a large amount of data using
well-known machine learning algorithms. To be honest, especially in this
case, these are terminological distinctions we should not get stuck on.
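A minimal sketch of that point: "training" compiles data into parameters, and those parameters *are* the resulting algorithm. The least-squares line fit below is a deliberately tiny stand-in for a real learning method; the function names and data are invented for illustration:

```python
# "Training" = compiling data into parameters; the parameters then act
# as the decision procedure ("our" algorithm).

def train(points):
    """Least-squares fit of y = a*x + b from (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b  # the learned "algorithm" is just these two numbers

def predict(model, x):
    a, b = model
    return a * x + b

model = train([(0, 1), (1, 3), (2, 5)])  # data in...
print(predict(model, 10))                # ...a decision procedure out
```

Whether you call the learned `model` an "algorithm" or not is exactly the vocabulary question above: nobody hand-wrote the rule, but the training code that produced it is entirely conventional.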

------
bjornsing
This was an absolutely fascinating read...

> An atomic bomb is a frantic explosion. Yet none of these phenomena is
> active; they are all passive. They lack the power of cause.

To me it’s completely incomprehensible that an obviously intelligent and
well-educated (MD, PhD) person can think like this... How is it possible to
deny that a thermonuclear explosion can cause... a lot of things?

I’ve lately come to realize that there is much more variation in how people
see the world and reason than I thought possible. I would genuinely like to
understand those “modes of thinking” that are most far removed from my own.
Can anybody explain how this particular worldview works or point me to
something I can read?

~~~
mellosouls
I think the author is distinguishing between intent/agency/free will in the
case of mental action and the empty, meaningless quality of material "causes".

For causality and the slipperiness of it, read Hume.

~~~
bjornsing
Hume the empiricist... yeah, I find this philosophical view incredibly
difficult to understand. I naturally gravitate to the hypothetico-deductive
way of thinking and find it very hard to break out of it.

I don’t think going straight to Hume will be very meaningful for me. Has
anybody explained empiricism from a more hypothetico-deductive perspective?

------
hprotagonist
_Yet not only does AI win at cards now, it also creates art, writes poetry,
and performs psychotherapy._

Those are shockingly broad claims that are not even close to being accepted by
even a minority of people.

~~~
dagw
AIs can absolutely do all those things. The question is if they can do it well
enough to be of any value.

~~~
hprotagonist
I believe this is a category error. AI is not sentient or aware; it
definitionally cannot “do” anything.

~~~
naasking
I see no reason why "doing something" requires sentience.

~~~
hprotagonist
[https://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/E...](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/EWD936.html)

Here's the relevant Dijkstra. To be the agent of an event requires agency. We
have it; principal component analysis doesn't.

I'm firmly of the opinion that anthropomorphization of code is a category
error and a bad idea.

~~~
naasking
> To be the agent of an event requires agency. We have it, principal component
> analysis doesn't.

Agency remains undefined, just like "sentience" from your other post. Unless
you actually define it, how can you possibly know that we have it and
principal component analysis doesn't?

Even if we take a connotative definition of agency, i.e. that it is some
quality or behaviour that humans exhibit, it remains to be shown that code
_cannot possibly have agency_ in this exact sense.

Certainly not all code will have agency, just as not all code has the
"sorter" property exhibited by sorting algorithms. I also agree with you that
current forms of AI don't have what we recognize as agency, but I don't think
we're as far off as some think, and I definitely don't see how it's a
category error to talk about AI in this fashion. Whatever qualities you think
humans have that grant agency, they might not be nearly as complicated as you
assume.

------
rpmisms
My problem with these articles isn't that they overestimate AI, but that they
underestimate humanity. Yes, many millennials have a serious electronics
addiction, but many Gen Z kids are really good about managing their screen
time and try to make face-to-face communication happen more and more.

tl;dr: Humanity is really good at adapting to coexist with technology. Stop
Panicking.

~~~
azdacha
What about all the other technological controversies? Not everything is about
screen time.

~~~
rpmisms
I'm using a specific example of a panic to point out that the panic is often
unfounded.

------
zwieback
I waited for something insightful at the end but didn't find anything there.
Just a bunch of "metal and silicon isn't intelligence." Ok, fine, but a whole
article without any reference to the Turing test? That should have been the
starting point.

------
laughingbovine
Sure, let's dive into the philosophy of causation and completely butcher it.
Very clear article /s

------
szczepano
steam engine - what to fear?

combustion engine - what to fear?

computer - what to fear?

internet - what to fear?

robots - what to fear?

artificial intelligence - what to fear?

?????? - what to fear?

Maybe there is no point in fearing?

------
ryandvm
I have a few thoughts about doomsday level AIs:

1. We _will_ eventually crack "general intelligence". If nature can do it
with 9 months of build time and a few pounds of meat, there's no physical
reason we can't do it on silicon (or whatever substrate is next)...

2. When we finally develop a half-way decent general intelligence, much like
human intelligence, we're not going to have any idea how it works.

3. It will probably end up employing a lot of the same cognitive shortcuts
that humans use (see
[https://en.wikipedia.org/wiki/List_of_cognitive_biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases)).
And in doing so, we're likely to end up with a general intelligence that, on
the whole, is capable of much greater calculation but will ultimately be a
disaster, as it will suffer from the same sort of psychological flaws.

Human survival is probably going to come down to humanity trying to talk a
super-intelligent AI out of some crazy ass conspiracy theories. It will be
like the ultimate Thanksgiving with your crazy uncle.

~~~
xamuel
Our Tower of Babel _will_ eventually reach heaven.

~~~
ryandvm
And mankind will never be able to soar like the birds.

 _shrug_

We've seen this play out many, many times before...

~~~
xamuel
To "soar like the birds" would mean to be able to fly at will and land at
will, wherever and whenever you desire, as effortlessly as walking. You're
absolutely right, mankind will never be able to soar like the birds. Even if
we could figure out the technology, it would be a privacy/safety/liability
nightmare and they'd make laws against doing it.

