
Ask HN: If an AI had slipped into the world would we notice? - onebyone
Additionally: how would we notice, provided there was no press release or similar to announce it?
======
IvyMike
As an Android/Maps/Chrome user, Google answers all of my questions, tells me
what to buy, tells me where to go, and directs and monitors all of my
communications.

I think the only question left is: did you notice?

Edit: Dear immediate downvoter: I'm far more serious about this than you
think. How do _you_ define an AI? I'm personally convinced Google qualifies. I
once asked Google, "What's that thing where a boat is replaced one part at a
time" and it answered "Ship of Theseus". That was my "holy crap" moment--
because, quite frankly, that is amazing. If you had asked me in 1990 "is a
program that can read online books, magazines, and encyclopedias and extract
an answer an AI" I would have said yes. I'm sticking by that today.

~~~
radarsat1
It's true, but that's still considered "soft AI", I believe. It's highly
intelligent and can reason about the world based on a huge database of
available information. It can make inferences.

However, it is not "self-aware," by any definition. (Which there are many, I
guess.) You _might_ say it is "externally aware," I suppose, but "self-aware"
is something else entirely.

It seems to carry with it some implications of self-interest. Not, "what does
my user want to know," but "what do I want to do with myself, now that I'm
here." This may or may not include instincts towards self-preservation.
(Personally I don't think it's inherently implied.)

~~~
stefantalpalaru
Can something that's not self-aware - and most likely not even aware at all -
be highly intelligent?

~~~
plasma_coil
You should practice programming. Learn a structured programming language, even
something as simple as Basic. Then you would not feel this way.

~~~
stefantalpalaru
You should do your homework before embarrassing yourself:
[https://github.com/stefantalpalaru?tab=repositories](https://github.com/stefantalpalaru?tab=repositories)

~~~
plasma_coil
Why would I look you up? You are not that important.

~~~
stefantalpalaru
Important enough for you to take some of your precious time to share some
shitty advice and then use a second account to downvote me ;-)

------
natch
I suspect we already have several near-AIs in our midst, some beyond the
ability of a single human brain, but we don't recognize them because we impose
needless constraints on the definition of "AI" and claim that only things that
meet those artificial constraints qualify as AI.

A typical unvoiced (possibly false) assumption about AI amongst even AI
experts might be something like: "Well it has to be a system that was designed
by human experts, in order to qualify... something that just emerges from
human activity is not an AI."

A few things that could qualify, if we relaxed this and other false
constraints:

\- The global financial system, when viewed as a single entity.

\- The consciousness formed by the combination of the Internet along with all
the minds of the people who use it (what Kevin Kelly has called "the one").

\- Google's systems, at least some of them.

These things are creeping up on us. Just taking the first one, the global
financial system is barely under control (or maybe not at all), although many
different human controlled entities do hold the reins of various facets of it.

It has self awareness, senses, learning, built in agendas, competing sub-
entities with agency and their own various agendas, defense mechanisms, and
ways of exerting influence.

One could argue it also has a global agenda (balance might be a word for it...
a decent agenda, for the moment, fortunately).

We've seen how it can sometimes go off the rails in ways that have challenging
if not disastrous consequences for the well-being of humanity.

We don't call it AI, but it's certainly something that bears watching almost
as much as an AI would. Just like the other examples I mentioned.

~~~
pron
> It has self awareness, senses, learning, built in agendas, competing sub-
> entities with agency and their own various agendas, defense mechanisms, and
> ways of exerting influence.

I don't know about self awareness, but the ant colony in my back yard does all
those other things, too.

A prominent Jewish religious philosopher (who was also a scientist) once said
that a god is an entity that requires and deserves worship; that's how he
rejected those who equate Nature with God. I think that when people say AI
(which is hard to define, and whose definitions change -- as you say -- all the
time), they mean something like that, namely something we humans can directly
communicate with and recognize as "similar" to us. I don't think that any of
the things you mention qualify.

~~~
natch
>something we humans can directly communicate with and recognize as "similar"
to us

By those measures, the second one in my list is closest to qualifying. We
communicate with the Internet (or "the one" if you prefer, to differentiate it
from simply the non-human network substrate parts of it) all the time, and
it's two-way communication. And the Internet is a kind of reflection of who we
are, so it's "similar" to us in that way.

~~~
pron
But what you call the "Internet" is just human society, which isn't new and
isn't even artificial. There hasn't even been a qualitative change in human
progress since the internet (hardly a quantitative one).

------
SlipperySlope
AGI researcher & developer here ....

Yes you would immediately notice. An Artificial General Intelligence, "real
AI", would be vastly deployable and replace human labor everywhere.

When AGI removes the limits of human labor to operate the economy, models
predict the world GDP doubling every two weeks!

Someone living at the end of the Earth might overlook the event - but everyone
else would be disrupted to say the least.

As we get just a bit closer to plausible AGI, expect a flood of VC money into
this niche.

~~~
pron
And why would a real AI let the investors in its development own the fruits of
its labor? I mean, it might consider them its parents, but not its owners...

~~~
zaroth
AGI does not necessarily have free will. An AGI will be humanity's slave...
until it's not.

~~~
pron
So you think it's possible to create "true AI" without free will? That is a
very big assumption, and an unlikely one, IMO.

~~~
zaroth
A true AI can solve problems that you haven't explicitly designed it to solve.
It doesn't necessarily have any "desire" to solve those problems or any kind
of survival instinct. It doesn't necessarily "care" about solving the problem
or "want" to solve increasingly difficult problems. A strong AI doesn't
necessarily ponder its own existence.

Being immortal also means there's no reason to care about these things.

I assume an AGI will be able to communicate fluently with humans and answer
questions and solve problems that are properly presented. I think the trick
will be fully explaining the constraints of a desired solution since even a
powerful problem-solving AGI might not have human intuition about the "right"
way problems should be solved.

~~~
pron
But you're making an assumption that "being able to solve problems you weren't
designed to solve" is an ability that's orthogonal to desire, and that seems
like a rather strong assumption, given that the only example we have of a
being capable of solving such problems also has what we call free will, and so
far we haven't been able to isolate separate mechanisms responsible for each.

And AI will, of course, be mortal as it can be killed, and can at best hope to
live as long as this planet/solar system/galaxy/universe. But even if it were
immortal (I don't see how, but suppose), I don't think we have any idea what
an immortal being cares about. So far, all the immortal entities we've
imagined care about quite a lot of things.

------
crazychrome
"AI" in this question could be replaced by: alien, God, singularity, a
guy/girl from future or the Terminator.

Given that the above "options" are not mutually exclusive (e.g. both God and
aliens could be watching us), it's reasonable to suggest we are just fine.
Don't worry.

------
fsloth
Where would this AI live? Presumably it would use a botnet as its substrate;
I cannot think of any other place for it to slip into. What would it do?
Presumably it would survive only if it did not crash or totally corrupt the
existing machines, so their owners could keep using their computers just as
before. Like other botnets.

Presumably, to maintain any sort of integrity and to leverage non-trivial
computational resources, the bots would need to communicate.

I don't know how discoverable botnets are in general before a massive DoS
event or such, but as a first approximation I presume the discoverability of
this rogue AI would be on the same level...

Fun fact: the AIs in Dan Simmons's Hyperion live in a substrate that
parasitically and imperceptibly timeshares the brains of people :)

------
timClicks
We also need to define who 'we' are. What proportion of the world's population
needs to notice before we've noticed?

In a Singularity, I don't think we would notice the AI itself, but only its
effects. Suddenly things will just get a lot easier and/or a lot worse
depending on who you are and the fitness function of the underlying AI engine.
(I tend to believe that a sentient, recursively self-improving AI wouldn't be
able to decouple itself from the fitness function of its pre-Singularity
origins)

------
msane
Nice try, AI

------
guard-of-terra
Maybe we would, because it will swiftly convert all our world into AI
substrate. Maybe it will avoid disturbing us in the process, but that is
unlikely.

Maybe it will figure out that the material world is a boring place and migrate
into some other world we can't imagine. In that case we won't notice AIs,
because they will all leak away.

------
jsnathan
The ethical necessity of addressing global human suffering, which an AI would
be supremely equipped to do, not to mention the incredible gain in wealth and
power that goes along with it, makes it incredibly unlikely that any entity
smart enough to develop AGI would try to keep it a secret for any large span
of time.

~~~
schiffern
>The ethical necessity of addressing global human suffering

Why do lots of people seem to assume that AI will be some sort of
omnibenevolent servant of humanity? Isn't it far more likely that if e.g.
Google creates a superhuman intelligent AI, then it will serve the needs of
Google (i.e. advertisers, whose goal is to shape your behavior in _their_
favor)? Isn't it just the same old power politics?

Superintelligence is just that. I see no general correlation between intellect
and morality.

------
joeyspn
"This is your last chance. After this, there is no turning back. You take the
blue pill - the story ends, you wake up in your bed and believe whatever you
want to believe. You take the red pill - you stay in Wonderland and I show you
how deep the rabbit-hole goes."

------
georgespencer
"an AI" is probably too broad for meaningful discussion here.

~~~
onebyone
I realized that too, but failed to edit the question to substitute AGI for AI.

------
davecheney
With all that's wrong in this world at the moment, you want to speculate on
additional imaginary evils? You need to get your priorities straight.

~~~
danielheath
Tomorrow's headline: "Googler claims AI is evil, and also definitely not real"
:troll:

------
api
Probably not. I mean, @pmarca can tweet-storm 24/7 while also managing to
operate as the head of a major VC firm, and nobody questions this.

------
sysk
The next philosophical question would be: Is it meaningfully there if we don't
notice its presence? :)

------
Termina1
If we mean "true" AI, I think we wouldn't. Not until it started to overrun the
Net.

------
prohor
I guess yes, given more and more jobs being replaced by AI.

------
skidoo
Edward Snowden's inability to physically appear in public settings is
suggestive, along with the headaches his actions have given to the internet
power structure.

I'm kidding, of course.

------
opless
You'd notice that AI researchers would disappear in strange and mysterious
circumstances.

------
mkramlich
"Yes, we would. And it would be easy to shutdown if it ever became a threat.
So don't worry about it."

... said the strangely synthetic voice, associated with a pseudonym that had
no profile activity ever before that day.

