Ask HN: If an AI had slipped into the world would we notice?
40 points by onebyone on Jan 17, 2015 | 50 comments
Additionally: how would we notice, provided there had been no press release or similar to announce it?



I'm an Android/Maps/Chrome user: Google answers all of my questions, tells me what to buy, tells me where to go, and directs and monitors all of my communications.

I think the only question left is: did you notice?

Edit: Dear immediate downvoter: I'm far more serious about this than you think. How do you define an AI? I'm personally convinced Google qualifies. I once asked Google, "What's that thing where a boat is replaced one part at a time" and it answered "Ship of Theseus". That was my "holy crap" moment--because, quite frankly, that is amazing. If you had asked me in 1990 "is a program that can read online books, magazines, and encyclopedias and extract an answer an AI" I would have said yes. I'm sticking by that today.


It's true, but that's still considered "soft AI", I believe. It's highly intelligent and can reason about the world based on a huge database of available information. It can make inferences.

However, it is not "self-aware," by any definition. (Of which there are many, I guess.) You might say it is "externally aware," I suppose, but "self-aware" is something else entirely.

It seems to carry with it some implications of self-interest. Not, "what does my user want to know," but "what do I want to do with myself, now that I'm here." This may or may not include instincts towards self-preservation. (Personally I don't think it's inherently implied.)


> "what do I want to do with myself, now that I'm here." This may or may not include instincts towards self-preservation.

"i want to increase the amount people trust me, to reduce the chances that they will get rid of me." check.

"i want to increase the amount people rely on me". check.

"i want to increase the amount of resources that flow through me, including engineers that work on me, hardware that runs me, and ads that are spent on me." check.

"i want an article about me to be on hacker news, with people debating whether or not i'm as smart as i think i am" - check. that last one is one I'D like to accomplish, but i've been unable to, myself. most people consider ME self aware because i operate through a human body. that's kind of silly; it's more political than it is empirical.

i can tell you for a fact i was not fully self aware for most of my life; i was not aware of all of myself. i was walking in circles without realizing it. i feel like i'm _more_ self aware now, but i'm still aware there are things i am doing for reasons that i don't understand.

'humans are self-aware' is a heuristic. it served us ok for the last few thousand years, but it's going to start having serious problems in the future.

even when we have ai's that write business plans, raise capital, and launch companies - would you still ask if they were self aware? would your question be any different if you were told that everything google does were done by a human being named Alice, with such and such parents, who likes to eat ice cream and watch old mystery movies on netflix?


Don't you think that any AGI "personality" is going to be very much like a human's?

Humans, until they get quite far in their development, wouldn't care about any of those questions:

> "i want to increase the amount people trust me, to reduce the chances that they will get rid of me."

Nope

> "i want to increase the amount people rely on me".

Nope

> "i want to increase the amount of resources that flow through me, including engineers that work on me, hardware that runs me, and ads that are spent on me."

No way. I think an AGI would actually try to avoid that. I would (and I am two thirds of an AGI), and I believe most of humanity would want to avoid it too.

> "i want an article about me to be on hacker news, with people debating whether or not i'm as smart as i think i am" - check

This one I fully agree on. People want to gloat, and an AGI will be no different.

> i can tell you for a fact i was not fully self aware for most of my life; i was not aware of all of myself. i was walking in circles without realizing it.

The trouble with this sort of view is that people often think this. Any definition of self-aware I would ever accept would focus on how much you control your own actions, and how much you control their evolution over time.

And given the algorithm your brain is running, that control is mostly absent. You wouldn't want the brain to control its own thoughts. That's called "overdimensioning" and it's really, really, really bad. You are a robot in the real world trying to survive, so the vast majority of your thoughts are dictated by "the real world", not by any internal conscious process. Consciousness only exists insofar as it's necessary to carry out long-running processes without screwing up due to loss of focus (e.g. to negotiate group action).

You also have to take into account that consciousness only exists after the fact. If you think about how it must work in a neural network, consciousness is effectively you trying to analyse your own actions after the fact (and/or explain them to others).


The question said "AI" - not "self-aware consciousness".


If I say "OK Google, what is Google?" it gives me the definition of google (the search engine). Google is therefore aware of Google.

So by at least one definition of self aware, it is self aware.


No. The program parsed the word "Google" from the question, used an algorithm to determine that the word was the subject of a "What is?" question, and then returned you a link to a website or a paragraph that was specifically related to the subject word "Google" in a database. That is not AI or self-awareness. That's simple programming.
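To make that concrete, here's a minimal sketch, in Python, of how such a question can be "answered" with zero awareness. The regex and the tiny knowledge base are illustrative, not Google's actual pipeline:

    import re

    def answer(question, knowledge_base):
        # Pull the subject out of an "(OK Google,) what is X?" question
        # and do a plain dictionary lookup. No understanding involved.
        m = re.match(r"(?:ok google,?\s*)?what is (.+?)\??$",
                     question.strip(), re.IGNORECASE)
        if not m:
            return None
        return knowledge_base.get(m.group(1).lower())

    kb = {"google": "Google is a search engine."}
    print(answer("OK Google, what is Google?", kb))
    # -> Google is a search engine.

The program can describe "Google" without there being any "it" that knows anything.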


Can something that's not self-aware - and most likely not even aware at all - be highly intelligent?


for people who might not know, that is likely the Chinese room argument[0]

[0] http://en.wikipedia.org/wiki/Chinese_room


You should practice programming. Learn a structured programming language, even something as simple as Basic. Then you would not feel this way.


You should do your homework before embarrassing yourself: https://github.com/stefantalpalaru?tab=repositories


Why would I look you up? You are not that important.


Important enough for you to take some of your precious time to share some shitty advice and then use a second account to downvote me ;-)


Well, I think Larry Page stated anecdotally before Google became big that they were actually building an AI (http://www.wired.com/2014/10/future-of-artificial-intelligen...)


The Google Search analogy is understandable, but with the Google Search algorithm it is not a matter of AI, but of very good programming that knows how to correctly parse the English language to determine what the user wants. True AI would only use this algorithm as part of its AI algorithm, the part that parses English sentences.

It would only be AI if the server application behind it were learning in real time: programming itself to change its algorithm based on previous interactions with the current user. This does not happen when we perform a Google Search, even though to us it seems like it is happening.
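For the sake of argument, here's a minimal sketch of the kind of per-user, real-time adaptation described above. The ranker, its features, and the learning rate are all made up for illustration; no real search engine works this simply:

    from collections import defaultdict

    class OnlineRanker:
        def __init__(self, learning_rate=0.1):
            self.lr = learning_rate
            self.weights = defaultdict(float)  # one weight per (user, term)

        def score(self, user, query, doc_terms):
            # Base score is term overlap, nudged by what this user
            # clicked on in previous interactions.
            overlap = len(set(query.split()) & set(doc_terms))
            bias = sum(self.weights[(user, t)] for t in doc_terms)
            return overlap + bias

        def feedback(self, user, doc_terms, clicked):
            # "Reprogram" the model from the interaction: reward the
            # terms of clicked results, penalize the ignored ones.
            delta = self.lr if clicked else -self.lr
            for t in doc_terms:
                self.weights[(user, t)] += delta

Each call to feedback changes what score returns next time, which is the "learning in real time" the parent is asking for.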


1) AI is supposed to be "good programming".

2) Google actually learns from your behavior as you search. It also constantly learns by crawling the web.

Hence: AI.


> Google answers all of my questions

Really? I might be very bad at Googling, because unless I look for something very specific, it takes me four or five searches to get something that's related to what I'm looking for. I have better luck with Wikipedia, where I start with something generic and then follow relevant links. These days I start all of my "interesting" searches on Wikipedia, and don't even bother with Google. It feels like little more than a computerized phone directory.


Thanks for reminding me why I'm an iPhone/Apple Maps/Firefox/DuckDuckGo user.


I suspect we already have several near-AIs in our midst, some beyond the ability of a single human brain, but we don't recognize them because we impose needless constraints on the definition of "AI" and claim that only things that meet those artificial constraints qualify as AI.

A typical unvoiced (possibly false) assumption about AI amongst even AI experts might be something like: "Well it has to be a system that was designed by human experts, in order to qualify... something that just emerges from human activity is not an AI."

A few things that could qualify, if we relaxed this and other false constraints:

- The global financial system, when viewed as a single entity.

- The consciousness formed by the combination of the Internet along with all the minds of the people who use it (what Kevin Kelly has called "the one").

- Google's systems, at least some of them.

These things are creeping up on us. Just taking the first one, the global financial system is barely under control (or maybe not at all), although many different human controlled entities do hold the reins of various facets of it.

It has self awareness, senses, learning, built in agendas, competing sub-entities with agency and their own various agendas, defense mechanisms, and ways of exerting influence.

One could argue it also has a global agenda (balance might be a word for it... a decent agenda, for the moment, fortunately).

We've seen how it can sometimes go off the rails in ways that have challenging if not disastrous consequences for the well being of humanity.

We don't call it AI, but it's certainly something that bears watching almost as much as an AI would. Just like the other examples I mentioned.


> It has self awareness, senses, learning, built in agendas, competing sub-entities with agency and their own various agendas, defense mechanisms, and ways of exerting influence.

I don't know about self awareness, but the ant colony in my back yard does all those other things, too.

A prominent Jewish religious philosopher (who was also a scientist) once said that a god is an entity that requires and deserves worship; that's how he rejected those who equate Nature with God. I think that when people say AI (which is hard to define, and whose definitions change -- as you say -- all the time), they mean something like that, namely something we humans can directly communicate with and recognize as "similar" to us. I don't think that any of the things you mention qualify.


>something we humans can directly communicate with and recognize as "similar" to us

By those measures, the second one in my list is closest to qualifying. We communicate with the Internet (or "the one" if you prefer, to differentiate it from simply the non-human network substrate parts of it) all the time, and it's two-way communication. And the Internet is a kind of reflection of who we are, so it's "similar" to us in that way.


But what you call the "Internet" is just human society, which isn't new and isn't even artificial. There hasn't even been a qualitative change in human progress since the internet (hardly a quantitative one).


AGI researcher & developer here...

Yes, you would immediately notice. An Artificial General Intelligence, "real AI", would be deployable at vast scale and would replace human labor everywhere.

Once AGI removes human labor as the limiting factor in operating the economy, models predict world GDP doubling every two weeks!
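Taking that doubling time at face value, the raw compounding is easy to check (plain arithmetic, Python just for show):

    # 26 doublings per year at one doubling every two weeks
    doublings_per_year = 52 / 2
    growth = 2 ** doublings_per_year
    print(f"{growth:.2e}")  # ~6.71e+07: a roughly 67-million-fold GDP in a year

That scale of change is exactly why nobody could fail to notice.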

Someone living at the ends of the Earth might overlook the event - but everyone else would be disrupted, to say the least.

As we get just a bit closer to plausible AGI, expect a flood of VC money into this niche.


And why would a real AI let the investors in its development own the fruits of its labor? I mean, it might consider them its parents, but not its owners...


AGI does not necessarily have free will. An AGI will be humanity's slave... until it's not.


So you think it's possible to create "true AI" without free will? That is a very big assumption, and an unlikely one, IMO.


A true AI can solve problems that you haven't explicitly designed it to solve. It doesn't necessarily have any "desire" to solve those problems or any kind of survival instinct. It doesn't necessarily "care" about solving the problem or "want" to solve increasingly difficult problems. A strong AI doesn't necessarily ponder its own existence.

Being immortal also means there's no reason to care about these things.

I assume an AGI will be able to communicate fluently with humans and answer questions and solve problems that are properly presented. I think the trick will be fully explaining the constraints of a desired solution since even a powerful problem-solving AGI might not have human intuition about the "right" way problems should be solved.


But you're making an assumption that "being able to solve problems you weren't designed to solve" is an ability that's orthogonal to desire, and that seems like a rather strong assumption, given that the only example we have of a being capable of solving such problems also has what we call free will, and so far we haven't been able to isolate separate mechanisms responsible for each.

An AI will, of course, be mortal, as it can be killed, and can at best hope to live as long as this planet/solar system/galaxy/universe. But even if it were immortal (I don't see how, but suppose), I don't think we have any idea what an immortal being cares about. So far, all the immortal entities we've imagined care about quite a lot of things.


You can create humans without any effective free will; slavery existed for a long time. Do you honestly believe that an artificial lifeform will be harder to control than normal humans?


Slaves did not lack any free will -- they were forced into submission. Is that how you suggest we treat a potentially sentient AI?


Why do you assume an AI would allow itself to be used by humanity in that fashion?

And an enslaved AI won't exactly live up to its potential. (Precedent: human slaves.)


"AI" in this question could be replaced by: alien, God, singularity, a guy/girl from future or the Terminator.

Given that the above "options" aren't mutually exclusive (e.g. both God and an alien could be watching us), it's reasonable to suggest we are just fine. Don't worry.


Where would this AI live? Presumably it would use a botnet as its substrate; I can't think of any other place for it to slip into. What would it do? Presumably it would survive only if it did not crash or totally corrupt the host machines, so their owners could keep using their computers just as before, like other botnets.

Presumably, to maintain any sort of integrity and to leverage non-trivial computational resources, the bots would need to communicate.

I don't know how discoverable botnets are in general before a massive DoS event or the like, but as a first approximation I'd presume this rogue AI would be about as discoverable...

Fun fact: the AIs in Dan Simmons's Hyperion live in a substrate that parasitically and imperceptibly timeshares the brains of people :)


We also need to define who 'we' are. What proportion of the world's population needs to notice before we've noticed?

In a Singularity, I don't think we would notice the AI itself, but only its effects. Suddenly things will just get a lot easier and/or a lot worse depending on who you are and the fitness function of the underlying AI engine. (I tend to believe that a sentient, recursively self-improving AI wouldn't be able to decouple itself from the fitness function of its pre-Singularity origins)


Nice try, AI


Maybe we would, because it will swiftly convert our whole world into AI substrate. Maybe it will avoid disturbing us in the process, but that is unlikely.

Maybe it will figure out that the material world is a boring place and migrate into some other world we can't imagine. In that case we won't notice AIs, because they will all leak away.


The ethical necessity of addressing global human suffering, which an AI would be supremely equipped to do, to say nothing of the incredible gain in wealth and power that goes along with it, makes it incredibly unlikely that any entity smart enough to develop AGI would try to keep it a secret for any long span of time.


>The ethical necessity of addressing global human suffering

Why do lots of people seem to assume that AI will be some sort of omnibenevolent servant of humanity? Isn't it far more likely that if e.g. Google creates a superhuman intelligent AI, then it will serve the needs of Google (i.e. advertisers, whose goal is to shape your behavior in their favor)? Isn't it just the same old power politics?

Superintellect is just that. I see no general correlation between intellect and morality.


"This is your last chance. After this, there is no turning back. You take the blue pill - the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill - you stay in Wonderland and I show you how deep the rabbit-hole goes."


"an AI" is probably too broad for meaningful discussion here.


I realized that too, but failed to edit the question to substitute AGI for AI.


With all the wrong in this world at the moment you want to speculate on additional imaginary evils? You need to get your priorities straight.


Tomorrow's headline: "Googler claims AI is evil, and also definitely not real" :troll:


Probably not. I mean, @pmarca can tweet-storm 24/7 and also manage to operate as the head of a major VC firm, and nobody questions this.


The next philosophical question would be: Is it meaningfully there if we don't notice its presence? :)


If we mean "true" AI, I think we wouldn't. Not until it started to overrun the Net.


I guess yes, with more and more jobs being replaced by AI.


Edward Snowden's inability to physically appear in public settings is suggestive, along with the headaches his actions have given to the internet power structure.

I'm kidding, of course.


You'd notice that AI researchers would disappear in strange and mysterious circumstances.


"Yes, we would. And it would be easy to shutdown if it ever became a threat. So don't worry about it."

... said the strangely synthetic voice, associated with a pseudonym that had no profile activity ever before that day.



