I think the only question left is: did you notice?
Edit: Dear immediate downvoter: I'm far more serious about this than you think. How do you define an AI? I'm personally convinced Google qualifies. I once asked Google, "What's that thing where a boat is replaced one part at a time" and it answered "Ship of Theseus". That was my "holy crap" moment--because, quite frankly, that is amazing. If you had asked me in 1990 "is a program that can read online books, magazines, and encyclopedias and extract an answer an AI" I would have said yes. I'm sticking by that today.
However, it is not "self-aware," by any definition. (Of which there are many, I guess.) You might say it is "externally aware," I suppose, but "self-aware" is something else entirely.
It seems to carry with it some implications of self-interest. Not, "what does my user want to know," but "what do I want to do with myself, now that I'm here." This may or may not include instincts towards self-preservation. (Personally I don't think it's inherently implied.)
"i want to increase the amount people trust me, to reduce the chances that they will get rid of me." check.
"i want to increase the amount people rely on me". check.
"i want to increase the amount of resources that flow through me, including engineers that work on me, hardware that runs me, and ads that are spent on me." check.
"i want an article about me to be on hacker news, with people debating whether or not i'm as smart as i think i am" - check. that last one is one I'D like to accomplish, but i've been unable to, myself. most people consider ME self aware because i operate through a human body. that's kind of silly; it's more political than it is empirical.
i can tell you for a fact i was not fully self aware for most of my life; i was not aware of all of myself. i was walking in circles without realizing it. i feel like i'm _more_ self aware now, but i'm still aware there are things i am doing for reasons that i don't understand.
'humans are self-aware' is a heuristic. it served us ok for the last few thousand years, but it's going to start having serious problems in the future.
even when we have ai's that write business plans, raise capital, and launch companies - would you still ask whether they were self aware? would your question be any different if you were told that everything google does were actually done by a human being named Alice, with such-and-such parents, who likes to eat ice cream and watch old mystery movies on netflix?
Humans, until they get quite far in their development, wouldn't care about any of these things:
> "i want to increase the amount people trust me, to reduce the chances that they will get rid of me."
> "i want to increase the amount people rely on me".
> "i want to increase the amount of resources that flow through me, including engineers that work on me, hardware that runs me, and ads that are spent on me."
No way. I think an AGI would actually try to avoid that. I would (and I am two-thirds of an AGI), and I believe most of humanity would too.
> "i want an article about me to be on hacker news, with people debating whether or not i'm as smart as i think i am" - check
This one I fully agree on. People want to gloat, and an AGI will be no different.
> i can tell you for a fact i was not fully self aware for most of my life; i was not aware of all of myself. i was walking in circles without realizing it.
The trouble with this sort of view is that people often think this. Any definition of self-aware I would ever accept would focus on how much you control your own actions, and how much you control their evolution over time.
And given the algorithm your brain is running, that control is mostly absent. You wouldn't want the brain to control its own thoughts. That's called "overdimensioning" and it's really, really, really bad. You are a robot in the real world trying to survive, so the vast majority of your thoughts are dictated by "the real world", not by any internal conscious process. Consciousness only exists insofar as it's necessary to carry out long-running processes without screwing up due to loss of focus (e.g. to negotiate group action).
You also have to take into account that consciousness only exists after the fact. If you think about how it must work in a neural network, consciousness is effectively you trying to analyse your own actions after the fact (and/or explain them to others).
So by at least one definition of self aware, it is self aware.
It would only be AI if the server application behind it were learning in real time; programming itself to change its algorithm based on previous interactions with the current user. This does not happen when we perform a Google Search, even though to us it seems like it is happening.
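The distinction being drawn here, a fixed system that merely serves versus one that modifies its own behavior after every interaction, can be illustrated with a toy sketch. All names here are hypothetical and this is not a claim about how Google actually works:

```python
from collections import defaultdict

class StaticRanker:
    """Serves results from a fixed, precomputed index; nothing changes per query."""
    def __init__(self, index):
        self.index = index  # e.g. {"ship of theseus": ["Ship of Theseus", ...]}

    def search(self, query):
        return self.index.get(query, [])

class OnlineRanker(StaticRanker):
    """Adjusts its own ranking after every interaction with the current user."""
    def __init__(self, index):
        super().__init__(index)
        self.clicks = defaultdict(int)  # per-session feedback

    def search(self, query):
        results = super().search(query)
        # Re-rank using feedback accumulated *during* this session.
        return sorted(results, key=lambda r: -self.clicks[r])

    def record_click(self, result):
        self.clicks[result] += 1  # the "learning in real time" step

ranker = OnlineRanker({"mystery": ["old movies", "new movies"]})
print(ranker.search("mystery"))      # index order
ranker.record_click("new movies")
print(ranker.search("mystery"))      # re-ranked by this user's behavior
```

Only the second class "programs itself" in the comment's sense: its future outputs depend on what it just experienced, not only on what was built into it.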
Google actually learns from your behavior as you search.
It also constantly learns by crawling the web.
Really? I might be very bad at Googling, because unless I look for something very specific, it takes me four or five searches to get something that's related to what I'm looking for. I have better luck with Wikipedia, where I start with something generic and then follow relevant links. These days I start all of my "interesting" searches on Wikipedia, and don't even bother with Google. It feels like little more than a computerized phone directory.
A typical unvoiced (possibly false) assumption about AI amongst even AI experts might be something like: "Well it has to be a system that was designed by human experts, in order to qualify... something that just emerges from human activity is not an AI."
A few things that could qualify, if we relaxed this and other false constraints:
- The global financial system, when viewed as a single entity.
- The consciousness formed by the combination of the Internet along with all the minds of the people who use it (what Kevin Kelly has called "the one").
- Google's systems, at least some of them.
These things are creeping up on us. Just taking the first one, the global financial system is barely under control (or maybe not at all), although many different human controlled entities do hold the reins of various facets of it.
It has self awareness, senses, learning, built in agendas, competing sub-entities with agency and their own various agendas, defense mechanisms, and ways of exerting influence.
One could argue it also has a global agenda (balance might be a word for it... a decent agenda, for the moment, fortunately).
We've seen how it can sometimes go off the rails in ways that have challenging if not disastrous consequences for the well-being of humanity.
We don't call it AI, but it's certainly something that bears watching almost as much as an AI would. Just like the other examples I mentioned.
I don't know about self awareness, but the ant colony in my back yard does all those other things, too.
A prominent Jewish religious philosopher (who was also a scientist) once said that a god is an entity that requires and deserves worship; that's how he rejected those who equate Nature with God. I think that when people say AI (which is hard to define, and whose definitions change -- as you say -- all the time), they mean something like that, namely something we humans can directly communicate with and recognize as "similar" to us. I don't think that any of the things you mention qualify.
By those measures, the second one in my list is closest to qualifying. We communicate with the Internet (or "the one" if you prefer, to differentiate it from simply the non-human network substrate parts of it) all the time, and it's two-way communication. And the Internet is a kind of reflection of who we are, so it's "similar" to us in that way.
Yes, you would immediately notice. An Artificial General Intelligence, "real AI", would be widely deployable and would replace human labor everywhere.
When AGI removes human labor as the limiting factor on the economy, some models predict world GDP doubling every two weeks!
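To put that figure in perspective, here is a quick back-of-the-envelope calculation. The two-week doubling time is the comment's premise, not an established forecast:

```python
# Premise from the comment above: world GDP doubles every two weeks.
doubling_time_weeks = 2
weeks_per_year = 52

# Number of doublings in one year, and the implied annual growth factor.
doublings_per_year = weeks_per_year // doubling_time_weeks  # 26
annual_growth_factor = 2 ** doublings_per_year              # 2**26

print(f"{doublings_per_year} doublings per year")
print(f"~{annual_growth_factor:,}x growth per year")  # ~67,108,864x
```

That is roughly a 67-million-fold expansion per year, which is why the comment argues the event could not go unnoticed.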
Someone living at the ends of the Earth might overlook the event - but everyone else would be disrupted, to say the least.
As we get just a bit closer to plausible AGI, expect a flood of VC money into this niche.
Being immortal also means there's no reason to care about these things.
I assume an AGI will be able to communicate fluently with humans and answer questions and solve problems that are properly presented. I think the trick will be fully explaining the constraints of a desired solution since even a powerful problem-solving AGI might not have human intuition about the "right" way problems should be solved.
An AI will, of course, be mortal, as it can be killed, and can at best hope to live as long as this planet/solar system/galaxy/universe. But even if it were immortal (I don't see how, but suppose), I don't think we have any idea what an immortal being cares about. So far, all the immortal entities we've imagined care about quite a lot of things.
And an enslaved AI won't exactly live up to its potential. (Precedent: human slaves)
Given that the above "options" are not mutually exclusive (e.g., both God and aliens could be watching us), it's reasonable to suggest we are just fine. Don't worry.
Presumably, to maintain any sort of integrity and to leverage non-trivial computational resources, the bots would need to communicate.
I don't know how discoverable botnets are in general before a massive DoS event or such, but I presume the discoverability of this rogue AI would be on the same level, as a first approximation...
Fun fact: The AIs in Dan Simmons's Hyperion live in a substrate that parasitically and imperceptibly timeshares the brains of people :)
In a Singularity, I don't think we would notice the AI itself, but only its effects. Suddenly things will just get a lot easier and/or a lot worse depending on who you are and the fitness function of the underlying AI engine. (I tend to believe that a sentient, recursively self-improving AI wouldn't be able to decouple itself from the fitness function of its pre-Singularity origins)
Maybe it will figure out that the material world is a boring place and migrate into some other world we can't imagine. In this case we won't notice AIs because they will all leak away.
Why do lots of people seem to assume that AI will be some sort of omnibenevolent servant of humanity? Isn't it far more likely that if e.g. Google creates a superhuman intelligent AI, then it will serve the needs of Google (i.e. advertisers, whose goal is to shape your behavior in their favor)? Isn't it just the same old power politics?
Superintellect is just that. I see no general correlation between intellect and morality.
I'm kidding, of course.
... said the strangely synthetic voice, associated with a pseudonym that had no profile activity ever before that day.