I take the term AI to mean "a thing that people will actually acknowledge as being AI when it exists". That's something you can only define retrospectively, but it's still a workable definition, in that you can sort of predict what people will or won't want to have "wasted" the term on.
My personal bet is on people placing that line squarely on the divide between "Stoic Guru" and "Active Academy" UX models[1]. An AI is probably, to most people, an agent that continues to think and learn when it is not being interacted with (by surfing the internet, maybe) such that it can generate novel outputs, and revise its own previous beliefs, with no user-visible step that would be perceived as "re-training." Something that will come to you with thoughts you'll find relevant, inspired by new data the agent has acquired on its own rather than derived from your input.
A simple example that people would probably think of as "AI" (and which people wouldn't accept as a UX paradigm unless it was presented to them through an agent-based interface) would be a spam filter like Gmail's that learns from the classifications of everyone who uses it, and goes back to reclassify messages it now has a more refined opinion on. Nobody would want a dumb algorithm to take emails that "were" in their inbox and move them over into their spam folder (that's destructive!), but they'd accept a (virtual) secretary using its judgement to do so. Thus, AI.
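To make the reclassification idea concrete, here's a minimal toy sketch in Python. It is entirely made up (not Gmail's actual pipeline, and every name in it is hypothetical): a filter folds everyone's feedback into its model, then goes back over already-delivered mail and re-files whatever it has changed its mind about.

```
# Toy sketch of a "reclassifying spam filter": retrain on aggregated user
# feedback, then revisit old verdicts. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Message:
    text: str
    folder: str = "inbox"   # "inbox" or "spam"

@dataclass
class Mailbox:
    messages: list[Message] = field(default_factory=list)

def spam_score(text: str, spam_words: set[str]) -> float:
    """Toy scorer: fraction of words currently believed to indicate spam."""
    words = text.lower().split()
    return sum(w in spam_words for w in words) / max(len(words), 1)

def retrain_and_resort(box: Mailbox, global_feedback: dict[str, bool],
                       spam_words: set[str], threshold: float = 0.5) -> None:
    """Fold in classifications from all users, then revisit old verdicts.

    global_feedback maps a word to whether users collectively flagged
    messages containing it as spam.
    """
    # "Learning" step: aggregate everyone's judgements into the model.
    for word, is_spam in global_feedback.items():
        if is_spam:
            spam_words.add(word)
        else:
            spam_words.discard(word)

    # The destructive-looking step users only accept from an "agent":
    # go back over already-delivered mail and re-file it.
    for msg in box.messages:
        is_spam_now = spam_score(msg.text, spam_words) >= threshold
        msg.folder = "spam" if is_spam_now else "inbox"

if __name__ == "__main__":
    box = Mailbox([Message("win a free prize now"), Message("lunch tomorrow?")])
    vocab: set[str] = set()
    retrain_and_resort(box, {"free": True, "prize": True, "win": True}, vocab)
    for m in box.messages:
        print(m.folder, "-", m.text)
```

The point is the second loop: the model quietly changes its mind about mail it already filed, which only feels acceptable to people when it's framed as an agent exercising judgement rather than an algorithm shuffling their inbox.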
[1] https://scifiinterfaces.wordpress.com/category/active-academ...