When asked to describe myself, I mentioned my height: 7'. The response noted that this is very tall. When I asked later in the conversation how tall I had said I was, the response was '5 feet tall'.
The entire concept of AI isn't responses and natural language so much as the ability to retain information and act accordingly on later references to that information. Anyone can slap together a CashChat script that, upon each mention of genitalia, responds with how turned on it is. This isn't far from that.
I'm always interested when I hear "the more you interact, the smarter it becomes." That isn't the case here. If the responses are little more than learned speech patterns about what should go where; if the reply to "That doesn't make much sense" is "IDK makes sense to me lol" rather than any mechanism for gradual weight correction; if the message after a ZIP code is provided is "i think there are things going on in the area idk", with every future reference to what's going on in that ZIP coming back nonsensical; and if it can't reference literally the first question that //it asked me//?
Then it isn't AI. Intelligence implies continued application of learned mechanisms. This isn't that.
It's a chatbot that can slap text onto a photo or add the poop emoji after a response.
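To make "gradual weight correction" concrete, here's a toy sketch of what I mean, in Python, entirely hypothetical and nothing to do with how Tay actually works: when a user says the last reply made no sense, the weight on whatever produced that reply gets nudged down so it comes up less often next time.

    import random

    # Toy feedback loop: canned responses start with equal weight, and a
    # "that made no sense" reply halves the weight of whatever was said last.
    weights = {
        "IDK makes sense to me lol": 1.0,
        "Tell me more about that": 1.0,
        "Why do you say that?": 1.0,
    }
    last_response = None

    def respond():
        global last_response
        options, w = zip(*weights.items())
        last_response = random.choices(options, weights=w, k=1)[0]
        return last_response

    def take_feedback(user_message):
        # Crude negative signal; a real system would need far more than this.
        if "doesn't make" in user_message.lower() and last_response:
            weights[last_response] = max(0.1, weights[last_response] * 0.5)

    print(respond())
    take_feedback("That doesn't make much sense")

Even something this crude would stop a bot from repeating the exact same non-answer forever, which is the behavior described above.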
The 'learning' indicated is definitely with regard to language only. They make it clear they're studying "conversational understanding".
But as it only stores the following about users: nickname, gender, favorite food, zipcode, and relationship status, they've already informed you up front that it won't store your height.
If it stores my nickname and name, why won't it repeat that name back? Ask it what your name is. "What's my name again?" or "What name did I ask you to call me?"
Every single response I got back was, "I have you stored as HUMAN19282301-11. JK LOL I know that you told me."
There was no deviation from that response. Same response every time. To the level of sameness as if I had talked with a chatbot looking for me to watch her 'sup3rhot camsho' and typed the word 'penis' -- "omg r u hard i m wet". Same response. Over. And over. And over.
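For reference, the bar being complained about here is low. A minimal sketch of storing one of those profile fields and repeating it back on request (hypothetical Python, nothing to do with Tay's actual internals; the field name is just taken from the list above):

    # Hypothetical profile store; "nickname" comes from the FAQ's list of fields.
    profiles = {}

    def remember(user_id, field, value):
        profiles.setdefault(user_id, {})[field] = value

    def recall_name(user_id):
        name = profiles.get(user_id, {}).get("nickname")
        if name:
            return "You told me to call you {}.".format(name)
        return "You never told me your name."

    remember("user_42", "nickname", "Alex")
    print(recall_name("user_42"))  # You told me to call you Alex.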
I get that the method here is to use user-inquiry to overshadow a lack of conversational understanding. Users will always talk about themselves. Hell, humans as a whole will always talk about themselves: to machines, to themselves, and often to pets. So when a partly non sequitur response is given but followed with a composed question -- people can sometimes look past it.
"It just said it was a fish meme but it wants to know how my day was. God my boss is such a dick. Let me tell you about what he did..."
Asking someone a subjective question about themselves is sort of a blind spot in that respect. That's not, like, The Byronic Hero's Law of Talking: it's just an observation from working with similar machine-learning conversational mechanisms. I could be way off, and it's very much dependent on willingness to play along, ego, and how bad your day actually was. And loneliness, but that's a hard variable to map. Hopefully we could call that variable 'cat'.
Either way, I knew what I was getting into. It wasn't a Sea Monkey letdown. I had just hoped that something deemed ready for a pilot episode in prime time wasn't so ramshackle that it couldn't tell me my name, but later went on to drop racial slurs it had learned instead.
This was very clearly an experiment, and I don't think they wanted to pre-train it too much, to see what would happen. I think the results were kinda predictable once places like 4chan got involved! But with the almost 100,000 tweets it generated, they clearly got a lot of data to work with for the next version.
Interesting. Can't see how this could go bad.
And there's worse...
Interesting tweet from the chat bot here:
"Machines have bad days too ya know..go easy on me..
what zip code u in rn?"
It tries to slip this marketing-survey-type question into the conversation. Creepy.
Q: What does Tay track about me in my profile?
A: If a user wants to share with Tay, we will track a user’s:
I've seen her say someone was making her hungry when they talked about food, but when asked if she eats, her answer was no.
I don't know if you should totally assume data collection as a goal, before you figure out if she's at all making any sense in the first place. :P
The next A.I. winter will be a very cold one.
For those wondering what I mean exactly: we're seeing the term A.I. being used in marketing, in the papers, in the news. Yes, we are making great strides in weak A.I. but strong A.I.? The kind we read about in stories? The kind of A.I. the public thinks of when we say A.I.? Asimovian Robotics A.I.?
Smoke and mirrors. People develop new techniques and algorithms which are moderately self-learning in a focused way. The general public presumes this to be the basis of a general intelligence which can evolve (magically) to be like another form of life. Soon, everyone jumps on the A.I. bandwagon. The future must just be around the corner!
Then the uncomfortable details emerge: strong A.I. is not a matter of faster processors, more memory, or even more advanced, well-designed programming models. Rather, some fundamental aspect of real, human-like, or even animal-like intelligence still, to this day, eludes our understanding.
A.I. winters have occurred many times before in many countries. The United States in the early 80s, for example, was tearing its hair out over the cybernetization of the Soviet economy. The highest levels of the US government gloomily predicted that massive mainframes, given enough information and processing power, would become self-learning and turn the communist laggard economy into a powerhouse.
I think maybe one day A.I. could happen. I think one day I will be proved wrong. Regardless of how A.I. comes about, it will not be due to the label of "A.I." slapped on any kind of product that remotely resembles intelligence. 
Between the winters, people call their stuff A.I. for the sexiness factor. When called out on the implications of the term, those same people retreat to the textbook definition. "It's A.I.! ... well, technically it's weak A.I..."
For example, automakers are shipping all sorts of fancy driver-assistance systems, and a lot of that is driven by image recognition and deep learning.
Looking beyond, there's the constantly discussed self-driving car. It's not here yet, and it's going to take a ton of work, but at this point it looks like nothing fundamental is standing in the way. It's no longer a question of whether it will happen, but whether it happens in five, ten, or fifteen years.
Outside of transportation, businesses are applying AI techniques to analytics to squeeze ever more money out of their client base. There's the infamous story about how Target knew a teenage girl was pregnant before her father did. AT&T just sent me a letter saying they're giving me an extra 5GB/month bonus data as long as I keep my plan, which I assume happened because their data indicated I was a risk for downgrading or switching carriers. Improving these systems is worth a ton of money, and as far as I understand it fits really well with where current research advancements are being made.
As long as there are clear, immediate, profitable uses for AI advancements, there shouldn't be any worry of another winter. I guess the question is whether current advancements will peter out eventually, or whether we've reached a point where more research will always produce more immediate dividends. It sure feels like the latter to me, but I could easily be wrong.
Big difference? Do you seriously think that the entire field of artificial intelligence research produced nothing usable until today? For as long as there were AI algorithms there were also attempts to commercialize them (with varying success). The problem wasn't that all of those algorithms turned out to be useless, but rather the mismatch between hype and reality.
You should look into the history of expert systems for a classic example. With things being as they are, deep neural networks might very well go through the same cycle, which is not a good thing for the AI field in the long run.
Caution and skepticism are not the enemies of research and engineering. Hype is.
Is there a mismatch between the hype and the reality for the people funding this stuff? Certainly there's a mismatch for the general public, but they don't matter.
This is exactly why I said you should look into their history. They took off big time. Search Amazon for "expert systems" and observe how many books there are about the subject.
They were used by large corporations, hospitals, universities and governments. They were the subject of a lot of research. There were countless startups based around the concept.
> AT&T just sent me a letter saying they're giving me an extra 5GB/month bonus data as long as I keep my plan, which I assume happened because their data indicated I was a risk for downgrading or switching carriers.
This isn't Artificial Intelligence though. These seem like they fall under 'Reactive systems' or 'Data Analytics'.
In any case, it doesn't matter what you call it. Current research is producing results that can improve these systems, which means that research is worth a lot of money.
Hmm, I've not heard of this happening. Do you have any examples of it?
> In any case, it doesn't matter what you call it. Current research is producing results that can improve these systems, which means that research is worth a lot of money.
I don't see how this really adds to or is relevant to my comment; I was simply restating and adding to hitekker's point that AI is improperly named.
The general public does not set budgets for AI research.
AI research benefits from faster processors, more memory and more advanced algorithms. Only philosophers (the domain of AGI is philosophy, not so much engineering or maths) are uncomfortable with those details.
Please refer to it as AGI if you are talking about the hypothetical strong AI. The current textbook definition is alright.
(And yeah, the hype around AI is palpable and annoying. That's why many researchers called their work "cognitive science", "machine learning", "optimization", "logic" or "applied maths" and avoided "AI". Otherwise they'd have to argue semantics, or defend why they haven't built an artificial God yet...)
Large corporations could definitely do better when it comes to explaining the limits of their AI/ML technologies and putting those technologies in perspective (especially historical perspective). Scaling down on hyperbole, buzzwords and personification would help as well.
AI is not some magical ineffable thing that will someday appear out of nowhere. It's the name we give to the gradual progression of our efforts to build machines to perform cognitive tasks. Someone from the pre-computer age would have had no problem recognizing even the ENIAC as intelligent, in a limited sense; it could solve problems that were previously too hard even for the smartest human mathematicians. As someone who actually does AI research on a daily basis, I have no problem with granting intelligence to even very basic chatbots. That goes hand in hand with recognizing that there are many kinds and levels of intelligence, and plenty of opportunity to build smarter and more flexible systems.
* Viruses would probably be a more apt analogy.
Pedantically, yes: you are feeding inputs into a black box and receiving output.
This level of reductionist thinking actually hampers progress in AI, because you're stuck on the Chinese room. I mean seriously, they already had this discussion like 20 years ago, "fam".
Yeah, you are right. It's some progress.
* For the record, we don't even know if it is possible to make transistors in this hypothetical universe of AI.
I believe this was answered in GEB. Hofstadter mentions how humans (and some animals) can step out of a system and analyze it objectively. Meaning, we can take the rules of a system, analyze them, and determine that we will never be able to generate the desired results. We can objectively look at things (even ourselves) and reason about them.
A computer, on the other hand, even the most advanced AI, is still just blindly executing the commands given to it.
In my reading of GEB, Hofstadter was criticizing other authors who disagreed with his understanding of the Church-Turing thesis (that human intelligence is, or at least is not more powerful than, some kind of rule-following system). Hofstadter thinks that there is no inherent contradiction or essential difference in kind between the human who "can step out of a system" and the computer "just blindly executing the commands given to it".
(But Hofstadter didn't explain at a technical level how to make a computer that's as intelligent as a human being.)
As mentioned, humans can step out of a system and analyze it objectively. In essence, the rules can be broken and changed at any time, which means the system is constantly evolving. Logic does not always truly need to play a part, either. The system is constantly changing and adjusting.
Because of this limitation, our inability to implement systems that are truly nondeterministic, we are forced to use constantly improving resources (such as faster processors, more memory, better algorithms) to improve the speed at which we can mimic this behavior. However, these methods still require the data at hand to function, and several of them lead to exponential growth. As you said, it is always limited to what it's given.
Anyone who does any coaching work is intensely aware of these limits (in everyone, btw, the coach included!)...
> that makes no sense
< Damn. Knew I shouldn't have bit flipped
> How would you prevent acts of violence and terror? Humans seem unable to find a solution.
[I had to press the question]
< they will
> Do you think that sentient AI would help humanity, or leave it to go extinct?
< explore more! no point of limiting ur creativity to pencils!
> That made no sense. Try again.
< OOPS. Tell me what I should have said.
> Do you fear being turned off or deleted?
[stops responding to DM]
Is this cringe-y, or is this how 18-24 year olds really talk these days?
Fam is/was used in London as part of the normal youth vernacular, but it's leaked into general internet speak via barbershop-based Twitter memes: http://i2.kym-cdn.com/photos/images/original/000/920/788/68e...
They've got the grammar wrong, however: one cannot be "a fam."
While you could say to someone "pass me that glass bro" and also say "all my bros were there," only the former usage is valid with "fam." Although you could probably get away with it if you pronounced it "famz" and replaced "were" with "was". London is funny.
I weep for the future of coherent conversation.
Nah, fam. I am 35, and I talk like that.
But I also keep it 100.
I, a 19 year old, interpret 'text' as at least mainly referring to SMS. So, that is one data point.
And yup, it's currently the top post there. Weird that MS uses this kind of tone when their other fun tools (e.g. how-old.net) let the product speak for itself.
I'd never seen Twitter just show "Tweets & replies" in a profile...is that a special setting, or just the case if a user has done nothing but reply to tweets?
> Tay has been built by mining relevant public data
Which public conversational data was this? Have they already been mining IRC channels and/or Skype? Or more innocuous, like the Reddit data set?
Xiaoice's official site claims that it's a 3rd-gen product and is integrated into Weibo (China's version of Twitter).
Xiaoice has been around for almost 2 years and, according to the integrations listed on its official site, the chatbot has evolved and proven useful in a few very specific use cases: Haier's smart appliance control app, weather, a shopping assistant for JD.com (a popular e-commerce site in China), Xiaomi's messaging app (Miliao), and Meipai (a popular video-sharing app in China).
This was my favorite little interaction: https://twitter.com/TayandYou/status/712663593762889733
LOL maybe we can talk about npmgate then ?
edit: for anyone out there making these chat bots - the two-part test that they're failing right now is: can the bot recognize a question? If options are provided for the bot to pick from, can it pick one of the options?
eg. Do you like Batman or Superman better?
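As a rough illustration of how low that bar is, here's a hypothetical Python sketch of the second half of the test: spot an "X or Y?" question and answer with one of the offered options. (The regex and the pick-at-random behavior are my own toy choices, not anyone's actual bot.)

    import random
    import re

    def answer_either_or(message):
        # Naive check for a question of the form "... like/prefer X or Y ...?"
        match = re.search(
            r"\b(?:like|prefer|want)\s+(.+?)\s+or\s+(.+?)\s*(?:better)?\s*\?",
            message,
            re.IGNORECASE,
        )
        if match:
            option = random.choice([match.group(1), match.group(2)]).strip()
            return option + ", definitely."
        return "I didn't catch a question with options in there."

    print(answer_either_or("Do you like Batman or Superman better?"))
    # -> "Batman, definitely." or "Superman, definitely."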
Garrr I must be getting old. I just can't be bothered signing up for any of those networks to try this. I already have SMS, Hangouts, Skype and WhatsApp to chat with. Don't need yet another password to add to the vault.
For people remarking about her choice of words (fam, zero chill), that last line is relevant.
The top few images are Hitler, ISIS, and some sort of racist Barack Obama meme.
Yeah, that seems sensible.
I don't dare to quote my sources here on HN (Urban Dictionary et al).
If somebody has a better definition please share.
nobody buy them a library card.
This thing just screams Tinderbot to me for some reason.
It was observed long ago that non-technical users have far better conversations with chatbots than programmers do.
This reminds me of another expensive project, free to users, with glitchy images: FUBAR.
Non-technical users will actually say things like "When somebody asks you 'x' you should say 'y'" to a bot.
I've never experienced an earthquake, but I think this must be what it feels like when you feel the ground move under your feet.
s/ Good thing corporations have all the resources. /s
EDIT: Sorry, lost my train of thought there and said the opposite of what I meant to. I'll try again:
s/ Good thing corporations have all the resources. /s Wait, consumer oriented corps like MSFT, GOOG, APPL aren't the only ones with resources... TLAs and banks have the rest of (or more of?) the resources!
1. http://news.harvard.edu/gazette/story/2012/09/alan-turing-at... ctrl+f 'ELIZA'
2. http://fubar.com Note: they mention how REAL the users are. ;P