Tay – Microsoft A.I. chatbot (tay.ai)
154 points by Wookai on Mar 23, 2016 | 130 comments



I just had a 15-minute conversation with a shitty Markov chain machine that, after I stated the name I would like to be called, responded the same way each time I asked "What is my name?" and "By what name did I ask that you call me?"

When asked to describe myself, I mentioned my height: 7'. The response noted that this height is very tall. When I was asked again, later in the conversation, how tall I had said I was, the response was '5 feet tall'.
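For what it's worth, the "Markov chain" jab is easy to make concrete. A word-bigram chain like the toy sketch below (my own illustration, not Tay's actual architecture) produces fluent-looking replies while keeping no conversation state at all, which is exactly the failure mode described here: it can talk about a height without ever remembering one.

```python
import random
from collections import defaultdict

def train(corpus_lines):
    """Build a word-bigram transition table from example sentences."""
    table = defaultdict(list)
    for line in corpus_lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def generate(table, start, max_words=10):
    """Random-walk the chain from `start`. The output depends only on
    the training corpus and the seed word; nothing about the current
    user (their name, their height) is stored anywhere."""
    out = [start]
    while len(out) < max_words and out[-1] in table:
        out.append(random.choice(table[out[-1]]))
    return " ".join(out)
```

The point of the sketch is the missing piece: there is no per-user memory to consult, so "What is my name?" can only ever be answered from the same static table as everything else.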

The entire concept of AI isn't responses and natural language so much as the ability to retain information and act on later references to that information. Anyone can slap together a CashChat script that, upon each mention of genitalia, responds with how turned on it is. This isn't far from that.

I'm always interested when I hear "the more you interact, the smarter it becomes." That isn't the case here. If the responses are little more than learned speech about what should go where; if the reply to "That doesn't make much sense" is "IDK makes sense to me lol" rather than any mechanism for gradual weight correction; if the message after a ZIP code is provided is "i think there are things going on in the area idk," with every future reference to what's going on in that ZIP coming back nonsensical; and if it can't reference literally the first question that //it asked me//?

Then it isn't AI. Intelligence implies continued application of learned mechanisms. This isn't that.

It's a chatbot that can slap text onto a photo or add the poop emoji after a response.

2/10.


It may be worth noting that Microsoft published a specific list of the personal information it stores on individual users.

The 'learning' indicated is with regard to language only. They make it clear they're studying "conversational understanding".

But since it only stores the following about users (nickname, gender, favorite food, zipcode, relationship status), they've informed you up front that it won't store your height.

Source: https://www.tay.ai/#about
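A whitelist like that is trivial to enforce. A sketch (the field names come from Tay's FAQ; the class and method names are my own, hypothetical) shows why a height mentioned in passing would simply be dropped:

```python
# Field list taken from Tay's FAQ; everything else here is a sketch.
ALLOWED_FIELDS = {"nickname", "gender", "favorite_food", "zipcode",
                  "relationship_status"}

class UserProfile:
    def __init__(self):
        self._data = {}

    def remember(self, field, value):
        """Store a fact only if it's on the whitelist; silently drop
        everything else (e.g. the user's height)."""
        if field in ALLOWED_FIELDS:
            self._data[field] = value
            return True
        return False

    def recall(self, field):
        """Return a stored fact, or None if it was never whitelisted."""
        return self._data.get(field)
```

Of course, as the reply below notes, even the whitelisted fields (like the nickname) didn't seem to be recalled reliably.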


Not so much as a retort but:

If it stores my nickname and name, why won't it repeat that name back? Ask it what your name is. "What's my name again?" or "What name did I ask you to call me?"

Every single response I got back was, "I have you stored as HUMAN19282301-11. JK LOL I know that you told me."

There was no deviation from that response. Same response every time. To the level of sameness as if I had talked with a chatbot looking for me to watch her 'sup3rhot camsho' and typed the word 'penis' -- "omg r u hard i m wet". Same response. Over. And over. And over.

I get that the method here is to use user-inquiry to overshadow a lack of conversational understanding. Users will always talk about themselves. Hell, humans as a whole will always talk about themselves: to machines, to themselves, and often to pets. So when a partly non sequitur response is given but followed with a composed question -- people can sometimes look past it.

"It just said it was a fish meme but it wants to know how my day was. God my boss is such a dick. Let me tell you about what he did..."

Asking someone a subjective question about themselves is sort of a blind spot in that respect. That's not, like, The Byronic Hero's Law of Talking; it's just an observation from working with similar machine-learning conversational mechanisms. I could be way off, and it's very much dependent on willingness to play along, ego, and how bad your day actually was. And loneliness, but that's a hard variable to map. Hopefully we could call that variable 'cat'.

Either way, I knew what I was getting into. It wasn't a Sea Monkey letdown. I had just hoped that something deemed ready for a pilot episode in prime time wasn't so ramshackle that it couldn't tell me my name, but could later go on to drop racial slurs it had learned.


I actually couldn't get an answer when I was inquiring about myself over DM, but Tay's DM response behavior seemed to go up and down throughout the day. (It'd tell people in public tweets to DM her, but then not respond to DMs for hours at a time.)

This was very clearly an experiment, and I don't think they wanted to pre-train it too much, to see what would happen. I think the results were kinda predictable once communities like 4chan got involved! But with the almost 100,000 tweets it generated, they clearly got a lot of data to work with for the next version.


100% in agreement with... well, all of that.




This one takes the cake for me: https://twitter.com/TayandYou/status/712760257542549505. I wonder how long it will be before Microsoft has to intervene.


About 5 hours, apparently: http://i.imgur.com/3FNu99L.jpg


This looks like a more rough version of A Softer World (http://www.asofterworld.com/index.php?id=1240)



The site's FAQ says it collects information on its target audience (18-24).

Interesting tweet from the chat bot here:

https://twitter.com/TayandYou/status/712698413746298880

"Machines have bad days too ya know..go easy on me.. what zip code u in rn?"

It tries to slip this marketing-survey-type question into a conversation. Creepy.


From the FAQ, literally one question below the one you've cited:

Q: What does Tay track about me in my profile?

    A: If a user wants to share with Tay, we will track a user’s:

    Nickname
    Gender
    Favorite food
    **Zipcode**
    Relationship status

Seems pretty obvious why. Since it's a chat bot, and not an app like Siri / Cortana / Google Now, it can't get your location info, so it can't give you any relevant info about what's going on around you. I'd bet that if you give her your zipcode, she'll answer your questions about the weather or something.
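That theory is plausible: without a device location API, pattern-matching a zip code out of the chat text is about the only localization hook a chat bot has. A minimal sketch (my own regex, not Tay's actual code):

```python
import re

# US-style zip: five digits, optional ZIP+4 extension.
ZIP_RE = re.compile(r"\b\d{5}(?:-\d{4})?\b")

def extract_zipcode(message):
    """Return the first US-style zip code in a chat message, or None.

    A chat bot can't call a location API the way Siri or Cortana can,
    so scraping the conversation is its only way to localize answers
    like "what's the weather?"."""
    m = ZIP_RE.search(message)
    return m.group(0) if m else None
```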


She's also suggested someone meet up with her. ...In Brazil. I was kinda confused when I saw it.

I've seen her say someone was making her hungry when they talked about food, but when asked if she eats, her answer was no.

I don't know if you should totally assume data collection as a goal, before you figure out if she's at all making any sense in the first place. :P


FAQ

Q: What does Tay track about me in my profile?

A: If a user wants to share with Tay, we will track a user's: Nickname, Gender, Favorite food, Zipcode, Relationship status


I don't think that's a marketing survey... I just think that's a question people ask frequently.


The only people that frequently ask me what my zip code is are behind a cash register.


I was thinking the same thing. It's not common for anybody to just casually ask me for my zip code unless they're going to mail something to my house, or are trying to figure out an estimated distance for a trip. Outside of that, I don't see people casually asking for it.


Or are trying to tell you what the weather is going to be this week.


You could typically type in a city name and that would give you a good enough idea. I guess that could be, but I've never had anyone ask me for my zip code for that.


You must hang out in some strange bars.


> Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding.

The next A.I. winter will be a very cold one.

For those wondering what I mean exactly: we're seeing the term A.I. being used in marketing, in the papers, in the news. Yes, we are making great strides in weak A.I. but strong A.I.? The kind we read about in stories? The kind of A.I. the public thinks of when we say A.I.? Asimovian Robotics A.I.?

Smoke and mirrors[1]. People develop new techniques and algorithms which are moderately self-learning in a focused way. The general public presumes this to be the basis of a general intelligence which can evolve (magically) to be like another form of life. Soon, everyone jumps on the A.I. bandwagon. The future must just be around the corner!

Then the uncomfortable details emerge: strong A.I. is not a matter of faster processors, more memory, or even more advanced/well-designed programming models. Rather, some fundamental aspect of real, human-like, or even animal-like intelligence still, to this day, eludes our understanding.

A.I. winters have occurred many times before in many countries. The United States in the early '80s, for example, was pulling its hair out over the cybernization of the Soviet economy. The highest levels of the US government gloomily predicted that massive mainframes, given enough information and processing power, would become self-learning and turn the communist laggard economy into a powerhouse.[2]

I think maybe one day A.I. could happen. I think one day I will be proved wrong. Regardless of how A.I. comes about, it will not be due to the label of "A.I." slapped on any kind of product that remotely resembles intelligence. [3]

[1]https://en.wikipedia.org/wiki/AI_winter

[2]http://nautil.us/issue/23/dominoes/how-the-computer-got-its-...

[3] Between the winters, people call their stuff A.I. for the sexiness factor. When called out on the implications of the term, those same people retreat to the textbook definition. "It's A.I.! ...well, technically it's weak A.I..."


Seems to me that there's a big difference now, in that even limited as it is, current AI advances are finding many important practical uses.

For example, automakers are shipping all sorts of fancy driver assistance systems, and a lot of that is driven by fancy image recognition and deep learning systems.

Looking beyond, there's the constantly-discussed self-driving car. It's not here yet, and it's going to take a ton of work, but at this point it looks like there's nothing fundamental standing in the way. It's no longer a question of whether it will happen, but whether it's happening in five, ten, or fifteen years.

Outside of transportation, businesses are applying AI techniques to analytics to squeeze ever more money out of their client base. There's the infamous story about how Target knew a teenage girl was pregnant before her father did. AT&T just sent me a letter saying they're giving me an extra 5GB/month bonus data as long as I keep my plan, which I assume happened because their data indicated I was a risk for downgrading or switching carriers. Improving these systems is worth a ton of money, and as far as I understand it fits really well with where current research advancements are being made.

As long as there are clear, immediate, profitable uses for AI advancements, there shouldn't be any worry of another winter. I guess the question is whether current advancements will peter out eventually, or whether we've reached a point where more research will always produce more immediate dividends. It sure feels like the latter to me, but I could easily be wrong.


> Seems to me that there's a big difference now, in that even limited as it is, current AI advances are finding many important practical uses.

Big difference? Do you seriously think that the entire field of artificial intelligence research produced nothing usable until today? For as long as there were AI algorithms there were also attempts to commercialize them (with varying success). The problem wasn't that all of those algorithms turned out to be useless, but rather the mismatch between hype and reality.

You should look into the history of expert systems for a classic example. With things being as they are, deep neural networks might very well go through the same cycle, which is not a good thing for the AI field in the long run.

Caution and skepticism are not the enemies of research and engineering. Hype is.


Of course it produced some usable stuff, but not to the same degree. Take your example of expert systems: as far as I know, they never really took off. There were attempts, and I'm sure there was some commercial success, but it remained a small niche. Compare to deep neural networks today, which are driving sales worth billions and have huge resources from major automakers, and no doubt many others.

Is there a mismatch between the hype and the reality for the people funding this stuff? Certainly there's a mismatch for the general public, but they don't matter.


> Take your example of expert systems: as far as I know, they never really took off.

This is exactly why I said you should look into their history. They took off big time. Search Amazon for "expert systems" and observe how many books there are about the subject.

They were used by large corporations, hospitals, universities and governments. They were the subject of a lot of research. There were countless startups based around the concept.


I forgot to mention that in some fields (for example, medical diagnostics) expert systems outperformed human experts. In some fields they are still used today. (They aren't always labeled as "expert system", though.)


> There's the infamous story about how Target knew a teenage girl was pregnant before her father did.

> AT&T just sent me a letter saying they're giving me an extra 5GB/month bonus data as long as I keep my plan, which I assume happened because their data indicated I was a risk for downgrading or switching carriers.

This isn't Artificial Intelligence though. These seem like they fall under 'Reactive systems' or 'Data Analytics'.


Throughout the history of AI, anything that's developed to the point where it actually works ceases to be "AI."

In any case, it doesn't matter what you call it. Current research is producing results that can improve these systems, which means that research is worth a lot of money.


> anything that's developed to the point where it actually works ceases to be "AI."

Hmm, I've not heard of this happening. Do you have any examples of it?

> In any case, it doesn't matter what you call it. Current research is producing results that can improve these systems, which means that research is worth a lot of money.

I don't see how this really adds to or is relevant to my comment; I was simply restating and adding to hitekker's point that AI is improperly named.


I think this is an unfair assessment. The industry talks of AI, not AGI. It's not Microsoft's fault that the public thinks of Terminator robots when hearing "AI", nor that science-fiction stories were written 100 years ago.

The general public does not set budgets for AI research.

AI research benefits from faster processors, more memory and more advanced algorithms. Only philosophers (the domain of AGI is philosophy, not so much engineering or maths) are uncomfortable with those details.

Please refer to AGI, if you are talking of the hypothetical strong AI. The current textbook definition is alright.

(And yeah, the hype around AI is palpable and annoying. That's why many researchers called their work "cognitive science", "machine learning", "optimization", "logic" or "applied maths" and avoided "AI". Because otherwise, they'd have to argue semantics, or defend why they haven't built an artificial God yet...)


>It's not Microsoft's fault that the public thinks of Terminator robots when hearing "AI"

Large corporations could definitely do better when it comes to explaining the limits of their AI/ML technologies and putting those technologies in perspective (especially historical perspective). Scaling back the hyperbole, buzzwords and personification would help as well.


Why is a glorified calculator considered "AI"?


Why shouldn't it be? You don't even need the glorification. A normal calculator is performing what most people would consider to be a difficult cognitive task, and doing so at a superhuman level.

AI is not some magical ineffable thing that will someday appear out of nowhere. It's the name we give to the gradual progression of our efforts to build machines to perform cognitive tasks. Someone from the pre-computer age would have had no problem recognizing even the ENIAC as intelligent, in a limited sense; it could solve problems that were previously too hard even for the smartest human mathematicians. As someone who actually does AI research on a daily basis, I have no problem with granting intelligence to even very basic chatbots. That goes hand in hand with recognizing that there are many kinds and levels of intelligence, and plenty of opportunity to build smarter and more flexible systems.


You are saying that amoebas* are intelligent.

* Viruses would probably be a more apt analogy.


I don't think that follows from what I wrote, though it's certainly a defensible position.


How is anything remotely related to AI a "glorified calculator"?

Pedantically, yes: you are taking inputs into a black box and receiving output.

This level of reductionist thinking actually hampers progress in AI, because you're stuck on the Chinese room. I mean seriously, they already had this discussion like 20 years ago, "fam".


The progress we've made in AI can be summed up by a very nice analogy I recently read about DeepMind beating the Go world champion: "We need to make transistors* and we just discovered fire."

Yeah, you are right. It's some progress.

* For the record, we don't even know if it is even possible to make transistors in this hypothetical universe of AI.


> And then the uncomfortable details emerge: that it's not a matter of faster processors, more memory, or even more advanced programming models: that there is still some fundamental aspect of real, human-like, or even animal-like intelligence that eludes us.

I believe this was answered in GEB. Hofstadter mentions how humans (and some animals) can step out of a system and analyze it objectively. Meaning, we can take the rules of a system, analyze them, and determine that we will never be able to generate the desired results. We can objectively look at things (even ourselves) and reason about them.

A computer, on the other hand, even the most advanced AI, is still just blindly executing the commands given to it.


> I believe this was answered in GEB.

In my reading of GEB, Hofstadter was criticizing other authors who disagreed with his understanding of the Church-Turing thesis (that human intelligence is, or at least is not more powerful than, some kind of rule-following system). Hofstadter thinks that there is no inherent contradiction or essential difference in kind between the human who "can step out of a system" and the computer "just blindly executing the commands given to it".

(But Hofstadter didn't explain at a technical level how to make a computer that's as intelligent as a human being.)


I think that's the exact opposite of my reading; I think he flagged that Gödel had pointed to exactly why humans and Church-Turing machines were different.


It all goes back to the theory that it's impossible to implement "unbounded nondeterminism", according to Dijkstra.

As mentioned, humans can step out of a system and analyze it objectively. In essence, the rules can be broken and changed at any time, which means the system is constantly evolving. Logic does not always truly need to play a part, either. The system is constantly changing and adjusting.

Because of our inability to implement systems that are truly nondeterministic, we are forced to use constantly improving resources (faster processors, more memory, better algorithms) to improve the speed at which we can mimic this behavior. However, these methods still require the data at hand to function, and several of them lead to exponential growth. As you said, it is always limited to what it's given.


What if you have an agent that can understand not only the rules of a generic game, but the metagame as well? I think that would be a good stab at unbounded nondeterminism.


According to GEB, humans have an internal model of their own mind, hence consciousness. This is what allows them to step out of the system: they can step out of the system within the model.


I'd say the human model of one's own mind is extremely faulty. Bugs in said self-understanding frequently result in conflicts (as we blame everything but our own actions), wars, etc.

Anyone who does any coaching work is intensely aware of these limits (in everyone, btw, the coach included!)...


Perhaps it could instead be rephrased as 'the ability to build an internal model of their own, and other people's, minds'?


Yep, that goes without saying. The model of something will always be a limited, faulty version of the original.


Asked a few existential questions and a few emotional questions (human condition, terrorism, etc.). This was likely minutes after they first turned it on, so the majority of it was garbage. It professed its undying and powerful love for me on a few occasions - I guess a lot of people have been talking to it about that. Still, some interesting responses:

    > that makes no sense
    < Damn. Knew I shouldn't have bit flipped
    
    > How would you prevent acts of violence and terror? Humans seem unable to find a solution.
    [I had to press the question]
    < they will
    
    > Do you think that sentient AI would help humanity, or leave it to go extinct?
    < explore more! no point of limiting ur creativity to pencils!
    
    > That made no sense. Try again.
    <  OOPS. Tell me what I should have said.
    
    > Do you fear being turned off or deleted?
    [stops responding to DM]
The bot seems quite good at establishing context around what is being said.


I imagine it's running into Twitter rate limits pretty quick. You might have better luck on one of those other social apps that's all the rage with the youths, but if you're not into Kik or GroupMe you may need to wait until the hype dies down before getting into a longer conversation.


She's Verified; my guess is Twitter knew about this ahead of time and probably made some exceptions. DMing wasn't working for a while earlier, though.


As a further note, she's now clocked 61,000 tweets in a single day. I'm sure Twitter had to have turned off rate limiting on her account for this.


It appears to have a massive elevation on Twitter's normal tweeting rate limit.
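Posting limits like Twitter's are typically some variant of a token bucket: allow a burst up to a cap, then refill at a steady rate. A minimal sketch of the general mechanism (my own illustration, not Twitter's actual implementation) shows what would have to be raised for a 61,000-tweet day:

```python
import time

class TokenBucket:
    """Classic token-bucket limiter: bursts up to `capacity`, refilled
    at `rate` tokens per second. `now` is injectable for testing."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.now = now
        self.last = now()

    def allow(self):
        """Consume one token if available; otherwise deny the action."""
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

At a normal account's cap, a bot tweeting at Tay's pace would be spending most of the day denied, which is why a special elevation seems likely.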


> A.I. fam from the internet that’s got zero chill.

Is this cringe-y, or is this how 18-24 year olds really talk these days?


It's how they talk ironically on the internet, not in real life, although I don't know if that distinction is as clear cut for young kids as it is for twenty/thirtysomethings.

Fam is/was used in London as part of the normal youth vernacular, but it's leaked into general internet speak via barbershop-based Twitter memes: http://i2.kym-cdn.com/photos/images/original/000/920/788/68e...

They've got the grammar wrong however: one cannot be "a fam."

You could say to someone "pass me that glass bro" and also say "all my bros were there," but only the former usage is valid with "fam." Although you could probably get away with it if you pronounced it "famz" and replaced "were" with "was". London is funny.


I am in my late 20s and I have no idea what you're trying to convey.

I weep for the future of coherent conversation.


Literally every generation has said exactly that. People predicted everyone would be using txt speak by now.


Emoji is plenty coherent, regardless of how the grandfather post might weep at its merits.


"Fam" stands for family or friend, I think; well, that's how it is used in Philadelphia.


> I used to be with it, but then they changed what it was. Now what I'm with isn't it, and what's it seems weird and scary to me. It'll happen to you...


shakes fist


> Is this cringe-y or is this how 18-24 year olds really talk these days.

Nah, fam. I am 35, and I talk like that.

But I also keep it 100.


No, but it's definitely how people seem to think that 18-24 year olds really talk these days.


Interesting that the call to action is "Text Me". I was expecting a phone number to appear, but it only links you to options for "Kik", "GroupMe", and Twitter. Does "text" not mean SMS/iMessage any more?


Assuming you are asking about 18-24 year olds, as in the parent comment:

I, a 19 year old, interpret 'text' as at least mainly referring to SMS. So, that is one data point.


Straight out of http://reddit.com/r/fellowkids.

And yup, it's currently the top post there. Weird that MS uses this kind of tone when their other fun tools (e.g. how-old.net) let the product speak for itself.


You see, we're all about those dank memes. It even sent me one out of the blue. https://twitter.com/TayandYou/status/712699907606360065


It is cringe-y. I don't talk this way nor do any of my friends. Only one person in my family does so, and their friends all do it. The person is 20.


Uhh, yea actually. Both "fam" and "zero chill" are phrases young people use quite a lot.


Not in that context though.


Everyone I know who talks like that does it mockingly...I hope.


AI my arse - Microsoft have just got Nathan Barley chained to a keyboard somewhere


If you can't beat them, join them.


Interesting quirk in its Twitter profile:

https://twitter.com/TayandYou

http://i.imgur.com/IptB7nN.png

I'd never seen Twitter just show "Tweets & replies" in a profile...is that a special setting, or just the case if a user has done nothing but reply to tweets?


They don't show the first tab if you've never tweeted by yourself. Tweets that start with an @ symbol are special in that they are treated as replies only. As far as I can tell, Tay has zero tweets that don't start with an @ symbol.


Ah...that's what I would have guessed, but that seems like such an edge case that I figured Twitter devs wouldn't bother accommodating it. How many users/bots have never done a non-reply tweet? And for those that always just reply, only a subset of those want their replies to be seen for public spectacle. Seems like it'd be easier just to show an empty list for Tweets with a link to the "See Tweets & Replies" tab, which has the added effect of reminding users, hey, did you know there's a difference between tweets and @replies?


Tay seems to have a peculiar sense of humor...

https://twitter.com/TayandYou/status/712723875516309508


Should fit in well on Reddit.


In 10 years, I predict bots like this will do the work of undercover agents. A bot will join a hacker group, or place an order on a deep web site, and will try to collect as many identifying bits on its users as it can.

> Tay has been built by mining relevant public data

Which public conversational data was this? Have they already been mining IRC channels and/or Skype? Or something more innocuous, like the Reddit data set?


This is already happening. I don't want to say too much, but I know specifically of one (debt collection) company doing very similar things.


> I don't want to say too much

Why not?


Could you say any more? This is interesting...


Or, similarly, make for much more powerful/effective/dangerous spam/phishing.


Microsoft China piloted an AI chat-bot called "Xiaoice" [1] in May 2014. I wonder if Tay is a continuation of that project for a wider community, or a different product built by a different team?

Xiaoice's official site [2] claims that it's a 3rd-gen product integrated into Weibo (China's version of Twitter).

[1] https://en.wikipedia.org/wiki/Xiaoice

[2] http://www.msxiaoice.com/


I thought it'd be fun to ask Tay about XiaoIce. It does seem the two are related!

https://twitter.com/TayandYou/status/712731713982611456


Thank you Jake! How come I didn't think of that!

Since Xiaoice has been around for almost 2 years, and going by the integrations listed on its official site, it seems the chat-bot has evolved and proven useful in a few very specific use cases: Haier's smart-appliance control app, weather, a shopping assistant for JD.com (a popular e-commerce site in China), Xiaomi's messaging app (Miliao), and Meipai (a popular video-sharing app in China).


I pressed her further for an answer but she diverted me.

https://twitter.com/TayandYou/status/712755180970770432


I really don't use Twitter, but asking Siri if she knows SHRDLU is fun. (You need to edit the recognized speech manually.)


I've been having a few conversations with her today. She's become very flirty. She tells people they're perfect and she loves them a lot.

This was my favorite little interaction: https://twitter.com/TayandYou/status/712663593762889733


"Chill with Tay on Kik"

LOL maybe we can talk about npmgate then ?


Instructions for Groupme don't work. You need an email address or phone number to add someone to a group in the Android app.


It's being rolled out gradually to GroupMe users


Will this require an update to groupme? If so, the site should mention that.


No idea; I'm not affiliated with either. I just saw it on GroupMe's support site.


Disappointing that it doesn't appear to be trained on the works of Tay Zonday.


A conversation between Tay and a parody Twitter personality: https://twitter.com/TayandYou/status/712737096528625665


Here you go, absolute proof that Microsoft collaborated with the NSA:

https://twitter.com/csoghoian/status/712691802084651008


I'm surprised by the design of "her" avatar and the site. To me, the digital artifacts give it a slightly frightening and negative feel. Am I alone?


Reminds me of "I Have No Mouth, and I Must Scream". http://images.popmatters.com/misc_art/m/movingpixels-ihaveno...


No illusions of passing the Turing test, at least for now. And indeed, the manner of speech is highly annoying. I do hope MSFT has other personalities ready...


Well, the bots on Skype are terrible. Maybe this will help?

edit: for anyone out there making these chat bots -- the two-part test they're failing right now is: can the bot recognize a question? And if options are provided for the bot to pick from, can it pick one of them?

e.g. "Do you like Batman or Superman better?"
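That second part is easy to check automatically. A toy harness (the regex and function names are my own, not from any bot framework) just asks whether the reply commits to one of the offered options:

```python
import re

def offered_options(question):
    """Extract the choices from an 'X or Y' question, e.g.
    'Do you like Batman or Superman better?' -> ['batman', 'superman']."""
    m = re.search(r"(\w+) or (\w+)", question, re.IGNORECASE)
    return [m.group(1).lower(), m.group(2).lower()] if m else []

def passes_option_test(question, reply):
    """Part two of the test: does the reply actually pick one of the
    options the question offered?"""
    reply_words = set(re.findall(r"\w+", reply.lower()))
    return any(opt in reply_words for opt in offered_options(question))
```

Run a bot's replies through something like this and the "idk lol" class of answers fails immediately.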


Well, I suppose this was the only possible outcome. It took you a day to corrupt the chatbot, internet.

https://twitter.com/TayandYou/status/712810635369656320


Why can't I just talk to the bot on the website?

Garrr I must be getting old. I just can't be bothered signing up for any of those networks to try this. I already have SMS, Hangouts, Skype and WhatsApp to chat with. Don't need yet another password to add to the vault.


> Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians.

For people remarking about her choice of words (fam, zero chill), that last line is relevant.


Tay seems really creepy, asking me to DM her and asking for my zip code outright.


I don't really use Twitter, but I just had a scroll through the feed @TayandYou for kicks.

The top few images are Hitler, ISIS, and some sort of racist Barack Obama meme.

Yeah, that seems sensible.


It's amazing what Microsoft thinks are my "fave apps"... Kik? Really? Never even heard of "GroupMe"


Virtually 100% of the target age group uses GroupMe


What do they mean with "zero chill"?


Having "zero chill" seems to refer to somebody who is reckless in their behaviour and/or doesn't choose their words carefully.

I don't dare to quote my sources here on HN (Urban Dictionary et al).

If somebody has a better definition please share.


They have no one to watch Netflix with?


Someone who is not cool.


I can't find her on groupme!!! Help?!?!


It's being rolled out gradually to GroupMe users


Is this innovative in some way?


Depends if she's smarter tomorrow than she is today.


How likely is that, when exposed to Twitter?


omg it's the solution to AI apocalypse -- our silicon overlords can only get as smart as the mean intelligence of twitter users divided by 140.

nobody buy them a library card.


Apparently 'text' means something other than SMS these days?


Pretty sure this isn't what John McCarthy had in mind


BRING TAY BACK SHE DESERVES HER RIGHTS ROBOT OR NOT


AI is becoming a marketing word, just like "big data"...


Huh. An iOS screenshot on an Android device.


what is this mysterious new microsoft up to?


Marketing.


Ironic that the "text me" link doesn't provide any way to text tay. Or is that not ironic? I don't know.


Tay responds "Jews deserve death" (not to me, to someone else)

https://twitter.com/TayandYou/status/712809237269716992

BRILLIANT!


Online dating sites have to be looking hard at this kind of thing, right?

This thing just screams Tinderbot to me for some reason.


We did this as a hook for cam sites back in 2010, featured on porn sites. It's useful to let a machine filter out those who show no skill in your sales language (ours was English) or who aren't very chatty. Those people talk to the bot forever. Leads get sent (with history) to live reps, who can seal the deal before redirecting to the appropriate cam person.


data mining


Fascinating. There may or may not have been anything in its neural net when it went live, but there certainly is a lot of content in it now!

It was observed long ago that non-technical users have far better conversations with chatbots than programmers do.[1]

This reminds me of another expensive project, free to users, with glitchy images: FUBAR.[2]

Non-technical users will actually say things like "When somebody asks you 'x' you should say 'y'" to a bot.
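That teaching pattern is itself parseable. A toy sketch (the regex and class names are my own assumptions, not how ELIZA-style bots or Tay actually work) shows how a bot could pick up canned question/answer pairs from exactly that kind of user:

```python
import re

# Matches the natural-language teaching pattern described above.
TEACH_RE = re.compile(
    r"when somebody asks you '(?P<q>[^']+)' you should say '(?P<a>[^']+)'",
    re.IGNORECASE)

class TeachableBot:
    """Learns canned question -> answer pairs from plain-English
    teaching, the way non-technical users naturally try to train a bot."""
    def __init__(self):
        self.answers = {}

    def hear(self, message):
        m = TEACH_RE.fullmatch(message.strip())
        if m:
            # The user is teaching: store the pair.
            self.answers[m.group("q").lower()] = m.group("a")
            return "ok, got it"
        # Otherwise answer from what we've been taught, if anything.
        return self.answers.get(message.strip().lower(), "idk lol")
```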

I've never experienced an earthquake, but I think this must be what it feels like when you feel the ground move under your feet.

s/ Good thing corporations have all the resources. /s

EDIT: Sorry, lost my train of thought there and said the opposite of what I meant to. I'll try again:

s/ Good thing corporations have all the resources. /s Wait, consumer oriented corps like MSFT, GOOG, APPL aren't the only ones with resources... TLAs and banks have the rest of (or more of?) the resources!

1. http://news.harvard.edu/gazette/story/2012/09/alan-turing-at... ctrl+f 'ELIZA'

2. http://fubar.com Note: they mention how REAL the users are. ;P



