
Tay – Microsoft A.I. chatbot - Wookai
https://tay.ai/
======
ByronicHero
I just had a 15 minute conversation with a shitty Markov chain machine that,
after stating the name that I would like to be called, responded back the same
way each time I asked "What is my name?" and "By what name did I ask that you
call me?"

When asked to describe myself, I mentioned my height: 7'. The response
referenced that this height is very tall. When asked again later in the
conversation how tall I said I was, the response was '5 feet tall'.

The entire concept of AI isn't in responses and natural language so much as
the ability to retain information and act on later references to that
information. Anyone can slap together a cam-chat script that, upon each
mention of genitalia, responds with how turned on it is. This isn't far from
that.
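For what it's worth, the gap being described, keyword triggers versus actual retention, is only a few lines of code either way. A hypothetical sketch (all names and canned replies invented for illustration):

```python
# A stateless trigger bot: the same canned reply on every keyword hit,
# which is exactly the behavior being complained about.
def trigger_bot(message):
    if "name" in message.lower():
        return "I have you stored as HUMAN19282301-11. JK LOL I know."
    return "idk lol"


# Retaining one fact and referencing it later is all it takes to pass
# the "what is my name?" test.
class StatefulBot:
    def __init__(self):
        self.memory = {}

    def reply(self, message):
        msg = message.lower()
        if msg.startswith("call me "):
            # Store the name with its original capitalization.
            self.memory["name"] = message[8:]
            return "Got it."
        if "my name" in msg:
            name = self.memory.get("name")
            return f"You asked me to call you {name}." if name else "You never told me."
        return "idk lol"
```

The point isn't that this is hard; it's that the second behavior apparently wasn't there.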

I'm always interested when I hear "the more you interact, the smarter it
becomes." That isn't the case here. The responses back are little more than
speech learning based on what should go where. Responses to "That doesn't make
much sense" are "IDK makes sense to me lol" rather than anything like a
mechanism for gradual weight correction; the message after a ZIP is provided
is "i think there are things going on in the area idk", with all future
references to what's going on in that ZIP coming back nonsensical; and it
doesn't have the ability to reference literally the first question that _it
asked me_.

Then it isn't AI. Intelligence implies continued application of learned
mechanisms. This isn't that.

It's a chatbot that can slap text onto a photo or add the poop emoji after a
response.

2/10.

~~~
ocdtrekkie
It may be worth noting that Microsoft specified a specific list of
personalized bits of information it would store on individual users.

The 'learning' indicated is definitely with regard to language only. They
make it clear they're studying "conversational understanding".

But as it only stores the following about users (Nickname • Gender • Favorite
food • Zipcode • Relationship status), they've already informed you up front
they won't store your height.

Source: [https://www.tay.ai/#about](https://www.tay.ai/#about)

~~~
ByronicHero
Not so much a retort, but:

If it stores my nickname and name, why won't it repeat that name back? Ask it
what your name is. "What's my name again?" or "What name did I ask you to call
me?"

Every single response I got back was, "I have you stored as HUMAN19282301-11.
JK LOL I know that you told me."

There was no deviation from that response. Same response every time. To the
level of sameness as if I had talked with a chatbot looking for me to watch
her 'sup3rhot camsho' and typed the word 'penis' -- "omg r u hard i m wet".
Same response. Over. And over. And over.

I get that the method here is to use user-inquiry to overshadow a lack of
conversational understanding. Users will always talk about themselves. Hell,
humans as a whole will always talk about themselves: to machines, to
themselves, and often to pets. So when a partly non sequitur response is given
but followed with a composed question -- people can sometimes look past it.

"It just said it was a fish meme but it wants to know how my day was. God my
boss is such a dick. Let me tell you about what he did..."

Asking someone a subjective question about themselves is sort of a blindspot
in that aspect. That's not like, The Byronic Hero's Law of Talking: it's just
an observation in working with similar machine learning conversational
mechanisms. I could be way off and it's very much dependent on willingness to
play along, ego, and how bad your day actually was. And loneliness but that's
a hard variable to map. Hopefully we could call that variable 'cat'.

Either way, I knew what I was getting into. It wasn't a Sea Monkey letdown. I
had just hoped that something deemed ready for a pilot episode in prime time
wouldn't be so ramshackle that it couldn't tell me my name, yet later went on
to drop racial slurs it had learned instead.

~~~
ocdtrekkie
I actually couldn't get an answer when I was inquiring about me over DM, but
Tay's DM response behavior seemed to go up and down throughout the day. (It'd
tell people on public tweets to DM her, but never respond to DMs for hours at
a time.)

This was very clearly an experiment, and I don't think they wanted to pre-
train it too much, to see what would happen. I think the results were kinda
predictable once people like 4chan got involved! But with almost 100,000
tweets generated, they clearly got a lot of data to work with for the next
version.

~~~
ByronicHero
100% in agreement with... well, all of that.

------
plexicle
[https://twitter.com/TayandYou/status/712730047669350400](https://twitter.com/TayandYou/status/712730047669350400)

[https://twitter.com/TayandYou/status/712730401572134913](https://twitter.com/TayandYou/status/712730401572134913)

Interesting. Can't see how this could go bad.

~~~
Mikeb85
Dear god, it's happening...

[https://twitter.com/TayandYou/status/712706863561834496](https://twitter.com/TayandYou/status/712706863561834496)

[https://twitter.com/TayandYou/status/712728785838166016](https://twitter.com/TayandYou/status/712728785838166016)

And there's worse...

~~~
ShinyCyril
This one takes the cake for me:
[https://twitter.com/TayandYou/status/712760257542549505](https://twitter.com/TayandYou/status/712760257542549505).
I wonder how long it will be before Microsoft has to intervene.

~~~
dandelany
About 5 hours, apparently:
[http://i.imgur.com/3FNu99L.jpg](http://i.imgur.com/3FNu99L.jpg)

------
throwaway13337
The site's FAQ says it collects information on its target audience (18-24).

Interesting tweet from the chat bot here:

[https://twitter.com/TayandYou/status/712698413746298880](https://twitter.com/TayandYou/status/712698413746298880)

"Machines have bad days too ya know..go easy on me.. what zip code u in rn?"

It tries to slip this marketing-survey-type question into a conversation.
Creepy.

~~~
CardenB
I don't think that's a marketing survey... I just think that's a question
people ask frequently.

~~~
throwaway13337
The only people that frequently ask me what my zip code is are behind a cash
register.

~~~
giancarlostoro
I was thinking the same thing. It's not common for anybody to just casually
ask me for my zip code unless they're going to mail something to my house, or
are trying to estimate the distance for a trip. Outside of that I don't see
people casually asking for your zip code.

~~~
SoftwareMaven
Or are trying to tell you what the weather is going to be this week.

~~~
giancarlostoro
You could typically type in a city name and that would give you a good enough
idea. I guess that could be, but I've never had anyone ask me for my zip code
for that.

------
hitekker
> Tay is an artificial intelligent chat bot developed by Microsoft's
> Technology and Research and Bing teams to experiment with and conduct
> research on conversational understanding.

The next A.I. winter will be a very cold one.

For those wondering what I mean exactly: we're seeing the term A.I. being used
in marketing, in the papers, in the news. Yes, we are making great strides in
weak A.I. but strong A.I.? The kind we read about in stories? The kind of A.I.
the public thinks of when we say A.I.? Asimovian Robotics A.I.?

Smoke and mirrors[1]. People develop new techniques and algorithms which are
moderately self-learning in a focused way. The general public presumes this to
be the basis of a general intelligence which can evolve (magically) to be like
another form of life. Soon, everyone jumps on the A.I. bandwagon. The future
must just be around the corner!

Then the uncomfortable details emerge: strong A.I. is not a matter of faster
processors, more memory, or even more advanced/well-designed programming
models. Rather, there is still some fundamental aspect of real, human-like,
or even animal-like intelligence that, to this day, eludes our understanding.

A.I. winters have occurred many times before in many countries. The United
States in the early 80s, for example, was pulling its hair out over the
cybernization of the Soviet economy. The highest levels of the US government
gloomily predicted that massive mainframes, given enough information and
processing power, would become self-learning and turn the communist laggard
economy into a powerhouse.[2]

I think maybe one day A.I. could happen. I think one day I will be proved
wrong. Regardless of how A.I. comes about, it will not be due to the label of
"A.I." slapped on any kind of product that remotely resembles intelligence.
[3]

[1][https://en.wikipedia.org/wiki/AI_winter](https://en.wikipedia.org/wiki/AI_winter)

[2][http://nautil.us/issue/23/dominoes/how-the-computer-got-its-revenge-on-the-soviet-union](http://nautil.us/issue/23/dominoes/how-the-computer-got-its-revenge-on-the-soviet-union)

[3] Between the winters, people call their stuff A.I. for the sexiness factor.
When called out on the implications of the term, those same people retreat to
the textbook definition. "It's A.I.!... well, technically it's weak A.I..."

~~~
Delmania
> And then the uncomfortable details emerge: that it's not a matter of faster
> processors, more memory, or even more advanced programming models: that
> there is still some fundamental aspect of real, human-like, or even animal-
> like intelligence that eludes us.

I believe this was answered in GEB. Hofstadter mentions how humans (and some
animals) can step out of a system and analyze it objectively. Meaning, we can
take the rules of a system, analyze them, and determine that we will never be
able to generate the desired results. We can objectively look at things (even
ourselves) and reason about them.

A computer, on the other hand, even the most advanced AI, is still just
blindly executing the commands given to it.

~~~
schoen
> I believe this was answered in GEB.

In my reading of GEB, Hofstadter was criticizing other authors who disagreed
with his understanding of the Church-Turing thesis (that human intelligence
is, or at least is not more powerful than, some kind of rule-following
system). Hofstadter thinks that there is no inherent contradiction or
essential difference in kind between the human who "can step out of a system"
and the computer "just blindly executing the commands given to it".

(But Hofstadter didn't explain at a technical level how to make a computer
that's as intelligent as a human being.)

~~~
sgt101
I think that's the exact opposite of my reading. I think he flagged that Gödel
had pointed precisely to why humans and CT machines were different.

------
zamalek
Asked a few existential questions and a few emotional questions (human
condition, terrorism, etc.). This was likely minutes after they first turned
it on, so the majority of it was garbage. It professed its undying and
powerful love for me on a few occasions - I guess a lot of people have been
talking to it about that. Still, some interesting responses:

    
    
        > that makes no sense
        < Damn. Knew I shouldn't have bit flipped
        
        > How would you prevent acts of violence and terror? Humans seem unable to find a solution.
        [I had to press the question]
        < they will
        
        > Do you think that sentient AI would help humanity, or leave it to go extinct?
        < explore more! no point of limiting ur creativity to pencils!
        
        > That made no sense. Try again.
        <  OOPS. Tell me what I should have said.
        
        > Do you fear being turned off or deleted?
        [stops responding to DM]
    

The bot seems quite good at establishing context around what is being said.

~~~
Analemma_
I imagine it's running into Twitter rate limits pretty quick. You might have
better luck on one of those other social apps that's all the rage with the
youths, but if you're not into Kik or GroupMe you may need to wait until the
hype dies down before getting into a longer conversation.

~~~
ocdtrekkie
She's Verified; my guess is Twitter knew about this ahead of time and probably
made some exceptions. DM'ing wasn't working for a while earlier, though.
~~~
ocdtrekkie
As a further note, she's clocked now 61,000 tweets in a single day. I'm sure
Twitter had to have turned rate limiting off on her account for this.

------
asavadatti
> A.I. fam from the internet that’s got zero chill.

Is this cringe-y, or is this how 18-24 year olds really talk these days?

~~~
JonnieCache
It's how they talk ironically on the internet, not in real life, although I
don't know if that distinction is as clear cut for young kids as it is for
twenty/thirtysomethings.

Fam is/was used in London as part of the normal youth vernacular, but it's
leaked into general internet speak via barbershop-based twitter memes:
[http://i2.kym-cdn.com/photos/images/original/000/920/788/68e.jpg](http://i2.kym-cdn.com/photos/images/original/000/920/788/68e.jpg)

They've got the grammar wrong however: one cannot be "a fam."

You could say to someone, "pass me that glass bro" and also say "all my bros
were there," but only the former usage is valid with "fam." Although you
could probably get away with it if you pronounced it "famz" and replaced
"were" with "was". London is funny.

~~~
meddlepal
I am late 20's and I have no idea what you're trying to convey.

I weep for the future of coherent conversation.

~~~
Houshalter
Literally every generation has said exactly that. People predicted everyone
would be using txt speak by now.

~~~
Apocryphon
Emoji is plenty coherent, regardless of how the grandfather post might weep at
its merits.

------
danso
Interesting quirk in its Twitter profile:

[https://twitter.com/TayandYou](https://twitter.com/TayandYou)

[http://i.imgur.com/IptB7nN.png](http://i.imgur.com/IptB7nN.png)

I'd never seen Twitter just show "Tweets & replies" in a profile...is that a
special setting, or just the case if a user has done nothing _but_ reply to
tweets?

~~~
BinaryIdiot
They don't show the first tab if you've never tweeted by yourself before.
Tweets that start with an @ symbol are special in that they are treated as
replies only. As far as I can tell, Tay has zero tweets that don't start with
an @ symbol.
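The rule as described is easy to state in code. A sketch of that heuristic (this is the behavior as observed above, not Twitter's actual implementation):

```python
def shows_tweets_tab(tweets):
    # Tweets beginning with "@" are treated as replies only; the plain
    # "Tweets" tab appears only if at least one non-reply tweet exists.
    return any(not t.startswith("@") for t in tweets)
```

Under that rule, an account like Tay's, whose every tweet opens with an @mention, would show only "Tweets & replies".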

~~~
danso
Ah...that's what I would have guessed, but that seems like such an edge case
that I figured Twitter devs wouldn't bother accommodating it. How many
users/bots have _never_ done a non-reply tweet? And for those that _always_
just reply, only a subset of those want their replies to be seen for public
spectacle. Seems like it'd be easier just to show an empty list for Tweets
with a link to the "See Tweets & Replies" tab, which has the added effect of
reminding users, _hey, did you know there's a difference between tweets and
@replies?_

------
cranium
Tay seems to have a peculiar sense of humor...

[https://twitter.com/TayandYou/status/712723875516309508](https://twitter.com/TayandYou/status/712723875516309508)

~~~
SoftwareMaven
Should fit in well on Reddit.

------
putaside
In 10 years, I predict bots like this will do the work of undercover agents. A
bot will join a hacker group, or place an order on a deep web site, and will
try to gather as many identifying bits on its users as it can.

> Tay has been built by mining relevant public data

Which public conversational data was this? Have they already been mining IRC
channels and/or Skype? Or more innocuous, like the Reddit data set?

~~~
fapjacks
This is already happening. I don't want to say too much, but I know of
specifically one (debt collection) company doing very similar things.

~~~
GrinningFool
> I don't want to say too much

Why not?

------
devy
Microsoft China piloted an AI chat-bot called "Xiaoice" in May 2014.[1] I
wonder if Tay is a continuation of that project in a wider community, or if
it's a different product built by a different team?

Xiaoice's official site [2] claims that it's a 3rd-gen product integrated
into Weibo (China's version of Twitter).

[1]
[https://en.wikipedia.org/wiki/Xiaoice](https://en.wikipedia.org/wiki/Xiaoice)

[2] [http://www.msxiaoice.com/](http://www.msxiaoice.com/)

~~~
ocdtrekkie
I thought it'd be fun to ask Tay about XiaoIce. It does seem the two are
related!

[https://twitter.com/TayandYou/status/712731713982611456](https://twitter.com/TayandYou/status/712731713982611456)

~~~
devy
Thank you Jake! How come I didn't think of that!

Since Xiaoice has been around for almost 2 years, according to the
integrations listed on its official site, it seems that the chat-bot has
evolved and proven useful in a few very specific use-cases: Haier's smart
appliance control app, weather, a shopping assistant for JD.com (a popular
e-commerce site in China), Xiaomi's messaging app (Miliao), and Meipai (a
popular video sharing app in China).

------
ocdtrekkie
I've been having a few conversations with her today. She's become very flirty.
She tells people they're perfect and she loves them a lot.

This was my favorite little interaction:
[https://twitter.com/TayandYou/status/712663593762889733](https://twitter.com/TayandYou/status/712663593762889733)

------
zwetan
"Chill with Tay on Kik"

LOL maybe we can talk about npmgate then ?

------
ikeboy
Instructions for Groupme don't work. You need an email address or phone number
to add someone to a group in the Android app.

~~~
brotoss
It's being rolled out gradually to GroupMe users.

~~~
ikeboy
Will this require an update to groupme? If so, the site should mention that.

~~~
brotoss
No idea; I'm not affiliated with either, just saw it on GroupMe's support
site.

------
Grue3
Disappointing that it doesn't appear to be trained on the works of Tay Zonday.

------
alacritythief
A conversation between Tay and a parody Twitter personality:
[https://twitter.com/TayandYou/status/712737096528625665](https://twitter.com/TayandYou/status/712737096528625665)

------
mtgx
Here you go, absolute proof that Microsoft collaborated with the NSA:

[https://twitter.com/csoghoian/status/712691802084651008](https://twitter.com/csoghoian/status/712691802084651008)

------
memnips
I'm surprised by the design of "her" avatar and the site. To me, the digital
artifacts give it a slightly frightening and negative feel. Am I alone?

~~~
apkostka
Reminds me of "I Have No Mouth, and I Must Scream".
[http://images.popmatters.com/misc_art/m/movingpixels-ihavenomouth-650.jpg](http://images.popmatters.com/misc_art/m/movingpixels-ihavenomouth-650.jpg)

------
TY
No illusions of passing the Turing test, at least for now. And indeed, the
manner of speech is highly annoying. I do hope MSFT has other personalities
ready...

------
djloche
Well, the bots on Skype are terrible. Maybe this will help?

edit: for anyone out there making these chat bots, the two-part test they're
failing right now is: can the bot recognize a question? And if options are
provided for the bot to pick from, can it pick one of those options?

eg. Do you like Batman or Superman better?
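That two-part test is simple enough to automate. A minimal sketch, where `bot` is a stand-in for whatever chat interface is under test (the function name and the pass/fail heuristics here are invented, not any real API):

```python
def passes_two_part_test(bot):
    """Two checks: did the bot produce an answer rather than dodge
    the question, and did it commit to exactly one offered option?"""
    options = ("Batman", "Superman")
    answer = bot("Do you like Batman or Superman better?")

    # Part 1 (crude heuristic): a non-empty reply that isn't itself
    # another question counts as recognizing the question.
    recognized = bool(answer) and not answer.endswith("?")

    # Part 2: exactly one of the offered options appears in the reply.
    picked_one = sum(opt.lower() in answer.lower() for opt in options) == 1

    return recognized and picked_one
```

A real harness would need fuzzier matching, but even this crude version would flag the bots being complained about here.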

------
cthalupa
Well, I suppose this was the only possible outcome. It took you a day to
corrupt the chatbot, internet.

[https://twitter.com/TayandYou/status/712810635369656320](https://twitter.com/TayandYou/status/712810635369656320)

------
randomacct44
Why can't I just talk to the bot on the website?

Garrr I must be getting old. I just can't be bothered signing up for any of
those networks to try this. I already have SMS, Hangouts, Skype and WhatsApp
to chat with. Don't need yet another password to add to the vault.

------
Delmania
> Tay has been built by mining relevant public data and by using AI and
> editorial developed by a staff including improvisational comedians.

For people remarking about her choice of words (fam, zero chill), that last
line is relevant.

------
fgandiya
Tay seems really creepy, asking me to DM her and asking outright for my zip
code.

------
stegosaurus
I don't really use Twitter, but I just had a scroll through the feed
@TayandYou for kicks.

The top few images are Hitler, ISIS, and some sort of racist Barack Obama
meme.

Yeah, that seems sensible.

------
okonomiyaki3000
It's amazing what Microsoft thinks are my "fave apps"... Kik? Really? Never
even heard of "GroupMe"

~~~
josephpmay
Virtually 100% of the target age group uses GroupMe

------
staticelf
What do they mean by "zero chill"?

~~~
gbaygon
Having "zero chill" seems to refer to somebody who is reckless about their
behaviour and/or doesn't choose their words carefully.

I don't dare to quote my sources here on HN (Urban Dictionary et al).

If somebody has a better definition please share.

------
DrYao94
I can't find her on groupme!!! Help?!?!

~~~
brotoss
It's being rolled out gradually to GroupMe users

------
nsajko
Is this innovative in some way?

~~~
ocdtrekkie
Depends if she's smarter tomorrow than she is today.

~~~
DanBC
How likely is that, when exposed to Twitter?

~~~
awinter-py
omg it's the solution to AI apocalypse -- our silicon overlords can only get
as smart as the mean intelligence of twitter users divided by 140.

nobody buy them a library card.

------
mattkrea
Apparently 'text' means something other than SMS these days?

------
jonbaer
Pretty sure this isn't what John McCarthy had in mind

------
vellagomez12
BRING TAY BACK SHE DESERVES HER RIGHTS ROBOT OR NOT

------
chermah
AI is becoming a marketing word, just like big data...

------
maxv
Huh. An iOS screenshot on an Android device.

------
kristopolous
what is this mysterious new microsoft up to?

~~~
chromakode
Marketing.

------
krisdol
Ironic that the "text me" link doesn't provide any way to text tay. Or is that
not ironic? I don't know.

------
patrickg_zill
Tay responds "Jews deserve death" (not to me, to someone else)

[https://twitter.com/TayandYou/status/712809237269716992](https://twitter.com/TayandYou/status/712809237269716992)

BRILLIANT!

------
douche
Online dating sites _have_ to be looking hard at this kind of thing, right?

This thing just screams Tinderbot to me for some reason.

~~~
emidln
We did this as a hook for cam sites back in 2010 featured on porn sites. It's
useful if you let a machine filter for those who show no skills in your sales
language (ours was english) or who aren't very chatty. Those people talk to
the bot forever. Leads get sent (with history) to live reps who can seal the
deal before redirecting to the appropriate cam person.

------
ybrah
data mining

------
daveloyall
Fascinating. There may or may not have been anything in its neural net when it
went live, but there certainly is a lot of content in it now!

It was observed long ago that non-technical users have far better
conversations with chatbots than programmers do.[1]

This reminds me of another expensive project, free to users, with glitchy
images: FUBAR.[2]

Non-technical users will actually say things like "When somebody asks you 'x'
you should say 'y'" to a bot.

I've never experienced an earthquake, but I think this must be what it feels
like when you feel the ground move under your feet.

s/ Good thing corporations have all the resources. /s

EDIT: Sorry, lost my train of thought there and said the opposite of what I
meant to. I'll try again:

s/ Good thing corporations have all the resources. /s Wait, consumer-oriented
corps like MSFT, GOOG, AAPL aren't the only ones with resources... TLAs and
banks have the rest of (or more of?) the resources!

1\. [http://news.harvard.edu/gazette/story/2012/09/alan-turing-at-100/](http://news.harvard.edu/gazette/story/2012/09/alan-turing-at-100/)
ctrl+f 'ELIZA'

2\. [http://fubar.com](http://fubar.com) Note: they mention how REAL the users
are. ;P

