Hacker News
Advances in Conversational AI (facebook.com)
122 points by moneil971 81 days ago | 15 comments

I gotta say, when AI is able to converse like humans, a lot of bad stuff will happen. People are used to the other conversation partner having self-interest and empathy, and being reasonable. When enough bots all run a "swarm" program to move conversations in a particular direction, they will overwhelm any public conversation. Moreover, in individual conversations, you won't be able to trust anything anyone says or negotiates. Just like playing chess or poker online now. And with deepfakes, you won't be able to trust audio or video either.

The ultimate shock will come when software can render deepfakes in realtime to carry on a conversation: as your friend, but not actually your friend. Or as a politician who "said crazy stuff" but really didn't, yet it's within the realm of believability.

I would give it about 20 years until it all goes to shit. If you thought fake news was bad, realtime deepfakes and AI conversations with “friends” will be worse.

(If you go 50 years out, people can start building sleeper bots with reputations to subvert community consensus, e.g. about science or politics. All our systems will be subverted. Don't believe me? How far are we from that already, with "skeptical" blog comments and tweets, or sorta-believable allegations ruining people's reputations?)

I suppose that's how the Butlerian Jihad[1] really gets started, then?

[1] https://en.wikipedia.org/wiki/Butlerian_Jihad

Links to papers:

https://arxiv.org/abs/1811.00207 - Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset

https://arxiv.org/abs/1811.01241 - Wizard of Wikipedia: Knowledge-Powered Conversational agents

https://arxiv.org/abs/1810.10665 - Engaging Image Captioning Via Personality

https://arxiv.org/abs/1902.08654 - What makes a good conversation? How controllable attributes affect human judgments

And a demo of a bot that they've produced: https://www.facebook.com/Beat-The-Bot-212188996195556/

The demo bot page keeps returning: "You do not currently have access to this page". If anyone has access and can paste a few example conversations, it would help in judging whether the results are good enough for production bots.

    Human: I'm a huge football fan - the Eagles are my
    favorite team!

    Knowledgeable model: I've always been more of a fan
    of the American football team from Pittsburgh, the
    Steelers.

While an impressive improvement over "I like football too," this stilted attempt at incorporating relevant information ends up coming off as "how do you do, fellow humans!?"

It's interesting that it identified this as being about American football and not, say, what the rest of the world calls football. The name of the team gives that away.

What makes it stilted is the superfluous repetition of "football", as if the context were unclear, which for a human it isn't. Clarifying further by stating "American football" adds to the stiltedness, as does then naming the location on top of that: a fan who knows their own team would be aware of the other teams, precisely because they are a huge football fan.

The more human way of responding would have been something along the lines of: "I've always been a Steelers fan myself."

Still, the banter and smack talk of competitive sports fans: for an AI to blend in with that and get the balance right will be interesting to see play out. Expect the AI to go into parenting mode and ask why all the negativity.

Yes, still firmly in the uncanny valley.

The problem is that they are trying to cheat, rather than truly understanding what somebody is saying and responding appropriately.

That they are now addressing the consistency of their own model shows that they are barely able to understand what the model itself is saying, let alone what the other party is saying.

That's the trick of these ML approaches, though: we trade the cost and time of building semantic understanding for something that appears to work, but that will produce output with no real semantic grounding.
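The "appears to work without understanding" trade-off can be made concrete with a toy retrieval-style bot. Everything below (the canned replies, the scoring rule) is invented for illustration and is far cruder than the models in the linked papers; the point is only that surface pattern matching can produce a topical-looking reply with zero grasp of meaning:

```python
# Toy retrieval-style "bot": picks the canned reply that shares the most
# words with the user's message. It can look fluent and on-topic without
# any semantic understanding.

def word_overlap(a: str, b: str) -> int:
    """Number of words two strings share (crude surface similarity)."""
    clean = lambda s: set(s.lower().replace("'", "").replace(".", "").split())
    return len(clean(a) & clean(b))

CANNED_REPLIES = [
    "I've always been a Steelers fan myself.",
    "I like football too.",
    "The weather has been great lately.",
]

def reply(message: str) -> str:
    # No model of meaning, just surface pattern matching over word sets.
    return max(CANNED_REPLIES, key=lambda r: word_overlap(r, message))

# Wins on the surface words "a" and "fan", not on understanding:
print(reply("I'm a huge football fan"))  # I've always been a Steelers fan myself.
```

Note that the winning reply is chosen by shared words alone; swap a few words in the message and the bot will happily pivot to the weather.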

I dunno, probably about as good as Zuck is doing.

To me it sounds like a badly trained foreign spy. Which is actually pretty good if the goal is to sound human-like!

It's a bit weird that having the agent make up stuff about itself randomly ("I'm a construction worker" or "I build antique homes and refurbish houses") is considered a good response. Something like "I don't need to make money, I'm a bot" is what I'd expect. Or alternately, in a game situation, the bot could be playing a particular role it's given beforehand and judged by how well it stays in character.

Yeah I guess it would depend on context. If they want to use a bot for say, training purposes, it would make sense to play a role. But if I'm talking with a virtual assistant, I'd prefer not to be lied to.
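The "given a role beforehand and judged by how well it stays in character" idea can be sketched as a toy check. The persona lines and the scoring rule below are invented for illustration; real persona-conditioned models (as in Facebook's persona-based dialogue work) generate from the persona rather than filtering replies afterwards:

```python
# Toy "stay in character" check: the bot is handed persona facts up front,
# and a candidate reply is scored by whether it touches any of them.

PERSONA = [
    "i am a construction worker",
    "i refurbish antique houses",
]

STOPWORDS = {"i", "a", "am", "the", "and", "in", "my"}

def in_character(reply: str, persona=PERSONA) -> bool:
    """Crude check: does the reply share a content word with the persona?"""
    persona_words = {w for line in persona for w in line.split()} - STOPWORDS
    return bool(set(reply.lower().split()) & persona_words)

print(in_character("i build antique homes and refurbish houses"))  # True
print(in_character("i work in finance"))                           # False
```

A real judge would need to catch contradictions ("I work in finance") rather than just reward topical overlap, but even this crude version shows how a fixed, disclosed role gives you something objective to score the bot against.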

They created an AI bot game as well.

They give you a character to play, and you are shown two answers: one from a bot, one from a real person.

You respond to the one you think is best. Interesting, but it took too long to connect me to another 'player', so I didn't get a chance to actually play yet.

Fascinating topic. It's unfortunate that a voice in the back of my brain has to say, "Oh, I bet if they get good at this they'll just use it to target ads better."
