Large-Scale Pretraining for Dialogue (github.com)
22 points by vok 5 days ago | 4 comments





Is there a purpose to this, other than tricking the user into thinking they are talking to a human?

On the one hand, if you have a large database of facts on which you can train this to accurately answer questions, why not just index the database in Elasticsearch and let your users search over it (see the sketch below)? That is just as effective, a lot cheaper, and a lot more transparent to your users.

On the other hand, if you _don't_ have such a database, the model will just end up lying to your users. That is basically a worst-case scenario, because you're abusing trust you gained by pretending to be human.

I will take wikipedia + elasticsearch/google/bing/duckduckgo over a chatbot trained on wikipedia any day.
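
To make the retrieval alternative concrete, here is a minimal sketch, assuming the Python "elasticsearch" 8.x client and a hypothetical "facts" index with a single "text" field (the index name, schema, and host are all illustrative, not anything from the linked repo):

    # Minimal sketch: answer user questions with full-text search over a fact
    # store instead of a generative chatbot. Index name and schema are
    # hypothetical.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Index one fact document (illustrative content).
    es.index(index="facts", document={"text": "The Eiffel Tower is 330 metres tall."})
    es.indices.refresh(index="facts")

    # Answer a user query with a plain match query; results are ranked and
    # traceable back to the source document, so nothing is made up.
    resp = es.search(index="facts", query={"match": {"text": "how tall is the eiffel tower"}})
    for hit in resp["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["text"])

The point being that every answer here is a document the user can inspect, rather than text sampled from a language model.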


"The human evaluation results indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test."

Paper here: https://arxiv.org/abs/1911.00536
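
For anyone who wants to poke at it: a minimal sketch of single-turn generation with the released DialoGPT weights via the Hugging Face transformers library. The prompt and decoding settings below are my own illustrative choices, not the paper's evaluation setup:

    # Single-turn generation sketch with a released DialoGPT checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

    prompt = "Does money buy happiness?"
    # DialoGPT expects the turn to end with the EOS token.
    input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

    # Generate one response turn; EOS doubles as the pad token.
    output_ids = model.generate(
        input_ids,
        max_length=128,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
    print(reply)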


The examples are very convincing. I've just stopped trusting online posts as a way to gauge public opinion.

How do we regulate this thing? Forbid bots from social media and chats altogether? Force them to introduce themselves as machines?


> The right amount of freedom is the freedom to do as you please, as long as you don't hurt people or property

I'll be damn'd if that bot ain't a libertarian



