
Fact-checking chatbot “Meiyu” shuts down dubious family texts - petethomas
https://www.wsj.com/articles/know-it-all-robot-shuts-down-dubious-family-texts-11551370040
======
roywiggins
"Clyde Lin was scolded by his uncle after the 39-year-old pilot brought Meiyu
into the family chat group late last year. “Who is this?” Mr. Lin said his
uncle demanded. The family group contained too much personal information to
give access to a bot, Mr. Lin said his uncle said."

His uncle is 100% correct. Adding a bot to a conversation thread quite likely
means literally connecting a hose to your family discussions and piping all of
them to a random startup. You wouldn't add a random human fact-checker, and
even with the illusion that the only entity reading your texts is a robot,
there's no guarantee that will always be the case.

"Ms. Hsu, the developer, said Meiyu isn’t designed to collect personal data on
users." 1) objectives change, 2) ownership changes, 3) it very likely collects
personal data whether it means to or not. Brilliant. I literally could not
think of a better target for a state actor than something like this, which
could give pretty deep insight into what people are thinking about you when
they talk privately.

~~~
deogeo
> isn’t designed to collect personal data on users

Is that the same as "doesn't"? And does that apply to all the other components
that get access to the chat logs it sends back home?

~~~
Bartweiss
I can't help thinking of the DoD's definition of 'collection', where gathered
data has to be processed and analyzed to count as 'collected'. Even if the bot
doesn't do anything more sinister than phone home with error logs to help fix
bugs, that should still be viewed as compromising the identities and message
contents of everyone participating in the chat.

~~~
aij
> where gathered data has to be processed and analyzed to count as 'collected'

I thought even then they didn't count it as "collected" until a _human_ sees
the result.

So they could ~~collect~~ record all your conversations, ~~analyze it with a
computer~~ have a computer analyze it, and then only have a human look at it
if they think they will be able to justify having collected it.

~~~
Bartweiss
> _I thought even then they didn't count it as "collected" until a human sees
> the result._

Yeah, I was never quite clear on whether it meant "scraped for data" or
"scraped for data and then that data was used". I tried to look it up before I
posted, but since "we don't collect..." seems to have been a lie under any of
those definitions, I'm not convinced it actually had a clear 'technical'
meaning.

Of course, if reading algorithmic output without specific records doesn't
count as seeing the data, there's always the possibility that info is making
it all the way to the 'trigger a drone strike' step without ever being
"collected"...

------
roywiggins
> In a nation with long-held Chinese traditions of etiquette, however, Meiyu
> is proving to be socially inept. Online chat groups often comprise several
> dozen extended family members. Openly disputing facts with elder relatives
> is considered bad behavior.

To heck with "Chinese traditions", "openly disputing facts with elder
relatives" does not go down great in the West either, unless you've got a
particularly feisty uncle who likes political debates.

Literally my first thought on seeing the headline was "wow, sounds like a
great way to get disowned" and that was me projecting my American context onto
it.

Have these people never heard of the phrase "pick your battles"? A bot has no
tact, and will pick fights with fairly trivial nonsense and deeply problematic
lies with the same assiduousness. Lots of things are not, strictly speaking,
true. Not all false beliefs are damaging in the same way. Being technically
correct is the worst kind of correct.

~~~
technofiend
>Openly disputing facts with elder relatives is considered bad behavior.

I was envisioning the opposite - like posting anything critical of the
government's handling of Tiananmen Square would have your little uncle spybot
posting "corrections" to the chat explaining the government sanctioned view.

------
forgingahead
Current top comment on the WSJ:

===

The examples given are "contrarian bot" rather than "fact checking bot."

e.g., "The doctor quoted in this post does not have proper qualifications."
Fine, but that doesn't mean he's wrong.

e.g., “The internet has a lot of information on drinking water. Doctors say
not all of it is credible.” No kidding. But this doesn't mean "stay hydrated"
was bad advice.

I'm having difficulty seeing what the value of something like this is. I
suppose if you're too timid to push back against other people it might be nice
to have a bot to do it. Like the example where it's culturally inappropriate
to push back on your elders, so you introduce a piece of software to
contradict everything they say.

Fine, maybe that feels good in a strange way, but it seems dysfunctional,
passive-aggressive, and unproductive. The bot isn't giving advice, it's just
saying "that's YOUR opinion" to everyone else's advice.

===

~~~
roywiggins
I've known people with the same conversational style as this bot, and they are
absolute hell to talk to unless you decide to gamify it and play with finding
the least objectionable thing they could find fault with.

~~~
Bartweiss
I'll bet treating this bot as a gatekeeper would actually be pretty
interesting - try the same claim or story repeatedly and see what you can do
to sneak it past.

- Can you find the same stuff on a more-reputable site? (Perhaps the bot
doesn't know about those "user blog" sections sometimes hosted under news-site
domains?)

- Can you find a different attribution for the same quote that doesn't throw
a 'credentials' flag? (Perhaps _less_ information is better, because writing
"a doctor said" will impede checking up on the speaker's credentials?)

- How about just contorting the phrasing of sentences until the bot can no
longer extract anything useful about their meaning? What does it do with
double negatives and nested clauses?

I don't want this bot in my group chats, but I could imagine using it as an
adversary to refine a trolling bot. If you wanted to train up a realistic
version of Shiri's Scissor, this might be an effective way to craft posts that
are especially hard to refute.
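The probing loop described above can be sketched in a few lines. This is a
hypothetical harness, not Meiyu's real API: `bot_flags_claim` is a stand-in
stub (a naive keyword check) playing the role of the fact-checker, and
`probe` simply records which phrasings slip past it.

```python
def bot_flags_claim(text: str) -> bool:
    """Stub fact-checker: flags any message quoting an unnamed 'doctor'.
    A real bot would query a rumor database instead of matching keywords."""
    return "doctor" in text.lower()

def probe(variants):
    """Return the phrasings the (stub) bot fails to flag."""
    return [v for v in variants if not bot_flags_claim(v)]

variants = [
    "A doctor says drinking warm water cures colds.",         # flagged
    "An expert says drinking warm water cures colds.",        # swapped attribution
    "It is not untrue that warm water may not fail to help.", # contorted phrasing
]

print(probe(variants))  # the two rephrasings evade the naive keyword check
```

Against a real bot you'd replace the stub with whatever chat interface it
exposes; the interesting part is that the loop itself is trivial, which is why
"refine a trolling bot against it" is such a plausible failure mode.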

------
Rychard
Outline link: [https://outline.com/https://www.wsj.com/articles/know-it-
all...](https://outline.com/https://www.wsj.com/articles/know-it-all-robot-
shuts-down-dubious-family-texts-11551370040)

------
shesee
Hi, I'm the author -- the comments on Hacker News are more trenchant,
especially on the tech aspects.

1. Both auntie "Meiyu" and "CoFacts" are open-source projects; we share the
source code on GitHub.

2. I can't disagree that the search results might be manipulated, but for now
there are only a limited number of volunteers to "clarify" most of the rumors
on CoFacts.

3. Logs: that's a fair point. Since I deployed this project on Heroku (which
retains only limited logs), I haven't yet had time to clean up how the logs
are handled. The comments about log storage make sense to me; I won't pretend
this concern means nothing to me. It matters, and I will update it for sure.

I think the WSJ actually skipped most of the background on why we do this. Not
only has Taiwan's election been affected by deliberately manipulated rumors,
there is also endless medical misinformation. Furthermore, most media outlets
in Taiwan hold strong biases; every day there is fake news (even quite crude
fake news) bombarding everyone's brain, day after day, until you give up
trying to clarify anything.

And these media outlets have the resources and centralization to publish fake
news, so fighting them head-on is in fact quite hopeless, unless we try to
decentralize our information and give everyone the chance to verify claims and
speak up.

So Meiyu, imo, at least clarifies rumors and misinformation for you, and
repeats it tirelessly. I can totally understand that it's annoying for
everyone (that's another reason I named it Auntie "Meiyu", after a very common
name in the senior generation; I'd like this tiny service to feel a bit
heartwarming and friendly), but we must go for it. Not only could someone's
life or health be misled by misinformation, but our island could even be
ruined by elaborate political rumors as well.

------
dwighttk
To the person using this to avoid confrontation: adding a bot that responds
‘false’ is at least as confrontational as you replying that same way.

~~~
jandrese
It's a joke character trait from The Office, in bot form.

Or [https://i.kym-
cdn.com/photos/images/newsfeed/001/191/035/135...](https://i.kym-
cdn.com/photos/images/newsfeed/001/191/035/135.png) in bot form.

I get the noble cause to combat false information right at the roots,
especially when people are too polite to do it themselves, but making a bot to
be rude and obnoxious for you is still rude and obnoxious.

------
zuypaweu
What's even more brilliant is that someone could potentially influence massive
numbers of people with this. If you hear something on TV you'd be pretty
skeptical, but when it comes from someone you love, then it gets
interesting...

very interesting..

They could make people believe that WW2 never happened, or some other garbage.
It all depends on who'll be verifying what's true and what's not.

------
civilian
I think that people suggesting their own variants on cold-treatments is a way
for them to show they care.

"Drink a lot of water!" "Put on socks before you go to bed!" "Neti pot!"
"Drink lots of tea!" All of these are kind of common knowledge. I think the
act of encouraging people to rest up and take care of themselves is just a
generic way to show you hope they get better. And maybe, if they really are so
sick that they aren't thinking clearly, it'll then serve as a reminder to get
some tea.

------
qwerty456127
Cool! I hope it becomes available in more languages! It would be nice,
however, if you could customize its manner and its level of skepticism (e.g.
I'd prefer it not to claim anything is false unless it's scientifically proven
false and it can link to the proof; in my opinion, "the doctor's qualification
is questionable" is not a sufficient reason to dismiss an idea).

------
rajacombinator
Who needs thoughtcrime police when you can turn family against each other!

