Hacker News

This is the thread (it was a retweet with comment, naturally).

https://twitter.com/ylecun/status/1275162528511860737

Whatever your position, Yann engages on substance, and Timnit is obnoxious:

https://twitter.com/timnitGebru/status/1275191341455048704?s...

https://twitter.com/timnitGebru/status/1275191515380215808?s...

Basically the worst of online discourse, but in this case one-sided. Yann is discussing in good faith and Timnit is not.

If this is the normal way she interacts with people she disagrees with, it's no surprise they didn't want her to stay. The public tweeting about it doesn't inspire much confidence either.

https://slatestarcodex.com/2014/02/23/in-favor-of-niceness-c...




> Whatever your position, Yann engages on substance, and Timnit is obnoxious:

For what it's worth, I and many others disagree with this characterization. I find Yann comes across as incredibly condescending and holier-than-thou in their interactions. Nor does Yann engage on substance: he refuses to take the time to engage with, or even acknowledge that he has read, the relevant academic literature (which Timnit repeatedly cites).


People can read the thread for themselves and decide.

I see a quote tweet from Timnit with "I'm sick of this framing...listen to us", then misquoting what he said, ignoring his replies, and following up with "I'm disengaging for my sanity...not worth my time...Maybe your colleagues will try to educate you..." etc.

She attacks him (so he replies) and then she ignores him and talks down to him.

I don't have a dog in this fight, I'm just an outside person reading this - I suspect most people reading that thread would think Timnit's tweets are obnoxious. Imagine if the people were switched.

I wouldn't want to work with someone who argues that way when they disagree.


Not to get too heated or anything, but the framing of "People can read the thread for themselves and decide." concerns me. Saying "I think A", then responding to people who say "I think ~A" with "hey hey hey, let's let people decide themselves" is pretty stifling. I don't believe that was your intent, but that's one way it reads.

I appreciate that you've explained your reasoning for your position, though.


Yes, you've made your position clear. I'm simply asking that you acknowledge that not everyone agrees with you, instead of making sweeping statements that, whatever your opinion, Timnit was "obnoxious". You have your view of the events, and I'm not trying to convince you to change it. I'm simply saying that other people disagree. No need to continue to try to justify yourself.


Of course - there will almost always be some number of people who disagree with any position.

That doesn’t make all positions equally valid.

Stating that others disagree seems like a banality?

Obviously this is the case, just look at this thread and Twitter.

I just think they’re wrong.

I think attempting to explain why I think the way I do is kind of the point of the discussion. If I can’t justify it then I’d want to change my mind.


I think the quotes given are persuasive evidence, though. “I’m disengaging for my sanity”?


I guess the way I think about it is this: Yann has a long history in ML. He's probably had to deal with bias problems for decades, so he's probably pretty experienced with it. Now he's heading up some Facebook ML stuff, and on a daily basis he watches hundreds to thousands of engineers work on systems that process and learn from billions of users. I feel like after you do that for a while, you gain enough wisdom and experience to deserve to be engaged with respect and thoughtfulness. She has repeatedly engaged in bad faith, with misleading interpretations of intent, and is just sort of really "attacky". Sure, he's a bit condescending (I've seen the same thing and it annoyed me for a bit, but then I read about what he's done and realized: he's got tons of experience and data about this and works with it at scale constantly).


My understanding is that this is not the first time they engaged on this type of topic, and Yann has a history of ignoring other people who brought up similar criticisms to him (at conferences, etc.)

At some point you lose the assumption of good faith, and deserve to be called out for refusing to learn.

For what it's worth, I'm well aware of who Yann is, and was at the time as well. That doesn't make him immune to being wrong. (Nor, by the way, do I see any bad faith in her initial tweet. I see exasperation, but not bad faith).


I lost the assumption of good faith already in her first tweet, in particular

>You can’t just reduce harms caused by ML to dataset bias.

She's already attacking a strawman right there. Yann did not deny any harms caused by ML.


Nor did Timnit claim that he was. Her disagreement was about the causes of harms, not the existence of resulting harms.


There was no disagreement. Yann didn't say anything about harms, nor did the guy he was replying to (who talked of dangers). In particular, Yann did not suggest in any way a) that there are no harms or b) that harms are only related to biased training sets. Yann was commenting on the outcome of a particular research project and how its use of a biased training set produced the outcome that was observed.

Timnit brought up harms first, then pretended Yann had marginalized such harms and attributed them solely to biased training sets. And then she viciously attacked that strawman. That's a bad faith argument.

I can appreciate that she might have been indeed generally sick and tired as she writes, and can appreciate that sick and tired people will not always manage to be nice or overcome their own biases and assume good faith all the time from the other party; we're all human after all. But that doesn't change anything about her argument being made in bad faith.


> Yann didn't say anything about harms, nor did the guy he was replying to (who talked of dangers)

This feels like unreasonable semantics. The dangers are precisely the danger of causing harm. The harms therefore are concrete results of theoretical dangers manifesting. They aren't different.

> In particular Yann did not suggest in any way a) there are no harms

I agree, and I've said as much.

> b) that harms are only related to biased training sets

He did, insofar as he suggested that the dangers were due solely to bias in the training set, which is implied when he says that if you train the same model on different data, everyone looks African. Like yes, that is true, but it doesn't reduce the harms (or dangers, if you want to be precise). It just creates a different set of biases with a different set of potential harms (which again, are "dangers").

I'm not seeing a strawperson.


and I believe that Yann knows 100X more about the causes of harms than Timnit does, so lecturing him is just wasting everybody's time.


You've done a PhD. Why on earth would you believe that someone, even an expert in the broad field, would know more about a particular topic than someone who specializes in that area of research?

Do you think Yann knows 100X more about every area of ML than everyone else, or is it just fairness, accountability, and transparency that he happens to be more knowledgeable in than arguably a founder of the subfield?


It's pretty simple: he made ML work before almost anybody else did, kept working on it during the deep network explosion, and is now running Facebook AI, which has to deal with these sorts of problems, with practical solutions, on a daily basis, with billions of users. That sort of daily experience counts for so much that I would place him in the "knows 100X more about every area of ML" category (excluding rare subfields).

It's rare but I have encountered people outside my field who knew more than I did about my field, because of their daily experiences over decades, or their raw intelligence. Yann seems to have both.


So are you suggesting that Yann has more expertise on what you're working on now (which I know, and would consider to be an ML subfield in the same vein as Timnit's), and therefore that you would defer to his expertise when he says things that show nothing more than an undergraduate-level understanding of the topic?

Because I'm only a dilettante in the AI ethics space (and admittedly ML as a whole), and I can describe the flaws in Yann's reasoning. Blind deference of that level isn't rational.


I've argued with Yann about my field (we share a friend on Facebook) and in that case, he did turn out to be technically correct.

I don't agree with your assessment of his "reasoning".


Can someone explain to me why people are still taking tweets seriously?



