Hacker News

This person forced Yann LeCun (one of the top 3 AI/ML researchers today, head of Facebook AI, and an ML researcher since the 80s) to give up on Twitter after she took exception to this tweet of his: https://twitter.com/ylecun/status/1274782757907030016

She lives her life attacking people she disagrees with, and brings race/gender into any conflict where she's involved.

I am surprised Google hired her, because it was only a matter of time before they'd become collateral damage in this roving tornado of hate and aggression.

From what I have seen today, she gave them an ultimatum: do this, this, and that, or else I will leave. They decided not to do the things she demanded, and then fired her before she could do any more internal damage.



So I don't know enough to form a well-informed opinion; however, it seems part of the grievance on her part is that she was not given the names of those who peer reviewed her paper.

If internally she has a history of bringing race or sex into every discussion, I could see why they'd rather remain anonymous. Even being accused of racism can ruin someone's career in the current climate.


Are reviewers of academic papers not typically anonymous anyway? Or is this some kind of internal review system rather than academic peer review?


This was an internal review of the paper, as far as I can tell the paper was never published outside of Google.


You're commenting on an article where a person was fired due to being a racial/gender justice advocate, and you're worried about "Even being accused of being racist can ruin someone's career in this current climate" ???


> a person was fired due to being a racial/gender justice advocate

Nope, they were fired for being a douche.


While they were a jerk, it actually was far more specific than that. According to Jeff's response, first they bypassed a 2-week review process:

> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

Then, when called out on it, they issued "an ultimatum" and said they would leave if certain demands weren't met. I'm not sure what every demand was, but one of them was indeed disclosing the names of the reviewers.

> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date.

Jeff declined to meet the conditions and let them go. Seems pretty straightforward to me.


And this is what HN has devolved to, calling a leading Ethical AI researcher a douche!


> a leading Ethical AI researcher a douche!

Ex-ethical AI researcher; we don't know what her next job will be, and she certainly lost an opportunity to lead by losing her place at Google. We have yet to see whether this maneuver will cost her social capital or add to her reputation. Also, do you seriously believe being an ethics researcher automatically excludes one from misconduct?

From what we know so far, she seems to have overplayed her hand: she assumed she should be treated as above the rules, and when that failed, she wanted to take names and encouraged mutiny, in a company where she is a junior manager. Not only are there actual legal complications for a manager doing that, it also sounds narcissistic and toxic.

Her goals might be laudable, but she executed poorly and emotionally, and lost an immediate chance to deliver ongoing impact for the ethics causes she cared about. I hope aspiring activist-ethicist-researcher-techies take note of this. Tantrums don't deliver long-term impact. Narcissism clouds your judgement. Impassioned supporters mislead you. You have great power, if you wield it responsibly and pay attention not to overestimate it.


While there is some overlap, generally speaking Identity Politics != Ethics. She's an Identity Politics researcher, narrowly focused on race and gender topics. Not some philosopher or religious figure interested in the flourishing of humanity at large. Here is an abstract from a larger work, presumably her PhD thesis: https://arxiv.org/pdf/1908.06165.pdf. Critical gender and race theory through and through.


Amen. Her paper cites dated problems in AI models. These problems have been known for at least 4-5 years, and the responsible AI community has been working on them. No progress is mentioned. The paper reads like an editorial rather than actual AI research.


You're trying to discredit her for... writing a well-sourced paper on race & gender discrimination in AI.


Well written, well sourced, and very narrow for someone who fashions themselves an 'ethicist'. There are more things in life beyond race and gender as seen through the prism of critical race/gender theory. I heard somewhere of this strange word 'love'; something to look into. She's young, smart, capable, and possibly well meaning in spite of her unbecoming behavior. Perhaps one day she'll grow to see the struggle common to all born of a woman.


You can be both a brilliant scientist (I'm not claiming Gebru is one) and a douche. They're not exclusive.


They wouldn't even share with her the feedback on her paper to give her the opportunity to fix it. They didn't have to say who the feedback was from, but they could've at least told her the substance of the feedback.


To quote from Dean's email:

"Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback."

I can imagine wanting to know the feedback, but she didn't ask for that; she asked for the identity of every person they consulted.

It could also be that disclosing the feedback would've identified the individual.


A few things don't add up then.

Timnit claims that she was told at a meeting called on short notice that her paper was being retracted because of anonymous feedback. She said she asked about the substance of the feedback and they refused to tell her.

After that meeting, she was told she could be read a private document with some, but not all, of the anonymous feedback.

That led to Gebru sending a frustrated email to her colleagues, and a separate email with conditions she was willing to resign over. In that email, it seems she asked to know the identities of the people who gave the feedback that led to the demand for retraction.

Jeff Dean says that Timnit wanted the identity of the people who gave the feedback as a condition for staying. Then he describes what that feedback was. He doesn't acknowledge that Timnit was not given the substance of the feedback in the first meeting when she asked for it. And it sounds like the full feedback wasn't given later during that private reading.

The feedback Dean cites that led to the retraction doesn't at all sound like something that required the protection of identities, and certainly nothing that required keeping that feedback secret. If the reviewers felt like the paper left out some relevant research, why wasn't that communicated fully and clearly in the first meeting?

Jeff Dean's email seems to leave out a lot of context about everything leading up to Timnit threatening to resign. At the very least, it seems the reasons for the surprise retraction order weren't revealed at all until some time after that first meeting.


Some guesswork:

Per Dean, Timnit appears to have precipitated this by violating the internal process requiring a two-week review period and approval before external submission of a paper. (I have no idea how regularly this process is enforced, but I think it's standard in the industry.) And then, reading between the lines, she reacted poorly to the lack of approval, at which point Dean and Megan made the decision to force a retraction.

Then Timnit demanded the names and feedback of the reviewers. Again, as an outsider, I have no idea if the provision of this is customary. But not just the reviewers; also everyone that Dean and Megan had spoken with.

I do wonder how much of this was the irregular process around the paper, her reaction to criticism of the paper, or her public criticism of DEI efforts, particularly as a manager. Or all of the above.

Calling out your leadership chain, both for DEI metrics and for behavior around the paper, while also disclosing that you are seriously considering suing Google, seems unwise if you don't wish to be terminated.


> The feedback Dean cites that led to the retraction doesn't at all sound like something that required the protection of identities, and certainly nothing that required keeping that feedback secret.

Perhaps the reviewers were afraid of being dragged on Twitter and possibly having their careers destroyed as a result.


From Dean's email:

"A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues."

Is it true that this information was not shared with her?


Would you share the identities of the reviewers, if the author has a history of attacking anyone who disagrees with them??


Sorry, I was referring to the assertion that "they wouldn't share the feedback on her paper".


According to her, she was allowed to see the feedback confidentially, but was not given the names of the authors of the feedback.


Because she wanted to attack them! That's how she has reached where she is: by attacking and silencing anyone who disagrees with her, and then turning around and classifying their opposition as "sexism/racism"; and who wants to be labeled as a sexist/racist these days? People would much rather just keep quiet.


Reminds me of the drama with Coraline Ada Ehmke and the Contributor Covenant a few years back that forced Linus Torvalds to take a break. These people are drama seeking instigators that demand that people do as they say, and companies keep giving in to them. It's a kind of rent seeking and power grabbing.


Github got rid of her pretty quickly. And guess what? She accused them of being sexist for doing so.


I do not know the details of this case and I can't speak to the subject's character. However I can't help but note that people who study the effects of race and gender tend to bring race/gender into anything and everything, because well, that's their job. If your response is that race and gender should be brought into only the areas where race and gender are relevant, I'd respond that the prevailing opinion among gender and race scholars is that gender/race is relevant everywhere.


I've thought about that filter through which we view the world a lot recently, and I think you may have identified part of where the conflict comes from. People viewing the world through fundamentally different filters.

The problem is that if you focus on race, gender, and/or sexual orientation as an important factor underlying interactions, and you're not careful, you can soon come to see race and gender as the primary driving force dictating the outcomes of all interactions, and soon all issues get reduced to a matter of racial, sexual, or class struggle.

The problem is the real world doesn't work that way; it is a complicated interacting network of events that are often only second- or third-order effects of completely different decisions made long ago.

Trying to reduce all interactions and societal problems to a matter of one or two factors is like trying to make money in the stock market solely by looking at the Fed's monetary policy. Sure, the Fed's monetary policy is an important overall factor, and in some cases the most important factor in certain moves in the stock market, but that doesn't mean there aren't a thousand other moving parts to watch.

The problem is that when we interpret every event through a single filter, or give one filter primacy in all interpretations of events, that filter comes to color everything we see.

So for example if my primary filter for understanding the world is race relations and/or critical race theory, any time someone disagrees with me I really do feel as though it is a racially motivated attack. Why? Because I have chosen to interpret everything in my world through the filter of race, and every event that occurs will in my mind be race related.

Note this applies just as well to any filter: Christian extremists will interpret every event as a sign of the end times and the devil coming to power. If my filter is class struggle, then I will see every issue as a battle between the haves and the have-nots.

My point is not to discredit people who want to bring gender and race into everything; I think there are many discussions where examining the role these factors played is important. My point is to caution against a single-filter mindset. That is the cause of much of the divisiveness we see: we have people who really are living in totally different worlds, because the way they choose to view the world is totally different, and they refuse to ever consider that there might be other filters to view the world through.


“It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale.”


> The problem is when we interpret every event through a single filter, or give one filter primacy in all interpretations of events that filter now colors everything we see.

Is anyone noticing the irony that this is pretty much what the paper's author is blaming the machine learning systems for?


"Google fired a black researcher". Does the word "black" belong there or is it used just as an attempt to tarnish Google?


There's a big difference between raising honest, thoughtful points about social justice issues (even forcefully), and using those same issues (superficially) as a cudgel to marginalize your enemies and increase your own social power (and note that those things don't have to be happening at the same time).

These days I see a lot of both. I also have no idea what happened in this specific case, and at this time I wouldn't be willing to take a side.


You’d probably find this fascinating:

https://science.sciencemag.org/content/360/6396/1465


Please share why you think a person who was hired to address race/gender bias in AI writing a paper about race/gender bias in AI is tending to "bring race/gender into anything and everything"?

Or let's cut to the chase, you saw the headline, and your immediate conclusion was this was a raging SJW. Why did you jump to that conclusion?


I am trying to get away from mind reading and assuming I know why others say and do things. Would you be interested in joining me in that?


It is a big problem that in our current political climate people with those types of built-in personal issues float to the top. Gender and race should only be brought up if there is evidence that someone is being discriminated against or harassed based on those traits. I see all kinds of articles immediately pulling race and gender into the equation, and it's dangerous and dishonest, because it robs credibility from situations where discrimination/harassment based on these traits is truly occurring.


It's incredibly hard to prove discrimination or harassment was based on ethnicity, sexual orientation, religion, or political views, even when it is.

You say "people with those types of built in personal issues float to the top" aren't you implying their positions are not earned based on merit and, therefore, that they are less competent than their peers?


I am implying that they often scare away their competition by attacking the character of the individuals who they work with, rather than focusing on true accomplishments that would give them legitimate merit for the positions they aspire to. Their supervisors can even be intimidated into promoting them if they have a history of filing complaints of discrimination when things don’t go their way.

It’s also possible that some promotions/hires are based on desired political appearance of the company. Promoting someone based on certain race and gender types ironically makes your company seem less racist and sexist to the casual outside observer.


I think there’s definitely a dynamic in any dogmatic arena (social justice is dogma heaven) where the most pathological people thrive.

In the high-dogma arena there are incredibly easy and visible signals that give you the ability to appear like "one of the good ones."

I think, coincidentally, this is highly attractive to elite circles too. Follow all the social justice rituals and shibboleths and you suddenly get the camouflage of an empathetic, good person.


> Follow all the social justice rituals and shibboleths and you suddenly get the camouflage of an empathetic, good person.

If a person is indistinguishable from a good human being, isn't it actually a good human being?


No, plenty of rapists and abusers etc do all the social justice rituals.


Then they are distinguishable by observable behavior.


It's like they didn't read anything I wrote - or just demonstrated how much the woke let down their guard if people shut up and do whatever they say.


> You say "people with those types of built in personal issues float to the top" aren't you implying their positions are not earned based on merit and, therefore, that they are less competent than their peers?

Many are, yes. I had to hire one because my director’s bonus was implicitly tied to a diversity quota OKR, and the VP’s bonus was explicitly tied to one, which he blogged about as a shameful virtue signaling PR exercise. They didn’t even interview her. Then we way over-leveled her to boot, because there was another stupid OKR to grow diversity among senior levels. And this was at a big tech company that you’ve definitely heard of.

This is happening at many companies who claim that they aren’t lowering the bar.

Truth be told I had already lowered the bar and wanted to pass her but still couldn’t justify it. I didn’t want to deal with the diversity police and had my fingers crossed that she would be the one.

Some perfectly capable white guy didn’t get an opportunity as a result.


In general, threatening to resign is never a good strategy and will never work in your favor unless you are a highly valued executive.

If you're anyone else in the company and think a problem could possibly work out, try to amicably work it out. If you believe it to be impossible to work out, it is much better to just resign than to threaten to resign, which will almost always result in firing.

If you resign, you have the upper hand, can still get 2 more weeks of pay, possibly a couple of good recommendations for your next job, and then go write your blog posts after that.


He's still posting on Twitter. Did so today, and yesterday.

Her response to that tweet is a bit harsh, but not completely unreasonable. I'm not a Twitter expert, so I can't really stitch together their conversation on Twitter to see where/when it goes off the rails.


What/where is her response? Could you link it please?



Thanks for sharing.

Curious: How did she 'force him to give up on twitter'?

There's a lotta companies who've hired ticking time bombs like this. They don't get why they were truly hired.


Easy. Just call that person a racist, a bigot, or an oppressor without a shred of evidence.

> A person: I think bias in machine learning is caused by bias in training data.

> Me: Why are you ignoring all I've been saying? You're a racist. You're a white man who oppresses. I'm not going to talk to a bigot like you.

Repeat this a few times, the attacked will be punished both online and offline. Justice served.


I don't think that's a fair characterization of Gebru's argument there at all. A better summary might be "You're the Chief AI Scientist at a massively influential and profitable company with billions of users and exabytes of data, you can't just throw up your hands and say 'welp, that's what the model spits out so that's what we're stuck with.'"


It's more like: "There are difficulties in collecting data, and historical reasons that lead to bias in the trained model." Other side: "No, you are a racist and responsible for the bias because you are a white male."


It's pretty clear that Gebru criticized LeCun for being apathetic about an obviously biased model, not because he's a white male. And again, "there are difficulties in collecting data, and historical reasons that lead to bias in the trained model" is accurate, but it's still a total cop-out! There's no way Facebook would let results like [0] slide if there were money at stake - they'd get more training data, or even throw out training images of white faces, if they had to - so why is LeCun defending someone else's failure to do so?

More importantly, how should that make us feel about the very real possibility of LeCun serving as an expert advisor on, say, police or other military applications of AI? Would he again just throw up his hands and say, "Well the training data consists disproportionately of black faces, so the model disproportionately implicates black people, nothing we can do about it"?

[0] https://twitter.com/Chicken3gg/status/1274314622447820801


All real world data is inherently biased... Don't they teach this fact in introductory statistics? I can't imagine otherwise.


"SJW"'s are racist and sexist. (AmIdoingitright?)


It was discussed on HN a few months ago: https://news.ycombinator.com/item?id=23696427


"You are probably going to be a very successful computer person. But you're going to go through life thinking that girls don't like you because you're a nerd. And I want you to know, from the bottom of my heart, that that won't be true. It'll be because you're an asshole." ~ The Social Network

A good number of social activists are rude and abrasive and then blame how people react as racism/sexism. No one wants to be around or work with anyone who is a jerk. The quote above sorta reminds me of how this person is going about their life.


I read her email and all I could see was her privilege and her entitlement. I think she knows there is racism and sexism in play but thinks everything is about that. The feedback going through HR was probably because they didn't feel comfortable telling her directly, for fear of reprisals from her.


One unusual privilege that she has is the ability to describe her experience of racism and sexism and expect to be listened to - her e-mail here speaks mostly on this point.

Her e-mail identifies that you can speak on racism and sexism as much as you want, but all of your speech will be routed to /dev/null.


> Her e-mail identifies that you can speak on racism and sexism as much as you want, but all of your speech will be routed to /dev/null.

I think the fact people need to go through HR to give feedback means the speech isn't going to /dev/null. People seem to take her speech very seriously and don't want to feel her wrath.

In fact, Google even appeared to take her concerns on board after she got a lawyer and threatened to sue. She just seems to expect more, which is the entitlement I was talking about.


(My intention above was to say that Timnit Gebru does have that privilege, and is using it to speak for other people who don't have that privilege)


I totally agree with you. Screaming racism/sexism/marginalisation every time someone disagrees with you is annoying and tiring. I am a second-generation immigrant in Norway, and I always see this behaviour in people who blame society when they don't get their way, when in reality people rarely get their way all the time. I hope this level of stupid activism does not reach Norway, although we do copy many of the stupidities of the US :)


Falsely screaming racism and sexism at everything hurts people who are actually impacted.

This behavior also enables racists and sexists.


Worst of all, it pushes non-racists to become actual racists.

If you start attacking people who previously had no problem with you, grouping them as evil on the basis of their physical characteristics, you’re going to get various forms of backlash. One of those is going to be those people seeking support among others belonging to the evil group identity that you carved out for them, and they’re going to push back in kind.


> One of those is going to be those people seeking support among others belonging to the evil group identity that you carved out for them, and they’re going to push back in kind.

I don't think that happens very often outside of the villain origin stories in comic books. A normal non-racist person doesn't react to being called a racist by becoming the biggest racist they can just to spite people.


Oh, I don’t think it happens in the common case, no. But it absolutely happens at the margins, particularly among folks who are more predisposed to radicalization. I know that happens.

And racism isn’t binary. You don’t have to be pushed all the way to wearing KKK garb to have some level of contempt or prejudice towards other races. So a great way to avoid this outcome and minimize exacerbating existing tensions is to not engage in more divisive hostility on the basis of their identity. Like the guy she is berating likely didn’t even consciously realize he was white until she made a point of reminding him in the worst way possible. A dumb, counterproductive strategy.

But her behavior is going to make it harder for all other black people to get hired. People who are otherwise not concerned with race or sex at all will see this and then need to convince themselves that she is not representative of most black people. Some will undoubtedly just go with the option that seems like it carries the lowest risk to them personally and professionally.

Personally I would not hire anybody who has a history of trying to weaponize twitter. And being smeared as a racist, like she is doing to Jeff Dean, can be extremely damaging and severely career limiting. He’ll probably be ok, but it’s something that many people can never move past, especially if Vice or Vox or Salon or whatever culture war libel vehicle decides that they can twist your story into a hot hit piece.


Key & Peele had an excellent sketch about it: https://www.youtube.com/watch?v=e3h6es6zh1c


Experiencing marginalisation is frustrating, and frustrated people sometimes have outbursts.

I don't think we can infer much from a few highly publicised events.


Was she marginalized? She ran a team at Google. How many of us can claim that?


How was she marginalized? Marginalized means kept in a powerless or unimportant position in society, which she was not, as she had one of the most important roles in AI and was a manager.


Even if you are marginalized, threatening to resign with ultimatums will never work in your favor unless you have an entire union behind you.


I wasn't commenting on the expected outcomes of their behaviour, just that I think it's a bit of a jump to say they are "rude and abrasive" in general.


That's a little ungenerous. Social activists may go over the line and be rude or abrasive, but unfortunately it's impossible to be a polite social activist. Activism fundamentally requires telling people that their actions should be modified, that their beliefs may be flawed, that their systems have problems. It's really hard to do that and be a likeable person.

Of course social activists may at times cross the line and be unnecessarily rude or abrasive. But I'd have more sympathy because their fundamental job is to be subversive and critical of people.


May I direct you to watch Daryl Davis's TED talk, or really anything he speaks about: https://www.youtube.com/watch?v=ORp3q1Oaezw

For context, he is a black man who has gotten over a hundred leaders of the KKK, including former convicts, to leave the Klan. Not by arguing with them, not by reasoning, not by showing them the effects of their behaviour, but by sitting down, talking with them, and becoming their friend.

In a nutshell, this one man has probably done more to combat racism than all the angry tweets on Twitter combined.


Daryl Davis is certainly a wonderful inspiration in his patience, compassion and effectiveness in reaching racists. However I'm not sure about using him as a model for how activists should behave. What that's essentially requiring is that activists should patiently reach out until the people oppressing them learn enough to stop oppressing them.

In some cases this is possible. The Ku Klux Klan is comprised of broken men who dance in costumes at night. What about the racists who do not do that? Who enact their racism as government policy? Or as housing discrimination? Or through their art?

How long will it take to reach them? To quote James Baldwin^[1]:

> You always told me it takes time. It's taken my father's time, my mother's time, my uncle's time, my brothers' and my sisters' time, my nieces' and my nephews' time. How much time do you want. For your progress.

[1]: https://www.youtube.com/watch?v=OCUlE5ldPvM


If one uses twitter and mobbing to do it, one will never reach them.


"I am a musician, not a psychologist or sociologist. If I can do that, anyone can do that. Take the time to talk with your adversaries, you will both learn something."

Thanks for the link, that was an incredible talk, and that guy is cool as hell.


What a talk. Thanks.


he is amazing, I lack the self control


Ilhan Omar said something similar on Twitter in response to Obama's criticisms of the "defund the police" slogan. I think you're being overly generous to social activists. Their fundamental job is to bring social change. You can't change anything by simply alienating those who disagree with you. That creates political entrenchment.

The fact is that Obama has got way more done for America than Omar, so I'm inclined to think he's right.


I think we need to distinguish between social activist and politician. Social activists can certainly be politicians, such as Ilhan Omar, Alexandria Ocasio-Cortez, Bernie Sanders etc. But many are not. Obama is not a social activist. He's an effective politician, one who has certainly brought about change. But he's not a social activist.

Many social activists were famously unpopular. Martin Luther King died with a 66% disapproval rating^[1]. Susan B. Anthony was ridiculed and accused of trying to destroy the institution of marriage (sound familiar?).

Of course some activists veer too far. Jane Fonda is an example. I don't fault her opposition to the Vietnam War, but her infamous photo practically advocated for violence against American troops.

In a sense, the aim of social activists is to be ahead of their time. MLK was reviled in his day and is practically beyond criticism today. We need both politicians and activists for progress. Politicians push forward the policy, but the activists push forward the politicians. Without activists, politicians can end up in holding patterns, afraid to lose popularity. Without politicians, activists are just screaming into a void.

[1]: https://www.newsweek.com/martin-luther-king-jr-was-not-alway...


Obama got his start as an actual activist, under the title of "community organizer", in Chicago. Even in his early years, he clearly understood that to get real results, you have to make compromises and avoid demonizing people on the other side of issues. To my eye, Obama's emotional intelligence, communication ability, and strategic aptitude are genius-level-off-the-charts-stupendous, whereas these firebrand activists you mention are operating at an elementary level.


If I may put it differently: you know how most on the left see Trump as infantile, poor of character, and so on?


>Many social activists were famously unpopular. Martin Luther King died with a 66% disapproval rating^[1].

Being unpopular is hardly the same as being abrasive and rude.


> The fact is that Obama has got way more done for America than Omar, so I'm inclined to think he's right.

It's funny how the people endorsing Obama's take on the "defund the police" slogan losing people seem to ignore the other "error" he mentioned in the same interview: giving too small a platform within the Democratic Party to voices like AOC's (who, obviously, holds the opposite view to Obama on "defund the police"), voices that have proven quite effective at connecting with large constituencies with which the party establishment (implicitly, including Obama himself) has been ineffective.

I would be cautious in selectively referencing that Obama interview to suggest he knows better than "The Squad".


> it's impossible to be a polite social activist.

Fred Rogers and Carl Sagan are standing at the front of a very long line of people who are waiting to gently disagree with you.


As much as I respect Fred Rogers and Carl Sagan, they were white men working for goals that weren't exactly controversial. If we're calling them social activists—I'm not opposed to that, I just didn't consider them as activists in my original comment—then yeah there's plenty of polite social activists. The social activists I was considering were along the lines of MLK, Susan B. Anthony, James Baldwin. People who were advocating for something fundamentally opposite to current societal values. With that definition I believe there's no way to be polite.

I guess if I had to refine my original claim: it's impossible to politely disagree with fundamental societal values.


>unfortunately it's impossible to be a polite social activist

Tell that to Gandhi, tell that to the many people in the Civil Rights Movement and Martin Luther King, tell that to the people who were in the Monday demonstrations in East Germany, the list goes on.


The Freedom Riders did something fundamentally abhorrent to southerners. They went into white establishments as black people (or accompanying black people). It's only because of changing norms that we don't see their actions as horrifically rude or even violent in nature.


Martin Luther King was assassinated.

His demeanor was polite, but his impact was rude and abrasive to many (racist) people.


>Social activists may go over the line and be rude or abrasive, but unfortunately it's impossible to be a polite social activist.

Exhibit A: Gandhi


Ever heard of Noam Chomsky? Not sure if he’d agree with you.


This is the thread (it was a retweet with comment, naturally).

https://twitter.com/ylecun/status/1275162528511860737

Whatever your position, Yann engages on substance, and Timnit is obnoxious:

https://twitter.com/timnitGebru/status/1275191341455048704?s...

https://twitter.com/timnitGebru/status/1275191515380215808?s...

Basically the worst of online discourse, but in this case one-sided. Yann is discussing in good faith and Timnit is not.

If this is the normal way she interacts with people she disagrees with it's no surprise they didn't want her to stay. The public tweeting about it doesn't inspire much confidence either.

https://slatestarcodex.com/2014/02/23/in-favor-of-niceness-c...


> Whatever your position, Yann engages on substance, and Timnit is obnoxious:

For what it's worth, I and many others disagree with this characterization. I find Yann comes across as incredibly condescending and holier-than-thou in their interactions. Nor does Yann engage on substance. He refuses to take the time to engage with, or even acknowledge that he has read, the relevant academic literature (which Timnit repeatedly cites).


People can read the thread for themselves and decide.

I see a quote tweet from Timnit with "I'm sick of this framing...listen to us", then misquoting what he said. Ignoring his replies and then following up with "I'm disengaging for my sanity...not worth my time...Maybe your colleagues will try to educate you..." etc.

She attacks him (so he replies) and then she ignores him and talks down to him.

I don't have a dog in this fight, I'm just an outside person reading this - I suspect most people reading that thread would think Timnit's tweets are obnoxious. Imagine if the people were switched.

I wouldn't want to work with someone who argues that way when they disagree.


Not to get too heated or anything, but the framing of "People can read the thread for themselves and decide." concerns me. Saying "I think A", then responding to people who say "I think ~A" with "hey hey hey, let's let people decide themselves" is pretty stifling. I don't believe that was your intent, but that's one way it reads.

I appreciate that you've explained your reasoning for your position, though.


Yes, you've made your position clear. I'm simply asking that you acknowledge that not everyone agrees with you, instead of making sweeping statements that no matter your opinion, Timnit was "obnoxious". You have your view of the events, I'm not trying to convince you to change it. I'm simply saying that other people disagree. No need to continue to try and justify yourself.


Of course - there will almost always be some amount of people that disagree with any position.

That doesn’t make all positions equally valid.

Stating that others disagree seems like a banality?

Obviously this is the case, just look at this thread and Twitter.

I just think they’re wrong.

I think attempting to explain why I think the way I do is kind of the point of the discussion. If I can’t justify it then I’d want to change my mind.


I think the quotes given are persuasive evidence, though. “I’m disengaging for my sanity”?


I guess the way I think about it is this: Yann has a long history in ML. He's probably had to deal with bias problems for decades. Probably pretty experienced with it. Now he's heading up some Facebook ML stuff, and on a daily basis he watches hundreds to thousands of engineers work on systems that process and learn from billions of users. I feel like after you do that for a while, you gain enough wisdom and experience to deserve to be engaged with respect and thoughtfulness. She has repeatedly engaged with bad faith, misleading interpretations of intent, and is just sort of really "attacky". Sure, he's a bit condescending (I've seen the same thing and it annoyed me for a bit, then I kind of read about what he's done and realized: he's got tons of experience and data about this and works with it at scale constantly).


My understanding is that this is not the first time they engaged on this type of topic, and Yann has a history of ignoring other people who brought up similar criticisms to him (at conferences, etc.)

At some point you lose the assumption of good faith, and deserve to be called out for refusing to learn.

For what it's worth, I'm well aware of who Yann is, and was at the time as well. That doesn't make him immune to being wrong. (Nor, by the way, do I see any bad faith in her initial tweet. I see exasperation, but not bad faith).


I lost the assumption of good faith already in her first tweet, in particular

>You can’t just reduce harms caused by ML to dataset bias.

She's already attacking a strawman right there. Yann did not deny any harms caused by ML.


Nor did Timnit claim that he was. Her disagreement was about the causes of harms, not the existence of resulting harms.


There was no disagreement. Yann didn't say anything about harms, nor did the guy he was replying to (who talked of dangers). In particular Yann did not suggest in any way a) there are no harms or b) that harms are only related to biased training sets. Yann was commenting on the outcome of a particular research project and how they used a biased training set, resulting in the outcome that was observed.

Timnit brought up harms first, then pretended Yann had marginalized such harms and attributed them solely to biased training sets. And then viciously attacked that strawman. That's a bad faith argument.

I can appreciate that she might have been indeed generally sick and tired as she writes, and can appreciate that sick and tired people will not always manage to be nice or overcome their own biases and assume good faith all the time from the other party; we're all human after all. But that doesn't change anything about her argument being made in bad faith.


> Yann didn't say anybody about harms nor did the guy he was replying to (who talked of dangers)

This feels like unreasonable semantics. The dangers are precisely the danger of causing harm. The harms therefore are concrete results of theoretical dangers manifesting. They aren't different.

> In particular Yann did not suggest in any way a) there are no harms

I agree, and I've said as much.

> b) that harms are only related to biased training sets

He did, insofar as he suggested that the dangers were due solely to bias in the training set, which is implied when he says that if you train the same model on different data, everyone looks African. Like yes, that is true, but it doesn't reduce the harms (or dangers, if you want to be precise). It just creates a different set of biases with a different set of potential harms (which again, are "dangers").

I'm not seeing a strawperson.


and I believe that Yann knows 100X more about the causes of harms than Timnit does, so lecturing him is just wasting everybody's time.


You've done a PhD. Why on earth would you believe that someone, even an expert in the broad field, would know more about a particular topic than someone who specializes in that area of research?

Do you think Yann knows 100X more about every area of ML than everyone else, or is it just fairness, accountability, and transparency that he happens to be more knowledgeable in than, arguably, a founder of the subfield?


It's pretty simple: he made ML work before almost anybody else did, kept working on it during the deep network explosion, and is now running Facebook AI, which has to deal with these sorts of problems with practical solutions on a daily basis, with billions of users. That sort of daily experience counts so much that I would place him in the "knows 100X more about every area of ML" category (excluding rare subfields).

It's rare but I have encountered people outside my field who knew more than I did about my field, because of their daily experiences over decades, or their raw intelligence. Yann seems to have both.


So are you suggesting that Yann has more expertise on what you're working on now (which I know, and would consider to be an ML subfield in the same vein as Timnit's), and therefore would defer to his expertise when he says things that show nothing more than an undergraduate-level understanding of the topic?

Because I'm only a dilettante in the AI ethics space (and admittedly ML as a whole), and I can describe the flaws in Yann's reasoning. Blind deference of that level isn't rational.


I've argued with Yann about my field (we share a friend on facebook) and in that case, he did turn out to be technically correct.

I don't agree with your assessment of his "Reasoning".


Can someone explain to me why people are still taking tweets seriously?


I see the tweet, but I don't see her in the replies at all. Where did she take exception to the tweet?


I'm always hearing tales of how insane Google's social justice warrior culture has become ... but I'm always hearing these stories from White, Asian, or Indian males. I wonder how bad it actually is in reality.

The picture insiders paint is that of organizations infested with technically mediocre people who form cliques that provide air cover for each others mediocrity ... with sex and race providing a disincentive for management to take action.

I wonder how accurate that is ... one explanation for the stories could just be that a lot of disgruntled males are unhappy with competition.

On the other hand, if it was true and I was a high performance underrepresented minority, there's no way I'd go to Google. Wouldn't people just assume I'm there because of quota filling, instead of my actual performance?


I definitely saw a bit of that when I was there, but only a bit. I think it mostly started after I left. Before about 2012 or so, every female engineer I encountered was just as competent as the men. I never thought about diversity culture because everyone was of pretty consistent skill, at least in the parts of the org I was in.

After 2012 we got a diversity hire on our team who was useless in every way, was a pathological liar and manipulator, yet who was consistently rewarded because the boss's boss was a female feminist. Her boss knew she was trouble but could do nothing. I did encounter a few more cases where female employees would just inexplicably have skills gaps that shouldn't have happened at Google, like not knowing what hexadecimal was.

I never encountered grumbling about competition. Google was not a hugely competitive place at that time because there was more than enough for everyone. Promos were generally not quotad, for instance, at least not for most of the rank and file.

> I was a high performance underrepresented minority, there's no way I'd go to Google. Wouldn't people just assume I'm there because of quota filling, instead of my actual performance?

If you were genuinely high performance then no, of course people wouldn't assume that. And sure you'd go to Google, because they pay very well.

The real issue, and it's not at all Google specific, is the minority low performers or outright troublemakers, like Gebru. Then people absolutely assume they are quota filling. They go anyway because they aren't self-aware enough to realise that's happening, and blame any lack of respect they get on racism/sexism. And the money is still great, so why wouldn't they go?


Indeed, this is why positive discrimination is still discrimination, and therefore racist.


Not disputing what you are saying, but there seems to be more to the story. If this is all that happened, why not let her resign on her own (perhaps taking action if this did not happen after n months)? Surely that would create less publicity.


If given the right opportunity and enough time, every once-victimized group will become the oppressor. Note this is true for groups, not individuals.


She reminds me of an intelligent Adria Richards.


Seeing this kind of thing is why I keep putting "Be Nice!" into the MOTDs on all my machines.

I know I flip out at people on the internet sometimes but I hope I never become someone like that.


Seems like typical SJW behavior. Hopefully more companies will stand up to these far leftists and kick them out.


[flagged]


> POC in tech deal with every day.

Why even repeat such an absurd lie? Surely you don’t actually believe it yourself.


[flagged]


What would make you think it isn’t?

Where do these ideas even come from?


Are you saying that someone that Google hired as an AI ethics researcher should not be allowed to disagree with an AI practitioner on matters of AI ethics, and should be canceled for expressing an opinion?

What happened to diversity of thought, academic freedom, free speech as a civic virtue and not simply a restriction on government action, and all those other things this community normally stands for? Just yesterday we saw the NLRB rule that Google had illegally fired workers, and we were talking about how the big tech companies use their power to suppress internal dissent and it's bad. How did we forget that so quickly?


> forced ... to give up on Twitter after she took exception to this tweet

Can you provide a source for this. What did she say?

There are legitimate criticisms of Yann's tweet. Just because he is technically correct about how ML works doesn't mean that what he is saying isn't ALSO a classical dismissal of the concerns people have with AI.

The issue isn't that ML is evil or racist. The issue is that if it is used too objectively detached from the reality of the world it operates in, the outcomes could be used for evil or with racist intents.

AI scientists like LeCun handwave away concerns about diversity in data sources, just as his employer handwaves away concerns about engagement algorithms surfacing misinformation on the platform.

But the societal consequences remain for others to deal with.


It was discussed on HN a few months ago: https://news.ycombinator.com/item?id=23696427

Basically (to summarize a lot), his point was: an ML model is only as good as the data you feed it. If, say, the photos for your face recognition model are only of white men, then obviously the model will do well on white men, while (possibly) not doing as well on other races or genders. This is a statement of fact, and there's nothing controversial about it.
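The mechanism is easy to see in a toy sketch (entirely made-up numbers, nothing to do with the actual model in question): a "model" fit by least squares to an imbalanced dataset ends up accurate for the majority group and badly off for the minority one, simply because minimizing average error over the whole dataset mostly means minimizing error on the majority.

```python
import random

random.seed(0)

# Imbalanced dataset: 95% of samples from group A (feature ~ 0.0),
# 5% from group B (feature ~ 10.0). The "model" is just the value
# that minimizes mean squared error over the training set: the mean.
group_a = [random.gauss(0.0, 1.0) for _ in range(950)]
group_b = [random.gauss(10.0, 1.0) for _ in range(50)]
model = sum(group_a + group_b) / 1000  # least-squares fit ~ 0.5

# Per-group error: small for the majority, large for the minority.
err_a = sum((x - model) ** 2 for x in group_a) / len(group_a)
err_b = sum((x - model) ** 2 for x in group_b) / len(group_b)
print(err_a, err_b)  # group B's error is dramatically larger
```

Swapping the group proportions would flip which group the model serves well, which is essentially the "train it on Senegalese faces" point.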

But she took offense to it, and started insulting him on Twitter. He got tired of defending himself, and just quit twitter.


That LeCun needed to defend his own character over such a simple discussion shows everything that's wrong with the US' progressive culture.


She was never uncivil and she pointed out a series of issues, from what she believed was imprecise in his argument, to the societal issues that caused her to be viciously attacked by other people for engaging in the discussion.

Other people questioned whether data alone can account for the biases and were not subjected to the kind of vitriol she was. The fact I am a white cishet male shields me from a lot of misdirected anger.


Sorry, that's ridiculous. As an observer with no vested interest, it was *very* clear she was very uncivil in her tone, repeatedly.


Not just that, she never provided any links to tutorials/papers/etc that she had given on the topic, and when I looked into the one workshop (from memory) that someone else mentioned, it had literally nothing to do with the issue of dataset bias.

That episode gave me the impression she was more interested in drum-beating and axe-grinding than engaging constructively. I'm not surprised she seemed to be doing the same inside Google.


I agree, reading the article (https://syncedreview.com/2020/06/30/yann-lecun-quits-twitter...) and the thread I also don't see how one puts this on Timnit.

Coming back to the original tweet: if you changed the training data have more black people instead of white, would it perform the same but with inverted racial biases? Maybe? You really can't know without doing it. It might generate faces with a dark complexion but also big distortions or unrealistic colors. The original model doesn't just produce white people because of the training data, the hyperparameter tuning, perhaps the entire architecture, would have been modified until it produced acceptable outputs... using white training data. Ultimately the engineers and other humans behind the scenes are the arbiters of success, loss functions are chosen by humans, and swapping training data on the same model won't change those early decisions.

In another context I'm sure LeCun could have offered his somewhat reductionist take on this example of bias (something I might have done myself – reductionism flows in technologists' veins!) A discussion could have ensued, and everyone could have come out with a better understanding. A hot Twitter thread isn't where that will happen. Neither LeCun nor Timnit have the power to change what Twitter is. LeCun (reasonably!) doesn't like the nature of the discussion, and he leaves, and I think that's OK.


> Ultimately the engineers and other humans behind the scenes are the ultimate arbiters of success, loss functions are chosen by humans, and swapping training data on the same model won't change those early decisions

How would changing loss functions alter this? This makes no sense.

Hyperparameter tuning is done iteratively, and is used to get the best score on the test dataset. Do you really believe the engineers hand-picked examples and purposefully trained it so that faces looked more "white", disregarding the test scores?

Training/test data is 100% the cause for this bias.


The goal is to get pleasing face reconstruction, everything follows from that. There's no objectively correct loss function, it's something that is selected because it has a good effect.

A simple example could be sensitivity to color differences: with lighter skin tones there is a fairly large difference between eyebrows and eye features and the person's forehead or cheek. Someone with a pretty dark complexion might have features that are distinguished in a very compressed set of colors. Depending on the loss function a dark complexion face could be essentially flat and featureless.

This could be fixed of course... but it requires changing the loss function.
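To make that concrete with a toy sketch (assumed pixel values, not real image data): under a plain MSE loss, a reconstruction that flattens facial features out entirely is penalized far less when those features live in a compressed range of pixel values, so the model pays very little for losing dark-complexion detail.

```python
# Two tiny "faces" as 1-D pixel strips. The first has high contrast
# between skin and features; the second has the same feature structure
# squeezed into a narrow range of values.
high_contrast = [200, 40, 210, 30, 220]  # light skin vs dark features
low_contrast = [60, 40, 62, 38, 64]      # features in a compressed range

def mse_vs_flat(pixels):
    """MSE between the face and a completely featureless (flat)
    reconstruction at the face's average brightness."""
    flat = sum(pixels) / len(pixels)
    return sum((p - flat) ** 2 for p in pixels) / len(pixels)

print(mse_vs_flat(high_contrast), mse_vs_flat(low_contrast))
# the flat, featureless guess is "cheap" for the low-contrast face
```

A loss that normalized per-face contrast (or weighted errors relative to local dynamic range) would penalize both flattenings comparably, which is the "this could be fixed, but it requires changing the loss function" point.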


Learning conditional mean vs conditional median for superresolution might be differently affected by a large chunk of outgroup. That was what I remember folks talking about during this twitter feud. Or, like with word embeddings, where people have added some balance penalties to the loss function to make it unbiased in the presence of biased data. Dataset bias caused this, but dataset bias can sometimes be well mitigated.
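The mean-vs-median point is easy to see with a tiny example (made-up numbers): on a skewed sample, the minimizer of squared error (the mean) and the minimizer of absolute error (the median) land in very different places, so the choice of loss function interacts directly with dataset imbalance.

```python
import statistics

# 90% of the sample sits near 1.0, 10% sits far away at 100.0.
data = [1.0] * 90 + [100.0] * 10

mean = statistics.mean(data)      # argmin of sum((x - c)^2): pulled by the 10%
median = statistics.median(data)  # argmin of sum(|x - c|): ignores the 10%
print(mean, median)
```

An L2-trained predictor gets dragged toward the minority of outlying samples, while an L1-trained one effectively discards them; neither behavior is "unbiased", they just distribute the error differently.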


I agree with you on the principles & morals, but frankly this reads as scolding - all he did was explain how this can be avoided, which is good context to provide to the public. There isn't some hidden message there where he's "dismissing concerns people have with AI" or he's facilitating "outcomes that could be used for evil or with racist intents"

His tweet doesn't rule out the possibility that he secretly has a malicious set of incentives, just like your handle doesn't rule out that you're a Communist spy sent to fracture the American psyche. Yet, it's ridiculous. I feel like in a lot of the world in 2020, for some reason, a hostile filter is applied ~99% of the time on the internet, and you _rarely_ see that in personal interactions in real life.



