
This is the paper surrounding Timnit's "departure" from Google.

If you're on Timnit's side, "departure" means "firing", and the paper is the reason she was fired.

If you're on Google's side, "departure" means "mutually agreeable resignation", prompted by Timnit's melodramatic and unprofessional response to normal feedback.

Personally, I don't see anything in this paper that implicates Google or would be reasonable for Google to try to suppress, so I'm falling into the camp of trusting Google's side of the story. But who knows?



Google didn't fire Dr Gebru for this paper, even though that is the popular narrative. Google fired her for sending an email to a large list that said Google's DEI initiatives were a failure and that employees shouldn't waste their time contributing to them.


> Personally, I don't see anything in this paper that implicates Google or would be reasonable for Google to try to suppress

Can you explain why this leads you to support Google? Google still claims that the paper doesn't meet their publication standards, despite, as you say, it containing nothing that it would be reasonable for Google to suppress.

> Timnit's melodramatic and unprofessional response to normal feedback.

Keep in mind the feedback was that you cannot publish this paper, and that isn't disputed by Google.


As a general principle, in a "they-said she-said" situation, when certain facts on one side of the story don't add up, from my perspective that increases the odds that the other side is telling the truth.

I have no idea what Google's standards are for letting a paper be published. My guess is that they don't make standards up on the fly, and that no unique standards were applied to Timnit that aren't applied to literally everyone else.

> Keep in mind the feedback was that you cannot publish this paper, and that isn't disputed by Google.

Yes, and that in itself makes me more likely to think there is some benign, standard reason for not allowing publication rather than some nefarious motive about suppressing Timnit or hiding their own guilt, or whatever reason Timnit has provided as to why she thinks Google doesn't want the paper published.

If the argument is "Google won't let me publish this paper because it's too dangerous for them", and then I look and see that there isn't really anything dangerous for Google in the paper, then I would say that perhaps the argument is incorrect as to the reasons Google wouldn't let her publish.


> My guess is that they don't make standards up on the fly, and no unique standards are applied to Timnit and not to literally everyone else.

It is not disputed by Google that special standards (an additional, non-standard review process) were applied to this paper. There is some lack of clarity about how often that additional review process is applied (https://artificialintelligence-news.com/2020/12/24/google-te..., https://www.reuters.com/article/us-alphabet-google-research-...). But broadly it seems to not have much to do with the technical merit of the papers, despite what Google originally claimed, and instead be a legal/PR process.


...or you can argue exactly the opposite along the same lines...


What? If someone says a paper is too dangerous for Google, but then I read the paper and find nothing dangerous at all for Google — that is evidence that there is in fact something too dangerous for Google in the paper?


I think the argument is more that Google won't allow even milquetoast criticism of LLMs, which falls perfectly in line with the events.


If Google had a big problem with something you find mild, that could easily make you believe Google is making errors in judgment.


What's the backstory here?


Editorializing as little as possible:

This paper was originally going to be coauthored by Timnit Gebru, Margaret Mitchell, Emily Bender, and a few other collaborators from Google and UW.

The paper, at a high level, offers criticisms of large language models (including BERT, a Google model). In ~October/November, the paper went through the normal Google paper review process and was approved to be published externally (i.e. submitted to a conference). Later, Gebru was informed that the paper was not fit for publication due to some additional review, and needed to be unsubmitted from the conference, or the Google coauthors needed to remove their names. Initially, this demand came with no context or reason.

Upon pushing back, Gebru was given some information on why the paper was unfit for publication. Publicly, what we know is that Google's reasoning here was that the paper did not cite relevant work and was not up to Google's publication standards (of note here, the paper cites nearly 200 other works, which is huge for a CS paper, and it later passed peer review at the conference, so this claim seems dubious).

Gebru complained that this feedback was essentially an attempt to bury the paper, especially given that she was not given the opportunity to address or incorporate the feedback, only to drop the paper. She sent two emails: one to her management, stating that this kind of process was not conducive to research and that she would consider resigning if things didn't change; and another to a mailing list about diversity and inclusion work, complaining about the process and noting that DE&I work would continue to be a waste of time without executive buy-in.

Google "accepted Gebru's resignation", noting that the second email she sent was unprofessional. Under the relevant law, Gebru didn't resign and was fired by Google. Google has since partially walked back their statements, and refers to the situation as her "departure", leaving it amusingly vague.

The paper was published in January, after passing peer review, coauthored by "Schmargaret Schmitchell", among others. It has since come to light that some other papers have also gone through this additional review; the additional review process was formalized, it seems, only after Gebru's paper was submitted. The sensitive review process involves legal approval and requires authors to remove statements like "having concerns". [0]

Margaret Mitchell, Gebru's co-lead, was also fired, for sharing confidential information. The investigation into her misconduct took over a month, and she was fired the same day a reorganization of the AI ethics team was announced.

[0]: https://www.reuters.com/article/us-alphabet-google-research-...


Gebru also publicly accused Jeff Dean of being complicit in "silenc[ing]" female researchers over a year before she resigned/was fired [1], and the "email complaining about the process to a mailing list about diversity and inclusion work" [2] also included a brief reference to past legal tussles between herself and Google:

> I’m always amazed at how people can continue to do thing after thing like this and then turn around and ask me for some sort of extra DEI work or input. This happened to me last year. I was in the middle of a potential lawsuit for which Kat Herller and I hired feminist lawyers who threatened to sue Google (which is when they backed off--before that Google lawyers were prepared to throw us under the bus and our leaders were following as instructed) and the next day I get some random “impact award.” Pure gaslighting.

From the outside, it looks like whatever relationship Gebru and Google leadership had was extremely strained well before this paper. It looks like Google leadership had gotten tired of Gebru (for good or bad reasons) and took this as a good time to cut ties.

[1] https://twitter.com/timnitgebru/status/1193238414742548480?l...

[2] https://www.platformer.news/p/the-withering-email-that-got-a...



