It's hard to make much sense of what happened. Here's what I made of it; if anyone can point out any important mistakes, inaccuracies, or missing details, please do:
1. She and her co-authors submitted a paper for internal review one day before its external (conference?) due date. Papers are supposed to be submitted to Google two weeks prior, so they can review.
2. The authors submitted it externally before the review was returned.
3. Google decided that the paper "didn’t meet our bar for publication" and demanded a retraction.
4. Timnit Gebru demanded access to all of the internal feedback, along with the identities of its authors. If Google wouldn't provide this access, she said that she intended to resign and would set a final date.
5. Google declined her demands and took her ultimatum as a resignation. Rather than setting a final date, they made the separation effective immediately.
It sounds to me like Google has a policy that papers must be internally reviewed before they can be "published". She understood "publish" in the academic sense, while Google views sending the paper out for conference review as publishing. The paper ends up failing internal review, so per policy it must be promptly retracted. This is confusing to the academic who expects to get access to the raw review responses so that the paper can be fixed. After all, in her mind it is not published yet, and updates can be submitted to the conference to fix the issues.
The underlying reason for the paper failing internal review looks to be that it basically condemns Google's whole approach to AI (big models) while failing to take into account some relevant research that may favor Google's approach. (I have no way of guessing whether it would have passed review if it had taken those into account but still condemned the approach.)
The massive amount of miscommunication makes her feel like she was being oppressed due to race and/or gender. Hence the aggressive tone in her emails and ultimatum.
Looks like a fairly classic case of a major blowup due to poor communication.
While there hasn't been enough evidence to fully condemn either side, I disagree that this is just poor communication.
Unless Google deliberately changed the enforcement of the policy just to mess with her, she should have known the policy. It doesn't seem to be a complicated process, and 2 weeks is a reasonably short time to wait.
On the other side, Google has been in this game long enough that they must know a paper can be updated in this case. So there wouldn't be a misunderstanding there, either.
What exactly happened is unclear so far. But I doubt it will come down to just a series of misunderstandings.
On a side note, I suspect that doing research for a company with a conflicting interest in the same area is generally a bad idea. At some point, the dilemma of either leaving or no longer caring about it would present itself.
That’s a good point. How many other Google researchers have submitted papers to conferences without waiting for the 2-week internal review to complete? What happened in those cases if the paper failed the internal review? Without more context it’s impossible to tell if this was a case of discrimination or just a case of a researcher not following Google’s internal policies and suffering the consequences.
I worked as a run-of-the-mill engineer at Google and was involved in publishing a paper externally. We couldn't do it the first time around because we finished only 3 days before the external deadline and there wasn't enough time to get it approved internally. This seemed to be very standard around me. People were actually submitting well ahead of time, and to me it seems that Gebru expected the rules to be bent for her if she realistically believed a single day would be enough. Let me just point out that this is standard practice even in academic research at universities when larger teams are involved.
Having worked at three large academic institutions and been part of the submission process for about a dozen manuscripts, I have never seen or heard of a requirement for an internal administrative approval.
IRB approval for study designs, yes; admin submission of grants, yes; but never an authorization that the results and manuscript could be submitted to a journal or conference.
Are you serious? Every large collaboration in e.g. physics has those. You simply cannot create a splinter cell within the organization and publish whatever you want. You have to coordinate with everybody because the work is never truly yours. In the case of e.g. CERN, literally thousands of people helped you in some way and they do not want you to potentially tarnish their reputation. Industry AI labs are like that as well. I know this for a fact at least in the organizations I worked at.
What you’re describing in this comment sounds like a consortium. These often have an embargo on publication of independent manuscripts until the primary paper(s) have been published. However, once the data’s out, each investigator is free to publish whatever secondary analysis they like.
I understood the Google process to be an _administrative_ approval, based on the conclusions of the paper and unrelated to the quality of the science. This I have never heard of at an academic institution. The final call that a manuscript is ready for publication has always been made by the corresponding/senior authors, not admin.
I don't have stats, but when I tried to do this it was made quite clear that I needed to submit 2 weeks early because nobody who reviews can drop their day work to do a priority review for anything less than a superpaper. Most of the people doing paper reviews (just like in academia) are busy and doing them in their "spare time" (evenings, weekends).
I don't think we even need data on this. One or two days prior is unthinkable, just per common sense. Data points would be valuable and appreciated, but we don't really need data to understand that one or two days is just not enough time for anyone or anything.
It's not like there are a bunch of dedicated reviewers; it's always other researchers who review. I don't think some of the most valued papers like MapReduce or BigTable would have been reviewed in just a day or two. It's just not a reasonable amount of time under any conditions for reviewing research papers.
In any large company there are lots of process rules and internal deadlines that are unrealistic for whatever reason and don't get followed in practice, leading to many last-minute exceptions. I don't know about the specific case of research papers, but it would not surprise me at all if "one day before the deadline" were a common submission timeframe at Google. Of course that's not ideal and probably sucks for the team in charge of reviewing these things, but in that context it wouldn't be such a "damning" piece of evidence against her and the other researchers.
Submitting for an internal review a day before the deadline is not a common thing in Google Research at all. I worked there and actually was not able to submit a paper to an external conference because the internal deadline was only 3 days away and it was likely we wouldn't be able to get the internal approval ahead of time. The rules were clear and as far as I understood it, people did follow them. If there was any selective enforcement, it would be in her favor if she expected a single day to be enough.
A lot of process rules are unrealistic, but that doesn't mean this one is, and it can't be used to conclude that all internal processes are unrealistic and won't be followed. That's quite an unrealistic claim to make. Most places rely on internal processes to work, and exceptions would be extremely rare.
> Most places rely on internal processes to work, and exceptions would be extremely rare.
That's a pretty broad statement with no supporting evidence. In fact, I'd argue that most large organizations rely on internal processes to be ignored or bypassed just as often as they rely on other processes to be followed.
I've been on teams that required "every feature" to go through a design review. Guess what? Suddenly a lot of what we were already working on stopped being "features" and started being "small bug fixes". Approvals for launches are often sought from the last few approvers just before launch because the list of required approvers is so long, and the VP of engineering isn't going to approve until everyone else has already approved etc... This kind of shit is par for the course at a large company because the alternative is basically a "work to rule" strike.
Methinks it's a common misconception for strongly opinionated people to believe that companies want to hire them in order for them to "fix things and make the company a better place".
In reality, what every company/team/management wants is someone who will give them some constructive criticism on small mistakes here and there that are easy to fix, but ultimately act as a supportive shield for the company against /other/ criticisms. The more credible that someone is, the better, and the more sophisticated that act of supportive shielding is, the better.
In exchange, the company provides support, financial or otherwise, for that someone's personal career ambitions in becoming rich and famous themselves.
Anyone else see it this way? Or am I just too jaded these days?
That's pretty jaded IMHO. I would think most management wants positive change, not just to be shielded from criticism.
But reasonable people can disagree about what the status quo is, how good or bad that status quo is, what counts as positive change, what timescale positive change needs to happen at, and what costs/tradeoffs they are willing to accept to accomplish positive change.
Probably Timnit and the management disagreed on "what timescale positive change needs to happen at, and what costs/tradeoffs they are willing to accept to accomplish positive change".
Not jaded. I think that's actually a very insightful way to put it. Heck I wouldn't even mind if a boss I had put it like that, as it's at least honest and not really unreasonable. I mean, if you go into an organisation and decide the entire premise of the organisation is irredeemably flawed then at some point your feedback is going to no longer be useful, even if some of it is on point, because "this entire enterprise is worthless" is not really an actionable view most of the time.
There's usually a reason they hire consultants and contractors to say the unpleasant truths. They can be ignored, reframed, bribed to say whatever, and have the luxury of saying ugly things and then fucking off not long after.
I'd even expect that we only know the tip of the iceberg. You don't leave employment that you are otherwise happy with over such a thing.
There were clearly conflicts brewing under the lid. This was just taken as the final straw, on both sides, I'd reasonably expect. She was dissatisfied with a bunch of things that she couldn't make public, so she boiled over on this one. That's why many people are surprised this is such a big deal.
And then she gave an ultimatum, which was great for both sides: for the employer, since they could use it to get rid of her right away and make it look like it's because of this (and "technically" they are right), and for her, because she can use it for maximum outrage. It was a calculated provocation. Come on, if you really want to be part of a process of change, that's not how to achieve it. She is smart enough to understand this.
I don't think it's miscommunication in any way. Timnit had submitted dozens of papers before and unless the process requirements are completely new, she would be well aware of them. This seems more of a case of selective enforcement, where she had been allowed to go around the process in the past (or felt that she has enough leverage to be able to do so now) and then not being allowed to do so this time.
This feels like a classic privileged tantrum which has blown up in the face because Google refused to bend this time.
> This seems more of a case of selective enforcement, where she had been allowed to go around the process in the past (or felt that she has enough leverage to be able to do so now) and then not being allowed to do so this time.
Jeff's email explicitly says the paper was "approved" then submitted, albeit perhaps in one day. The two-week thing might be a policy but would be immaterial if it ended up approved anyway within whatever timeframe. It sounds like a red herring to emphasize the time-in-advance; in any case the blame, if any, lies with whoever flipped the approval bit and did not wait for the other reviewers (possibly her manager?).
She sends for approval 1 day before the conference deadline, proceeds to submit the paper with conditional approval, and waits for the actual approvals. It is common practice with one side effect: if reviewers don't like certain parts of the paper, they can ask you to withdraw it (since you don't have an option to update the paper at this stage). If the paper was submitted for approval before submitting to the conference though, then they would have some room for back-and-forth engagement with updates on the paper.
Nowhere in the other party's account of the story is anything specified as "conditional", nor is whoever issued the initial approval mentioned at all. More weirdly, in her account of the story, the "back-and-forth" came from, tada... the HR department, which is super weird.
Look, I am not necessarily picking her side in the more general story, but it is apparent this paper submission story in isolation is super weird. It seems feasible that she was in the queue for getting the axe, having engaged in prior fights with the company (some mention of a threat of a lawsuit, etc.). I also acknowledge that her account of the story was quite cherry-picked and actively strategic, if not deceitful, and it does not even seem that Jeff was a direct player in this saga. It looks like the highest-level person in the loop was Megan, not Jeff, and Jeff was simply dragged into the mix for the opportunity to throw a punch at his reputation. That, however, does not minimize the weirdness of putting the whole thing on this two-week paper submission policy.
I agree with you for almost everything you said. This case is definitely unusual. What I mean by conditional is that the person who approved it initially is not an actual reviewer.
For your last sentence, I think Jeff uses that 2-week policy to state that his position is "technically" right.
I agree -- the passive voice here (intentionally?) makes it impossible to tell whether the external submission was done in good faith:
> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.
If (to the best of her knowledge) she got approval through the correct process, and then an unidentified party opaquely removed the approval after the fact, I think her questions are justified.
I found the quoted sentence to be very unclear, which is surprising because it is the crux of the matter. From it, I can't tell the order in which things happened or who did what. Who approved and submitted what, and when?
> where she had been allowed to go around the process in the past
I'm curious, is there anything in the posted information above that suggests they were able to go around the 2-week limit in the past? Or is that just a guess?
As a side note, I worked there as a run-of-the-mill engineer and the 2 week deadline was respected as far as I can tell. I once couldn't submit a paper to an external conference exactly because there was not enough time to get the internal review and approval.
It could still be miscommunication. It is quite plausible that she had not had any previous papers fail internal review, and almost certain that she had not had any fail internal review while already submitted to a conference (and thus "published" from Google's POV).
Not understanding Google's policy of "retract it now, possibly re-publish (resubmit) later after addressing Google's concerns" could very well escalate things, especially since it sounds like this all occurred right as a bunch of people were taking vacation, so the ability to get answers to questions/concerns was likely greatly hampered.
> Timnit had submitted dozens of papers before and unless the process requirements are completely new, she would be well aware of them. This seems more of a case of selective enforcement, where she had been allowed to go around the process in the past (or felt that she has enough leverage to be able to do so now) and then not being allowed to do so this time.
If the established practice differs from the nominal process, a no-notice switch to rigorously enforcing the nominal process is for most practical purposes no different from a no-notice change of process. Out-of-date documentation is at least as common in human processes as in software; the real process is what is actually recurrently executed, including how that recurrent practice takes into account the position of particular participants.
just... wow. Can anyone point to a man ever being labelled in such a way on HN?
Edit: comparing the number of comment hits for " his tantrum" vs " her tantrum" to the relative numbers for " his " and " her " shows that "tantrum" is, indeed, used 2.5 times as often to describe women as it is for men.
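For the curious, here is a rough sketch of how one might reproduce that kind of count, assuming the public HN Algolia search API. Note the caveat in the comments: very common words like "his"/"her" may be treated as stopwords, and quoted-phrase matching is approximate, so treat the ratio as a ballpark figure, not a measurement.

    # Count HN comments matching a phrase via the Algolia search API.
    import requests

    def comment_hits(phrase):
        # nbHits is the total number of matching comments for the query.
        # Caveat: stopword handling may skew counts for words like "his".
        r = requests.get(
            "https://hn.algolia.com/api/v1/search",
            params={"query": f'"{phrase}"', "tags": "comment"},
        )
        r.raise_for_status()
        return r.json()["nbHits"]

    his_rate = comment_hits("his tantrum") / comment_hits("his")
    her_rate = comment_hits("her tantrum") / comment_hits("her")
    print(f"'tantrum' appears {her_rate / his_rate:.1f}x as often with 'her'")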
It's not automatically a given that women don't do things that get labelled as tantrums at a higher rate. To demonstrate this is sexist, you would have to show that people judge the same behavior as tantrum or not tantrum at a different rate depending on the specified gender. Which may well be the case, in which case please link the study.
To turn it around: an interesting self-experiment is to try flipping the pronouns in the story and seeing if your internal reaction to it changes.
> the academic who expects to get access to the raw review responses so that the paper can be fixed.
Asking for the review notes is one thing, it seems like the main demand Jeff refused was giving the names of every reviewer. Why would the names of the reviewers be relevant to the academic, unless they specifically wanted to use the reviewers identity in order to undermine the notes?
Gebru has a history of using her considerable social media following to bully people she has minor disagreements with. This alone would practically warrant anonymity to make sure the reviewers do not get doxed by her.
Blind peer review should be pretty obvious to anyone from an academic background anyway. Asking for reviewers names from a publisher would be an outrageous request.
Was this a Google internal blind peer review? It sounds more like management pushing back for management specific reasons. I'm not sure it's the same thing, or that management should expect to be able to roadblock publication at the last minute without working with the authors directly.
> This is confusing to the academic who expects to get access to the raw review responses so that the paper can be fixed.
At most CS conferences, there is no second round of reviews (apart from a few PL ones that have been experimenting lately). If you get rejected, it's with the expectation that you make whatever changes you see fit and then submit anew to a different conference. In addition, there is generally no way to update a submitted paper after the submission deadline in such a way that the changes can still be taken into account in the single round of reviews, and making major changes after the accepting venue's review stage is very much frowned upon (as those changes would then not be reviewed). So her options would have been:
1. retract the paper;
2. get the paper rejected from the conference, and resubmit it later;
3. get the paper accepted and have it published without the major changes requested by Google, thus violating Google's policy/interests;
4. get the paper accepted and perform the major changes requested by Google at the camera-ready stage, thus violating academic convention and possibly even specific rules set out by the accepting conference.
2 is mostly 1 with more of everyone's time wasted, and 3 and 4 both violate some obligation or another. From Google's point of view, she might also well have gone for 3 without keeping them in the loop, presenting them with a fait accompli, and her enormous amount of social capital might have made that easy to get away with. Demanding 1 therefore seems like the most reasonable choice if they want to retain a credible norm of subjecting publications of people who work for them to prior review.
It seems like she has been working at Google for a fair amount of time. If so, I’m surprised that there was still confusion over when a paper needs to be taken through the internal review process. Outside of that, I found this to be a good summary of the situation. It omits her widely distributed email denigrating the entire diversity approach taken by Google, though.
Obviously it's hard to pass judgement without knowing the paper contents. Reading the emails carefully, it seems Jeff acknowledges that the initial review culminated in an "approval" despite some other "reviewers" still being in a pending state. That is a curious detail. Presumably someone with authority must have flipped the approval bit within that one day. Why did that individual not object? The two-week excuse sounds like a post-facto excuse based on the written policy, but clearly that alone does not seem to be the reason this blew up.
This is pure speculation, but it looks like there may have been some additional backchannel from the conference reviewers to their buddies at Google that caused a follow-on sensitivity or objection later from other functions at Google, namely PR & Policy (based on her tweets), which may have been why Timnit was super curious about the identity of the feedback authors.
The back channel theory sounds like a good point to me. Anonymous peer review is largely a charade, I'd be more surprised if Google didn't have fingers on the scales and flies on the wall at top ML venues than if they did. It's very possible that the leak/breach of anonymity is what kicked off her demands to get the names of people who filed complaints that led to the paper's retraction.
> The massive amount of miscommunication makes her feel like she was being oppressed, due to race and or gender.
That seems like a big jump. Isn't it more likely that the treatment is due to her pushing for ethics in an organization whose business model is antithetical to ethics?
Google is investing in coming up with smaller models as well, e.g. "TensorFlow Lite". I think it is unjustified of her to condemn approaches at this stage; all research starts humble and almost always inefficient. People were doing ResNet-50 not so long ago and then processes improved. People came up with better configs with fewer layers, YOLO for instance. So big models in and of themselves aren't a problem. I don't see this as a good enough reason to resign.
That sounds like a horrible use of human hours, especially at Google salary level.
Paper writing is painstaking and mentally tiring. It would be much more efficient if they reviewed results and an abstract and then decided based on that "Go ahead and write the paper and publish it" or "Don't waste time writing it up".
Money is not an issue at all. It's about protecting the perception of quality that comes from having Google co-authors, and ultimately the Google brand.
Otherwise, everyone and their mother would milk the Google name by submitting papers in every direction.
Sure, then they should have a 2-step process. The first step should be approving writing the paper, and any high selectivity should happen at that point.
The second step should be approving the final contents, and that process should not reject a high fraction of submissions and should allow resubmissions with changes.
It would suck to spend 200 hours fighting to position figures in LaTeX only to be rejected by some dudes who think the figure is confidential.
That wouldn't be effective though. Most of the time these reviews are to make sure company IP isn't being published, which could be outside the main results or abstract.
I think it's worth highlighting that Google's internal review of papers is probably significantly different from what we'd expect in an academic setting. I'm no expert on big corporate entities, but I wouldn't be shocked to find out Google's internal review heavily weighs impacts on Google from both a public relations and an earnings perspective.
My guess would be that they took the issue as a personal offence, and they believe that someone higher in the power structure has some personal beef against them as a person, rather than against parts of their work. They want to confront that person.
I'm curious about this "submitted paper for review 1 day before publication" assertion. From my reading, it sounded like work on this paper had been going on for a while. The email quoted on "Platformer" makes it sound like communication with HR (which seems weird, honestly, for a research paper) had been going on for at least two months.[0]
> Imagine this: You’ve sent a paper for feedback to 30+ researchers, you’re awaiting feedback from PR & Policy who you gave a heads up before you even wrote the work saying “we’re thinking of doing this”, working on a revision plan figuring out how to address different feedback from people, haven’t heard from PR & Policy besides them asking you for updates (in 2 months).
The NY Times reported that four of the researchers working on the paper also worked at Google.[1]
> In an interview with The Times, Dr. Gebru said her exasperation stemmed from the company’s treatment of a research paper she had written with six other researchers, four of them at Google.
It seems to me Google management should have been aware of this paper prior to the day before publication.
I don't think Google is saying they weren't aware of the paper prior to the day before, just that they hadn't seen the actual paper itself before then.
Out of curiosity, do people think that a company either should or would publish self-funded research that's detrimental to its PR? I'm not sure that this is a realistic expectation with our current methods of company governance (which should probably be changed). I'm assuming the research isn't overly safety-oriented and is more like "overuse of Google leads to decreased overall well-being" rather than "use of the cigarettes that we sell leads to significantly reduced life expectancy". Of course that statement raises the question of where the line between the two lies.
In my opinion, it seemed pretty low key. But I've only read the abstract.
In terms of Google's expectations and what should we consider reasonable, I'm really not sure... I do think that when they hired this person to be "technical co-lead of the Ethical Artificial Intelligence Team at Google", they probably should've guessed that there could, possibly, be a conflict between "make loads of money" and "ethical".
I think one important point is that she submitted the paper the day before the deadline when 2 weeks is usually required for review. It was approved then, but after the review process played out she was asked to retract her paper.
I think the subtle part might be "It was approved then". I read that somewhere too, and it was not clear whether it was approved by Google despite not following the normal 2-week process or whether it was some other form of approval.
>Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.
Not sure what to make of that - maybe that Gebru submitted the paper and then immediately clicked the "approve" button herself?
Most definitely not. I think that is a critical detail. You can be 100% sure that if she had clicked Approve herself, Jeff would have either omitted that fact entirely or explicitly mentioned that it was a self-approval. Some other person clicked Approve for sure, and they are somehow not putting any responsibility on that person in this public narrative.
> Timnit Gebru demanded access to all of the internal feedback, along with the identities of its authors. If Google wouldn't provide this access, she said that she intended to resign and would set a final date.
This is the part that makes me cynical about Google’s motive. What sort of process calls for cross-functional (XFN) feedback on your work but withholds that feedback and its sources from you? This is very odd.
>I’m always amazed at how people can continue to do thing after thing like this and then turn around and ask me for some sort of extra DEI work or input. This happened to me last year. I was in the middle of a potential lawsuit for which Kat Herller and I hired feminist lawyers who threatened to sue Google (which is when they backed off--before that Google lawyers were prepared to throw us under the bus and our leaders were following as instructed) and the next day I get some random “impact award.” Pure gaslighting.
I think if you threaten to sue your employer, the employer isn't too much at fault and just covering their own neck when you later threaten to resign and they take you up on that offer.
Timnit: “I’m sick of this framing. Tired of it. Many people have tried to explain, many scholars. Listen to us. You can’t just reduce harms caused by ML to dataset bias. Even amidst of world wide protests people don’t hear our voices and try to learn from us, they assume they’re experts in everything. Let us lead her and you follow. Just listen. And learn from scholars like @ruha9 [Ruha Benjamin, Associate Professor of African American Studies at Princeton University]"
I don't like engaging this kind of discussion but have to call out the "I'm so tired" bit every time I see it.
I find it strange to take issue with the "I'm so tired" line. It would seem to be entirely appropriate for circumstances where you see an argument that you have replied to many times but that completely ignores your reply. It also just doesn't strike me as particularly hostile or profane. Quitting twitter and making a show of it strikes me as a far more emotional reaction, but nobody here seems to take any issue with that?
In this case, the issue is that "it's the dataset" is attacking a straw man: nobody is accusing ML practitioners of being explicit racists actively conspiring to make their models racially biased. Everyone understands that it is (often) a result of the underlying dataset.
But, the argument goes, understanding the genesis of such bias isn't enough to excuse it. At least not when those models are put to actual use in, say, law-enforcement contexts. Or credit agencies, or hiring decisions, or really anything.
If your dog happened to bite any red-haired kid it came across, it doesn't matter that you think red-haired kids are just as good as others, or that the dog was once mistreated by a red-haired kid. You're not going to allow the dog to get near any red-haired kids. And if you do, you'll face charges of at least negligence when some red-haired kid gets mauled.
To tie this back to AI, it would mean that as long as your models produce racially biased results, they simply aren't ready for deployment or publication. Go find better data until your work is no longer liable to inflict harm on anybody.
That's the standard for all other industries: if your car is shown to reliably kill pedestrians below a certain height in crash tests, you go back to the drawing board. Adding a sticker "don't operate car in the vicinity of children" isn't enough.
You can disagree with that reasoning (although you'd be wrong). What you can't do, as a leader in AI, is to pretend to have never heard of it, or that it is so obviously wrong that it warrants no reply whatsoever.
> have to call out the "I'm so tired" bit every time I see it.
Many other rhetorical moves have a lot of politics to muddle through / implicit assumptions that can be disavowed in bad faith, etc. I don't like to engage in that kind of discussion at all.
But the "I'm so tired" bit is both condescending (and it takes... something... to condescend to Yan LeCunn) and evasive. The end result is authoritarian -- look man, this is decided and settled, you haven't kept up.
Calling this out is interesting because it evidences on the contrary that the discussion isn't settled at all, and that she's unable or unwilling to frame it in a convivial manner that's conducive to progress.
> Many other rhetorical moves have a lot of politics to muddle through / implicit assumptions that can be disavowed in bad faith, etc. I don't like to engage in that kind of discussion at all.
But isn’t what you described so vague and subjective, that you can apply it to pretty much anything?
IMHO it’s more honest to say “I sit out of conversations I don’t like”, which is an entirely fair and understandable decision to make :).
Yann was (actively or unintentionally) refusing to participate in the same conversation. His response was the "facts don't care about feelings" one. It's not wrong but it is a thought terminating cliche.
It is. Yann also seems blissfully unaware of how sampling bias issues have always been dealt with in traditional statistics (e.g. the Heckman correction), where “getting more data” is not an option.
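For anyone unfamiliar, here is a minimal from-memory sketch of what that two-step correction looks like, on synthetic data (assuming statsmodels and scipy; every variable name below is made up for illustration):

    # Two-step Heckman correction: probit selection model, then OLS on
    # the selected sample with the inverse Mills ratio as a regressor.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 5000
    z = rng.normal(size=(n, 2))                 # selection covariates
    x = z[:, 0]                                 # outcome covariate
    u, v = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], n).T

    selected = (z @ np.array([1.0, -0.5]) + v) > 0  # non-random selection
    y = 1.0 + 2.0 * x + u                           # true outcome model

    # Step 1: probit model of selection, then the inverse Mills ratio
    # evaluated at the fitted linear predictor.
    probit = sm.Probit(selected.astype(int), sm.add_constant(z)).fit(disp=0)
    mills = norm.pdf(probit.fittedvalues) / norm.cdf(probit.fittedvalues)

    # Step 2: OLS on the selected sample only, with the Mills ratio
    # absorbing the selection effect; naive OLS without it is biased.
    X = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
    ols = sm.OLS(y[selected], X).fit()
    print(ols.params)  # slope on x should be close to the true value 2.0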
It's not the message, it's the tone. There's a big difference between offering constructive criticism and being a condescending prick.
> You can disagree with that reasoning (although you'd be wrong). What you can't do, as a leader in AI, is to pretend to have never heard of it, or that it is so obviously wrong that it warrants no reply whatsoever.
There's a third option. You can just decide to not engage with the condescending prick at all.
We have one of those at work. His area of responsibility is a very cross-functional one, and while it is important, he has a great deal of difficulty getting people to participate, because participation usually means getting told you're an idiot. Loudly, forcefully, and with more words.
That seems to be the case here, only substituting "idiot" out for "racist".
> To tie this back to AI, it would mean that as long as your models produce racially biased results, they simply aren't ready for deployment or publication. Go find better data until your work is no longer liable to inflict harm on anybody.
Well, the disagreement is about publication. Yann LeCun himself said that "The consequences of bias are considerably more dire in a deployed product than in an academic paper."[1]
Why would it be wrong to publish research based on biased data?
What Yann said about data bias is a fact. He wasn’t dismissing your concerns nor distracting from them. A lot of people tried to extrapolate what Yann meant, but the way I see it, if more people learn about the cause of this racial bias, more people will be inclined to use fairer and more diverse datasets in their future ML research.
The point is that she has an expectation of Yann to go further than just stating a fact.
I assume she thinks that Yann has enough intellectual capacity to improve the situation of biased ML but doesn't execute. Further, my guess is that the reason why he doesn't execute is not transparent to her, hence the frustration.
This is probably the core of the conflict here: there are researchers who aim to remove technical bias from technical systems and expect to stop there, not going into the social issues of how the systems will be applied, as they consider that (e.g. solving the biases in society) a separate issue.
Dr. Gebru, among others, expects and insists that they do go beyond that and try to get involved in the social consequences of tech. They refuse, ergo conflict.
> The point is that she has an expectation of Yann to go further than just stating a fact.
Yann was talking about a specific model, she was talking about ML & society on a grand scale. Can't we talk about specific things now? Why did he have to apologize for it?
My feeling is that Timnit is using people, she guilt trips, humiliates them online, makes a big fuss and it's all to whip up support in her crusades. She wants to win the public opinion by any means, so she needs famous opponents to pile on, such as Yann and Dean (who have universal admiration in the field). She's always playing the victim. I find this kind of behavior toxic, it degrades dialogue.
"It's December" is also a fact. He chose to mention a different one. Why, if not as an explanation / excuse for the model's behaviour?
Every practitioner already knows that these models behave according to the data they are trained on. The general public has no use for this explanation because they aren't the ones deciding on the training data. So the only practical purpose is to offer some superficial causal mechanism that sounds like an explanation to that public, presented as an unchanging fact of nature, and to distract from the equally true fact that a biased model is only published or used if it is trained on biased data and someone decides to go ahead and publish/use it nonetheless.
You're committing a fallacy here, forcing the issue into black and white: as if mentioning 'dataset bias' first must mean you 'offer some superficial causal mechanism that sounds like an explanation to that public'.
The comment by the first John in the comment of the article here [https://syncedreview.com/2020/06/30/yann-lecun-quits-twitter...] adds a lot of insight too (starting with "As I understand, the social and structural problems are that AI learns the social and structural biases that are already present.").
Yeah GP is modeling exactly the behavior they're railing against. Claiming that something is wrong without any practical insights for how that thing might be better.
Tiredness is probably coming from trying to educate people on Twitter, you can't do that because tomorrow a fresh batch will come and you have to start over.
Yeah, it sort of implies that, yes, you're angry about something, but actually have no constructive advice or solutions, other than to accuse someone of being a bad person or even a racist.
The "I'm so tired" bit is similar to the "I'm not here to teach you" bit. Maybe someone can't teach you because they actually have nothing to teach.
No, "I'm so tired" means that it's an argument they have replied to before. She has written dozens of research papers, some of which probably contained arguments other than repetition of "you're a racist and a bad person".
You of course know about this, or would know about this, considering she is called an "AI researcher at Google" in the title of this thread. So the only lazy and unconstructive criticism is to assume she has not made any arguments beyond your one-line caricature of what you believe her to be.
> “I think that now a lot of people have understood that we need to have more diverse datasets, but unfortunately I felt like that’s kind of where the understanding has stopped. It’s like ‘let’s diversify our datasets. And that’s kind of ethics and fairness, right?’ But you can’t ignore social and structural problems.“
This seems to be saying that Yann LeCun should actually be addressing social and structural problems, rather than just applying neural networks to datasets.
IMHO she is saying it’s reasonable to believe that social and systemic issues have an impact on creating ML models (as the teams designing, building, and deploying said models are members of society) in ways in addition to the data sets.
I don’t know what the right answer is but plenty of institutions come up with ways they attempt to deal with systemic biases. For example, blind auditions for positions in orchestras (you only hear the person auditioning, you can’t see them). Used to not be a thing. Now it is.
Point being that when people are motivated they are able to come up with stuff.
Yes, he should, if the networks he builds have an actual impact on real people because of their social/systemic/institutional status. If he doesn't want to do that, he shouldn't be working on networks that have this kind of impact.
What if you're a nuclear engineer that just wanna make the hot bit go hotter, and don't want to care about the nasty habit that radiation has of killing people?
I think the point was reciting the (true!) 'biased datasets create biased models' argument at best avoids, and at worst actively undermines those bigger questions.
Someone in another thread on this had IMO a great comment[1] summarizing the sentiment:
> If you are going to say "Well, garbage-in, garbage-out!", why do you keep putting the racist garbage in?
I don’t think that for a person who lives in a world where AI shops routinely sell software that discriminates against minorities, not just in ways that are explained by poorly thought-out datasets, saying you’re “so tired” is at all wrong. This is the exact research that Timnit is an expert on, and these are systems that have real-world consequences for minorities. It’s easy for people who are not minorities to say you should always be polite, but frankly I think she’s being extremely polite here given all of the circumstances.
I bet there is research showing how racism and discrimination literally tires out the people subjected to it. And now you have people saying they're not allowed to express that.
Nobody said it was disallowed. The grandparent said he called it out as a lazy rhetorical move. It's certainly true that oppression can be tiring (does that point need research?) It may equally be true that some people overclaim on the extent of their oppression, and the mental health harms they are suffering from it, and even make a profession of doing so.
The quality of the "argument" here is F--. Basically, it's, "nothing you say matters, only what we say matters." Phrases like "just listen" are essentially, "you will respect my authority!"
It's "commonly accepted" in that it is a well-known possibility and, therefore, something that practitioners should check for before publishing, using, or selling their models, yes.
It's not "commonly accepted" in the sense of throwing one's hands up, saying "there's just no way we could find enough pictures of black people for our AI to no longer confuse them with Apes" and move on.
Her point is more than that. I think that she was trying to say that ML systems have an inherent bias that is independent of their data.
This is a consequence of the fact that it's not possible to do generalization, and thus learning, without bias to begin with. What an AI researcher does when deciding on an architecture is essentially tuning the bias in the generalization system in order to produce better results, which is not in and of itself bad, and is indeed unavoidable.
By bias, I mean really anything that means that the function g() is used over the function h(). In practice this can be anything from the seed for the random values, to the choice of activation function, to the architecture of the network, and so on.
If the architectures are selected for their performance on dataset y, then it is possible that even when trained on dataset x, they retain some of the bias of dataset y, because their generalization bias, which is architectural, was tuned for dataset y. This is, admittedly, a theoretical result, and it's not clear that it applies in every case, but it is a solid point.
For that reason, the point was made that benchmarking against non-biased datasets is important, and thus that there is more to bias than the final dataset used for training at deployment. Therefore, it's not only a responsibility of the engineering community, but also of the scientific community.
Interestingly enough, this is true of all learning systems, including humans, and shows why it's impossible to be completely unbiased in really anything.
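To make that claim concrete, here is a minimal, entirely synthetic sketch of the probe described above: tune an architecture on a skewed benchmark ("dataset y"), retrain it from scratch on different data ("dataset x"), and check per-group error. Everything here (the data generator, the tiny hyperparameter grid) is hypothetical; the code shows the shape of the experiment, not a guaranteed result.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def make_data(n, minority_frac):
        g = rng.random(n) < minority_frac          # group membership
        X = rng.normal(size=(n, 5)) + g[:, None]   # groups differ slightly
        y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=n)) > g
        return X, y.astype(int), g

    # "Dataset y": a skewed benchmark, used only for architecture selection.
    Xb, yb, _ = make_data(4000, minority_frac=0.02)
    Xb_tr, Xb_va, yb_tr, yb_va = train_test_split(Xb, yb, random_state=0)
    grid = [(8,), (32,), (64, 64)]
    scores = [MLPClassifier(hidden_layer_sizes=h, max_iter=500, random_state=0)
              .fit(Xb_tr, yb_tr).score(Xb_va, yb_va) for h in grid]
    best = grid[int(np.argmax(scores))]   # architecture tuned on dataset y

    # "Dataset x": balanced data, trained from scratch with the chosen
    # architecture, then evaluated separately per group.
    X, y, g = make_data(4000, minority_frac=0.5)
    X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(X, y, g, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=best, max_iter=500,
                        random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    for grp in (False, True):
        mask = g_te == grp
        print(f"group={grp}: accuracy={(pred[mask] == y_te[mask]).mean():.3f}")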
Eh. I think there's some legitimate concern that AI isn't fair to marginalized groups for a variety of reasons.
Dataset bias is only one way this occurs, though. Even if you include more faces of people of color, you may still be taking pictures with a camera that doesn't capture as much contrast. Or even if you retrain, you may have chosen a model structure that optimizes behavior on a biased dataset. Or, if you're considering the cost of misidentification by a facial recognition system used by, say, police, you need to be sure that you take a perspective that applies to society as a whole and not just to your own interactions with police.
It's perfectly reasonable to say "AI is worsening the experience of minorities in various ways" and to push back on responses that assume slightly more care in curating datasets will completely solve the problem.
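As a concrete illustration of that last point, here is a tiny sketch (hypothetical arrays, no real data) of checking an error-rate gap by group rather than overall accuracy:

    import numpy as np

    def false_positive_rate(y_true, y_pred):
        negatives = y_true == 0
        return (y_pred[negatives] == 1).mean()

    def fpr_gap(y_true, y_pred, group):
        # Gap in false positive rate across groups. Overall accuracy can
        # look fine while this gap is large, whatever the cause: data,
        # capture hardware, or architecture choice.
        rates = [false_positive_rate(y_true[group == g], y_pred[group == g])
                 for g in np.unique(group)]
        return max(rates) - min(rates)

    # Hypothetical example: a model that is wrong far more often for "b".
    y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1])
    y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 1])
    group  = np.array(["a", "a", "a", "b", "b", "b", "a", "b"])
    print(fpr_gap(y_true, y_pred, group))  # 1.0: every negative in "b" flagged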
One of the huge questions is: who takes responsibility for the issues with these black-box learning systems?
In France, since you have to be able to explain in plain words the algorithm that was used to help to make an administrative decision, it indirectly makes these black box learning systems illegal.
(And yet they probably already have been used by the police?)
Now, the naive response is that the engineer who implemented a system should take responsibility for any harm done by the system. And I guess that if these engineers could lose their license to use a computer and face jail time, they might also start to put pressure upstream on researchers to provide better systems?
I'm not sure I understood your comment correctly, so I'll apologize in advance if I misunderstood.
That statement is contained in a paper I linked that dates from 1980. I'm pretty sure it's factually correct, and I'm trying to make nothing but a demonstration that an idée reçue, that ML systems are only biased due to their data, is incorrect.
There's really no cloaked message behind it, and I'm not sure why one would think there is (hence the disclaimer). I think it's a pretty uncharitable interpretation.
> That statement is contained in a paper I linked that dates from 1980. I'm pretty sure it's factually correct, and I'm trying to make nothing but a demonstration that an idée reçue, that ML systems are only biased due to their data, is incorrect.
That's reasonable. I may have taken the initial text in the wrong context. Thanks for clarifying.
> Even amidst of world wide protests people don’t hear our voices and try to learn from us, they assume they’re experts in everything. Let us lead her and you follow. Just listen. And learn from scholars like @ruha9 We even brought her to your house, your conference.
It strikes me as very unprofessional, and frankly I believe her anti-white bias shone through in that instance.
I think there is a bit of a double standard here. I try in general to be as charitable to both sides of a disagreement as possible. If your standard for uncharitability is such that the linked tweet is proof of racism, I don't understand why one would take at face value a tweet making claims against overwhelming evidence.
Personally, I don't think either indicates racism. I see the tweet you linked as an excessively abrasive response to a dismissive and reductive understanding of an issue, bias in learning systems, despite research old and new. At the same time, I see LeCun's dismissal as a genuinely held position in error.
To see the first as racist but the second as okay seems to be a case of being much more charitable to one side than the other, though I understand how it may look this way.
Whatever dude. You people are delusional. Stop bending over backwards to defend dumb ideas. Doing so opens you up to mockery and ridicule of your otherwise substantive thoughts.
Honestly, I am not even sure _how_ they would fix the system(s) they complain most about. Assume Yann is wrong and Timnit is right: what is the proposal for fixing the bias that is somehow inherent in the system?
She says 'let us lead and you follow' -- sure...where? What are we concretely doing about the "in-built bias" problem?
Yeah, that LeCun argument gave me a really bad first impression of her. I guess she found out life isn't Twitter and you can't just go around telling everyone they are bad and wrong without consequences.
I'd even go one step further and bet that her whole career plan is ML activism. If you scan the email, all the key words are in there. Textbook agitprop.
I feel sorry for the next company that will hire her.
Rude, maybe but not very. Dismissal, yes. The belief that AI should learn reality as it exists and not some ideologically purified version of it is a completely rational one, and dismissing the "work" of those so-called "ethicists" is no more problematic than dismissing the "work" of the Chinese censors who purify the Chinese internet. Indeed being dismissive of that work is to be praised, as it shouldn't exist at all.
Well, read any AI bias paper. Facts about reality these models capture and which are considered problematic include stuff like "doctors are more often men and nurses are more often women" or "most people on the internet are white" or "most stuff on the internet is written in English". Basically the internet is mostly built by westerners and any facet of that gets classed as "bias" rather than being what it is: the truth.
Even trivial facts like, in English, "men chuckle, women giggle" have been used as examples of bias. That's not bias though, that's just how the English language works.
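For anyone who wants to look at these associations directly, here is a quick sketch assuming gensim's downloader and its pretrained GloVe vectors; whether you read the output as "bias" or "just the corpus" is exactly the disagreement above.

    # Probe he/she associations in off-the-shelf word vectors.
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe, ~130MB

    for word in ("doctor", "nurse", "chuckle", "giggle"):
        he = vectors.similarity(word, "he")
        she = vectors.similarity(word, "she")
        print(f"{word}: he={he:.3f} she={she:.3f}")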
What politically uncomfortable regularities are those? Please name the specific uncomfortable true facts about reality that you think these models are capturing.
Jeff Dean, Head of Google AI, has responded (the original article has been updated), claiming the following:
>Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
> This happened to me last year. I was in the middle of a potential lawsuit for which Kat Herller and I hired feminist lawyers who threatened to sue Google
Self-entitlement, or she forgot she was just an employee and not some sort of independent tribal leader fighting Google from the inside.
> I think if you threaten to sue your employer, the employer isn't too much at fault and just covering their own neck when you later threaten to resign and they take you up on that offer.
It's not clear to me what you're advocating for here -- a lawsuit wouldn't be raised by such an intelligent and virtuous individual unless there was some clear wrongdoing. Are you saying that you think workers shouldn't sue their employers when their employers are breaking the law?
Just as an FYI, LeCun has not actually quit Twitter, he still posts stuff on there. For those curious I wrote up a more detailed summary of the episode (https://thegradient.pub/pulse-lessons/). TLDR, imho he was a bit unthoughtful in how he communicated with a more junior researcher, and got some respectful flack from it from various people, it was not that big a deal. But of course you can make up your own mind on that.
In any case, Timnit Gebru (this particular AI researcher) has been recognized as having done some pretty important work in AI ethics. Sounds like this whole thing is messy, but thought I'd mention that for more context.
Well, reading the thread, it's pretty clear that she was correct. There is a second step of metaoptimization which incentivizes researchers to produce algorithms that more easily bias in a certain way. And she is correct that benchmarking models against biased datasets can lead researchers to favor architectures that are in turn more likely to have a certain bias.
I'm sure LeCun knows this too. It's a fact that has been known for almost 40 years now that, for a given generalization system, multiple bias modes are possible, and of course the process of optimizing architectures against a biased benchmark will change the learning bias of the model. It's not just datasets that are biased; generalization systems are necessarily biased too.
As for the last line, I don't know that it should be expected for employers to fire employees pursuing recourse against sexual harassment.
I'm sure we can agree that the tone wasn't good, but she does have a point in saying that it's not possible to reduce the bias to datasets used to train the final model.
TBH it doesn’t matter if she is right or wrong. Her tone and angry writing are pretty harsh.
The Twitter replies read a lot like her Google message thing. She’s got a lot of good things to say but needs to work on the delivery.
I think she’s playing the victim and race card a lot more than she should. She’s got all the ammo she needs to make her arguments without the angry victim tone.
Have you considered that there are many other people with the same thing to say, who are saying it with the respectful tone you demand, and are being just as politely and quietly ignored?
I think the simplest thing to be said about messaging and tone is this:
If your goal is to say things to convince people to change their behavior, it makes sense to spend some time thinking about how your message and its tone will be perceived, since that has a big impact on how people receive and process the message.
Based on a long career of watching people try to convince others, attacking people is typically a failing strategy. Instead, learn how to message in a way that is non-confrontational, and most importantly, make it clear what is your opinion versus what is fact.
Yes but I’m not a prominent AI Ethics Academic with ties to 2 of the best colleges in the country, Stanford and MIT. Who also had a very good job at one of the most sought after companies in the world, Google.
When she speaks people listen. Nobody listens to wil421.
Well, she didn't get ignored. Instead, she got fired. If that was her goal, then she succeeded. If it wasn't, maybe she should have taken a different approach.
I agree that her tone is not good, indeed. It's true that this is a big problem on Twitter especially. I still think it is an interesting and under-discussed point, though.
I think this is actually the better link to be discussing, rather than the twitter threads shared before.
The most relevant parts (in terms of back story) should be this:
> A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar (this popped up at around 2pm). No one would tell you what the meeting was about in advance. Then in that meeting your manager’s manager tells you “it has been decided” that you need to retract this paper by next week, Nov. 27, the week when almost everyone would be out (and a date which has nothing to do with the conference process). You are not worth having any conversations about this, since you are not someone whose humanity (let alone expertise recognized by journalists, governments, scientists, civic organizations such as the electronic frontiers foundation etc) is acknowledged or valued in this company.
> Then, you ask for more information. What specific feedback exists? Who is it coming from? Why now? Why not before? Can you go back and forth with anyone? Can you understand what exactly is problematic and what can be changed?
> And you are told after a while, that your manager can read you a privileged and confidential document and you’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored. And you’re met with, once again, an order to retract the paper with no engagement whatsoever.
> behavior that is inconsistent with the expectations of a Google manager.
Maybe my remark is too down-to-earth, but she literally told her group to stop working. I do not know of any company where this is acceptable behavior for a manager. Beyond the disagreements, it sounds to me like that was the action that caused Google to pull the trigger.
Also, from Dean's email it is pretty clear there are a number of new AI-based products in the pipeline. At one point top management decided the company should stop denigrating its own products.
She was fired for damaging Google's reputation. In her email, she carelessly demanded the names of those who reported her, which can be interpreted as extortion, and Jeff didn't miss the opportunity to call her out. Her offer to resign was another mistake: she would have been fired anyway, but for a different reason and with more paperwork. Now if she takes this to court, she will have to tell the judge what she needed the names for.
Yes, she was. Even the letter communicating the termination of her employment made it fairly explicit that she was being terminated on terms different from what Google interpreted her resignation to be, due to the internal email.
It's clear that for external PR purposes Google wants to maintain the story that it was merely accepting her resignation. But while the letter to her was clearly worded with the intent that an incomplete reading would support that narrative, it fairly directly (and unusually for a case where they viewed themselves as accepting a resignation) made explicit that employment was being terminated, immediately, on grounds separate from the supposed resignation.
On the other hand, I guess making it explicit that she was being fired for public internal complaints about a culture of discrimination is one way (like, the worst way possible, but one way) of making the case that what happened to Damore wasn't discrimination against white men, but just what they do to any internal criticism of Google culture.
In the email shown, she told a listserv called "Women and Allies" to stop writing docs that attempt to better the situation in terms of diversity, equity, and inclusion, because in her opinion writing those docs is like pissing in the wind.
Quoting Jeff Dean's email about this (as published in https://www.platformer.news/p/the-withering-email-that-got-a...): "I also feel badly that hundreds of you received an email just this week from Timnit telling you to stop work on critical DEI programs. Please don’t. I understand the frustration about the pace of progress, but we have important work ahead and we need to keep at it." This implies that Google, at least, does interpret her email as telling people to stop work on certain issues.
It’s more relevant, but probably doesn’t tell the whole story considering she very heavily hints that she gave them a list of demands or she would resign, and Google just accepted her resignation.
"Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date."
We don't know the full list of demands, but Jeff Dean shared one demand that people may find unreasonable. Other demands may have been much more reasonable, but those weren't shared.
Considering she says she made a list of issues she'd like fixed, said she'd discuss it when she came back from PTO, never mentioned resignation, and was answered with a "we accept your resignation" passed on to her reports, I believe she has legal representation.
The financial implications of her resigning vs her being fired are probably significant.
In Google's place I'd probably negotiate an agreement with a non-disparagement clause. She'd be compensated and everyone would save face.
And we'd never be discussing this. She could take other research roles and advance the state of AI in peace.
And, when reading it, I want to remember the backstory - that (if we can believe her story) Google went to unreasonable lengths to try to block the paper.
Maybe someone could post it here when it's published?
Pure speculation, but it's gotta be criticizing Google, right? What else could they _possibly_ have been so concerned about that they were willing to fire the researcher over it?
"feedback" here seems to refer to feedback on the scientific paper that is being retracted, not personal performance feedback or anything like that. When I red this first, wasn't immediately clear what this was about.
She has a history of attacking people through social media. If I were reviewing her paper, I would certainly not want to be known to her, especially if I were critical of it. The possibility of retaliation from her is very real and she often uses her considerable twitter following to do exactly that. I believe she thinks she's doing the right thing, but from a disinterested party's perspective she seems like a bully.
I'm well aware of that. But this was not a (submission) peer review situation. From everything I've seen (not a complete picture of course) this was a process entirely separate from the peer review process of the conference she was submitting to. That still makes it a situation where her paper was apparently reviewed (and possibly by peers) but it certainly doesn't have the same semantics in which anonymity is warranted/guaranteed.
But the feedback itself wasn't shared with her. That part doesn't make sense unless management didn't want her to see it, and why would that be the case?
Because (a) by the rules of engagement, management is not required to share original feedback, and (b) the original feedback could be used to identify the sources. In this case, there might have been a real risk to the sources given the individual in question is known to mobilize internal and external forces like Twitter mobs to personally attack people and bully them into submission.
I think this argument would stand on its own without the very specific claims about the person in question, which are a little strong unless you have personal information.
I'm not sure we know exactly what kind of review it is. I think it also depends what she signed up for, explicitly or implicitly. What were the expectations around freedom to publish?
It is a review that every paper submitted to a conference or a journal from Google has to go through. In fact I believe one has to do that prior to even submitting the paper to external peer review. That at least was the interpretation every researcher I knew at Google was operating under.
> This happened to me last year. I was in the middle of a potential lawsuit for which Kat Herller and I hired feminist lawyers who threatened to sue Google (which is when they backed off--before that Google lawyers were prepared to throw us under the bus and our leaders were following as instructed) and the next day I get some random “impact award.” Pure gaslighting.
And she was under the impression that her company would value her so much that not only would they not take the first "clean" opportunity to let her go, but that they'd want her to stay so much that she could leverage it?
That's just you not realizing that your company sees you as a litigious/PR-disaster time bomb and is tiptoeing around you. The worst part is that there was a path for you to be professional and achieve your goals, but then I guess you don't get to bask in Twitter's righteous indignation that way.
To clarify, the first paragraph of defertoreotar@'s comment is a quote from Timnit Gebru's tweet where she admits to recently conspiring a legal battle against Google, while on Google's payroll. A wolf in sheep's clothing...
The thrust of her thread today is that it is shocking how Google dumped her with no warning - it sounds like, instead, they continued employing her for months and months while her lawyer was threatening to sue the company, and the culminating event was making demands for the company to meet if she was to stay.
Declining to accept an employee's set of ultimatums, an employee who already engaged in threatening legal action for months and months, is arguably a much clearer framing than "her willingness to assert her rights under the law"
Therefore "recently conspiring a legal battle against Google, while on Google's payroll." is worth sharing, it provides information not present in the limited frame of "her willingness to assert her rights under the law"
> instead, they continued employing her for months and months while her lawyer was threatening to sue the company
This doesn't at all sound like the case. It sounds like the company did something aggressive, she got representation who threatened to sue, and the company backed off, and all of this happened over a year ago and is essentially a closed chapter.
> all of this happened over a year ago and is essentially a closed chapter.
Very credibly threatening to sue your employer with a legal firm, regardless of whether you were in the right to do so, is never a closed chapter. Companies are made of people, and those people are not going to forget something like that. They're going to be waiting for their first chance to get rid of you.
Right or wrong, justified or not, if you threaten your employer with a lawsuit, you need to be looking for a new employer that very same day. The employment relationship is now irrevocably a hostile one.
Sorry, dumb question, which part are you disputing? I must be being too literal.
What you called out as "doesn't at all sound like the case":
[1: they continued employing her for months and months] [2: while her lawyer was threatening to sue the company]
First paragraph of the email OP posted, which I am defending as worth sharing:
"This happened to me last year. [[1]: there has been 11+ months, or months and months, since this occurred and she continued employment] I was in the middle of a potential lawsuit for which Kat Herller and I hired feminist lawyers who threatened to sue Google [[2]: her lawyer was threatening to sure the company]"
"while her lawyer was threatening to sue the company"
My reading is that the situation you're describing is resolved, and has been for quite a while. Thus no lawyer is currently threatening to sue, nor have they for quite a while.
Thus: an employee asserted her rights under the law in the past, and the situation was, apparently, resolved to everyone's pleasure. "recently conspiring a legal battle against Google" is a misrepresentation of that.
Gotcha, my post didn't mean to imply that the lawsuit _wasn't_ resolved, just that there was a period in which there was a lawsuit threatened and she was employed. Thank you for the feedback! I've had more negative interactions on here than positive recently, and this was heartening.
Yes, certainly it is and she undoubtedly knew that, but her job was guiding her employer to make ethical decisions / build an ethical framework. If you expect your ethicists to prioritize having a stable career and avoid potential career-limiting moves, I'm not sure how they're supposed to do their job. I would honestly be skeptical of a self-declared ethicist who's very good at climbing corporate ladders and has never gotten themselves in trouble with those more powerful than them.
Whether she was overly saber-rattling is, I think, not a meaningful discussion. "How could this ethicist have managed to keep her job" is a straightforward and uninteresting question: don't make any waves, don't challenge anyone too senior, make the company look good in public. "Does Google value talented AI ethicists who are genuinely committed to their work and are willing to sacrifice their career over it" is a much more interesting question.
Google found that there was a credible allegation that Andy Rubin raped another employee, and they figured out how to keep him around for years, until word got out, because he was valuable to the company. (And then they gave him a $90M exit package.) Timnit Gebru got fired while she was on vacation for simply being impolitic. That shows how much Google valued her work.
Her real job is in AI activism, not AI academia. She understands that very well, and she understands that an acrimonious parting of ways with Google, combined with obligatory accusations of racist bias, is a reasonable career advancement step in the field of activism (though it may have been distracting and debilitating in the field of academia).
Right. She offered her resignation. Google mishandled it, but obviously, she was ready to be thrown out of the organization. So saying "She could have stayed if only ___" misunderstands the entire situation - she wasn't trying to stay. She was trying to get the world to understand something about Google and how ethical the folks in charge of Google's AI are.
> And she was under the impression that her company would value her so much that not only would they not take the first "clean" opportunity to let her go, but that they'd want her to stay so much that she could leverage it?
In a realpolitik sense you are probably right. But right now I'm going through standard onboarding training and they are VERY explicit that it's not supposed to work this way, and that you are not supposed to retaliate or exclude someone because they make complaints or take actions based on the defense of a protected group (including themselves).
You are right, but off-topic in this specific context. Indeed, it was neither retaliation, nor exclusion. She offered her employer a "clean" opportunity to let her go, and they accepted it.
It will be difficult for her lawyers to prove that she was let go as retaliation, which it was not. The only thing that is apparent is that her employer did not value her work as much as she thought, so her demands and her threat to resign (and the tarnishing of the Google brand from the Twitter threads she would write about it) actually turned into a "clean" opportunity to let her go.
That is what happens when you are supposed to negotiate but go the confrontational way instead. You have to estimate your leverage accurately before launching a full-blown war; otherwise it turns into a blitzkrieg.
There is "legally liable for retaliation" and "morally liable". I think you are right, they aren't legally liable. I personally try not to be morally liable for an injustice, even if my actions are legally defensible, and I expect the same from the people I work with. I think many people at Google feel similarly, which is why the question of legal liability will not settle the matter internally.
But yeah, another lesson from this: never offer to quit.
> The worst part is that there was a path for you to be professional and achieve your goals, but then I guess you don't get to bask in Twitter's righteous indignation that way.
IMHO we don’t really know if that part (achieving your goals) is true, or not true.
I don't think "fired" is the right choice of words -- she gave an ultimatum offering to resign, and they accepted her resignation. Is accepting a resignation the same as firing someone?
> Thanks for making your conditions clear. We cannot agree to #1 and #2 as you are requesting. We respect your decision to leave Google as a result, and we are accepting your resignation.
I recently re-read Noah Smith's essay "Leaders Who Act Like Outsiders Invite Trouble" https://www.bloomberg.com/opinion/articles/2020-03-03/leader.... It's not directly on point, but one concept is: "This extraordinary trend of rank-and-file members challenging the leaders of their organizations goes beyond simple populism. There may be no word for this trend in the English language. But there is one in Japanese: gekokujo." And later, "The real danger of gekokujo, however, comes from the establishment’s response to the threat. Eventually, party bosses, executives and other powerful figures may get tired of being pushed around."
Institutions are being pushed by the Twitter mob, and by the Twitter mob mentality, even when the person is formally within the institution. And I think we're learning, or going to have to re-learn, things like "Why did companies traditionally encourage people to leave politics and religion at the door?" and "What's the acceptable level of discourse within the institution, before you're not a part of it any more?"
I've seen gekokujo in many places as a manager. There's often a desire among employees to require the world to adjust to how they think it should work instead of following process. This isn't always bad, of course, because process isn't always perfect. However, their success or failure tends to correlate with their perceived value. Simply put, you have to be a diva in order to act like a diva. I think the last puppeteer for Kermit is a great example[1].
No idea if that's happened here, but as a manager I know what I'd do if an employee -- any employee -- told me to meet their demands or they would quit.
Gekokujō doesn't refer to Japanese corporate culture at all, but rather to the attempted coups and rogue actions by junior officers in the Japanese military of the 1920s and 1930s (for example, conquering Manchuria against the orders of senior officers).
More like coup d'état, or an underdog victory. Author doesn't seem to be using it right, since this would be an actual shift in power, rather than a failed insurgency (as in OP).
> And I think we're learning, or going to have to re-learn, things like "Why did companies traditionally encourage people to leave politics and religion at the door?" and "What's the acceptable level of discourse within the institution, before you're not a part of it any more?"
And yet, decades ago, when labour had real power and unions were a force in the economy, we had no problems with labour driving change within organizations.
I think what we're seeing now is what happens when individuals are disenfranchised from the political process and labour has no way to organize to drive broader change: individuals, frustrated with their lack of power, speak out and there is retaliation. And where there is a lack of organization, this anger has no focus and what you see is a chaotic lashing out.
I think the same is true of populism in general: when you render individuals powerless you prime them to be welcoming to the populist who promises to fight for them.
Is that an excuse for either Gebru or Google's behaviour, here, specifically? I have no idea. None of us have enough of the details to really understand what happened.
But the idea that labour shouldn't challenge leadership, or that the public in general shouldn't challenge the establishment, is a product of decades of union busting and concentration of power in the political class that we shouldn't accept just because it's become the status quo.
Maybe I'm missing something here, but isn't individuality a good thing, in general?
I'm not applying this to Gebru vs Google, as in this specific case, we don't have enough evidence to know what exactly happened.
While I don't like the Twitter mob per se, the idea that a person within an institution cannot publicly criticize the institution is ridiculous. Rank-and-file members should be able to challenge the leaders. The employers can already fire employees for almost any arbitrary reason. Don't like the way a subordinate talks about you? Fire the person.
Colleges promote (or should promote) independent thinking, and independent thinking means employees will sometimes disagree with their employer. Is that good for the efficiency of the company? Hard to say. But to say that society must cultivate a culture of obeisance so that people can work for companies better is upside-down. Companies should be built to favor independent thinkers.
I agree with your point that companies should provide more room for individualism, but there have to be some kind of boundaries and recognition that you are part of a team.
It seems like more and more that emboldened employees don't think there is any limit to what they should be able to say/do in a company context, which is a huge problem. If you want individualism to that degree, you need to be in business for yourself. If you take a job at a company, you're implicitly conceding some of that individualism.
> However, we believe the end of your employment should happen faster than your email reflects because certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager.
> As a result, we are accepting your resignation immediately, effective today. We will send your final paycheck to your address in Workday. When you return from your vacation, PeopleOps will reach out to you to coordinate the return of Google devices and assets.
In this third part, they make "the resignation" the same as firing the person in practice. There is no way to plan a coordinated rampdown, transition, and handover of projects and responsibilities. Just an unconditional goodbye: don't come to work again.
No, this is more like giving two weeks' notice and then being asked to leave the building immediately. It's not pleasant but it also isn't the same as being fired. Google called her bluff here.
She said, "If you can't meet these conditions, I will resign effective <XY date>." Google said, "We can't meet these conditions, we accept your resignation and move it to <much earlier date>."
People will have to find the source document, which to the best of my knowledge is not public (note, not the message to Women and Allies), read it, and then reach their own conclusions.
I stand by my characterization. She characterizes it herself thusly:
"I said here are the conditions. If you can meet them great I’ll take my name off this paper, if not then I can work on a last date."
I think this is a major source of confusion in this whole conversation. There is clearly a follow-up conversation between Timnit and some manager(s) (the name Megan is mentioned) in which this ultimatum was proffered.
Without the specific wording of the original email, I don't think that you can draw this conclusion.
I'll note also that this is rather unusual practice for Google. There are a number of people, far more critical of Google, who have resigned, and who have been able to rampdown for 2+ weeks.
The posted email specifically says something about behavior not expected of her position. So it seems possible that if you are critical but fulfill your obligations, they would accept your 2 weeks' notice.
Is there an example you have, even self-reported, of a person who was merely critical of Google and was not given a rampdown? Or someone who was critical and did not fulfill their job obligations, and was given a rampdown?
The email clearly states: "certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager"
> No, which is my point.
No, that is the opposite of your point. You said that people critical of Google have been given a rampdown, so it is unusual for her to not be given a rampdown. My point was that there may be other reasons for her not being given a rampdown - that being exactly what was in the email shared about her not behaving as a manager. If you think that she was fired solely for being critical and not given a rampdown, then there should be other examples of people being fired without a rampdown for merely being critical.
Yes, and having read the email, I will reiterate my assertion that it is "certainly not obvious" what behavior is being characterized as "inconsistent with the expectations of a Google manager". Repeating the vague justification message is not a response to criticism of the vague justification message. The email she was fired for (it's quoted in this thread) does not contain any content that I find to be fireable.
> No, that is the opposite of your point.
No, it's exactly my point. The vague justification is not borne out by any evidence.
I think you're misunderstanding the purpose of the email from google management. Presumably it wasn't to lay out a case so that the email could be leaked and twitter could understand their rationale. It was just telling her the reason they are not letting her take her last 2 weeks. She still gets paid, she just can't come into the office. I don't understand why you think that email should be convincing evidence to you, an outsider who has absolutely zero to do with the employment contract between Timnit and Google. Surely, Timnit and relevant parties at Google have more context than you or I do.
> The vague justification is not borne out by any evidence.
I think this isn't nearly so bad as forcing an early termination of 2 weeks' notice.
I had a job pretty recently as a senior tech where I was eager to please, which ended up meaning I took the hard/after-hours calls no one else wanted to handle. It also meant I eventually became the only on-call tech for over 4 months straight. This in turn meant I knew the most about all of our accounts and their systems, handled scheduling for jobs as well as subcontractors, and did it all, right down to single-handedly installing the CCTV systems at our new location.
When I decided I'd had enough of being taken advantage of and being lied to, I gave my two weeks notice.
They assumed I was going to work for a competitor (I wasn't; I was taking some much-needed personal time), and they fired me on the spot. While walking me out, they also informed me that since they had my resignation letter, if I tried to claim unemployment they would use it to claim I had resigned. I didn't care, as I wasn't in need of unemployment; I was just even happier about leaving. They were rude, talked down to me, and belittled me all the way out.
Two hours after I left, the calls began: 'how do we', 'how does this', 'this job's permits', 'how do we log into'. I politely told them to fuck off. They even went so far as to have other workers call/text me asking the same questions verbatim.
I will never again give 2 weeks' notice unless I have it in writing that they must give me 2 weeks' notice upon release for any reason other than something illegal.
This person offered an ultimatum and their employer made a choice. I just don't think it's as bad as early termination of 2 weeks' notice.
The usual practice is that you give 2 weeks' notice, you get escorted out immediately, and you get paid for those 2 weeks. It's technically not early termination, as you're still getting paid.
There's no reason to be rude, talk down, belittle, however.
I've read this a lot, but in the four jobs I left I was never escorted out the door when I gave my notice. Maybe it depends on the role, but I've always stayed out my 2-4 week notice period and spent the time wrapping up my tasks, writing documentation, doing exit interviews, etc.
Did you get paid for those 2 weeks? If not then I would say you got fired.
After those 2 weeks, if the employer still wants help/answers then they should expect to pay a very very expensive hourly rate. :D
As an employer, it makes a lot of sense to accelerate someone who has given notice out the door. That said, you still want them on the hook to answer questions, so you should send them home but pay them for those 2 weeks (or however long).
I tend to agree, but the following paragraphs suggest that had she not resigned she might have found herself on the wrong end of some sort of disciplinary process:
> However, we believe the end of your employment should happen faster than your email reflects because certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager.
> As a result, we are accepting your resignation immediately, effective today. We will send your final paycheck to your address in Workday. When you return from your vacation, PeopleOps will reach out to you to coordinate the return of Google devices and assets.
Her resignation certainly gave Google an easy out of all that hassle, but would I call this a firing? Perhaps not.
If you're the employer you have to figure that if somebody's pissed off enough to (i) sue you whilst employed by you, (ii) write that kind of message to others working in their function or group, and (iii) now they've delivered an ultimatum you're unable or unwilling to meet, they're only going to cause more trouble if you make them work their notice period.
If it were me in her boss's position I'd have got her out the door as fast as possible.
One of them is in the updated article with Jeff Dean's response:
> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback.
I'm pretty sure if you resign you're still entitled to access the employer's health plan via COBRA? (Distinct issue from whether, as part of a severance package, the company agrees to pay for some/all of it.)
Accepting a resignation means Google doesn't have to pay unemployment benefits. But ultimately, it was their choice to let her go. And the immediacy of the decision and the revocation of all access smells like how you treat an employee you're firing.
Having conditions is often part of a negotiation. Maybe she offered to resign in that email, but Timnit Gebru does not say she did in the linked Twitter thread as far as I can tell.
Unless she formally offered to resign, it seems like her management jumped the gun.
I read that; it's not clear without seeing the email whether it was a formal offer to resign.
If an employee tells their boss they plan to offer a resignation on day X if they do not get Y, or that they are considering it, the boss can fire the employee before then; but I assume the boss cannot move around the day on which the employee offered to resign.
An employer not wanting to keep an employee who might soon resign is understandable of course.
I am sure it can get fiddly on what counts as a formal offer to resign and that courts make calls one way or another based on ambiguous language.
Without the email, though, it's not clear to me. For better or worse, it comes across as if Timnit Gebru's bosses were eager to shoo her out the door.
This isn’t having a beer with your buddies and spitballing plans. If you email your employer saying give me this or I resign, the employer can accept your resignation simply by saying no to your conditions. It’s grade school logic. They called her bluff and that’s why she’s salty about it. She signaled her intention to resign and due to her behavior with her reports, Google doesn’t want more of that on her way out so she got cut loose immediately.
It's possible to not have the intent and still word something as a clear-cut formal offer to resign, however. Without the email it is not possible to tell where this falls.
Email from Google here[0] and corroboration from Jeff Dean here[1]:
"Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google."
> Which is exactly why she has refrained from releasing it throughout this whole drama: plausible deniability.
I do not think there is evidence for that. It is normal to remove employee access to email if they are leaving so it is unlikely Timnit would have access to the email to share.
The employer can't modify the specifics of the resignation by "moving it up", since that's not what the conditional resignation offered. They can fire her, sure, but they can't call it "resignation". You can't "resign" someone, but you can ask for their immediate resignation. If it's not voluntary, it's firing.
> The employer can't modify the specifics of the resignation
You say this, yet I have seen it done many times.
Of course not being a lawyer of any sort, I don't know. Is there a reference for your claim?
Jeff's email is in line with what I have seen done elsewhere, and given the high profile of the incident, I'd be shocked if legal hadn't been consulted on the wording.
Wife says to her programmer husband, "Go to the store and buy a loaf of bread. If they have eggs, buy a dozen."
Husband returns with 12 loaves of bread.
Most software engineers can read the above joke, and laugh at themselves, then go ahead and upvote your comment not realizing that it is EXACTLY THE SAME THING.
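To spell out the parallel in code form, here's a minimal sketch in Python (with a toy Store class of my own invention, not anything from a real API):

    # The wife intends "a dozen" to apply to the eggs; the literal
    # reading binds the quantity to the only item ever named: bread.
    class Store:
        def __init__(self, stock):
            self.stock = stock

        def has(self, item):
            return item in self.stock

        def buy(self, item, quantity):
            return [item] * quantity

    def husband_shops(store):
        quantity = 1
        if store.has("eggs"):
            quantity = 12  # condition met, so the count is overridden...
        return store.buy("bread", quantity)  # ...and applied to the bread

    print(len(husband_shops(Store({"bread", "eggs"}))))  # prints 12

The claim here is that "if you can't address this, I can't work here anymore" got the same treatment: the condition was evaluated, and the literal consequence was executed.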
Employee: "This is extremely important to me. If you can't address what is important to me, then I can't work here anymore"
Google: "OK, thanks for your resignation".
This is not okay. Google is "technically correct" - but it is appalling behavior that should make every Googler shake with discomfort.
Employees are people. People can get dissatisfied. Frustrated. Angry. Sure, you can say companies don't owe anything to anyone because there is at-will employment.
But the reality is unless we want to live in a dystopia, people should be allowed to voice concerns and anger and frustrations, and not be fired for making things uncomfortable or inconvenient.
The joke is a pretty good one, but what's the supposed parallel with the scenario at hand? There is a meaningful interpretation of "buy a dozen" that the wife intended to convey but the programmer fails to notice (because he lacks common-sense knowledge about typical purchase volumes of different items and/or is blinded by the parallelism of "buy a... if <condition>, buy a..."). What is the meaningful common-sense interpretation of "I can't work here anymore" (was that even the literal wording?) that Timnit intended and Google failed to recognise?
If you are instead just saying that Google should have let it slide because it was just an emotional outburst ("anger and frustrations") and not meant to be taken seriously, well, in what context can we hold adults to their word at all? Email is written communication, not an offhand verbal remark uttered in a tense meeting in anger. You should be able to see what you are about to send while typing it, and have ample opportunity to look over it again before you press that "Send" button. If your ability to review your words has atrophied, perhaps under the influence of social media which encourages "unfiltered" stream-of-consciousness venting, then this ought to be on you, simply because it is doubtful that society can function without some possibility of making binding statements, and in work-from-home times email is the most binding medium we have that is widely deployed (and operable at the pace of the modern workplace).
> What is the meaningful common-sense interpretation of "I can't work here anymore" (was that even the literal wording?) that Timnit intended and Google failed to recognise?
That something completely UNACCEPTABLE is happening, and needs to be addressed.
Unacceptable to her. Fine, she is entitled to that opinion. She is not entitled to everybody sharing that opinion. If there is no common ground, the involved parties should part ways. Which she expressed. And the other side followed through. Which she now is upset about.
If she really wanted to achieve change within the system, she'd have needed to not express it this way. There are people on both sides of the table. One can talk to people and find common ground. But that rarely works if you start with a threat.
Sorry, one can't go make a black-or-white threat and then expect people to not take it at face value. She literally wrote "if you don't do this-and-that then I can't work here anymore". If people then act on that then she can't go complain that they did. It's not a meaningful defense to claim well employees are just people. She should not say such a thing if she does not mean it. Words and actions have consequences.
Even pull the victim card, blast it all over social media. Sorry, that's a major disservice to everybody _actually_ being dehumanized. She's just dissatisfied with her employer and wants to make the biggest possible splash now.
You can't post like this to HN, regardless of how someone else is or you feel they are. Personal attacks are particularly not allowed and you've posted more than one of them. Beyond that, your comments have been almost all flamebait and/or unsubstantive and/or nasty, so I'm going to ban this account. If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. Here they are: https://news.ycombinator.com/newsguidelines.html. Note that we want thoughtful, curious conversation here, and when the topic happens to be a divisive one, comments need to become more thoughtful, not less.
The original article has been updated with Jeff Dean's response, while a member of Gebru's team (who still works at Google Brain) states that Dean's email is misleading and inaccurate: https://twitter.com/alexhanna/status/1334579764573691904
As I understand it: The paper had already been submitted to a conference, and the "cross functional team" Dean speaks of basically reviewed it like peer reviewers and said you have to retract it. It sounds like Gebru and her co-authors were not given an opportunity to respond to the criticisms to stave off retraction. If you've experienced peer review, you know that such reviews are by no means a definitive analysis of a paper, and one usually gets the opportunity to respond. Here, it sounds like anonymous internal peer review was used as a hammer to retract it. Dean's descriptions of the criticisms sounds rather pedestrian to me, as a researcher, and I'm sure Gebru and her co-authors would push back on these characterizations.
Honestly, it sounds like the paper was attacking some sacred cows internally. And Gebru's work probably threatened other researchers on other teams. Not unlike traditional peer review, mind you, but at least one can usually post it on arXiv without worrying about this.
> Timnit co-authored a paper with four fellow Googlers as well as some external collaborators that needed to go through our review process (as is the case with all externally submitted papers). We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.
> A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues. We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.
> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
> revealing the identities of every person who Megan and I had spoken to
Big red flag. Why are the identities of reviewers important here? Did she plan to take those reviewers to the court of public opinion for a trial/exposure?
A lesson delivered just in time, IMO. Those entitled people need to be called out.
I wouldn't necessarily put them on equal footing. One likely had the editorial review of many eyes, while the other was a more personal, empathetic message.
This. Failure to cite relevant research is a reason to reject a paper from a conference but that's not the job of an internal review. Research independence is fundamental and why professors are given tenure. Sure companies are companies and don't play by those rules, but you can't claim to be making a fair academic criticism if you're not adhering to academic norms.
If the paper was bad it should have been rejected by the conference to which it was submitted. If the paper was critical of things Google does and it didn't like that, that's pure suppression. And, as you say, academic norms provide a period of rebuttal, it's unclear how much was allowed here.
Secondly there is value of having anonymous review in an academic setting, however it sounds like they tried to enforce that anonymous review for the company, which again doesn't serve the same purpose. People should be told exactly why decisions are being made so they can correct their mistakes in the future. If you aren't willing to invite someone into a decision making process, when a large portion of their research is in biases in institutional decision making, then you have got to know you're in for a bad time.
I'm not very familiar with the norms of corporate research. Is it normal for corporations to give their researchers tenure-like freedom to publish anything they want? I assume that top researchers (who have their pick of institutions) will choose organizations where they can be relatively free. But what do you mean by "fair academic criticism"?
If a company publishes only papers that clearly advance its interests, but the research is sound and independently reproducible, does that bias really lessen the perceived value and quality of the research?
Fair academic criticism: We shouldn't publish this paper because it has flaws X,Y,Z. (usually followed by an opportunity for the researchers to address X,Y,Z).
Unfair and/or non-academic criticisms: We shouldn't publish this paper because: I don't like the author/authors institution. I don't like this approach, approach X is better. I don't like the conclusions.
The last is potentially relevant here. And yes, clearly the quality and value of research is reduced by filtering based on the results you want to promote. That may well be in a company's best interest, but there are sometimes difficult lines to draw: when exactly do you move from "promoting the company's interests" to "propaganda intended to mislead the public"?
That makes sense, but I don't think of corporate approval committees as being designed to render "fair academic criticism". For example, they might say "the core thesis of this paper reveals our valuable trade secrets and therefore you are forbidden from publishing it externally." That's not fair academic criticism, but it could be a reasonable decision for the committee to make.
Right, as I was trying to convey in the last - there are other corporate interests than research value, and I think all would agree there are situations where they reasonably trump the researchers or public's interest. And vice-versa; "This research shows we have been poisoning the public for 40 years" for most people probably really isn't the same as "This research exposes valuable trade secrets". The lines aren't always obvious.
To me, Jeff Dean's email sounds like this paper failed to cite some research and as a result made Google look less good than Google actually is. That sounds like a somewhat minor academic mistake, but at the same time a major PR mistake.
I think the logic is that the paper would need to be retracted, rectified, and then resubmitted. Since the submission deadline was already passed, resubmission would be impossible.
The submission deadline is typically for evaluation, not the camera-ready version. It's not unusual for textual changes to make their way in after that point, either in response to reviewers or to improve the text in a way that doesn't fundamentally change the results.
>> Failure to cite relevant research is a reason to reject a paper from a conference but that's not the job of an internal review.
The done thing is generally for the reviewers to suggest additional references to improve the paper. If any mention of related work is entirely missing, then that's another matter, but a reviewer thinking that the authors should cite a particular piece of work (e.g. the reviewer's own paper) is not normally something that could lead to outright rejection.
Look, on paper I agree with you: they can't claim 'fair academic criticism' and censor papers without explanation. But in the real world, Google has an outsized contribution to AI. They are also a company, and need to pay for all the expenses. There are practical, day-to-day concerns that are essential for their existence. If they can't secure their finances, the research program stops. So you can't really be absolutely impartial, just 99%; from time to time you make an exception to impartiality. It's like the speed of light: you can't reach it.
Her exit from the company was not due to her research. It was for her behavior and her attitude. The paper was just the catalyst that kicked everything else off.
Why did they not fight to keep her, if she was so singularly brilliant? IMO, it's because she wasn't an employee they wanted to keep around.
People don't get fired for asking for opportunities to respond to criticisms. My guess is there were previous incidences regarding her behavior and, in this situation, she threatened to quit and they accepted.
What I don't understand is why nobody discusses this from the point of view of academic freedom and research integrity. My boss says "retract your paper" and I retract my paper? In what dystopian sci-fi novel? That's just mad.
I've published academic research while working at Google. If I got an emergency request to retract a published paper (note that I would have already followed all the necessary steps for publication approval internally), I'd retract it immediately, if there was a legitimate reason (you can see on retractionwatch the sorts of things that lead to retractions). If there wasn't, I'd ask "can we pause the retraction and figure out a better solution based on data", and if the answer was "no", I'd retract it immediately, but follow up with a vigorous protest.
Industrial researchers and non-PIs (i.e., a research analyst working in a PI's lab) don't have the same freedom and integrity rights as a PI. To me this seems like a reasonable (and probably the only reasonable) policy for industry.
I don't really think it's dystopian - if a paper I wrote reached the point of needing a retraction, it means some serious mess-up occurred or there's a major PR fiasco.
Note that in this case I am not sure if there was a retraction or a de-submission; the paper hadn't been published yet, so it's not really a candidate for retraction. The term for this is 'withdrawing a submission'.
There's no academic freedom expected - that's private R&D work; the research and the paper are done on behalf of the company and wholly owned by it, not by the employees who did it, and so your boss naturally has full control over when, how, where, and if that research is published.
It's just as if you'd want to open source some code that you wrote for your job, and suddenly the boss says "hey, don't do that" - there are many reasons why they might want to not publish it, and it's not your code to publish as you want.
Because this isn't academia, despite the trappings. If I'm working for Reynolds Tobacco Research and wish to publish a paper on tobacco causation of cancer, they're going to stop my publication under their auspices.
Censorship is a double edged sword that way. Over the last few years, many acclaimed journalists have been censored for not falling in line with the woke agenda. Threatening livelihoods is the prime tool of extortion by cancel culturists. (on either side)
To some extent, you have to realize that one day you will fall on the wrong side of the fence and you'll be the victim of your own creation.
> So if you would like to change things, I suggest focusing on leadership accountability and thinking through what types of pressures can also be applied from the outside. For instance, I believe that the Congressional Black Caucus is the entity that started forcing tech companies to report their diversity numbers.
This probably shouldn't need pointing out, but appealing to your colleagues to put pressure on your own management via the U.S. Congress seems pretty obviously like a good way to get yourself fired.
If you're going to try to exert political pressure from the outside, you're going to wind up on the outside. I'm no big fan of Google's management, but I find it hard to fault them for this one.
In plenty of countries it would be illegal to fire someone for doing that. It’s clear that the results of Timnit’s actions were hardly unpredictable given the state of USA corporate<>labour relations, but one could still fault management.
They didn't fire her. She said that unless they met certain conditions, she was quitting. They didn't meet the conditions and accepted her resignation.
Title here is misleading (probably deliberate). The email that got this person “fired” was one that allegedly contained an ultimatum and a threat to resign. The employer then called her bluff and accepted her resignation with immediate effect.
A manager with every reason to cover his ass is claiming that.
Her email, sent before that claim was made, suggests that wasn't even remotely the case - and it's entirely possible to think "she seems like an ass" and "this smells like bullshit" are both true.
I suspect reality is somewhere between our two accounts, where one describes this as a purely deadline-related problem and the other describes months of effort to seek feedback via various channels and moving goalposts for what types of review were expected.
Their paper was accepted by the conference's peer review system. Google, not researchers in the community, wanted it retracted. The only logical explanation is that it was good science that did not put Google in a positive light.
I don't think the paper has been accepted, because the conference itself has not made decisions on the papers. This article [1] says that the paper was submitted to "ACM Conference on Fairness, Accountability, and Transparency", which sends paper decisions (accept or reject) next week [2]. Of course, it might be accepted at that time, but that doesn't seem to be decided yet.
What's your experience with corporate internal processes been? Mine has been that they often exhibit an extreme disinterest in the judgment of external parties that are not part of the internal process. Whether or not the external group accepts the paper was likely irrelevant to what would be internally seen as a breach of process.
Generally "It worked out, everything's fine" is not a line that flies.
Very hard to take that at face value. This is the email that management took as somehow being beyond what an employee at Google can do. The title is quite correct IMHO.
If you accuse unnamed co-workers of not only ignoring your expertise and of micro- and macro-aggressions, but also of dehumanizing you, yeah, someone is going to have an uncomfortable work environment.
> contained an ultimatum and a threat to resign. The employer then called her bluff and accepted her resignation
A threat to resign isn't the same thing as resigning. And it seems you don't think so, either, since you characterize her threat as a bluff. It can't be both a bluff and a resignation.
The original post was updated with more information.
Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
I would say that describes a conditional resignation, and the fact that she called this “being fired” indicates she didn’t really plan to resign (ie. she was bluffing).
It’s a strange reaction though. “I’m resigning.” “OK, that’s your prerogative, bye.” “Wait! I didn’t mean NOW! How can you fire me like this?!” Then make a big deal about being fired on social media. Seems to me that someone who genuinely was ready to resign would have just quietly walked away, even if they imagined it ending less abruptly. Can’t know for sure, obviously.
Y'all are alleging that it was phrased as an ultimatum. I'm not so sure that's the case. Proposing an option of resigning is not the same thing as "I will definitely resign".
"Chooses not to" is so disingenuous it's hard to take you seriously.
She cannot and will not expose that information without inviting all sorts of litigation. This is not a case where she can just "choose" to release that information.
This person forced Yann LeCun (one of the top 3 AI/ML researchers today, head of Facebook AI, and an ML researcher since the 80s) to give up on Twitter after she took exception to this tweet of his: https://twitter.com/ylecun/status/1274782757907030016
She lives her life attacking people who she disagrees with, and brings in race/gender into any conflict where she's involved.
I am surprised Google had hired her, because it was only a matter of time they'd become collateral damage in this roving tornado of hate and aggression.
From what I have seen today, she gave them an ultimatum: do this, this and that or else I will leave. They decided to not do those things she demanded, and then proceeded to fire her before she could do any more internal damage.
So I don't know enough to form a well-informed opinion; however, it seems part of the grievance on her part is that she was not given the names of those who peer-reviewed her paper.
If internally she has a history of bringing race or sex into every discussion I could see why they'd rather be anonymous. Even being accused of being racist can ruin someone's career in this current climate.
You're commenting on an article where a person was fired due to being a racial/gender justice advocate, and you're worried about "Even being accused of being racist can ruin someone's career in this current climate" ???
While they were a jerk, it actually was far more specific than that. According to Jeff's response, first they bypassed a 2-week review process:
> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.
Then, when called out on it, they made "an ultimatum" and said they would leave if certain demands weren't met. I'm not sure what every demand was, but one of them was indeed disclosing the name of the reviewers.
> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date.
Jeff declined to meet the conditions and let them go. Seems pretty straightforward to me.
Ex-ethical AI researcher; we don't know what her next job will be, and she certainly lost an opportunity to lead by losing her place at Google. We have yet to see whether this maneuver will cost her social capital or add to her reputation. Also, do you seriously believe being an ethics researcher automatically excludes one from misconduct?
From what we know so far, she seems to have overplayed her hand: she assumed she should be treated as above the rules, and when that failed, wanted to take names and encouraged mutiny, in a company where she is a junior manager. Not only are there actual legal complications for a manager doing that, it also sounds narcissistic and toxic.
Her goals might be laudable, but she executed poorly and emotionally, and lost an immediate chance to deliver ongoing impact for the ethics causes she cared about. I hope aspiring activist-ethicist-researcher-techies take note of this. Tantrums don't deliver long-term impact. Narcissism clouds your judgement. Impassioned supporters mislead you. You have great power, if you wield it responsibly and pay attention not to overestimate it.
While there is some overlap, generally speaking Identity Politics != Ethics. She's an Identity Politics researcher, narrowly focused on race and gender topics. Not some philosopher or religious figure interested in the flourishing of humanity at large. Here is an abstract from a larger work, presumably her PhD thesis: https://arxiv.org/pdf/1908.06165.pdf. Critical gender and race theory through and through.
Amen. Her paper cites dated problems in AI models. These problems have been known for at least 4-5 years, and the responsible-AI community has been working on them. No progress is mentioned. The paper reads like an editorial rather than actual AI research.
Well written, well sourced, and very narrow for someone who fashions themselves an 'ethicist'. There are more things in life beyond race and gender as seen through the prism of critical race/gender theory. I heard somewhere of this strange word 'love', something to look into. She's young, smart, capable and possibly well meaning in spite of her unbecoming behavior. Perhaps one day she'll grow to see the struggle common to all born of a woman.
They wouldn't even share with her the feedback on her paper to give her the opportunity to fix it. They didn't have to say who the feedback was from, but they could've at least told her the substance of the feedback.
"Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback."
I can imagine wanting to know the feedback, but she didn't ask for that; she asked for the identity of every person they consulted.
It could also be that disclosing the feedback would've identified the individual.
Timnit claims that she was told at a meeting called on short notice that her paper was being retracted because of anonymous feedback. She said she asked about the substance of the feedback and they refused to tell her.
After that meeting, she was told that a private document containing some, but not all, of the anonymous feedback could be read to her.
That led to Gebru sending a frustrated email to her colleagues and a separate email with conditions that she was willing to resign over. In that email it seems she asked to know the identities of the people who gave the feedback that led to the demand for retraction.
Jeff Dean says that Timnit wanted the identity of the people who gave the feedback as a condition for staying. Then he describes what that feedback was. He doesn't acknowledge that Timnit was not given the substance of the feedback in the first meeting when she asked for it. And it sounds like the full feedback wasn't given later during that private reading.
The feedback Dean cites that led to the retraction doesn't at all sound like something that required the protection of identities, and certainly nothing that required keeping that feedback secret. If the reviewers felt like the paper left out some relevant research, why wasn't that communicated fully and clearly in the first meeting?
Jeff Dean's email seems to leave out a lot of context about everything leading up to Timnit threatening to resign. At the very least, it seems the reasons for the surprise retraction order weren't revealed until some time after that first meeting.
Per Dean, Timnit appears to have precipitated this by violating the internal process requiring a 2-week internal review period and approval before external submission of a paper. (I have no idea how regularly this process is enforced, but I think it's standard in the industry). And then -- reading between the lines -- reacted poorly to the lack of approval. Hence Dean and Megan made the decision to force a retraction.
Then Timnit demanded the names and feedback of the reviewers. Again, as an outsider, I have no idea if the provision of this is customary. But not just the reviewers; also everyone that Dean and Megan had spoken with.
I do wonder how much of this was the irregular process around the paper, her reaction to criticism of the paper, or her public criticism of DEI efforts, particularly as a manager. Or all of the above.
Calling out your leadership chain, both for DEI metrics and for behavior about the paper -- while also disclosing that you are seriously considering suing Google -- seems unwise if you don't wish to be terminated.
> The feedback Dean cites that led to the retraction doesn't at all sound like something that required the protection of identities, and certainly nothing that required keeping that feedback secret.
Perhaps the reviewers were afraid of being dragged on Twitter and possibly having their careers destroyed as a result.
"A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues."
Is it true that this information was not shared with her?
Because she wanted to attack them! That's how she got to where she is: by attacking and silencing anyone who disagrees with her, and then turning around and classifying their opposition as "sexism/racism"; and who wants to be labeled a sexist/racist these days? People would much rather just keep quiet.
Reminds me of the drama with Coraline Ada Ehmke and the Contributor Covenant a few years back that forced Linus Torvalds to take a break. These people are drama seeking instigators that demand that people do as they say, and companies keep giving in to them. It's a kind of rent seeking and power grabbing.
I do not know the details of this case and I can't speak to the subject's character. However I can't help but note that people who study the effects of race and gender tend to bring race/gender into anything and everything, because well, that's their job. If your response is that race and gender should be brought into only the areas where race and gender are relevant, I'd respond that the prevailing opinion among gender and race scholars is that gender/race is relevant everywhere.
I've thought about that filter through which we view the world a lot recently, and I think you may have identified part of where the conflict comes from. People viewing the world through fundamentally different filters.
The problem is that if you focus on race, gender and/or sexual orientation as an important factor underlying interactions, and you're not careful, you can soon come to see race and gender as the primary driving force dictating the outcomes of all interactions, and soon all issues get reduced to a matter of racial, sexual, or class struggle.
The problem is the real world doesn't work that way; it is a complicated, interacting network of events that are often only second- or third-order effects of completely different decisions made long ago.
To try to reduce all interactions and societal problems to a matter of one or two factors is like trying to make money on the stock market solely by looking at the Fed's monetary policy. Sure, the Fed's monetary policy is an important overall factor, and in some cases the most important factor in certain changes in the stock market, but that doesn't mean there aren't a thousand other moving parts to focus on.
The problem is when we interpret every event through a single filter, or give one filter primacy in all interpretations of events: that filter now colors everything we see.
So for example if my primary filter for understanding the world is race relations and/or critical race theory, any time someone disagrees with me I really do feel as though it is a racially motivated attack. Why? Because I have chosen to interpret everything in my world through the filter of race, and every event that occurs will in my mind be race related.
Note this applies just as well to any filter: Christian extremists will interpret all events that happen as a sign of the times and the devil coming to power. If my filter is the class struggle, then I will see every issue as a fight between the haves and the have-nots.
My point is not to discredit people who want to bring gender and race into everything; there are many discussions where examining the role these factors played is important. My point is to caution against getting into a single-filter mindset. That is the cause of much of the divisiveness we see: we have people who really are living in totally different worlds, because the way they choose to view the world is totally different, and they refuse to ever consider that there might be other filters to view the world through.
> The problem is when we interpret every event through a single filter, or give one filter primacy in all interpretations of events: that filter now colors everything we see.
Anyone noticing the irony that this seems to be pretty much what the paper's author is blaming the machine learning systems for?
There's a big difference between raising honest, thoughtful points about social justice issues (even forcefully), and using those same issues (superficially) as a cudgel to marginalize your enemies and increase your own social power (and note that those things don't have to be happening at the same time).
These days I see a lot of both. I also have no idea what happened in this specific case, and at this time I wouldn't be willing to take a side.
Please share why you think a person who was hired to address race/gender bias in AI, writing a paper about race/gender bias in AI, tends to "bring race/gender into anything and everything"?
Or let's cut to the chase, you saw the headline, and your immediate conclusion was this was a raging SJW. Why did you jump to that conclusion?
It is a big problem that in our current political climate people with those types of built in personal issues float to the top. Gender and race should only be brought up if there is evidence that someone is being discriminated against or harassed based on those traits. I see all kinds of articles immediately pulling race and gender into the equation and it’s dangerous and dishonest because it robs credibility from situations where there is truly discrimination/harassment based on this occurring.
It's incredibly hard to prove discrimination or harassment was based on ethnicity, sexual orientation, religion, or political views, even when it is.
You say "people with those types of built in personal issues float to the top" aren't you implying their positions are not earned based on merit and, therefore, that they are less competent than their peers?
I am implying that they often scare away their competition by attacking the character of the individuals who they work with, rather than focusing on true accomplishments that would give them legitimate merit for the positions they aspire to. Their supervisors can even be intimidated into promoting them if they have a history of filing complaints of discrimination when things don’t go their way.
It’s also possible that some promotions/hires are based on desired political appearance of the company. Promoting someone based on certain race and gender types ironically makes your company seem less racist and sexist to the casual outside observer.
I think there’s definitely a dynamic in any dogmatic arena (social justice is dogma heaven) where the most pathological people thrive.
In the high-dogma arena there are incredibly easy and visible signals that give you the ability to appear like "one of the good ones."
I think, coincidentally, this is highly attractive to elite circles too. Follow all the social justice rituals and shibboleths and you suddenly get the camouflage of an empathetic, good person.
> You say "people with those types of built in personal issues float to the top" aren't you implying their positions are not earned based on merit and, therefore, that they are less competent than their peers?
Many are, yes. I had to hire one because my director’s bonus was implicitly tied to a diversity quota OKR, and the VP’s bonus was explicitly tied to one, which he blogged about as a shameful virtue signaling PR exercise. They didn’t even interview her. Then we way over-leveled her to boot, because there was another stupid OKR to grow diversity among senior levels. And this was at a big tech company that you’ve definitely heard of.
This is happening at many companies who claim that they aren’t lowering the bar.
Truth be told I had already lowered the bar and wanted to pass her but still couldn’t justify it. I didn’t want to deal with the diversity police and had my fingers crossed that she would be the one.
Some perfectly capable white guy didn’t get an opportunity as a result.
In general, threatening to resign is never a good strategy and will never be in your favor unless you are a highly valued executive.
If you're anyone else in the company and think a problem could possibly work out, try to amicably work it out. If you believe it to be impossible to work out, it is much better to just resign than to threaten to resign, which will almost always result in firing.
If you resign, you have the upper hand, can still get 2 more weeks of pay, possibly a couple of good recommendations for your next job, and then go write your blog posts after that.
He's still posting on Twitter. Did so today, and yesterday.
Her response to that tweet is a bit harsh, but not completely unreasonable. I'm not a Twitter expert, so I can't really stitch together their conversation on Twitter to see where/when it goes off the rails.
Easy. Just call that person a racist, a bigot, or an oppressor without a shred of evidence.
> A person: I think bias in machine learning is caused by bias in training data.
> Me: Why are you ignoring all I've been saying? You're a racist. You're a white man who oppresses. I'm not going to talk to a bigot like you.
Repeat this a few times, the attacked will be punished both online and offline. Justice served.
I don't think that's a fair characterization of Gebru's argument there at all. A better summary might be "You're the Chief AI Scientist at a massively influential and profitable company with billions of users and exabytes of data, you can't just throw up your hands and say 'welp, that's what the model spits out so that's what we're stuck with.'"
It's more like: "There are difficulties in collecting data, and historical reasons, that lead to bias in the trained model."
The other side: "No, you are a racist and responsible for bias due to being a white male."
It's pretty clear that Gebru criticized LeCun for being apathetic about an obviously biased model, not because he's a white male. And again, "There are difficulties in collecting data, and historical reasons, that lead to bias in the trained model" is accurate, but it's still a total cop-out! There's no way Facebook would let results like [0] slide if there were money at stake - they'd get more training data, or even throw out training images of white faces, if they had to - so why is LeCun defending someone else's failure to do so?
More importantly, how should that make us feel about the very real possibility of LeCun serving as an expert advisor on, say, police or other military applications of AI? Would he again just throw up his hands and say, "Well the training data consists disproportionately of black faces, so the model disproportionately implicates black people, nothing we can do about it"?
"You are probably going to be a very successful computer person. But you're going to go through life thinking that girls don't like you because you're a nerd. And I want you to know, from the bottom of my heart, that that won't be true. It'll be because you're an asshole." ~ The Social Network
A good number of social activists are rude and abrasive and then blame how people react as racism/sexism. No one wants to be around or work with anyone who is a jerk. The quote above sorta reminds me of how this person is going about their life.
I read her email and all I could read was her privilege and her entitlement. I think she knows there is racism and sexism in play but thinks everything is about that. The feedback going to HR was probably because they didn't feel comfortable telling her directly, for fear of reprisals from her.
One unusual privilege that she has is the ability to describe her experience of racism and sexism and expect to be listened to - her e-mail here speaks mostly on this point.
Her e-mail identifies that you can speak on racism and sexism as much as you want, but all of your speech will be routed to /dev/null.
> Her e-mail identifies that you can speak on racism and sexism as much as you want, but all of your speech will be routed to /dev/null.
I think the fact that people need to go through HR to give feedback means the speech isn't going to /dev/null. People seem to take her speech very seriously and don't want to feel her wrath.
In fact, Google even appeared to take her concerns on board after she got a lawyer and merely threatened to sue. She just seems to expect more, which is the entitlement I was talking about.
I totally agree with you. Screaming racism/sexism/marginalisation every time someone disagrees with you is annoying and tiring. I am a second-generation immigrant in Norway and I always see this behaviour in people who blame society when they don't get their way, when in reality people rarely get their way all the time. I hope this level of stupid activism does not reach Norway, although we do copy many of the stupidities of the US :)
Worst of all, it pushes non-racists to become actual racists.
If you start attacking people who previously had no problem with you, grouping them as evil on the basis of their physical characteristics, you’re going to get various forms of backlash. One of those is going to be those people seeking support among others belonging to the evil group identity that you carved out for them, and they’re going to push back in kind.
> One of those is going to be those people seeking support among others belonging to the evil group identity that you carved out for them, and they’re going to push back in kind.
I don't think that happens very often outside of the villain origin stories in comic books. A normal non-racist person doesn't react to being called a racist by becoming the biggest racist they can just to spite people.
Oh, I don’t think it happens in the common case, no. But it absolutely happens at the margins, particularly among folks who are more predisposed to radicalization. I know that happens.
And racism isn’t binary. You don’t have to be pushed all the way to wearing KKK garb to have some level of contempt or prejudice towards other races. So a great way to avoid this outcome and minimize exacerbating existing tensions is to not engage in more divisive hostility on the basis of their identity. Like the guy she is berating likely didn’t even consciously realize he was white until she made a point of reminding him in the worst way possible. A dumb, counterproductive strategy.
But her behavior is going to make it harder for all other black people to get hired. People who are otherwise not concerned with race or sex at all will see this and then need to convince themselves that she is not representative of most black people. Some will undoubtedly just go with the option that seems like it carries the lowest risk to them personally and professionally.
Personally I would not hire anybody who has a history of trying to weaponize twitter. And being smeared as a racist, like she is doing to Jeff Dean, can be extremely damaging and severely career limiting. He’ll probably be ok, but it’s something that many people can never move past, especially if Vice or Vox or Salon or whatever culture war libel vehicle decides that they can twist your story into a hot hit piece.
How was she marginalized? Marginalized means keeping someone in a powerless or unimportant position in society, which she was not as she had one of the most important roles in AI and was a manager.
That's a little ungenerous. Social activists may go over the line and be rude or abrasive, but unfortunately it's impossible to be a polite social activist. Activism fundamentally requires telling people that their actions should be modified, that their beliefs may be flawed, that their systems have problems. It's really hard to do that and be a likeable person.
Of course social activists may at times cross the line and be unnecessarily rude or abrasive. But I'd have more sympathy because their fundamental job is to be subversive and critical of people.
For context, Daryl Davis is a black man who has gotten over a hundred leaders of the KKK, including former convicts, to leave the Klan. Not by arguing with them, not by reasoning, not by showing them the effects of their behaviour, but by sitting down, talking with them and becoming their friend.
In a nutshell this one man has probably done more to combat racism, than all the angry tweets on twitter combined.
Daryl Davis is certainly a wonderful inspiration in his patience, compassion and effectiveness in reaching racists. However I'm not sure about using him as a model for how activists should behave. What that's essentially requiring is that activists should patiently reach out until the people oppressing them learn enough to stop oppressing them.
In some cases this is possible. The Ku Klux Klan is made up of broken men who dance in costumes at night. What about the racists who do not do that? Who enact their racism as government policy? Or as housing discrimination? Or through their art?
How long will it take to reach them? To quote James Baldwin^[1]:
> You always told me it takes time. It's taken my father's time, my mother's time, my uncle's time, my brothers' and my sisters' time, my nieces' and my nephews' time. How much time do you want. For your progress.
"I am a musician, not a psychologist or sociologist. If I can do that, anyone can do that. Take the time to talk with your adversaries, you will both learn something."
Thanks for the link, that was an incredible talk, and that guy is cool as hell.
Ilhan Omar said something similar on Twitter in response to Obama's criticisms of the "defund the police" slogan. I think you're being overly generous to social activists. Their fundamental job is to bring social change. You can't change anything by simply alienating those who disagree with you. That creates political entrenchment.
The fact is that Obama has got way more done for America than Omar, so I'm inclined to think he's right.
I think we need to distinguish between social activist and politician. Social activists can certainly be politicians, such as Ilhan Omar, Alexandria Ocasio-Cortez, Bernie Sanders etc. But many are not. Obama is not a social activist. He's an effective politician, one who has certainly brought about change. But he's not a social activist.
Many social activists were famously unpopular. Martin Luther King died with a 66% disapproval rating^[1]. Susan B. Anthony was ridiculed and accused of trying to destroy the institution of marriage (sound familiar?).
Of course some activists veer too far. Jane Fonda is an example. I don't fault her opposition to the Vietnam War, but her infamous photo practically advocated for violence against American troops.
In a sense the aim of social activists is to be ahead of their time. MLK was reviled in his time and is treated as practically infallible today. We need both politicians and activists for progress. Politicians push forward the policy, but the activists push forward the politicians. Without activists, politicians can end up in holding patterns, afraid to lose popularity. Without politicians, activists are just screaming into a void.
Obama got his start as an actual activist, under the title of "community organizer", in Chicago. Even in his early years, he clearly understood that to get real results, you have to make compromises and avoid demonizing people on the other side of issues. To my eye, Obama's emotional intelligence, communication ability, and strategic aptitude are genius-level-off-the-charts-stupendous, whereas these firebrand activists you mention are operating at an elementary level.
> The fact is that Obama has got way more done for America than Omar, so I'm inclined to think he's right.
It's funny how the people citing Obama's view that the "defund the police" slogan loses people seem to ignore the other "error" he mentioned in the same interview: giving too small a platform within the Democratic Party to voices like AOC's (who, obviously, holds the opposite view from Obama on "defund the police"), voices that have proven quite effective at connecting with large constituencies with which the party establishment (implicitly, including Obama himself) has been ineffective.
I would be cautious in selectively referencing that Obama interview to suggest he knows better than "The Squad".
As much as I respect Fred Rogers and Carl Sagan, they were white men working for goals that weren't exactly controversial. If we're calling them social activists—I'm not opposed to that, I just didn't consider them as activists in my original comment—then yeah there's plenty of polite social activists. The social activists I was considering were along the lines of MLK, Susan B. Anthony, James Baldwin. People who were advocating for something fundamentally opposite to current societal values. With that definition I believe there's no way to be polite.
I guess if I had to refine my original claim: it's impossible to politely disagree with fundamental societal values.
>unfortunately it's impossible to be a polite social activist
Tell that to Gandhi, tell that to the many people in the Civil Rights Movement and Martin Luther King, tell that to the people who were in the Monday demonstrations in East Germany, the list goes on.
The Freedom Riders did something fundamentally abhorrent to southerners. They went into white establishments as black people (or accompanying black people). It's only because of changing norms that we don't see their actions as horrifically rude or even violent in nature.
Basically the worst of online discourse, but in this case one-sided. Yann is discussing in good faith and Timnit is not.
If this is the normal way she interacts with people she disagrees with it's no surprise they didn't want her to stay. The public tweeting about it doesn't inspire much confidence either.
> Whatever your position, Yann engages on substance, and Timnit is obnoxious:
For what it's worth, I and many others disagree with this characterization. I find Yann comes across as incredibly condescending and holier-than-thou in their interactions. Nor does Yann engage on substance. He refuses to take the time to engage with, or even acknowledge that he has read, the relevant academic literature (which Timnit repeatedly cites).
People can read the thread for themselves and decide.
I see a quote tweet from Timnit with "I'm sick of this framing...listen to us", then misquoting what he said. Ignoring his replies and then following up with "I'm disengaging for my sanity...not worth my time...Maybe your colleagues will try to educate you..." etc.
She attacks him (so he replies) and then she ignores him and talks down to him.
I don't have a dog in this fight, I'm just an outside person reading this - I suspect most people reading that thread would think Timnit's tweets are obnoxious. Imagine if the people were switched.
I wouldn't want to work with someone who argues that way when they disagree.
Not to get too heated or anything, but the framing of "People can read the thread for themselves and decide." concerns me. Saying "I think A", then responding to people who say "I think ~A" with "hey hey hey, let's let people decide themselves" is pretty stifling. I don't believe that was your intent, but that's one way it reads.
I appreciate that you've explained your reasoning for your position, though.
Yes, you've made your position clear. I'm simply asking that you acknowledge that not everyone agrees with you, instead of making sweeping statements that no matter your opinion, Timnit was "obnoxious". You have your view of the events, I'm not trying to convince you to change it. I'm simply saying that other people disagree. No need to continue to try and justify yourself.
I guess the way I think about it is this: Yann has a long history in ML. He's probably had to deal with bias problems for decades. Probably pretty experienced with it. Now he's heading up some Facebook ML stuff, and on a daily basis he watches hundreds to thousands of engineers work on systems that process and learn from billions of users. I feel like after you do that for a while, you gain enough wisdom and experience to deserve to be engaged with respect and thoughtfulness. She has repeatedly engaged with bad faith, misleading interpretations of intent, and is just sort of really "attacky". Sure, he's a bit condescending (I've seen the same thing and it annoyed me for a bit, then I read about what he's done and realized: he's got tons of experience and data about this and works with it at scale constantly).
My understanding is that this is not the first time they engaged on this type of topic, and Yann has a history of ignoring other people who brought up similar criticisms to him (at conferences, etc.)
At some point you lose the assumption of good faith, and deserve to be called out for refusing to learn.
For what it's worth, I'm well aware of who Yann is, and was at the time as well. That doesn't make him immune to being wrong. (Nor, by the way, do I see any bad faith in her initial tweet. I see exasperation, but not bad faith).
There was no disagreement. Yann didn't say anything about harms, nor did the guy he was replying to (who talked of dangers). In particular, Yann did not suggest in any way a) that there are no harms or b) that harms are only related to biased training sets. Yann was commenting on the outcome of a particular research project and how they used a biased training set, resulting in the outcome that was observed.
Timnit brought up harms first, then pretended Yann had marginalized such harms and attributed them solely to biased training sets. And then viciously attacked that strawman. That's a bad faith argument.
I can appreciate that she might have been indeed generally sick and tired as she writes, and can appreciate that sick and tired people will not always manage to be nice or overcome their own biases and assume good faith all the time from the other party; we're all human after all. But that doesn't change anything about her argument being made in bad faith.
> Yann didn't say anything about harms, nor did the guy he was replying to (who talked of dangers)
This feels like unreasonable semantics. The dangers are precisely the danger of causing harm. The harms therefore are concrete results of theoretical dangers manifesting. They aren't different.
> In particular Yann did not suggest in any way a) there are no harms
I agree, and I've said as much.
> b) that harms are only related to biased training sets
He did, insofar as he suggested that the dangers were due solely to bias in the training set, which is implied when he says that if you train the same model on different data, everyone looks African. Like yes, that is true, but it doesn't reduce the harms (or dangers, if you want to be precise). It just creates a different set of biases with a different set of potential harms (which again, are "dangers").
You've done a PhD. Why on earth would you believe that someone, even an expert in the broad field, would know more about a particular topic than someone whose research specializes in that area?
Do you think Yann knows 100X more about every area of ML than everyone else, or is it just fairness, accountability, and transparency that he happens to be more knowledgeable in than, arguably, a founder of the subfield?
It's pretty simple: he made ML work before almost anybody else did, kept working on it during the deep network explosion, and is now running Facebook AI, which has to deal with these sorts of problems, with practical solutions, on a daily basis, with billions of users. That sort of daily experience counts so much that I would place him in the "knows 100X about every area of ML" category (excluding rare subfields).
It's rare but I have encountered people outside my field who knew more than I did about my field, because of their daily experiences over decades, or their raw intelligence. Yann seems to have both.
So are you suggesting that Yann has more expertise on what you're working on now (which I know, and would consider to be an ML subfield in the same vein as Timnit's), and that you would therefore defer to his expertise when he says things that show nothing more than an undergraduate-level understanding of the topic?
Because I'm only a dilettante in the AI ethics space (and admittedly ML as a whole), and I can describe the flaws in Yann's reasoning. Blind deference of that level isn't rational.
I'm always hearing tales of how insane Google's social justice warrior culture has become ... but I'm always hearing these stories from White, Asian, or Indian males. I wonder how bad it actually is in reality.
The picture insiders paint is that of organizations infested with technically mediocre people who form cliques that provide air cover for each other's mediocrity ... with sex and race providing a disincentive for management to take action.
I wonder how accurate that is ... one explanation for the stories could just be that a lot of disgruntled males are unhappy with competition.
On the other hand, if it was true and I was a high performance underrepresented minority, there's no way I'd go to Google. Wouldn't people just assume I'm there because of quota filling, instead of my actual performance?
I definitely saw a bit of that when I was there, but only a bit. I think it mostly started after I left. Before about 2012 or so, every female engineer I encountered was just as competent as the men. I never thought about diversity culture because everyone was of pretty consistent skill, at least in the parts of the org I was in.
After 2012 we got a diversity hire on our team who was useless in every way, was a pathological liar and manipulator, yet who was consistently rewarded because the bosses boss was a female feminist. Her boss knew she was trouble but could do nothing. I did encounter a few more cases where female employees would just inexplicably have skills gaps that shouldn't have happened at Google, like not knowing what hexadecimal was.
I never encountered grumbling about competition. Google was not a hugely competitive place at that time because there was more than enough for everyone. Promos were generally not subject to quotas, for instance, at least not for most of the rank and file.
> If I was a high performance underrepresented minority, there's no way I'd go to Google. Wouldn't people just assume I'm there because of quota filling, instead of my actual performance?
If you were genuinely high performance then no, of course people wouldn't assume that. And sure you'd go to Google, because they pay very well.
The real issue and it's not at all Google specific are the minority low performers or outright troublemakers, like Gebru. Then people absolutely assume they are quota filling. They go anyway because they aren't self aware enough to realise that's happening and blame any lack of respect they get to racism/sexism. And the money is still great so why wouldn't they go?
Not disputing what you are saying, but there seems to be more to the story. If this is all that happened why not let her resign on her own (perhaps take action if this did not happen after n months). Surely that would create less publicity.
Are you saying that someone that Google hired as an AI ethics researcher should not be allowed to disagree with an AI practitioner on matters of AI ethics, and should be canceled for expressing an opinion?
What happened to diversity of thought, academic freedom, free speech as a civic virtue and not simply a restriction on government action, and all those other things this community normally stands for? Just yesterday we saw the NLRB rule that Google had illegally fired workers, and we were talking about how the big tech companies use their power to suppress internal dissent and it's bad. How did we forget that so quickly?
> forced ... to give up on Twitter after she took exception to this tweet
Can you provide a source for this? What did she say?
There are legitimate criticisms of Yann's tweet. Just because he is technically correct about how ML works doesn't mean that what he is saying isn't ALSO a classical dismissal of the concerns people have with AI.
The issue isn't that ML is evil or racist. The issue is that if it is used too objectively detached from the reality of the world it operates in, the outcomes could be used for evil or with racist intents.
AI scientists like LeCun handwave away concerns about diversity in data sources, just as his employer handwaves away concerns about engagement algorithms surfacing misinformation on the platform.
But the societal consequences remain for others to deal with.
Basically (to summarize a lot), his point was: an ML model is only as good as the data you feed it. If, say, the photos for your face recognition model are only of white men, then obviously the model will do well on white men, while (possibly) not doing as well on other races or genders. This is a statement of fact, and nothing controversial about it.
But she took offense to it, and started insulting him on Twitter. He got tired of defending himself, and just quit Twitter.
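For what it's worth, the data-skew point is easy to demonstrate on toy data. Here's a minimal sketch (entirely hypothetical: synthetic 1-D points and a stand-in task, not face recognition or any real model) showing how a 95:5 training mix alone tanks accuracy on the underrepresented group:

    # Minimal sketch: train on data dominated by group A, evaluate per group.
    # The "groups" are just clusters whose correct decision rule differs,
    # standing in for populations the model must serve.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)

    def make_group(center, n):
        # Feature clustered around a group-specific center; the right
        # answer for each group depends on its own center.
        x = rng.normal(loc=center, scale=1.0, size=(n, 1))
        y = (x[:, 0] > center).astype(int)
        return x, y

    # Group A outnumbers group B 95:5 in training, as in a skewed photo set.
    Xa, ya = make_group(0.0, 950)
    Xb, yb = make_group(3.0, 50)
    clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # On balanced held-out sets, accuracy stays high for A but falls toward
    # coin-flip for B -- the only difference is the training mix.
    for name, center in [("A", 0.0), ("B", 3.0)]:
        X_test, y_test = make_group(center, 1000)
        print(name, clf.score(X_test, y_test))

Flip the 950/50 mix and the failure flips with it, which is the same observation as LeCun's "train it on different data and the bias inverts" point.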
She was never uncivil and she pointed out a series of issues, from what she believed was imprecise in his argument, to the societal issues that caused her to be viciously attacked by other people for engaging in the discussion.
Other people questioned whether data only can be accountable for the biases and were not subjected to the kind of vitriol she was. The fact I am a white cishet male shields me from a lot of misdirected anger.
Not just that, she never provided any links to tutorials/papers/etc that she had given on the topic, and when I looked into the one workshop (from memory) that someone else mentioned, it had literally nothing to do with the issue of dataset bias.
That episode gave me the impression she was more interested in drum-beating and axe-grinding than engaging constructively. I'm not surprised she seemed to be doing the same inside Google.
Coming back to the original tweet: if you changed the training data to have more black people instead of white, would it perform the same but with inverted racial biases? Maybe? You really can't know without doing it. It might generate faces with a dark complexion but also big distortions or unrealistic colors. The original model doesn't just produce white people because of the training data: the hyperparameter tuning, and perhaps the entire architecture, would have been modified until it produced acceptable outputs... using white training data. Ultimately the engineers and other humans behind the scenes are the arbiters of success, loss functions are chosen by humans, and swapping training data on the same model won't change those early decisions.
In another context I'm sure LeCun could have offered his somewhat reductionist take on this example of bias (something I might have done myself – reductionism flows in technologists' veins!). A discussion could have ensued, and everyone could have come out with a better understanding. A hot Twitter thread isn't where that will happen. Neither LeCun nor Timnit has the power to change what Twitter is. LeCun (reasonably!) doesn't like the nature of the discussion, and he leaves, and I think that's OK.
> Ultimately the engineers and other humans behind the scenes are the ultimate arbiters of success, loss functions are chosen by humans, and swapping training data on the same model won't change those early decisions
How would changing loss functions alter this? This makes no sense.
Hyperparameter tuning is done iteratively, and used to get the best score on the test dataset. Do you really believe the engineers hand-picked examples and purposefully trained it so the output looked more "white", disregarding the test scores?
Training/test data is 100% the cause for this bias.
The goal is to get pleasing face reconstructions; everything follows from that. There's no objectively correct loss function; it's something that is selected because it has a good effect.
A simple example could be sensitivity to color differences: with lighter skin tones there is a fairly large difference between eyebrows and eye features and the person's forehead or cheek. Someone with a pretty dark complexion might have features that are distinguished in a very compressed set of colors. Depending on the loss function a dark complexion face could be essentially flat and featureless.
This could be fixed of course... but it requires changing the loss function.
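To put toy numbers on that (my own made-up values, not from any real model): under a plain MSE loss on raw intensities, erasing the features of a dark, low-contrast patch costs the optimizer a fraction of what erasing the same relative contrast in a bright patch costs, so featureless dark faces can be the "cheap" solution.

    # Toy illustration: same +/-10% relative contrast around the mean,
    # one bright patch and one dark patch (values are made up).
    import numpy as np

    bright = np.array([0.72, 0.80, 0.88])   # e.g. cheek / brow / forehead
    dark = np.array([0.072, 0.080, 0.088])  # same ratios, darker tones

    def mse_if_flattened(patch):
        # MSE incurred if the model just outputs the patch mean,
        # i.e. erases all facial detail in that region.
        return np.mean((patch - patch.mean()) ** 2)

    print(mse_if_flattened(bright))  # ~4.3e-3
    print(mse_if_flattened(dark))    # ~4.3e-5: 100x cheaper to flatten

A loss that weights errors relative to local brightness, or that is computed in a perceptual color space, removes that asymmetry, which is the "requires changing the loss function" part.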
Learning the conditional mean vs. the conditional median for super-resolution might be affected differently by a large chunk of out-group data. That was what I remember folks talking about during this twitter feud. Or, as with word embeddings, people have added balance penalties to the loss function to make it unbiased in the presence of biased data. Dataset bias caused this, but dataset bias can sometimes be well mitigated.
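The mean-vs-median point is easy to see numerically. A minimal sketch (toy 90/10 mixture, my own numbers): the best constant prediction under L1 loss is the median, which sits entirely inside the majority mode and ignores the minority outright, while the L2-optimal mean is at least pulled toward it.

    # The L2-optimal constant predictor is the mean; the L1-optimal is the
    # median. With a 90/10 mixture, the median ignores the minority mode.
    import numpy as np

    rng = np.random.default_rng(0)
    majority = rng.normal(loc=0.8, scale=0.02, size=9000)
    minority = rng.normal(loc=0.2, scale=0.02, size=1000)
    data = np.concatenate([majority, minority])

    print(np.mean(data))    # ~0.74: L2 optimum, pulled toward the minority
    print(np.median(data))  # ~0.80: L1 optimum, the minority has no effect

The same logic carries over conditionally: wherever the model can't tell groups apart, an L1-style objective snaps to the majority mode while L2 averages across both.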
I agree with you on the principles & morals, but frankly this reads as scolding - all he did was explain how this can be avoided, which is good context to provide to the public. There isn't some hidden message there where he's "dismissing concerns people have with AI" or facilitating "outcomes that could be used for evil or with racist intents".
His tweet doesn't rule out the possibility that he secretly has a malicious set of incentives, just like your handle doesn't rule out that you're a Communist spy sent to fracture the American psyche. Yet it's ridiculous. I feel like in 2020, for some reason, a hostile filter is applied ~99% of the time on the internet, while you _rarely_ see that in personal interactions in real life.
Wow. Outside of the issues discussed, this email was not what I would expect from a senior manager. The language is all over the place. It is rambling. It is not unintelligible, but it is far from an easy read. It seems like something dictated into a machine rather than something composed on a keyboard. If you are not confident in your use of language but still want to go out in a blaze of glory, make sure someone QCs your final speech.
>Do you know what happened since? Silencing in the most fundamental way possible.
"Silencing in the most fundamental way" to me means physically clapping your hand over someone's mouth or murdering them, neither of which I believe she actually meant. This email would have greatly benefited from peer review.
> And you are told after a while, that your manager can read you a privileged and confidential document and you’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything
Interesting. Having been on the receiving end of a (later proven to be bogus) HR complaint, this is exactly how it was handled. I was forced to respond to allegations where neither the allegations nor source were ever shared with me.
That's how it is supposed to work. If you complain to HR about your manager how can they reveal your identity when your manager controls your performance review and compensation?
If the complaint is about academic research in a research institute, then there needs to be a different process. It's typically a fairly conservative idea, in fact (see: Chicago Principles [1]).
This is not a research institute, it's a division in a for profit global corporation.
I'm frankly disgusted that people who call themselves researchers of the humanities go work in this sort of obvious PR stunt of an "institute".
If you're hired by a private company to do academic work, it may often be reasonable to expect academic norms to apply. I'm not sure why you're disgusted by the idea that people might take jobs at Google, who do quite serious work around AI.
I don’t think it is disgusting to expect that, but it most certainly is naive. There are a set of norms, but they are industrial-research norms, not academic norms and they overlap but are not the same
I agree that it is a perfectly reasonable expectation to have, however there is no notion of a tenure in a private company. To me that is a pretty fundamental distinction (with far reaching consequences) that people often overlook.
Which doesn't seem to be what was happening here though, or at least there's no indication to that effect.
At the same time, the narrative that someone above her in the management hierarchy decided to pull her research seems more plausible. In that case, I would tend toward "Privacy for the Weak, Transparency for the Powerful." What you describe is the former, but this would seem to be a case of the latter.
It's really hard to tell from this message what really happened. Why isn't it plausible that the paper in question was somehow upsetting to one of the dozens of people to whom the author circulated it? It reads to me like this person rejects the very idea that any criticism of their paper could possibly be legitimate.
First, for this to be a valid case of HR protecting someone who needs protecting, these complaints would have to come from someone whom Gebru was managing and therefore had leverage over. I don't know the specifics, but it would stand to reason that she did manage some people, so that possibility can't be excluded for sure.
But from the last paragraph it seems that she did actually try to address the points raised:
> You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored.
I wasn't there, but simply not addressing counter points doesn't fare well in the public's (read: my) eyes.
I don't think modern HR policy really works like that. If one person complains to their HRBP that the speech or actions of another person at the company makes them feel personally attacked, derogated, or establishes a hostile work environment then the company will be forced to take that matter to the second person. HR isn't looking to setup an Oxford debate between persons A and B. They are required to resolve the hostile work environment allegation, else person A has an actionable complaint against the company.
Sure but HR wouldn't share the feedback on her paper and give her an opportunity to update it. Instead they demanded that she retract the paper.
How is it fair or standard to make someone retract a paper because of anonymous and secret criticisms that the researcher isn't even given a chance to address?
Actually, to be fair, anonymous referee reports are absolutely standard in academia, and they may be standard part of Google practice in this field - and if so that might be good practice. Waiving anonymity could compromise the integrity of the intellectual process.
I could maybe understand the anonymity, but I don't see why the substance of the feedback that led to the whole paper getting retracted was kept secret at first. At least give the paper's authors an opportunity to update the paper.
The reasons Jeff Dean cited in his email for the retraction sound like things that could have been fixed.
That's just how HR works. It's there to protect the business. They get a complaint and a bunch of HR people and lawyers decide what needs to happen, and then they make it happen. It's not a debating society.
>> Imagine this: You’ve sent a paper for feedback to 30+ researchers, you’re awaiting feedback from PR & Policy who you gave a heads up before you even wrote the work saying “we’re thinking of doing this”, ... haven’t heard from PR & Policy besides them asking you for updates (in 2 months).
Based on that I assume that the author was soliciting guidance from PR & Policy but didn't receive any. Someone higher up isn't doing their job effectively.
That scenario still wouldn't be close to what the anonymity of the HR process supposedly exists for. Unless the research paper in question contained a footnote "oh, and BTW, <xy> is stupid".
Oh, I agree, I just found it interesting that the AI Researcher's case was treated the same way. It makes me wonder if the issue was really feedback about the paper and not some other HR complaint.
>That's how it is supposed to work.
The thing that sucked about it was that it was impossible to defend myself. I only found out details after the fact because the complaint was anonymously filed on behalf of a co-worker and it was the co-worker that later came to me to explain.
She is accused of being an abuser. I think we've seen enough fake accusations of various kinds of abuse (i.e. crimes without proof, one person's word vs another's) in the past few years to know better.
There's no evidence that she was accused of abuse. How do you abuse people in an academic paper, anyway? Seems much more likely that someone higher up didn't like what she might have implied in the paper in that it might have clashed with company policy somehow.
EDIT: a bit more info from the NYTimes:
"In an interview with The Times, Dr. Gebru said her exasperation stemmed from the company’s treatment of a research paper she had written with six other researchers, four of them at Google. The paper, also reviewed by The Times, pinpointed flaws in a new breed of language technology, including a system built by Google that underpins the company’s search engine."
https://www.nytimes.com/2020/12/03/technology/google-researc...
How, exactly, is it supposed to work? If you are asked to respond to allegations, yet those allegations are not shared with you, what are they actually asking you to do?
"Retract the paper" not an allegation though. It is a demand / conclusion of a decision process. It doesn't allow any response/defense, and you're right, they don't seem to have been vague about not wanting any of that.
Something is fishy about submitting a paper with 1 day's notice before its deadline when the internal review process takes 2 weeks - she must have known the drill, as she had published dozens of papers at G before. The approval sounds like it may have been somebody's mistake; not awaiting review feedback before submitting feels strange. Especially when followed by a threat of leaving the company if the identities of people consulted in the review process were not revealed, all while being involved in litigation against G while on G's payroll, etc.
It's like you could make a good Silicon Valley episode just from that. I'm not sure I buy total innocence here.
Sorry, I was talking about the commenter you were responding to above, who described being asked to respond to allegations without being told what those were.
I remember reading somewhere (maybe here on HN) years ago that HR is like the secret police of police states. Nobody willingly talks to them, except their bosses. The dynamics of this incident, the secrecy, the lack of care about people, reinforce that metaphor.
I wonder what People Ops is, something more about logistics maybe. Still paid by the company, so it can't be on the side of employees.
I don't have much to say about her research or Google, really. I don't know much about AI or this group she's in, or her for that matter. But I'm all for diversity in tech and women's rights, etc.
What strikes me is how out of touch she seems to be. We're in the middle of pandemic and millions of people are out of work. The worst economic downturn since the Great Depression. However, due to luck, her credentials and research, she has been able to continue working for one of the top companies in the world and is even able to take a vacation. Good for her.
And yet, while on vacation, she makes demands of her employer that perhaps she was aware they wouldn't/couldn't meet.
I'm stunned by the "if you don't accept my conditions, I'll deal with you when I'm good and ready, when I get back from vacation" attitude. Like, how out of touch and tone deaf is this?
Anyway, the first thing she does, after rage quitting, is to run to Twitter to complain. She even said on Twitter that she didn't have a lawyer and wanted to find one ASAP. Honestly, everything Google did seems perfectly reasonable. Is her behavior unbecoming of a manager? Based on what I saw on Twitter, I'd say 'Yes'.
I had been following her on Twitter and saw this pop-up last night. It was kind of confusing and seemed like it was going to become very annoying, so I unfollowed. Then today, it kept showing up in other people's feeds, so I blocked her and muted people who retweeted her.
There is such a thing as a "brilliant jerk" and usually companies keep such people around too long. Seems like Google decided this headache wasn't worth it, and cut her loose. But now we see a "brilliant jerk" who is female and a minority/POC.
> You are not worth having any conversations about this, since you are not someone whose humanity (let alone expertise recognized by journalists, governments, scientists, civic organizations such as the electronic frontiers foundation etc) is acknowledged or valued in this company...I understand that the only things that mean anything at Google are levels, I’ve seen how my expertise has been completely dismissed. But now there’s an additional layer saying any privileged person can decide that they don’t want your paper out with zero conversation.
This conclusion seems like a complete non sequitur from the argument. There is literally nothing in the preceding argument to suggest that any of this was motivated by anything other than a disagreement with the report. The argument presented by the researcher is conjecture. Coupled with the fact that she threatened to sue her employer and they kept her for a year? Any reasonable employer would see that her employment is a liability and dismiss her before the year is out.
In the world of business, there is no jewel too precious to discard.
>> A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.
Plain English translation: she didn't present the results of her research in a light that favours the company's goals.
Once in a while someone posts an article on HN about academia and how doing a PhD is badly paid with few opportunities for stable employment. Then the comments will wonder why people still bother with academia.
Well, this is why. Because, as a rule, if you are a researcher in academia you won't be asked to retract your research because it makes someone look bad.
> if you are a researcher in academia you won't be asked to retract your research because it makes someone look bad.
Not entirely accurate (I worked in academia). If your research makes someone look bad or has a chance of jeopardizing financing, usually you won't even get a chance to submit the paper, because it will be shut down in your research center by either your mentor or other peers.
Sound mentors are well connected with publishers, and even if you somehow send it under the radar (which will be the end of your research career), someone will call either your mentor or management for confirmation.
> she didn't present the results of her research in a light that favours the company's goals.
The same is happening in academia, but let's not open Pandora's box :)
This is not my experience. As a PhD student I've already had a (strong) disagreement with my thesis advisor about whether to submit a paper or not. My advisor thought I shouldn't, I thought I should. In the end, I submitted it and it got rejected and that's where the whole affair stopped.
Actually, I really can't think of how my advisor would tell me not to publish a paper, except to suggest that it's not ready for publication.
But perhaps, as is often the case in these conversations, we're talking about different "academia"'s? I think there are differences between fields and between different countries' institutions in the same field etc.
I bet your papers have math and, you know, neural nets in them. This paper was just prose, mostly non-original claims; it basically says: this thing is dangerous, that thing has negative effects, offering no solutions, just drawing attention to issues, a sort of issue inventory. Then it follows that it's more of an ethics paper than ML, and should be judged by the standards of ethics papers.
> Well, this is why. Because, as a rule, if you are a researcher in academia you won't be asked to retract your research because it makes someone look bad.
If your results go against current dogma, or if they reveal an uncomfortable truth, your paper will be rejected for spurious reasons every time you submit it. A recent example from Glenn Geher[1]:
> I have published more than 100 academic pieces in my career to date. I've pretty much been through it all.
> From this context, I will say that the most difficult paper that my team (the New Paltz Evolutionary Psychology Lab) and I have ever tried to publish was a paper on the topic of political motivations that underlie academic values of academics.
That's the pushback you get for researching political motivations in academia. It's much worse if your ideas directly connect to social policies. If your ideas are particularly heretical (or you engage in good faith with heretics), you'll have your life threatened by violent mobs. This has happened to Allison Stanger, Bret Weinstein, Jordan Peterson, and many others.
>> Each rejection came with a new set of reasons. After some point, it started to seem to us that maybe academics just found this topic and our results too threatening. Maybe this paper simply was not politically correct.
It looks like the only reason to believe the reasons for rejecting the paper were political is that it was rejected by many venues. Note that having different reviewers give different feedback is par for the course.
Personally, I don't think this is serious. Having one's paper rejected stings, but it happens to everyone, all the time. There's no reason to see hidden motives behind it; the paper not being very good is sufficient explanation.
Not that papers are never rejected because they threaten a reviewer's work. But to claim that the entire academic establishment was threatened by a paper and rejected it for that reason? Sounds a bit far-fetched.
The article you link attempts to attribute various incidents of forced retraction to political prejudice, but I don't see how any of those cases "makes someone look bad", as per my comment.
Mostly, I see in the article an attempt to engage in the usual online culture war; to be honest, nothing reported there makes me concerned about academic freedom.
If I understand the document correctly, she wants HR to be biased against hiring "privileged" people (I think that means white, possibly also male, not sure) and wants to get politicians to force the company to report, creating external pressure toward the same end.
a. Yes, but you can post 'women and minorities encouraged' job ads. b. No.
People invested in identity politics are, by their own admission, interested in outcomes. In this worldview, the legal system is a tool for achieving the desired outcomes. The same set of rules can be illegal if it rejects certain identities in one context, and at the same time legal if it promotes those identities in a different context. 'Heads I win, tails you lose', but dead serious.
A thread discussing the pervasive 'women and minorities encouraged' mindset in western academia:
AI ethics researcher sounds like a fluff position anyway; she was hired to give Google a shine of inclusiveness for non-technical people who think AI is going to take over the world and don't understand that its current state is nothing more than pattern recognition and statistics.
The squeaky wheel gets the grease, but it still bothers me that people like this get highly paid positions to point out problems, seemingly without any technical knowledge. And then she's delusional enough to think that she can misbehave and issue ultimatums?
The article links to a "landmark study" that shows that black women are underrepresented in training datasets and therefore misclassified more. Sure, that's important to fix, but it sounds like an undergraduate project, and it's not something I would call AI.
Ok, I read her Wikipedia entry and apparently she has Bachelor's and Master's degrees in electrical engineering from Stanford. Those are probably fairly standardized, so I'll concede that she does have some technical knowledge.
I have, however, seen way too many doctorates that seem technical while they're not. There's a common theme in her career, and it's not necessarily technical knowledge.
She worked at Microsoft Research, which sounds really impressive, but it was in the "Fairness, Accountability, Transparency and Ethics in AI" lab, which sounds like a PR stunt. Her research was supported by a Stanford DARE fellowship, which is mainly concerned with increasing diversity. She was an AI researcher at Google, but the work consisted of pointing out bias in datasets. Her PhD research[1] used street view images to find pickup trucks and correlate them with Republican voters, which sounds more like a sociological application of computer vision than a hard technical problem.
When I was a grad student at Stanford she was one of the TAs for the main ML class (CS229). I can personally attest that she has solid technical skills and is quite sharp. (I got my PhD in EE at Stanford around the same time)
Ok, I'll take your word for it, I guess I was wrong about the technical skills bit. Looking at her career path I still get the impression that she's more interested in social activism within technical companies than engineering though.
Fairness, Accountability, Transparency (aka FAT) is a real sub-field of machine learning, and in my opinion as technical as any other machine learning sub-field. It is not a PR stunt. I've published a paper at a top ML conference about some CUDA kernels I wrote to accelerate training a class of RNNs, so I feel I have the technical grounding to make these claims.
Many FAT papers are published at NeurIPS or ICML (generally considered top two machine learning conferences). There's also a conference just on the topic: https://facctconference.org/
Her technical work and credentials are solid, and working on better datasets is valuable and important. What is questionable is acting as if every dataset that can be found to have some dimension of bias against a race or gender deemed underprivileged is a malicious crime against humanity, when those datasets contain many biases cutting both ways, when there are many alternate explanations for this state of affairs besides malicious discrimination and oppression, and when reasonable courses of action include the constructive work of building and using better datasets.
The fine-grained classification work associated with her thesis isn't an _easy_ technical problem. The dataset, labeling, and architecture generated a few other first-author conference papers for her besides the thesis, so it's not like she just applied an off-the-shelf model to an existing dataset...
She might have been a bad apple, but attitudes like this are ten times as toxic. It can drive a person mad, just suspecting their peers think about them the way you do. Even if it isn't true and it's just because our view of collective consciousness is distorted by exposure to this kind of resentful bile online.
>But now there’s an additional layer saying any privileged person can decide that they don’t want your paper out with zero conversation.
Decisions in megacorporations are made in one direction, from the top on down. If this is her main point of contention, I sympathize but that's what you get for joining one of the largest corporations in the world. It seems both parties will be better off without each other. I hope her research won't suffer for it.
Of course it will. The big tech companies have created networks of users and user data beyond anything that can be replicated in an academic or laboratory setting. By saying it's fine for corporations to have draconian oversight over research into the systems they build, you are saying these systems cannot be researched for any purpose other than making them more profitable. That is an abdication of responsibility.
edit: I'd also like to point out the obvious: Google was censoring her publications. How can you suggest that remaining in that environment is going to be conducive to her publishing high quality research?
I disagree because I don't think Big N has exclusive access to the data necessary for cutting edge research. You don't need petabytes of data to train cutting edge neural networks.
It also seems she published the paper she is best known for before joining Google, suggesting academics are not submitting subpar research compared to Big N.
If her goal was to perform research that would positively impact AI that runs in production, then taking a position at Google doesn't seem like the worst idea, especially since their stated principles [1] seem to fit that agenda.
Unfortunately enacting change "from within" isn't always easy. It's easy to forget that Google AI still sits underneath the megacorp.
This is a very frustrating email, precisely because there seems to be a very significant issue at the heart of it (a research paper allegedly being suppressed by Google management for some reason), and the email largely omits that in favor of unrelated disagreements with management and "process". It's like trying to find an astronomical body that only shows up through its gravitational effect on other bodies: a lot of chaff, with something very heavy and unseen at the center.
I wonder why these privileged people (I don't have her huge salary, and I don't live in the US or work for a FAANG) keep forgetting that, after all, they are just _employees_.
That's one of the worst possible takes on this situation.
Her role is specifically created to hold her employer to account when they are using bad practices. Her opposition to particular policies enacted in the company is literally part of her job description.
I understand what you are saying, but I doubt they hired her to be some kind of controller with unlimited enforcing power (as she says: "I understand that the only things that mean anything at Google are levels").
Is merely highlighting issues, without the power to enforce anything, not enough to produce tangible results? That's for sure.
This is a very dangerous stance to take. You are advocating for a purely authoritarian mode of management, applied universally without concern for context.
Not saying that it's desirable or not horrible, but isn't that the default mode? At least that's the world I've lived in: if you are an employee, in the end your opinions don't actually count, you are just a cog. Your position in the hierarchy is what matters.
And that's why at some point you need labor unions (not saying highly paid tech workers need them), and at some other point you start considering no longer being an employee.
I think the fact that she had a huge salary is why she feels comfortable doing this. The average person needs their job to pay the bills so they don't want to rock the boat. She's already set for life so she doesn't care who she offends because getting fired is not a big deal for her.
Future employment opportunities still exist for her. That is another reason an employee may opt to present an alternative reality online after events like "leaving" or "getting fired": people on the outside usually don't have the time or the access to verify either side's story. There will be another company of equal caliber where she can continue her work. Almost everyone who has left Google in such public fashion in recent history has landed a similar job elsewhere.
And let's not forget that in most (all?) other first-world countries you are lucky to make one tenth of that base salary. It's easy to forget. Very easy.
It is surprising she was not fired long ago.
I suppose Timnit was usually rewarded for such an attitude, chock full of victim-based language.
"after all the micro and macro aggressions and harassments I received"
"does it just happen to people like me who are constantly dehumanized?"
"Silencing marginalized voices like this is the opposite of the NAUWU principles which we discussed"
"the next day I get some random “impact award.” Pure gaslighting"
"Writing more documents and saying things over and over again will tire you out but no one will listen."
People with this ultimate victim attitude are absolutely toxic to employ and work with. I go out of my way to screen them out of hiring in technical roles.
For all its faults, Google bends over backwards (and sometimes forwards) to advance minorities' interests. It's just not plausible that they're doing the opposite in this one case.
I’ve been at companies where I’m faced with an approval process before publishing research to the outside world. It can be quite stressful, especially in the face of holidays and impending deadlines.
However I guarantee that if anyone had a problem with the work, they would take the time to tell me why.
And it would happen without untitled meetings or unrelated email drama.
They might even work with me to fix the problem because they’re my coworkers and that’s what it means to work together in a normal workplace where people respect each other as colleagues.
Sadly every piece of publicity related to Google and its workers makes me want to work there less and less...
"withering email" == "anger-fueled message sent by a manager to a large group that asks readers to ignore company policy and start arguments with other leaders"
I don't see where "ethical AI researcher" comes into play here. Instead, I see a leader at a company unprofessionally sending messages to a large listserv, who is later fired.
Can we be more careful with our language? Saying "X silenced a minority/woman" or whatever is not saying that "X silenced a minority/woman because they are a minority/woman".
It isn't really clear from your first quote whether one thinks that silencing people generally is okay but not minorities/women, or that silencing anyone is never okay, or that the minority/woman was silenced merely for their demographic status.
It's an abbreviation for any/all/some of those things being more important/interesting than what most of the discussion here is about (whether the letter/firing was appropriate).
What's misleading about the title? You have your own way of characterizing the email, and so does the publication. Them not agreeing with you isn't the same as misleading you.
In general, if I hear "withering e-mail" without context, I imagine an e-mail sent between two parties (or at most, a handful) where one party is cutting down the other directly. When I hear that an employee was fired for a "withering e-mail", I immediately think of retaliation.
In this case, "e-mail" is almost inconsequential. This "e-mail" was a broadcast to hundreds of people who were not in any way involved with the subject matter. It was "withering" in the sense that it cut someone down, but the person/company it cut down wasn't even in the TO field.
Words matter. I now notice that the post title is changed to the much more neutral "AI researcher Timnit Gebru resigns from Google".
1. I think few companies are out there "hiring by race or gender". At most companies I know, affirmative action means hiring by skill, but making sure at least XX% of the hiring pool are women, black, etc. This really just means your resume is more likely to get seen and maybe you're more likely to get a first round, but not that "you're being hired by race or gender".
2. You could argue this is still unfair (although, as previously stated, you're still hiring people who earned it based on skill). Unfortunately, not doing anything would be more unfair, specifically by keeping things stagnant in the problematic state they are in now [1].
In the macro-historical context, white Americans brought African slaves to their country and actively deprived them of wealth and knowledge, right? That was a societal, conscious commitment to push these people down.
Now that they're free, but seriously lacking wealth and behind on knowledge too, they're expected to just create that wealth and knowledge themselves. Thanks to "unfair" institutions like public education this will happen regardless, but very slowly unless there's a societal, conscious commitment to push these people up.
[1]: Having a huge chunk of your society be alienated not only socially but also economically leads to conflict, could worsen the gap instead of improving it, weakens social and cultural ties, etc.
Diversity initiatives are like a graduated social tax. As a passably straight white man I can afford to pay more than an LGBTQ person of color. Just like I can pay more in income taxes than people in different financial situations.
It doesn't bother me if an applicant pool is made a bit bigger for marginalized people than myself, because I do not struggle to fall into any applicant pool whatsoever and have not for most of my adult life. Be it for employment, education, or housing.
Where conversations can get annoying is when I feel demonized because inequity exists independent of my individual actions. But I can live with someone bitching on Twitter because I'm not afraid of the police or telling potential employers I have a kid on the way.
What I don't like living with are others like myself, in my own caste, who dismiss reality with reductionist notions like "hiring by race or gender is inherently racist" without critical thought. Ignoring a lack of diversity tacitly supports inequity and is itself racist or bigoted, by virtue of doing less than nothing to correct it.
"As a passably straight white man I can afford to pay more than an LGBTQ person of color."
How is this true? What if one person is a broke straight white man with three kids, and the other is a young, childless LGBTQ person of color who is heir to a fortune? How is the first person possibly privileged in comparison with the second?
Looking from Europe, America seems to be very determined to recast its very obvious class problem as a race (or even gender) problem, even if it means introducing disadvantages based on immutable characteristics AGAIN. Two wrongs do not make a right and this farce will bite you again in the future, just like slavery and old style racism did.
I mean, if you compare a rich person to a poor person, yes, the rich person is going to come out ahead in most social metrics, but we're not speaking of wealth inequality here; we're speaking of social divides outside it.
That said, in the United States, non-white men were systemically marginalized and prevented from participating in society to the degree white men could. There are numerous examples of this over the last four centuries, and they did not decrease in fervor until about a generation ago. For many reasons, our class divide is correlated with our racial divide. That doesn't mean we don't need to fix both, just that there are two problems to fix.
Looking from America, it's hard not to be frustrated at Europeans on their high horses talking about class versus racial divide. Is it so hard to recognize that we have different social problems, and a different history? And that things we talk about might be colored one way, because race, gender, and sexual orientation impact real people on a daily basis? Do you reject the idea that bigotry exists independent of class and it's something we need to work on?
"Looking from America, it's hard not to be frustrated at Europeans on their high horses talking about class versus racial divide."
That is the Internet - Americans commenting on Europe, Europeans on Islam, Muslims on Israel, Israelis on India, Indians on China etc. The distance may make you wrong, but it also may make you notice more general outlines of the forest you are commenting upon, while the locals see only individual trees.
Theoretically, complex societies should be able to fix multiple problems at once. In practice, it is very easy for one topic to exhaust ALL the oxygen in public debate.
2020 was all about race and other immutable characteristics in America. Amid all that, class problems, such as the enormous healthcare bills that destroy lives every day, were basically forgotten.
Of course there are many facets of bigotry out there. But: would you rather be a black upper-middle-class person, or a white unemployed person without good marketable skills?
These days, I would say that the answer is very different from the KKK days.
There are still absolutely situations where I'd prefer to be the first person over the second. For example, when interacting with a police officer, or when visiting certain towns. Or when attempting to get medical care (there's strong evidence to suggest that physicians are worse at treating black patients, and are prone to ignoring complaints from women, independent of class).
While class problems do, absolutely, exist in the US, race problems also exist.
In America, class isn't just about how much money you have. It's also about what accent you speak with, where you went to school, and very obviously what you look like. Many people who aren't necessarily familiar with how this works have an idea that money is all that matters, and that e.g. a black woman has the same opportunity as anyone else to attend the right schools and speak with the right accent and then get a high paying job at Google, and thereby enter the highest level of the professional class. But of course, you can see in this very thread many examples of how someone who has done all of these things can be considered to not have earned the right to speak out or have academic disputes with prominent people.
The first problem is you’re making the issue worse, not better. It feels good to some white men to publicly announce that they are unworthy of anything positive in their life, but if anything is reductionist, it’s that. Trying to justify racism to combat disproportionate representation (which is not an indication of racism or bias whatsoever, and we know this) is the type of evil behavior (not to mention mental gymnastics) that have no place in a society that has made so much progress. Also, your usage of “caste” is highly inappropriate.
Er, if this is unrelated, why are you commenting something that you must be aware is certain to stir up unproductive, culture-war-y discussion? Anyone who's been within 10 feet of hiring responsibilities knows you can't even think about race and gender when considering a candidate.
Right, you can’t even consider it, just mandate that race and gender factor into the candidates interviewed for every open position, make the diversity of your workforce a performance goal for executives and report the racial and gender makeup of your workforce in quarterly company updates. But no, can’t take it into consideration when hiring, of course not.
Ok, but how do you do that? It was easy in classical music; they just started doing auditions behind a curtain, without any verbal Q&A, and suddenly a field that used to be dominated by men became pretty much 50/50, and all the people who said that men were better at the highest echelons of musical achievement had to shut up.
But tech companies want to talk to the humans they're hiring, so race and gender are (almost) always available as information inputs during the hiring process. There's no equivalent to the curtain to remove the possibility of bias, so you need to look at alternate methods. Setting hiring targets is one way to do it. If you're going to complain that the targets are unfair, you can't just say "Well shucks, guess we have to stick with the system that produces massively unequal outcomes." You've gotta propose a system that's less unfair than the status quo.
"It was easy in classical music; they just started doing auditions behind a curtain, without any verbal Q&A, and suddenly a field that used to be dominated by men became pretty much 50/50"
Be careful here, the wind has changed and blind auditions are now problematic.
> Be careful here, the wind has changed and blind auditions are now problematic.
The argument made in the article you cite is ludicrous; blind auditions cannot possibly hurt diversity. If there is a problem when blind auditions are used in hiring for professional orchestras, the problem is elsewhere in the pipeline, and the way to fix it isn't replacing blind auditions, it's fixing the issues further up the pipeline.
There may be a pro-diversity case for using something other than current skill at other stages of the pipeline, stages intended in whole or in part to be educational, where current skill is used not as a measure of current professional competence but as a proxy for potential. But even there, that's largely a poor substitute for identifying and addressing the factors producing a skewed pool.
That's a good point; the original experiment is a good story that for many was proof that prejudice exists, but it doesn't mean blind screening is a panacea.
There are some fairly easy steps you can take that other, more-diverse organizations have taken. First, you have to get rid of your referral pipeline, because hiring people your employees already know is the opposite of diversity. Second, instead of recruiting from organizations you've already heard of, like Stanford where this person came from, you park a full-time recruiter at institutions with practical diversity, like Houston, Georgia, UNLV, CUNY. Finally, you go for diversity of thought, by hiring philosophers and carpenters instead of CS grads. I know for a fact that were I to hire into my AI Ethics department, the last person I would recruit would be a Stanford AI researcher. I'd go directly to the philosophy department.
>There's no equivalent to the curtain to remove the possibility of bias
Other companies have tried: names and vitals deleted from resumes before anyone involved in the hiring process sees them, remote interviews conducted over IM or with voice masking. One benefit I see in the much-maligned leetcode-style interviews is that your identity is completely isolated from the process until the very end.
There's no reason you can't do an interview over text-chat instead of in person -- hell, with most of us working remote for the foreseeable future, that's how the majority of on-the-job communication will happen anyway.
> But tech companies want to talk to the humans they're hiring, so race and gender are (almost) always available as information inputs during the hiring process.
This is actually kind of funny to me, because in the open source world plenty of people have extremely productive working relationships without ever seeing or hearing each other, and in many cases without even knowing what country the other person lives in, how old they are, or what their legal name is.
So while there are definitely arguments that face to face communication is "higher bandwidth" or has other advantages, it doesn't seem out of the question to me that the hiring process could be "blinded" to a similar extent to orchestra auditions, without any significant reduction in hiring accuracy.
(Ok, maybe not quite the same extent; language fluency and style of speech are still significant signals even if everything is done over text)
I would have just down-voted this, but I hate it when people do that to me rather than actually respond.
In this case, I'm not going to respond in much detail, other than to note that there are 30+ years of writings about affirmative action which get deeply into this issue. Your summary of it is misleading and incomplete.
Having a diverse workforce is very valuable; the paradigm shifts that can come from voices of different backgrounds are invaluable. Can growing up under racism be considered a skill? That seems like a bit of a stretch to me, but it can definitely benefit the employer.
Since you can't actually calculate someone's skill, all you can do is hire on an attempted measurement of skill. And if you find reasons to think that your measurement of skill has inaccuracies, you'd probably be better off taking that into account when evaluating a candidate.
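A minimal sketch of that statistical point, with every number, name, and distribution invented for illustration: if one group's measured scores are known to read low by some amount, ranking candidates by bias-corrected scores picks a genuinely more skilled cohort than ranking by the raw scores.

    # Hypothetical simulation: 'group_b', 'bias', and all distributions
    # are invented; this is not data about any real hiring process.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    true_skill = rng.normal(0.0, 1.0, n)   # what we'd like to hire on
    group_b = rng.random(n) < 0.3          # 30% of candidates in group B
    bias = 0.5                             # group B's scores read 0.5 low
    noise = rng.normal(0.0, 0.5, n)        # ordinary measurement noise
    measured = true_skill + noise - bias * group_b
    corrected = measured + bias * group_b  # undo the known bias

    k = 500  # "hire" the top k under each ranking
    def mean_true_skill_of_top(scores):
        return true_skill[np.argsort(scores)[-k:]].mean()

    print("top-k by raw scores:      ", mean_true_skill_of_top(measured))
    print("top-k by corrected scores:", mean_true_skill_of_top(corrected))
    # The corrected ranking reliably selects a more skilled cohort.

Under these assumptions the corrected ranking wins; the open empirical question, of course, is whether and by how much any real measurement is biased.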
Do you think the predominance of men in tech is "hiring by skill alone" and that it just happens there are almost no skilled women?
Do you think it's possible that you can "hire by skill alone" and given two equally skilled people - one man, one woman - always hire the man, and that you could hire some equally skilled women if you randomised or removed whatever male-bias exists in your hiring?
Do you think it's possible that there are equally skilled men and women available to employers to hire, but that employers have built frat-boy workplaces where only men are interested in working, and cultures that drive away skilled women?
Do you think "hire by skill alone" means always hiring the most expensive people - like buying jewellery by value alone? What if a company can't afford to staff with all the maximally skilled people, but can get 80% of the skills for 50% of the money?
What about hiring people who can be trained or skilled up, and training them?
What about teams, where the collective skills can be greater than the sum of their parts, but maybe not if all the parts are identical?
(IANAL) One small point to add: since Timnit was a manager at Google, she also acts as an agent of her employer.[0] With this comes a higher expectation of conforming to company policies and processes, particularly in how she interacts with individuals inside and outside the company, since her actions have a larger impact on Google's general liability.
I am not sure how this may have influenced Google's decision in this particular case, but I imagine it was part of the evaluation. Other comments here delve further into her current and previous actions.
This isn't the email that she sent with conditions she wanted met. Apparently she said she would resign if those conditions were not met, and the response to that email is that she was fired.
Yeah, I immediately started humming along with Johnny Paycheck.
Imagine being a senior researcher at the peak of your career, working for 6 months to a year on a paper, submitting it to a top tier conference, getting accepted, then getting the pointy haired boss do-what-I-said-peasant treatment.
Gebru wrote a research paper and circulated it for feedback. Google managers said "retract the paper because we say so." Gebru responded with "I'd rather resign (and also you suck)."
Google "accepted" the resignation (i.e. fired her).
"I said here are the conditions. If you can meet them great I’ll take my name off this paper, if not then I can work on a last date. Then she sent an email to my direct reports saying she has accepted my resignation. So that is google for you folks. You saw it happen right here."
I was able to discern this, but it doesn't explain why they wanted her to retract the paper. The contents of the paper and why they want her to retract it are key to the story, otherwise it's he-said-she-said.
Regardless of what happened, it's stuff like this that makes me consider leaving management. Any employee may try to get social media to take their side with any call you've had to make, which is stressful and potentially career-ending. Management is already a thankless job, and managers want to have a 9-5 like anyone else. Moreover, managers are often more bound in what they can and cannot say in public about employee decisions.
I switched from IC to EM because I really love helping grow careers, but if I ever was put in Jeff Dean's position I'd probably quit immediately. No job is worth it.
This is a really hard problem space to solve. I'm very sorry to see this drama happening and I have sympathy for both sides.
I routinely run into the issue that if I try to understand why people did something that went badly/harmed people without looking to blame them per se, I get accused of being "an apologist." But in practice trying to understand why things happen is essential to finding a real solution. Merely asserting that X person's/group's needs matter and they are being harmed isn't really enough to find a viable path forward.
Like I have seen people say that suburbia is "White supremacist design" because in practice it ended up being mostly Whites who got suburban housing when suburbia was born in the US. I have tried to make the distinction that suburbia was born in part because large swaths of greenfield development was quicker, easier and cheaper to create and that this choice wasn't, per se, racist.
I am aware there was real racism happening and there were a lot of bylaws that intentionally excluded People of Color, but I'm sure those bylaws would have existed even if they had been building towers of condos in downtown areas instead. It wasn't specific to the architectural form that was built.
I recently had an exchange with someone that reminded me how important trust and mutual respect is and how the lack of such tends to cause problems to escalate.
I asked this person about a thing they did. I had been having paranoid fears that they did that thing in reaction to me and stuff I was working on and when they replied to my inquiry they had a wholly unrelated reason.
I didn't tell them that I was having paranoid fantasies that maybe this was some kind of negative reaction to me. I framed it neutrally.
Their reply was very reassuring to me. It was reassuring in part because I trust and respect this person. If you don't have that piece, having a long list of negative experiences makes it difficult to believe they aren't intentionally harming you and then lying about their real motives to cover it up and enable additional abuse.
It is, perhaps, a mistake to leave this remark. Like Timnit Gebru, I'm pretty tired and frustrated with a lot of things and feeling rubbed raw and exhausted.
But it seems to me it doesn't get better by throwing in the towel and giving up on saying anything just because the entire world is touchy after a nearly year-long global pandemic.
> I recently had an exchange with someone that reminded me how important trust and mutual respect is and how the lack of such tends to cause problems to escalate.
What feels like a breakdown of trust in society has been on my mind a lot recently. Without trust, it seems communication and collaboration becomes impossible. How can society solve any of its problems when people can't discuss anything about those problems or potential solutions without it turning into a fight?
I've been on Hacker News over 11 years. I would like to think I've been building bridges, but it seems like nothing I do is ever enough to reassure people I'm not some SJW Feminazi here to just piss on the guys and let them know what misogynistic assholes they are and it gets hard to keep trying when most of the community has watched me starve for years and openly told me "Not my problem. Get a real job." (while my writing hits the front page, but people don't want to support it financially, knowing I'm handicapped, etc).
I don't have a whole lot more to give and I've spent much of this year wondering if it's time to throw in the towel and leave HN. It feels downright abusive at times to stay in a community where I feel I have done so much to reduce sexism and open doors for other women and improve participation of women here and it's the funnel for a multi-billion dollar business, yet I remain dirt poor.
It's a rather jagged, bitter pill to swallow and I think I deserve a helluva lot better.
But then I don't know where I would go if I left. HN is the least worst thing. Other places are worse.
Metafilter was a toxic cesspit that banned me for supposedly "self promoting." They like to wrench their shoulder out of place patting themselves on the back for how what awesome, wonderful people they are and the mods were actively encouraging the membership to bully me at a time when I was homeless. One member of Metafilter that had a hobby of harassing me while I was homeless was a female ER doctor. Another was a very privileged American pursuing their PhD while living in Europe.
When you left your comment, I was staring blankly at some other open tab wondering how in the heck to talk about rape prevention and best practices for dismantling rape culture without using the word "rape" at all, in part because it's a triggering word for people who have been assaulted and in part because I get accused of being full of bull for thinking I know anything about such topics.
Some problems are just hard to solve. I do what little I can, which doesn't seem to amount to a handful of sand in the grand scheme of things.
I guess the upside is I'm mostly well at this point, in spite of the entire world telling me I'm a deluded fruitcake so it's not like I'm ever going to get taken seriously or given any respect.
And this is probably all the wrong things to say, as usual. I wish I had an answer for you. There doesn't appear to be one, as best I can tell.
I vouched for your comment. It had been auto-marked [dead] due to containing some blocked keyword -- perhaps the very word that you talked about trying to avoid. I have noticed that a few words cause posts to initially show [dead] even when they are from high karma members such as yourself.
You can see if one of your own comments is dead by checking it in a private/incognito browser session. The status isn't visible to the comment poster while they are logged in.
Or if you log out, if I recall. It just generally doesn't occur to me to wonder if my comments are dead. I'm enough of a social outcast here that if it gets ignored, I figure people were just ignoring it.
For what it's worth, I think your contributions are important and you're a valuable member of this community. You've got a different perspective from many of the users of this site, and unlike many of them, your writing has actual _substance_. Regardless of whether or not I agree with what you say, if I notice your name on a comment, I know that I should pay attention to it.
Good for Google. I don't know anything about her, but her debate with Lecun on Twitter was enough for me to know she is toxic. I don't care how good of a researcher she is. The fact they fired her given the current climate shows they actually have balls. I had no idea...
I read her email. You know what's crazy about it? She talks about Google silencing marginalized voices and keeps referring to herself as a marginalized voice in the research community. She has a PhD from Stanford, works at one of the best places in the world for AI researchers, and gets more attention than almost any other researcher I know of (she earned it, of course). Wake up, woman: you're a privileged researcher! If I made the comments she made to Lecun, no one would even bother to listen, let alone would Lecun actually respond and eventually leave Twitter over it. Give me a break. There's not even one sentence in her email about what in the content of her paper is actually so controversial; it's just a rant about nothing. Welcome to the real world: companies have policies, and once you join them you lose the freedom to do whatever the heck you want. You can go back to academia, but no one will pay you what you've been making at Google Research.
> Welcome to the real world, companies have policies and once you join them you lose your freedom to do whatever the heck you want.
Yeah, there is something uniquely activist-academic about all these complaints surfacing nowadays.
As though millennials are landing in industry now and are surprised their employers are companies selling products rather than activist organizations.
And you're right, it's even funnier when the people complaining are urbane academics from elite institutions, outraged that they don't have unquestionable authority to enact policy.
Yeah, it's a broad/inaccurate catch-all, but I'm a millennial, and I feel like when I was in college (early 2010s) you were beginning to see illiberal attitudes on campus take off and become normalized.
Like stigmatizing advocates of free expression, a general embrace of safety-ism, scream-invectives-until-the-bureaucracy-buckles tactics, etc.
I'm partially sympathetic to this point of view. I agree that Timnit will ultimately be fine, despite what she is going through right now. I also agree that Google should ultimately do whatever it wants in accordance with its determined corporate policies.
Anyone who recalls this letter should be able to muster sympathy for those who have joined Google with an activist mindset. Google is supposed to be different. Google is supposed to be for people who want to make the world better. Sure, maybe it's all just so much branding, but this is how Google (and by extension many Googlers) styles itself. This news is more evidence that Google is really bad at living up to that image - as idealistic/lofty/unrealistic as it may be - where it counts.
And anyway, Timnit isn't toxic. She is upset with the status quo of equity among genders and ethnic groups in her field, and she is fighting to improve it. This is naturally unwelcome, and perhaps comes off as combative, to anyone who is satisfied with the status quo, but I would not call it toxic.
I disagree. In one thread she responds with tweets such as: "I'm sick of this framing. Tired of it.", "Again. UNBELIEVABLE.", "tell your fanboys to stop trolling me". Basically she was telling Lecun he's an idiot for not agreeing with her, and pretty much the same to everyone else who disagreed.
Did you see Jeff Dean's email they just added to the article? Not surprisingly, she is doing the same thing there. The reviewers don't agree with her, and she is shocked that someone dared to reject her paper. She starts making demands to reveal who it was and sends an email to the Women AI list as if it were some racist attack on her as a woman and black researcher. That's the toxic behavior of a very arrogant researcher. She clearly expects preferential treatment because she's a big-shot researcher, or she just thinks she's smarter than everyone else. The irony.
The best and the brightest are elevated to positions of leadership because we trust them to lead the way. I am grateful that she knows her worth, and risked her own happiness for the sake of others. She is paying a short-term price for it because Google is struggling with fundamental corporate identity issues (such as whether or not it can live up to its self-image). In the long run, she will be vindicated.
Where I come from, behavior like this is interpreted as, at worst, iconoclastic when exhibited by white, male leaders. I have found that it is rewarded more often than not in places I have worked in the past.
Also, just read some of her Twitter mentions - the fanboy harassment is real.
Sorry, I don't understand how she risked her own happiness for the sake of others. She picked a wrong battle. Google didn't do anything wrong here. An internal review process is something that happens everywhere. It's not always consistent, but it's still important and valuable.
The sad part is that social media, as always, focuses on her race rather than the details. Here's an example tweet from Jeremy Howard, a known researcher (114K followers) from fast.ai:
"Whatever else you think of Google, you gotta admit their crisis PR response is staggeringly incompetent.
The day they were found to have illegally spied on and terminated activists - they picked that day to fire their top AI ethics expert (a black woman)."
He completely ignores the most important thing - why she got fired. He calls Google PR incompetent because they fired a famous black woman researcher after what happened. You see, people expect that Google would check their employees' skin color and gender before they decide to fire them. The content is not important.
For me, toxicity often boils down to these questions: Are you acting in good faith or not? Are you acting to resolve a disagreement and advance knowledge, or is the primary function of your speech to inflame and provoke? Do you accept that it is possible for smart people to disagree with you, or do you believe any disagreement is completely unacceptable?
When I view Gebru's Twitter argument with Lecun through the above lens, it is pretty obvious to me that she is behaving in a toxic manner.
In that fracas, she emphasizes emotional appeals: "you just need to listen to us", "I'm so sick of this". She also takes a page from the AOC school of argument, where you (the rhetorician) play the meta-game of judging who is allowed to participate in the discourse (hint: everyone who disagrees with me is excluded!). It's a useful trick, for once you have purified the field your arguments will easily win.
Lecun later offered an olive branch and apology, but, true to form, Gebru doesn't offer any hint of awareness that she also may be responsible for the toxic devolution of the conversation. She personally blamed Lecun for her getting trolled, whilst failing to acknowledge that he was also being trolled by her supporters.
Later, and separately, when Google calls her on her resignation bluff, she employs the "call out" tactic, where she accuses Jeff Dean of personally firing her in an obvious attempt to shame him. This whole affair reeks of a PR play (and thus, not acting in good faith). That, to me, is highly toxic.
Wow, this is really well articulated, a great analysis of modern Twitter-driven social complaint. Please consider expanding it (i.e., iterating over the various techniques used to "win" social arguments on Twitter). You could draw on her other tweets and replies, but I'd also include and contrast less progressive tweeters (such as the president's supporters).
She is defining herself and her existence using critical theory, and so her perception of experiences is filtered through that world view. That will always create a "toxic" environment, not to mention just make a person miserable to be around.
Critical theory is useful for studying and quantifying different social biases, norms, trends, values, or whatever other social aspect you want to slice up. The problem is when people use those divisions to identify themselves. It forces people to identify with a certain group, when in reality the divisions are often not nearly as clear-cut as they are defined. Over time people are pressured socially into moving closer to how their group is defined, and what may once have been a false dichotomy becomes a self-fulfilling prophecy of sorts, where social interactions are entirely defined by "my tribe" vs "their tribe": my tribe is all good, theirs is all bad, therefore we are justified in treating everybody in their tribe as less than human. Even in cases where there is very real injustice and evil taking place, allowing oneself to be defined by these groups does nothing to resolve the problem. It only serves to shore up divisions and conflict between groups, making everything worse.
It is unfortunate that the admonition to "love your neighbour as yourself" is no longer held with high regard.
Nitpick. Jesus Christ raised the bar to 'love your enemies'.
43 You have heard that it was said, ‘Love your neighbor and hate your enemy.’ 44 But I tell you, love your enemies and pray for those who persecute you, 45 that you may be children of your Father in heaven. He causes his sun to rise on the evil and the good, and sends rain on the righteous and the unrighteous. 46 If you love those who love you, what reward will you get? Are not even the tax collectors doing that? 47 And if you greet only your own people, what are you doing more than others? Do not even pagans do that? 48 Be perfect, therefore, as your heavenly Father is perfect.
Completely agree with you. I would normally expect the inclusion of "enemies" into "neighbours" to be implicit in light of the prelude and postlude around the parable of the Good Samaritan.
I know you are referring to 'that thread', but in this email (from the article) she said:
"So if you would like to change things, I suggest focusing on leadership accountability and thinking through what types of pressures can also be applied from the outside. For instance, I believe that the Congressional Black Caucus..."
Skipping parts of the chain of command to air grievances is usually toxic behaviour, and trying to skip the entire corporation and go straight to the government counts too, IMO. This woman has opportunities, abilities and successes most of us can only dream of, and she's using them to try to make Google a vehicle for political goals not quite articulated here (this email was obviously not written for public consumption; it is light on specific details apart from her belief that too few women are hired). I can see the complaint against her: she already has an advantage in life, and now she's using her access at Google to try to strengthen her hand and weaken that of her coworkers.
If you've got strong reasons to believe your chain of command is going to respond not at all (or worse, badly) to your grievances, then going outside that chain is not toxic, but simply attempting to be effective.
I am not disregarding the skepticism stemming from that drama with LeCun...
In the email, Jeff writes that the paper she wanted to submit to the conference review system did not meet Google's internal review standards.
To me this looks really fishy! I did not read her work, but doesn't Google have a conflict of interest in publishing this paper? If the paper is as out of date as Jeff writes, it will get rejected by the conference reviewers anyway. Reviewers are known for being brutal...
Perhaps she's right to call out BS here, probably not the way she did.
"There's not even one sentence in her email about what is actually the content of her paper that is so controversial"
From the Jeff Dean email, it seems the "cross-functional team" (read: includes people focused on the company's bottom line, not the subject matter of the paper) wanted some sort of "fair balance" when discussing AI. He cites some examples.
Does this mean that if we review the AI papers from Google employees that Google has approved for publication in the past, we will consistently find this balanced view?
They added Jeff's response later; I just read it. They thought the paper was missing important information and didn't meet the bar. Fair point: that's why there's an internal review process. Her reaction was childish. She took it personally and emailed a rant to the Women list even though it has nothing to do with her being a woman, or black. She also demanded that the reviewers be revealed (again showing she is sure it was personal). All her public interactions so far show clearly what kind of person she is. She should join academia, where she can publish whatever she wants. I'm sure she made a ton of money at Google, and I wouldn't be surprised if she milks more from them via a lawsuit. She is a privileged woman.
This is not the email in which she offers a resignation ultimatum, an exchange referred to in her Twitter thread as occurring between Timnit, (presumably) her boss (Megan?), and possibly others.
It sounds like LeCun stepped away from Twitter and that thread because he was fed up with the toxic discourse, and he did not attribute that to Timnit in particular. It's strange that the OP's post sounds exactly like what LeCun was asking people to stop doing:
".. I'd like to ask everyone to please stop attacking each other via Twitter or other means.
In particular, I'd like everyone to please stop attacking
@timnitGebru and everyone who has been critical of my posts.
Conflicts, verbal or otherwise, are hurtful and counter-productive... "
-- https://twitter.com/ylecun/status/1277372578231996424?s=20
TLDR: Yann Lecun tweeted that ML models are biased when the dataset they are trained on is biased. A bunch of people, including Timnit Gebru, replied that he is wrong and that the biases are everywhere and systemic.
Yann later tweeted again to explain what he meant in the context of deep learning models without hand-engineered features, but people ignored him and still replied that he was wrong.
It looks like they were accusing Yann of all the systemic biases of society... come on.
Is there a paper showing that Transformer-based models are "biased" no matter the dataset?
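For what it's worth, the narrow technical claim ("models inherit dataset bias") is easy to demonstrate. A minimal sketch, with invented groups, sample sizes, and distributions: the same classifier, trained on data where one group is badly under-represented, tends to show a higher error rate on that group, and rebalancing the training data narrows the gap.

    # Hypothetical demo: groups, sample sizes, and feature shifts are
    # all invented; this only illustrates the representation effect.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Two classes per group; 'shift' moves the group in feature space.
        X = np.concatenate([rng.normal(-1 + shift, 1.0, (n, 2)),
                            rng.normal(+1 + shift, 1.0, (n, 2))])
        y = np.concatenate([np.zeros(n), np.ones(n)])
        return X, y

    # Training set: group A heavily over-represented relative to group B.
    Xa, ya = make_group(1000, shift=0.0)
    Xb, yb = make_group(50, shift=0.5)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                     np.concatenate([ya, yb]))

    # Evaluate on fresh, equal-sized samples from each group.
    Xa_t, ya_t = make_group(500, shift=0.0)
    Xb_t, yb_t = make_group(500, shift=0.5)
    print("group A error:", 1.0 - model.score(Xa_t, ya_t))
    print("group B error:", 1.0 - model.score(Xb_t, yb_t))
    # Typically the under-represented group sees the higher error rate;
    # re-training with balanced group sizes narrows the gap.

Whether a Transformer would remain "biased" even on a balanced dataset, the question asked above, is a separate matter that a sketch like this doesn't address.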
I don't think she's saying she's marginalized because she's a well-known researcher whom people listen to; she's saying she's marginalized because one or more people, whose identities were concealed, asked her via HR to retract a paper she had already published.
Granted, because the process she described is anonymous and opaque by design, it could have happened to other people at the company and only they, HR, and the people requesting/ordering the retraction would know about it. At the same time, if that's the case, then Google needs to stop that, because it looks really bad, especially when it involves a minority researcher who published a paper on how certain models Google uses may have some degree of discriminatory bias built into them.
Imagine this: You’ve sent a paper for feedback to 30+ researchers, you’re awaiting feedback from PR & Policy who you gave a heads up before you even wrote the work saying “we’re thinking of doing this”, working on a revision plan figuring out how to address different feedback from people, haven’t heard from PR & Policy besides them asking you for updates (in 2 months). A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar (this popped up at around 2pm). No one would tell you what the meeting was about in advance. Then in that meeting your manager’s manager tells you “it has been decided” that you need to retract this paper by next week, Nov. 27, the week when almost everyone would be out (and a date which has nothing to do with the conference process). You are not worth having any conversations about this, since you are not someone whose humanity (let alone expertise recognized by journalists, governments, scientists, civic organizations such as the electronic frontiers foundation etc) is acknowledged or valued in this company.
Then, you ask for more information. What specific feedback exists? Who is it coming from? Why now? Why not before? Can you go back and forth with anyone? Can you understand what exactly is problematic and what can be changed?
And you are told after a while, that your manager can read you a privileged and confidential document and you’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored. And you’re met with, once again, an order to retract the paper with no engagement whatsoever.
Jeff Dean's response about mitigations being made to minimize the issues she highlighted could be justified too, but again, because the process she described is opaque, no one external to the process has any idea about the merits of the arguments raised.
I didn't mean she is saying she's marginalized because she's a well known researcher. From her email:
"Does that sound like a standard procedure to you or does it just happen to people like me who are constantly dehumanized"
She is not happy with the process and she is taking it personally. She was hired as a lead researcher and manages a big group and project; she has the nerve to say she is constantly dehumanized? 99% of Google employees would want to be in her position.
Google is a big company with many good but also bad processes and policies (I never worked there). Instead of trying to make change, she starts wars. Her thing with Lecun was exactly the same. The funny thing is that she said she hired a feminist lawyer to sue Google a year before! She still kept her job after that. What company does that? She is lucky she had an employer like Google.
I'm not saying that you're saying that she's marginalized because she's a well known researcher.
I think she feels marginalized because the process she's going through is dehumanizing, especially given the topic she wrote about (bias in AI frameworks).
It's possible that 99% of Google employees would want to be in her position, but I think that's a bit high. I imagine even fewer would want to be there after going through what she goes through, especially racism, which sucks.
I can see how you would think she's confrontational. I think she is to some degree, but I don't think it's unfounded.
She's a minority, and as such has to deal with all of the explicit and systemic racism present in the US. She believes there are intrinsic characteristics of the AI models used at Google that perpetuate the racism she has experienced. She wrote a paper about the defects she believes are present in those models, which apparently passed whatever initial review processes exist at Google. Not long after that, someone at Google didn't like her paper and, through HR, used a "review" process (not really a review process, since there's no mechanism for her to discuss or debate anything with the reviewers) to effectively order her to retract the paper under the guise of an anonymous review.
In effect, Google may be perpetuating racism indirectly via problems with their AI models, and directly by silencing her work via a process that sounds like a peer review, but is really just a way for someone to anonymously order her to retract her work.
I'd be angry too if my work implicitly supported frameworks that discriminated against me, and when I submitted valid criticism of those frameworks, my work ordered me to retract it under the guise of some anonymous review process.
I dunno. Consider that when Google found a sexual harassment claim against Andy Rubin to be valid and asked him to leave, they also paid him 90 million bucks when they didn’t have to.
“Google could have fired Mr. Rubin and paid him little to nothing on the way out. Instead, the company handed him a $90 million exit package, paid in installments of about $2 million a month for four years, said two people with knowledge of the terms.”
Let’s assume that’s true. What is worse behavior? Obviously his. Yet he gets paid.
It’s not an apples-to-apples comparison, but if Google were to say to anyone, “goodbye and don’t let the door hit you on the way out”, shouldn’t it be this guy and the other dudes mentioned in the article?
And IMHO the fact that they didn’t is a great example of systemic sexism in action. So it’s not unreasonable to wonder if similar forces played a role in this situation.
It’s not surprising that companies function like this, but that doesn’t make it right, and it doesn’t mean that (even though change is slow) it isn’t possible to change norms.
We don’t know the full details; Google’s firing of her may be valid.
But being so dismissive of her as a representative of marginalized groups, one who faces prejudice (conscious or unconscious) herself, is wrong. IMHO.
Hopefully Google will come to their senses at some point and stop creating roles within the company where the job description is to tie the company's shoelaces together.
This seems like such a self-defeating move? You are already paying "Ethics in AI" researchers purely to whitewash your ever-expanding, inscrutable data gathering and mining efforts; what do you care what they write on some internal listserv? It's all just regulatory cover. Pretend you are the leading force on researching this, never implement any of it.
Why do these parties not agree on the timeline for the review period? One party claims the paper had been going through review for 2 months, and the other says it was submitted for review with only 1 day ...
It seems likely that both sides are mis-characterizing events in pursuit of their own goals -- which to me suggests a general environment where parties are tending not to communicate with the goal of arriving at agreement -- but rather as adversaries.
Hacker News will of course assume that the only reason for an adversarial relationship in this scenario is "cancel culture SJW memes" and other "deemed toxic" trends for which the minority voice is almost always the only guilty party.
I think that level of dismissal is unwarranted, as it seems pretty likely to me that there is a definite gotcha-oriented communication style coming from the Google leadership here -- which very well might reflect an intentionally or unconsciously adopted strategy to create obstacles for voices of disagreement like this.
These things seem to play out like clockwork. Progressive companies hire social justice warriors into vague ethics or policy roles, let them hire their activist friends, shower them with fast track careers and promotions. It's good optics, the progressives like it because it makes them feel better about working at big techs.
Said hires have thus far, as far as I can tell, not produced anything of substantial value to any of these companies. They nevertheless become increasingly emboldened and misinterpret their situation. After all, no-one dares criticise them, they are showered with feel-good publicity. The only thing they need to do is keep their end of the deal.
Until they overstep, get fired, and cause drama by claiming everything that happened to them was due to discrimination, and not because they are toxic and disruptive.
I think this is all a good thing because it may alert some executives to the price for hiring activists into research roles.
Edit: Should add that I am purely talking about the power dynamics here, not research impact or the question of whether AI policy is an interesting subject. It is, I am just observing that it has not driven particular value to companies when compared to the negative publicity these dramas cause.
What is disturbing here, from a researcher's point of view (even within a company!), is the lack of transparency on why a paper is being blocked from publication. The implicit understanding of being a researcher in a company like Google is that, while papers are subject to review prior to submission/publication, that the company will not wholesale reject it and instead will work with you to make modifications and changes. Clearly something different happened here that should be of concern to people considering working at Google Brain.
Citations by like minded publications hardly constitutes impact. Contrary to what most academics believe, impact isn't measured between the ears of individuals in academia.
Uh, what? She's a researcher and this is a Google research division. 1000 citations is a large number for any single paper, even in ML. And yes, even inside Google research, this aspect of impact plays a big role in the evaluation of researchers.
1000 citations is A LOT. I think the point the person you're responding to is making is that it's a self-fulfilling prophecy. Popular people get cited regardless of whether their paper is actually impactful, and it's, unfortunately, often a popularity contest. People are lazy and only read/cite popular papers.
I work in research, and citation count, especially of a single paper, is such a dumb metric for evaluating performance. There are about a million reasons that one paper gets cited over another that don’t have to do with quality.
Does the same paper get 1000+ citations if it were published in 2010? Or even 2015? Probably not.
It's an original research paper (not review) and it was published in 2018. So yes, 1000 citations at the very least conveys that it was noticed and had an impact.
Also, anyone who works in the field of AI fairness and ethics research knows about it. So yeah, it's an important study.
Attention != quality. Especially not the type of quality that keeps one from getting fired after publicly badmouthing one's boss.
As far as I can see, the only way she would keep her job, is if her research was so fundamental to the company that they literally could not function without her. Citations don’t tell you that. My boss is a star researcher in his field with multiple 1000+ citation papers. He still has to act respectfully of his boss and peers.
FAANG hires top diverse ethics researchers for good publicity; they demand meaningful changes that aren't in FAANG's interest and get silenced/fired, resulting in bad publicity.
Vs
FAANG hires top diverse ethics researchers for good publicity; they demand clout/unreasonable changes that aren't in FAANG's interest and get kicked out, resulting in bad publicity.
Unclear which is happening here. I agree it isn't in FAANG's interest to hire these people, but that misses the point.
It seems plausible that a key result of her research (likely including the unnamed paper she was asked to retract) is driving change in policy and social opinion that is apparently considered harmful to Google. So the research does not add value to Google; it benefits others (e.g. particular underprivileged groups) but just imposes extra restrictions and costs on her employer.
Industry research is not academia, where you'd expect to be free to choose the avenues of your research, and you should expect that publication is restricted. The technical improvements you make may be kept under wraps to gain a competitive advantage; and policy agenda work is essentially part of PR strategy: it's accepted if and only if that agenda benefits the strategic interests of the company. Tenure in academia offers some freedom of research (though not absolute even then), but when working for private companies, you don't have that freedom unless you have a specific contract asserting it, and even then you'd be wary of biting the hand that feeds you.
Like, if you want to expose climate change risks, it's not going to work out while employed in an oil company, and if you want to advocate for regulation of ML research while working in a company that would be bound by this regulation, this is certainly something that the management would be expected to care about.
I generally agree with your comment - industrial research is not academia. But (relative) freedom to publish is a very important consideration when attracting top-flight researchers. Especially in ML! I mean, even Apple's ML team publishes.
I'm not sure Google Brain's recruiting pitch includes a description of them blocking publication of a paper you'd worked a long time on (and even given them a heads up about, as Gebru apparently had). Their pitch is almost certainly about being able to do research similar to how you would in academia (except with more money and resources, etc.).
Sure, companies may have interests in some areas and not others, but it is counterproductive and even irritating if the company does not clearly communicate what the areas of interest are and which are not. An order to retract without explanation is rude and counterproductive.
Yes, sure, the current situation is a sign of a misalignment in goals (and likely values) that has gone unaddressed far too long, and it's the responsibility of managers to ensure that this communication happens early. However, it somehow seems that if this situation had been managed properly, the result would simply be that she would have quit or been fired some time earlier.
It is not exactly a wholesale reject. She submits the paper to the conference first and asks for approvals later, which might be common if you are rushing for a deadline. This has the side effect of approvers forcing you to retract the paper (because the conference deadline has passed and you can't make modifications to it).
Can you really call them toxic and disruptive when they're doing exactly the thing they were told they were hired for?
Companies hire activists because they want to look good without actually changing things. Activists go to work for companies because they want to change things. So, naturally, the company has to lie to the activist about their function to do this, and the activist has to be naive about it. If the activist starts doing what they want to do, and not what the company wants them to do, then the company will fire them, and the activist will reach for an explanation.
I'm with you on this one; a company can't hire an activist and expect that person to not commit activism. Of course, a pessimistic person might say "it just makes it easier to fire them for 'just cause'", making the company sound good going into and out of the employment agreement.
"Look how awesome this activist is we're hiring," easily becomes "Look how unreasonable this employee is we're firing."
> Companies hire activists because they want to look good without actually changing things.
I agree with this statement but for the life of me I can’t come up with a satisfactory answer as to why companies don’t want to actually change things. Is it purely institutional inertia? Are the benefits of a more diverse workplace not accepted by the powers that be, so efforts to achieve that are undermined?
It's that companies are made up of a bunch of humans, not omniscient rational agents.
The company doesn't "want" anything. It's a big social machine that is not so much designed as evolved, and only the very core of the company experiences selection pressure.
At the center of every company is a system that prints money. This bit is almost all that really matters, and is the part that experiences selection pressure.
Threaten it, screw with it, or break it and you will be out of a job, because you either killed the company or got thrown out by people who were afraid you would (or, sometimes, who are just greedy and were worried you'd hurt the amount of money they make through the company).
Everything outside of that central system is secondary, and doesn't matter a hoot to the company's survival.
People at the company may love their secondary systems, really believe they matter, and pour sincere effort and careers into them, but when secondary systems come into conflict with primary systems, they Lose. Every. Time.
In the very long term, nothing else can happen, at least in for-profit companies. Competition will destroy kinder companies by being ruthless to secondary systems that the kinder ones keep. This is what Scott Alexander of Slate Star Codex calls the "Malthusian race to the bottom". Amazon and Wal-Mart are locked in a violent struggle of this nature right now.
I think most people are not entirely aware of these dynamics, as my premise that companies are mostly evolved not designed suggests.
So, in all likelihood, there are very sincere people who really want to see the company change for the better recruiting activists and noble researchers.
Everyone likes to feel that they're doing good, so they all applaud the secondary "goodness optimization" systems.
After enough time, though, those activists come into conflict with a primary system, and discover to their shock that they don't have the leverage to change it at all. This is particularly astonishing to them because they've fought to change a lot of secondary systems and had many successes there.
They (rightly, from their frame of reference) refuse to budge, get fired or leave, wash their hands of the place, then start at the next place in one of its secondary systems.
From behind the former Iron Curtain, I can smell the "we need to pay lip service to a powerful dogma, but we think implementing it would be a disaster" approach. This was a normal state of things in the Czechoslovak Socialist Republic.
If Timnit Gebru were a cynical opportunist, she would just hop on the bandwagon, pretend to care and collect her paycheck. But she seems to be a true believer, and true believers won't accept ineffectual groveling. They want to see the real thing. Ergo, collision.
But why do they think implementing this would be a disaster? Seems you are saying the fundamental issue is a lack of buy-in, and I am unclear what this stems from. The research shows diversity adds value to companies. Fewer blind spots, better long term performance.
I think Shakespeare described this with the words "hoist with his own petard".
In this case, it applies to both Google (a social justice activist gets them into a public screaming match, what a surprise) and Ms. Gebru (she played chicken with her employer and lost).
> Said hires have thus far, as far as I can tell, not produced anything of substantial value
"As far as I can tell" is doing a lot of work here.
I can confirm this is untrue. I worked with someone like Gebru (definitely an activist researcher) at a non-FAANG, and AI ethics drove 3 multimillion-dollar deals. It was a better business case than most I built over that time.
Besides, saying they don't drive revenue is a critique of research roles, not Gebru...
Agreed: even if one is cynical (as I am) about the reasons these companies hire AI policy and ethics researchers (far too often, for PR and to deflect criticism of how their products operate), it is clear that many potential consumers and regulators (!) who are inclined to be skeptical or critical will think more positively of the company.
Also, let's be real. How much revenue are the vast majority of AI/ML researchers actually driving in these companies? Sure, someone gets promoted for multiple NeurIPS papers, and it looks great for the company. But most of the time, it doesn't directly impact products and revenue -- in the short-term -- and that's ok. Same deal for AI ethics and policy researchers.
It really is the same story every time, isn't it? They always flame out of the company with some over complicated victim story because the company had the nerve to not 100% buy in to their non-optional radical demands.
The archetype expands beyond this particular instance of company/employee, as well. It's something we've been seeing play itself out across the world culturally and politically.
No, James was just a random engineer who got fired after the VP of Diversity took issue with the memo he shared. Firing people for sharing unpopular opinions is a whole other nest of bees, one that I'm sure Timnit and her ilk would fully support. So he wasn't exactly the person I had in mind. Nor was Timnit fired, and I don't believe Damore was pushing unconditional demands.
But that said I can see some similarities with the whole politics at work landmine issue.
I thought he was fired for insinuating that his female colleagues were hired for reasons other than their merit, not just for sharing a memo with unpopular opinions.
I'm still not sure what any of that has to do with my original comment, he was fired for "breaching the code of conduct". Other than maybe(?) the comment was some attempt at a "but they do it too" sort of reply you find on Reddit.
Judging by your interpretation of James's memo, I'm leaning in that direction. Especially considering that James made multiple direct claims in the memo that he was not suggesting anything of the sort you are alleging (I have a feeling few people actually read the memo).
Regardless, the two stories were very, very different. Nor does his story match my comment's 'archetype' of a certain type of employee who always ends up flaming out of their job, usually by making unconditional demands (unsurprisingly, coming from a bubble/worldview where it's entirely normal to have things which are not open to discussion).
Couldn’t agree more with this comment; I don’t know anything about the person here, but I’ve definitely seen that happen. It exists in security as well. It’s easy to give up and say “they don’t care about security here”.
Timnit Gebru has a BSc, MSc and PhD in electrical engineering all from Stanford. She did actual electrical engineering work at Apple previously. Maybe it's hard to believe but she's not just another SJW.
The idea that companies only need to maximize shareholder value isn't that old; it was only popularized in the '80s. If you restrict creating value to that, then maybe she didn't create any at Google. But maybe Google wants to create software that works equally well for everyone, and raising the issue of AI biases would have helped achieve that goal.
How is this either/or? Highly educated people can still join fringe political clubs.
There is nothing about technical education that immunizes you against radical politics. At the risk of Godwinizing the reply: the engineers who built the first liquid-fuelled rockets in Peenemünde were Nazis, including the guy who was later brought to the US and built all kinds of powerful and ingenious rockets there (Wernher von Braun).
It seems that one of the reasons they have not produced anything of value is because, at least in this case, the work is actively suppressed. The heart of the issue here is that Timnit tried to publish a paper and was told by upper management to retract it with no further feedback.
Maybe if you’re going to hire ethics researchers, you should listen to them?
The response email from management claims that the paper was submitted only one day before the deadline.
Context:
> Timnit co-authored a paper with four fellow Googlers as well as some external collaborators that needed to go through our review process (as is the case with all externally submitted papers). We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission).
And the key point:
> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.
Not sure who approved it for submission: Gebru herself, or someone in management? Then they address the weaknesses of the paper (as part of the review process).
> A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues. We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.
Granted, it's hard to track the timelines and who said what at what time, but there seem to be reasons why the paper would have failed their internal review process.
If the upper management wanted to retract that paper then it seems reasonable to assume that preparing this paper did not produce any value to the company.
Doing work that needs to be actively suppressed is not producing value, at least not to the company. So apparently there's a fundamental misalignment between the work Dr. Gebru wants to do and the work that Google wants her to do. This was made clear with the suppression of the paper, and it escalated as Dr. Gebru apparently (and understandably) did not want to work in a very different direction and (quoting her tweet) "asked for simple conditions first", which were totally unacceptable to the employer ("We cannot agree to #1 and #2 as you are requesting."). So that's it: if neither side is willing to compromise on their primary goals, they can't work together.
This does indicate that apparently listening to ethics researchers they hired is unacceptable to Google. Sad, but not surprising.
> If the upper management wanted to retract that paper then it seems reasonable to assume that preparing this paper did not produce any value to the company.
I don't know about publishing standards at Google but, assuming they are uniform for all researchers, I am pretty sure Google would be unable to retain talent if researchers were asked to prove value to Google for every paper they publish. I think for researchers, this reinforces the oft-dismissed notion that you give up significant academic freedom in exchange for a fat paycheck when you join an industrial research lab.
This can be particularly damaging for Google because many of its research projects don't have an immediate business impact, and it hires top researchers by offering an alternative to the slog of academia.
> Until they overstep, get fired, and cause drama by claiming everything that happened to them was due to discrimination, and not because they are toxic and disruptive.
IMO it’s the _companies_ that strike me as toxic drama queens with their shady tactics and obfuscations, hiring ethics and policy folks to look good without any intent on following through with support or respect.
> Progressive companies hire social justice warriors into vague ethics or policy roles...
> I think this is all a good thing because it may alert some executives to the price for hiring activists into research roles.
Ah, yes. They're going to do their jobs and point out poor ethics and policy in _your_ company. I mean, they were hired for that, but if they could just look down and _pretend_ to care about the space they were hired for, instead of actually caring, that'd be great.
Were they hired for that? Or were they hired because - at best, management don't know what else to do to plug a hole in quotas that exists outside their ability to fix, or at worst, it looks good and - they believe - pacifies the critics?
You're right. If companies want guidance in ethics, they should hire ethicists and not activists. Activists have an ideological pre-commitment that, unless they sell out, will always trump allegiance to their employer.
The negative publicity is a result of Google only providing feedback on her research paper, and forcing its retraction, through an HR mediated and anonymized process. That is toxic.
I don't understand this argument. Why do the providers of the feedback need to be unmasked? So they can be disregarded because of their race or gender or "position of privilege"?
From what I can tell, she doesn't seem to address the points raised at all and instead complains they won't tell her exactly who said what.
I think the process being opaque is more detrimental, because there's no way for anyone outside of the process to know it happens. If it wasn't opaque, then everyone could see when it happens.
Not being able to engage in a debate about the merits of something is also sketchy. It seems like an order being dressed-up as peer feedback.
I mean, from the tweets, what I gather is she wasn't planning on leaving just yet. Saying "hey, you want to resign by [X], but we'd rather you be gone, so let us inform you that you've just been resigned as of now" is essentially firing [1].
> Offering to resign was a huge mistake.
I think this will be good for her in the long term. I'm sure she'll find another research position, and the alternative was betraying her convictions or just the huge burden of constantly being worn down by Google.
From the response, it was more like "I want you to do X, otherwise I'll resign", and Google said "ok, cya". Curious that the author isn't willing to publish the actual email, just the management's response.
I have no doubt she'll land on her feet, just like everyone who quits Google in such circumstances, but if you actually want to have an impact in such a situation, make them fire you instead of giving them an out like this.
No, you know they come from a certain viewpoint that has increasing common cause with people fed up with the excesses of the left.
They may be wrong, but it's a valid coinage that has worryingly become mainstream enough to be both a cliche and indicative of something deeply wrong with our society.
OP's post is a long string of personal attacks against Timnit so this comment is pathetic and hilarious. The entirety of this site is just worshiping at the altar of technosupremacy and comments in this thread have put that reality on full display. I am not surprised by the cowardice on display, but I am surprised by the uniformity.
In a way it’s a good self-regulating mechanism for modern capitalism. Dominant companies slowly become paralysed by activist employees, letting nimbler competitors outflank them in the market.
I'm especially introspective about this line in the post:
> You are not worth having any conversations about this, since you are not someone whose humanity (let alone expertise recognized by journalists, governments, scientists, civic organizations such as the electronic frontiers foundation etc) is acknowledged or valued in this company.
It feels more generally accurate to say that no one's humanity is respected at companies like Google. That isn't to discount or lessen the difficulties minorities and women face in tech companies; those are substantial and significant. But if someone in a diversity role is slighted, the natural place to jump is a diversity motive; it's far more likely that the company just doesn't give a shit about anyone.
And that's true of nearly every company. It was immature for us, as a community, to believe big tech would be any different than any other megacorp. They say they care about diversity, inclusion, and people; they don't. They don't care about any individual. Companies, as a collective intelligence, care about two things: Money and Self-Preservation (in that order).
That being said, I totally support the fight that people like Timnit are fighting; they're fighting for the recognition of humanity, through the lens of diversity and inclusion, a critical component of a much larger fight intrinsic to a capitalistic society. That's the value of a hire like this; it's just the unimaginably unfortunate case that this value hasn't been realized at Google, and probably never will be.
>Said hires have thus far, as far as I can tell, not produced anything of substantial value to any of these companies.
I mean, yes. It's an unfortunate reality that ethics are often at direct odds with the unrelenting pursuit of business value. For a big tech like Google, by caring about people and the impact of your technologies, you will almost surely sacrifice profits in the short term.
Sure - I am just observing this from the power dynamics here. A relatively low-level employee making aggressive and public demands to a large corporation should think carefully about their position. Unless the goal is a scandal for publicity.
- Timnit submits a paper to a conference about environmental costs of large language models like BERT. Note Google invested massively in BERT for their infrastructure this year.
- Right before the external deadline, she submits it for internal review
- Internal review asks for revisions; they want the mitigations they worked on mentioned, at minimum, in the paper.
- She responds to these with an effective "publish or I QUIT" email
- However you put it, she gets terminated.
- Gebru is somehow shocked at this and posts her half on social media
Seeing this develop over the last day, I've grown less empathetic to her side of this affair. She created an unwinnable situation, then responded with an ultimatum.
Let's try to be honest about this. Any executive who receives this letter (and they will certainly be involved due to the liability concern) will immediately conclude that Gebru has to be gotten out as quickly as possible. The venom drips from the letter. It is clear that she is not a salvageable employee who could operate with the business interest of Google in mind.
Sadly, she will undoubtedly come up with some trumped up reason to sue for discrimination and probably be bought off with a settlement.
I remember Timnit from that Twitter thread a while back with LeCun on the subject of bias in a deep learning system.
In that thread Timnit established that she was an "authority" and then, instead of engaging on the merits of the conversation, basically said, "You have to submit to me because I'm the authority" and refused to offer any information of substance ... as though she were some kind of wizard and what she knew was just way too complicated for us mortals to understand.
Meanwhile, LeCun just very calmly engaged on the specific technical details about bias in deep learning systems.
After that, Twitter just piled on LeCun and claimed he was being a "mansplainer" ... as though saying anything technical to a black woman is "mansplaining".
As an engineer what would you seek to research if you wanted to reduce the bias in specific AI systems? You'd figure out how to minimize bias in the algorithms and in the data, and identify any other sources that could introduce bias, and figure out how to put controls on developers and data collectors to prevent bias ... right?
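To make that concrete, here is a minimal sketch of the kind of measurement such an engineer might start from. Everything here is hypothetical: the predictions and group labels are made up, and demographic parity is just one of several standard fairness metrics one might pick.

    # Minimal sketch: quantify one concrete bias metric before trying to fix anything.
    # Data is invented for illustration; real audits use held-out evaluation sets.

    def demographic_parity_gap(predictions, groups):
        """Difference in positive-prediction rates between the groups."""
        by_group = {}
        for pred, group in zip(predictions, groups):
            by_group.setdefault(group, []).append(pred)
        rates = {g: sum(p) / len(p) for g, p in by_group.items()}
        return max(rates.values()) - min(rates.values())

    # Hypothetical model outputs (1 = positive decision) and group labels:
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_gap(preds, groups))  # 0.5: a large gap, worth investigating

Once you can measure the gap, the engineering loop is obvious: change the data or the model, re-run the metric, repeat.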
Ok, now go look at what is actually discussed in AI bias papers.
Sometimes I feel the emperor has no clothes and everyone just pretends like vacuous bullshit is something profound because if you criticize the papers you'll be fired or blacklisted in Silicon Valley.
Speaking as someone who is a senior manager/director, my experience is dismissals of high-profile employees usually have a ton of complexity under the hood, and whatever information is publicly available is often the tip of the iceberg. Out of respect for the folks involved, the full details of the lead-up to the dismissal are almost never released even internally in the company. This information asymmetry makes it relatively easy for a sufficiently vocal dismissed party to control the public narrative even if their internal behavior at the company was extraordinarily problematic.
These dismissal decisions are not made lightly and I would wager there was a long history of events leading up to this. You usually only see the straw that broke the camel's back.
"This is the exact email I received from Megan who reports to Jeff. Who I can't imagine would do this without consulting and clearing with him of course."
I am sure Jeff Dean was made aware of the situation given the PR impact. But it is a stretch to say that Jeff Dean personally decided to fire her against her manager's wishes.
> This is the exact email I received from Megan who reports to Jeff
> Who I can't imagine would do this without consulting and clearing with him of course
The email from Jeff Dean in the article describes one of them: to unmask all of the anonymous feedback provided on her paper and list everyone who was consulted on the decision to prevent publication.
Where she loses me is demanding the names of her reviewers. I never once have known who peer reviewed anything I wrote.
Sounds like she was teeing them up for a Twitter auto-da-fé.
Then you try to engage in a conversation about how this is not acceptable and people start doing the opposite of any sort of self reflection—trying to find scapegoats to blame.
The message was clear: stop! The paper rubbed someone the wrong way. Go rewrite it on your own, without any references to Google, and publish it. I've been in tech for a long time. In nearly every org, turnover was minimal on the engineering side of the house versus sales, who dropped like flies every quarter. The only people who were ever laid off were those who routinely rocked the boat. I'm getting some sense of that here.
Mods: the official title on this news article is "The withering email that got an ethical AI researcher fired at Google"
The current HN story is titled "AI researcher Timnit Gebru resigns from Google". That current title is either inaccurate or is pure editorial from the submitter.
Sounds like an authorship dispute (e.g., against her by one of her junior RAs). In that case an HR complaint would be the correct process to retract the paper. Once it is retracted HR would presumably try to mediate between the disputed authors.
Corporations and their managers care only about making money and maintaining good vibes to produce a positive business environment. Only unions engaging in strikes and outside pressure can force them to change materially. All these internal committees and such are just PR.
By the way, that part about unions and strikes also applies to the joined at the hip government as well.
I think worker cooperatives could also be a pressure for material change industry wide, and specifically in the tech industry I suspect that's more attainable than for workers of many other industries.
That's a crazy idea. There are sometimes unions that have racist members, and sometimes unions that are no good, but anyone paying attention to history has seen that unions are what fight for progressive change - material change that improves the lives of workers, who have always been multi-racial. Often they tackle racist structural problems. For example, in the Sanitation Workers' strike.
> Unions have historically been some of the most racist and discriminatory organizations in the USA.
The USA has been pretty racist and discriminatory, and I think it's disingenuous to single out unions for that history. I mean, who instituted Jim Crow and profited from slavery? White voters, government officials, and private enterprise. Unions were definitely not leading that.
Private enterprise was usually castigated for not being racist enough. Economics was the 'dismal science' because it didn't support racism, segregation, and slavery.
> Private enterprise was usually castigated for not being racist enough. Economics was the 'dismal science' because it didn't support racism, segregation, and slavery.
I had slavery in mind with the private enterprise comment.
But it's not like businesses are actually run by perfectly rational automatons according to textbook economics. They're run by owners and managers who can be just as racist as the surrounding community, and who can very well prioritize prejudice over a little more profit.
Don't forget that creating arbitrary hierarchies among workers is good for business by creating divisions among the working class, which keeps them from banding together effectively against the ownership class.
That history is not entirely correct. Typically, private enterprise has been against segregation. Think about it: it is pretty expensive to have to maintain two sets of facilities for society. The Montgomery bus company was AGAINST segregation when the laws forcing bus segregation were passed. Barry Goldwater donated a ton of money in the early days of the first NAACP chapter in Arizona, in part because segregating his department store would have been expensive for operations.
As for unions, think about it: their major threat is cheap labor from abroad, so the way they have historically been discriminatory has been in lobbying for anti-immigration laws and immigration quotas (I don't know if they still do that as much).
>...Typically private enterprise has been against segregation
Unclear why you were downvoted. Another example of this was Plessy v. Ferguson where the railroad worked with a civil rights group to bring the case:
>...On June 7, 1892, Plessy bought a first-class ticket at the Press Street Depot and boarded a "Whites Only" car of the East Louisiana Railroad in New Orleans, Louisiana, bound for Covington, Louisiana.[11] The railroad company, which had opposed the law on the grounds that it would require the purchase of more railcars, had been previously informed of Plessy's racial lineage, and the intent to challenge the law.
That is irrelevant. Pointing out that unions have effected negative changes or engaged in discrimination internally in the past is not an argument against the idea that they're necessary to force positive change.
"Unions have historically been some of the most racist and discriminatory organizations in the USA."
You're telling much less than half the story, and taking the latter half of the twentieth century in isolation without considering what came before.
U.S. unions with communist and socialist leadership were universally anti-racist in word and very often in deed. Employers often interfered with employee choice of leadership to the benefit of conservative, racist, union leaders as opposed to leftists with anti-racist commitments.
In one of the most flagrant violations of the First Amendment in U.S. history, the Taft-Hartley Act[1] criminalized membership in all manner of left-wing political organization. With the cooperation of union leaders ready to sell out their membership, many of the strongest voices for integration were purged.
Employers and union officials alike had relationships ranging from extortion to open cooperation with organized crime. The relationship between Hoffa Sr.'s Teamsters and the mafia was complex, and notorious in part because it was so exceptional.
> The US has historically been one of the most racist and discriminatory organizations in North America.
That seems like a very US-lib-centric POV where the US is a cartoonish evil-doer.
In my experience Mexico is pretty much just as bad, even more so in different ways too - classism, racism, anti-indigenous sentiment, Malinchismo, rampant feminicide, etc.
You have a point, but the key difference is the US is able to export its product worldwide using the military and corporations. I don't think Mexico has the same level of power.
My point is that these countries aren’t cosmic forces. We’re all humans that instantiate the same fundamental prejudices, instincts, biases, etc. wherever you find us.
I totally agree with that point. The cartoonish depiction was intentional, in response to the cartoonish depiction of unions. The statement is defensible in the sense that I left it wiggly: “one of the most racist orgs” is kinda vague. I’m past my “US out of North America!” phase, and am generally irritated with one-dimensional interpretations of the world, though I could still play a radical leftist on TV.
I would partly agree. We're all in a single global system of social organization and it's hard to see outside that (capitalist realism). Often these divisions are sown intentionally by the upper classes. I don't want to say humans aren't shitty, but I don't think they fundamentally have to be. I think they respond to their material conditions and an appropriate kind of social organization and material support can reduce the bad stuff a lot.
Cash-only and cash-intensive businesses are an incredibly easy and common vehicle for money laundering. The less customer information involved, the more effective. Such businesses are incredibly widespread and vital to organized crime, and their ease of operation in this manner is baked into their DNA.
Sure some of them have, but their foundational democratic and working class structure makes it possible for them to have a progressive effect on society. By contrast, the corporations that they are in conflict with supported the Nazis and planned a coup against FDR because their foundational interests in a regimented, nationalist, power hungry, and elitist society were aligned.
1. Union busting since the New Deal era resulted in one of the most oppressed working classes in the world. Hence, low union enrollment.
2. FDR saved capitalism. I'm not his biggest fan. I'm a socialist.
3. Serious money was offered during the business plot. It didn't come to fruition, but the danger was real. Corporations are the real power in America.
Working class are people that have to sell their labour to get income; capitalist class are those that live off capital income. (Dividends, rent, etc)
If someone is employed in a government organization like the CDC, or as a secretary at the White House, and they don't have capital income, then they are working class.
> This is so unhelpful and dumb. The US has historically been one of the most racist and discriminatory organizations in North America. Does that mean we shouldn’t engage the US? Does that mean the US can’t be less racist? Does that mean the US isn’t a tool we can try to use to make the world a better place?
I'm not sure this makes much sense. "Racism" (I'm guessing you mean things like slavery or de jure discrimination) was practiced concurrently in the US and elsewhere in North America, not to mention many other countries. The US was one of the first countries to have major movements to end slavery (Britain was the first global power to do so, and they preceded the US). I think it would be most accurate to say that the US has historically been both among the most racist and least racist countries, just depending on how you measure.
> Unions provide countervailing power to corporate power. Racist people are racist. Sometimes these can overlap.
Unions, on the other hand... Unions are a monopoly on labor. Monopolies reduce the amount of a good they supply and drive up prices. This means unions depend on certain people leaving the labor market that a union has captured. When a bunch of unions formed in the late 19th century and early 20th centuries, which people did the unions choose to exclude? More or less across the board, they excluded women, immigrants and black people. This was extra convenient, as members of those groups would typically work voluntarily for less pay, so the unions had double incentives to get rid of them. Many early Progressives were quite proud of this and other policies like a minimum wage forcing "inferior" groups to have to work harder to catch up to the white man.
The question we should be asking is why unions, minimum wage, hours restrictions, etc. were useful in eliminating "undesirables" from the labor force 100 years ago but help these same groups today.
For the former, I'd recommend chapter three of Thomas Sowell's Black Rednecks & White Liberals, about the history of slavery.[1]
For the latter, I'd recommend Illiberal Reformers, about the Progressive Era and its devotees.[2] Later Progressives were responsible for the school-to-prison pipeline and militarized police.[3] The Progressive Era has a wide array of interesting literature, including the writings of many of its early exponents (since they tended to be writers or academics).
Compare unions to corporations and I think unions come out ahead on that front. Any large American unions who directly worked with Nazi Germany like IBM did, for instance?
I find it funny how all these commenters have come out of the woodwork to question a highly accomplished and respected AI researcher's track record for their "tone" and the "value" of their research. Unsurprisingly, there's a fixation on railing against SJWs, activism, and political correctness.
None of these comments offer any substance on the main issue in focus: the fact that Google may be actively suppressing the publication of valuable research for reasons other than its academic merit.
I'm a bit unclear about how corporate research is conducted, then. I would have naively assumed that when you undertake research as an employee, that research belongs to your employer, and they have a right to decide whether they want it made public with their name attached or kept secret for their own benefit.
Of course there are all kinds of ethical questions that can arise in that situation, but is that not the normal case in corporate research?
I find it pretty disturbing that tech has this right-wing streak: not only a lack of substance or focus, but an inability to consider the argument from her point of view, thus proving her point about prejudice.
FT article [1] "Google embroiled in row over AI bias research" seems calm/considered. Includes:
> Jeff Dean, Google’s head of AI, defended the decision in an internal email to staff on Thursday, saying the paper “didn’t meet our bar for publication”. He also described Ms Gebru’s departure as a resignation, after Google had refused to agree to unspecified conditions she had set to stay at the company.
> The dispute has threatened to shine a light on Google’s handling of internal AI research that could hurt its business, as well as the company’s long-running difficulties in trying to bring more diversity to its workforce. ...
> The paper looked at the potential bias in large-scale language models, one of the hottest new fields of natural language research. Systems like OpenAI’s GPT-3 and Google’s own system, Bert, attempt to predict the next word in any phrase or sentence — a method that has been used to produce surprisingly effective automated writing, and which Google uses to better understand complex search queries.
> The language models are trained on vast amounts of text, usually drawn from the internet, leading to warnings that they could regurgitate racial and other biases that are contained in the underlying training material.
> “From the outside, it looks like someone at Google decided this was harmful to their interests,” said Emily Bender, a professor of computational linguistics at the University of Washington, who co-authored the paper.
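For anyone outside ML, the "predict the next word" framing in those excerpts can be made concrete with a toy sketch. The corpus below is made up; real systems like GPT-3 (and BERT, which predicts masked-out words rather than strictly the next one) replace these simple counts with billions of learned parameters, but the training signal is conceptually similar.

    from collections import Counter, defaultdict

    # Toy next-word predictor: count which word follows which in a corpus,
    # then predict the most frequent follower.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follower_counts = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follower_counts[current_word][next_word] += 1

    def predict_next(word):
        return follower_counts[word].most_common(1)[0][0]

    print(predict_next("the"))  # "cat": the most common word after "the"

The bias concern in the paper follows directly from this setup: whatever regularities are in the training text, including racial and other biases, are exactly what the model learns to reproduce.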
>A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.
So I don't work in AI, but in every field I have worked in, and in the academic journals I used to work for, all of these are issues that would be addressed during the standard peer review process.
You ignored recent publications? Reviewers will complain. A good journal will have peer reviewers who know which citations are missing.
You left out citations that weaken your conclusion? Reviewers will complain. Et cetera.
I realize people object to Gebru's history on Twitter, but Google exec's defense of her dismissal sounds pretty weak to me.
She seems very passionate about making the world a better place. The only downside to this sort of drive is that it doesn't always align with everyone else's vision of a better world. I don't think she and Google were on the same page, and perhaps both parties will be much happier going their separate ways.
> She seems very passionate about making the world a better place.
Some of the worst people in history who committed the worst atrocities were very passionate about making the world a better place, at least as they defined “better”.
Agreed. I'm simply stating that she is clearly very motivated to propagate her world view and that it doesn't align with her employer's. I am making no comment on the validity or morality of either her position or Google's.
But I would guess that if you added up all the contributions of the people most passionate about making the world a better place, the bad would far outweigh the good.
Too many comments are focused on whether Google was justified in firing her and on the appropriateness of the email, and not enough on the impact and veracity of the allegations.
Much more interesting questions that might get revealed a bit:
Does Google treat minorities appropriately? Is Google covering up ai ethics research? Does Google care about AI ethics/bias?
Google cares about AI ethics because talking about it gives those people, especially higher-ups, an imagined sense of significance and the adrenaline rush of being a savior.
But in reality, AI ethics is just a glorified showroom of diversity/inclusiveness.
It is not a product and it doesn't generate revenue; in the corporate world, nobody cares.
I am not very sympathetic with Timnit Gebru in this affair. At the same time, I am discouraged by the number of comments here that go over the top, fail to recognise nuance and uncertainty, and engage in name-calling (“virtue-signalling SJW” and so forth).
This article actually brings to mind an old question of mine regarding short range infra-red rangefinders. Are these devices discriminatory because they respond poorly to less reflective objects? If white men were not so prevalent in the development of these devices, would we see a greater emphasis on inexpensive sensing solutions that measure return from objects based on a human common denominator like density rather than the reflection of nearly-visible light?
I think this title could be a little more descriptive. The conversation is about DEI, the fact she pointed out more accountability is necessary, the abruptness of it all, etc. I didn't know who Timnit Gebru was and almost didn't click on it because this sounded like some random researcher and I'm sure Google fires people all the time.
Once I was reading it I realized this is quite interesting after all.
What passes for AI ethics research is akin to a "Hiroshima ethics research" that focuses on the carbon footprint of the Enola Gay and the lack of BIPOC representation on the team that assembled the Little Boy bomb.
I don't care if the corporations building a dystopian panopticon are intersectional.
Does anyone have a link to the paper? Jeff Dean's email response mentions mitigations that were made to address the concerns, but didn't go into any detail.
> Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.
“Google Head of AI Jeff Dean, in an internal email, told staff that an internal team determined that Ms. Gebru’s most recent AI research was insufficiently rigorous.”
When Jeff Dean wrote me feedback for a small thing that I made, I was extremely happy to even get an answer from him. I think Timnit doesn't have any idea of the things Jeff and Sanjay wrote together; their code base was always clean, beautiful, and well documented and tested, with lots of benchmark tests. She should have gone with the comments and worked much more on the paper.
@dang/mods: Apart from what you think about the story, the new title "AI researcher Timnit Gebru resigns from Google" is even more biased and incorrect than the old one or the headline from the linked article.
Timnit definitely did not intend to resign "immediately" and, from everything that is known, was fired on the spot (that she mentioned a possible resignation in an email is not important for that distinction). A correct headline would be "AI researcher Timnit Gebru fired by Google after threatening to resign".
It sounds like Google was at odds with the research and the push by Timnit, and she gave them an easy exit with her ultimatum. It is shameful of Google to use a weak excuse like "not meeting protocol" to let go of a great researcher. Looks like the paper hit a raw nerve. When it is explained, even a layman can see the bias these AI models will have. Looks like it is time for a better search engine to emerge and call out these monopolies.
Title is contentious. The article's title says she was fired. The sequence of events was that she said she would resign if her conditions were not met; Google did not meet those conditions and terminated her.
Neither "fired" nor "resigns" seems correct. I feel that she was "forced out" or "let go" (ironically this euphemism is more accurate here). She threatened to leave and they kicked her out.
> I had stopped writing here as you may know, after all the micro and macro aggressions and harassments I received after posting my stories here (and then of course it started being moderated).
That's all I need to know about why she was fired. I'm glad Google despite all the stuff they've done before knows when someone has gone off the SJW deep end.
So odd, why does google employ these researchers? Isn't it obvious it's just a vacuous PR stunt?
What self-respecting researcher gives their name to a corporate entity and expects anything but being used to woke-wash the company?
This is a silly arrangement that puts both sides to shame.
This also sounds like an opportunity for an awesome person to spread their wings now that they're out from under what sounds like a pretty oppressive system.
She claims to be a leading researcher in the field, but has she written any significant papers or research of note?
Her biggest accomplishment seems to be driving Yann LeCun off Twitter with angry accusations. She tried to continue her power trip at her workplace, but her boss called her bluff.
This is a great firing for Google. She won’t be missed.
The first paper listed there has over 1000 citations in 2 years. That’s influence and impact.
Just because you don’t know of her work doesn’t mean that her work has not had an impact and that she’s not well known in her research community. She definitely is.
I’ve read the papers. The ones where she is a (1,2,3)ary author are all glorified rants - well worded, but offering no actionable feedback or new methods.
Your comment betrays your inherent bias, and frankly, it is not a good look.
Everything you've said here is not only falsifiable, but straight up false.
It appears you didn't even try to challenge your own beliefs before spraying them out into the public sphere. You aren't representing yourself in a decent light.
I’m sorry you feel that way. But Timnit is an internet bully with an army of Twitter minions, with the stones to write a ridiculous “How To Apologize” document she tweets at researchers she doesn’t like. I maintain this is an excellent firing if there was such a thing.
Timnit Gebru is very clear that she didn’t resign. The actual headline at the link doesn’t say that she resigned. The moderator editorialization of headlines (and occasionally outright substitution of completely different URLs than the ones that were submitted) on HN has gone way beyond disappointing; it’s outrageous. Shame on you dang.
I'm torn on this. Actually, if you think of yourself as a researcher and then feedback arrives in this roundabout way, that seems pretty weird.
At the same time... "people like me who are constantly dehumanized" - really? I mean maybe, but to me this sounds like hyperbole, and actually, weaponized hyperbole. This is someone who's a respected and presumably very highly paid researcher. Dehumanized?
“People like me,” not referring to her privilege, but to women and people of color, I would guess. Since there are serious culture and systemic biases / prejudices against women and PoC.
Sure, but either she's constantly dehumanized or she isn't. If she's actually rather powerful and important, yet claims that being a black woman automatically gives her a share in the victimhood, that doesn't seem very convincing.
> spilling 'special sauce' information to the public. You never heard of it, and that's a good thing.
Okay, cool. That's not what's going on here. We're talking about AI ethics, and the goal is that our (society's) use of AI meets a high bar for ethics and equity.
In light of Jeff Dean's email, which you can find in the updated original post, it appears that Timnit Gebru threw a public tantrum when one of the many papers she collaborated on was deemed not to meet Google Brain's technical standards. That's the point: adults handle minor setbacks like adults, instead of rushing to social media to vent in public about their 'constant dehumanization'.
This is ideological battle boilerplate of the sort that contradicts the mandate of this site and which users were therefore right to flag:
This is the fundamental victim mindset of the social media woke: any setback is chalked up to identity-based oppression. Reality check: everybody has setbacks, most of them unfair, or at least easily perceived as unfair, and definitely frustrating. Most have the decency to deal with it.
Complaining about life's adversities on social media in hyperbolic race / gender wrapping ('constantly dehumanized') is a recent, widespread phenomenon. The heroine of this story happens to embody this novel behavior, which used to be a whole lot more private and a whole lot less race / gender driven. I'm not sure where 'ideology' enters the picture; it's a reasonably truthful observation from someone who is old enough to remember life before social media.
Ideology enters the picture when people repeat predictable clusters of angry and rigid reactions about complex topics. Things we've heard many times already are off topic here, especially when they're tied to classic flamewar themes. This follows from the definition of curiosity, which is the mandate of this site.
One thing I learn each day on HN is that software practitioners need more ethics lessons. It's sad that it's so hard for some folks to open their minds enough to understand why this isn't just some "rabble rouser" causing drama. Ethics in software, especially AI, are severely lacking in the present day.
Although the language she used in the email may not be what a manager expects, I choose to be sympathetic.
Obviously, this is a person who is very passionate about her research. She’s also a Black woman; given US history, there’s a probability that she was discriminated against during her life, maybe multiple times. If not her, maybe some of her relatives were. Sure, this is hypothetical and does not excuse her attitude, but try to put yourself in her shoes.
It’s easy for us to reason about it, and think about how we would have dealt with it differently. Maybe she was in a bad place, or felt that she was being discriminated against.
She’s also a human being like me and you so — yes — maybe it’s not about racism and discrimination. Maybe she’s just entitled.
Either way, Google’s HR should have done better. It’s easy to let anger or exasperation get the better of you. They should have scheduled a call to discuss everything calmly.