In all of my time at Google AI, I never heard of pubapproval being used for peer review or to critique the scientific rigor of the work. It was never used as a journal, it was an afterthought that folks on my team would usually clear only hours before important deadlines. We like to leave peer review to the conferences/journals' existing process to weed out bad papers; why duplicate that work internally?
I'm disappointed that Jeff has chosen to imply that pubapproval is used to enforce rigour. That is a new use case and not how it has been traditionally used. Pubapproval hasn't been used to silence uncomfortable minority viewpoints until now. If this has changed, it's a very, very new change.
Ultimately it makes the whole Ethical AI department look more like a rubber stamp for Google.
It's one thing for reviewers, even anonymous reviewers, to reject a paper on its merits; it's another, in Timnit's own words, to be told "'it has been decided'" through "a privileged and confidential document to HR" despite clearing the subject matter beforehand. In light of a more general frustration, it's very reasonable for Timnit to escalate the situation by putting her own career on the table, simply to request that people engage with the paper rather than flat-out rejecting it.
And if Jeff wants to respond by immediately cutting ties, and by putting out a document that doesn't even address the situation at hand (edit: much less the underlying issues of unequal treatment for women that Timnit describes)... that's a reflection of his ethics and the ethics of the company that stands behind him.
 For those who haven't read Timnit's memo that Jeff references in the OP, it's worth reading: https://www.platformer.news/p/the-withering-email-that-got-a...
EDIT 2: follow https://twitter.com/timnitGebru to see more of her side of the story. She retweeted https://www.wired.com/story/prominent-ai-ethics-researcher-s... as a good explanation of the situation for laypeople.
>And you are told after a while, that your manager can read you a privileged and confidential document
Emphasis mine. Showing your employee that you don't even trust her with a written copy of the rejection of her paper is not a great way to engender a good working relationship. Note that this pretty clearly seems to have happened before Gebru sent the email that Dean characterized as an ultimatum.
The fact that she issued an ultimatum for the identities of the reviewers suggests that management was correct to have safeguarded them in the first place.
This might sound a bit exaggerated, but all of this is putting Google in a bad light, and on top of that over 500 Googlers have written a letter demanding an explanation. Those folks know more about the internal workings than you and me, so it's telling how many review processes Google has; it's double pressure: first get the internal clearance, then work with the actual reviewers of the conference.
And now Jeff comes up with this explanation: https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQ....
And not once does he mention that the paper had already passed the internal standard review process.
What inference was I supposed to make?
From what I’ve read about Gebru and this situation it doesn’t seem implausible to me that had she identified the reviewers she would have named them in a public venue and characterized their criticisms as being driven by discriminatory bias or an intent to suppress her work. Obviously nobody is going to present criticism, regardless of whether the criticism is legitimate, if that is a possible consequence.
I suspect not, because it's probably a carefully constructed document to fit the pretextual narrative of the constructive termination campaign that it was part of, which was targeting Gebru based not on the particular paper but on race/sex and criticism around the internal culture on those issues.
At least, the fact that all the Google AI people are describing how Dean's characterization of the review process does not comport with the usual practice of that process, and in some ways differs even from the official documented process, suggests very strongly to me that the entire review issue was pretextual and personally targeted, and not about the paper itself at all. The interpretation of what is behind that pretext is a little more speculative, but you don't need a pretextual campaign unless the actual basis is prohibited or even worse for PR than the pretext.
I didn't read that. I read that the person _demanded_ to know who gave a particular piece of critical feedback, or who questioned the approaches, instead of addressing the feedback itself. The person gave an ultimatum to resign if those details were not shared.
The critique here appears to have been fairly minor, too. Failing to cite some recent research is rarely grounds for rejection.
--Nicolas Le Roux
A lawsuit emerged. A settlement followed.
Just because "we made rules for this" doesn't mean the scrutiny should suddenly cease.
Someone farts once - no big deal. Someone farts all the time => they're quitting or I am.
It was only ‘at the last second’ because Gebru chose not to follow the normal procedure.
If the paper genuinely can't be ready until one day before the external deadline, the right thing to do is engage with the reviewers in advance, explain the problem, and provide them with drafts and work in progress, so that they can complete their work a few hours after yours.
What Gebru did is the equivalent of bypassing code review and pushing to prod on Friday afternoon.
This is ethics for Google's AI+Search which is currently undergoing global scrutiny, particularly by Congress and specific politicians who are considering anti-trust measures against Google - and who believe that 'their political party is being treated unfairly'.
It's existential concern for them right now, relating to the possible breakup of the company.
Every public communication on 'ethics' or search results etc. at Google is obviously going to have to be reviewed.
If you're publishing the latest thing on 'AI Random Number Generation' obviously nobody cares about anything other than IP.
The fact is, she must have known this and submitted anyhow - which is in and of itself not so bad, but that there was calamity afterwards ... there is no excuse.
Google was absolutely reasonable - they did not ask to change the nature of the research, but wanted to make sure that information about new, better processes was included.
It's beyond gracious for Google to do this, when really their starting point is 'silence' and they really don't have to do anything at all.
A request for a fairly short review with very basic and reasonable concerns blew up.
This is not a public university, you don't get perfectly tenured academic freedom, if Google wants to put a reasonable subnote in there - and take 2 weeks to do it, it's perfectly fine.
Obviously Google would have kept her if they wanted to, but it's clear they were both looking for a way to part ways and it's probably for the better.
They may be allowed to, but they're fools if they think world-class academics are going to work for them under draconian publishing standards that are not even consistently specified. I'm sure Gebru could get a tenured position at a university of her choice. They're throwing away a lot by choosing to die on this hill.
I suggest it might be 'foolish' to imply that 'a 2 week quick review with minor additions' is anything remotely 'draconian'.
Just the opposite -- this is a siren call to great researchers who want to be highly paid and work on great and novel things, full well knowing Google has a very light review process, won't interfere or suppress.
This makes Google sound like a great place to do research, probably better than most public institutions.
yeah that's super draconian.
ESPECIALLY if other people in your department are claiming no one else has to go through this, just one of the few black women! Damn!
They can't have their cake and eat it too; if they want to hire people to do AI ethics research, and then censor them for doing their job, then they should get called out for ethics-washing, which is exactly what's happening.
I don't know why so many people love to defend power, especially when that power is not benevolent.
This is false.
The requirement for a fairly light review process, and asking for more, truthful, factual and contextualizing information to be disclosed is not censorship.
Nobody is suppressing research, or even asking that specific opinions or results be changed.
The commenter above used the term 'draconian' to refer to this process, which is just superlatively false.
"I don't know why so many people love to defend power, especially when that power is not benevolent."
How is this power not benevolent exactly?
What's 'hard to understand' is the petulance and irreverence people have for the offices and responsibilities they hold, and the lack of professionalism in their conduct.
This should have been an easy issue to address by any mature researcher who cared about working with others to achieve positive outcomes - instead of trying to force their opinion on an organization, or engender public support for their career.
There are plenty of reasonable voices at the table for 'Ethics in AI'; nobody has a magic wand in this equation.
It doesn't imply that if a manager or VP or CEO concedes the point at issue once, then the person can now go around "putting their badge on the table" and getting their own way over and over again on other issues; probably making a habit of issuing ultimatums (ultimata?) will get you fired PDQ.
I guess it only works if you have a good enough reputation that people actually want you to stick around.
In one case someone succeeded. In the other case, the person resigned.
An ultimatum like this is an opportunity for a responsible manager to talk and rethink, but it seems like Google jumped at the opportunity to double-down on their mistake and then send out cowardly emails claiming the employee had actually resigned.
If I were to apply a simplistic rule here, I would actually invert it - if you get to a point where you are sufficiently undervalued that you feel the need to issue an ultimatum, you basically have to resign.
No, an ultimatum is a choice between two options; she offered Google a choice, and they selected. Expecting them to try to carve out a 'third way' is just unrealistic.
I agree that you can frame this many ways; she could have portrayed this as her resigning in protest, instead of blaming Google for being vindictive.
It is fine if you think that, but accept that you are the weak one here. If you want to err on the side of keeping your job it's fine, but don't pretend you didn't make a trade off.
You do not have to accept antisocial behavior, but a good manager would have handled this and it would never have reached this point, public or otherwise. This whole episode is a failure of management top to bottom.
It does seem like she judged the situation incorrectly, as she is now complaining, not gloating.
The fact that this has over-flowed into the public sphere is a failure.
If they had handled the situation correctly, it would have been sorted internally. The fact that the person in question spilled the beans at all is proof of that.
Managing people is a skill, and being good at computer science does not make you a good manager. They should know that complaining to social media is an option that someone might take and they should consider that when dealing with these issues.
The fact that we are here discussing anything at all proves the above. It isn't 1995: if someone feels slighted for whatever reason, expect it to show up on Twitter, true or not. You don't want to be chasing the narrative with a potentially one-sided Google doc. No one is giving the megacorp the benefit of the doubt in 2020, which means it is bad PR either way.
"a good manager would have handled this and it would never have reached this point, public or otherwise. This whole episode is failure of management top to bottom."
My point is that I do not know whether this situation could have been handled better. We don't know enough to judge whether this could have been sorted out neatly. You seem to think that a clean resolution was possible, and you might be right, or you might be wrong.
I consider this being discussed in the public sphere a failure regardless of the situation, as it looks bad on the company no matter what.
If a person feels their only way out is to appeal to the mob then I think the people doing the management have made a misstep. If that person has a history of appealing to the mob then it is still a misstep as that should have been considered when dealing with the issue.
Perhaps they did the calculus and this is the best result, but looking in, it doesn't feel like it.
Let's follow the timeline and discover a root cause:
1. Anonymous feedback is given through HR about a research paper in AI Ethics to be published in an external venue.
2. Her manager schedules a meeting where she is told: "it has been decided that you need to retract this paper by next week...", without context and without a chance to confront her critics.
3. She gives her boss an ultimatum: she can't continue to work there under conditions that limit her freedom to speak and research. Google decides to accept.
A. People can just go to HR with criticisms of a research paper, apparently with the intent to sabotage the authors, and HR is apparently fine with being used like this. Or possibly a manager convinced HR that OKRs trump AI Ethics.
B. They wanted her to say certain things in an academic forum, which didn't appear to be IP/trade-secret related but for some other reason, which they refused to disclose. This is in an environment of ethics where papers might become guidelines.
C. They're not interested in fixing the issues she brought up, because they allowed #1 and #2 to happen above.
Should HR be involved in "fixing" a paper in AI ethics? Probably not. Just like you wouldn't take your car to HR to get it repaired. They simply don't have the knowledge to do so.
Then Jeff Dean probably has $20 to $30 million wrapped up in Google, so he's going to take their side on the matter publicly, unfortunately. Privately he may have been cussing out HR for forcing him into the situation. We don't know.
Ultimatums shouldn't be a frequent occurrence but they are a part of business relationships. It seems a bit unfair for an employer to treat an employee ultimatum as a fireable offense when company policies are sometimes the equivalent.
Employees sometimes decide that an employer ultimatum is offensive and quit sometimes too. But I don't think it is nor should be a set-in-stone rule that an employee that issues an ultimatum should be terminated.
But you're claiming that for a company there shouldn't be a choice, it should just lead to termination.
Accepting a resignation achieves three separate objectives:
-resolves the ultimatum
-discourages future ultimatums
-preserves the status quo ante
In every employment contract there is a balance between things an employee is willing to do and things an employer is willing to provide in exchange. If my boss said they wouldn't pay me anymore, I would rightfully respond with an "ultimatum" of "pay me or I quit". That's the ultimatum they respond to every day by paying me; they look at the balance of things I offer, consider what I provide to the company to be adequate, and then give me the money I ask for. The same is true for any ultimatum: you come to the table with one final negotiation, the negotiation of "do you value me? Then you must provide me this". It's an entirely transactional exchange.
Now, ultimatums are generally to be discouraged, not because they undermine some sort of authority, but because they are a sign that negotiations have broken down on both sides. As a manager, your goal should be to try to reach a compromise far before that point: not only does it hurt your relationship if you don't, even when the ultimatum is "successful" from the employee's point of view, but by letting a conflict reach an ultimatum point you're exposing yourself to significant risk and often poor deals. The way to handle an ultimatum is to forestall "pay me x or I quit" with "I'll pay you almost x if you show good performance for the next three months". If you are at the point where the argument is "I'm going to quit", then yes, you may have to carry through with the termination if you think what they provide is less valuable than what they want from you, but you should really be looking at what you did to get to that point instead.
Yeah, and whether intended or not, a "fire anyone who gives you an ultimatum" strategy absolutely creates that vibe.
If you have a top-down management style where your employees do not question anything you say, that might be the way to go, but I find in the software business what you want is the opposite. You want all the criticism and feedback you can get from your skilled and knowledgeable workforce. If you don't get that, you're wasting the majority of the money in their paycheck.
The irony here is that if you have a manager firing someone who presents an ultimatum, then that in itself is effectively an ultimatum that you are supporting. ;-)
That of course also doesn't mean you accede to every ultimatum. I mean, if your business plan is to do X, you want employees that will help you to do X. If they are getting in the way of X, then you need different employees anyway. Usually though, you and they have already worked out that they want to work with you to help you do X before you hire them.
So the main reason you get ultimatums is because they didn't anticipate and do not like the approach you are taking to get to X. Assuming they are smart and have good judgement (and again, if not, why did you hire them? why are you paying them?), there's a very good chance that there are some problems with your approach and you'd be wise to at least consider that possibility and their perspective. They may be trying to save you from making a terrible mistake, and feel like it is incumbent on them to stop working for you because allowing you to proceed would be working against that goal you hired them for.
It's not uncommon for two people to have very different perspectives on what helps to achieve a company's objective. It's also not uncommon for one of those people to be horribly, horribly wrong. Sure, if you've got an employee who has presented an ultimatum based on horribly wrong judgement, it may make no sense be their employer.
I'll tell you though... just because they're a subordinate doesn't automatically mean they are the one exercising horrible judgement... and the farther you go up the food chain, the more severe the consequences of supporting someone's horrible misjudgement. So having a policy of summarily firing subordinates who present ultimatums both creates the wrong environment for getting the best out of your team and is terribly harmful for the leadership of your organization.
Usually the goal of management is to employ explicit, stop-gap communication to avoid having to get to the explicit question of continued employment, because the company has already made a commitment to that employment by hiring the employee in the first place. Obviously, most employees want to continue on, also. So it seems nonsensical to view anything save an explicit declaration of resignation as the same. "I would like to discuss what would cause me to resign," is not a declaration of resignation, and the people reading this situation in good faith understand that.
You might think that's exhausting, I certainly do, but that's what we're dealing with here.
Why? Is there something about CRT that threatens your means and way of living, or is it forcing a type of introspection about what minorities have and continue to go through in various forms and machinations you'd rather not entertain?
Anyways I was just explaining to OP that the situation was already about that stuff before any response was considered.
That tactic is not a problem inherent to CRT, that tactic is a problem with how people deploy and weaponize CRT.
In the absence of anything else, yes, people are going to make assumptions.
Edit: I agree that one could incorporate some CRT into their worldview without becoming insufferable, in fact I think lots of normal people have without calling it that. That said, there are a lot of true believers out there, that's who I was talking about.
No, I'm not doing that right now, I am trying to understand your framing of CRT and where your issues lie with it. It would seem those issues lie with how certain people argue CRT, not CRT itself.
Thank you for clarifying that.
They are just that, objections and responses. Which you are free to entertain or not, attempt to unpack and understand or not, respond to with better critiques, objections, observations and rebuttals of your own...or not. But you're not being prevented from making them by anyone or anything short of I suppose committing some sort of crime in order to make that point (that's just an extreme example to stretch the metaphor).
This is the form and function of debate, it is a crucible that boils away impurities of all manner and dialect (for anyone who may be thinking they've heard this one before, yes, I absolutely stole this from an episode of Star Trek).
If you feel you are being stopped from doing any of this, might I ask why and how you have been completely prevented and kept from expressing yourself?
Elite coastal white: Absolutely not threatened. Beneficiary of the system and knows how to navigate all of the social codes.
Less elite or poor white: Takes the bullet that was aimed at the elite white.
Asian: Scores way too high on tests for their % of the population and this is a problem for a worldview that cares about what % of college slots go to which races
Professional class black or latin: Does great, huge beneficiary of CRT activism
Working class black or latin: Invisible and accidentally hurt despite good intentions. CRT proponents tried to pass a referendum legalizing racial discrimination in hiring in California this year, which would have helped professional class POC and probably hurt this class. Fortunately it failed.
EDIT: I removed some cattiness above. Not trying to pull the rug out from under you but I'm rate-limited and wanted to focus on my actual points. I don't think I'm a caricature of 'unwoke' person who never thought about or dealt with these things before.
Is that what you truly believe I did above? That I am labeling you, and think you to be a white supremacist?
If so then allow me to be clear for a moment: I have literally no way of knowing if you're a white supremacist. I have no way of knowing if you're not actually an armada of ants collectively working to actuate the keys of a mechanical keyboard or a Boltzmann brain sending these messages through some strange and baffling form of quantum entanglement. What I am trying to expose is the very real reality that these are uncomfortable conversations, that's just intrinsic to this topic and the climate we are in.
This is fine. It is fine to admit being uncomfortable trying to process where we are, how we got here, and how we get out of it.
But one has to start by looking that beast in the face first in order to reckon with it. For some, that discomfort gets unwittingly channeled into anger and frustration, and they might not even know why or even realize it, but that can be focused, and turned into knowledge and wisdom on the issues. One's just gotta start, like I said: see it for what it is, and work from there.
If you took that to be me associating you with white supremacy, I'll try to find other ways of seeking out clarity from people next time.
Looked like it to me, an uninvolved curious third party.
> the very real reality that these are uncomfortable conversations
Huh, that's what the poster that you replied to said. Weird that you got all up in their grill about it.
Let me just attempt to paraphrase the initiating series of comments, seeking only to illustrate how your comments looked to me, not attempting to do justice to the full meaning of each commenter.
nickff > advice on how the manager should do power dynamics
saagarjha > "strange power dynamic" [followed by lots of savvy commentary, irrelevant to my point here]
free_rms > CRT is all about power dynamics. That's the point. I find it exhausting, but me being exhausted is not the point, the point is that it's about power. [bit of a reductionist take on the parent comment, but probably correct?]
dvtrn > Are you exhausted because, as a beneficiary of oppression, you'd rather the oppression continue? Or is it because you're just too lazy to care about fairness?
free_rms > See, I dunno where you got that I'm an oppressor, where did this threat come from?
Yikes! And free_rms didn't even say that your implication that she/he is an oppressor was wrong, nor were they defensive about it. They just said that it's exhausting! I mean, it would be! Who would not be exhausted by that, whether or not it's a fair accusation!
I mean... now, at the bottom of this thread, you imply that you were seeking to know more, and not trying to imply that the exhaustion is evidence of being a bad person. Okay, I believe you, and nothing in your first comment belies that reading (though some of the intervening comments, hmn not so sure). But I don't think it's the natural read of what you said, at least it wasn't the natural read for me.
Me personally, btw, I dunno what CRT is, so in my privileged ignorance (enabled, of course, by my general white privilege) I'm immune to the exhaustion. I read this whole thread to see if I could learn something useful. Not so far, though I don't regret the time spent.
I dunno, is this helpful? Maybe I'm not being helpful.
Not saying it's wrong or right, but couldn't she make the exactly symmetric argument?
Mostly it's an opportunity to let the staffer know that such ultimatums are unacceptable, and that, taken literally on her own terms, she could be called out and let go. Which is what happened.
It's very doubtful that, had they wanted to keep her, they couldn't have found terms.
Surely the manager would have bent, indicated the wording was a little bit strong, and found a way forward.
It seems clear they were wavering, she crossed a line and offered them the path out and they took it.
If there were material issues being covered up, or material suppression of information, this story would look completely different; but there wasn't.
This was the right thing to do by Google in a tricky situation.
I told my company that I was fed up and the only way I would continue in that situation is if I was given a sizable raise because I wasn't paid enough to put up with him. They gave me the raise rather than having me walk. I worked there for several more years after that.
I never issued an ultimatum before or since. Maybe there are people issuing threats all the time, but it seems to me that people usually do that when they're frustrated but want to stay at the company. For IT folks with desirable skills it's far easier to just get another job.
A leader is not a friend or an ally, they are just a leader; the leader can be friendly and supportive, but they are still just a leader.
I hope you've said it with the intention to make a point about how dysfunctional certain managers can become, rather than illustrating a belief. If you can't lead other human beings without having control over them, then please hang up your leadership hat and go do something else for a living.
Given that, at least in Timnit's narrative, the email included a request to discuss the issue in person when she returned from vacation, I don't think that the "ultimatum" characterization is uncontroversially accurate for the immediate case.
There are tons of issues I wouldn't compromise on, and I'd rather leave the company than have to. Does that mean I'll be fired the very second these subjects become remotely relevant, or if I make clear where my boundaries are?
"X is the lowest price I can sell for, take it or leave it" "How about X-10?" "Done"
"I want to read 5 books!" "You can either read one book before bedtime or go to bed now" "How about 2 books?" "Okay, but then straight to bed!"
Putting the ultimatum in e-mail form really raises the stakes, because there may be other people CC-ed or BCC-ed, and any response could later be weaponized.
If the relationship was already troubled, anything like an ultimatum is an opportunity for the manager to be rid of all their troubles.
The level of the threat also comes into play, and more severe threats increase the risk/tension. If the guardian had threatened to disown the child rather than send them to bed, we would read the situation differently.
The way this was handled doesn't make any of the involved parties look good.
The narratives from the participants on the actual communication differ on key points relevant to evaluating whether it was really an ultimatum.
"Tuesday Gebru emailed back offering a deal: If she received a full explanation of what happened, and the research team met with management to agree on a process for fair handling of future research, she would remove her name from the paper. If not, she would arrange to depart the company at a later date, leaving her free to publish the paper without the company’s affiliation."
I read that as an explicit ultimatum.
Sometimes the person making an ultimatum is right, sometimes they're wrong. It shouldn't be as adversarial as viewing yielding as weak. Insisting on always "winning" is in my view the weak position.
Additionally, firing someone is not always legal in some countries, even after an ultimatum, assuming they pick the wording of their ultimatum carefully (e.g. "I may very well resign if/unless [desired condition]") to retain control over whether they will later finalize their conditional decision to resign.
As one example, in Quebec, employees who don't qualify as "senior management" and who have been employed at a company there for an uninterrupted period of at least 2 years cannot legally be fired without what the law considers good cause, period, not even if the company gives them a notice period or pay in lieu. Any alleged noncompliance or misconduct that falls short of the most extreme examples must be first dealt with a graduated process of progressively stronger discipline, and it must be possible for someone to recover from that instead of having the outcome of the process as a foregone conclusion. There is a government tribunal to which an aggrieved party can appeal if they aren't happy with the outcome, with the power to order remedies including back pay and even reinstatement.
Similar things are found in many European countries, though certainly not all.
Of course, ultimatums with more definitive wording like "I resign if/unless [condition that the listener has control over]" -- note the absence of hesitating words like "may very well" -- can irreversibly become an effective resignation worldwide, based on choice of the listener on whether to satisfy or reject the condition.
This doesn't seem logical to me. I don't doubt there are indeed scenarios where this is true, but as an absolute, this doesn't resemble my real world experience at all. It seems like kind of the opposite of how human interaction should work.
I think that statement presumes some degree of unreasonableness. Honestly, I value having employees that have principles and clear boundaries, if for no other reason than I can rest assured that when I'm not observing/involved, those principles and clear boundaries are still there. Now, if those principles are, "I won't accept that paying me gives you any kind of authority over what I do", then you know that's not going to work out for anyone involved. However, if it is something like, "You can't pay me enough to do X", and I have no desire for them to do X, I'm really okay with that.
Further, she claims that initially she was not allowed to even see the contents of the criticisms, only that the paper needed to be withdrawn.
Let's say you were working on a feature. At the 11th hour, just before it hit production, you get an email telling you to revert everything and scrap the release. Apparently somebody in the company thought it had problems but they won't tell you the problems. Then after prying you do get to see the criticisms and they look like ordinary stuff that is easily addressed in code review rather than fundamental issues. They still won't tell you who made the critiques. Would you be upset?
From my understanding, this paper had already passed peer review and been accepted. Google management then decided to block the publication using the IP review process.
Timnit shared the paper a day before the publication deadline, i.e., no time for internal review, and someone with a fat finger apparently approved it for submission without the required review.
1) Is the review protocol that requires a two-week review period a peer review process intended to maintain scientific rigor, or an internal controls process intended to prevent unwanted disclosure of trade secrets, PII, etc.?
Repeating the comment at the very top of the thread:
> Maybe different teams are different, but on my previous team within Google AI, we thought the goal of google's pubapproval process was to ensure that internal company IP (eg. details about datasets, details about google compute infra) does not leak to the public, and maybe to shield Google from liability. Nothing more.
If it's not a scientific peer review process, arguments about why scientific peer review is generally anonymous are irrelevant, just like arguments about why, say, code review is generally not anonymous would also be irrelevant. It's a different kind of review process from both of those.
2) In practice, is the two-week review period actually expected / enforced? Other Googlers, including people in her organization, are saying that the two week requirement is a guideline, not a hard rule, and submissions on short notice are regularly accepted without complaint:
(I don't work for Google, but I work for another very IP-leak-sensitive employer that does ML stuff, and we have a two-week review period on publications. The two-week rule exists for the purpose of not causing last-minute work for people, but if you miss it, it's totally permissible to bug folks to get it approved, and if they do, it's not considered "someone with a fat finger." It certainly doesn't exist for the purpose of peer review - it's assumed that the venue you're submitting to will do review, and I think everyone understands that someone from your own employer isn't going to be a fair peer reviewer anyway. There is a "technical reviewer" of your choice, but basically they just make sure you're not embarrassing yourself and the company, and there's no requirement for how deeply they review. I think I've gone through the process twice and missed the deadlines both times.)
So, if this "rule" exists on paper, but only exists in practice for her, then this is the textbook definition of unfairness.
https://twitter.com/mena_gonzalo/status/1335066989191106561 (an intern!)
(... Also, comparing this rule to our overpoliced society where everyone commits some sort of crime and the police just choose who they go after kind of reinforces my point about unfairness. Sure, it may have been strategically wrong for her to not do everything by the book, but if so, it's very interesting that the in-house ethicist has to play by all the rules to not get fired and the practitioners can safely skip them.)
Anyway, the culpability for rubber-stamping this paper is on the person who rubber-stamped it, given that short approvals are commonplace. Saying "You should have known that this approval didn't really count, so it's your fault for going through the normal process and not realizing it should have been abnormal" is nonsense. That's literally the job of the reviewer, and if the reviewer can't do that, someone else needs to fulfill that role. At worst, if they told her on day one "Your job is publishing high-impact papers with fundamental conflicts of interest with the company, so everything needs detailed review from X in addition to the usual process," that would be different. But they didn't. Better yet, they could have flagged her in the publication review system as needing extra review. There were lots of options available to Google if they weren't trying to make up rules after the fact to censor a researcher.
And in any case, she gave advance courtesy notice of the planned work: https://twitter.com/timnitgebru/status/1335018694913699840 Someone could have said something then. They didn't.
This is key, and I don't see it being mentioned as much in other comments. It was approved.
This is essentially false. The author submitted the paper the day before the publication deadline; given that there was at least some form of standard review, the actions by Google cannot be construed as a 'roadblock'.
There was no 'roadblocking', and the review was certainly not 'unexpected'.
The constant misrepresentation of the facts in this situation is harmful for those ostensibly wanting to do good.
"This is why understanding who raised these concerns is important."
Since there was no roadblock - this answer makes no sense.
The more likely answer is that the researcher wanted a named list of what she perceived as her personal enemies.
"Failing to cite some recent research is rarely grounds for rejection."
There doesn't seem to be any reasonable cause for major concern in this whole issue - it seems the company raised some points and she could have managed them reasonably in professional terms.
Google stepped in and changed the procedure for this paper, because they wanted to spike it because they were embarrassed by it.
Asking for the identity of people that have the authority to ask for a withdrawal of your research without stating their issues with it seems understandable, if excessive.
But maybe I misunderstood something.
> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted. A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. [...] We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.
There is no statement at all of how to reconcile "approved for submission" with "didn’t meet our bar for publication", which probably means that there is no reconciliation, and the cancellation was done outside normal process.
I wonder if he is trying to say that there was a process error, it was approved without review (in error), she sent it out, and then they came back to her and said "wait, no, you can't publish that after all"
If people have an expectation of Google to turn out academically pure research then I certainly respect the position and encourage it in reality. But thinking like that means life is going to contain a bunch of surprises that really shouldn't be surprising. Google is simply not going to employ people who they recognise as undermining the success of Google. It is not feasible to run a company that way; roughly speaking companies can choose between ruthlessness and bankruptcy. If you expect tolerance of radicals and debate, look to the universities.
The possibly shady part is that they could be suppressing evidence that they broke the law, but, like your said, they can decide how to run their own business. I'm not even sure if the researcher would be a whistleblower if they didn't intend to report something illegal.
To make matters worse, in this case at least, the law or laws they may be breaking were established to protect a class of people the researcher is a member of.
Universities used to tolerate radicals and debate. But going by the copious media reports of the last few years, that doesn't seem to be how they operate any more.
You can't have it both ways.
And then when there was backlash they "promised to do better" and Sundar Pichai came out with some "principles" that the company would follow for AI.
Another 1-2 years later and here we are again - this just proves that whatever "AI Ethics Board" they might set-up, it will end up being a sham, because they'd never allow that board to stop them from using AI however they like if it's in the interest of the company's profit growth.
If we want real AI oversight we need to demand it from outside nonprofits or even government agencies (why not both?!) - and there should be zero affiliation between the company being monitored and those organizations/agencies.
It might be that they follow ethics because the appearance to do so has a monetary public relations value. It always comes down to that, and for publicly traded companies that set up things like an "AI Ethics Board" it is always for show since the incentives don't allow for anything else.
At the end of the day someones compensation depends on these things and you can't be hurting the bottom line.
The founders have a controlling stake.
I get the impression that she wrote a hit piece on Google and published it with Google's name. For me, it's correct that they demand a retraction. It's simply unprofessional to critique your company for something while not mentioning the work they're doing to combat it.
It would seem deeply problematic for an AI ethics researcher to have the expectation that when they critique their own employer, they should mention all their work to ameliorate bias or ethics problems, but to not have a similar expectation when they're critiquing other companies. Is the point of having an ethics researcher to expand our understanding of ethical issues, or merely to aid in PR?
If a university administrator were to attempt to tell a PI not to submit a paper critical of work from another lab at the same institution, I think that would be judged as a shocking overreach. But for Google, we're not even in agreement that this behavior is a problem. It's unfortunate that we expect so little from corporations, even if those corporations are some of the main drivers of research in a field.
But the parent comment to which I responded, and which I quoted, specifically said the problem was to criticize google while working for google, and seemed to approve that this should be judged unacceptable.
> I get the impression that she wrote a hit piece on Google and published with Google's name. For me, it's correct they demand a retraction.
The "regardless of who's doing the work" part is key, and not all participants in this conversation are on the same page about it.
I don't see anything contradicting, but merely failing to add the "regardless" footer. Admittedly, sometimes I read too quickly.
Seriously. Choosing who to cite is a discussion and a battle. It's not done thoughtlessly.
Ethics researcher publishes piece critical of company's activities
Company is shocked as to how this could happen.
Company calls it out.
Researcher realizes the limits of her narrative setting powers.
Non-vocal majority is happy to see runaway activism being curbed in corporate settings. Slow march through institutions is slowed down for a day.
Reasons for not citing research, especially recent research, range from lack of relevance (even though environmental improvements could have been made, they were not actually deployed, so the real-world impact wasn't lessened by them at all) to simply not having known about it. The correct reviewer response to this would be an "accept with corrections" or a "revise and resubmit"; retraction is overboard. Moreover, that's the role of a conference reviewer, not the employer. Once your employer starts interfering with what you can and can't publish, it's time to find a new affiliation indeed.
The fundamental difference between activism and research is that activism sets the agenda ahead of time while research explores the knowledge space and reports findings. One helps us incrementally make better sense of the world, the other wants us to narrow on particular facts while omitting other relevant facts in the name of advancing a cause.
> Reasons for not citing research, especially recent research, range from lack of relevance ... To simply not having known about it
The omitted research is clearly relevant. Deciding what is relevant and what is not is precisely what narrative warfare is. If they did not know about the adjacent research, then they would simply be incompetent researchers (which I highly doubt is the case).
> The correct reviewer response to this would be an "accept with corrections" to "revise and resubmit"; retraction is overboard.
There is no retraction because there was no external publication. Jeff Dean states that the reviewer response you describe was given, but was ignored by the approver.
> Once your employer starts interfering with what you can and can't publish, it's time to find a new affiliation indeed.
Corporate researchers are still employees and are bound by a job description. Independent of the content of the research, it is also entirely within the rights of the employer to set a certain bar of quality being cleared. In this case it seems like Google didn't want affiliation with this paper, not the other way around.
On the contrary, I totally agree with you on this: a researcher needs to pick a particular part of the combinatorially explosive knowledge space to explore, and in doing so they get to be opinionated about which hypothesis they want to prove. What they can't do, however, is ignore opposing research that conflicts with their propositions. This is precisely what Jeff Dean is talking about in his second letter: you need opponent processing to overcome self-deception and bullshitting.
You can't have opponent processing when you omit relevant research, try to steamroll the review process, throw a tantrum when your paper is found lacking, and ask names to further your agenda through social engineering.
It is not onto me to prove if and why I disagree with the conclusions, it is onto the paper to prove that their assumptions and methods were sound to begin with, if they want their conclusions to be taken seriously. And they were not.
> but this renders any future research from them irrevocably tainted -- from now on, it's no more than Google PR.
On the contrary, this move increases trust in Google research and it would have lessened if they were to buckle under activist strong-arming.
If folks think this was a sign of broken research machinery, they are free to ignore all future Google research, at their own risk for competitive disadvantage.
I wouldn't be that sure. I know the "headline narrative" is invested in painting Google as evil (and they don't tend to be that wrong in many instances), but the actual sentiment among the general talent pool is very divided in this instance. There is a sizeable percentage who are relieved to see activist pressure being resisted in a corporation and would be inclined to make a pick on that basis. We all know corporate pressure is not the only threat to academic/intellectual freedom.
I wrote a paper recently in which I omitted most tangentially-related papers in mathematical physics, as they would not be mathematically accessible to the audience in question and also do not address the questions posed in the paper. A mathematical physicist wrote to me and was grumpy about it. Fine, I added his name one more time to make him happy. That's the reality of research papers.
It's clear that Google didn't want to be affiliated with this paper. And it's clear that it's time for Gebru to find a place with intellectual freedom, so her work can be judged on its merits.
Agreed, but there's obviously a difference of perspective about what transpired here and I don't think any of us knows with certainty what the truth is. Finding ethics problems is grand and all, but framing a narrative that misrepresents those problems is highly problematic, particularly if they are in a role that makes them the leading ethical voice of the company.
Hopefully the paper gets leaked so we can judge for ourselves.
Having said that, if Jeff were to make public the paper, criticisms of the paper, and improvements made to address the problems described in the paper, that could go a long way towards clearing the air.
The past three years of work in natural language processing have been characterized by the development and deployment of ever larger language models, especially for English. GPT-2, GPT-3, BERT and its variants have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pre-trained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We end with recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.
Anyone have the actual paper?
I have plenty of experience in Natural Language Processing (NLP), but I am not an expert in ethics and bias – although I have read my fair share of papers on it. To me, the abstract comes across as modest, very reasonable, and exploring questions highly relevant to the community as a whole. Sure, if I were to review it I would be picky about any strong empirical claims, since I suspect it would be difficult to demonstrate conclusively some aspects of what they hint at in the abstract. But as a position paper it looks better than plenty of work already published at top-tier NLP venues, and I doubt it would fail to be accepted on academic merits.
Still, to echo the parent, does anyone have the paper in its entirety?
Publishing a transparently one-sided paper in Google's name would be a problem, not because of the side it picks, but because it suggests the researchers are too ideologically motivated to see the problem clearly.
Ironically, it indicates systemic bias on the part of the researchers who are explicitly trying to eliminate systemic bias. That's just a bit too relevant to ignore.
They didn't give her a chance to address those factors at first.
Later they had a manager read the confidential feedback on the paper in question, but still didn't let her read it herself.
If that feedback was only saying that the paper lacked relevant new context and advancements, why were they being so cagey about it? Something doesn't smell right about that.
In paper reviews you can often see reviewers asking the authors to rewrite, clarify, add extra experiments, add missing citations. It's all normal.
Representing a more truthful reality is not 'softening'.
It's only 'softening' for those who have an already accepted, extremist view, and for whom any evidence to the contrary doesn't help their arguments.
While initially sympathetic to the author - the more I read - the more I have completely the opposite view.
Even more, it sounds like Google didn't ask originally for retraction, they just asked to take into account the newer research contradicting the paper - the thing that any researcher valuing integrity over agenda wouldn't refuse.
If somebody wants to do that research and publishing, they just have to find another source of funding, I guess.
Anyway, the firing wasn't over the paper, the firing was over the unacceptably unprofessional reaction to it.
Salary aside (because I do doubt she earned $1M+/year, my guess is probably more on the ballpark of $300k~$500k and either way not really denting Google's finances), you are not wrong, but also it's worth understanding here we're entering the realm of the notion that companies can (and for many reasons should) be about more than maximizing shareholder value.
Also, if I'm being completely honest, from a PR perspective this could be worse than Timnit's paper might've been just given how public it has become and the people involved. People internally are perhaps more comfortable having that paper not be published and not having Timnit in their ranks, but as far as PR for Google goes this isn't great.
But that aside, Google should want this kind of paper published. They absolutely should want to know and discuss every possible weakness in the ethics of their approach to AI - Google has a scale of influence so large that how they act in areas like AI, trickles down to many other organisations. To me, that gives them a responsibility to make it as ethical as is reasonably possible, and that will only happen if experts are allowed to speak freely.
One can make short-term arguments about how that hurts them, but the long-term damage of getting massive AI systems wrong, will be far, far worse.
But as others point out, it's entirely in Google's long-term interests to have internal critics who prod Google and the rest of the industry toward long-term behavior. So I think it makes good sense for them to have independent academics that occasionally make people uncomfortable.
That reminds me of how, in the USSR, every non-minuscule factory, organization, etc. had a "department #1" - an ideological check-and-control department which, at sufficiently large/important organizations, even included KGB officers.
The Soviet Union was about equality for workers. Who could be against that?
Should we really keep manufacturing cars using the same tools that Stalin used?
1. There exist laws to prevent discrimination against people based on protected attributes
2. ML models make predictions based on attributes without interpretability (it's not possible to prove that protected attributes are not factoring into model predictions)
3. Empirical observation that a model proxies a protected property exposes corporation to liability for regulatory non compliance
4. Therefore any study that could expose bias of a model used in production is to be roadblocked or prevented ...
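Point 2 above — that a model can effectively reconstruct a protected attribute it was never given as input — is easy to illustrate. Here's a minimal, entirely synthetic sketch (the data, the zip-code rule, and the toy "model" are all made up for illustration): the model only sees a zip code, yet because zip code correlates with group membership, its output ends up tracking the protected attribute anyway.

```python
import random

random.seed(0)

# Hypothetical synthetic population: group membership (the protected
# attribute) is correlated with zip code, but the model never sees it.
def make_person():
    group = random.random() < 0.5
    # ~80% of group members live in zip "A"; ~80% of non-members in zip "B"
    zip_code = "A" if (group == (random.random() < 0.8)) else "B"
    return group, zip_code

people = [make_person() for _ in range(10_000)]

# A toy "model" that scores people using only zip code -- no protected
# attribute is ever an input, so the model looks facially neutral.
def model(zip_code):
    return 1 if zip_code == "B" else 0

# Audit: how often does the model's output reveal the protected attribute?
agree = sum(1 for group, z in people if (model(z) == 0) == group)
proxy_rate = agree / len(people)
print(f"prediction matches protected attribute {proxy_rate:.0%} of the time")
```

With these assumed correlations the model's output matches the hidden attribute roughly 80% of the time, which is exactly the kind of empirical finding (point 3) that a third-party or internal audit would surface — and that a company facing liability has an incentive to suppress.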
To combat incentive chains like the one above, it seems regulators are going to need to update the rules with third-party audits and an incentive structure that encourages self-regulation and de-risks self-detection and self-reporting of non-intentional violations. Ideally, Google should not be put in a position where it is incentivized to police its own AI ethics research to ensure that such research doesn't expose its own illegal/non-compliant activity.
In this case, there were recent changes to the statute of limitations for CA laws that extended it from a year to 3 years, which could be why this whole process seems weird.
This is spin at best, gaslighting at worst. We'll never get the full story (and should we? It is an internal company matter made public, after all).
Not really sure what the point of an 'ethical AI' department is when there's no transparency or accountability to the public. If its work can be cancelled internally at any point it threatens the company, you've basically recreated some kind of Soviet ministry for truth.
The official outputs and products of that department should (hopefully) be public and shared, I 100% agree with you. But that's not what this is about.
This matter in particular is an internal employee/employer dispute and dismissal, and is only public because of the high profile of the persons involved.
And what I meant that we'll never get the full story is that these kinds of situations are always more complicated than they appear. We are only seeing the tip of the iceberg, and are not privy to the history that led to this moment.
These kinds of things don't just happen out of nowhere.
If I had to guess who's "more right" here, I'd side with Timnit, personally.. but again I don't have all the facts, so it's just a gut feeling based on what I know about how large enterprises work.
To be fair it doesn't sound like her ultimatum was "I'll leave immediately"— that was forced on her by Google, and is an important detail.
Perhaps it's just me as a URM, but her email resonated with me, especially this part. I see this position of calling what she did "exhorting them to stop working" often, but this isn't really what she did.
I too care about DEI, but after putting lots of time and effort into it, I saw how futile the effort was in my organization because there was no real buy-in from higher-ups. I was putting in a lot of unrewarded volunteer work helping with "inclusivity" and talking about the problems/solutions, but that was all it was in the end for the people we needed action from: "talk". I eventually decided to dial it back and stick to my actual paid job of programming, and although I didn't send an email telling other people their effort was being wasted, if someone came and asked me, I'd tell them not to bother. There are other places, usually further removed from the company and its easily PR-able channels, where the effort is better spent.
In any case, I hope you realize your comment is full of hyperbole and the people who think she isn't in the wrong, myself included, aren't being unreasonable. We're smart people too.
> There's no way that Google (or any company) would continue to employ her after that
I agree. None of this comes as a surprise, and I'm sure she expected it too; that doesn't mean Google is on the moral high ground.
I also suspect that if she'd written the equivalently passionate comment about a technical failure or bad product choices, people here would be cheering her on. Especially if she were a he.
I personally do not have enough info to decide who is telling the truth in this case.
And that's both a negotiating position and a resignation.
> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
I find it unlikely Dean would lie about that, not least because the email would be easy to find.
Now, were the actions leading up to that effectively a firing, i.e., would Timnit have been unable to effectively continue in her role? Quite possibly.
Which means that Google is likely to have to produce all of the documents that they didn't want to produce.
It looks like Google's AI Ethics team is meant to be greenwashing.
What's different in her case is that you don't see the names of the people reviewing. Playing devil's advocate, she MIGHT have a pattern of aggressively attacking people who reviewed her work before. So they might have made the reviewers anonymous this time.
If they approved the paper, the message would be "Google thinks language models are a waste of resources and racist". There would be no academic debate on this topic, as it has been framed as woke and published by a militant activist, so any disagreement would be racist (see prior interactions between this researcher and other researchers).
That's why the standard process of publishing, peer review, academic critique, etc. would not work.
Why would their researchers working on language models stay when they can go to Facebook, OpenAI, etc.? Why would new researchers join?
The proper response to her position would be to publish a response or critique. Attacking her entire field does nothing to further the conversation.
The variation in reviewers' responses is often due to their lack of knowledge and unfamiliarity with the problem. Take a look at the recent reviews for some of the more popular conferences on OpenReview.net. Most of the reviews don't have any substance and are often vague/generic.
I'd take the reviews from peers whom I trust and who are aware of my work more seriously than reviews from conference reviewers.
Demand the best from your multi-billion dollar corporations.
The same paper from outside of Google also creates liability, but now the argument for increased damages becomes about whether Google knew.
Submitting conference papers last minute is... normal.
You only bring in HR protections to protect the company from a legal standpoint.
This is sad gaslighting of a reasonable concern the team had.
Having to endure some external review for what could otherwise be sensitive material.
The inability of the SJW crowd to work reasonably within very reasonable terms, then to resort to aggressive tactics such as 'demand the names and opinions of everyone on the board', and then to publicly misrepresent the situation, is going to lose you a lot of favour.
Every time I read one of these stories I immediately feel sympathetic to the individual, but then upon learning more, I feel duped and maligned for having been effectively lied to.
The doors are wide open for progress, those who take it to micro-totalitarian lengths are not doing anyone any favours.
Publishing a paper with a lack of rigor about some obscure mathematical technique isn't a problem for google (beyond some possible but unlikely mild reputation damage). Publishing a paper with a lack of rigor that says google is doing unethical things, when those things are questionably accurate, that is something google would (and should) have a problem with.
Whether the paper actually lacks rigor in a relevant way is not something I can comment on.
In some cases, though, it's not simply a matter of listing it as other work in the Intro - you may need to incorporate it into your models, etc.
> Since when is a scientific paper required to do that.
Unfortunately, for a long time (since well before my time, at least).