About Google's approach to research publication – Jeff Dean (docs.google.com)
526 points by yigitdemirag 6 months ago | 573 comments



Maybe different teams are different, but on my previous team within Google AI, we thought the goal of Google's pubapproval process was to ensure that internal company IP (e.g., details about datasets, details about Google compute infra) does not leak to the public, and maybe to shield Google from liability. Nothing more.

In all of my time at Google AI, I never heard of pubapproval being used for peer review or to critique the scientific rigor of the work. It was never used as a journal; it was an afterthought that folks on my team would usually clear only hours before important deadlines. We liked to leave peer review to the conferences/journals' existing process to weed out bad papers; why duplicate that work internally?

I'm disappointed that Jeff has chosen to imply that pubapproval is used to enforce rigour. That is a new use case and not how it has been traditionally used. Pubapproval hasn't been used to silence uncomfortable minority viewpoints until now. If this has changed, it's a very, very new change.


And the examples of issues flagged in review that Jeff keeps highlighting—like Timnit’s alleged failure to mention recent work to reduce the environmental impact of large models—are themselves a bit worrisome. Jeff gives the impression that they demanded retraction (!) because they wanted Timnit and her coauthors to soften their critique. The more I read about this, the worse it looks.


Yeah, put more simply, they pushed out someone in their Ethical AI department because they did not soften critiques against AI enough. They couch it in terms of rigour, but the substance of the problem has to do with her criticisms of AI.

Ultimately it makes the whole Ethical AI department look more like a rubber stamp for Google.


Let's be even more clear - they pushed out someone in their Ethical AI department because she wanted to have human conversations to determine the basis for being asked to soften critiques.

It's one thing for reviewers, even anonymous reviewers, to reject a paper on its merits; it's another, in Timnit's own words [0], to be told "it has been decided" through "a privileged and confidential document to HR" despite clearing the subject matter beforehand. In light of a more general frustration, it's very reasonable for Timnit to escalate the situation by putting her own career on the table, simply to request that people engage with the paper rather than flat-out rejecting it.

And if Jeff wants to respond by immediately cutting ties, and by putting out a document that doesn't even address the situation at hand (edit: much less the underlying issues of unequal treatment for women that Timnit describes)... that's a reflection of his ethics and the ethics of the company that stands behind him.

[0] For those who haven't read Timnit's memo that Jeff references in the OP, it's worth reading: https://www.platformer.news/p/the-withering-email-that-got-a...

EDIT 2: follow https://twitter.com/timnitGebru to see more of her side of the story. She retweeted https://www.wired.com/story/prominent-ai-ethics-researcher-s... as a good explanation of the situation for laypeople.


Also from Gebru's memo ([0] in the parent comment):

> And you are told after a while, that your manager can *read* you a privileged and confidential document

Emphasis mine. Showing your employee that you don't even trust her with a written copy of the rejection of her paper is not a great way to engender a good working relationship. Note that this pretty clearly seems to have happened before Gebru sent the email that Dean characterized as an ultimatum.


It sounds like there wasn’t a great working relationship. It seems management was concerned (reasonably, based on her track record) about the prospect of her responding with hostility directed at the coworkers who expressed concerns about the quality of the work if she managed to discover their identities. Refusing to let a person have a written copy of anonymous feedback is a rational thing to do if you’re concerned that the person will closely analyze the feedback in an attempt to de-anonymize the reviewers.

The fact that she issued an ultimatum for the identities of the reviewers suggests that management was correct to have safeguarded them in the first place.


From what I could read, and from the responses of her own teammates, the paper had passed the internal review, and she had already given the PR department a heads-up about her work, which they acknowledged. Then a meeting suddenly pops up and a manager's manager tells her that she needs to either retract the paper or make certain changes. She was fine with the internal committee being anonymous, but at this stage anyone would have demanded the same thing: who is the authority that thinks this paper sets a lower bar for what Google stands for, i.e., some sort of human engagement? And what does that authority do? They take her at her word, twist it, and fire her by "accepting her resignation". What does this sound like to you? To me, it's a high-and-mighty attitude from the authorities: how dare a black woman, and from the ethics department at that, question our conception of the matter; let's show her what we can do. Fire her!

This might sound a bit exaggerated, but all of this is putting Google in a bad light, and on top of that over 500 Googlers have written a letter demanding an explanation. Those people know the internal workings better than you and me, so it is surprising how many review processes Google has; it's double pressure: first get the internal clearance, then work with the actual reviewers of the conference. And now Jeff comes up with this explanation: https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQ....

And not once does he mention that the paper had already passed the standard internal review process.


"based on her track record"? The fuck does that mean? If you wanna say that people shouldn't call out their employers, then say that. No need for character assassination.


Anyone can look up her tweets and her interactions with, e.g., Yann LeCun, and make a pretty good inference.


I looked them up. I found the exchange robust but not extraordinarily so. She was firm but didn't seem to step out of line.

What inference was I supposed to make?


Her contributions to the exchange were certainly unprofessionally hostile (“I don’t have time for this.”), but in and of itself that sort of behavior isn’t a real problem. The actual problem is that, because she is a person with a large following, her hostility precipitated harassment by her followers that ultimately drove LeCun from Twitter. For someone on the receiving end there isn’t a meaningful distinction between whether someone harasses them directly or whether they incite harassment by others.

From what I’ve read about Gebru and this situation it doesn’t seem implausible to me that had she identified the reviewers she would have named them in a public venue and characterized their criticisms as being driven by discriminatory bias or an intent to suppress her work. Obviously nobody is going to present criticism, regardless of whether the criticism is legitimate, if that is a possible consequence.


Call out her employers? No. I’m referring to the exchange she initiated with Yann LeCun that caused him to be harassed to such an extent that he withdrew from Twitter.


Really makes one wonder if this document is one that Google does not want to come out in discovery, ever, and that it's in some system with a relatively short TTL before it gets deleted, because policy.


> Really makes one wonder if this document is one that Google does not want to come out in discovery, ever, and that it's in some system with a relatively short TTL before it gets deleted, because policy.

I suspect not, because it's probably a carefully constructed document to fit the pretextual narrative of the constructive termination campaign that it was part of, which was targeting Gebru based not on the particular paper but on race/sex and criticism around the internal culture on those issues.

At least, all the Google AI people describing how Dean’s characterization of the review process does not comport with the usual practice of that process, and in some ways differs from even the official documented process, suggest very strongly that the entire review issue was pretextual and personally targeted, and not about the paper itself at all. The interpretation of what is behind that pretext is a little more speculative, but you don't need a pretextual campaign unless the actual basis is prohibited or even worse for PR than the pretext.


> Yeah, put more simply, they pushed out someone in their Ethical AI department because they did not soften critiques against AI enough.

I didn't read that. I read that the person _demanded_ to know who gave particular critical feedback, and questioned the approach instead of addressing the feedback. The person gave an ultimatum that she would resign if details were not shared.


If your work is suddenly and unexpectedly roadblocked at the last second by internal review, the only way to make changes that prevent that from happening in the future is to clearly understand the situations and criticisms that led to the roadblock. This is why understanding who raised these concerns is important. Anonymous feedback blowing up a project at the final moments is sure to frustrate anybody. If what she has said is true, then it was also very difficult for her to even have access to the substance of the critique in the first place, with the initial story from her management being that she would not be able to see the documents explaining why the paper was to be retracted.

The critique here appears to have been fairly minor, too. Failing to cite some recent research is rarely grounds for rejection.


> Now might be a good time to remind everyone that the easiest way to discriminate is to make stringent rules, then to decide when and for whom to enforce them.

--Nicolas Le Roux


It would be pretty easy to discriminate if you had loose, underspecified rules and then decided action on a case-by-case basis. The problem seems to be not in the rules but in the deciding.


Why can't it be both? I once observed an instance very early in my career, while working in an employment litigation office, where rules were explicitly created in order to box an individual in such that their actions, while completely legal, moral, and in the course of their professional duties, could be used as grounds for dismissal "because policy".

A lawsuit emerged. A settlement followed.

Just because "we made rules for this" doesn't mean the scrutiny should suddenly cease.


This is how most 'fascist' states work: theoretical controls on each and every aspect, laxly imposed on everyone but the deplorables.


I mean, this sounds deep and sensible, and I support the underlying sentiment when it comes to laws and government force, but it's not so black-and-white in social situations.

Someone farts once - no big deal. Someone farts all the time => they're quitting or I am.


> at the last second

It was only ‘at the last second’ because Gebru chose not to follow the normal procedure.

If the paper genuinely can't be ready until one day before the external deadline, the right thing to do is engage with the reviewers in advance, explain the problem, and provide them with drafts and work in progress, so that they can complete their work a few hours after yours.

What Gebru did is the equivalent of bypassing code review and pushing to prod on Friday afternoon.


According to Jeff Dean's account of events, yes. According to nearly everyone else's, this process was unusual in that it even involved reviewing for content and not just IP, and Gebru herself says that the website about the process says to submit at least 1 week before publication, not two.


"According to nearly everyone else's, this process was unusual in that it even involved reviewing for content and not just IP"

This is ethics for Google's AI+Search, which is currently undergoing global scrutiny, particularly by Congress and specific politicians who are considering antitrust measures against Google, and who believe that 'their political party is being treated unfairly'.

It's an existential concern for them right now, relating to the possible breakup of the company.

Every public communication on 'ethics' or search results etc. at Google is obviously going to have to be reviewed.

If you're publishing the latest thing on 'AI Random Number Generation' obviously nobody cares about anything other than IP.

The fact is, she must have known this and submitted anyhow, which is in and of itself not so bad; but for the calamity afterwards there is no excuse.

Google was absolutely reasonable: they did not ask her to change the nature of the research, but wanted to make sure that information about new, better processes was included.

It's beyond gracious for Google to do this, when really their starting point is 'silence' and they really don't have to do anything at all.

A request for a fairly short review with very basic and reasonable concerns blew up.

This is not a public university; you don't get perfectly tenured academic freedom. If Google wants to put a reasonable subnote in there, and take two weeks to do it, it's perfectly fine.

Obviously Google would have kept her if they wanted to, but it's clear they were both looking for a way to part ways and it's probably for the better.


> This is not a public university; you don't get perfectly tenured academic freedom. If Google wants to put a reasonable subnote in there, and take two weeks to do it, it's perfectly fine.

They may be allowed to, but they're fools if they think world-class academics are going to work for them under draconian publishing standards that are not even consistently specified. I'm sure Gebru could get a tenured position at a university of her choice. They're throwing away a lot by choosing to die on this hill.


"but they're fools if they think world-class academics are going to work for them under draconian publishing standards"

?

I suggest it might be 'foolish' to imply that 'a 2 week quick review with minor additions' is anything remotely 'draconian'.

Just the opposite -- this is a siren call to great researchers who want to be highly paid and work on great and novel things, knowing full well that Google has a very light review process and won't interfere or suppress.

This makes Google sound like a great place to do research, probably better than most public institutions.


I think that demanding retraction of a paper with no reason given, then providing only a verbal reason (i.e., the researcher cannot keep the notes with them when making revisions), refusing to explain the process by which feedback was solicited, and then demanding retraction (NOT a revise-and-resubmit)...

yeah that's super draconian.

ESPECIALLY if other people in your department are claiming no one else has to go through this, just one of the few black women! Damn!


> when really their starting point is 'silence' and they really don't have to do anything at all.

They can't have their cake and eat it too; if they want to hire people to do AI ethics research, and then censor them for doing their job, then they should get called out for ethics-washing, which is exactly what's happening.

I don't know why so many people love to defend power, especially when that power is not benevolent.


"and then censor them for doing their job"

This is false.

The requirement of a fairly light review process, and the request that more truthful, factual, and contextualizing information be disclosed, is not censorship.

Nobody is suppressing research, or even asking that specific opinions or results be changed.

The commenter above used the term 'draconian' to refer to this process, which is just superlatively false.

"I don't know why so many people love to defend power, especially when that power is not benevolent."

How is this power not benevolent exactly?

What's 'hard to understand' is the petulance and irreverence people have for the offices and responsibilities they hold, and the lack of professionalism in their conduct.

This should have been an easy issue to address by any mature researcher who cared about working with others to achieve positive outcomes - instead of trying to force their opinion on an organization, or engender public support for their career.

There are plenty of reasonable voices at the table for 'Ethics in AI' nobody has a magic wand in this equation.


But they didn’t ask for more factual and contextualizing information. She wasn’t given a chance to revise the paper to include that. It was just canned.


The top comment seems to imply that that was/is SOP.


I’ve only ever done the paper review process twice. In both cases I got it approved concurrently with submission to a conference. Other googlers have similar stories.


As a manager, if someone gives an ultimatum, you basically have to fire them. There's no real option; Ben Horowitz covered it somewhere, but the bottom line is that if you yield, you've given up all control.


This is rubbish. At Intel we called it "badge on the table". It's a statement of complete commitment (and I've been the beneficiary of someone going 'badge on the table' at least once, under circumstances I can't disclose).

It doesn't imply that if a manager or VP or CEO concedes the point at issue once, then the person can now go around "putting their badge on the table" and getting their own way over and over again on other issues; probably making a habit of issuing ultimatums (ultimata?) will get you fired PDQ.


During my time at Intel I only saw the "badge on the table" once, and the manager took the badge without hesitation.

I guess it only works if you have a good enough reputation that people actually want you to stick around.


Interesting, do people who put their 'badge on the table' actually quit when they lose, or is it just a saying that's used to show emphasis?


I only know of 2 cases where I know for sure that it happened, although I've heard rumors about more. It's not going to be one of those things where you run around yelling about it, especially if you succeeded (after all, flexing on your management that you did it is likely to make your management unhappy).

One case someone succeeded. The other case some other person resigned.


Sorry, that was clumsily put - what I meant was in one case person A got what they wanted, and in another case not involving anyone from the first case, person B didn't get what they wanted and resigned.


If they don’t quit, the next time they try it, it won't be believed.


I’m more interested in whether it’s interpreted as an actual ultimatum, or just an emphasis.


I understand some of the reasons why some managers think that way, but you can't have such simplistic rules.

An ultimatum like this is an opportunity for a responsible manager to talk and rethink, but it seems like Google jumped at the opportunity to double-down on their mistake and then send out cowardly emails claiming the employee had actually resigned.

If I were to apply a simplistic rule here, I would actually invert it - if you get to a point where you are sufficiently undervalued that you feel the need to issue an ultimatum, you basically have to resign.


The problem with accepting anti-social behavior is that you encourage it. This individual or another will use the same strategy against you in the future, and it will create antagonism and toxicity.

No, an ultimatum is a choice between two options; she offered Google a choice, and they selected. Expecting them to try to carve out a 'third way' is just unrealistic.

I agree that you can frame this many ways; she could have portrayed this as her resigning in protest, instead of blaming Google for being vindictive.


Ah, the old "I don't want to do my job, so I'll accept that there was nothing I could do", which is middle management bullshit.

It is fine if you think that, but accept that you are the weak one here. If you want to err on the side of keeping your job that's fine, but don't pretend you didn't make a trade-off.

You do not have to accept anti-social behavior, but a good manager would have handled this and it would never have reached this point, public or otherwise. This whole episode is a failure of management top to bottom.


Respectfully, we don't have enough information to judge whether she was net-positive to the team, or who was acting reasonably. That's a complex calculation, and I don't know whether management triumphed or failed.

It does seem like she judged the situation incorrectly, as she is now complaining, not gloating.


That is sorta my entire point.

The fact that this has over-flowed into the public sphere is a failure.

If they had handled the situation correctly, it would have been sorted internally. That the person in question spilled the beans at all is proof of that.

Managing people is a skill, and being good at computer science does not make you a good manager. They should know that complaining to social media is an option that someone might take and they should consider that when dealing with these issues.

The fact that we are here discussing anything at all proves the above. It isn't 1995; if someone feels slighted, for whatever reason, expect it to show up on Twitter, true or not. You don't want to be chasing the narrative with a potentially one-sided Google doc. No one is giving the megacorp the benefit of the doubt in 2020, which means it is bad PR either way.


You asserted:

"a good manager would have handled this and it would never have reached this point, public or otherwise. This whole episode is failure of management top to bottom."

My point is that I do not know whether this situation could have been handled better. We don't know enough to judge whether this could have been sorted out neatly. You seem to think that a clean resolution was possible, and you might be right, or you might be wrong.


Fair, but I don't think we need to know what happened to assess what is happening now.

I consider this being discussed in the public sphere a failure regardless of the situation, as it looks bad for the company no matter what.

If a person feels their only way out is to appeal to the mob then I think the people doing the management have made a misstep. If that person has a history of appealing to the mob then it is still a misstep as that should have been considered when dealing with the issue.

Perhaps they did the calculus and this is the best result, but looking in, it doesn't feel like it.


> My point is that I do not know whether this situation could have been handled better.

Let's follow the timeline and discover a root cause:

    1.  Anonymous feedback being given through HR about a 
        research paper in AI Ethics to be published in an 
        academic forum.

    2.  Manager schedules a meeting where: “it has been 
        decided that you need to retract this paper by next week..."
        without context and without a chance to confront others.

    3.  She gives an ultimatum to her boss that she can't
        continue to work there under conditions like that, which
        limit her freedom to speak and research.  Google decides
        to accept her resignation.
This suggests that:

    A.  People can just go to HR with criticisms of a research paper 
        apparently with the intent to sabotage authors, and HR is 
        apparently fine with being used like this.  Or possibly a manager
        convinced HR that OKRs trump AI Ethics.

    B.  They wanted her to say certain things in an academic forum -- 
        which didn't appear to be IP/Trade Secret related, but for 
        some other reason, which they refused to disclose.  This is 
        in an environment of ethics where papers might become guidelines
        for legislation.
  
    C.  They're not interested in fixing the issues she brought up, 
        because they allowed #1 and #2 to happen above.
It looks like the root cause was A above. Everything after that cascaded from there.

Should HR be involved in "fixing" a paper in AI ethics? Probably not. Just like you wouldn't take your car to HR to get it repaired. They simply don't have the knowledge to do so.

Then Jeff Dean probably has $20 to $30 million wrapped up in Google, so he's going to take their side on the matter publicly, unfortunately. Privately he may have been cussing out HR for forcing him into the situation. We don't know.


Judging from Timnit's Twitter account it would have blown up anyways no matter what. Have you seen her previous tweets? She seems to enjoy the drama.


Is it anti-social behavior if a company tells you, 'do X or else'? Even recently plenty of companies have told employees that they can move and work remotely but they had better report it so their salary can be adjusted. The penalty for not reporting being firing.

Ultimatums shouldn't be a frequent occurrence but they are a part of business relationships. It seems a bit unfair for an employer to treat an employee ultimatum as a fireable offense when company policies are sometimes the equivalent.

Employees sometimes decide that an employer ultimatum is offensive and quit sometimes too. But I don't think it is nor should be a set-in-stone rule that an employee that issues an ultimatum should be terminated.


> No, an ultimatum is a choice between two options; she offered Google a choice, and they selected. Expecting them to try to carve out a 'third way' is just unrealistic.

But you're claiming that for a company there shouldn't be a choice, it should just lead to termination.


Well, the company is in a slightly different position from the manager. They can abrogate the manager's authority, but that would permanently undermine that individual. On the other hand, they can also choose to accept the subordinate's resignation. They could try to transfer the subordinate somewhere else, but that's also risky, and wouldn't really address the ultimatum in this case.

Accepting a resignation achieves three separate objectives:

-resolves the ultimatum

-discourages future ultimatums

-preserves the status quo ante


I think you're coming at the ultimatum from some sort of strange power dynamic perspective, where an employee who successfully gets their ultimatum approved somehow disenfranchises their manager of their authority, enabling future employees to…what? Vie for the managerial position? This has a "crush dissent" kind of vibe.

In every employment contract there is a balance between the things an employee is willing to do and the things an employer is willing to provide in exchange. If my boss said that they wouldn't pay me anymore I would rightfully respond with an "ultimatum" of "pay me or I quit". That's the ultimatum they respond to every day by paying me; they look at the balance of things I offer, consider what I provide to the company to be adequate, and then give me the money I ask for. The same is true for any ultimatum: you come to the table with one final negotiation, the negotiation of "do you value me? Then you must provide me this". It's an entirely transactional exchange.

Now, ultimatums are generally to be discouraged not because they undermine some sort of authority, but because they are a sign that negotiations have broken down on both sides. As a manager, your goal should be to try to reach a compromise far before that point: not only does it hurt your relationship if you don't, even when the ultimatum is "successful" from the employee's point of view, but by letting a conflict reach an ultimatum point you're exposing yourself to significant risk and often poor deals. The way to handle an ultimatum is to forestall "pay me x or I quit" with "I'll pay you almost x if you show good performance for the next three months". If you are at the point where the argument is "I'm going to quit" then yes, you may have to carry through with the termination if you think what they provide is less valuable than what they want from you, but you should really be looking at what you did to get to that point instead.


> This has a "crush dissent" kind of vibe.

Yeah, and whether intended or not, a "fire anyone who gives you an ultimatum" strategy absolutely creates that vibe.

If you have a top-down management style where your employees do not question anything you say, that might be the way to go, but I find that in the software business what you want is the opposite. You want all the criticism and feedback you can get from your skilled and knowledgeable workforce. If you don't get that, you're wasting the majority of the money in their paycheck.

The irony here is that if you have a manager firing someone who presents an ultimatum, then that in itself is effectively an ultimatum that you are supporting. ;-)

That of course also doesn't mean you accede to every ultimatum. I mean, if your business plan is to do X, you want employees that will help you to do X. If they are getting in the way of X, then you need different employees anyway. Usually though, you and they have already worked out that they want to work with you to help you do X before you hire them.

So the main reason you get ultimatums is because they didn't anticipate and do not like the approach you are taking to get to X. Assuming they are smart and have good judgement (and again, if not, why did you hire them? why are you paying them?), there's a very good chance that there are some problems with your approach and you'd be wise to at least consider that possibility and their perspective. They may be trying to save you from making a terrible mistake, and feel like it is incumbent on them to stop working for you because allowing you to proceed would be working against that goal you hired them for.

It's not uncommon for two people to have very different perspectives on what helps to achieve a company's objective. It's also not uncommon for one of those people to be horribly, horribly wrong. Sure, if you've got an employee who has presented an ultimatum based on horribly wrong judgement, it may make no sense be their employer.

I'll tell you though... just because they're a subordinate doesn't automatically mean they are the ones exercising horrible judgement... and the farther you go up the food chain, the more severe the consequences of supporting someone's horrible misjudgement. So having a policy of summarily firing subordinates who present ultimatums both creates the wrong environment to get the best out of your team and is terribly harmful for the leadership of your organization.


What gets me is all this talk of an employee making their terms of employment known (this so-called "ultimatum") being somehow unusual. An employee/employer relationship is ultimately a running series of ultimatums. What's really discouraged is making each one explicit, but of course, that doesn't mean they aren't there, nor that occasional forthright discussions aren't customary. What do these people think a performance review is?

Usually the goal of management is to employ explicit, stop-gap communication to avoid having to get to the explicit question of continued employment, because the company has already made a commitment to that employment by hiring the employee in the first place. Obviously, most employees want to continue on, also. So it seems nonsensical to view anything save an explicit declaration of resignation as the same. "I would like to discuss what would cause me to resign," is not a declaration of resignation, and the people reading this situation in good faith understand that.


Getting to the point where things need to be explicitly stated is unusual, I think. The rest of the ultimatums remain unsaid because people are already aware of them and work within their bounds. And getting to the point where you have to give a verbal ultimatum requires a party to not be aware of its existence, which is rare when communication isn't totally broken.


Critical race/gender theory is all about power dynamics permeating every interaction.

You might think that's exhausting, I certainly do, but that's what we're dealing with here.


> You might think that's exhausting, I certainly do

Why? Is there something about CRT that threatens your means and way of living, or is it forcing a type of introspection about what minorities have and continue to go through in various forms and machinations you'd rather not entertain?


This right here. The veiled accusation and attempt to put me on the back foot in a power dynamics debate. That's what I find exhausting.

Anyways I was just explaining to OP that the situation was already about that stuff before any response was considered.


It wasn't veiled at all, it was a bald-faced ask, so why dodge it? If the veiled accusations that some people utilize in the name of CRT are bothersome, why wouldn't you call THAT out from the very start?

That tactic is not a problem inherent to CRT, that tactic is a problem with how people deploy and weaponize CRT.

In the absence of anything else, yes, people are going to make assumptions.


But you're doing the thing. The tactic that you agree is bothersome.

Edit: I agree that one could incorporate some CRT into their worldview without becoming insufferable, in fact I think lots of normal people have without calling it that. That said, there are a lot of true believers out there, that's who I was talking about.


Okay, well, since the comment I initially replied to has been edited post hoc to represent an entirely different tenor than what you originally replied with, I guess I need to edit mine as well:

No, I'm not doing that right now, I am trying to understand your framing of CRT and where your issues lie with it. It would seem those issues lie with how certain people argue CRT, not CRT itself.

Thank you for clarifying that.


I guess my beef is: how come everything is fair game for criticism EXCEPT the critical theorists? Why can't we analyze their power incentives as well?


What stands in your way, other than being faced with possible objections to, and responses in kind to, whatever your critiques may be? Objections and responses that, I would boldly say, are not stopping you from making said critiques; or rather, they hold no enforceable power that precludes you or anyone from making rebuttals of your own.

They are just that, objections and responses. Which you are free to entertain or not, attempt to unpack and understand or not, respond to with better critiques, objections, observations and rebuttals of your own...or not. But you're not being prevented from making them by anyone or anything short of I suppose committing some sort of crime in order to make that point (that's just an extreme example to stretch the metaphor).

This is the form and function of debate, it is a crucible that boils away impurities of all manner and dialect (for anyone who may be thinking they've heard this one before, yes, I absolutely stole this from an episode of Star Trek).

If you feel you are being stopped from doing any of this, might I ask why and how you have been completely prevented and kept from expressing yourself?


Alright, here's my substantive criticism of how CRT affects various groups in practice (not doxxing my membership in any of these groups):

Elite coastal white: Absolutely not threatened. Beneficiary of the system and knows how to navigate all of the social codes.

Less elite or poor white: Takes the bullet that was aimed at the elite white.

Asian: Scores way too high on tests for their % of the population and this is a problem for a worldview that cares about what % of college slots go to which races

Professional class black or latin: Does great, huge beneficiary of CRT activism

Working class black or latin: Invisible and accidentally hurt despite good intentions. CRT proponents tried to pass a referendum legalizing racial discrimination in hiring in California this year, which would have helped professional class POC and probably hurt this class. Fortunately it failed.

EDIT: I removed some cattiness above. Not trying to pull the rug out from under you but I'm rate-limited and wanted to focus on my actual points. I don't think I'm a caricature of 'unwoke' person who never thought about or dealt with these things before.


> We just went through a mini-version of it: the go-to move is that any criticism is immediately labelled as closet white supremacy.

Is that what you truly believe I did above? That I am labeling you, and think you to be a white supremacist?

If so then allow me to be clear for a moment: I have literally no way of knowing if you're a white supremacist. I have no way of knowing if you're not actually an armada of ants collectively working to actuate the keys of a mechanical keyboard or a Boltzmann brain sending these messages through some strange and baffling form of quantum entanglement. What I am trying to expose is the very real reality that these are uncomfortable conversations, that's just intrinsic to this topic and the climate we are in.

This is fine. It is fine to admit being uncomfortable trying to process where we are, how we got here, and how we get out of it.

But one has to start by looking that beast in the face first in order to reckon with it. For some, that discomfort gets unwittingly channeled into anger and frustration, and they might not even know why or even realize it; but it can be focused, and turned into knowledge and wisdom on the issues. One's just gotta start, like I said: see it for what it is, and work from there.

If you took that to be me associating you with white supremacy, I'll try to find other ways of seeking out clarity from people next time.


> Is that what you truly believe I did above? That I am labeling you, and think you to be a white supremacist?

Looked like it to me, an uninvolved curious third party.

> the very real reality that these are uncomfortable conversations

Huh, that's what the poster that you replied to said. Weird that you got all up in their grill about it.

Let me just attempt to paraphrase the initiating series of comments, seeking only to illustrate how your comments looked to me, not attempting to do justice to the full meaning of each commenter.

nickff > advice on how the manager should do power dynamics

saagarjha > "strange power dynamic" [followed by lots of savvy commentary, irrelevant to my point here]

free_rms > CRT is all about power dynamics. That's the point. I find it exhausting, but me being exhausted is not the point, the point is that it's about power. [bit of a reductionist take on the parent comment, but probably correct?]

dvtrn > Are you exhausted because, as a beneficiary of oppression, you'd rather the oppression continue? Or is it because you're just too lazy to care about fairness?

free_rms > See, I dunno where you got that I'm an oppressor, where did this threat come from?

Yikes! And free_rms didn't even say that your implication that she/he is an oppressor was wrong, nor were they defensive about it. They just said that it's exhausting! I mean, it would be! Who would not be exhausted by that, whether or not it's a fair accusation!

I mean... now, at the bottom of this thread, you imply that you were seeking to know more, and not trying to imply that the exhaustion is evidence of being a bad person. Okay, I believe you, and nothing in your first comment belies that reading (though some of the intervening comments, hmm, not so sure). But I don't think it's the natural read of what you said; at least it wasn't the natural read for me.

Me personally, btw, I dunno what CRT is, so in my privileged ignorance (enabled, of course, by my general white privilege) I'm immune to the exhaustion. I read this whole thread to see if I could learn something useful. Not so far, though I don't regret the time spent.

I dunno, is this helpful? Maybe I'm not being helpful.


> The problem with accepting anti-social behavior is that you encourage it.

Not saying it's wrong or right, but couldn't she make the exactly symmetric argument?


She definitely could, and she could present herself as the hero of the story who resigned in protest, but she has chosen a different narrative.


She is clearly the hero. The public seems to be on her side.


No, they didn’t give her an ultimatum.


Is that the only form of anti-social behaviour? The description of the rejection process is similarly inflexible...


"an opportunity for a responsible manager to talk and rethink"

Mostly it's an opportunity to let the staffer know that such ultimatums are unacceptable, and that, taken literally on her own terms, she could be called out and let go. Which is what happened.

It's very doubtful that, if they had wanted to keep her, they couldn't have found terms.

Surely the manager would have bent, indicated the wording was a little bit strong, and found a way forward.

It seems clear they were wavering, she crossed a line and offered them the path out and they took it.

If there were material issues being covered up, or material suppression of information, this story would look completely different; but there wasn't.

This was the right thing to do by Google in a tricky situation.


Maybe in certain situations. But as an engineer on more than a couple of occasions I have pushed back on safety concerns and I was adamant that certain things be fixed for the company's reputation and for safety reasons. I did go over my manager's head because he wouldn't listen. Should I have been fired? Ultimately on one occasion I went up 4 layers of org chart to a VP who finally had the sense to listen because my concern was going to cost the company a lot to fix. I didn't get fired and actually got a bonus and raise that year because I stood my ground. However I never threatened to resign, they would have had to fire me to get me out of there :)


If you didn’t threaten to quit then it’s not an ultimatum at all.


That's not true at all. You can say you'll (go to the media/refuse to sign off on the regulatory paperwork/refuse to change the code that way and they don't have anyone else who can do it/refuse to change the password) or any of a number of other things.


I once worked on a federal IT contract where the project manager for the team was from another company. He was a dishonest, backstabbing snake, and it reached a point where I was quite fed up.

I told my company that I was fed up and the only way I would continue in that situation is if I was given a sizable raise because I wasn't paid enough to put up with him. They gave me the raise rather than having me walk. I worked there for several more years after that.

I never issued an ultimatum before or since. Maybe there are people issuing threats all the time, but it seems to me that people usually do that if they're frustrated but want to stay at the company. For IT folks with desirable skills it's far easier to just get another job.


I told my boss I won't work past 6pm and that I won't bring my work phone on vacation. I've had reports tell me they'd leave the team unless they can see a certain rate of career growth. No instant firings on either side.


As a manager, it's your job not to push people into a corner where they need to make an ultimatum. If your company is ethical, you should be able to navigate this.


Well, you're assuming she was pushed "into a corner where they need to make an ultimatum". All I know is that she used an ultimatum to challenge/corner her manager, and he decided to discontinue their working relationship.


She certainly felt that she was pushed into a corner where she had to make an ultimatum. As a manager, it's my job to make sure my people don't get into situations where they feel that threatened.


I disagree with your premise. I abide by Andy Grove's philosophy, which is that the manager's job is to optimize value production by a team. Sometimes the manager's objectives are in conflict with those of the subordinates, and there is no way to avoid the problem.

A leader is not a friend or an ally, they are just a leader; the leader can be friendly and supportive, but they are still just a leader.


That is a psychotic reaction that assumes the world is just a Hobbesian nightmare of domination or death.


It is a VC's opinion we're talking about here...


Hey, just so you know, regardless of whether things work this way under /today's environment/ and/or whoever has said it as some "management wisdom", the words you have typed here represent some blatant power-tripping BS to anyone with half a brain.

I hope you've said it with the intention to make a point about how dysfunctional certain managers can become, rather than illustrating a belief. If you can't lead other human beings without having control over them, then please hang up your leadership hat and go do something else for a living.

-Another manager.


Uh, or you could compromise with them and act like two adults? This isn't a zero sum game.


Isn't the definition of "ultimatum" that no further compromise is possible? You can try or offer different alternatives, but if the other person is really at the ultimatum stage then you've both already lost.


> Isn't the definition of "ultimatum" that no further compromise is possible?

Given that, at least in Timnit's narrative, the email included a request to discuss the issue in person when she returned from vacation, I don't think that the "ultimatum" characterization is uncontroversially accurate for the immediate case.


I'm responding narrowly to the subthread here, which is about firing someone who gives an ultimatum in the abstract. I don't know enough about the specifics of Timnit/Google's situation to pass judgment. (I'm also an employee there, so doing so would be unwise and a potential violation of confidentiality rules if I did know anything.) To me I'm filing it under "Everybody sees through their emotions, and different people will have different perceptions of what actually happened and what people actually intended."


Your point seems to be that not being willing to compromise on a specific point means the employee is lost forever.

There are tons of issues I wouldn't compromise on, and I would rather leave the company than yield on them. Does that mean I'll be fired the very second these subjects become remotely relevant and/or I make clear where my boundaries are?


If someone wants to negotiate, they don't use an ultimatum.


I understand your sentiment, but in my experience ultimatums are a common negotiation tactic and are rarely "true" ultimatums.

"X is the lowest price I can sell for, take it or leave it" "How about X-10?" "Done"

"I want to read 5 books!" "You can either read one book before bedtime or go to bed now" "How about 2 books?" "Okay, but then straight to bed!"


Well there are a few factors which make this situation different. If the ultimatum had been made in person, I'd think there might be room for negotiation, depending on the relationship between manager and subordinate.

Putting the ultimatum in e-mail form really raises the stakes, because there may be other people CC-ed or BCC-ed, and any response could later be weaponized.

If the relationship was already troubled, anything like an ultimatum is an opportunity for the manager to be rid of all their troubles.

The level of the threat also comes into play, and more severe threats increase the risk/tension. If the guardian had threatened to disown the child rather than send them to bed, we would read the situation differently.


I agree. An ultimatum given in an e-mail is more difficult to treat as a negotiation tactic, and it seems like there is much more to this story than we will ever know.

The way this was handled doesn't make any of the involved parties look good.


If someone is using an ultimatum, they don't request an in-person discussion.

The narratives from the participants on the actual communication differ on key points relevant to evaluating whether it was really an ultimatum.


Her description, quoted from a Wired article which she has re-tweeted (which I interpret as an endorsement) is as follows:

"Tuesday Gebru emailed back offering a deal: If she received a full explanation of what happened, and the research team met with management to agree on a process for fair handling of future research, she would remove her name from the paper. If not, she would arrange to depart the company at a later date, leaving her free to publish the paper without the company’s affiliation."

I read that as an explicit ultimatum.

https://www.wired.com/story/prominent-ai-ethics-researcher-s...


It's also not a static one-period game. You can ask them to reconsider.


I don't want to work for a manager who will never think: "Huh, this situation is serious enough for someone to make this kind of ultimatum. Who is right and why? Let me take a moment to think about it with an open mind and pick the most appropriate reaction regardless of what I previously thought."

Sometimes the person making an ultimatum is right, sometimes they're wrong. It shouldn't be as adversarial as viewing yielding as weak. Insisting on always "winning" is in my view the weak position.

Additionally, firing someone is not always legal in some countries, even after an ultimatum, assuming they pick the wording of their ultimatum carefully (e.g. "I may very well resign if/unless [desired condition]") to retain control over whether they will later finalize their conditional decision to resign.

As one example, in Quebec, employees who don't qualify as "senior management" and who have been employed at a company there for an uninterrupted period of at least 2 years cannot legally be fired without what the law considers good cause, period, not even if the company gives them a notice period or pay in lieu. Any alleged noncompliance or misconduct that falls short of the most extreme examples must be first dealt with a graduated process of progressively stronger discipline, and it must be possible for someone to recover from that instead of having the outcome of the process as a foregone conclusion. There is a government tribunal to which an aggrieved party can appeal if they aren't happy with the outcome, with the power to order remedies including back pay and even reinstatement.

Similar things are found in many European countries, though certainly not all.

Of course, ultimatums with more definitive wording like "I resign if/unless [condition that the listener has control over]" -- note the absence of hesitating words like "may very well" -- can irreversibly become an effective resignation worldwide, based on choice of the listener on whether to satisfy or reject the condition.


Speaking as another manager, it sounds like you just can't handle a hard conversation.


> but the bottom line is that if you yield, you've given up all control.

This doesn't seem logical to me. I don't doubt there are indeed scenarios where this is true, but as an absolute, this doesn't resemble my real world experience at all. It seems like kind of the opposite of how human interaction should work.


Indeed. The very idea of using words like "yield" or "control" belies a fundamental weakness - managers who are so insecure that they can't ever change their minds in case someone figures out how mediocre they are (when in fact the opposite is true - listening to your expert employees and allowing them to change your opinions is seen by them as a sign of strength).


I'm sorry, but that's no more true than when the ultimatum is the other way around.

I think that statement presumes some degree of unreasonableness. Honestly, I value having employees that have principles and clear boundaries, if for no other reason than I can rest assured that when I'm not observing/involved, those principles and clear boundaries are still there. Now, if those principles are, "I won't accept that paying me gives you any kind of authority over what I do", then you know that's not going to work out for anyone involved. However, if it is something like, "You can't pay me enough to do X", and I have no desire for them to do X, I'm really okay with that.


As a manager, your job is to manage people to get results, and if you are insecure enough about managing those people that you feel you have to enforce some sort of idiotic one size fits all policy then you shouldn't be a manager and should resign yourself, immediately.


This sounds incredibly inhumane.


Sorry, I strenuously disagree. I'm also a manager and think setting boundaries is essential, up, down, and sideways.


Can you post a reference? Some people are issuing a lot of ultimatums to me these days.


Well, I agree that if you're weak and need yes-men working for you, that's certainly the way to manage. Another way to do it is hire dumb & docile.


Isn't the academic journal peer review process generally an anonymous feedback mechanism? Why does this need to be different?


This isn't (or wasn't) a review process for scholarship. Oodles of people within Google (even within Brain) have gone through this process and it seems to have always been the case that it just checks for things like PR problems, IP leaks, etc.

Further, she claims that initially she was not allowed to even see the contents of the criticisms, only that the paper needed to be withdrawn.

Let's say you were working on a feature. At the 11th hour, just before it hit production, you get an email telling you to revert everything and scrap the release. Apparently somebody in the company thought it had problems but they won't tell you the problems. Then after prying you do get to see the criticisms and they look like ordinary stuff that is easily addressed in code review rather than fundamental issues. They still won't tell you who made the critiques. Would you be upset?


Seems like part of this was that the paper represented a PR problem.


And their handling of it has only given them more PR problems.


Because this isn't peer review — or at least, it's not meant to be (per the top-level comment). That's the whole issue, really: there already exists a peer review process to ensure the paper's academic rigor, so why is Google hiding behind a claim of the necessity of anonymity for a corporate (not academic) process?


Because typically your academic journal peers don't work for the same bosses you do.


> academic journal peer review process

From my understanding, this paper had already passed peer review and been accepted. Google management then decided to block the publication using the IP review process.


Please go read the link first. Jeff clearly states that Google has a review protocol for journal submissions which requires a two-week internal review period.

Timnit shared the paper a day before the publication deadline, ie, no time for internal review, and someone with a fat finger apparently approved it for submission without the required review.


That's not under dispute. What's under dispute is:

1) Is the review protocol that requires a two-week review period a peer review process intended to maintain scientific rigor, or an internal controls process intended to prevent unwanted disclosure of trade secrets, PII, etc.?

Repeating the comment at the very top of the thread:

> Maybe different teams are different, but on my previous team within Google AI, we thought the goal of Google's pubapproval process was to ensure that internal company IP (e.g., details about datasets, details about Google compute infra) does not leak to the public, and maybe to shield Google from liability. Nothing more.

If it's not a scientific peer review process, arguments about why scientific peer review is generally anonymous are irrelevant, just like arguments about why, say, code review is generally not anonymous would also be irrelevant. It's a different kind of review process from both of those.

2) In practice, is the two-week review period actually expected / enforced? Other Googlers, including people in her organization, are saying that the two-week requirement is a guideline, not a hard rule, and submissions on short notice are regularly accepted without complaint:

https://twitter.com/le_roux_nicolas/status/13346245318860718...

https://twitter.com/ItsNeuronal/status/1334636596113510400

https://twitter.com/lizthegrey/status/1334659334689570817

(I don't work for Google, but I work for another very IP-leak-sensitive employer that does ML stuff, and we have a two-week review period on publications. The two-week rule exists for the purpose of not causing last-minute work for people, but if you miss it, it's totally permissible to bug folks to get it approved, and if they do, it's not considered "someone with a fat finger." It certainly doesn't exist for the purpose of peer review - it's assumed that the venue you're submitting to will do review, and I think everyone understands that someone from your own employer isn't going to be a fair peer reviewer anyway. There is a "technical reviewer" of your choice, but basically they just make sure you're not embarrassing yourself and the company, and there's no requirement for how deeply they review. I think I've gone through the process twice and missed the deadlines both times.)

So, if this "rule" exists on paper, but only exists in practice for her, then this is the textbook definition of unfairness.


BTW, a couple more examples of Googlers saying this isn't a firm deadline by any means and one-day reviews are quite permissible:

https://twitter.com/william_fitz/status/1335004771573354496

https://twitter.com/mena_gonzalo/status/1335066989191106561 (an intern!)


Papers differ. A short, straightforward, low-impact paper on a non-controversial topic could probably be reviewed at a glance or even rubber-stamped. A long, complex, high-impact paper on a controversial topic (or worse, a paper with a fundamental conflict of interest) might take a long time and definitely can't be rubber-stamped. The paper in question seems to fall under the latter category. It's like running a stop sign; 99 times you do it in your neighborhood with no one around and there are no consequences whatsoever, but the one time you do it downtown with a cop parked right around the corner, you get a ticket.


I think the "skipping a stop sign" analogy doesn't quite work because there was someone around - someone had to approve it, and furthermore, the fact of the late submission and shortened approval is recorded in the review system. If they wanted to tell people "Hey, in the future, don't do that," they could. There'd be more of an argument there if the common case was that, say, people ignored the system and submitted anyway and hoped nobody would notice.

(... Also, comparing this rule to our overpoliced society where everyone commits some sort of crime and the police just choose who they go after kind of reinforces my point about unfairness. Sure, it may have been strategically wrong for her to not do everything by the book, but if so, it's very interesting that the in-house ethicist has to play by all the rules to not get fired and the practitioners can safely skip them.)

Anyway, the culpability for rubber-stamping this paper is on the person who rubber-stamped it, given that short approvals are commonplace. Saying "You should have known that this approval didn't really count, so it's your fault for going through the normal process and not realizing it should have been abnormal" is nonsense. That's literally the job of the reviewer, and if the reviewer can't do that, someone else needs to fulfill that role. At worst, if they told her on day one "Your job is publishing high-impact papers with fundamental conflicts of interest with the company, so everything needs detailed review from X in addition to the usual process," that would be different. But they didn't. Better yet, they could have flagged her in the publication review system as needing extra review. There were lots of options available to Google if they weren't trying to make up rules after the fact to censor a researcher.

And in any case, she gave advance courtesy notice of the planned work: https://twitter.com/timnitgebru/status/1335018694913699840 Someone could have said something then. They didn't.


> Anyway, the culpability for rubber-stamping this paper is on the person who rubber-stamped it, given that short approvals are commonplace. Saying "You should have known that this approval didn't really count, so it's your fault for going through the normal process and not realizing it should have been abnormal" is nonsense. That's literally the job of the reviewer, and if the reviewer can't do that, someone else needs to fulfill that role.

This is key, and I don't see it being mentioned as much in other comments. It was approved.


It got approved without the review, from what I understand.


By peer review, I mean review by fellow academics, not Google management.


"and unexpected roadblocked at the last second "

This is essentially false. The author submitted the paper the day before publishing; given that there was at least some form of standard review, the actions by Google cannot be construed as a 'roadblock'.

There was no 'roadblocking', and the review was certainly not 'unexpected'.

The constant misrepresentation of the facts in this situation is harmful for those ostensibly wanting to do good.

"This is why understanding who raised these concerns is important."

Since there was no roadblock, this answer makes no sense.

The more likely answer is that the researcher wanted a named list of the people she perceived to be her personal enemies.

"Failing to cite some recent research is rarely grounds for rejection."

There doesn't seem to be any reasonable cause for major concern in this whole issue - it seems the company raised some points, and she could have managed them reasonably in professional terms.


I’ve personally submitted papers for this form of review on the same timeline that she did. No problems. So no, I don’t consider the method by which her paper was rejected to be normal practice.


Given that internal prepublication review at every company I've ever been with is merely there to avoid IP leakage, I find it very hard to believe that the feedback was given in good faith. It's like the oil industry claiming that a climate change paper isn't talking enough about the economic benefits of growing citrus in Alaska. Quite frankly, there's simply no reason to address them, because the problems with BERT exist with every LLM.

Google stepped in and changed the procedure for this paper because they were embarrassed by it and wanted to spike it.


Unless she lied in her first e-mail, which it doesn't seem like she did, the reason she made those demands is that they asked for a retraction of the paper without indicating why the paper should be retracted.

Asking for the identity of people who have the authority to demand a withdrawal of your research without stating their issues with it seems understandable, if excessive.

But maybe I misunderstood something.


AIUI, they asked her to retract it because she submitted it before getting final approval, and then they in fact decided not to approve it.


Dean's statement is clear that it was approved before being submitted:

> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted. A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. [...] We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.

There is no statement at all of how to reconcile "approved for submission" with "didn’t meet our bar for publication", which probably means that there is no reconciliation, and the cancellation was done outside normal process.


I see what you mean.

I wonder if he is trying to say that there was a process error: it was approved without review (in error), she sent it out, and then they came back to her and said "wait, no, you can't publish that after all".


Sounds like the “editor” or analogous person didn’t wait to hear back from the “referees”. Whose fault this is is not made clear.


Big oil companies don't pump out damning studies on oil use, and big tech companies won't pump out research damning the use of their tech.

If people have an expectation that Google will turn out academically pure research, then I certainly respect the position and encourage it. But thinking like that means life is going to contain a bunch of surprises that really shouldn't be surprising. Google is simply not going to employ people whom they recognise as undermining the success of Google. It is not feasible to run a company that way; roughly speaking, companies can choose between ruthlessness and bankruptcy. If you expect tolerance of radicals and debate, look to the universities.


I think the difference here is that the research may have shown that Google was unintentionally breaking the law, and after Google realized this, they used an existing review process in a way they don't normally use it to block publication.

The possibly shady part is that they could be suppressing evidence that they broke the law, but, like you said, they can decide how to run their own business. I'm not even sure if the researcher would be a whistleblower if they didn't intend to report something illegal.

To make matters worse, in this case at least, the law or laws they may be breaking were established to protect a class of people the researcher is a member of.


> If you expect tolerance of radicals and debate, look to the universities.

Universities used to tolerate radicals and debate. But going by the copious media reports of the last few years, that doesn't seem to be how they operate any more.


Yeah, but then they should get called out for wanting the good PR of having an AI ethics team without the headaches of having their poor ethical standards exposed.

You can't have it both ways.


Companies are given life under the premise that they provide a social good. They get a charter from the government. They are what's called a legal fiction. Government should demand AI ethics instead of letting the companies self-regulate. (After all, most AI is developed with government funds.)


But Google has published a number of papers that have critiques of the ethics of AI...why would this paper be different?


I remember when Google made a whole big deal about their "AI Ethics Board" (or something along those lines), and then not even a year later they reshuffled it because those people were too critical of the company's practices.

And then when there was backlash they "promised to do better" and Sundar Pichai came out with some "principles" that the company would follow for AI.

Another 1-2 years later and here we are again - this just proves that whatever "AI Ethics Board" they might set up, it will end up being a sham, because they'd never allow that board to stop them from using AI however they like if it's in the interest of the company's profit growth.

If we want real AI oversight we need to demand it from outside nonprofits or even government agencies (why not both?!) - and there should be zero affiliation between the company being monitored and those organizations/agencies.


At the end of the day, the incentives for large companies are always monetary.

It might be that they follow ethics because the appearance of doing so has a monetary public-relations value. It always comes down to that, and for publicly traded companies that set up things like an "AI Ethics Board", it is always for show, since the incentives don't allow for anything else.

At the end of the day, someone's compensation depends on these things, and you can't be hurting the bottom line.


This has nothing to do with being a publicly traded company.

The founders have a controlling stake.


> Jeff gives the impression that they demanded retraction (!) because they wanted Timnit and her coauthors to soften their critique. The more I read about this, the worse it looks.

I get the impression that she wrote a hit piece on Google and published it under Google's name. For me, it's right that they demanded a retraction. It's simply unprofessional to critique your company for something while not mentioning the work they're doing to combat it.


> It's simply unprofessional to critique your company for something while not mentioning the work they're doing to combat it.

It would seem deeply problematic for an AI ethics researcher to be expected, when critiquing their own employer, to mention all of the employer's work to ameliorate bias or ethics problems, but to face no similar expectation when critiquing other companies. Is the point of having an ethics researcher to expand our understanding of ethical issues, or merely to aid in PR?

If a university administrator were to attempt to tell a PI not to submit a paper critical of work from another lab at the same institution, I think that would be judged as a shocking overreach. But for Google, we're not even in agreement that this behavior is a problem. It's unfortunate that we expect so little from corporations, even if those corporations are some of the main drivers of research in a field.


That's not the assertion here. A good researcher should be aware of the current state of the field, and thus mention current efforts to solve a problem when discussing that problem. Regardless of who's doing the work.


I agree with you that a researcher should be aware of current work relevant to the problem under discussion. Jeff's quoted email states that part of the issue was that Timnit's paper "ignored too much relevant research", without specifically saying whether the unmentioned relevant research was done by Google.

But the parent comment to which I responded, and which I quoted, specifically said the problem was criticizing Google while working for Google, and seemed to approve of judging that unacceptable.

> I get the impression that she wrote a hit piece on Google and published it under Google's name. For me, it's right that they demanded a retraction.

The "regardless of who's doing the work" part is key, and not all participants in this conversation are on the same page about it.


> not all participants in this conversation are on the same page about it.

I don't see anything contradicting that, merely a failure to add the "regardless" footer. Admittedly, sometimes I read too quickly.


That's what the peer-review process is for, though -- for the peers to suggest that their papers get cited a few more times ;)

Seriously. Choosing who to cite is a discussion and a battle. It's not done thoughtlessly.


Company with unethical practices hires ethics researcher

Ethics researcher publishes piece critical of company's activities

Company is shocked as to how this could happen.


Ethics researcher omits relevant research in paper to smuggle in a particular agenda.

Company calls it out.

Researcher realizes the limits of her narrative setting powers.

Non-vocal majority is happy to see runaway activism being curbed in corporate settings. Long march through institutions is slowed down for a day.


It's not "runaway activism" if you hire ethics researchers who find ethics problems.

Reasons for not citing research, especially recent research, range from lack of relevance (even though environmental improvements could have been made, they were not, so the actual impact wasn't lessened by them at all!) to simply not having known about it. The correct reviewer response would range from "accept with corrections" to "revise and resubmit"; retraction is overboard. Moreover, that's the role of a conference reviewer, not the employer. Once your employer starts interfering with what you can and can't publish, it's time to find a new affiliation indeed.


> It's not "runaway activism" if you hire ethics researchers who find ethics problems.

The fundamental difference between activism and research is that activism sets the agenda ahead of time, while research explores the knowledge space and reports findings. One helps us incrementally make better sense of the world; the other wants us to narrow in on particular facts while omitting other relevant facts in the name of advancing a cause.

> Reasons for not citing research, especially recent research, range from lack of relevance ... to simply not having known about it

The omitted research is clearly relevant. Deciding what is relevant and what is not is precisely what narrative warfare is. If they did not know about the adjacent research, then they would simply be incompetent researchers (which I highly doubt is the case).

> The correct reviewer response would range from "accept with corrections" to "revise and resubmit"; retraction is overboard.

There is no retraction because there was no external publication. Jeff Dean states that the reviewer response you describe was given, but it was ignored by the approver.

> Once your employer starts interfering with what you can and can't publish, it's time to find a new affiliation indeed.

Corporate researchers are still employees and are bound by a job description. Independent of the content of the research, it is also entirely within the rights of the employer to require that a certain bar of quality be cleared. In this case it seems like Google didn't want affiliation with this paper, not the other way around.


You seem to have an exceptionally narrow view of research; notably, that you can't start with a thing you want to prove, which is in fact step 2 of the elementary-school scientific method. You disagree with the conclusions, so like Google, you have retroactively declared the research incompetent. You seem to think this is within their rights, but this renders any future research from them irrevocably tainted -- from now on, it's no more than Google PR.


> notably, that you can't start with a thing you want to prove, which is in fact step 2 of the elementary-school scientific method.

On the contrary, I totally agree with you on this: a researcher needs to pick a particular part of the combinatorially explosive knowledge space to explore, and in that they get to be opinionated about which hypothesis they want to prove. What they can't do, however, is ignore opposing research that conflicts with their propositions. This is precisely what Jeff Dean is talking about in his second letter: you need opponent processing to overcome self-deception and bullshitting.

You can't have opponent processing when you omit relevant research, try to steamroll the review process, throw a tantrum when your paper is found lacking, and demand names to further your agenda through social engineering.

It is not on me to prove if and why I disagree with the conclusions; it is on the paper to prove that its assumptions and methods were sound to begin with, if its conclusions are to be taken seriously. And they were not.

> but this renders any future research from them irrevocably tainted -- from now on, it's no more than Google PR.

On the contrary, this move increases trust in Google research; trust would have lessened had they buckled under activist strong-arming.

If folks think this was a sign of broken research machinery, they are free to ignore all future Google research, at their own risk for competitive disadvantage.


Ignoring future Google research in ethics carries low risk, as top ethics talent will certainly avoid working for Google, and prefer academic freedom elsewhere (where reviewers are external, to avoid CoI).


> as top ethics talent will certainly avoid working for Google, and prefer academic freedom elsewhere

I wouldn't be that sure. I know the "headline narrative" is invested in painting Google as evil (and it doesn't tend to be that wrong in many instances), but the actual sentiment among the general talent pool is very divided in this instance. There is a sizeable percentage who are relieved to see activist pressure being resisted in a corporation and would be inclined to pick an employer on that basis. We all know corporate pressure is not the only threat to academic/intellectual freedom.


"The omitted research is clearly relevant" -- how do you know? Why do you get to decide?

I wrote a paper recently in which I omitted most tangentially-related papers in mathematical physics, as they would not be mathematically accessible to the audience in question and also do not address the questions posed in the paper. A mathematical physicist wrote to me and was grumpy about it. Fine, I added his name one more time to make him happy. That's the reality of research papers.

It's clear that Google didn't want to be affiliated with this paper. And it's clear that it's time for Gebru to find a place with intellectual freedom, so her work can be judged on its merits.


You don't have enough context or evidence to assert the things you are asserting.


> It's not "runaway activism" if you hire ethics researchers who find ethics problems.

Agreed, but there's obviously a difference of perspective about what transpired here and I don't think any of us knows with certainty what the truth is. Finding ethics problems is grand and all, but framing a narrative that misrepresents those problems is highly problematic, particularly if they are in a role that makes them the leading ethical voice of the company.


> I get the impression that she wrote a hit piece

Hopefully the paper gets leaked so we can judge for ourselves.


I've read it internally. It's rather bland and doesn't actually indict Google. I'm inclined to believe that it was a paper-submission-process issue.


That disagrees with what Dean himself is saying...


Self-critique is more precious than gold, particularly when you have the scale of influence that Google does.


Academic papers are not hit pieces.


What if they aren't doing work to combat the problems? As a citizen I want to know.


For me, it's pretty unprofessional (and cowardly) to hire a well-respected ethics researcher, write some PR pieces about how the company takes ethical actions seriously, and then tell her that her publications have to follow the party line and cannot overly criticize the company.


That was my impression as well.

Having said that, if Jeff were to make public the paper, criticisms of the paper, and improvements made to address the problems described in the paper, that could go a long way towards clearing the air.


Abstract from a reviewer (source: Reddit /r/ml)

Abstract

The past three years of work in natural language processing have been characterized by the development and deployment of ever larger language models, especially for English. GPT-2, GPT-3, BERT and its variants have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pre-trained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We end with recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.

Anyone have the actual paper?


Link to the Reddit thread:

https://old.reddit.com/r/MachineLearning/comments/k69eq0/n_t...

I have plenty of experience in Natural Language Processing (NLP), but I am not an expert in ethics and bias, although I have read my fair share of papers on the topic. To me, the abstract comes across as modest, very reasonable, and exploring questions highly relevant to the community as a whole. Sure, if I were to review it I would be picky with regard to any strong empirical claims, since I suspect it would be difficult to conclusively demonstrate some aspects of what they hint at in the abstract. But as a position paper it looks better than plenty of work already published at top-tier NLP venues, and I have little doubt it could be accepted on its academic merits.

Still, to echo the parent, does anyone have the paper in its entirety?


Oh yes, absolutely. I would love for Timnit or Jeff to release this paper as-is so we can see what happened. Could be a good read.


The draft is already floating around the web.


link?


A second-hand account from a VentureBeat journalist is the best I could find [1]. As a researcher with more than a decade of experience in Natural Language Processing, what the journalist describes regarding the paper's content seems non-controversial and nothing out of the ordinary for this kind of work. If anyone could find an actual leak, I would be more than excited to look at it with my own eyes rather than filtered through someone else's.

[1]: https://venturebeat.com/2020/12/03/ai-ethics-pioneers-exit-f...


The communication doesn't give that impression; instead it says that the paper makes claims that ignore significant and credible challenges to those claims. Dean said that these factors would need to be addressed, not agreed with.

Publishing a transparently one-sided paper in Google's name would be a problem, not because of the side it picks, but because it suggests the researchers are too ideologically motivated to see the problem clearly.

Ironically, it indicates systemic bias on the part of the researchers who are explicitly trying to eliminate systemic bias. That's just a bit too relevant to ignore.


If that is indeed the reason they demanded retraction, why didn't they state it up front in the meeting where they told Timnit she needed to retract the paper or remove her name? Instead, they initially refused to tell her the reasons for the demand.

They didn't give her a chance to address those factors at first.

Later they had a manager read the confidential feedback on the paper in question, but still didn't let her read it herself.

If that feedback was only saying that the paper lacked relevant new context and advancements, why were they being so cagey about it? Something doesn't smell right about that.


> Jeff gives the impression that they demanded retraction (!)

In paper reviews you can often see reviewers asking the authors to rewrite, clarify, add extra experiments, add missing citations. It's all normal.


Usually those are "accept with minor revisions" or "revise and resubmit". Rarely are they grounds for complete rejection. This is extra true for internal review, since the actual conference review process would provide an additional layer to ensure that the scholarship was strong.


"Jeff gives the impression that they demanded retraction (!) because they wanted Timnit and her coauthors to soften their critique. The more I read about this, the worse it looks."

Representing a more truthful reality is not 'softening'.

It's only 'softening' for those who have an already accepted, extremist view, and for whom any evidence to the contrary doesn't help their arguments.

While I was initially sympathetic to the author, the more I read, the more I come to completely the opposite view.


Google isn't a publicly funded academic institution. Whatever they are doing, in particular publishing, is part of the business/PR. So if the management sees something that is not good for business, it is reasonable that they decide not to do it. If I were a shareholder, I can see how I might question why a person being paid $1M+/year (my understanding is that this is the minimum a manager in AI at Google would be making) is publicly disparaging Google.

Even more, it sounds like Google didn't originally ask for a retraction; they just asked her to take into account the newer research contradicting the paper - something that any researcher valuing integrity over agenda wouldn't refuse.

If somebody wants to do that research and publishing, they just have to find another source of funding, I guess.

Anyway, the firing wasn't over the paper; it was over the unacceptably unprofessional reaction to it.


> If I were a shareholder, I can see how I might question why a person being paid $1M+/year (my understanding is that this is the minimum a manager in AI at Google would be making) is publicly disparaging Google.

Salary aside (I do doubt she earned $1M+/year; my guess is more in the ballpark of $300k~$500k, and either way it's not really denting Google's finances), you are not wrong, but it's also worth noting that companies can (and for many reasons should) be about more than maximizing shareholder value.

Also, if I'm being completely honest, from a PR perspective this could be worse than Timnit's paper might've been, given how public it has become and the people involved. People internally are perhaps more comfortable having that paper unpublished and not having Timnit in their ranks, but as far as PR for Google goes, this isn't great.


Yes, this is absolutely far worse than just letting the paper be published. AI ethics papers are not exactly the kind of material that gets a lot of conversation outside that world at the best of times, but Google firing a black woman for speaking up is the kind of thing that definitely does get talked about (as we can see here).

But that aside, Google should want this kind of paper published. They absolutely should want to know and discuss every possible weakness in the ethics of their approach to AI. Google has a scale of influence so large that how they act in areas like AI trickles down to many other organisations. To me, that gives them a responsibility to make it as ethical as is reasonably possible, and that will only happen if experts are allowed to speak freely.

One can make short-term arguments about how that hurts them, but the long-term damage of getting massive AI systems wrong will be far, far worse.


Even from the narrow view that in-house academic work is part of the PR budget (which I disagree with), Google has made a huge mistake here. This is a giant PR black eye for them. If the game is to pretend to have in-house ethical checks (say, to avoid actual regulation), then they need to at least generate the appearance of independence. The correct sinister move here would have been either to keep her on staff and give her the runaround, or to manage her out the door in a way that left her not particularly angry and with a signed non-disparagement agreement.

But as others point out, it's entirely in Google's long-term interest to have internal critics who prod Google and the rest of the industry toward better long-term behavior. So I think it makes good sense for them to have independent academics who occasionally make people uncomfortable.


From a certain narrow, selfish perspective it's reasonable for Google to not want to have an AI ethics department placing a check on their leading edge research at all. Fortunately, we don't live in a world where corporations are the ones to determine right from wrong with total impunity.


> AI ethics department placing a check on their leading edge research

That reminds me of how in the USSR every non-minuscule factory, organization, etc. had "the department #1" - an ideological check-and-control department which, at sufficiently large or important organizations, even included KGB officers.


You have identified a similarity between two situations, but it is not a similarity that matters. The distinction that matters is one of normativity, and on that measure there is clearly no equivalence to be drawn here.


Every time it is the same: somebody gets the power to enforce the prevalent ideology of the time and place; they happily do it under the premise that it is the most right and good ideology; and, being such visibly pious followers and strict enforcers, these self-declared occupants of the high moral ground start to feel and behave as though they are more entitled and better than others. They hijack the cause and frame any disagreement with, or critique of, them as a heretical attack on the cause. The main point here is that once something becomes an ideology, "right", "good", etc. gradually lose any meaning in that context, and the only thing which really continues to matter, and grows more and more, is the enforcement of the ideology.


You are right that there have been many iterations of normative standards, but that does not imply that all situations, ideologies, positions and so on are equally correct. It does not mean that we should stop trying to do better, nor that we have made no progress through these efforts toward a better world.


No, they're describing a particular scenario where the Political Officers of those norms wind up being a sick joke of careerism and weaponized ideology.

The Soviet Union was about equality for workers. Who could be against that?


I think you replied to the wrong comment. You start off by disagreeing with something but it does not seem to be anything I wrote in this thread.


I should have been more precise. The phenomenon the other poster was describing is independent of a particular norm or ideology. Talk of evolving norms misses their point.


I see. Yes, any norm or ideology can and often does grow cancerous and counterproductive. What I mean to do is cancel one implicature instantiated by that statement. It's not a reason to be a nihilist, or to stop holding things accountable in a normative sense, in this case as justification for giving Google unchecked free rein of AI development. That the Soviet Union preached and botched "equality for workers" doesn't make it any less important an issue, and indeed we could see every failure toward that end as progress, as in "finding 10000 ways that don't work".


Even the fact that the USSR had factories at all makes the concept of a factory suspect to me.

Should we really keep manufacturing cars using the same tools that Stalin used?


In most cases, yes. In this case, though, the paper was about bias in Google's AI models, so it might not be just a business decision: the racial bias described in that paper might result in a disparate impact on users, which could be in violation of state or federal law.


This is pretty dystopian ...

1. There exist laws to prevent discrimination against people based on protected attributes.
2. ML models make predictions based on attributes without interpretability (it's not possible to prove that protected attributes are not factoring into model predictions; see the sketch below).
3. Empirical observation that a model proxies a protected property exposes the corporation to liability for regulatory non-compliance.
4. Therefore any study that could expose bias of a model used in production is to be roadblocked or prevented...
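
To make point 2 concrete, here is a minimal sketch in Python (entirely synthetic data and made-up feature names, not anything from the paper or any real Google system) of how a protected attribute can leak into a model's predictions through a correlated proxy even after it has been dropped from the training features:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic protected attribute (e.g. group membership); never given to the model.
    group = rng.integers(0, 2, size=n)

    # A proxy feature correlated with group (e.g. a coarse location code),
    # plus one legitimate, unrelated feature.
    proxy = group + rng.normal(0.0, 0.5, size=n)
    legit = rng.normal(0.0, 1.0, size=n)

    # Historical labels that were themselves biased against one group.
    y = (legit + 0.8 * group + rng.normal(0.0, 1.0, size=n) > 0.5).astype(int)

    # Train only on (proxy, legit) -- the protected attribute has been "removed".
    X = np.column_stack([proxy, legit])
    model = LogisticRegression().fit(X, y)
    scores = model.predict_proba(X)[:, 1]

    # The scores still differ sharply by group: the proxy re-encodes it, and
    # nothing in the fitted coefficients "proves" the protected attribute away.
    print("mean score, group 0:", scores[group == 0].mean())
    print("mean score, group 1:", scores[group == 1].mean())

Because the proxy carries the group signal, auditing the model's input features alone can't establish that protected attributes play no role; empirical bias measurements end up being the evidence, which is exactly the liability described in point 3.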

To combat flows like the above, it seems regulators are going to need to update the rules with third-party audits and an incentive structure that encourages self-regulation and de-risks self-detection and self-reporting of unintentional violations. Ideally, Google should not be put in a position where it is incentivized to police its own AI ethics research to ensure that such research doesn't expose its own illegal/non-compliant activity.


A company can still protect itself by fixing the model and delaying publication of a study about its bias until after the statute of limitations has expired.

In this case, there were recent changes to the statute of limitations for CA laws that extended it from one year to three years, which could be why this whole process seems weird.

https://www.ebglaw.com/news/ab-9-extends-employees-statute-o....


Hmm... makes one curious. Google must have either an AI to rate employees or one to screen interview candidates.


Well, imagine a manager in your company publishing a paper stating that your company's products are probably violating state or federal laws - all without raising the issue up the proper management chain, without working through the correct procedure with the compliance and legal departments, and without going to law enforcement if the violation still continues after all that.


At least when I was there, my papers were thoroughly reviewed, and I often had to make some adjustments before getting approval. It never occurred to me to make demands of the reviewers or to threaten to resign if my paper wasn't immediately and unconditionally approved. Seems like she's asking for preferential treatment.


She's the one asking for an option to revise and discuss it. Management is the one demanding an unconditional retraction, with no recourse.


Do you want to know what's interesting? I read a lot of computer science research, particularly what comes out of Google. It's clear to me that details are left out of specific papers, especially how things are done in subsystems. But, like a jigsaw puzzle, I discovered that many papers are actually descriptions of computing systems and algorithms that interact. If you read between the lines and squint your eyes, you can get a much bigger picture of internal Google AI systems than you guys think you can.


Would love to read a blog post of your observations about that


Oh yeah, absolutely. It's not at all hard to see the systems view of this.


This response really seems like gaslighting. He doesn't address her concerns and glosses over whether she was held to a different standard than others at GR.


He was also extremely vague, perhaps intentionally, about what the issue actually was. His sentence about when the paper was submitted and approved is impossible to parse; you can't make sense of who did what and when.


Of course he was vague. This just happened, tensions are high, and no doubt Timnit is talking to an employment lawyer to find out what both parties' rights and obligations are, and I'm sure Google's lawyers are also getting all their ducks in a row.

This is spin at best, gaslighting at worst. We'll never get the full story (and should we? It is an internal company matter made public, after all).


>We'll never get the full story (and should we? It is an internal company matter made public, after all)

Not really sure what the point of an 'ethical AI' department is when there's no transparency or accountability facing the public. If its work can be cancelled internally at any point that it threatens the company, you've basically recreated some kind of Soviet ministry of truth.


I think you misunderstand. This is an HR matter, not an "Ethical AI" matter.

The official outputs and products of that department should (hopefully) be public and shared, I 100% agree with you. But that's not what this is about.

This matter in particular is an internal employee/employer dispute and dismissal, and is only public because of the high profile of the persons involved.

And what I meant by saying we'll never get the full story is that these kinds of situations are always more complicated than they appear. We are only seeing the tip of the iceberg, and are not privy to the history that led to this moment.

These kinds of things don't just happen out of nowhere.

If I had to guess who's "more right" here, I'd side with Timnit, personally... but again, I don't have all the facts, so it's just a gut feeling based on what I know about how large enterprises work.


The point is to do foundational research, and to help Google ensure that its AI development complies with its own ethics. Google did try to set up an AI ethics board for accountability to the public, but it fell apart, because many segments of the public have ethical views which were seen as unacceptable. (https://www.cnbc.com/2019/04/04/google-cancels-controversial...)


If Exxon had a fracking safety department, its goal would be to help Exxon make fracking safer, not to call out Exxon for being an unsafe fracker.


If Exxon had a fracking safety department, its goal would be to help Exxon sell its fracking as safe.


Right -- he once again talks about "accepting" her resignation, when it reeeeeally just looks like they fired her. At the very least, she certainly feels like she was fired; why is that not mentioned at all? Even just, I don't know, "sorry we were abrupt?"


Don't make ultimatums unless you're ready to accept the consequences of those ultimatums.


> ...if we didn’t meet these demands, she would leave Google and work on an end date.

To be fair it doesn't sound like her ultimatum was "I'll leave immediately"— that was forced on her by Google, and is an important detail.


Not really. If your girlfriend told you she was planning on breaking up with you after her birthday, would you stay with her until she did it or would you end it immediately?


Definitely. "If we don't fix this I don't think I can continue working here" is not a resignation, it's a negotiating position.


Sure, but sending out an e-mail accusing your colleagues of racism, exhorting them to stop working, and talking about potential lawsuits isn't. There's no way that Google (or any company) would continue to employ her after that.


> exhorting them to stop working

Perhaps it's just me as a URM, but her email resonated with me, especially this part. I see this position of calling what she did "exhorting them to stop working" often, but this isn't really what she did.

I too care about DEI, but after putting lots of time and effort into it, I saw how futile the effort was in my organization, because there was no real buy-in from higher-ups. I was putting a lot of unrewarded volunteer work into helping with "inclusivity" and talking about the problems/solutions, but in the end that was all it was for the people we needed action from: "talk". I did eventually decide to dial it back and stick to my actual paid job of programming, and although I didn't send an email telling other people their effort was being wasted, if someone came and asked me, I'd tell them not to bother. There are other places, usually further removed from the company and easily PR-able channels, where the effort is better spent.

In any case, I hope you realize your comment is full of hyperbole and the people who think she isn't in the wrong, myself included, aren't being unreasonable. We're smart people too.

> There's no way that Google (or any company) would continue to employ her after that

I agree. None of this comes as a surprise and I'm sure she expected it too; that doesn't mean Google is in the moral high ground.


Citation needed, for sure. I certainly believe that most companies don't particularly value honesty, especially when pointing out managerial flaws. But looking at what she wrote with a manager's eyes, I don't see anything I'd fire her for. But like aylmao writes, I see it more as an impassioned and probably valid critique of DEI work that is more posture than substance. What I see is somebody who really cares about the problem, and who could be channeled into productive work as long as that work truly has impact.

I also suspect that if she'd written the equivalently passionate comment about a technical failure or bad product choices, people here would be cheering her on. Especially if she were a he.


Just had a corporate counsel seminar on this: Under federal and most state harassment/discrimination laws the company actually can’t fire her for any of those “protected” activities unless the accusations of racism are shown to be untrue and made in bad faith.

I personally do not have enough info to decide who is telling the truth in this case.


Judging from most of her writing online (which is all pretty assertive) I think it's far more likely she said "if you don't fix this, I can't continue working here".

And that's both a negotiating position and a resignation.


Something can't be both. A resignation is an unconditional desire to leave. "If you don't fix this, I can't continue working here", though, is a desire to stay.


It sounds like she said "I demand that you do X Y Z or I must resign" and they said "Very well we regrettably accept your resignation. No backsies." and she was like "You're firing me??!"


Dean previously said this [1]

> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.

I find it unlikely Dean would lie about that, not least because the email would be easy to find.

Now, were the actions leading up to that effectively a firing, ie Timnit would have been unable to effectively continue in her role? Quite possibly.

[1] https://www.platformer.news/p/the-withering-email-that-got-a...


And I'm sure that her lawsuit will allege bias, and will include a demand for exactly that information so that she can prove racial discrimination against her.

Which means that Google is likely to have to produce all of the documents that they didn't want to produce.


They almost certainly get to produce them under seal in that case though.


Having your research -- the very rationale for your employment -- squashed by execs suddenly, without explanation, and in a highly unusual procedure should make you question whether you are able to effectively continue in your role, or whether you're simply window dressing.

It looks like Google's AI Ethics team is meant to be greenwashing.


With such public communications, there really isn't much of a chance they'll be detailed and specific. It's why they are so rarely done.


Every paper we submitted went through a technical review as well as legal and IP reviews. They were along the lines of cite this, cite that, run these experiments etc.

What's different in her case is that you don't see the names of the people reviewing. Playing devil's advocate: she MIGHT have a pattern of aggressively attacking people who reviewed her work before, so they might have made the reviewers anonymous this time.


They might have enough of a PR budget to make the Google version of the story stick. But it's concerning that, if what you say is true, they are hoping to make that work by leveraging the public's ignorance of how the Google-specific process works. It's also not the smartest move, since Google is important enough, and public goodwill towards tech is low enough, that journalists will have a field day looking for evidence of double standards or a cover-up. And they're not making it too difficult for the journalists when that evidence can be found in the top comment on a Hacker News post.


This really seems like whistling past the graveyard on Google's part. There's too much meat to the story for them to do much more than obfuscate. The intersection of race and gender, the ethical implications of big tech, the capitalistic pursuit of innovation at the expense of individual freedom: all of these look bad for Google.


We know large language models are super important to Google, and there are lots of competitors.

If they approved the paper, the message would be "Google thinks language models are a waste of resources and racist". There would be no academic debate on this topic, as it's been framed as woke and published by a militant activist, so any disagreement would be racist (see prior interactions between this researcher and other researchers [1]).

That's why the standard process of publishing, peer reviewing, academic critique, etc. would not work.

Why would their researchers working on language models stay when they can go to Facebook, OpenAI, etc.? Why would new researchers join?

[1] https://syncedreview.com/2020/06/30/yann-lecun-quits-twitter...


Academic debate is, in fact, done through conferences and journals. Saying there can be no debate is a strawman position with no basis in reality. The idea that standard rigor cannot be applied to ethics research is absurd, and seems to insinuate that the entire field lacks discipline.

The proper response to her position would be to publish a response or critique. Attacking her entire field does nothing to further the conversation.


The statement that "academic debate is, in fact, done through conferences and journals" is not strictly true, especially given that a lot of reviews at the more popular conferences are very hit-and-miss. You can submit the same paper to the same conference multiple times and get wildly different opinions on it.

The variation in reviewers' responses is often due to their lack of knowledge and unfamiliarity with the problem. Take a look at the recent reviews for some of the more popular conferences on OpenReview.net. Most of the reviews don't have any substance and are often vague or generic.

I'd take reviews from peers I trust, who are aware of my work, more seriously than reviews from conference reviewers.


She's got solid STEM credentials. If Google managed to hire a "woke militant activist" instead of what they wanted... is that really much better?

Demand the best from your multi-billion dollar corporations.


I have to assume both sides here are adults that can deal with criticism of their chosen discipline without immediately resigning, or not joining a specific company over it.


But in this scenario, shielding Google from liability is actually a primary concern, given that Timnit is discussing ethics/bias. A paper on, say, a novel transformer architecture, the lottery ticket hypothesis in a new setting, or a new RL benchmark suite is not going to expose Google to legal risk the way ethical AI research often can.


This. I have been arguing the unpopular opinion that most AI ethics work in corporate settings is not designed to empower real research. It was only a matter of time before an actual researcher with an ethical compass was removed unceremoniously. If you are on an AI ethics team at a large company, you need to know exactly what your job means to the company, because it isn't safe.


Then what's the point of hiring her and people like her to work for Google in the first place? So that Google could claim that they have Ethical AI researchers and Google's AI research is indeed ethical?


Of course, it's like going from the government to a lobbying firm. Everyone knows why you're there and why you're getting paid to be there.


Yes.


How would publishing a paper open a 3rd party up to legal risk? Research papers aren't laws, and it is chronologically impossible for a research paper to influence laws already on the books.


I can imagine a scenario where a politician who wants to pick a fight with Google uses some of the unflattering findings in the published work as supporting material for why Google needs to be regulated/fined/etc: "Google does {bad thing}. Look at this research report from Google researchers! They admit to doing {bad thing}!"


That would be a future act, not making previous acts illegal ex post facto.


A paper from Google saying that Google knows its systems discriminate against minority groups can open Google up to liability in a class-action lawsuit from said minority groups. And the fact that Google knows increases the damages they can seek.

The same paper from outside of Google also creates liability, but now the argument for increased damages becomes about whether Google knew.


That would be more of a journalistic paper than a research paper then. Timnit's research, at least in the past, is along the lines of "Hey, this <thing you thought benign> is not actually benign"


That confused me as well -- where I work, we have legal dept. approval for IP issues, and that's it. Academic review doesn't make sense in that context or time frame.


Similar experience to another current Google Brain researcher: https://twitter.com/le_roux_nicolas/status/13346019609729064...

Submitting conference papers last minute is... normal.


This tweet is on point.


Also, when has feedback on a paper in this process been relayed via HR?

You only bring in HR protections to protect the company from a legal standpoint.


In technical infra I saw papers rejected that were deemed to be not interesting enough (not a big enough novel contribution).


"Pubapproval hasn't been used to silence uncomfortable minority viewpoints until now."

This is sad gaslighting of a reasonable concern the team had.

Namely, having to endure some external review for what could otherwise be sensitive material.

The inability of the SJW crowd to work within very reasonable terms, the resort to aggressive tactics such as demanding the names and opinions of everyone on the board, and the subsequent public misrepresentation of the situation are going to lose them a lot of favour.

Every time I read one of these stories I immediately feel sympathetic to the individual, but then upon learning more, I feel duped and maligned for having been effectively lied to.

The doors are wide open for progress; those who take it to micro-totalitarian lengths are not doing anyone any favours.


Black female scientist Timnit Gebru fired - the end of Google as a top AI research institution?

https://melwy.com/blog/black-female-scientist-timnit-gebru-f...


I don't think the approval process is being used to enforce rigor in general; the (claimed) problem is that the paper lacks rigor specifically with regard to claims about Google's behavior.

Publishing a paper that lacks rigor about some obscure mathematical technique isn't a problem for Google (beyond some possible but unlikely mild reputation damage). Publishing a paper that lacks rigor and says Google is doing unethical things, when those claims are of questionable accuracy, is something Google would (and should) have a problem with.

Whether the paper actually lacks rigor in a relevant way is not something I can comment on.


Why wouldn't you want to weed out bad papers as close to the source as possible, to save the company embarrassment and external people's time? If you see something wrong during a review, why not push it back to the author before it makes the rounds outside the company? Not doing so would seem like a very bad practice to me.


Ditto in my past experience at Microsoft Research. Never an actual review of a paper's merits, just IP and maintaining trade secrets where applicable.


Yes, it is disingenuous for Dean to pretend that this was a normal process applied to a normal situation. Clearly, whatever happened on that team, this latest round was not the beginning or even the most important part. Gebru's letter mentions her threatening to sue Google previously, for instance. [1] The discussion about rigour in a conference paper or internal review is obviously a pretense.

[1] https://www.platformer.news/p/the-withering-email-that-got-a...


The key sentence is the bullshit claim that a paper not offering mitigations isn't helpful. Since when is a scientific paper required to do that?


When I was in academia, it was not unusual for the referees to reject a paper for this reason. Of course, you are informed of that and always have the option to rewrite the paper to include that information.

In some cases, though, it's not simply a matter of listing it as other work in the Intro - you may need to incorporate it into your models, etc.


That's no justification for the rationale that critiques or negative results in general are not paper-worthy or do not advance science in less short-sighted ways.


I wasn't discussing the appropriateness - I was pointing out that this behavior is normal in science. Your original comment was:

> Since when is a scientific paper required to do that.

Unfortunately, for a long time (since well before my time, at least).


Beyond reviewers or editors asking you to make changes like this (for example, "so-and-so just published <blah>, which means that your sentence about <blah> is obsolete"), we're talking about research coming from a corporation. If one part of the company is trashing another part of the company, and it's based on stale results, asking to have the paper updated to include the latest results is reasonable.


If you are actually an academic researcher at an academic institution, and are exposed to a large scope of the dealings of the community, then you might find that professional academia is as political as corporations, if not more so...


Increased scrutiny on minorities speaking up is a common historical occurrence.

