Good grief - this terrible clickbaity headline writing: "We read the paper that forced Timnit Gebru out of Google. Here’s what it says"
The paper DID NOT force her out of Google. Her subsequent behaviour - submitting without approval, rant, ultimatum, and resignation - did. And she wasn't "forced out": she resigned of her own volition. She could have chosen to make improvements to the paper based on the feedback she was given, resubmit it for approval, and then get on with her life, but she went the other way.
The headline from the last discussion on Timnit's exit[0] was awful as well: "The withering email that got an ethical AI researcher fired at Google". So bad in fact that it was changed on HN to more accurately reflect what actually happened: "AI researcher Timnit Gebru resigns from Google" (much more neutral and factual in tone).
Seriously, what happened to journalistic standards and integrity? Why are the actual events being so forcefully twisted to fit a particular narrative? No wonder the general population struggle to figure out what's true and what's not, and fall victim to all kinds of nonsense, BS theories, and conspiracies.
I wish I had a good idea on how to change this behaviour by journalists and publications.
(Clearly this is a problem that goes far beyond Timnit's story.)

[0] https://news.ycombinator.com/item?id=25292386
Basically, every one of those arguments evaporates when you dig into the details. She was doing her job. The paper was anodyne and perhaps even boring. She was rightfully pissed off that some middle manager was telling her and her four coauthors that they must retract their paper (the reviewers later posted on Reddit that they would have been happy to accept edits). Jeff was micromanaging her citation list, which is almost unheard of, apparently. Multiple google employees came out of the woodwork to say “there is no such review process, other than a quick scan to see if you’re leaking company secrets by accident.”
All of this is fascinating. Because, as far as I can tell, the entire AI community is now on her side. The only people who keep bringing up her character flaws are people who don’t do AI / ML. Or at least, they seem to be pretty quiet now that Jeff revealed Google supposedly has some weird academic review process no one’s heard of till now.
Your feelings about the journalists are completely warranted, to underscore that point. But it’s odd how much momentum is building. Try to ignore the circus; you’ll be surprised to find there is substance behind the claims. I was as surprised as anyone.
And to reiterate, pick any one of the arguments you mention, and really dig into it deeply for details. Try to verify the claims. The most you’ll get is that she was curt. And I remind you that it’s usually known who the reviewers are at any given venue. Even if it’s blinded, you at least know the general group of people involved. In this case it was highly unusual to do some kind of anonymous, selective, ad-hoc enforcement of rules that really don’t seem to benefit the scientific process in any way.
> Jeff was micromanaging her citation list, which is almost unheard of
This is quite normal, even for purely technical papers [^1], and absolutely important for an expository paper like this one. Narratives are always formed by selectively including references.
> the entire AI community is now on her side
On the MachineLearning reddit, which is popular among a proportion of ML grad students at least, it's completely different. Indeed the top comment in the discussion thread there [^2] discusses this discrepancy. And the fact that I'm creating a throwaway account for this reply is telling.
[^1]: The review process of ICLR 2021, a premier ML conference, is taking place in public at https://openreview.net/group?id=ICLR.cc/2021/Conference. You can navigate through the reviews and see how many of the requests concern citations.
>Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
With regard to the internal review thing, I thought this comment made a good point: https://www.reddit.com/r/MachineLearning/comments/k6467v/n_t... Sure, maybe Google is usually lax with internal reviews. But if you're going to write a paper which says "this area of research Google is engaging in is harmful", and neglect to mention the fact that Google is also doing research trying to address the harms, then from Google's perspective, you are just smearing its corporate brand without making any real progress.
I'd totally agree with you. That would indeed be ridiculous. But... It's strange... each time a new argument pops up, I dig into the new detail, and surprise: it seems to have a straightforward, boring answer. "This is a pretty standard paper. Google wouldn't have been hurt reputation-wise by letting it through. And we should probably be thinking more about energy usage and bias. She didn't namedrop all relevant research, but there doesn't seem to be anything here to demand a retraction over."
It only gets stranger when you take this into account, too. From the journal reviewer:
However, the authors had (and still have) many weeks to update the paper before publication. The email from Google implies (but carefully does not state) that the only solution was the paper's retraction. That was not the case.
In some parallel universe, Google could re-hire her, she and Jeff could sit down and hammer out the paper, send the updated version, and there would still be two weeks to make even more edits. Isn't the point of the edit window to address these problems?
What really got my attention, though, was that she informed everyone months ago that she and her coauthors were writing this paper. She wasn't working on some hit piece of a research paper. It's just ... a standard survey of the current ML scene circa 2021. I read the abstract and go "Yup, we use a shitload of energy. Yup, we should have better tools for filtering training data -- I've wanted this for myself. Where's the bombshell?"
For all the fuss people are making, you'd expect the paper to be arguing that we should stop doing AI for the betterment of humanity, or something weird. But it's nothing like that.
The entire course of action you suggest could have happened were it not for Timnit going public of her own volition, accusing her employer and coworkers of unethical behavior in the process, and encouraging sympathetic colleagues to apply external political and judicial pressure on Google. What for, so she could publish a review paper while disregarding relevant internal feedback?
She made a lot of fuss and burnt a lot of bridges. Nobody forced her to do so.
> Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
This means the date of her leaving was not set in stone, i.e. both parties would agree on a date when she would leave. She had a vacation coming up and they would discuss it when she returned.
>According to Ms Gebru, Google replied: "We respect your decision to leave Google... and we are accepting your resignation.
>"However, we believe the end of your employment should happen faster than your email reflects because certain aspects of the email you sent last night to non-management employees in the brain group reflect behaviour that is inconsistent with the expectations of a Google manager."
She was fired, not on account of the content of her paper, but because of the way she expressed her feelings to her colleagues, which wasn't an exhortation for them to lay down their tools, but rather a complaint that laying down their tools made no difference because their work was ignored.
She was not fired. She offered to resign if her demands were not met.
The only difference is that the timeline was not of her choosing. She knew exactly what she was gambling for when she wrote those words. Why would Google want someone around who doesn’t want to work at Google? Better to accept their resignation and let them go immediately. It happens all the time. I once gave 2 weeks notice and my boss said I might as well leave immediately since there was no point in hanging around (I was consulting and on the bench). It happens.
I've been hoping to address that point, actually. Because, I get where you're coming from, and I've felt those frustrations myself. Let's just say I'm probably the last person to hop on the "do what I say or else" bandwagon.
... but like everything else, that turns out not to be an issue here. I would feel fine saying what you said. It's nothing to do with the color of her skin, or the fact that she was working on ethics rather than optimizers. Horrible employees who cause drama wherever they go are a liability and a downer, and I'll happily say that to whoever's listening. Black or white or purple, the goal is to serve the company's business interests.
But, much to my own surprise, as someone who grew up memeing on 4chan and poking fun at leftist dogma, here I am after about a year and a half of ML, wondering "Where's my harassment? I was promised harassment."
Because I feel exactly the opposite. Not only is everyone in the ML twitter scene cool, but they're some of the most open minded people I've ever met. Sure, you get some people showing up sometimes to give you a hard time, e.g. when we train danbooru: https://i.imgur.com/RMZd6mu.png and then say that the dataset is very objectifying, and so on.
But the antidote is to simply be straightforward. Make it clear you actually hear them. "Yes, I am concerned about that, but it's simply the problem domain; people use danbooru as a repository of this content. You're right that we could e.g. make a classifier to pick out the cooler looking gals and focus on those, but the challenge is simply to solve the problem at all. Once we have lovely auto-generated anime, it'll be straightforward to filter it. Come help us get there! It's fun!"
With all the horror stories I've heard, and how afraid everyone is of "the mob," I went into it expecting to "smile, and don't tell them what you're thinking." And I was really surprised to find exactly the opposite atmosphere. Everyone pretty much agrees that yeah, we have a bias problem, and that it's probably good to address that. People also seem to agree that it's ridiculous to take it too far, e.g. when OpenAI enforces a mandatory content filter that you can't turn off and isn't too choosy about what it deems harmful, and prevents you from shipping to production unless you enable it.
Because I'd be a part of the problem, if you felt like you can't talk to me freely (at least in private). I don't want to foster that kind of environment. So my reaction to the idea of you "talking down a black female, AI, ethics researcher" is "Y'know, I see where you're coming from, and I was worried about the same thing, but I'm just not getting that sort of vibe at all. I was surprised too; I expected the opposite."
And once you relax about it and look around, the most interesting part to me is that you start to feel like "well... why look at whether she's a black, female, AI, ethics researcher? I should probably read her work and listen to what she's saying, and judge for myself whether it seems crazy." And when you sit down and really listen to people, and put your full mental focus on what they are saying, I find it hard to disagree with a lot of their points, simply from a logical perspective.
So you might argue "Well, you're just part of that culture then. You'd be appalled how I feel, but I'll keep that to myself." Fair. But it's so weird being in a situation where everyone is like "watch out for that mob!" and meanwhile all of the people who actually work in the field seem pretty cool to me.
Everyone feels equally lost, i.e. that we have this magical new power (ML) that we don't really know what to do with. We know it will affect society, but we don't know how it will affect society. We also don't know the best ways to guard against some of the obvious problems on the horizon.
I ran into that problem myself. Since I'm already on a ramble, I may as well lay it all out, because it really is interesting: I was training some FFHQ latent directions, trying to get a skin color working. And I ended up discovering, quite by accident, that my latent vectors were "racist." My model was generating caricatures of black people that I would not be comfortable showing. And I couldn't figure out why. Why black people? I flip the skin color to white, no problem. Flip it to asian, no problem: https://twitter.com/theshawwn/status/1184074334186414080
Flip it to black, and horrible results. (I mean "horrible" as in "this would be an especially bad idea to show anyone," rather than merely "it has some visual defects.")
The answer to this mystery was obvious, but only in hindsight. There are far fewer black people in FFHQ than whites or asians. I found a classifier, got it to identify an order of magnitude more black faces than I had before, retrained my model, and the result was instantly so much better: https://twitter.com/theshawwn/status/1209749009092493312
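In case it's useful to anyone, here's a minimal sketch of that rebalancing step. This is not my actual pipeline; the predict_group classifier and the directory layout are hypothetical stand-ins. The idea is just: run whatever face classifier you have over an unlabeled pool of images, top up the under-represented groups, and retrain on the augmented set.

    import os
    import shutil
    from collections import Counter

    def rebalance_faces(labeled_dir, unlabeled_pool, out_dir, predict_group, target_per_group):
        """Build an augmented training set: copy the existing labeled images, then
        top up any group below target_per_group using classifier predictions on an
        unlabeled pool. predict_group(path) -> group label is a hypothetical classifier."""
        counts = Counter()

        # Start from the existing labeled data (labeled_dir/<group>/<image>).
        for group in os.listdir(labeled_dir):
            src = os.path.join(labeled_dir, group)
            dst = os.path.join(out_dir, group)
            os.makedirs(dst, exist_ok=True)
            for name in os.listdir(src):
                shutil.copy(os.path.join(src, name), dst)
                counts[group] += 1

        # Mine the unlabeled pool for more examples of under-represented groups.
        for name in os.listdir(unlabeled_pool):
            path = os.path.join(unlabeled_pool, name)
            group = predict_group(path)
            if counts[group] < target_per_group:
                dst = os.path.join(out_dir, group)
                os.makedirs(dst, exist_ok=True)
                shutil.copy(path, dst)
                counts[group] += 1

        # Inspect per-group counts before kicking off retraining.
        return counts

The plumbing isn't the point; the point is that a quick audit of per-group counts is often all it takes to spot this kind of skew before it shows up in the model's outputs.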
That experience stayed with me to this day. I think about it a lot, because it would be so easy to overlook that kind of bias when it's numerical data rather than facial data. And as far as I can tell, that's exactly the sort of ethics that Timnit has been arguing for: we need to pay more attention to bias, and unexpected ways that bias can creep in. Which seems reasonable to me.
I don't really know why I'm posting this to you, but, just in case it changes your mind, I leave it to you. I really thought I'd end up feeling every bit as pinched as you expressed, yet it's nothing like that. Felt quite the opposite. I keep re-reading http://paulgraham.com/orth.html wondering if I have orthodox privilege, or if my social group simply doesn't include people who are comfortable enough to express themselves around me, or what.

Because you say that there is a massive number of people who feel exactly the opposite, and I can't help feeling curious where they are. Our discord's up to 1200 people, and they don't seem to be there. I talk to dozens of researchers, sometimes on a weekly basis, just to poke my head in and see what they're up to. They don't seem to be there either, even in private.

And, picture someone who is the opposite of "leftist" in basically every way. I know a few people like that in the ML scene, and even they don't seem to be saying "whoa, this bias stuff has gone too far, and these ethics concerns are nonsense." It's the opposite.

Eleuther has a dedicated ethics channel, people spend a lot of time freely debating about what the "right" ideas might look like, and so on. We also maintain internal research channels with the idea that, if people have concerns of the type you mention, they can freely express themselves there with no fear of any kind of retribution -- that's the whole point of having secret research channels. It's up to ~30 or so active researchers, and no one has brought up concerns like that.
So you see, I end up being dragged to the conclusion that, yes, the politics / activism stuff is a concern, but no, it doesn't seem to be affecting anything. We're not hearing "do this or I'll bite your face off," or something nuts like that. It's more like "Could you please listen to me for a bit? I have this experience I'd like to share." And the experiences tend to be interesting, at least to me.
So that's why I urge you to be super skeptical about the angle you mention. ("You'd be instantly taken to court of public opinion...") Dig into the situation and look for evidence of that yourself. Don't pay attention to newspapers; talk to researchers, and ask them how they feel about it. I just can't find any trace, no matter how hard I look. I feel if you also look, in a scientific way, for evidence to support your concerns, that you might not find it either.
Anyway. I related to what you were saying and just wanted to give perhaps a new way of looking at it, since it's what changed my mind. Feel free to hit me up in twitter DMs if you're looking for someone to chat with about some hard topics, since those are often the interesting ones.
> The answer to this mystery was obvious, but only in hindsight. There are far fewer black people in FFHQ than whites or asians. I found a classifier, got it to identify an order of magnitude more black faces than I had before, retrained my model, and the result was instantly so much better: https://twitter.com/theshawwn/status/1209749009092493312
Ironically, that exact observation about that exact dataset was what got Dr. Gebru so mad at Yann LeCun in the twitter thread that people brought up. So maybe you're just lucky with who saw your tweets.
You've worked with a visual appearances dataset lacking sufficient examples from one class of entities, and it failed to perform well for that class. You solved the problem by adding more examples of that class. While the malfunction might have had some, as of yet unquantified, real world impact in some hypothetical police face recognition system, it doesn't follow that:

a. Datasets that are not about visual appearances are prone to the same problem and to the same degree. Perhaps the house lending datasets / systems have small race (visual appearances) issues, but large class issues. The political debate of how to handle class issues is as old as politics have been around.

b. The real world impact, which depends on the actual system deployed, is large. Perhaps a hypothetical real world system has a 1% failure rate vs a .1% failure rate. Should we stop developing useful systems just because they do not produce exactly the same results across all visible demographics we can carve ourselves into?

c. The impact cannot be mitigated by human post-processing. If the hypothetical face recognition system is part of the judicial process, there are many checks and balances before one gets to suffer drastic consequences. For example, a human actually looking at the picture, or a solid alibi: "Your honor, I was skiing in Canada at the time of the alleged Florida murder".

As others have expressed in this thread, dealing with first order visual issues is easy. Everyone can agree at a glance what a correct solution to a visual question is, and bugs are usually straightforward to fix. Language issues, on the other hand, are second order: everything is subject to interpretation. Once we open the can of worms of talking 'critically' about language and AI, we are getting uncomfortably close to language police, and via Sapir-Whorf, to thought police. The BIG underlying stake of 'AI Ethics', one that possibly neither side has completely articulated just yet:

Should a small group (in the thousands) of hyperachieving, hyperprivileged individuals working in the AI labs at the handful of megacorporations controlling the online flow of human language get to decide what we can say, and by extension what we can think?
Re the existence of an internal review (tl;dr: it definitely does exist!): A Google employee here who's actually published research there. There definitely IS an internal review and an approval process and every single paper I have published while there had to go through it. Everyone I knew had to submit for an internal review well in advance and I personally couldn't even submit to a conference review once because we only asked for the internal review 3 days in advance, which wasn't enough time. My understanding was that everyone around me was adhering to this.
Thank you for speaking up! To be clear, there’s no question it exists. But multiple (admittedly former) google researchers have gone on record saying that this is meant to be a quick scan for trade secrets leakage, and nothing more than that. Certainly not some kind of academic integrity review. That’s what the journal reviewers are for.
The point of the scientific process is to be allowed to fail and to be mistaken. And the paper was pretty bland. Sure, she didn’t namedrop all relevant research over the last decade, but why demand a retraction? Especially when the reviewers posted on Reddit that there was a big edit window to make revisions. Adding a few more cites seems like “oh, why not throw in X?” then you both go out to lunch / email an updated version later that day. A retraction is basically “your last few months have been wasted,” right? I’d probably be upset too.
Thanks again for the datapoint about the 3 day window. It seems rather selective, but that’s standard for any bigco. I’m just having a hard time stomaching the idea that your ideas need to pass through some anonymous internal review panel where you don’t even know who’s judging you. Is that really what it’s like to be there? Seems strange.
Typically this is not something you worry about too much. You work on a research project, and you talk to your research manager and everyone around you pretty openly, so if something was going very wrong, you'd likely know because someone would have told you along the way. So by the time you are submitting for internal review, the expectation is that you will almost certainly pass, but typically the reviewers take their job pretty seriously (and so did I when I served as one) and spend a week or two reading through your paper in their "spare" time and writing a comment on the science, the method, the conclusions etc. The not-leaking-secrets part is certainly there, but that's the lowest bar to pass, so one rarely worries about it practically if one works on public datasets and is doing fundamental science.
My experience might be different from what Dr Gebru was going through since I never rubbed against anything that could have been considered company secret. My work was entirely academic and I never felt that I was restricted in any way in the questions that I could ask or the papers I could write. That is likely very different when you are criticizing a product, using internal data etc, which might have been the case with her. It also seems that she was in no way diplomatic about her actions.
When you do fundamental research there, it's as free or possibly even freer than standard academic institutions. As I said, I personally never felt any implicit let alone explicit forces telling what to work on / what to avoid.
The fact is that you’re wrong. There is a thorough internal review process. It’s not just a rubber stamp. So please stop spreading misinformation. What Dr. Gebru went through was standard.
What organization are you in? Several people have said AI and Brain don't work that way. Someone from Tech Infra said their papers could get spiked just for not being interesting enough.
The question is whether the review process vets the topic of the paper, the writing, the citations etc., or it's only a screening that avoids revealing company secrets.
> The paper DID NOT force her out of Google. Her subsequent behaviour - submitting without approval, rant, ultimatum, and resignation - did. And she wasn't "forced out": she resigned of her own volition. She could have chosen to make improvements to the paper based on the feedback she was given, resubmit it for approval, and then get on with her life, but she went the other way.
This sounds a lot like taking what Google said without including Timnit’s point of view. Is it really fair to disparage an article for having a title like that when you’ve completely ignored such a large part of the issue?
> This sounds a lot like taking what Google said without including Timnit’s point of view.
Not at all. I've read everything she's written on the issue including the long message she wrote to the brain group, and her tweets where she shared what Google said when they accepted her resignation. These latter make specific reference to the ultimatum, which was the ultimate reason for her departure.
As I pointed out in the previous discussion she may have faced some sort of disciplinary action upon her return from vacation due to the content of her message to the brain group.
However, when she resigned what Google managers did (and this is no great leap of logic) is figure out that if they made her work her notice period she'd probably only cause more trouble in the meantime, so they brought it forward and made it effective immediately.
This might or might not be unusual behaviour for handling a resignation at Google, but it is fairly common practice amongst different organisations for a variety of different reasons that usually boil down to mitigating some kind of risk to the organisation.
Are they not being paid for their notice period? Usually the way this works is "gardening leave" where you're paid and are not supposed to come into work.
If she was cut off without “gardening leave”, then she was fired. It can then be true that she both quit (two weeks notice) and she was fired (no two weeks for you). I’d be surprised if the latter was true as it would be petty on Google’s part. More likely would be gardening without access which would still be gardening.
IMO, it’s germane. “I quit. My last day will be sometime in late Dec.” “We’ll pay you and recognize your employment through that date, but you are relieved of all duties effective immediately” is quite different from “Nope; your job ends today.”
How is that really a big difference? She's gone either way, because they wouldn't let her publish the paper. This way I guess she's more eligible for unemployment, but beyond that?
Well, I mean, for one thing it's about a $20K difference in salary alone.
> She's gone either way
True.
> because they wouldn't let her publish the paper.
That's...less clearly true in any meaningful sense. The public statements from all the other Google AI people about how the official narrative is inconsistent with general practice on publication review suggest very strongly that the management actions related to the paper were a pretextual component of a constructive termination campaign, and that even when it succeeded in generating something management could at least seize on as a “resignation”, the result was insufficiently immediate, requiring another pretext for immediate termination.
What makes it dishonest? Do you not believe that she stipulated that she'd resign under certain conditions? Do you not believe those conditions then came to exist? Do you not believe that her managers accepted her resignation?
I don't have a strong opinion in the matter (and have no connection to Google), but if she in fact unambiguously offered her resignation conditioned on her paper not being approved to publish and Google accepted her offer, I don't see how she can turn around and claim she was fired.
She didn't offer to resign immediately. Her manager explicitly rejected her actual offer and imposed new terms as punishment for her email to the group.
If the text Jeff Dean wrote below is overwhelmingly true, I'd agree that she resigned rather than was fired. I suspect that it is overwhelmingly true as I'm fairly sure that Google legal would have reviewed it and ensured they didn't say anything falsifiable and likely that this paragraph is entirely true.
> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
To me (and I suspect the courts): In the second case, she was fired today prior to her offered last day. In the first case, Google accepted her resignation and just didn’t require her to work through her last day.
Do you really think it's valuable for me to regurgitate an entire discussion that I've provided a link to (and which anyone can easily read) in my comment when what I'm actually trying to do is make a wider, but relatively pithy, point about journalistic integrity and the impact that the lack of it is having on our societies more than comment on Timnit's specific case? I will say that your comments are an almost perfect illustration of that point though.
Timnit claims she was fired, you’ve completely erased that part from the discussion and used it to prove a point about journalistic integrity. You provided a link to a massive discussion that has clearly not yet been able to piece through the details. When I asked you why you were so confident in your position that Timnit was clearly in the wrong and rightfully terminated because she didn’t comply with what Google told her to do, you decided that I lack journalistic integrity.
Google, talking through Jeff Dean, claims that Timnit was unhappy with her situation and submitted a good faith resignation which Google accepted due to her not following their policies. Timnit claims that she was forced into a position where she had to issue her ultimatum, forcing her into a resignation. And we have claims from Google employees saying that the process she had was unusual and did not match a normal review. Isn’t the true journalistic malpractice ignoring this and claiming that any title that doesn’t match your view, which appears to be Google’s view of the situation is inaccurate?
Do I dispute that she provided an ultimatum of which one side was a resignation offer? No. But can I say that this ultimatum wasn’t the result of Google pulling the rug out from under her and putting her in an unfortunate position from which she felt her only way to exercise her leverage was to make such a proposition, and that Google can then pretend to be blameless by “choosing a provided option” and terminating her, without looking at the reasons why she had to do such a thing? I’m not sure yet. I don’t think we have enough information at this point to judge, so I’m a bit concerned by comments like the one I just responded to, which act as if this concern doesn’t exist.
> Do I dispute that she provided an ultimatum of which one side was a resignation offer?
Ok so it's established that she threatened her managers and employer: either they complied with her personal wishes or she would "exercise her leverage" to cause the company harm.

And in the end, as her managers didn't cave in to her threats, she decided to pull the trigger.
> But can I say that this ultimatum wasn’t the result of Google pulling the rug out from under her and putting her in an unfortunate position (...) ?
So she threatened someone, her target didn't cave in, and thus she proceeded to execute her threat.
And somehow the responsibility of her executing her threat is supposed to be on her target?
It seems to me that people misunderstand how ultimatums work. I guarantee you that you maintain a number of unsaid ultimatums with your employer; for example, one of them may be “pay me or I will quit”. Once an ultimatum reaches the point where it is nonverbal it is difficult to classify as a straightforward resignation or firing, because at that point it is clear that communication has broken down and pressure is being applied from at least one side. Without knowing who the “victim” is here the argument could go either way: “you made me issue an ultimatum”/“you put us in a position where we had to accept your resignation”.
It's clear that all of the facts are not available, and likely never will be; experiences are subjective.
It's also clear that before learning of this event, we had 0% knowledge of the situation.
Between 0 and where we are now with both sides expressing their point of view to some degree, people on HN began making up their mind in the absence of complete information. There is no requirement or urgency that we come to some inconsequential conclusion of our own.
My bias is that it seems difficult to obtain the position the researcher held at Google. How can I be willing to believe the engineer has the ability to navigate the subject matter and its application without being able to navigate this employment scenario? It feels as though I am required to accept the engineer's brilliance while calling them dumb at the same time. That feels like a larger handwave than considering the known actions of Google and questioning the few assertions they are willing, but not required (truthfully or otherwise), to provide.
There are plenty of instances of "smart" people doing "dumb" things. The idea that there is only one intelligence without considering people have a lot of individual foibles due to experience, temperament, predisposition, and any number of other factors is really dangerous. It's that type of thinking that led to presidents who were former movie stars or real estate developers.
> The idea that there is only one intelligence without considering people have a lot of individual foibles due to experience, temperament, predisposition, and any number of other factors is really dangerous. It's that type of thinking that led to presidents who were former movie stars or real estate developers.
Or it doesn't, and we victimize people who have smaller PR budgets with which to present their perspective.
Do you have any links to this discussion? I’m trying to verify the toxic claim and am having a hard time finding it on Twitter right now because of all the noise.
I’d like to see more elaboration than a claim that her behavior is toxic - that is not helpful or conducive for spreading knowledge. I don’t see anything here that matches up to that claim at all, speaking as an outsider.
A word you seem to have no reservations about using against your opponents just from a quick search of your comment history. Why such outrage when it's turned back on your own sacred cows?
Why are you assuming anything about me? You realise I've used the word toxic in my whole comment history only in relation to this topic, right?

I, like many others, don't like the way she deals with people. It is toxic. You call a toxic person a toxic person. There is ample evidence for it. It's not outrage, it's just facts.
> Toxic is an inflammatory and unnecessary word to use when no-one is privy to the actual facts.
This response reflects an unwillingness to understand a situational nuance from multiple sides or show empathy to a person in distress, and offers no workarounds, support, or evidence. I've grown so tired of these factual tugs of war used to justify callousness.
> Seriously, what happened to journalistic standards and integrity? Why are the actual events being so forcefully twisted to fit a particular narrative?
clicks. after a decade during which they lost money hand over fist, not knowing how to monetise their product on the internet, they all went to the lowest common denominator: clicks. this in turn forced them to start bending the truth (something that tabloids were known for, and from which they got huge profits).
basically we still don’t have a viable business model for this industry. the only one that works needs headlines such as the one you mentioned to function.
the interesting bit is that publications such as Nikkei or the FT are still top-notch, but these are niches, not general audience publications. (also my FT subscription is £30/month and that's a lot of money if you're not in that niche)
> the interesting bit is that publications such as Nikkei or the FT are still top-notch, but these are niches, not general audience publications. (also my FT subscription is £30/month and that's a lot of money if you're not in that niche)
Indeed. £30/mo sounds like quite a lot of money in an era where a teenager doesn't come to your house every morning and shove the newspaper through your door (as I used to many years ago) but, adjusted for inflation, it's probably less than the cost of that older delivery mechanism.
It's hard to persuade people of that point of view though so they stick with free, ad-supported "news". Not that paid news wasn't historically ad supported as well, but at least they had more diversified revenue streams.
And you are, of course, correct: it's all about the clicks and ad revenue. And, given most peoples' preference for free over paid news, I don't have any great ideas on how to fix that.
I don't see how you can possibly be that confident about the truth of what happened. Unless you have non-public information, then there is still considerable uncertainty.
Timnit also shared a number of tweets, which you can easily access via the one you referenced, where she quotes from the email that Google sent her in response to her ultimatum.
They accepted her resignation and brought her finish date forward. They cited her message to the brain group as reason for doing so (without the resignation I suspect she may have faced some disciplinary action, though whether it would have gone as far as firing I don't know).
They clearly weren't happy with the content but, beyond that, if somebody is pissed off enough to write the kind of message Timnit did then, by making them work their notice period, you only invite them to cause more trouble whilst they're still part of the organisation. You therefore bring forward their leaving date and make their resignation effective immediately.
When you do this it's about mitigating risk to the organisation. Commonly I've seen it done with salespeople in certain sectors, where when they resign they are escorted from the premises and access is revoked as part of minimising the risks that they'll take clients with them to their next role (particularly if they'll be working for a competitor). Still, any situation in which continuing to have an employee around represents a significant risk to the organisation is one in which you might ask them to leave immediately.
Google already had a mess to try to clean up with the brain group as a result of Timnit's message to that group. They probably didn't want any new messes to deal with, so they asked her to leave immediately to mitigate that risk.
Btw, I'm not advocating for Google here: I'm just looking at this in terms of, "What would I as a manager do in a circumstance where an employee has set out conditions of an ultimatum for their continued employment that I am unable or unwilling to meet?"
No-one knows fully without seeing her employment contract.
But having worked at many companies similar to Google in similar roles it is not normal for (a) contracts to not have notice periods and (b) for companies to not honour them.
And IP-flight risk is a concern for many roles but it's typically handled through legal channels as we've seen with Uber.
> it is not normal for (a) contracts to not have notice periods and (b) for companies to not honour them.
Nobody's talking about the contract not having a notice period, much less about not honouring the notice period. I'm talking about (metaphorically these days) getting you out of the building and stopping you from potentially causing damage.
I'm not sure about the US but here in the UK your notice period will still be honoured because you will receive the salary you would have received had you continued to work through that notice period even though you are no longer able to do any work for the company (i.e., "gardening leave" - https://en.wikipedia.org/wiki/Garden_leave).
I don't know what Google's severance policies are, particularly with regard to remuneration for the severance period (they will certainly vary by country/region), so this situation might be different. Nobody's explicitly said whether or not Timnit will be paid for some standard notice period (though reference is made to her final paycheque in the email she quotes where her resignation is accepted). No doubt an organisation as large and complex as Google has some policy that covers these circumstances, which they will follow.
As I say, here in the UK, if somebody's resigned it's perfectly OK to ask them to stop working before their notice period is up (which might include revoking access to email and other company systems), as long as you pay them for the whole notice period. Doing work on behalf of the company and getting paid are two separate issues under these circumstances.
The effective date of a resignation is when the employee is officially no longer employed. Not when they're put on gardening leave. Google doesn't pay people past the effective date normally.
Thank you. That's helpful to know and clearly illustrates the difference in practice across different countries (they wouldn't be able to do that in the UK unless it was a firing).
I think the difference is more phrasing than practice. Both countries require paying employees for the notice period when they resign. That's why people are saying she was fired.
All the employment contracts I've seen were along the lines of "you may give notice of x days, which we may refuse" and "in case of termination we give notice according to law (2 weeks)" or something, iirc. If she resigned, it's pretty usual for the employer to be able to waive the notice period.