Hacker News
We read the paper that forced Timnit Gebru out of Google (technologyreview.com)
334 points by DarkContinent on Dec 5, 2020 | 633 comments



Here's why I have absolutely no sympathy for Google in this situation.

They hired Gebru as a professional thorn in their side. "Come up in here and be a pain in the ass! Tell us what we're doing wrong!", they said. "We're Enlightened Corporate America, after all!" She is a chess piece in the game of Wokeness Street Cred.

She then proceeded to do the job she was hired for, and now they're all "Hey lady, Here At Google that's not how we do things".

That said, as a manager, I would have "accepted her resignation" and/or fired her without hesitation.


Agreed; but this isn't just a Google problem. Seems to me like a lot of SF (and SF-inspired) "big tech" wants to be known for their "wokeness"[1], which leads to hires like Timnit and other "politically-outspoken" people, which in turn leads to situations like this, James Damore, and other individuals/situations that amount to workplace political activism.

I am wholly uncomfortable with any discussion of politics in a workplace environment like a mailing list. I am more than fine with employees choosing to associate politically outside the workplace and workplace-run spaces, no matter how radical I find their views. This is in itself political - it supports the status quo - but anything else invites dissent and combativeness, and situations like these will keep happening.

That said, I've not seen the email outlining her conditions "or else", but I feel I'd have very much taken the same stance as you, given the surrounding coverage of what was in that email. Ultimatums to your employer don't often go well. And perhaps this is a good thing for her, because she may leave for a place that better suits her.

-

[1] my derisive use of this term is not aimed at actual efforts at inclusiveness, which are good, but at surface-level attempts that end up feeling performative, at best.


The problem here is that AI and race now intersect in non-trivial ways. It’s like treating privacy as political discourse when you are scanning emails for marketing purposes. There’s a line at which not having the discussion is equivalent to taking a political stance.


This. And since certain companies/industries pose a certain "systemic risk," failing to self-regulate might lead to external regulation later.

For example, legislation on AI, predictive models, and facial recognition in policing.

Who knows, we could see limits on the use of imperfect models in other areas.

After all, there is no "market solution" for ethics.

https://mathbabe.org/2013/11/12/there-is-no-market-solution-...


This.

It was literally her job to conduct such research. It's not politics, it's ethics (tethics)


Ethics is not not politics though. Everything is political. Claiming that ethical concerns should impact business decisions is certainly political. We can wish to “not discuss politics in the workplace”, but then it isn’t really possible to have any discussion.


I think you are confusing her previous work (which was on race) and the work this article is discussing.

The research paper in question was about AI and natural language processing and its carbon footprint and all that. Nothing to do with race or a political agenda.

Google has also done a ton of work on running its datacenters on renewable energy, so I think they are definitely on board with making tech have less of a carbon footprint.


This is not true. She does discuss how some countries and regions of the world use the Internet less, or not at all, and thus Transformer/BERT models are trained on the linguistic habits of the most affluent of the world.


It was still an extremely scattergun approach. I'm moderately familiar with NLP, but nothing in that paper really had traction overall; it was more a list of mildly bad things than anything huge.


This is true. And the research raised an important point which was worth discussing. But it seems, based on the totality of the information, that Google received an ultimatum demanding that it take particular action with respect to the issue. That’s going beyond an ethics researcher’s role.


The ultimatum was about the review process and not the paper per se. I do think the ultimatum was a bad idea, but I don’t think I’d fire an employee over one. I'd just tell them no on the demand.

That said, a blind internal review process that apparently blocked a paper for the first time in Google's history probably deserves more explanation than simply “we will take your resignation”.


They didn't fire her. She said "I will quit if you don't meet these demands" and they were unwilling to meet the demands.

If she didn't threaten to quit, I can't imagine they would've done anything other than tell her no. But she gave them an ultimatum and they chose which side they would be on.


Google didn't hire Gebru for her "wokeness" or for whatever petty internal political issues drive discussions here on HN. They hired her because they are a massive AI company operating on a multi-billion user scale that will very likely produce massive impacts on our society, impacts that we can't even imagine today. They understand at an intuitive level that what they're doing is risky and (perhaps at minimum) they understand there's a risk that it will provoke a backlash. To insulate themselves, they wanted a credible AI ethics researcher, even at the cost of her being potentially a bit difficult to manage.

Now, I don't fully understand what happened in this incident, but I think there's a chance that it's far more impactful than some internal workplace spat. To put this succinctly: businesses like Google are playing cards for thousand-dollar bills, and HN is interested in debating their favorite game of nickel poker.


I really like this train of thought. I want to push on the idea of an AI-focused corporation “insulating” itself from backlash. I think what’s happening here is an example of the human agents inside of a corporation acting human, counter to the interest of the organization within which they find themselves. This particular spat might not necessarily be the tipping point, but it looks like history is starting to overtly dance with the impacts AI will have, as you stated.

The current imperatives of a profit-focused corporation are not aligned with human interests. This has been clear since the time of 1940’s fascist Germany. There are examples of corporations aligning themselves with that government, solely because of the stability that a strong government structure provided. In hindsight, it is clear that corporations provided Hitler et al with the economic powerhouses that moved him and other sympathetic leaders to construct frameworks which were not in accordance with modern ideas of human rights. Especially under the imperative of shareholder primacy in the US, that motivation is stronger than ever. In China, the imperative is strengthening government power.

A development I’m personally watching is how AI will fit into that tension between human-interests and corporate interests. Alphabet’s leaders seem to be attempting to steer that corporation closer in alignment with human interests, although the profit motive is an unshakeable goal. Should the US government take a more active role in the development and control of AI, or should we mainly allow the market to pursue AI within the framework of profit motive?

Offhandedly, I am wary of allowing unrestrained pursuit of AI development, and so departments like the one Gebru was leading take on a historically vital role. I also feel a tension between my awe of progress (as a layman) and the possibilities that AI might unlock for humanity. That leaves me with the question: who should be in charge of artificial intelligence? From our perspective in 2020, I know Bostrom’s and Musk’s concerns about the future of AI seem far-fetched, but if those negative outcomes have even a non-zero possibility of coming true, then we should spend time considering them.


>Seems to me like a lot of SF (and SF-inspired) "big tech" wants to be known for their "wokeness"

It's just a fashion. It will be as dead in 10-20 years as being a hippie was in 1985.


It's more than politics. It's more about moralizing things, as in Animal Farm's "Humans are bad, animals are good". If it is politics, it is the worst kind of politics, as history in Cambodia/China/Europe repeatedly showed.


The performative sense could have to do with a lack of personal connection to it. It’s a frustration with something not wholly understood, especially when those deemed “performative” express strong conviction. I’m trying to understand how one could imagine someone would risk imprisonment, bodily injury, or death just for performance.


Exaggerating risk of “imprisonment, bodily injury, or death” is a hallmark of the “woke”.


I don't really have a perspective on this particular case. But yeah. "Companies" (really senior people within those companies) hire people who are known in their field, bring an outside perspective, have an independent voice, etc. And then those people (or perhaps different senior people) get unhappy when the individual who was hired doesn't go through channels or otherwise toe the party line. Either both sides find a middle ground by some combination of putting certain opinions in a box and choosing to ignore the use of channels or messages outside of official guidelines--or you part ways.

I have some experience with this. Not to nearly the same degree. But I had something of a tiff with an exec my second day on the job in a prior role.


I agree with this in part, but I think there is a difference between "tell us what we're doing wrong" and "send emails to mailing lists telling our employees to lobby congress against us". Plus the whole setting an ultimatum thing.


I'm less cynical about this as a concept.

Would you feel differently if, say, Apple hired a "Privacy Watchdog", with a long history of activism on the topic? Someone you could trust to speak out if something was amiss.

If the person is later fired, that's a sign that something is wrong at the company. But if they stay, and have generally good things to say, that's a sign the company can be trusted.

I do think this is a good system!


It's a difficult type of role. See the NYT getting rid of its public editor/ombudsman.[1] I'd actually have trouble seeing Apple hiring someone like that given how tightly they control external messaging/speaking/etc.

It's really hard for execs, including but not limited to PR, to have worked really hard on getting some message/narrative accepted by the media/analysts/etc. and then have a high profile employee on social media or in a major publication saying it's sort of B.S. even as an occasional thing.

ADDED: It doesn't make it "right" for them to storm into their boss' office and demand said person be fired but it is at least an understandable human reaction. I think anyone who takes that kind of role has to understand they'll be a lightning rod, feel passionate about it anyway, and understand they may well eventually set off a tripwire and be fired in a very public and humiliating way.

[1] https://www.thedailybeast.com/why-the-new-york-times-fired-i...


> I think anyone who takes that kind of role has to understand they'll be a lightning rod, feel passionate about it anyway, and understand they may well eventually set off a tripwire and be fired in a very public and humiliating way.

Oh, absolutely—that’s practically a part of their job description!


And yet here everyone is, discussing whether or not Google is secretly racist, with protest petitions involving thousands of signatures. I had never heard of Gebru before, but now I have.

I'm not meaning to take sides here, and I think what you're saying needs to be kept in mind in understanding this, but it's hard for me personally at least to not see this in terms of some diversity-speech Streisand effect.


This is probably like Apple hiring the EFF to help with privacy ethics: it is 100% guaranteed that they'd be fired in time :) . The EFF are idealists, Apple is a pragmatist, so eventually they would part ways; it's inevitable. Hopefully Google and she both learned something from it.


I think that works if there's a clear way to distinguish the hypothetical privacy watchdog leaving/getting fired due to issues with the privacy program, versus leaving/getting fired for other reasons. Making that distinction, imo, seems to be the core disagreement with the current Google issue, at least among all of us commenting on the outside.


Well, if the ombudsman leaves the company amicably, they can say so.

But it should be understood going in that there's no way to "fire" a person like this (at least without raising a storm), regardless of the reason. I don't think that should be such a problem—it's like giving tenure to a professor, except unlike at some universities, an organization will only have a small number of this kind of employee.


I agree it depends on the number of folks in such a role. One person at a large company should be manageable. In this case, Google seems to have a moderate number of AI ethicists, which presents a more difficult hiring problem.


It seems like that is always going to be a problem, since if the person is getting fired for critiquing the thing they are there to critique, the company will always have a huge incentive to present another explanation.


I think we agree on the difficulty here. I believe the reverse is also true: if the person is fired for a legitimate, non-watchdog reason, they have an incentive to say the firing was due to them bringing up privacy/ethics concerns.


This is true, but it’s not symmetrical. If they actually have privacy/ethics concerns, those can be presented publicly. In many cases of employee performance etc, it boils down to subjective issues or he said/she said.

In this case, it seems like the technology review article supports the idea that there are real concerns being raised, and elsewhere it has been suggested that the reasoning for the firing is not credible.


believe Google vs believe a solitary worker (female, black, known political agenda)


> believe Google vs believe a solitary worker

But it's not a solitary worker, many Google AI workers have spoken up and said that even if the management narrative is true as to what happened in the specific case, it would represent a radically different review process — both as to what the review looked at and the request to retract rather than correct when there was plenty of time for corrections before the conference — than is applied to other workers in Google AI research.

This strongly suggests that even if Google's incident-specific behavioral narrative is true as far as it goes, the entire incident was constructed as personally targeted harassment, and the narrative about motivations and regular process is false.


Good point, I agree.

My implication was that in the absence of all the facts, many are making a deliberate choice to believe one side or the other.

As far as this specific community, I'm not surprised that many (not all) take Google at their word.


What facts do you think are missing, and what do you mean by ‘a deliberate choice to believe one side or the other’?


Most of the circumstances around the firing are hearsay.

My weakly-held assessment agrees with your comment here:

https://news.ycombinator.com/item?id=25316014

The following comment and thread demonstrate how many in the HN community reflexively defend Google.

https://news.ycombinator.com/item?id=25320745


Interesting.

It seems like we share an overall impression of what is going on.

It seems to me that ‘reflexive’ and ‘deliberate’ are contradictory adjectives when assessing people’s responses.


What's your point?

Jeff Dean is a solitary worker (male, white, known political agenda).


What is his political agenda then, if you know it? (Possession/absence of Y chromosome/melanin does not constitute such.)


Protect Google.


While that may be a self-interested professional agenda (they pay him an obscene salary), that's hardly politics.


AI ethics is politics.

Google is one of the largest players in AI, not to mention one of the largest companies in the world, and the focus of the politics surrounding Tech in general.

Defending Google’s AI ethics is politics.


> That said, as a manager, I would have "accepted her resignation" and/or fired her without hesitation.

Her manager, however, apparently would not have: https://m.facebook.com/story.php?story_fbid=3469738016467233...

Something is seriously wrong, BTW, when a manager has to find out from their own ex-employee that that employee has been fired by the manager’s manager, who, while not informing the manager, has sent a message about it to the fired employee’s direct reports.

Irrespective of the merits of the firing itself.


It’s not clear that’s how they found out. The post just says that they were stunned by “what happened”.

If someone offers their resignation to their skip manager or some other manager in their chain, it’s not surprising that they might accept it without consulting that employee’s manager.


> It’s not clear that’s how they found out.

Yeah, that bit wasn't from the manager's FB post but from elsewhere.


Anybody that publicly says their corporation is gaslighting them won’t be staying at that corporation

It doesn’t matter the topic; that’s the outcome

Not to mention healthier for everyone involved

Google employees are way too comfortable with that message board they have over there


I mean she did not HAVE to make an ultimatum.

Real change takes hard work and time, and patience, and also playing politics with powerful people you don't like.

Sounds like she basically made a one-sided hit piece against the work of some teams at Brain, and took the "my way or the highway" approach. That wasn't so smart on her part either...


Playing politics can take many different forms, and creating a scandal to draw attention can also be very effective.


We'll see.. but minority activists have a tendency to use drama and twitter outrage a lot.

But if you take away the minority part, Google has a point: as a manager, she had a setback. Now instead of sucking it up and maybe waiting for the next submission deadline and modifying her article, she went and complained that it's discrimination on an internal mailing list of employees and then made ultimatums. :roll-eye: Any manager would have been fired.

It's only a special situation because she's black and works on ethics. It's privilege actually, more than discrimination here.


The problem is she made some higher-up angry and they said to fire her posthaste. I think this is all being blown out of proportion. If she had been professional she wouldn't have issued an ultimatum; if Google had been professional it would have said "hol' up, no firing, let's communicate on this". So as usual both sides were wrong and unwilling to be adults.


She had been doing that job for a while. IMO the reason this particular paper got the treatment it did is subtle but simple, based on the following observations:

* Many jurisdictions impose strict controls on the use of racially biased methods (e.g. redlining laws like the Fair Housing Act in the US).

* Different advertising companies have access to different targeting methods (e.g. Facebook has your social graph, Apple has your app use, Google has your search history).

* Language models are a key technology for Google to maintain advertising relevance and thus keep AdWords competitive.

"Models trained on modern language reflect societal biases" may seem like an obvious fact, but once google says it publicly they no longer have a basis to deny it in court.


What do you do for a living?


It’s classic tokenism: invite a minority to the table but don’t let them speak.


If you would fire someone for doing what they were hired to do then maybe you should reflect on whether you are in fact a manager.


> Here's why I have absolutely no sympathy for Google in this situation.

Abstracting away from the particular goal here, the problem itself is difficult and interesting. How can the leader of a large organization change the organization’s culture?


Nadella famously did it at Microsoft, but I don’t know exactly how. It would be fascinating to read a book on just that topic.


ICYDK, Nadella wrote a book on this and more: "Hit Refresh: The Quest to Rediscover Microsoft's Soul and Imagine a Better Future for Everyone". Really good read.


Question for Microsoft employees: does it feel as if Satya changed the internal work culture at MS or has it been more about public perception and less about on the ground culture change?


IMO, he really did, but it was ripe for a change in any case - what we needed was leadership that realized the need for it, and would clearly and unambiguously greenlight the critical mass of proponents on lower managerial levels (where those things actually get done); Satya provided exactly that. But I don't know whether this is translatable to, or even applicable to, Google.


Such an excellent point. Hiring activists is fine, unless your ACTUAL mission is profit vs whatever utopian vision (sometimes worthy, usually delusional) the activist is going for.


I've always found it funny how we treat companies like Google as imaginary people with no single individual bearing responsibility.

To be fair, some people are aware of the people involved, like Jeff Dean, but I look forward to a shift in the future where individuals within companies are as well known and held to the same standards as individuals within politics. Some of these corporate individuals already seem to have just as much power, so hopefully that isn't too much of a stretch as an ideological vision for the future.

With respect to this particular case of Timnit Gebru, it sounds like she was already on her way to being let go. From reading her Twitter, it seems like she has a flair for the dramatic which could make her critiques of people potentially come off as needlessly harsh and unconstructive. Whether that's good grounds to fire her may only be known to those within the company who interacted with her most I guess.


> With respect to this particular case of Timnit Gebru, it sounds like she was already on her way to being let go.

This is just wild speculation and part of a frustrating pattern where all sorts of questionable behaviors by authorities are justified by other unrelated properties of the victims.


She's not a victim. She was an employee who sent in her resignation/ultimatum letter and then blasted an unprofessional message to a bunch of her colleagues.

She is obviously doing good research, but I could also see why she wouldn't be a good fit for Google (or really any other large tech company).

I really don't see why this isn't a win for everyone. She said she didn't want to work there under current conditions, and now she's not. I'm sure she'll get a job in academia or at a think tank where she'll be able to be much more critical of Google and big tech. The world will be better off because her voice and research won't be toned down by a Google internal review. And Google gets to get rid of a problematic employee. It's a win-win-win.


> She's not a victim. She was an employee who sent in her resignation/ultimatum letter and then blasted an unprofessional message to a bunch of her colleagues.

The most interesting part of the story is the part that happened before this.


Where Google requested she retract her paper in order to include information that was flattering to Google and that Google thought was relevant?


They just demanded that she retract it. No opportunity to adjust it to include the additional information (there would have been ample time to do this between review and the camera-ready).


In the article it says the co-author specifically requested the paper not be published because they "didn’t want such an early draft circulating online". Sounds like Timnit might have jumped the gun a bit and it wasn't just Google that thought it wasn't ready.


They requested that it not be distributed online because it had not yet completed peer review, which usually comes with recommendations to improve the paper in either small or large ways. That is very different from retracting it from the conference review.


> justified by other unrelated properties of the victims.

Is it unrelated though? Her behaviour in terms of communication, less what she says than how she says it, is extremely relevant to this case.

However I do think I agree that it's speculation, at best.


Are these numbers the energy to train a model? The whole point of these new NLP models is transfer learning, meaning you train the big model once and fine-tune it for each use case with a lot less training data.

5 cars' worth of carbon emissions is not a lot, given that it is a fixed cost. Very few are retraining BERT from scratch.

EDIT:

The other two points are also disingenuous.

* "[AI models] will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. "

NLP in "low resource" languages is a major area of research, especially because that's where the "next billion users" are for Big Tech. Facebook especially is financially motivated to solve machine translation to/from such languages. https://ai.facebook.com/blog/recent-advances-in-low-resource...

* "Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated datasets (and thus also use less energy)."

This is also a major area of research. Achieving understanding falls under the purview of AGI, which itself carries ethical and safety concerns. There are certainly research groups working toward this. And reducing parameter sizes of big networks like GPT-3 is the next big race. See https://news.ycombinator.com/item?id=24704952


Carbon emissions arguments tend to ignore the value of what's being done as well. BERT and other transformers were meaningful experiments that were valuable in furthering a major research direction and enabling more effective consumer and business applications. In that sense, it's like any other company doing R&D - of course energy will be used and of course there will be some inefficiencies.

I think it's quite misleading to compare the energy usage of an industry-wide research effort to individual consumption. The graphs look bad - "wow, 626,000 lbs! that's 284 metric tons of CO2! a plane flight is way less!" - but there's a fundamental difference between "progress on a problem being worked on by thousands of highly-paid researchers" and "I bought a car".

Meanwhile, the worst power plants are generating on the order of 10+ million tons of CO2 every year. There are at least a dozen of these in the US alone. Car factories are emitting hundreds of thousands of tons of CO2 (Tesla is somewhere around 150,000 tons a year, apparently, and it's designed to be efficient). Perhaps activism around CO2 emissions in ML training might be better focused on improving the efficiency of those things instead, seeing as a 1% improvement would outweigh the entirety of the NLP model training industry. It's certainly good to keep in mind the energy costs of training in case things balloon out of control, but right now the costs relative to the results seem small and not worth highlighting as some forgotten sin.
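
To make the scale comparison concrete, here's a rough back-of-envelope using the approximate figures above (the runs-per-year count is an invented assumption, not data):

    # Rough scale comparison, using the approximate figures quoted above.
    model_training_tons = 284            # one big NLP training run, per the article
    runs_per_year = 1_000                # assumed number of comparable runs industry-wide
    worst_plants = 12                    # "at least a dozen" of the worst US plants
    plant_tons = 10_000_000              # ~10M tons of CO2 per plant per year

    nlp_total = model_training_tons * runs_per_year            # ~284,000 tons
    one_percent_of_plants = 0.01 * worst_plants * plant_tons   # ~1,200,000 tons
    print(nlp_total, one_percent_of_plants)

Even granting a thousand such training runs a year, a 1% efficiency gain at those plants dwarfs the total.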


This is exactly the problem I have with naive environmentalism. The most recent data I could find for the United States total carbon footprint was 5 billion (metric) tons in 2016.

The total emissions from all computers, mobile phones, datacenters, servers, etc., combined, aren't even a percentage point of that.

Yes, CO2 emissions are a problem. You are not going to solve that problem by targeting sectors whose entire footprint is not even a significant digit.


I think the main point was sidetracked. If we collectively emit as much CO₂ as 0.1 or 100 people's lifetimes for a single model, every year... who cares? I don't.

But what if Amazon wants its own model with its own curation? Maybe we need different languages; maybe countries would like to have their own model with a different world-view. Why shouldn't a researcher train their own model, maybe experiment with different versions? Why should consumers be relegated to pre-trained models with inscrutable preconceptions?


this is a great comment, and shows why the original author’s “opportunity cost” argument is a double-edged sword


Still though, it’s good that these numbers are brought to light, along with other “hidden” costs in the IT industry. Otherwise we’ll just spiral into whataboutism and “my own carbon footprint is totally fine, because somewhere out there a model is doing worse things”. It also goes to show the sheer scale of this research field (arguably in a double-edged-sword way) when the general public was still thinking “nerds in a basement recognizing cats”.


I think we all know that there is a carbon footprint cost to what we do. It's the reason Google has been working on renewable-energy datacenters for so long:

https://www.google.com/about/datacenters/renewable/


The Strubell paper, which is the origin of this "5 cars" number, isn't even in the right ballpark for this stuff. What they did was take desktop GPU power consumption running the model in fp32 and extrapolate to a 240x GPU (P100) setup that would run for a year straight at 100% power consumption.

Yes, if you do run 240 P100s at literally 100%, 24/7, for a year, you get the power consumption of 5 cars. This run never happened though; this all ran on TPUs at lower precision, lower power consumption, and much lower time to converge.

If anything this tells you that electronics are ridiculously green even when operating at 100%. I've never profiled world-wide carbon production but something tells me if you wanted to carbon optimise you'd be better served trying to take cars off the road and planes out of the sky.
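
Roughly, that kind of estimate gets assembled like this (a back-of-envelope sketch; every number below is illustrative, not the paper's actual inputs):

    # Back-of-envelope for a "train for a year on N GPUs" extrapolation.
    gpus = 240                # the extrapolated GPU count described above
    watts_per_gpu = 250       # assumed draw per GPU at 100% utilization
    hours = 24 * 365          # a full year, running flat out
    pue = 1.58                # assumed datacenter overhead (industry average)
    kg_co2_per_kwh = 0.4      # assumed grid carbon intensity

    energy_kwh = gpus * watts_per_gpu * hours * pue / 1000
    print(energy_kwh, energy_kwh * kg_co2_per_kwh / 1000, "t CO2")  # roughly 330 t with these assumptions

Swap in TPUs, lower precision, a shorter run, or a PUE near 1.1 and every factor in that product shrinks.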


As I already mentioned, the paper also uses an industry average PUE factor of 1.58, when Google's is 1.1. Other large tech companies can't be too far behind.

Which makes me wonder how far removed AI researchers are from actual production environments. I'm not faulting them, because there's only so much time in your life; the more realistic problem is when someone else takes a paper as gospel and runs headlines with it. Kinda like the trolley problem. Imagine the absurd extreme in this case of governments wanting to regulate large language models because of pollution or to level the competition playing field.


> I've never profiled world-wide carbon production but something tells me if you wanted to carbon optimise you'd be better served trying to take cars off the road and planes out of the sky.

We're getting a bit off-topic here, but the #1 target by far in reducing greenhouse emissions is power generation. In transportation it's significantly trickier to replace petrol-based fuels (especially for airplanes), but it's straightforward enough in power plants. And crucially, you can convert all the petrol-powered vehicles to EVs that you want, but if the electricity they're getting from the wall is still provided by burning petroleum then you haven't actually done that much.

Luckily, computation can for the most part be located anywhere (exactly the opposite of transportation), and thus you have a lot of data centers near hydro and other renewable sources so that they can use the cheapest green power available.


> We're getting a bit off-topic here, but the #1 target by far in reducing greenhouse emissions is power generation.

I readily admit I don't know any of the numbers associated with carbon production and my comment was solely based on the one GPU vs car figure presented in the aforementioned paper.


> 5 cars worth of carbon emissions is not a lot given that it is a fixed cost. Very few are retraining BERT from scratch.

It's not a fixed cost though; it's just how much was spent on this year's iteration of the model. The overall point being made is that model training costs are growing unbounded. Next year it could be 30 cars' worth or whatever for BERT-2, then 600 cars' worth for BERT-3 the year after. That's what it's warning against. At some point it isn't worth it.


This is a very valid argument but it's hard to know what scaling a transformer will really do without trying (looking at you GPT-3). This is probably an issue for ML in general at this point.

I think a more nuanced conversation around these topics will look at exactly what you bring up, how do we properly trade the potential knowledge benefit against the costs?

It pains me that entirely valid avenues of research like this get covered up in nonsense and drama and their message seemingly lost in the midst of it.


Yes, an in-depth nuanced conversation is absolutely merited here, and Gebru and her colleagues were having it in the most rigorous way possible -- in peer-reviewed papers at AI conferences. I won't pretend to remotely be contributing as much to the discussion as they were.


> reducing parameter sizes of big networks is the next big race

I think this race has been ongoing for a while now


I'm more interested in the related nugget, which is the carbon footprint of a Google Search. Some estimates from a decade back put it at 7g (boil a cup of water), but since then it's probably only gotten larger. However, if Google's datacenters are truly carbon neutral, does it even matter?


Yes, it's something I often see ignored, as "common knowledge" dictates that in ML inference is way cheaper than training. But if you're running a model in production at Google, with loads of searches hitting it every second, at what point do the inference costs start to outweigh the training costs?

I simply have no idea where the hinge point is. This could inform other questions like: could it be worth scaling up to get a more accurate model (pay up front in training) to avoid further searches (inference)?
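
For a sense of where the hinge point could sit, a trivially simple back-of-envelope (numbers entirely made up for illustration):

    # Breakeven between a one-off training cost and per-query inference cost.
    train_energy_kwh = 1_000.0       # assume one training run costs ~1 MWh
    inference_kwh_per_query = 1e-4   # assume ~0.1 Wh of compute per query served

    breakeven_queries = train_energy_kwh / inference_kwh_per_query
    print(f"{breakeven_queries:,.0f} queries")   # 10,000,000

    # At Google-search volumes that many queries arrive within hours,
    # so lifetime energy would be dominated by inference, not training.

With made-up numbers the answer is meaningless in absolute terms, but it shows why serving volume, not training, can easily dominate.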


Green energy is often used in addition to carbon-based energy. The total amount of energy used increases. Green energy should replace carbon-based energy.

Wind and solar are finite too. The places where they can be harvested are scarce. So if Google is using up green energy for bells and whistles on the search page, fewer homes can use green energy for heating and transport.


I did not read the paper (just like most people here), but by the title — “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” — it does not look like the CO2 emissions thing is the main topic of this research.

BTW, "Stochastic Parrots" is a very descriptive name for the problem

> Moreover, because the training datasets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,” the researchers conclude. “While documentation allows for potential accountability, [...] undocumented training data perpetuates harm without recourse.”

Since these models are being applied in a lot of fields that directly affect the lives of millions of people, this is a very important and underdiscussed problem.

I really want to read the paper.


> Since these models are being applied in a lot of fields that directly affects the life of millions of people

In particular, it is being applied right now to rank Google search results, and is probably responsible for lots and lots of Google's profit. You should be skeptical of Google's appraisal of a paper that is material to Google's profit.


I bet you don't need to evaluate the NLP model when you have direct access to the end result, Google Search.



The first page was leaked. The environmental angle was a significant part of it, particularly the claim that environmental and financial costs ‘doubly punishes marginalized communities’.


Didn't Google get to carbon neutral?


Yes, since 2007.


> BTW, "Stochastic Parrots" is a very descriptive name for the problem

Meh, both parrots and language models are inferior to humans in producing language, but one is more useful than the other. And real parrots are also stochastic, like all living things.


Analogies aren't meant to withstand careful scrutiny. They're suggestive.


Real parrots are clearly more useful.

After the stochastic parrots have caused the collapse of civilization, we can eat the real parrots.


>> And real parrots are also stochastic, like all living things.

I don't understand how living things are "stochastic". Can you please elaborate on the matter?


Why? It sounds like ideologically driven circular reasoning. If you train an AI on the largest dataset it's possible to obtain then you have, almost by definition, done the most you can to avoid bias of any sort: the model will learn the most accurate representation of reality it can given the data available.

Gebru is the type of person who defines "bias" as anything that isn't sufficiently positive towards people who look like herself, not the usual definition of a deviation from reality as it exists. Having encountered AI "fairness" and "bias" papers (words quoted because they aren't used with their dictionary definitions), it's not even clear to me they should count as research at all, let alone be worth reading. They take as their starting premise that anything a model learns about the world that is politically incorrect is a bug, and go downhill from there.


If you train an AI on the largest dataset it's possible to obtain then you have, almost by definition, done the most you can to avoid bias of any sort

All politics aside, this is not even true for toy ML problems. If I’m trying to do digit recognition and “all the data I can find” is a billion hand-written 0’s and a million hand-written 1’s through 9’s, naively training on that data will yield a model that’s pretty close to guessing 0 every time.
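
A minimal sketch of that effect with synthetic data (sklearn; everything here is invented for illustration, not real digit data):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X0 = rng.normal(0.0, 1.0, size=(100_000, 2))   # stand-in for the flood of 0's
    X1 = rng.normal(0.5, 1.0, size=(100, 2))       # the far rarer non-zero digits
    X = np.vstack([X0, X1])
    y = np.array([0] * len(X0) + [1] * len(X1))

    clf = LogisticRegression().fit(X, y)
    pred = clf.predict(X)
    print("accuracy:", (pred == y).mean())          # ~0.999, looks great
    print("minority recall:", pred[y == 1].mean())  # near 0: the rare class is ignored

The headline accuracy is superb precisely because the model has learned to answer "0" almost unconditionally.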


We're talking about tech firms that have access to the entire internet and use it. Hypothetical examples involving imaginary datasets that nobody would use don't prove anything relevant. And note that my argument is not about whether you actually avoid bias, it's about whether you've done the most you can do to avoid it. If you used all the data you've got, then you've done the most you can, even if for some reason the data you've got isn't any good.


> We're talking about tech firms that have access to the entire internet and use it.

I think you're trying to say that a large enough dataset will be free of bias. I don't see how that follows. If I train a model on home mortgage decisions, I will replicate the bias that currently exists in that dataset - https://news.northwestern.edu/stories/2020/01/racial-discrim... - unless there are conscientious choices to reduce that bias.

Researchers in ethics in ML are specifically trying to enable tech companies to do a better job of not replicating bias and justifiably point out where that is occurring.

Third, I would argue that applying an ML model to do something faster is even worse if it replicates the bias of a previously human decision. It takes the human element completely out, systematizes the bias, and applies it with even less friction.


Given the current public discourse climate in academia, where 99% of the researchers are politically left or extreme left, I recommend a grain of salt for any research that makes claims of racial discrimination. The report you quote has at least a couple of strange holes. The paper itself is behind a paywall, thus beyond mere mortals' reach.

> For example, in about 10% audits in which a white and an African-American auditor were sent to apply for the same unit after 2005, the white auditor was recommended more units than the African-American auditor. These trends hold in both the large HUD (Housing and Urban Development)-sponsored housing audits, which others have examined with similar findings to us, and in smaller correspondence studies

They fail to mention how large the gap is. Is the white auditor recommended 102 units vs the black auditor 97, or 150 vs 50, or 200 vs 3? Without such critical information it is hard to form an opinion, unless one already has a large bias toward accepting discrimination narratives uncritically.

> In the mortgage market the researchers found that racial gaps in loan denial have declined only slightly, and racial gaps in mortgage cost have not declined at all, suggesting persistent racial discrimination. Black and Hispanic borrowers are more likely to be rejected when they apply for a loan and are more likely to receive a high-cost mortgage.

They fail to mention the magic words 'when controlled for income'. America has a huge income disparity problem, which is conveniently forgotten behind the ongoing race (and gender) hucksterism. Assuming we'd wave a magic wand and fix all disparities across visible populations tomorrow, it will still not fix the fact that huge income disparities exist between individuals. Google engineers and researchers get paid 5 times the median national income or more, and (senior) Google management in the 10x to 1000x range. The vast majority of the population is stuck in dead end precarious jobs, with little social mobility, one medical emergency from bankruptcy.


In causal modelling, thou shalt not control for consequences of the causal treatment (the causal treatment here is being born black). Read your Rubin & Rosenbaum.


>> Gebru is the type of person who defines "bias" as anything that isn't sufficiently positive towards people who look like herself, not the usual definition of a deviation from reality as exists.

I haven't seen this "usual definition" of bias as a "deviation from reality" that you give anywhere before. Can you say where it is coming from?


The greatest struggle facing the tech elite today is how to use the proletariat's data to train an artificial intelligence that doesn't inherit the proletariat's beliefs.


Pithy and concise, I like it. That's exactly it.


If your data set is biased for the purpose of answering your question, then having more data doesn't make things better. In this case 'big data' just becomes 'big bias'. Go to Wikipedia and read up on Simpson's paradox.
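
Or, a tiny self-contained illustration with invented counts (method B wins inside every subgroup, yet method A wins on the pooled data):

    data = {
        "group_x": {"A": (90, 100), "B": (19, 20)},   # (successes, trials)
        "group_y": {"A": (10, 20),  "B": (55, 100)},
    }

    for group, methods in data.items():
        for m, (s, n) in methods.items():
            print(group, m, f"{s / n:.0%}")           # B beats A in both groups

    for m in ("A", "B"):
        s = sum(data[g][m][0] for g in data)
        n = sum(data[g][m][1] for g in data)
        print("pooled", m, f"{s / n:.0%}")            # A 83% vs B 62%: the ranking flips

Pooling more data across an unmodelled confounder doesn't remove the bias; it can hide it.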


I feel very disheartened that whatever good Timnit Gebru is doing in her work seems to be undermined by her constant antagonism and hostility. Based on this article, and additional context from other Google employees, I think she is in the right, but she herself is such an obnoxious person that it's hard for a lot of us here to separate the issue from her personality.

My first introduction to Gebru (and probably a lot of others here) was through her fight with Yann LeCun. Whatever everyone thinks about that debate, one thing was clear: LeCun consistently asked his supporters to not attack Gebru, and tried to engage the discourse, whereas she repeatedly encouraged her supporters to attack LeCun, and avoided his points.

FWIW, I myself am a minority. I am very sympathetic to the investigation of AI ethics and Gebru's general area of work. I also completely understand and support the need for activists to disrupt norms: we wouldn't have civil liberties if activists weren't willing to disrupt middle-class norms.

But I think for exactly that reason activists have a responsibility (a greater responsibility even) to be selective about which norms or civilities they choose to disrupt. They do such enormous harm when their actions can be picked apart and dismissed for completely unrelated issues - in this case because Timnit doesn't seem to engage in good faith discussions with other experts in her field. It sucks, I really hate that this is what ethics in tech is going to be associated with.


> whereas she repeatedly encouraged her supporters to attack LeCun, and avoided his points.

This is historical revisionism, colored by your misremembering of the events. If you go back and look, you'll be unable to find Gebru encouraging anyone to attack LeCun.

Nor will you find LeCun making an effort to engage with Gebru's research. He never so much as acknowledges the existence of her publications when asked to read or comment on them.

E: as an amusing example of the importance of this kind of research, my phone decided that when I typed "Gebru", I meant "Henry".


So as I recall, you're right that she didn't encourage people explicitly, but she kept retweeting other supporters attacking LeCun, whereas LeCun kept telling his supporters not to attack her, and talked very positively of her. I agree my phrasing muddied the waters; I should have been more precise.

On another note, it's interesting to me that I keep seeing the LeCun debate come up here again and again as an example of her being a bad-faith debater, given that it technically doesn't have anything to do with this Google debate.

Whether or not we agree with her on the substance, I feel like it strengthens my point that she badly damaged her reputation amongst the ML/AI crowd, and their willingness to give her the benefit of the doubt, by the way she chose to debate LeCun.


I think twitter is a better representation of the actual ML community than HN. Yes, her reputation suffered amongst a group predisposed to consider her area of research to be something between not real and propaganda (both sentiments I've seen expressed and upvoted!).


It's possible; I'm not on Twitter so I don't really know how it played out there. I agree there's definitely a large population on HN with simplistic and reactionary political inclinations[1] that's predisposed to dislike her. I guess I'm going off the fact that I myself was put off by the way she argued, and I consider myself more progressive than the median HN-er.

[1] The fact that you're getting downvoted without explanation (at this particular moment) is a good example of that, to be honest. Your responses are just providing further context and understanding about the issue; there's no reason they should be downvoted other than the fact that people don't like that you're supporting Gebru.


AI ethics research is funny. It's obviously important but it's also kind of ... obvious. I am surprised they get paid so much.

* Lots of processing uses more energy.

* Large amounts of data might contain bad data (garbage in, garbage out)

* Wealthy countries/communities have the most 'content' online, so less wealthy countries/communities will be underrepresented.

here's some more:

* AI being forced to choose between two "bad" scenarios will result in an unfavorable outcome for one party

* AI could reveal truths people don't want to hear e.g. it might say the best team for a project is an all white male team between 25 - 30 rather than a more diverse team. It might say that a 'perfect' society needs a homogeneous population.

* AI could disrupt a lot of lower paid jobs first without governments having proper supports and retraining structures in place


Playing the devil's advocate here, I'm with you that half of AI ethics is obvious and the other half is wrong, but isn't it the goal of the field to try and give meaning to things that aren't obviously well defined?

To take an example that's in vogue right now, AI explainability. Nobody even has a definition of what it means for a model to be explainable (is a linear regression "more explainable" than ML? isn't Google search far less explainable than any model of anything ever?), but a reasonable framework for that concept could certainly be interesting.

Obviously, serious frameworks are done with definitions and math, not with words and storytelling, but all the air around those things seems to me more the fault of politicians, crap journalists and freaking idiots on social media (the current term is 'influencer', I reckon) rather than an issue with the field itself.


Sorry to ask, but I don't understand the points you are making. Could you please elaborate on them? I had trouble with the last 2 paragraphs. Thanks


AI explainability is a concept that's being thrown around frequently these days; it revolves around the idea that machine learning models should be "explainable", that is, their predictions should be traceable not just to the mathematical operations that define them, but also to some properties of the input which should be understandable by a human.

While I won't deny that the concept is interesting, it's terribly difficult to understand what it would translate to, technically speaking. It's true that machine learning models make predictions that are hard to check (and sometimes even understand), but they aren't inherently "less explainable" than even the simplest statistical models, like linear regressions. For example, it's pretty weird to be angry that "ML is not explainable" but to be okay with things like Google search that have literally zero transparency.

My main problem with it is that people with poor understanding of computer science and math in general - let alone machine learning - throw around the term like it's obvious what they mean, when their real goal is generating social media clout in the case of journalists and influencers, or, more darkly, to enforce political control of the industry in the case of politicians.


I think it's a little better than that.

An example from class. Suppose you are building an ML decider which can stop a production line if it sees defective products. If you are choosing between a decision tree and a neural net, one of the things to consider is that with a DT you can look at the tree the model comes up with and say: okay, if the mass of the widget is low, we reject.

With a NN, you can't see why things are rejected in the same way.

Some tasks benefit from having more explainable models; for some it doesn't matter. But I don't think it's just a buzzword or trying to enforce political control.
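
For what it's worth, a small sketch of the DT side of that, on made-up widget data (sklearn; the feature names and threshold are invented):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    mass = rng.normal(100, 10, 1000)        # hypothetical widget mass (g)
    length = rng.normal(50, 5, 1000)        # hypothetical widget length (mm)
    defective = (mass < 90).astype(int)     # toy ground truth: light widgets are rejects

    X = np.column_stack([mass, length])
    tree = DecisionTreeClassifier(max_depth=2).fit(X, defective)
    print(export_text(tree, feature_names=["mass", "length"]))
    # prints rules along the lines of:
    #   |--- mass <= 90.0 -> class 1 (reject)
    #   |--- mass >  90.0 -> class 0 (accept)

A neural net trained on the same data would hand you weights, not a readable rule like "reject if mass is low".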


> But I don't think it's just a buzzword

I don't think it's just a buzzword either, in the same sense that 'AI' is not just a buzzword. But both "AI" and "explainable" are also buzzwords, or at least often used as such.

I have no objections to the example you made, some models are "obviously" more explainable than others. I'm simply refuting the claim that some models are inherently more explainable, because the entire concept starts falling apart when you take it out of its mathematical context. For example, it's easy to see what a DT does when it's small, but larger DTs are as "unexplainable" as a NN.

> trying to enforce political control

I'm not against political control per se, some political control is well-justified; but I find it pretty suspicious that everyone is jumping on this train when there are more glaring issues, (for example, if and when a branch of the government or a government-controlled agency decides to use a ML model, they should be required to declare that they are using it and make the source code public, so that it can be cross-examined) and my guess is that it's because "explainability" provides a nice narrative, unlike other concerns.


I think linear regressions are perfectly explainable.

https://en.wikipedia.org/wiki/Proofs_involving_ordinary_leas...


Yes, a linear regression is explainable. You know what your features represent, and the regression puts a straight-up co-efficient on each of them.

You can look at the weights you trained, see how strongly your input features contribute to the output prediction, and say things like "if all else is equal, 25-49 males are 0.08 likely to click on this ad".

Deep nets aren't explainable in that way, even if you devoted a lot of effort. Maybe someone will come up with a novel method for doing it but as far as I know the best we can do is statistics about what the net's likely to predict.


> Yes, a linear regression is explainable.

I don't agree that this is, in general, the case.

Dense linear models trained on a lot of parameters aren't exactly simple to trace, and even the weights aren't as intuitive as they might seem. What about regularization? What about pre-processing? What about feature selection?

There are a lot of examples of linear regressions where, for example, you can reverse the sign of the correlation by picking some features rather than others.

Assigning meanings to features of your model is something that requires extreme care and deep understanding of both the model and your dataset.


Ok, fair enough, the more sophistication you add, the less explainable it gets.

That being said, you can look at magnitude and direction of various coefficients and conclude certain obvious things. Even the case where reversing the correlation by picking some features instead of others says something -- those features point different ways but one has a higher magnitude so it 'wins'. They're ultimately pretty simple statistical observations about the training set.

A deep net, in comparison, is a totally unexplainable black box between the cross-linking and the sigmoid functions.


I agree that there is an intuitive concept of "explainability", which is why I said that a formal model for it would be interesting to see. I also agree that "the average explainability of a deep net is lower than the average explainability of a linear model" (whatever that sentence means), my point is that it's extremely hard to pinpoint what explainability means, because the usual characterization of "you can fit the model in your head" is extremely subjective and brittle.

I'm all for more research in this field. Heck, I'm all for more research in any field, but because our definition of "explainability" is so primitive and unrefined, arguing that we should build ethical or regulatory frameworks on top of it seems premature.


> you can look at magnitude and direction of various coefficients and conclude certain obvious things.

Sometimes, but if you have correlated predictors, the 'explainability' can be distributed across coefficients of those predictors in unpredictable ways. Training a linear model twice with two different subsets of training data can give you very different coefficient weights.
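
A minimal sketch of that instability with synthetic data (the features and coefficients here are invented for illustration):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    x1 = rng.normal(size=10_000)
    x2 = x1 + rng.normal(scale=0.01, size=10_000)     # nearly identical to x1
    y = 3 * x1 + rng.normal(scale=1.0, size=10_000)   # only x1 "really" matters

    X = np.column_stack([x1, x2])
    for subset in (slice(0, 5_000), slice(5_000, 10_000)):
        print(LinearRegression().fit(X[subset], y[subset]).coef_)
    # The two coefficients swing wildly between the halves of the data,
    # even though their sum stays near 3.

Each fit predicts y equally well, but the per-feature story you'd read off the coefficients changes from one training subset to the next.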


Even linear regression is not very easily explainable for real-world systems with currently utilized technologies -- especially when you have a virtually unlimited range of mechanisms that intentionally or unintentionally subset or modify the sampling process (or select for desirable results at "sampling design time") ...


It's always obvious in hindsight, much like what can be said of something like Dijkstra's algorithm, but the fact of the matter is not everyone can spend the energy to both understand the context of the problem and direct their attention towards the evaluation of ethics within that context. Some people even find it hard to understand the situations of others enough to identify where their technologies can be used for harm.

The problems that AI ethics research addresses are ethical problems that executives and employees aren't paying attention to. They may seem obvious when stated explicitly because of how easy the concepts are to grasp (and the seemingly simple derivation of cause-and-effect relationships), but I assure you they are not obvious to a lot of people I know in the field, at least.

There's also a clear misunderstanding of what ethics research should entail:

* AI being forced to choose between two "bad" scenarios will result in an unfavorable outcome for one party

This is a trivial result that doesn't hold much value as a standalone observation and probably wouldn't be touted as a research point in a respectable publication of AI ethics.

* AI could reveal truths people don't want to hear e.g. it might say the best team for a project is an all white male team between 25 - 30 rather than a more diverse team. It might say that a 'perfect' society needs a homogeneous population.

The fact that you made this comment may be cause for an ethics discussion in itself. You used "truths" to describe the statement "the best team for a project is an all white male team between 25 - 30 rather than a more diverse team." This shows a disregard for the reality that most data is contextualized and biased. Using terms like "best team" and "more diverse team" puts statements like the one you made at risk of drawing a misguided conclusion from data.

Maybe the following revised statement would be closer to what we can call a contextualized "truth" generated by an ML model:

"Teams that comprise of white males between 25 - 30 have a statistically larger chance of meeting milestones set by leadership rather than teams with one or more non-white male."

Even statements like that aren't complete as my definition of "meeting milestones" could be sourced from self-reported data (in which case it could mean that white males just self-report more milestone completions).

* AI could disrupt a lot of lower paid jobs first without governments having proper supports and retraining structures in place

A problem to consider, sure, but this is one of the more popular observations and has been echoed over time within the context of technological advancements in general.


> wouldn't be touted as a research point in a respectable publication of AI ethics

Check out https://www.moralmachine.net

> Maybe the following revised statement would be closer to what we can call a contextualized "truth"

Your rewriting of my comment and critique demonstrates that I got my point across. I put "perfect" in quotes on purpose. The whole example could be full of the same.


Maybe there are less obvious and more interesting findings in the paper that the blog post author has chosen to ignore.


[flagged]


As much as I agree social "science" is often flawed, your flippant dismissal of an entire field with an extensive pedigree strikes me as unlikely to come from a place of knowledge and familiarity.


She's got a PhD in CS from Stanford. She is published in CS conferences. Even if you want to shit on the social sciences for some reason, it is clear that this work is computer science.


It almost reads like ‘Ethics Philosophy applied to AI’


The most important problems in life are ill-defined.

We still need to work at solving them, with the limited means we have.

If we'd only work with hard sciences, life would never get better.

Many things in our lives have advanced based on what could be considered guesses (for example, capitalism doesn't really have a rigorously defined mathematical model; yes, there are some hand-wavy explanations, but nothing you'd put in a solid math or physics paper).


Last comment about it, since I've spent too much time on this story. As a researcher in the field I feel bad that people don't give Google more credit (I have no affiliation with Google). They created a research environment where researchers have the freedom to work on their own interests and publish papers (they publish more than any other company). You don't find many environments like that outside of academia. I still remember Microsoft Research closing down their research lab on the West Coast and sending a huge number of researchers home. I can tell you I always apply when there is an opening. So far without luck. If you're a Google researcher, don't forget how lucky you are.


This is a poor argument assuming discriminatory practices are happening there. This is like giving Weinstein a free pass because his movies are really good.

Note, I’m not saying Google discriminates, but if they do then your statement doesn’t contribute to their defense.


Well, nothing in this story suggests they do. They might have an inconsistent review process, like pretty much most processes in these companies - from interviews to promotions. Let's see how many people follow her and leave Google Research due to their practices. My prediction is zero.


That’s the beauty of modern discrimination. You never come out and say you’re discriminating. Rather, you just apply the rules selectively. An inconsistent process is the best way to discriminate.

My prediction is you will see some people leave Google due to this. Likely those who worked closest to her. Apple is already extending interest.


Maybe I'm not understanding the argument. But why do people think she was let go?

My understanding is she wrote a paper critical of certain aspects of Google's business.

They asked her to retract it for revisions because they didn't think it was fair to Google, which believed it had made advancements relevant to her critiques.

She thought this review was unfair, so she wrote an unprofessional message to an internal group and submitted an ultimatum/resignation, which Google accepted.

Do you believe that corporations generally accept ultimatums?

Do you believe the message she sent out would never be a fireable offense?

Do you believe she was let go because she was black or female?


> Do you believe she was let go because she was black or female?

Possibly. I do think it is likely that she isn't fired if she was a white male. In my experience, people have more patience with people who are in their "in-group". This goes beyond race/gender -- just look at sports fandom for an example. Isn't it odd that players for the team you cheer for can do no wrong, but as soon as they're traded then they start doing a bunch of bad stuff, even the stuff they did while on your team actually now looks kind of bad. In the tech industry role/team/company/race/gender are probably the biggest in-group delineations.

> Do you believe that corporations generally accept ultimatums?

Yes they do. The most common ultimatum is given before employment, where candidates will often give a minimum salary/benefits they will take for the job. I've seen and even been involved in these negotiations where a candidate states that they will not go below this offer, period. And then we counter with a number below that but provide some other incentive we think may counter it.

There are other ultimatums I've seen in the workplace. For example, "I can't work with XYZ or I'm leaving the team". I don't think I've ever seen the response come back as, "good luck finding a new team". It's always been to dive deeper into the relationship.

In fact I issued an ultimatum many years ago where I said I couldn't travel any more due to a personal issue or I'd have to find new employment. I guess you could say they yielded to my threat. I like to think that I was just being honest and they helped find a way that I could still help the team, especially since I only traveled a couple of times per year.

> Do you believe the message she sent out would never be a fireable offense?

I can imagine it being listed as a firing offense in some handbook. I can also imagine that others who have written damning emails have gotten away with no more than a slap on the wrist (or less).


> Possibly. I do think it is likely that she isn't fired if she was a white male. In my experience, people have more patience with people who are in their "in-group". This goes beyond race/gender -- just look at sports fandom for an example. Isn't it odd that players for the team you cheer for can do no wrong, but as soon as they're traded then they start doing a bunch of bad stuff, even the stuff they did while on your team actually now looks kind of bad. In the tech industry role/team/company/race/gender are probably the biggest in-group delineations.

I think you're right about her not being in the in-group. But the in-group had nothing to do with race or gender but instead was about whether or not she played ball with Google executives. It seems like she was a team player with regards to people she worked with directly in AI ethics research, but definitely was not a corporate team player.

I think in this specific circumstance a white male would have been easier to fire. It's a much worse look to fire a woman of color investigating AI ethics (a lot of which revolves around racial and gender bias) than a white male.

> Yes they do. The most common ultimatum is given before employment, where candidates will often give a minimum salary/benefits they will take for the job. I've seen and even been involved in these negotiations where a candidate states that they will not go below this offer, period. And then we counter with a number below that but provide some other incentive we think may counter it.

When I used the word ultimatum I wasn't referring to pre-employment salary negotiations, which this most certainly wasn't.

> There are other ultimatums I've seen in the workplace. For example, "I can't work with XYZ or I'm leaving the team". I don't think I've ever seen the response come back as, "good luck finding a new team". It's always been to dive deeper into the relationship.

I think there is a subtle difference between delivering an ultimatum and communicating the minimum needs for a job. "You need to stop requiring me to travel or I quit" will be treated very differently by most employers than "I talked with my wife, and due to us having a newborn, I don't think I can fit travel into my life any longer. Is there any way to continue this job without travel?". Judging by her generally assertive nature (which is a valuable asset in a researcher and activist, but can be a liability when navigating corporate America) and the other message she sent out to her co-workers, I imagine her email was phrased more like the former than the latter.

> I can imagine it being listed as a firing offense in some handbook. I can also imagine that others who have written damning emails have gotten away with no more than a slap on the wrist (or less).

Even without an employee handbook, this is incredibly unprofessional behavior. Telling co-workers to stop doing certain types of work and to apply external pressure to your employer seems unfathomable to me. If I were really angry or drunk and sent out a message like that, I would be afraid of losing my job. If I also submitted my resignation at the same time, I would just assume they would accept it unless they had a very important reason they couldn't let me go.

I think if she had just submitted her resignation/ultimatum letter or just sent out that withering letter, she'd still be employed at Google. I've seen two people act that unprofessionally; one was immediately fired and the other was told, "This is your first and second strike. You think we can't replace you. We can, and if you do this again we will." And those unprofessional outbursts weren't in writing; they were just people being unprofessional because they were stressed and frustrated.


That's another beauty of modern discrimination: it's unfalsifiable.

From everything made public, her behavior constitutes a fireable offense at any company I've ever worked at, regardless of skin color. And there is precedent for non-PoCs being fired for posting unprofessional messages to internal message boards. So why would you consider her actions permissible?


I’ve seen similar emails and never seen anyone fired over them. People have tried to claim equivalence with Damore, and the two are so night-and-day different that I struggle to believe the comparison is in good faith. The main difference is that the most egregious emails are often from technical leaders attacking those much lower in the pecking order.

Regarding the beauty of modern discrimination that you note. Plaintiffs in these cases almost always lose, because it is almost unprovable and the bar is high (being called the n-word for example isn’t sufficient). Being unfalsifiable provides little solace if there is no consequence to the discrimination.


You've seen an employee send out an email to colleagues instructing people of color to stop their efforts to improve the company, claiming the internal review process was an act of racist discrimination and dehumanization, and essentially calling their work a lost cause?

You've seen this followed up by threats to resign from the company if the anonymous reviewers' identities aren't revealed? And all this following prior threats of litigation against the company within the past year?

And you saw this from men or non-persons of color who were not fired? Obviously not, but if you do have examples that are sufficiently comparable in your mind, feel free to describe them and make a case. As it is, you're again falling back on unfalsifiable claims.

And I'm still curious to know why you'd expect a company not to fire someone in this scenario?


As you imply, something this specific I've never seen. I have seen emails go out to large mailing lists calling out the integrity of the company with respect to its dealings with a foreign country and saying that leadership was condoning, if not contributing to, genocide. This was a white male making this statement and nothing was done (at least not publicly, and they weren't fired).

I posted something earlier about another company and internal emails to a broad mailing list relating to sexual harassment from people in the company. Again, nothing was done there against the people who wrote the email (at least not publicly). And in this case, similar to Timnit, it made national news. Although in that case it was a white female.


There is no evidence they applied rules selectively. What I meant by an inconsistent process is that it's subjective. These processes usually say something like "Two L6+ need to approve...". Then it's assigned to someone, and that person can spend 2 minutes or 2 hours on it. Paper reviews are especially prone to that. How likely your paper is to be accepted is often more a factor of who reviews it than of how good the paper is. The main thing for me is that it was clear the paper would have been published eventually. I've seen it many times - reviewers ask for some changes, authors make the changes, reviewers are happy, paper is published. The reviewers didn't ask for anything that would make the paper weaker, nor did they ask to remove any content.

I'm guessing she was so mad because there were external people on the paper and pulling the paper would make her look bad.


Other Google researchers have stated that they’ve never had their papers internally reviewed on the criteria hers was (literature review, for example) - just for things like corporate-sensitive information. If no one else in Google history has had the same criteria applied, that seems selective to me.


Modern? It's a trick that's probably as old as discrimination itself. Here's a rare case of an American judge speaking openly about this - the quote is from a 1941 Florida Supreme Court opinion (Watson v. Stone):

"I know something of the history of this legislation. The original Act of 1893 was passed when there was a great influx of negro laborers in this State drawn here for the purpose of working in turpentine and lumber camps. ... It is a safe guess to assume that more than 80% of the white men living in the rural sections of Florida have violated this statute. ... There has never been, within my knowledge, any effort to enforce the provisions of this statute as to white people, because it has been generally conceded to be in contravention of the Constitution and non-enforceable if contested."

Note, by the way, that this also references the other common trick - you can also make rules that are seemingly not arbitrary, but are easy enough to challenge for those with the right knowledge and/or means.


What? People should check their principles at the door? They literally hired an ethics researcher and then it appears they fired her unethically.


Did I say people should check their principles at the door? It's a large research org within a large company. Obv it's not all perfect. I'm sure it has some inefficiencies, politics, etc. It's a company after all and people who accept their offer should know they are going to lose some freedom. But it's still one of the best environments for researchers outside of academia. Google should get some credit for that. Not to mention they get paid extremely well. Researchers in other fields can only dream of such opportunities. Also, it is your opinion it was unethical. I'm sure there's a lot to this story Google cannot share. Nothing in this story to me makes Google Research a place I wouldn't want to be part of.


It's a dangerous assumption that a company can conduct ethics research to begin with, no? One should just assume it's a highly paid PR group.

Academia has its challenges with ethics and issues of representation in research even without the direct influence of shareholders, and academia supposedly makes huge investments in this area.


Anyone else find the narrative on Twitter to be so much different from that on Reddit and Hacker News?

On Reddit and Hacker News I have never seen any unconditional support for Timnit. Even among those who are anti-Google and broadly in support of her, there's no wholesale agreement with everything Timnit says - her account of the story, the reasons why, and the conclusions:

namely that (a) she was fired, (b) she was fired because of sexism and racism, and (c) all research at Google is pointless and should stop until there is reform.

On Twitter there is more vocal support, but here and on Reddit, even among those most critical of Google and most behind Timnit, there's no complete agreement with her. There are comments supporting some parts but none supporting all, and none agreeing with the Google Walkout document. There are comments pointing out holes in the official explanations, but not supporting her conclusions.

Why is this? I find it odd; I would have expected more unequivocal solidarity where there is less public visibility.


Hacker News is about pseudonymous discussion. Only a small percent of HN posters are building a brand name. Twitter is much more real-name based and personal brand-building focussed.

Jumping in on a bandwagon to signal group membership in an in group can help brand-building. Going against the in group is potentially fatal to your brand. The risk/reward balance does not favor genuine discussion of a controversial topic.

On HN most comments are up/down voted based on their content. The post authorship usually matters little. There is little benefit for the author to announce support of a popular position and little risk for the author to express an unpopular one. So discussion here can be more about substance and less about signaling.


This is one of the most illuminating assessments of Twitter. It’s virtue signaling, as cliché as it sounds to use that term - that’s what Twitter has become. No fault of Twitter’s; we’ve created a culture in our society where people using their open identity cannot easily voice dissenting opinions or go against the grain without getting cancelled.

What a fucked up world we live in.


I am not on twitter. What's with this canceling? Who got cancelled? What does that mean for them? Could you illustrate some examples?



This is not true. HN very much behaves like the rest. The hive mind here just keeps repeating the mantra while still downvoting dissenting opinions. Look at the other dead comment. Was that really bad enough to kill it? And from my experience, if you really challenge the powerful crowd you get vengeance killed all over the thread.

Also accounts here are not like on reddit. You gain power with accepted comments. That's the most echo'y power dynamic imaginable. If you don't play the old boy's game, you can't participate and shape the conversation.

So, HN, are you gonna give me voting rights or will you kill me, too?


Remember that outside the valley AI bubble, Gebru is primarily famous for having publicly whipped up a mob against Yann LeCun and chasing him off Twitter by constantly claiming he was a racist, sexist white man with no clue. Given that, it's maybe not a huge surprise that the tweets you're finding are more pro-Timnit than the sentiment that actually exists.

Another obvious problem: she keeps being described as a highly respected AI researcher, a leader of the field, etc. Er, no. I read a lot of AI papers. I don't respect Gebru or the sub-field of AI ethics much at all. What's happening here is simple: criticising a black woman in tech is impossible to do under your own name in corporate America, as radical leftists will immediately set out as a group to destroy you. The only acceptable position is to praise her and claim she's a genius. On HN and Reddit you can see what people really think, and it's very far from that.


I find that the average HN thread demonstrates a broader diversity of thought, and generally deeper, more critical, more scientific thought than the average Twitter feed. So it’s not surprising that Twitter would be near unanimous and HN would be more nuanced.

This is one of the reasons I prefer HN to just about any other online forum.


Deeper and more critical sure (likely due to the platform, not necessarily the users).

On Twitter I can choose to follow POC, women, and experts inside/outside this field and hear their opinions, which I’d argue provides a more diverse representation than anonymous posts on a largely white male message board.

I also trust the opinion of someone willing to share their identity, it forces you to think more carefully about the stand you’re taking when your reputation is on the line.


What makes someone's opinion particularly valuable when that someone's a woman or not white? They might have different life experiences and perspectives on day to day life compared to a white male, but sex or race has zero relevance to knowledge-based fields.

The opinion of someone anonymous is probably very close to their true opinion. The opinion of non-anonymous users on Twitter is almost certainly censored.


HN is the new /.


Oh I haven't thought about that site in a long time. I used to be obsessed with it back in the day.


I agree that there is a broader diversity of a thought, but there’s also a dominant, conservative streak when it comes to deferring to corporate authority. This is unsurprising since many people on this site get paid a lot of money by the same people being critiqued, but does represent a kind of self-selection toward servility.


Er, who is claiming "all research at Google is pointless and should stop until reform"? Presumably not the person who was trying to do and publish research at Google?

(Are you confusing this with her email that said that participating in Google's feel-good diversity and inclusion work, specifically, was pointless and should stop, so that people could focus on their real jobs - such as research - instead of wasting time placating unknown managers?)

Anyway, my theory re your question is that Reddit and HN are usually pseudonymous, and so you have no idea whether the person saying something actually knows anything about Google or actually participates in the industry. The norm on Twitter is to use your real identity, and so comments from Googlers and ex-Googlers are boosted there. On Reddit and HN, good-sounding first-principles arguments are boosted. The two models are better at different kinds of discussions.


A bit of a mixup, I think. Good to clarify it. The Google Walkout blog demands that all research be reformed (but not stopped), while the email did tell people to stop that specific kind of work.


Yeah, the email was clearly written amidst frustration and isn't the world's most polished piece of prose, but I think it's pretty clear that the thing it's saying you should do is stop writing DEI docs that meet OKRs and are aimed at an internal audience, not stop doing research:

> What I want to say is stop writing your documents because it doesn’t make a difference. The DEI OKRs that we don’t know where they come from (and are never met anyways), the random discussions, the “we need more mentorship” rather than “we need to stop the toxic environments that hinder us from progressing” the constant fighting and education at your cost, they don’t matter. [...]

> So if you would like to change things, I suggest focusing on leadership accountability and thinking through what types of pressures can also be applied from the outside. For instance, I believe that the Congressional Black Caucus is the entity that started forcing tech companies to report their diversity numbers. Writing more documents and saying things over and over again will tire you out but no one will listen.

(It seems to consistently use the word "paper" instead of "document" for research.)

And Jeff Dean's reply identifies that she was asking to stop the DEI work, not to stop research:

> I also feel badly that hundreds of you received an email just this week from Timnit telling you to stop work on critical DEI programs. Please don’t.


Maybe deeper knowledge of the issues at hand than in Twitter. Or maybe different interests. Why would they agree? There is a reason why I don't use Twitter and I read HN.


This is not surprising. On Twitter there is a strong cancel culture to irrationally oppose anything that fails to unquestioningly accept “wokeness” narratives. Critical thinking and evaluation of evidence is not really permitted. Twitter is more about trying to look (publicly) like you affiliate with this or that celebrity, so the regular person can feel some sense of patronage to their chosen circle of intellectuals or artists. It has no connection to objective appraisal of evidence. It’s more tribal. It’s “Team Timnit” (all the details supporting wokeness narratives are believed, all details supporting Google taking an appropriate action are heresy) or “Team Google” and you feel the opposite.

The James Damore episode is another example. It is essentially 100% equivalent to what Timnit has done in this chain of events, but in that case the “wokeness” bloc on Twitter called for Damore’s firing and railed against his perspective - virtually a complete double standard compared to how they view Timnit’s behavior.

By the mostly anonymous nature of Hacker News, and the removal of any type of social credit for patronage of different ideologues, the commentary here is able to be much more nuanced and try to really account for a much greater variety of facts or perspectives.


The basic difference is that she implied I&D work at Google was ineffective while he implied half his co-workers were unworthy of their jobs.

She made her managers mad while he made half the company mad.


> while he implied half his co-workers were unworthy of their jobs.

Why do you and other keep propagating this myth?

If anything I think you are making it harder for women in IT:

You are building a narrative that many respected people don't respect girls in IT, which probably makes it even harder for them to choose IT.

Also I think you are building up under the idea that men and women are enemies in the workplace.

At this point I think it is well established, for anyone who cared to read what he actually wrote, that he was trying to help bring more women into Google, not saying that those who were already there were somehow unworthy.


I would say she did much more than imply - she wrote a workplace email imploring her coworkers to stop doing their jobs - something that would get you fired anywhere, any time. I also think you’re overstating Damore’s claims and you’re taking the interpretation or reaction to his claims to be equivalent to the claims themselves.

The claim itself, that innate biological differences between men and women can partly account for their different representation in hiring in certain professions, is pretty much beyond dispute and has so much academic research supporting it that it's pure gaslighting to treat someone like they are sexist or discriminating or biased for saying so. Note this has to do with factors that lead to representation in certain fields, and is not a statement about whether people of a given gender actually perform jobs in that field in any better or worse way.

Both Gebru and Damore are pushing agenda-based interpretations of existing facts, and both are doing so in ways that are not appropriate for a professional workplace.

I am glad at least to see that Google handled both situations relatively consistently, given how extremely similar in scope and impact Gebru’s comments are to Damore’s.


Did she say to stop doing your job? Or to stop writing about achieving diversity?

Damore said more than just that there might be reasons for the lack of women in tech. He also said that we should stop trying to increase their representation. Among a bunch of other things.


She said to stop doing your job.

Re: Damore, I agree. He was pushing a bad agenda that tried to draw more than is reasonable from some academic research on gender disparity.

Very, very similar to what Gebru did as well. First by pushing agendas in a publication leading to its disapproval, then in her follow up email.


I can’t find where she said to stop doing your job. Can you quote it here?


It’s directly in the email Timnit wrote, among just the first few lines. If you were unable to find it, then either you did not try whatsoever, or you are disingenuously pretending like there is some other potential way to interpret Timnit’s email that miraculously causes it to not refer to the explicit labor tasks of the Google employees she sent it to.

The direct quote (which can be read in the link [0]) is:

> “What I want to say is stop writing your documents because it doesn’t make a difference.”

This is a direct, explicit, unequivocal reference to researchers and ethicists employed by Google to write research and policy documents on fairness, DE&I, and bias, within machine learning, staffing and hiring, and other areas.

The phrase, “stop writing your documents because it doesn’t make a difference,” cannot be falsely made to apply to any context other than actively encouraging coworkers to specifically stop performing their direct job responsibilities.

My suspicion is that you would try to falsely misrepresent Timnit’s imploring to stop work as instead being about some type of non-work related activities or optional / extra-curricular activities that Google wouldn’t have a formal labor productivity stake in - and if that is in fact what your response would be, it is completely and wholly false and wrong.

[0]: https://www.platformer.news/p/the-withering-email-that-got-a...


What was communicated on social media was that the company had fired her, but it appears that the researcher tied the continuation of her work to the company accepting her conditions. The company did not accept.

In poker terms: one party bluffed and the other party called the bluff.

Now whether the clauses were abusive and the company was compelled to satisfy these conditions is left to lawyers, I suppose. I'm not privy to more details to have an opinion one way or the other.


This isn't poker. She made demands and stated she'd discuss it further after her PTO. Google accepted, as punishment, a resignation that was never made (it has financial implications).


Sure it's not poker. I believe the researcher was looking for an employment lawyer, and the pressure is increasing. The email was discussed in public and subsequently "published" by the company's people. Again, not privy to more details.


She directly said to stop writing these documents. They are not effective for doing the job; the data shows this. She doesn’t say to stop doing your job. In one example she gives, the job is to increase gender diversity, and she laments that all of this talking has had no impact, yet Samy seems to have done it with no incentive (I’d like more of the backstory on Samy).

The job is to increase diversity and inclusion, not to write documents. Programmers get this in other domains (the job isn’t to write lines of code, but to provide value to customers), but suddenly become naive when it comes to things like this.


sarcasm

I heard buying Twitter bots is cheap.


> Why is this?

Because there are lots of FAANG engineers in here (i.e. working at Google, or with friends working at Google, or working at companies who do the same nasty AI stuff as Google) so them unconditionally supporting Gebru on this would be against their direct financial interests. It's as simple as that.


HN strikes me as being pretty consistently anti-Google almost every time Google comes up, so I don’t think this is the reason.


It's split quite evenly in pro and against camps, at least on privacy.


FAANG employees tend to consistently be among the most vocal and outspoken critics of FAANG. Also see the 1,200+ employees who signed a petition unconditionally supporting Gebru while turning a blind eye to her problematic behavior.

There's significant overlap between FAANG employees (and SV tech employees in general) and Woke Twitter.



Hi dang,

I’ve recently noticed that this commonly occurs, i.e. multiple relevant threads that make up the top 50 of HN over, say, a 72-hour period, all related or riffs on a similar discussion.

Wondering if there is a way to group them as part of the same topic/submission? (thus saving you the manual work of posts like this). I appreciate this would (i) require a code change and (ii) would shift the HN model from individual threads based on 1 URL submission, but just thought I’d suggest it to the HN brains trust.

In other news: Keep up the good work that you do on HN to give this online community ‘structure’.


One potential implementation: at the top of each thread, after the metadata for the thread itself and before the comment box, are a list of ‘Related discussions’ with a bulletpoint list of related HN threads.


Unfortunately, PG is of the opinion that he "finished" HN over a decade ago, which is why we don't get any iteration on the feature set here, no matter how useful it might be. I agree that this would be a good addition but I don't see it happening with this mindset.

https://news.ycombinator.com/item?id=18183822


pg is referring there to October 2006, which is when HN was released (https://news.ycombinator.com/item?id=1), so he must have meant that he'd "finished" it enough to release it. It can't mean "finished" the way you're interpreting it, because he continued working on HN code for years after that. Indeed, he added a major new feature (https://news.ycombinator.com/item?id=7484304) a few days before he left HN (https://news.ycombinator.com/item?id=7493856).

As someone who's worked on HN code since then, I can tell you there's no such "mindset", nor is it true that "we don't get any iteration on the feature set here" (11 days ago: https://news.ycombinator.com/item?id=25197418). It is true, though, that most of the changes are subtle enough not to be so visible, including the ones I'm working on this evening. Most of the effort goes into attempting to improve, or at least preserve, the quality of submissions and comments.


That will be really good


We'll do something like that eventually.


Lobsters has a pretty good implementation of this; see this merged Advent of Code thread, for example: https://lobste.rs/s/3uxtgb/advent_code_2020_promotion_thread


This doesn't strike me as anything that needs to be buried. The energy argument is tenuous and at this point models use relatively little compute power. The language bias is a better tack but I don't think this is earth shattering to anyone. Most of the internet is from Western sources and some fraction of that is racist/prejudiced. I think this is obvious to anyone who has ever used the internet.


It sounds like it should be buried because it's clickbait headline generating fluff that only would get attention because of where the authors come from. And I say that after reading a summary that sounds like the authors had a positive opinion of the paper.

Google (probably) wanted to bury it because many of those clickbait headlines would have been negative to google.


You should read some other ethical AI research. This is par for the course. We're still suffering talks about killer robots.


We already have drones with cameras, drones with guns, and drones that are autonomous. How long until someone combines these?


A few decades ago? What is a fire-and-forget guided missile but an autonomous suicide drone? Some missiles, particularly anti-ship missiles, even use computer vision in the air to identify objects and pick targets. Mark 60 CAPTOR naval mines sit underwater listening to nearby ships. When a noise is classified as an enemy ship, the mine releases a guided torpedo to kill it. Of course autonomy in naval mines is nothing new; the most basic sort 'decide' to kill any object that bumps them too hard. The innovation is making them discriminate between enemy ships and everybody else.


The point is not that autonomous weapons systems are possible, but that their existence does not presage SkyNet, and the whole discussion distracts from the real societal impact, like some people being denied shelter/finance/employment/social connection/etc. based on opaque ML boxes that intrinsically discriminate, but in a way that can't easily be interrogated or controlled with legislation or court action.


Already done.

https://www.defensenews.com/video/2019/05/31/watch-this-isra... :

Video: Watch this Israeli robot fire a Glock 9mm weapon

Israel’s General Robotics has demonstrated what it says is the world’s first operational armed robot, the DOGO. (Seth J. Frantzman/Staff)


Reserve your judgment until you can read the actual paper.


More so than the headlines we're getting right now?

(Also, what were they thinking would happen when they originally hired AI ethics researchers, then? Nobody made them do that.)


> Most of the internet is from Western sources and some fraction of that is racist/prejudiced. I think this is obvious to anyone who has ever used the internet.

The impact of that on AI, and the difficulty of counteracting it, is not obvious to everyone who has ever used the internet, nor even (as shown by, among other bits of evidence, public clashes Gebru has had with people who work in AI outside of ethics) to everyone building and training AI models on the public data that is impacted.


While I will not dive into the extremes here, this is yet another battle in the fight over language, right? The most important point of criticism seems to be that the language models are "unethical", i.e., not subject to control by a particular political standard.

Am I the only one that finds this line of argumentation highly troubling? The idea that once you do something with language there should be someone proactively controlling you? Should it not always be the output that will be judged by the public?


I don't agree with this characterization at all. This seems to be a much deeper battle about much more serious potential issues with AI. It's not just about language choice. One of the examples cited in the linked article is how facial recognition wasn't trained on as many examples of lower-albedo people and thus gave more false matches for them, thus potentially leading to false arrests for crimes they didn't commit. This is about a lot more than just language.


You're not the only one.

What we're seeing here is a simple problem spilling out into the public: a lot of AI research is being done at a small number of companies that are in turn dominated by a tiny minority of people with politically extreme views. Those people consider control of language and expression very important to achieve their goals of a racialist and gender-biased society. But they're building AIs which require training on huge language corpuses, and thus inherently learn the way people writing language actually think.

Meanwhile the techniques to do "mind control" of AI aren't that good, at least not yet, so this leads to a lot of tensions when AI learns something true but unacceptable to the woke mindset.


Funny to equate ethics and politics...


One way to define politics is "difference of opinion over ethics".


Is this paper on arxiv? This overview doesn't answer any critical questions. For example, it's easy to fill up 128 references and a reader shouldn't blindly trust a claim that, "The version of the paper we saw does also nod to several research efforts on reducing the size and computational costs of large language models, and on measuring the embedded bias of models."

If a key part of Google's claim is that the paper omits relevant research, an author should have simply posted their 128 references and openly asked what work was missing. This whole saga could be easily solved instead of being dragged out for clicks.


> If a key part of Google's claim is that the paper omits relevant research, an author should have simply posted their 128 references and openly asked what work was missing. This whole saga could be easily solved instead of being dragged out for clicks.

So, we have on one hand, a researcher who got perilously close to litigation with her employer in the past (IMO because of missteps on both sides).

On the other hand, we have an employer that then was skittish about telling her that they didn't want the paper published (to protect the employer's business interests, mostly, it seems, while maintaining a veneer of open research organizations). And resorted to small statements through HR and intermediaries demanding retraction.

This relationship has broken down; there's no ready process to tidy up the misunderstandings.

I will say that Google's claims to be fostering an open discussion of AI ethics and confronting potentially uncomfortable truths on this path are looking a bit more dubious, though Ms. Gebru doesn't look particularly easy to work with, either.


No, the paper is not on arxiv

> [...] Though Bender asked us not to publish the paper itself because the authors didn’t want such an early draft circulating online, it gives some insight into the questions Gebru and her colleagues were raising about AI that might be causing Google concern.


See the other thread where someone mentions that with reCaptcha, Google learns that all cabs are yellow (and that traffic lights hang over streets - they don't here). But I guess everyone only sees the blind spots of other people.


You're assuming that reCaptcha is training some Google algorithm. But in fact it has trained you -- to recognise US style dangly traffic lights, yellow school busses and cabs, pedestrian cross walks -- and this makes me wonder if reCaptcha does use the same image set for all different locations. Are people in Lagos, Pune and Tashkent routinely failing reCaptcha at higher rates because they don't watch US television? How fast do they learn what a school bus looks like?


I knew those already from being flooded for 40+ years with Hollywood films (all of them) and sitcoms. I know Cosby, Malcolm in the Middle, Beverly Hills 90210, Miami Vice, The Fall Guy, Seinfeld, Friends, Lou Grant, The Mary Tyler Moore Show, Diff'rent Strokes, Cheers, Golden Girls, M*A*S*H (the best), Married... with Children, Three’s Company, King of Queens, Family Matters, and on and on and on.

Basically I know the US better - or the image it projects - than my own country.

Funnily it was a huge letdown when I visited for the first time decades ago, as it felt just like a sitcom and there was nothing new really.


Even in Europe I sometimes find reCAPTCHA more difficult for being "too American". Maybe we're categorised as "close enough" and there are south American or African datasets but I suspect not.


> A version of Google’s language model, BERT, which underpins the company’s search engine, produced 1,438 pounds of CO2 equivalent in Strubell’s estimate

So... they're saying it used about $100 worth of electricity.

[ https://www.eia.gov/tools/faqs/faq.php?id=74&t=11 ]

[ https://www.statista.com/statistics/190680/us-industrial-con... ]
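
For what it's worth, a back-of-envelope version of that figure (the grid-intensity and price constants below are my own round-number assumptions, roughly in the ballpark of what the linked pages report, not values from the article):

    co2_lbs = 1438          # CO2-equivalent attributed to training, per the article
    lbs_co2_per_kwh = 0.92  # assumed average US grid emissions per kWh
    usd_per_kwh = 0.07      # assumed US industrial electricity price

    kwh = co2_lbs / lbs_co2_per_kwh   # roughly 1,500 kWh
    cost = kwh * usd_per_kwh          # roughly $110
    print(f"{kwh:.0f} kWh, about ${cost:.0f} of electricity")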


I find these CO2 arguments about model training extremely misleading. If this is what you care about, why don't you actually try to make electricity cleaner? Realistically, that's a much more effective way to address this tiny issue, together with the grand total of 50% of humanity's CO2 emissions on top. To me it always seems like virtue signalling, showing that you "care" while not having to actually make any realistic progress. Go build nuclear power plants, or at least prevent them from shutting down, if you want your electricity to be at 12 g CO2/kWh rather than 500 from natural gas or 800 from oil (you can also move your servers to France or Ontario). Do not focus on tiny, 10% efficiency gains in NLP. That's a marginal gain within a miniature cause to start with.


Of course.

Wasn't Google prioritizing green energy for their datacenters?


yes, https://cloud.google.com/sustainability

Google is the largest corporate purchaser of renewable energy in the world


1438 pounds of CO2 is 250 liters of gasoline, or thereabouts, that's pumping gas five times. How long does a tank of gas last the average Google employee? Less than a week, considering what East Bay traffic looks like. People get paid to publish that stuff?


The issue isn't how much BERT uses, the issue is the trend over time in how much models use, with BERT being a recent data point.

The whole point of having AI ethicists is to identify current indicators of potential future ethical problems so that they can be considered in guiding the direction of development, so that you minimize acute ethical crisis.


You do realize just how small this co2 output is compared to the human co2 output it replaces.

You realize the sheer scale of the CO2 output produced by the office, by the engineers who wrote the model driving to work, by their education, etc.

I’m actually surprised just how small of a CO2 output it was.


> You do realize just how small this co2 output is compared to the human co2 output it replaces.

Sure, the issue is that the scale of models is increasing by orders of magnitude in a fairly short span of years, and the range of applications is rapidly expanding as well. It doesn't take long for that kind of growth to go from a trivial issue to a catastrophic one, and it's the exact kind of risk you have ethicists in a field to call out while it is still trivial, so that some of the energy of people doing technical development gets directed to mitigating the risk of it ever reaching the catastrophic stage.


Analogously, I used to pooh-pooh the concerns about the total electrical usage of Bitcoin years ago, but it's since risen to an actually meaningful level. There's easily more money in AI in total than in crypto, and thus, you could see the total energy usage ending up a lot higher than crypto. That's not an insignificant amount of energy we're talking about here.


Energy has a cost. Using $100 of energy is much different from using $1 million.

Capitalism actually mitigates the risk. Companies won’t pay 1 million in R&D to replace 100k of salary.


The number is probably quite a bit lower. The Strubell paper uses a PUE coefficient of 1.58, while Google datacenters are at 1.1. The figures are for GPUs, but Google uses TPUs, whose power characteristics were not public. Price is a proxy for power usage, though. TPUs might have been half as expensive in that experiment. Let's say that a further half of those gains are money that goes to Nvidia and not really related to power. So, making numbers up, training with TPUs might be another 25% more efficient. That's probably conservative, as Google claims that the TPUs are tens of times less power-hungry, due to their simplicity, but on the other hand, you also have other fixed costs like racks, fabric, and the CPUs to feed the chips.
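
To make the compounding explicit, a rough sketch using the factors above (all of them assumptions, as noted, not measured figures):

    strubell_lbs_co2 = 1438    # figure quoted in the article
    pue_factor = 1.1 / 1.58    # Google's reported PUE vs. the PUE Strubell assumed
    tpu_factor = 0.75          # the made-up "another 25% more efficient" from TPUs

    adjusted = strubell_lbs_co2 * pue_factor * tpu_factor
    print(f"~{adjusted:.0f} lbs CO2e")   # around 750 lbs, roughly half the quoted figure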

In an ideal world, nobody would get hung up on details and everybody would understand that there is a lot of nuance when comparing things. In practice, if Google published a paper which quoted the Strubell paper without the caveats, I can see headlines about how inefficient and bad for the environment Google Translate is. PR would get busy and obtain corrections or follow-up articles to clarify things, but those rarely get the same attention. And it's still extra work that could have been avoided, which in itself is bad optics ("do you folks even review the stuff you send out for publication?").

I'm all for reducing emissions and improving efficiency, but I find the premise a bit of a stretch.

Yes, large and wealthy organizations have a big advantage, but that applies to pretty much anything they do, not just language models. Inefficiencies are bad, but if you ask a number of people why fix them, I think they'd mention financial cost and wasted time well before looking at it as an issue of ethics and fairness.

Reducing costs for language models is already a great idea across all fronts. So is reducing inequalities. It's linking the two that sounds like a strained argument to me. Suppose someone makes models ten times smaller next week. Will marginalized communities' lives improve soon? There must be more. I haven't seen the paper, so I'm curious how it is all framed.


> The whole point of having AI ethicists is to identify current indicators of potential future ethical problems so that they can be considered in guiding the direction of development, so that you minimize acute ethical crisis.

An alternative motivation could be to launder accountability so that when the acute ethical crisis /does/ come, you can throw your hands in the air and say "See? Look at how much resources we poured into this and it still happened! At least we tried!"


Could you share your calculation for that?


You have 1438 pounds of CO2, roughly 1/3 of that mass is carbon, so that's about 500 pounds or 250 kilograms of carbon.

Gasoline is a hydrocarbon, but the mass of hydrogen is neglegible compared to that of carbon, so it's not too wrong to say that this mass of carbon came from the same mass of gasoline, 250 kg. The density of gasoline is around 1 kg/l, so we are talking about 250 liters of fuel. The typical tank of a compact car holds about 50 l.


(Having not read any of this.) Isn't Google committed to being net carbon neutral/negative? If so, does the claimed extra electricity usage matter?


The paper should probably have focused on the topic in the title only, because the CO2 angle looks like a publicity stunt. She also didn't subtract the amount of CO2 saved by the improved productivity of getting better search results.


Yep, it feels very disingenuous....

It is like someone complaining that Google Maps uses X amount of energy, yet the old alternative (printing maps and getting lost) was much worse CO2-wise.


Does OpenAI? Does the rest of the industry? Everybody is training larger and larger models, with research groups competing on who will be the first to scale up to some number of parameters.

It seems obvious that this would be something an ethicist would be interested in researching to figure out what the impact will be and start discussions about how to offset it.



Google isn't the only player in the AI field. The paper is about the entire field in general, not just what Google is doing. And plenty of the other companies in the field have not made similar pledges.

The total amount of electricity being used so far for training models does not yet move the needle in a significant way, but the point of the paper is that it's currently growing in an unbounded exponential fashion. And you know how exponential functions work.


There are models much larger than BERT with a much larger footprint, GPT-3 being the most well-known example.

Models like BERT aren't just trained once when they are developed, but trained again with different domains, different parameters, and in some cases different tasks. There is also fine-tuning (more frequent, less carbon intensive), so these are real environmental problems, and others have pointed them out.


Google claims they neutralized their legacy emissions over the entire history of the company.

Now about the billion cars on the road and the 40% of the world’s electricity being generated from coal.


The existence of other problems does not make a particular problem go away and humanity can in fact focus on more than one thing at a time.


Google is carbon neutral.

How much more of a problem are a billion cars and 40% of our electricity being generated from coal?

We’ve squandered decades ignoring the big problems, and now people want to run into the weeds with a thousand little problems that individually don’t amount to much.


This is research in the form of bike shedding. Let’s talk about how the bike shed is going to look for a nuclear power plant.

CO2 impact for stuff like this - we probably have bigger fish to fry.


It's nice to finally get to see some of the content of the paper, and it's awesome that Bender was willing to step up and give some context to the world about what the hell it was about.


> Though Bender asked us not to publish the paper itself because the authors didn’t want such an early draft circulating online

Shouldn't you avoid submitting early drafts that are not good enough to be published yet?


No, it was submitted to a conference, that’s what they’re for. Not the same as submitting for publication.


In AI research, conferences are the main way to publish, if I understand you correctly. It is generally considered bad practice to submit "such an early draft" for peer review. Moreover, if it were accepted, it would be "circulating online" in the conference proceedings. Why is anyone surprised that a draft the authors themselves consider not good enough to share with the public is barred from sharing (in the form of a conference publication) by internal review?


There's a lot of speculation going on here. The version of the paper I read did not feel like an early draft at all, but a final one. This is just the excuse Emily Bender is using to avoid providing Technology Review with a copy of the paper. You're right; they wouldn't have submitted an early rough draft for peer review at a conference (these aren't amateurs we're talking about here!).

The real reason I suspect she didn't want the paper published is that the other 4 colleagues are still employees at Google and could get in trouble, seeing as how Google doesn't want it published and all.


"...is known for coauthoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color, which means its use can end up discriminating against them"

Wait, isn't that the other way around? If it can't recognize people of some category, then it can't be used to discriminate against them; e.g., the police can't use it to identify peaceful protesters with those characteristics. I wonder what would have happened if these networks were much better at recognizing women and people of color - would the paper then be about Google designing technology to detect minorities?


If face detection defines them as not a person, then things that rely on there being a person in the field of vision will not work for them. (Like the racist soap dispenser that went viral a few years ago)

If face recognition makes the old racist "they all look the same to me" declaration, then peaceful protestors get arrested for looking like criminals.


"Peaceful protestors" became a bit of a joke this year, as newsreaders stood in front of burning cars & buildings telling us a peaceful protest was going on.


That's a straw man though; no classification system assigns a "criminal" score. Here's another quote that points out exactly what I was talking about:

“There are two ways that this technology can hurt people,” says Raji who worked with Buolamwini and Gebru on Gender Shades. “One way is by not working: by virtue of having higher error rates for people of color, it puts them at greater risk. The second situation is when it does work—where you have the perfect facial recognition system, but it’s easily weaponized against communities to harass them. It’s a separate and connected conversation.” [0]

[0] https://www.technologyreview.com/2020/06/12/1003482/amazon-s...


I don't mean physiognomy, though such things periodically arise and are roundly and rightly ridiculed.

I mean "Our computer says we have footage of you robbing a supermarket" (higher error rates)

Another way for issues to arise (your easily weaponised point, above) is if you need a separate system for recognising people from certain groups. If it's all the same system, it's harder to argue that you are acting in good faith when you only follow up on matches on marginalised groups.


> e.g. the police can't use it to identify peaceful protesters with those characteristics

It can be used to (mis) identify them if the higher error rates are accepted. The problem isn't really the tech, but its application.


When police error in identification can mean getting shot, black people deserve to be identified as accurately as individuals of any other race.


That depends enormously on the type of error, so you can't say that as a blanket statement at all.


> An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms.

As those become pervasive cultural norms, why would the model not adapt to include them?


They want to be able to control the model before the norms become pervasive.

Like how, with activism, you can pressure the NYT into using jargon like “Latinx” when no one outside small progressive circles uses them.


> An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms.

This is a pretty superficial take on what is an extremely interesting sociological topic. (To be clear, I’m referring to the article, not the underlying paper which we don’t have.) Obviously just because social movements “have tried to establish ... vocabulary” doesn’t mean that vocabulary has become a “new cultural norm.” Plenty of such efforts end up being cultural dead-ends.

Take for example a term like “LatinX.” This term has been proposed and is used by certain people, but is extremely unfamiliar and often alienating to Latinos themselves: https://www.vox.com/2020/11/5/21548677/trump-hispanic-vote-l... (“[O]nly 3 percent of US Hispanics actually use it themselves.... The message of the term, however, is that the entire grammatical system of the Spanish language is problematic, which in any other context progressives would recognize as an alienating and insensitive message.”).

The article hand-waves away a deeply interesting question: What should an AI do here? Should AI reflect society, or be a vehicle for accelerating change? It seems at least reasonable to say that the AI should reflect what people actually say, in which case a big training dataset is appropriate, instead of what some experts decide that people should say. In some contexts, for example with “LatinX,” researchers seeking to enhance inclusivity could instead end up imposing a kind of racist elitism. (People without college educations—which disproportionately comprises immigrants and people of color—tend to be less knowledgeable about and slower to adopt these changes in vocabulary.)

The paper seems to imply that AIs should not reflect “social norms” but that training data should be selected to accentuate “attempt[ed]” shifts in such norms. Maybe that’s true, but it doesn’t seem obviously true. To return to the example above, is some Google AI generating the phrase “LatinX” (which 3/4 of Latinos have never even heard of: https://www.pewresearch.org/hispanic/2020/08/11/about-one-in...) in preference to “Latino” or “Hispanic” actually the desired result?


> Take for example a term like “LatinX.” This term has been proposed and is used by certain people, but is extremely unfamiliar and often alienating to Latinos themselves: https://www.vox.com/2020/11/5/21548677/trump-hispanic-vote-l... (“[O]nly 3 percent of US Hispanics actually use it themselves.... The message of the term, however, is that the entire grammatical system of the Spanish language is problematic, which in any other context progressives would recognize as an alienating and insensitive message.”).

For example, here is Hispanic Congressman (D) Ruben Gallego on the subject: https://twitter.com/rubengallego/status/1324071039085670401?...


My native language has "gendered" nouns, verbs, and adjectives. I think it's culturally insensitive to try to eliminate these.


"What people say" is itself quite nuanced. Words that two black might say to one another may be casual but appropriate and yet extremely offensive if said by a white person, or, presumably, an AI, in a different context.

What the AI should say is hard given that there is no one right answer to the question of what any individual should say. Different contexts change the equation completely. Seems like a nightmare to define the behavior or test it.

The right solution may involve training a model on everything we have access to and finetuning it based on the context you want to use the model in and the historical examples we will build up of mistakes previous models have made.
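To make that concrete, here is a minimal sketch of the pretrain-then-finetune idea using the Hugging Face transformers library; the base model name and the corpus path are placeholders I've made up for illustration, not anything from the paper or from Google:

    # Minimal sketch: broad pretraining + context-specific fine-tuning.
    # "gpt2" and the corpus file are placeholders, not a recommendation.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, TextDataset,
                              Trainer, TrainingArguments)

    base = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # Curated, reviewed text for the deployment context (support, therapy, etc.)
    dataset = TextDataset(tokenizer=tokenizer,
                          file_path="curated_context_corpus.txt",  # placeholder
                          block_size=128)
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="context-finetuned",
                               num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=dataset,
        data_collator=collator,
    )
    trainer.train()  # nudges the broadly trained model toward the target context

The broad model captures general usage; the fine-tuning pass pulls it toward whatever curated, context-appropriate data you can actually assemble and review.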


I agree. It’s a very hard problem that we need to think about in context. Gebru’s research is important for raising the issue.

But it is a hard problem, and highly context dependent. It doesn’t seem to me like a proper subject for the sort of ultimatum that Gebru gave Google.


Sorry to hijack this comment but are you aware stumblingon.com is down? Error message: Not found - Request ID: 78591950-5c3e-4c76-9499-03931fab131e-53382846


Hey, thanks for telling me. Should be up now.

I was actually going to turn the website off (I believe it hasn't been working for a couple weeks now) as I didn't think anybody was using it. Glad to see someone is though - it's back on!


Really? That's a shame. I use it whenever I want to experience the indie web. I shared it with a friend and he's also a fan. I have even submitted a couple of websites.

If you do turn it off, would you mind sharing your list of websites?


Glad to hear that.

I'm learning django right now and for my first project (after the tutorial project) I will recreate stumblingon. When I do I'll add a logging/metrics system of some kind. That way, I won't make premature decisions about turning it off in future.

I'll also commit to replacing the website with a list of the index for a month or so before (if) I turn it off for good.

Now that I'm thinking about remaking it, is there anything you think is missing from the website?


I asked my friend what features he wants and he said seeing your history and favorites. Funny you added those today.


It’s pretty much perfect as is. Maybe add some contact info? Not everyone will be able to track you down on HN. :p


It's not the paper that forced her out: she forced herself out.


Good grief - this terrible clickbaity headline writing: "We read the paper that forced Timnit Gebru out of Google. Here’s what it says"

The paper DID NOT force her out of Google. Her subsequent behaviour - submitting without approval, rant, ultimatum, and resignation - did. And she wasn't "forced out": she resigned of her own volition. She could have chosen to make improvements to the paper based on the feedback she was given, resubmit it for approval, and then get on with her life, but she went the other way.

The headline from the last discussion on Timnit's exit[0] was awful as well: "The withering email that got an ethical AI researcher fired at Google". So bad in fact that it was changed on HN to more accurately reflect what actually happened: "AI researcher Timnit Gebru resigns from Google" (much more neutral and factual in tone).

Seriously, what happened to journalistic standards and integrity? Why are the actual events being so forcefully twisted to fit a particular narrative? No wonder the general population struggle to figure out what's true and what's not, and fall victim to all kinds of nonsense, BS theories, and conspiracies.

I wish I had a good idea on how to change this behaviour by journalists and publications.

(Clearly this is a problem that goes far beyond Timnit's story.)

[0] https://news.ycombinator.com/item?id=25292386


Your feelings are justified. I felt the same way as you two days ago.

You might be interested to know that I’ve changed my mind. The reasons are documented here: https://news.ycombinator.com/item?id=25308233

Basically, every one of those arguments evaporates when you dig into the details. She was doing her job. The paper was anodyne and perhaps even boring. She was rightfully pissed off that some middle manager was telling her and her four coauthors that they must retract their paper (the reviewers later posted on Reddit that they would have been happy to accept edits). Jeff was micromanaging her citation list, which is almost unheard of, apparently. Multiple google employees came out of the woodwork to say “there is no such review process, other than a quick scan to see if you’re leaking company secrets by accident.”

All of this is fascinating. Because, as far as I can tell, the entire AI community is now on her side. The only people who keep bringing up her character flaws are people who don’t do AI / ML. Or at least, they seem to be pretty quiet now that Jeff revealed Google supposedly has some weird academic review process no one’s heard of till now.

Your feelings about the journalists are completely warranted, to underscore that point. But it’s odd how much momentum is building. Try to ignore the circus; you’ll be surprised to find there is substance behind the claims. I was as surprised as anyone.

And to reiterate, pick any one of the arguments you mention, and really dig into it deeply for details. Try to verify the claims. The most you’ll get is that she was curt. And I remind you that it’s usually known who the reviewers are at any given venue. Even if it’s blinded, you at least know the general group of people involved. In this case it was highly unusual to do some kind of anonymous, selective, ad-hoc enforcement of rules that really don’t seem to benefit the scientific process in any way.


> Jeff was micromanaging her citation list, which is almost unheard of

This is quite normal, even for purely technical papers [^1], and absolutely important for an expository paper like this one. Narratives are always formed by selectively including references.

> the entire AI community is now on her side

On the MachineLearning reddit, which is popular among a proportion of ML grad students at least, it's completely different. Indeed the top comment in the discussion thread there [^2] discusses this discrepancy. And the fact that I'm creating a throwaway account for this reply is telling.

[1]: The review process of ICLR 2021, a premier ML conference, is taking place in public at https://openreview.net/group?id=ICLR.cc/2021/Conference. You can navigate through the reviews and see how many requests are wrt citations.

[2]: https://www.reddit.com/r/MachineLearning/comments/k6467v/n_t...


>Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.

https://twitter.com/JeffDean/status/1334953632719011840

With regard to the internal review thing, I thought this comment made a good point: https://www.reddit.com/r/MachineLearning/comments/k6467v/n_t... Sure, maybe Google is usually lax with internal reviews. But if you're going to write a paper which says "this area of research Google is engaging in is harmful", and neglect to mention the fact that Google is also doing research trying to address the harms, then from Google's perspective, you are just smearing its corporate brand without making any real progress.


The weird part is, the paper didn't seem to be saying that. It was mostly about energy usage, and bias in language models. The reviewer described it as "anodyne," which seems apt: https://old.reddit.com/r/MachineLearning/comments/k69eq0/n_t...

I'd totally agree with you. That would indeed be ridiculous. But... It's strange... each time a new argument pops up, I dig into the new detail, and surprise: it seems to have a straightforward, boring answer. "This is a pretty standard paper. Google wouldn't have been hurt reputation-wise by letting it through. And we should probably be thinking more about energy usage and bias. She didn't namedrop all relevant research, but there doesn't seem to be anything here to demand a retraction over."

It only gets stranger when you take this into account, too. From the journal reviewer:

However, the authors had (and still have) many weeks to update the paper before publication. The email from Google implies (but carefully does not state) that the only solution was the paper's retraction. That was not the case.

In some parallel universe, Google could re-hire her, she and Jeff could sit down and hammer out the paper, send the updated version, and there would still be two weeks to make even more edits. Isn't the point of the edit window to address these problems?

What really got my attention, though, was that she informed everyone months ago that she and her coauthors were writing this paper. She wasn't working on some hit piece of a research paper. It's just ... a standard survey of the current ML scene circa 2021. I read the abstract and go "Yup, we use a shitload of energy. Yup, we should have better tools for filtering training data -- I've wanted this for myself. Where's the bombshell?"

For all the fuss people are making, you'd expect the paper to be arguing that we should stop doing AI for the betterment of humanity, or something weird. But it's nothing like that.


The entire course of action you suggest could have happened were it not for Timnit going public of her own volition, accusing her employer and coworkers of unethical behavior in the process, and encouraging sympathetic colleagues to apply external political and judicial pressure on Google. What for, so she can publish a review paper disregarding relevant internal feedback?

She made a lot of fuss and burnt a lot of bridges. Nobody forced her to do so.


> Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.

This means the date of her leaving was not set in stone, i.e. both parties would agree on a date when she would leave. She had a vacation coming up, and they would discuss it when she returned.

>According to Ms Gebru, Google replied: "We respect your decision to leave Google... and we are accepting your resignation.

>"However, we believe the end of your employment should happen faster than your email reflects because certain aspects of the email you sent last night to non-management employees in the brain group reflect behaviour that is inconsistent with the expectations of a Google manager."

She was fired, not on account of the content of her paper, but because of the way she expressed her feelings to her colleagues. That message wasn't so much an exhortation for them to lay down their tools as a complaint that their work made no difference because it was being ignored.


She was not fired. She offered to resign if her demands were not met.

The only difference is that the timeline was not of her choosing. She knew exactly what she was gambling for when she wrote those words. Why would Google want someone around who doesn’t want to work at Google? Better to accept their resignation and let them go immediately. It happens all the time. I once gave 2 weeks notice and my boss said I might as well leave immediately since there was no point in hanging around (I was consulting and on the bench). It happens.


If they decided the date of the "resignation" without informing her then they fired her. That is pretty obvious.

And it wasn't because of her demands concerning the paper, it was on account of the way they interpreted of an email she sent to her colleagues.


> Because, as far as I can tell, the entire AI community is now on her side

Oh, because how dare people judge. You would be instantly taken to the court of public opinion for talking down a black female AI ethics researcher.

If people have opposite opinion, they would keep it to themselves. But saying the entire AI community had rallied behind her is wrong.


I've been hoping to address that point, actually. Because, I get where you're coming from, and I've felt those frustrations myself. Let's just say I'm probably the last person to hop on the "do what I say or else" bandwagon.

... but like everything else, that turns out not to be an issue here. I would feel fine saying what you said. It's nothing to do with the color of her skin, or the fact that she was working on ethics rather than optimizers. Horrible employees who cause drama wherever they go are a liability and a downer, and I'll happily say that to whoever's listening. Black or white or purple, the goal is to serve the company's business interests.

But, much to my own surprise, as someone who grew up memeing on 4chan and poking fun at leftist dogma, here I am after about a year and a half of ML, wondering "Where's my harassment? I was promised harassment."

Because I feel exactly the opposite. Not only is everyone in the ML twitter scene cool, but they're some of the most open minded people I've ever met. Sure, you get some people showing up sometimes to give you a hard time, e.g. when we train danbooru: https://i.imgur.com/RMZd6mu.png and then say that the dataset is very objectifying, and so on.

But the antidote is to simply be straightforward. Make it clear you actually hear them. "Yes, I am concerned about that, but it's simply the problem domain; people use danbooru as a repository of this content. You're right that we could e.g. make a classifier to pick out the cooler looking gals and focus on those, but the challenge is simply to solve the problem at all. Once we have lovely auto-generated anime, it'll be straightforward to filter it. Come help us get there! It's fun!"

With all the horror stories I've heard, and how afraid everyone is of "the mob," I went into it expecting to "smile, and don't tell them what you're thinking." And I was really surprised to find exactly the opposite atmosphere. Everyone pretty much agrees that yeah, we have a bias problem, and that it's probably good to address that. People also seem to agree that it's ridiculous to take it too far, e.g. when OpenAI enforces a mandatory content filter that you can't turn off and isn't too choosy about what it deems harmful, and prevents you from shipping to production unless you enable it.

So I'm just scratching my head like ... "am I the baddie?" https://www.youtube.com/watch?v=hn1VxaMEjRU&ab_channel=roots...

Because I'd be a part of the problem if you felt like you can't talk to me freely (at least in private). I don't want to foster that kind of environment. So my reaction to the idea of you "talking down a black female, AI, ethics researcher" is "Y'know, I see where you're coming from, and I was worried about the same thing, but I'm just not getting that sort of vibe at all. I was surprised too; I expected the opposite."

And once you relax about it and look around, the most interesting part to me is that you start to feel like "well... why look at whether she's a black, female, AI, ethics researcher? I should probably read her work and listen to what she's saying, and judge for myself whether it seems crazy." And when you sit down and really listen to people, and put your full mental focus on what they are saying, I find it hard to disagree with a lot of their points, simply from a logical perspective.

So you might argue "Well, you're just part of that culture then. You'd be appalled how I feel, but I'll keep that to myself." Fair. But it's so weird being in a situation where everyone is like "watch out for that mob!" and meanwhile all of the people who actually work in the field seem pretty cool to me.

Everyone feels equally lost, i.e. that we have this magical new power (ML) that we don't really know what to do with. We know it will affect society, but we don't know how it will affect society. We also don't know the best ways to guard against some of the obvious problems on the horizon.

I ran into that problem myself. Since I'm already on a ramble, I may as well lay it all out, because it really is interesting: I was training some FFHQ latent directions, trying to get a skin color working. And I ended up discovering, quite by accident, that my latent vectors were "racist." My model was generating caricatures of black people that I would not be comfortable showing. And I couldn't figure out why. Why black people? I flip the skin color to white, no problem. Flip it to asian, no problem: https://twitter.com/theshawwn/status/1184074334186414080

Flip it to black, and horrible results. (I mean "horrible" as in "this would be an especially bad idea to show anyone," rather than merely "it has some visual defects.")

The answer to this mystery was obvious, but only in hindsight. There are far fewer black people in FFHQ than whites or asians. I found a classifier, got it to identify an order of magnitude more black faces than I had before, retrained my model, and the result was instantly so much better: https://twitter.com/theshawwn/status/1209749009092493312

That experience stayed with me to this day. I think about it a lot, because it would be so easy to overlook that kind of bias when it's numerical data rather than facial data. And as far as I can tell, that's exactly the sort of ethics that Timnit has been arguing for: we need to pay more attention to bias, and unexpected ways that bias can creep in. Which seems reasonable to me.
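If it's useful, here is a rough sketch of what that rebalancing step looks like in practice; the classifier, label names, thresholds, and directory layout are hypothetical stand-ins rather than what was actually used:

    # Hypothetical sketch: use a classifier to pull underrepresented faces
    # from an unlabeled pool into the training set, then retrain the model.
    import os
    import shutil

    def rebalance(pool_dir, train_dir, classify, target_label, threshold=0.9):
        """Copy pool images the classifier confidently assigns to the
        underrepresented group into the training directory."""
        added = 0
        for name in os.listdir(pool_dir):
            src = os.path.join(pool_dir, name)
            label, confidence = classify(src)  # hypothetical: (label, probability)
            if label == target_label and confidence >= threshold:
                shutil.copy(src, os.path.join(train_dir, name))
                added += 1
        return added

    # added = rebalance("unlabeled_faces/", "ffhq_plus/", skin_tone_classifier,
    #                   target_label="dark_skin")
    # ...then retrain the generator on the augmented set and re-check the latents.

The code itself is trivial; the hard part is noticing the imbalance in the first place and deciding to audit for it.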

I don't really know why I'm posting this to you, but, just in case it changes your mind, I leave it to you. I really thought I'd end up feeling every bit as pinched as you expressed, yet it's nothing like that. Felt quite the opposite. I keep re-reading http://paulgraham.com/orth.html wondering if I have orthodox privilege, or if my social group simply doesn't include people who are comfortable enough to express themselves around me, or what.

Because you say that there is a massive number of people who feel exactly the opposite, and I can't help feeling curious where they are. Our discord's up to 1200 people, and they don't seem to be there. I talk to dozens of researchers, sometimes on a weekly basis, just to poke my head in and see what they're up to. They don't seem to be there either, even in private. And, picture someone who is the opposite of "leftist" in basically every way. I know a few people like that in the ML scene, and even they don't seem to be saying "whoa, this bias stuff has gone too far, and these ethics concerns are nonsense." It's the opposite.

Eleuther has a dedicated ethics channel, people spend a lot of time freely debating about what the "right" ideas might look like, and so on. We also maintain internal research channels with the idea that, if people have concerns of the type you mention, they can freely express themselves there with no fear of any kind of retribution -- that's the whole point of having secret research channels. It's up to ~30 or so active researchers, and no one has brought up concerns like that.

So you see, I end up being dragged to the conclusion that, yes, the politics / activism stuff is a concern, but no, it doesn't seem to be affecting anything. We're not hearing "do this or I'll bite your face off," or something nuts like that. It's more like "Could you please listen to me for a bit? I have this experience I'd like to share." And the experiences tend to be interesting, at least to me.

So that's why I urge you to be super skeptical about the angle you mention. ("You'd be instantly taken to court of public opinion...") Dig into the situation and look for evidence of that yourself. Don't pay attention to newspapers; talk to researchers, and ask them how they feel about it. I just can't find any trace, no matter how hard I look. I feel if you also look, in a scientific way, for evidence to support your concerns, that you might not find it either.

Anyway. I related to what you were saying and just wanted to give perhaps a new way of looking at it, since it's what changed my mind. Feel free to hit me up in twitter DMs if you're looking for someone to chat with about some hard topics, since those are often the interesting ones.


> The answer to this mystery was obvious, but only in hindsight. There are far fewer black people in FFHQ than whites or asians. I found a classifier, got it to identify an order of magnitude more black faces than I had before, retrained my model, and the result was instantly so much better: https://twitter.com/theshawwn/status/1209749009092493312

Ironically, that exact observation about that exact dataset was what got Dr. Gebru so mad at Yann Le Cun in the twitter thread that people brought up. So maybe you're just lucky with who saw your tweets.


You've worked with a visual appearances dataset, lacking sufficient examples from one class of entities, and it failed to perform well for that class. You solved the problem by adding more examples of that class. While the malfunction might have had some, as yet unquantified, real-world impact in some hypothetical police face recognition system, it doesn't follow that:

a. Datasets that are not about visual appearances are prone to the same problem, and to the same degree. Perhaps the house lending datasets / systems have small race (visual appearances) issues, but large class issues. The political debate over how to handle class issues is as old as politics itself.

b. The extent of real-world impact, which depends on the actual system deployed. Perhaps a hypothetical real-world system has a 1% failure rate vs a .1% failure rate. Should we stop developing useful systems just because they do not produce exactly the same results across all visible demographics we can carve ourselves into?

c. Whether the impact can be mitigated by human post-processing. The hypothetical face recognition system is part of the judicial process; there are many checks and balances before one gets to suffer drastic consequences. For example, a human actually looking at the picture, or a solid alibi: "Your honor, I was skiing in Canada at the time of the alleged Florida murder".

As others have expressed in this thread, dealing with first-order visual issues is easy: everyone can agree at a glance what a correct solution to a visual question is, and bugs are usually straightforward to fix. Language issues, on the other hand, are second order; everything is subject to interpretation. Once we open the can of worms of talking 'critically' about language and AI, we are getting uncomfortably close to language police, and via Sapir-Whorf, to thought police. The BIG underlying stake of 'AI Ethics', one that possibly neither side has completely articulated just yet:

Should a small group (in the thousands) of hyperachieving, hyperprivileged individuals working in the AI labs at the handful of megacorporations controlling the online flow of human language get to decide what we can say, and by extension what we can think?


Re the existence of an internal review (tl;dr: it definitely does exist!): A Google employee here who's actually published research there. There definitely IS an internal review and an approval process and every single paper I have published while there had to go through it. Everyone I knew had to submit for an internal review well in advance and I personally couldn't even submit to a conference review once because we only asked for the internal review 3 days in advance, which wasn't enough time. My understanding was that everyone around me was adhering to this.


Thank you for speaking up! To be clear, there’s no question it exists. But multiple (admittedly former) google researchers have gone on record saying that this is meant to be a quick scan for trade secrets leakage, and nothing more than that. Certainly not some kind of academic integrity review. That’s what the journal reviewers are for.

The point of the scientific process is to be allowed to fail and to be mistaken. And the paper was pretty bland. Sure, she didn’t namedrop all relevant research over the last decade, but why demand a retraction? Especially when the reviewers posted on Reddit that there was a big edit window to make revisions. Adding a few more cites seems like “oh, why not throw in X?” then you both go out to lunch / email an updated version later that day. A retraction is basically “your last few months have been wasted,” right? I’d probably be upset too.

Thanks again for the datapoint about the 3 day window. It seems rather selective, but that’s standard for any bigco. I’m just having a hard time stomaching the idea that your ideas need to pass through some anonymous internal review panel where you don’t even know who’s judging you. Is that really what it’s like to be there? Seems strange.


Typically this is not something you worry about too much. You work on a research project, you talk pretty openly with your research manager and everyone around you, and if something was going very wrong, you'd likely know because someone would have told you along the way. So by the time you are submitting for internal review, the expectation is that you will almost certainly pass, but typically the reviewers take their job pretty seriously (and so did I when I served as one) and spend a week or two reading through your paper in their "spare" time and writing a comment on the science, the method, the conclusions etc. The not-leaking-secrets part is certainly there, but that's the lowest bar to pass, so one rarely worries about it in practice if one works on public datasets and does fundamental science.

My experience might be different from what Dr Gebru was going through since I never rubbed against anything that could have been considered company secret. My work was entirely academic and I never felt that I was restricted in any way in the questions that I could ask or the papers I could write. That is likely very different when you are criticizing a product, using internal data etc, which might have been the case with her. It also seems that she was in no way diplomatic about her actions.

When you do fundamental research there, it's as free or possibly even freer than standard academic institutions. As I said, I personally never felt any implicit let alone explicit forces telling what to work on / what to avoid.


The fact is that you’re wrong. There is a thorough internal review process. It’s not just a rubber stamp. So please stop spreading misinformation. What Dr. Gebru went through was standard.


What organization are you in? Several people have said AI and Brain don't work that way. Someone from Tech Infra said their papers could get spiked just for not being interesting enough.


Brain


The question is whether the review process vets the topic of the paper, the writing, the citations etc., or it's only a screening that avoids revealing company secrets.


The paper sounds boring, though.

She hits some boring bureaucratic resistance, lost her shit, called everyone racist and gave ultimatums. She chose this hill to die on.

If she were uncovering some major ethical scandal, I'd be on her side, but this?


> The paper DID NOT force her out of Google. Her subsequent behaviour - submitting without approval, rant, ultimatum, and resignation - did. And she wasn't "forced out": she resigned of her own volition. She could have chosen to make improvements to the paper based on the feedback she was given, resubmit it for approval, and then get on with her life, but she went the other way.

This sounds a lot like taking what Google said without including Timnit’s point of view. Is it really fair to disparage an article for having a title like that when you’ve completely ignored such a large part of the issue?


> This sounds a lot like taking what Google said without including Timnit’s point of view.

Not at all. I've read everything she's written on the issue including the long message she wrote to the brain group, and her tweets where she shared what Google said when they accepted her resignation. These latter make specific reference to the ultimatum, which was the ultimate reason for her departure.

As I pointed out in the previous discussion she may have faced some sort of disciplinary action upon her return from vacation due to the content of her message to the brain group.

However, when she resigned what Google managers did (and this is no great leap of logic) is figure out that if they made her work her notice period she'd probably only cause more trouble in the meantime, so they brought it forward and made it effective immediately.

This might or might not be unusual behaviour for handling a resignation at Google, but it is fairly common practice amongst different organisations for a variety of different reasons that usually boil down to mitigating some kind of risk to the organisation.


I've only heard of unethical companies firing people immediately without honouring the notice period.

It's definitely not commonplace and should not be encouraged except under severe circumstances, e.g. violence, criminality, etc.


Are they not being paid for their notice period? Usually the way this works is "gardening leave" where you're paid and are not supposed to come into work.


Her manager said "effective today". That means no gardening leave.


If she was cut off without “gardening leave”, then she was fired. It can then be true that she both quit (two weeks notice) and she was fired (no two weeks for you). I’d be surprised if the latter was true as it would be petty on Google’s part. More likely would be gardening without access which would still be gardening.


"Effective today" has legal meaning. The NLRB just credibly accused Google of acting pettily in other cases too.[1]

[1] https://www.cnbc.com/2020/12/02/google-spied-on-employees-il...


It's really just academic and not germane to the larger overall discussion whether or not she got that one pay period's worth of money.


IMO, it’s germane. “I quit. My last day will be sometime in late Dec.” “We’ll pay you and recognize your employment through that date, but you are relieved of all duties effective immediately” is quite different from “Nope; your job ends today.”


How is that really a big difference? She's gone either way, because they wouldn't let her publish the paper. This way I guess she's more eligible for unemployment, but beyond that?


> How is that really a big difference?

Well, I mean, for one thing it's about a $20K difference in salary alone.

> She's gone either way

True.

> because they wouldn't let her publish the paper.

That's...less clearly true in any meaningful sense. The public statements from all the other Google AI people about how the official narrative is inconsistent with general practice on publication review suggest very strongly that the management actions related to the paper were a pretextual component of a constructive termination campaign, and that even when it succeeded in generating something management could at least seize on as a “resignation”, the result was insufficiently immediate, requiring another pretext for immediate termination.


Should she have been fired for her email to the group? It's hard to discuss if people don't agree she was fired.

Should we believe her or her managers? Her managers pretending they just accepted her resignation is dishonest.


What makes it dishonest? Do you not believe that she stipulated that she'd resign under certain conditions? Do you not believe those conditions then came to exist? Do you not believe that her managers accepted her resignation?

I don't have a strong opinion in the matter (and have no connection to Google), but if she in fact unambiguously offered her resignation conditioned on her paper not being approved to publish and Google accepted her offer, I don't see how she can turn around and claim she was fired.


She didn't offer to resign immediately. Her manager explicitly rejected her actual offer and imposed new terms as punishment for her email to the group.


If the text Jeff Dean wrote below is overwhelmingly true, I'd agree that she resigned rather than was fired. I suspect it is, as I'm fairly sure Google legal would have reviewed it and ensured they didn't say anything falsifiable, and it's likely this paragraph is entirely true.

> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.


Notice he doesn't say she resigned immediately.

Do you believe she fabricated Megan's email?[1]

[1] https://twitter.com/timnitGebru/status/1334364735446331392


To me (and I suspect the courts): In the second case, she was fired today prior to her offered last day. In the first case, Google accepted her resignation and just didn’t require her to work through her last day.


…and? Did you not consider it useful to mention at all? Do you think you’ve already included what she mentioned?


Do you really think it's valuable for me to regurgitate an entire discussion that I've provided a link to (and which anyone can easily read) in my comment when what I'm actually trying to do is make a wider, but relatively pithy, point about journalistic integrity and the impact that the lack of it is having on our societies more than comment on Timnit's specific case? I will say that your comments are an almost perfect illustration of that point though.


Timnit claims she was fired, you’ve completely erased that part from the discussion and used it to prove a point about journalistic integrity. You provided a link to a massive discussion that has clearly not yet been able to piece through the details. When I asked you why you were so confident in your position that Timnit was clearly in the wrong and rightfully terminated because she didn’t comply with what Google told her to do, you decided that I lack journalistic integrity.

Google, talking through Jeff Dean, claims that Timnit was unhappy with her situation and submitted a good-faith resignation which Google accepted due to her not following their policies. Timnit claims that she was forced into a position where she had to issue her ultimatum, forcing her into a resignation. And we have claims from Google employees saying that the process she went through was unusual and did not match a normal review. Isn't the true journalistic malpractice ignoring this and claiming that any title that doesn't match your view, which appears to be Google's view of the situation, is inaccurate?


Do you dispute her ultimatum and/or offer to resign?


Do I dispute that she provided an ultimatum of which one side was a resignation offer? No. But can I say that this ultimatum wasn’t the result of Google pulling the rug out from under her and putting her in an unfortunate position from which she felt her only way to exercise her leverage was to make such a proposition, then pretend like they’re blameless by “choosing a provided option” of terminating her without looking at the reasons why she had to do such a thing? I’m not sure yet. I don’t think we have enough information at this point to judge, so I’m a bit concerned with comments like the one I just responded to that act as if this concern doesn’t exist.


> Do I dispute that she provided an ultimatum of which one side was a resignation offer?

Ok, so it's established that she threatened her managers and employer: either comply with her personal wishes or she would "exercise her leverage" to cause the company harm.

And in the end, as her managers didn't cave in to her threats, she decided to pull the trigger.

> But can I say that this ultimatum wasn’t the result of Google pulling the rug out from under her and putting her in an unfortunate position (...) ?

So she threatened someone, her target didn't cave in, and thus she proceeded to execute her threat.

And somehow the responsibility of her executing her threat is supposed to be on her target?

This sounds a lot like victim blaming.

"Look what you made me do! Are you happy now?"


It seems to me that people misunderstand how ultimatums work. I guarantee you that you maintain a number of unsaid ultimatums with your employer; for example, one of them may be “pay me or I will quit”. Once an ultimatum reaches the point where it is nonverbal it is difficult to classify as a straightforward resignation or firing, because at that point it is clear that communication has broken down and pressure is being applied from at least one side. Without knowing who the “victim” is here the argument could go either way: “you made me issue an ultimatum”/“you put us in a position where we had to accept your resignation”.


Do you feel we have not heard enough of her side of the story yet?


It's clear all of the facts are not available, and likely never will be; experiences are subjective.

It's also clear that before learning of this event, we had 0% knowledge of the situation.

Between 0 and where we are now with both sides expressing their point of view to some degree, people on HN began making up their mind in the absence of complete information. There is no requirement or urgency that we come to some inconsequential conclusion of our own.

My bias is that it seems difficult to obtain the position the researcher held at Google. How can I be willing to believe the engineer has the ability to navigate the subject matter and its application without being able to navigate this employment scenario? It feels as though I am required to accept the engineer's brilliance while calling them dumb at the same time. That feels like a larger handwave than considering the known actions of Google and questioning the few assertions they are willing, but not required, to provide, truthfully or otherwise.


There are plenty of instances of "smart" people doing "dumb" things. The idea that there is only one intelligence without considering people have a lot of individual foibles due to experience, temperament, predisposition, and any number of other factors is really dangerous. It's that type of thinking that led to presidents who were former movie stars or real estate developers.


> The idea that there is only one intelligence without considering people have a lot of individual foibles due to experience, temperament, predisposition, and any number of other factors is really dangerous. It's that type of thinking that led to presidents who were former movie stars or real estate developers.

Or it doesn't, and we end up victimizing people who have smaller PR budgets with which to present their perspective.


What about her toxic behavior? Do you dispute that?


Toxic is an inflammatory and unnecessary word to use when no-one is privy to the actual facts.


Have you read the exchange with LeCun? That is fact and it is toxic. So it's not unnecessary or inflammatory.


Do you have any links to this discussion? I’m trying to verify the toxic claim and am having a hard time finding it on Twitter right now because of all the noise.


I think there are links to the tweets in this article: https://syncedreview.com/2020/06/30/yann-lecun-quits-twitter...


I’d like to see more elaboration than a claim that her behavior is toxic - that is not helpful or conducive for spreading knowledge. I don’t see anything here that matches up to that claim at all, speaking as an outsider.


A word you seem to have no reservations about using against your opponents just from a quick search of your comment history. Why such outrage when it's turned back on your own sacred cows?


Why are you assuming anything about me? You realise I've used the word toxic in my whole comment history only in relation to this topic, right?

I, like many others, don't like the way she deals with people. It is toxic. You call a toxic person a toxic person. There is ample evidence for it. It's not outrage, it's just facts.


I think you replied to the wrong person?/friendly fire?

My message was in response to:

> Toxic is an inflammatory and unnecessary word to use when no-one is privy to the actual facts.


Ah ok, apologies!


No problem :)


Can you provide said evidence? The article linked above by a throwaway account didn’t contain anything I would label as toxic.


I disagree with this strongly. Her communication on twitter is public record. It's clearly toxic.


> Toxic is an inflammatory and unnecessary word to use when no-one is privy to the actual facts.

This response reflects an unwillingness to understand the situational nuance from multiple sides or to show empathy to a person in distress, and it offers no workarounds, support, or evidence. I've grown so tired of these factual tugs of war used to justify callousness.


> Seriously, what happened to journalistic standards and integrity? Why are the actual events being so forcefully twisted to fit a particular narrative?

clicks. after a decade during which they lost money hand over fist not knowing how to monetise their product on the internet, they all went to the lowest common denominator: clicks. this in turn forced them to start bending the truth (something that tabloids were known for, and for which they got huge profits).

basically we still don’t have a viable business model for this industry. the only one that works needs headlines such as the one you mentioned to function.

the interesting bit is that publications such as Nikkei or the FT are still top-notch, but these are niches, not general-audience publications. (also my FT subscription is £30/month and that's a lot of money if you're not in that niche)


> the interesting bit is that publications such as Nikkei or the FT are still top-notch, but these are niches, not general-audience publications. (also my FT subscription is £30/month and that's a lot of money if you're not in that niche)

Indeed. £30/mo sounds like quite a lot of money in an era where a teenager doesn't come to your house every morning and shove the newspaper through your door (as I used to many years ago) but, adjusted for inflation, it's probably less than the cost of that older delivery mechanism.

It's hard to persuade people of that point of view though so they stick with free, ad-supported "news". Not that paid news wasn't historically ad supported as well, but at least they had more diversified revenue streams.

And you are, of course, correct: it's all about the clicks and ad revenue. And, given most peoples' preference for free over paid news, I don't have any great ideas on how to fix that.


I don't see how you can possibly be that confident about the truth of what happened. Unless you have non-public information, then there is still considerable uncertainty.

Timnit says she was fired: https://twitter.com/timnitGebru/status/1334352694664957952

Jeff Dean's email says the article was "approved", though clearly not approved enough: https://www.platformer.news/p/the-withering-email-that-got-a...

Other researchers at Google say that this level of scrutiny is highly unusual: https://news.ycombinator.com/item?id=25307618

So in fact it is you that is twisting the narrative by asserting that the truth is publicly known, and that it is exactly as Google describes.


Timnit also shared a number of tweets, which you can easily access via the one you referenced, where she quotes from the email that Google sent her in response to her ultimatum.

They accepted her resignation and brought her finish date forward. They cited her message to the brain group as reason for doing so (without the resignation I suspect she may have faced some disciplinary action, though whether it would have gone as far as firing I don't know).

They clearly weren't happy with the content but, beyond that, if somebody is pissed off enough to write the kind of message Timnit did, then by making them work their notice period you only invite them to cause more trouble whilst they're still part of the organisation. You therefore bring forward their leaving date and make their resignation effective immediately.

When you do this it's about mitigating risk to the organisation. Commonly I've seen it done with salespeople in certain sectors, where when they resign they are escorted from the premises and access is revoked as part of minimising the risks that they'll take clients with them to their next role (particularly if they'll be working for a competitor). Still, any situation in which continuing to have an employee around represents a significant risk to the organisation is one in which you might ask them to leave immediately.

Google already had a mess to try to clean up with the brain group as a result of Timnit's message to that group. They probably didn't want any new messes to deal with, so they asked her to leave immediately to mitigate that risk.

Btw, I'm not advocating for Google here: I'm just looking at this in terms of, "What would I as a manager do in a circumstance where an employee has set out conditions of an ultimatum for their continued employment that I am unable or unwilling to meet?"


No-one knows fully without seeing her employment contract.

But having worked at many companies similar to Google in similar roles it is not normal for (a) contracts to not have notice periods and (b) for companies to not honour them.

And IP-flight risk is a concern for many roles but it's typically handled through legal channels as we've seen with Uber.


> it is not normal for (a) contracts to not have notice periods and (b) for companies to not honour them.

Nobody's talking about the contract not having a notice period, much less about not honouring the notice period. I'm talking about (metaphorically these days) getting you out of the building and stopping you from potentially causing damage.

I'm not sure about the US but here in the UK your notice period will still be honoured because you will receive the salary you would have received had you continued to work through that notice period even though you are no longer able to do any work for the company (i.e., "gardening leave" - https://en.wikipedia.org/wiki/Garden_leave).

I don't know what Google's severance policies are, and particularly with regard to remuneration for the severance period (they will certainly vary by country/region though), so this situation might be different. Nobody's explicitly said whether or not Timnit will be paid some standard notice period (though reference is made to her final paycheque in the email she quotes where her resignation is accepted). No doubt an organisation as large and complex as Google has some policy that covers these circumstances that they will follow.

As I say, here in the UK, if somebody's resigned it's perfectly OK to ask them to stop working before their notice period is up (which might include revoking access to email and other company systems), as long as you pay them for the whole notice period. Doing work on behalf of the company and getting paid are two separate issues under these circumstances.


The effective date of a resignation is when the employee is officially no longer employed. Not when they're put on gardening leave. Google doesn't pay people past the effective date normally.


Thank you. That's helpful to know and clearly illustrates the difference in practice across different countries (they wouldn't be able to do that in the UK unless it was a firing).


I think the difference is more phrasing than practice. Both countries require paying employees for the notice period when they resign. That's why people are saying she was fired.


>contracts to not have notice periods

All the employment contracts I've seen were along the lines of "you may give notice of x days which we may refuse" and "in case of termination we give notice according to laws (2 weeks)" or something iirc. If she resigned it's pretty usual for the employer to be able to waive the notice period.


Unilaterally changing the date would make it a termination.


And further down that thread:

> I need to be very careful what I say so let me be clear. They can come after me. No one told me that I was fired.


> The paper DID NOT force her out of Google.

We would all love to see evidence of that.


> She could have chosen to make improvements to the paper based on the feedback she was given

I'm unsure whether "improvement" is the right synonym for "change" in this situation.


‘she resigned of her own volition’.

That’s false - she, at most, threatened to resign.


Despite what both sides claim, IMHO papers like this are primarily PR, ideology, and politics, rather than science and technology.

This is akin to a speech writer working for a politician, writing a piece that disagrees with the party platform, and refusing to fix it when asked.


Can I postulate that the problem with these kinds of positions goes even deeper? Let's say we had a wonderful company which would create an unimpeachable ethics board and which enthusiastically endorsed the findings of said board. This is good for today but what about tomorrow? What about when the company comes up with a promising product which violates one of those ethics? What happens when the company's stock stagnates? What happens when the company is in financial trouble? What happens when the company is coming up with promising new products every day for years that could get it out of that financial trouble? What happens when ethically minded managers have to argue day in and day out that we should let the company fail rather than deign to implement one of these profitable products which violates just a few of these ethics? Eventually, some combination of desperation and profit motive will break the dam.

And often it is a dam that need only break once. Corporate ethics as a whole seems contradictory. The only ethical rules which a company may obey in perpetuity are those that do not for long conflict with profit and those which are backed by effective laws.

Even when done with a positive intent, I fear creating positions such as this is begging for eventual corruption.


In her defense, she was working on ethics in AI. What ethics to adopt inherently comes down to ideology.

While I don't really understand the emissions argument, the rest strikes me as very defensible. If the best language models need giant datasets to excel, it is very difficult to ensure your AI is trained on reasonable data.

I wouldn't want an AI therapist to be trained on 4chan. Now obviously nobody would be that stupid, but unfortunately it seems we are not far from it.

Unless we build models that rely on less data, it will be difficult to prevent problematic biases in the AI we put into production.

If that is the argument Timnit makes, I think that's the exact type of work I would expect from an AI ethics department. And good work at that.


You train your LM on web crawl data, but also train a 4chan classifier, then you condition your LM not to generate in 4chan style. GPT-3 got a similar chaperone classifier for offensive speech. It's like knowing swear words but choosing not to use them. You could also condition a general LM to bias and debias its outputs as you like.
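The crudest version of that chaperone idea is plain rejection sampling: generate a few candidates, score them with the style/toxicity classifier, and drop whatever trips it. A sketch using Hugging Face pipelines, where both model names are placeholders and the label convention depends entirely on whichever classifier you actually pick:

    # Sketch of a "chaperone" filter via rejection sampling. Model names are
    # placeholders; real conditioning (fine-tuning, guided decoding) goes further.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")              # placeholder LM
    chaperone = pipeline("text-classification", model="toxicity-clf")  # placeholder

    def guarded_generate(prompt, n_candidates=5, max_score=0.5):
        candidates = generator(prompt, max_new_tokens=60, do_sample=True,
                               num_return_sequences=n_candidates)
        for cand in candidates:
            text = cand["generated_text"]
            verdict = chaperone(text)[0]  # e.g. {"label": "toxic", "score": 0.97}
            if not (verdict["label"] == "toxic" and verdict["score"] > max_score):
                return text
        return None  # nothing passed the filter; the caller decides what to do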


She didn’t “refuse to fix it when asked”. She agreed with the proviso that she could have a meeting to figure out what she was and was not permitted to publish.

The response was to decline to meet and fire her.

It’s entirely plausible that the paper is bunk; however, when someone is willing to come down that hard to prevent an idea being published, I tend to err on the side of “worth finding out what”.


> She didn’t “refuse to fix it when asked”. She agreed with the proviso that she could have a meeting to figure out what she was and was not permitted to publish.

I think that is in dispute. Jeff Dean claims she did refuse to fix it.

> It’s entirely plausible that the paper is bunk; however, when someone is willing to come down that hard to prevent an idea being published, I tend to err on the side of “worth finding out what”

But they weren't trying to prevent the idea from being published. They certainly knew by firing her they'd give it far more attention than it would otherwise get. What they wanted was not to put their name on this idea.


That's one interpretation of her demands. Mine is that she was preparing to shame the "privileged white men" who were blocking her.

https://twitter.com/timnitGebru/status/1331757629996109824


> She didn’t “refuse to fix it when asked”. She agreed with the proviso that she ...

Are you sure she agreed to fix it? According to her tweets she agreed to take her name off the paper, not to fix it, and only if they agreed to her conditions which included releasing the names of all the reviewers involved. From Jeff Dean's email:

> ... including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback

If I was in their position, I would be extremely uncomfortable releasing the names of the reviewers with the likely result that they would be doxxed by Gebru on Twitter


She wasn't fired, she offered her resignation.


She offered “to work on an end date”. Whether that counts as a resignation or not is likely to be a matter for the NLRB.


Suppose your girlfriend announced she was working on an end date, how would you feel?


I'd fire her in response, it's still a firing.

This technicality matters for unemployment and COBRA, and it has a concrete definition.


This is not a discussion about the technicalities, I am sure you are aware.


For any application of meaningful size, "science and technology" cannot be divorced from "ideology and politics".

I'm not blaming you alone, but the recurring idea that it can be is worrying.


Well, she did research while Google expected a PR piece. And it's obvious she couldn't fulfill Google's expectations, as the whole point of an ethics committee is to be critical, not just to rubber-stamp.


Ethics is inherently "PR, ideology, and politics" since the ethicist has to select a set of foundational moral principles to work off of but that set won't be universally shared by everyone.


These points are...almost uniformly terrible.

> Training large AI models consumes a lot of computer processing power, and hence a lot of electricity. Gebru and her coauthors refer to a 2019 paper from Emma Strubell and her collaborators on the carbon emissions and financial costs of large language models. It found that their energy consumption and carbon footprint have been exploding since 2017, as models have been fed more and more data.

You train the model once...and then use it to provide incredibly cheap value for billions of people. Comparing the carbon footprint of a single flight between NYC and LA to the training of a model is...insanely disingenuous. The model gets trained once. The correct comparison would be to the carbon footprint of building the plane. Or, alternatively, amortize the carbon footprint of the training over all of the individual queries it answers.
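As a rough back-of-envelope for that amortization point (every number below is invented purely to show the arithmetic, not a measurement of any real model):

    # Hypothetical amortization: a one-time training cost spread over the queries served.
    training_emissions_kg = 284_000          # assumed one-time training footprint (kg CO2e)
    queries_per_year = 10_000_000_000        # assumed query volume the model serves
    years_in_service = 2                     # assumed model lifetime before retraining

    grams_per_query = training_emissions_kg * 1_000 / (queries_per_year * years_in_service)
    print(f"~{grams_per_query:.4f} g CO2e per query")   # ~0.0142 g with these assumptions

With assumptions in that ballpark the per-query share is a tiny fraction of a gram; with a short model lifetime or constant retraining the picture changes, which is part of why the comparison is so contested.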

> Large language models are also trained on exponentially increasing amounts of text. This means researchers have sought to collect all the data they can from the internet, so there's a risk that racist, sexist, and otherwise abusive language ends up in the training data.

This is the only actually legitimate point. This is a real problem, but everyone already knows about this problem, and so if she's going to talk about it she should be doing so in a solutions focused way if she means to contribute anything to the field. She may have done that in the paper, but this review doesn't say so.

> The researchers summarize the third challenge as the risk of “misdirected research effort.” Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them. “This research effort brings with it an opportunity cost,” Gebru and her colleagues write. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated datasets (and thus also use less energy).

Your criticism is that...tech companies are spending capital on making more profit for themselves? That's, uh, not much of a criticism. Especially when you consider the fact that this technology has positive spillover effects for other groups. These language models can be repurposed to combat racism online, and for all sorts of other things. But even if you ignore that, the premise here is just an utterly trivial near-tautology: "Company invests in things that make it more money".

> The final problem with large language models, the researchers say, is that because they’re so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases, such as the college student who churned out AI-generated self-help and productivity advice on a blog, which went viral.

Sure. You could say this about photoshop, too, and people have. But this technology is going to happen, with or without Google's help.

> In his internal email, Dean, the Google AI head, said one reason the paper “didn’t meet our bar” was that it “ignored too much relevant research.” Specifically, he said it didn’t mention more recent work on how to make large language models more energy-efficient and mitigate problems of bias.

> However, the six collaborators drew on a wide breadth of scholarship. The paper’s citation list, with 128 references, is notably long. “It's the sort of work that no individual or even pair of authors can pull off,” Bender said. “It really required this collaboration.”

Your defense against a claim that specific research was missed is to cite...the length of the citation list? Lol. This argument would hardly pass muster in a forum comment.


A company that profits from AI conducts and oversees research on ethics in AI. What could go wrong?

It was just a question of time before something like that blew up.

Edit/disclaimer: said without judging this particular conflict.


Maybe the real problem with the paper is that it points out that GPT-3 and the like can completely distort search results by filling the web with auto-generated spam. Botnets could link spam pages to each other and even generate traffic. If that got into the wrong hands, we would be unable to distinguish truth from fiction.

If the general public heard of this, Google’s stock price might be hurt. In such a bot-filled world, humans might prefer to start their searches in walled-gardens or on sites that could better-validate content.


Not totally on topic, but on threads about researchers, I see this trend that when we're talking about a male researcher we only mention the family name but when talking about a female researcher we're using mostly the first name.

Case in point, at time of writing there are 26 mentions of "Timnit" vs 16 mentions of "Gebru" on this thread.

I don't think there are bad intentions behind this, but it really comes off as infantilizing so maybe we'd be better off calling her "Gebru"?
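(If anyone wants to reproduce that kind of tally, a trivial sketch; `thread.txt` is a hypothetical local dump of the page text:)

    # Count given-name vs. surname mentions in a saved copy of the thread.
    import re

    text = open("thread.txt", encoding="utf-8").read().lower()
    for name in ("timnit", "gebru"):
        count = len(re.findall(r"\b" + name + r"\b", text))
        print(name, count)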


Ethiopian names seem to generally not follow the western first name/surname pattern, instead using the Semitic system where you list the male lineage (cf. https://ethnomed.org/culture/ethiopian/). I remember this information receiving some unusually mainstream exposure thanks to the moments in the limelight that https://en.wikipedia.org/wiki/Tedros_Adhanom enjoyed this year.

(Now, I don't actually have enough of a sense of Ethiopian ethnicities to be confident as to whether this holds true for all of them and whether Timnit Gebru belongs to one for which it does, but the case of @DrTedros certainly left me with a general heuristic saying that referring to her as Gebru may be an ignorant foreigner move whereas using the "first" name (as US academics generally refer to each other anyway) is almost certainly safe.)


There could be many other reasons:

- Outside media continues to refer to her as Timnit including NYTimes which affects HN users

- If someone’s name is Gorot Trzebiatowski (please forgive me if you have such a name), most people would go with Gorot because it is shorter and easier to pronounce / write. Subconsciously, these things happen.

You’re right - no need to assume malice where there probably isn’t any. This is conspiratorial thinking: find some pattern in the data that supports the mainstream narrative and draw conclusions from it. This is what QAnon conspirators do all day.

Although I’m curious about your motivation for analyzing this :-)


I haven't noticed this elsewhere, but haven't been paying attention. It'll be interesting to see how much I notice it henceforth - interesting and probably disappointing.

Regarding first/last name confusion, how many people here on HN _know_ which is her first name and which is her last? It's a lot easier for Western Anglicized speakers to get it wrong with non-Western non-Anglicized names than with "Jeff Dean", whether intentionally or not.


I’d love to see the HN discussion on a study titled “An analysis of unconscious bias on HN”!

Great point though


The lifetime output of 5 cars is nothing in a nation of over 300,000,000.

If AI can manage peak energy usage efficiently and improve the grid by 1% it will have made a positive impact over 100 fold.


I fail to understand how this is even a story worth covering. Sometimes people forget you work FOR someone who's already paying top dollar for your work. If your interests are not aligned with your employer, you certainly have the right to walk away. You surely do not have the right to be seen as a beaten-up dog when this is exactly what happens when you ignore feedback, rant, and give ultimatums as if you count more than anybody else. I am glad she was let go.


If the authors aren't confident enough in their paper to release it publicly, why would I want to read someone else's (presumably inferior) summary of it?


This is a good point that I did not think of. Google maintains the paper was not up to their standard. The authors submitted the paper to a conference. And now, one of the authors asks MIT Tech Review not to publish it because it is an early draft. Gives some credibility to Google's claim IMO.


> It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

It seems amusing to read this when the same person and her followers hounded Yann LeCun for pointing out the same thing with image models.

Anyway, this seems interesting, but I am not sure how you solve it. Do we take a representative dataset according to the population of a place? Also, assuming this is limited to a single language, how can AI-generated language account for nuances from regional differences in a common model? Isn't what the author is asking for here essentially to train en_US, en_GB, and en_IN separately? For things like completion, don't language models already account for this?


That's a problem for lots of technologies. The native language in my country hardly has any literature published in it because it's so obscure. How should the printing press solve this unfairness? The internet in general also doesn't use it much at all, neither do all the popular movies and TV series. They don't even get subtitled. Apple Maps can't pronounce the street names correctly for navigation either. It's pretty hopeless but does it really matter?

Maybe if you have an unpopular language, that's just unfortunate and please encourage your kids to learn English (which they probably do at school just about everywhere anyway) so you don't perpetuate the same problem to the next generation.


Reading Jeff's email plus her comments on twitter doesn't give the full story.

It seems like

1. She did not give them the time required to vet the paper or follow the process, plus she emailed everyone telling them to stop work on other projects.

2. Google fired her immediately, which might have gone differently if she weren't a POC.

This


If you read the linked material, an ex-Googler calls that a lie based on his experience working as a reviewer of scientific papers there.


Ultimately this reads as straight censorship by Google. Or if not by Google, by Google employees, with the tacit approval of their higher ups.

They didn't like the viewpoint she expressed, didn't like the criticisms she raised, so they blocked her paper (well, hers and her 7 co-authors'). When she said that was unacceptable and stood her ground, they badmouthed her and fired her.

You don't have to agree with the paper's criticisms (and it appears they were just part of a longer paper) to be concerned by viewpoint censorship. If the paper wasn't worthwhile or based on facts, then that would have come out in academic review, either in peer review in the paper, or in subsequent papers rebutting it, or pointing to subsequent changes. That's how academic inquiry works.

But if companies can silence ethics researchers who express concerns, whose job, as AI ethicists, is to express concerns, that fundamentally undermines academic inquiry into the topics at hand.


> Censorship

Man, people really do want to have their cake and eat it too. If you want to publish research freely join an academic research lab, if you value money join an industry lab. You can't have both of those things.


She was the technical co-lead of Google's Ethical AI team, publishing papers like this was exactly what she was hired to do.

Here's Google's Principles for Artificial Intelligence https://ai.google/principles/

From the second paragraph of item 6:

"We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications."


If a megafarm hires you to write papers for the Ethical Milk Production team, any modicum of social awareness will tell you that they don’t actually want you to write a paper about the ethics of animal products.


If you, a large corporation working in AI, hire a prominent and vocal AI ethicist, any modicum of awareness should tell you that they may actually have a sense of ethics.


True - though I’m guessing there are a lot more vocal ethicists who will tone it down for money, than there are large AI corps who are actually willing to be honest about AI ethics! Anyway, sounds like it was a bad fit all round for both parties and this was the only possible outcome long-term.


Except corporations do not have awareness, because they are machines.


But if that megafarm says "We believe that AI should: 1. Be socially beneficial." then we can point to that when their behaviour is not consistent with it, no?


Once you conclude they were just using you for good PR, what is your recourse (aside from compromising your ethics)? Bad PR.


There are a lot of research resources - compute, tools, data - you can't access from academia. The action in AI is in industry, as AI needs data and industry has it.


This is utter nonsense. If you look outside her Twitter feed and at the evidence, it's clear she was fired because she was toxic. Her threat to leave was her own fault, and it allowed Google to move forward. Look at her interactions with respected people like LeCun. She has issues with disagreeing with people. She may be qualified, she can have opinions, you can even be the best engineer, but if you are an asshole when it comes to disagreeing, then no one will want to work with you. Everyone is replaceable.


It is not clear at all. Nobody at Google has said that she was fired for being toxic. That’s your inference.


> Nobody at Google has said that she was fired for being toxic.

No company will ever say this publicly.

Yes, that's my inference.

Looking at this Twitter thread between her and Jeff Dean 6 months ago: https://twitter.com/JeffDean/status/1278571537776271360 ...

She is highly toxic, not just in general but specifically toward Jeff Dean, who is her manager's manager. Doing that to anyone is not okay.

Reading between the lines, she was absolutely fired for her toxicity. This event was just the last straw.


Google let her go early claiming her actions were "inconsistent with the expectations of a Google manager".

That is the language used when firing someone for their behavior.


Sometimes reasons for firing are the last straw, not all the other things that led to that point.


Another inference


Google let her go early claiming her actions were "inconsistent with the expectations of a Google manager".

As per the above comment


Honest question to fellow HN commenters: what is the popular take on energy efficiency versus the entertainment industry? Where do people draw the line regarding what should and should not be an ethical use case for energy?


While this is a lot of electricity and CO2 to consider, is it fair to compare them to cars? I don't think this so much means that one should get rid of training these big models but rather do their best to ensure that the data centers that these models are trained on are getting their electricity from mostly 0 emission sources. This actually seems like a doable task since you don't need to be physically near your compute cluster. I think Google and Facebook could also turn this into a PR move by putting some research money into green tech (ambiguous).


I'm curious if the paper posits any solutions to the risks highlighted? There was mention of references to work on making model training more energy efficient but that was it.


Doesn't meet the bar for Google's publishing quality standards? That is the most ridiculous thing I have ever heard.


Kicking out (or accepting the resignation of) an AI researcher is a lot less costly in PR terms than having a large-scale AI failure like the one Google experienced when their photo software started identifying black people as monkeys in 2015. Gebru is a fuse that just went pop, and can now be replaced with a new one quite easily.


What was the point of this article from Technology Review? Yes, it gave insight into the contents of the paper, but it barely drew any conclusions or added anything about what that meant for the situation, other than, at the very end, stating the obvious: maybe this cuts into Google's 'cash cow'.

Bah


I was a little disappointed in the quality of this research. I can see where Google is coming from. Some of these topics seem regurgitated (e.g. bias in online text) and some are just irrelevant (CO2 emissions). Training language models does not contribute in any meaningful way to global CO2 emissions. Overall not a very strong paper in contrast to the other reactions I've been seeing online.


> I was little disappointed in the quality of this research

But the research is not even published yet


Kinda the point though isn't it?

We're only reading a summary of the paper because apparently the authors aren't confident enough in its quality to release it publicly.

You can't claim that Google dismissed this paper out-of-hand while simultaneously saying "oh, but it's too much of a draft to release publicly". Uh, if it was too much of a draft for the public why shouldn't it be too drafty for Google? Are we really pretending that Google has lower standards of quality than the general public?


That, or Google is censoring the paper because of the conclusion it comes to, which doesn't align with their business interests.


If a company is paying a researcher a lofty salary (I am guessing ballpark mid six figures), and provides that researcher virtually unlimited tools and resources, shouldn't that company have a say in what material gets published by the researcher? Why shouldn't it have the ability to restrict research of dubious quality that makes borderline incorrect conclusions about the company?


I’m taking the article at face value, which states the authors think it’s too much of a draft.


I naively expected this article to include a link to the paper somewhere in it, so we could decide for ourselves. Oh well.


Training language models does not contribute in any meaningful way to global CO2 emissions.

Google consumes a vast amount of energy: "10.6 terawatt hours in 2018, up from 2.86 terawatt hours in 2011."[1]

If training and retraining models is a significant and inefficient part of Google's energy consumption, the point doesn't seem insignificant. (Edit: the most advanced AI models involve as much computing and energy as any programs ever created [2], btw.) I'm biased by the impression that Google's actual search results haven't improved very much, but I don't think I'm alone in that impression.

[1] https://www.statista.com/statistics/788540/energy-consumptio...

[2] https://openai.com/blog/ai-and-compute/


I obviously don't have the data, but I would argue that a tiny tiny fraction of Google's energy usage is dedicated to training language models.

I'm guessing the vast majority of the energy usage is for serving billions of requests for various products, such as search, youtube, maps, gmail, photos, etc.., and the cpu, network, and storage requirements for those requests.

Training and retraining ML models is definitely not on the hot path.


what’s the energy expense of training GPT-3? It should be very high. Eager to know the upside of such models. Environmental and social impacts of tech are becoming more direct and mainstream in today’s world!


How exactly is the environmental impact of tech becoming more "direct and mainstream" [citation needed]?

Data centers around the world account only for a tiny fraction of electricity consumption or carbon emissions. Do you even account for the reduced car and air travel due to remote working and online shopping?

How are advanced few-shot learners like GPT-3 even remotely a problem? Is training fewer models somehow worse? Do they even know what they are talking about?

It's all very confusing, but a lot of the work done by grievance studies becomes immediately easier to understand once you realize they are arguing in bad faith.


What is the energy expended on training GPT-3? Eager to know the upsides of such models. The environmental and social impacts of tech are more direct and mainstream in today’s world.


You realize the energy and CO2 produced in releasing the paper is greater than that of the training itself.


That's the kind of problem you get when you let your company turn into a political party...


This story has been blown out of proportion. Social media is sick. Some people's tweets read like she is some kind of George Floyd-level victim. She had the best job one can have: lead researcher who got resources to build a team to work on her own research interests, with all the freedom. And making a killing doing it. She blew it, and her race has nothing to do with what happened. She will be fine, most likely joining academia as a tenured professor. If you care about social justice, focus on real victims who really need help, not on this privileged researcher.


Is this what esteemed PhD AI ethicists spend their time on? It sounds like a useless grad school paper. I think much less of everyone now.


The significance of their job is imagined at best, borderline academic welfare. Guess that is why Google can make the choice to remove her this quickly.

They lose nothing but face.


It’s academic welfare but also woke political welfare / a PR exercise. It shouldn’t be legitimized because it is just inviting unnecessary political controversy into the workplace.


This is exactly what I would expect from an “AI ethicist” to be honest.


The thing is, that's about all an AI ethicist can say. The problem is there isn't that much to modern "AI"/machine learning. It's just brute force approximation/simulation/curve-fitting. There's a lot of tweaking, a lot of data and a lot of processing power. It does a lot but fails unpredictably and has unpredictable (or predictable) biases and leaves you at the mercy of a black box.

There, I've repeated the paper without reading it. But Google did hire her to be an AI ethicist so what else would they expect?


The mistake was expecting much from “PhD AI ethicist” - a title Orwell accidentally left out.


Medical ethics is a serious subject, and then there was Asilomar on the ethics of genetic engineering. Just because this AI ethicist isn't a serious one, it doesn't follow that the field is worthless.


I’m going to stand by my post even though AI ethics should be a genuine field. Call me cynical.


The Asilomar Conference on Recombinant DNA was a high-caliber event. The paper here, done at Google, a leading AI company, isn't, and the ethics-washing on view here is deeply concerning.


That’s the crux of the problem. Top tier academia and ethics both attract the absolute best and the absolute worst. So I suppose we need both the critical cynicism and the optimists in threads like these.


> There have been a few high-profile cases, such as the college student who churned out AI-generated self-help and productivity advice on a blog, which went viral.

This is how much of the internet has felt for a long time. After this nugget, I now wonder if Vox is just a model trained on Piketty and Tumblr.

edit: Also not sold on the CO2 argument. Too many variables! Nerds will calculate and re-calculate such things, with the result jumping all over the place, swearing that they've gotten it right--this time! No humility, in spite of the odds.


Vox are economically usually a lot more neoliberal than Piketty.


I have a genuine question:

If a person of color (I am myself non-white) is not performing, what does it take to fire them without the entire world playing the race card on you?

Are we creating a society that makes it impossible to fire a person of color? You know there are bad apples in every race, right? How do we handle such scenarios? Seems unfair to me, myself being a person of color - I don't want the world to treat me like some kind of a hero for being non-white / minority. I want fairness and it is frankly offensive.

I am not pleased with the way we're treating each other. It's supposed to be equal opportunity.

I also want us to have scientific discussion about gender differences (backed by research) and other difficult conversations. Nature doesn't give a fuck about any of this - if our goal is to uncover the way mother nature works, we're going to have to meet difficult truths and not be afraid of it.


> If a person of color (I am myself non-white) is not performing, what does it take to fire them without the entire world playing the race card on you?

I think it is going to be extremely hard. From the same article, I opened a tweet and look at what a high voted reply is: https://twitter.com/PocketNihilist/status/133495412981528985...

This line of reasoning means you can't be pro-diversity and, at the same time, fire someone from an underrepresented group for their behaviour.

People are conveniently choosing to forget that this is the same company which not long ago fired a person when he complained about the company being too pro-diversity in its hiring.

I really hate Twitter and its mob culture.


You’re going to be downvoted, not because of your questions, but because you decided to put “please bring the downvotes” at the end of your comment. Why would you undermine yourself by sticking such an antagonistic comment at the end of your post?


I removed it. I know it's going to be downvoted, though, because it goes against the grain on HN. But I am being respectful.

We've created an atmosphere of fear. I don't feel comfortable voicing my opinions even after being anonymous on the internet. That's pretty fucked up.


Downvote goading, things like 'the entire world playing the race card on you' are basically trolling, they don't sound like asking a question in good faith. This particular case is also not about someone being fired for 'not performing' - nobody has claimed that. So if trolling is not your intent, you should edit it down to whatever it is you are actually asking.


That's why I removed it. I am asking the questions in good faith. I want to know how society will avoid the chilling effect of being unable to fire underperformers and make difficult choices when they need to be made.

If we cannot sit down and have a peaceful conversation without accusations of trolling and other nonsense, please don't engage in this thread.


No, I'm sorry, the high horse you're trying to ride here just isn't going to hold the weight of the ideological argument you tried to make. Timnit Gebru was severed from Google because of a conflict between her and the research practices at Google. Moreover, the story begins with her offering an ultimatum and Google preemptively accepting her resignation. No part of this story reasonably involves the "gender differences" and "difficult truths" you're invoking.

Dropping boilerplate ideological provocations onto unrelated threads isn't good-faith conversation. Whether you mean it to be or not, it has the effect of trolling, and on HN, trolling is a strict-liability offense; your mens rea matters less than the outcome.

Please don't do things like this on HN.


Probably fairer to say she was severed because the fruits of her research are irreconcilable with the company's peaceful enjoyment of its core business model. The sudden commitment to research practices (as documented elsewhere in this thread) being a fig leaf for that less comfortable truth.


I agree, for what it's worth.


'playing the race card' and asking about firing for non-performance in the context of a story that does not involve someone being fired for that are not invitations to 'peaceful conversation'. They come off as deliberate provocation. Again, if that's not what you have in mind, just edit it out and focus on whatever it is you want to ask.


Sorry, I know this is somewhat off-topic, but I feel like it is important to engage on a human level for a minute.

I’m genuinely sorry to hear that you feel uncomfortable voicing your opinions. I know from personal experience how hard it is to feel like you have to keep yourself closed off to the world. As a species generally, and as technologists specifically, we have some way to go to create non-toxic spaces for people to share ideas.

Still, you are engaging in fortune telling[0]—you really can’t know that your comments will be downvoted before you make them. Feeling compelled to add notes to the end of your posts encouraging others to downvote is your brain tricking you into tilting the scales to “confirm” what you “knew” to be true. It may feel like a helpful strategy to blunt the emotional pain of discovering that people don’t agree with you all the time, but from what little you’ve said, it sure seems like it’s just reinforcing your negative outlook. I don’t want you to feel bad all the time, and I suspect it is not actually true that most people here are going to disagree and downvote you to oblivion all the time so long as you avoid self-sabotaging.

I know it can be incredibly hard not to take downvotes personally, and, I hope you are able to try to reframe them as what they are: some random people, some of whom are thoughtful and some of whom are not, pushing a button. It’s not a personal attack, even though our brains can make it feel very much like it is. If you truly are getting downvoted a lot, it may be a signal that some of your opinions aren’t fully thought through and need to be re-evaluated, or perhaps that you just didn’t present your ideas well. On the other hand, your brain can and will exaggerate the negative experiences, make them seem like they are happening a lot more than they are, and make you feel bad even though you’re actually doing just fine.

Anyway, while I’m sure it happens (I don’t think there’s any space that is totally immune to bandwagons), I don’t get the sense that genuine and thoughtful comments regularly get downvoted to oblivion here. It’s trickier than ever these days since there is a lot of bad-faith argumentation going on everywhere online under the guise of innocently “just asking questions”[1], and I think it’s fair to say that there is a growing immune-like reaction which is sometimes attacking genuine posters because it’s just impossible to tell who’s being honest and who’s being a shitty troll.

So just keep doing your best, anonymous internet commenter. :-) If you feel like you can’t, I hope you can find a counsellor or friend who will listen and help you into a more positive head space. At the least, your post has generated some reasonable and civil discussion, and that’s what we’re here for, right?

[0] https://en.wikipedia.org/wiki/Jumping_to_conclusions

[1] https://rationalwiki.org/wiki/Just_asking_questions


Thanks, I read your comment in its entirety. I think we need more people like you. Disagreements about issues like these happen all the time. My fear was founded in the fact that people might 1) perceive me as faking being a non-white person (see some accusations here), 2) on top of that, accuse me of being racist (the truth could not be further from that), 3) downvote because it's a difficult conversation, or 4) perhaps track me down, find my identity, and maybe someone crazy could go nuts and cause harm to my person or my career.


You are looking at it from the wrong angle. I hear you because I used to be like you.

Now...try this instead. Think of downvotes as someone anonymously throwing a tantrum with a keystroke because their tender tender feelings were hurt.

That, my friend.. is not your problem.


Your comment is the top comment.


[flagged]


That's an insane accusation. You can check my comment history; perhaps that will reveal some aspects of it.

Also, I am kind of shocked you would accuse me of something like this. WTF. I am not trolling. I am asking a difficult question that needs to be discussed because no one is discussing it.


You seem to have rigid ideas of what it's possible to experience, believe, and be genuine about on the basis of the color of one's skin. Maybe that should be a signal to step back and rethink.

You also illustrate beautifully the vapidness of this fad notion of "concern trolling". Because they have not conformed with your view of the issues and used your preferred language, they can only be conceived of as trolling.

The other poster seems to be making a genuine effort to express their thoughts and edited their post to be less combative. Yet you can ONLY see them as being dishonest and disingenuous.


She was fired after sending a mass email, to hundreds of colleagues, criticizing her employer in very strong terms. What company wouldn’t fire an exec who sends a “f* this place” email to the company mailing list?


Google, usually. People are highly critical of executives and (relevantly) Google's efforts on diversity and inclusion work all the time.


If a person of color (I am myself non-white) is not performing, what does it take to fire them without the entire world playing the race card on you?

For them to be one of 99.9% of the world who can't mobilize a following to create a complaint about this?

Are we creating a society that makes it impossible to fire a person of color

We are so far from such a situation that your complaint is absurd. A few places with a history of discrimination may have trouble firing the few people of color they might hire. That's about it. In the real world, incompetent people get fired, and often competent people as well. A few people may make a career of playing the race card, but that's a limited number of people.

Both racism and opportunists "playing the race card" can be real at the same time.


It doesn’t matter, because aggrieved employees with a compelling story will always have the option of going public with real or imagined grievances. The person's protected class is just another attribute; this is a story because the subjects are notable.

If a company is responsible in how they manage, follows policies, etc they are fine. If executives or others are allowed to misbehave and the company is too cheap to buy silence, things may not be fine.

What’s the real story here? I don’t see evidence of incompetence. But you can be fired for any legal reason in absence of a contract. Maybe there’s some unknown political or other issue. Maybe some conduct crossed a line. Who knows.


I think the real story is that Google wanted to muzzle a researcher.


Here we're talking about something different: suppressing the speech of a person of color raising some well-thought out arguments about racism that are problematic to the employer's core business.

Google has set up a research organization that ostensibly is empowered to ask and ultimately work to openly resolve difficult questions like these... but when the rubber hits the road, they instead throw the researcher under the bus.


What are the arguments about racism that were suppressed? The paper that didn't pass internal review was about the external costs of training large models (correction: the paper also discussed language models not adhering to anti-racist language standards).


Race comes up several times in the article. Did you miss it?


The common "nice" way to remove non-performers is a "Reduction in force", also known as "layoff" where they let multiple people go ritualistically, though I've seen it for as few as two people. Then they can argue it was about finances, not personal. They offer a decent severance that requires the employee to sign an agreement that requires, in part, they don't sue.

The word "fired" often is reserved for terminating someone for cause, where you really broke the rules and get no severance, not even the two weeks minimum that is customary. Non-performers usually aren't treated this way. I'm not sure this is really what happened in this story.


Employers need solid, documented reasons for firing someone, and the fired person needs proper recourse for appealing it if they feel it was unfair. Going to the NYTimes is OK if there is solid proof of racism; the company needs to face the consequences, both legal and in terms of public image / media perception. But if the fired person goes to the media and brings down the company without proof, we need mechanisms for that as well, and I don't see us addressing that.


In an at-will employment state (i.e. most of them) companies do not need reasons. Maybe you mean to help their chances in a lawsuit or something?


None of this comment has anything directly to do with the article; the article is just a coat rack these arguments are being hung on.


Unless I missed something (and I just rechecked) there's nothing in the guidelines about comments having to be on the topic of the submission.

IMO it enriches HN comment sections to allow for people to bring up adjacent topics, things that came to mind or funky little "this reminds me of..." anecdotes.


This is just your perception based on media speculation, not something actually backed up by numbers. Just because people on the internet are talking about race doesn't mean a firing has anything to do with race.


> what does it take to fire them without the entire world playing the race card on you?

A workplace that isn't hostile to people of color and evidence of the cause of dismissal


Should companies publicly present their side of the story for every firing?


When they want to publicly claim credit for being a certain way but their actions point to another, it's up to them to clarify their position, and use whatever evidence to support their claim.


This seems an extreme solution to media investigation of a small number of people losing their jobs

Perhaps instead, in these situations, employers could provide, at the discretion of the dismissed, information gathered while performing due diligence leading up to the dismissal


We all know there's a zero percent chance of that happening. For many, many reasons.


I personally think that, just as the employer is required to document the reasons for firing, the person going to the media must also provide proof that they were being treated unfairly.


Most people who get fired don't have the option of going to the media. The way "star" researchers and the way average employees get treated is going to be different, whether we're talking white or non-white.


That's a good point. Maybe we need ways for a common person without a huge media presence to voice their concerns about racism.

Since the media cannot cover thousands of individual cases, there should be legal avenues, ones that don't require deep pockets for lawyer fees, for suing companies over racist behavior.

The court should look at this situation objectively and factually.


.


No, I am not, and I don't see how you came to believe that I might be


Wow, let me get this straight: are you claiming that she got fired because she couldn't perform, and that she is using her race as the explanation?


Yes, because this happens every time a PoC is fired. Every single PoC who is fired gets this media attention.


I think it is causing a reverse effect. It's going to create more racist behavior from employers - "If we cannot fire a PoC or some minority, let's not even bother hiring them in the first place."

This is not good.


That is also, in fact, illegal. And it's not very hard to realize someone is doing that.


Sure, but how hard is it to prove it in a court of law?


No one is questioning its legality. That's obvious and I agree, it is illegal.

There are a lot of things that are illegal but people do it anyways.


> Every single PoC who is fired gets this media attention.

No, they don't.


I think you're agreeing with someone being sarcastic. But it is text, so one can't be sure.


The problem with sarcasm on this issue (beyond the inherent ambiguity of plain text sarcasm) is that this is something you can see people saying literally (or, at least, as what they see as only a slightly hyperbolic description of a real trend that they are complaining about.)


I understand, sometimes sarcasm is my tool of expressing frustration.


Sure. It's like Poe's law.


I call BS. "Every single"? Come on.


That was my point.


The problem I find with this has much to do with character. I will not comment on the research (because I do not know the field) but rather comment on why character matters and why motivation is important.

Having read her email and tweets, it is quite clear that she is very much a political activist for a far-left “woke” interpretation of the world, one that I view as an immediate threat to our way of living, democracy, and free speech.

Of course you can still author great papers but you would be naive to expect her to reach any conclusion that goes against her political goals.

In her email she makes this clear. She even issues demands to be met if her paper is not published. In her email she professes to having been “dehumanized” for her color and makes it clear that, irrespective of any factors, hiring only 14% women in a year is not acceptable.

She goes on to say that there has been “enough talking”; we need action.

This type of urgency to take action (of course only the actions she herself approves of) and stop talking are clear indications of a person who no longer lives by liberal values but has embraced an ideology to which they now belong.

At that point, as an employer of researchers, I would consider her credibility to be severely damaged. And if, at that point, I receive demands from this person “or else”, then I would of course also accept her resignation.

I think the tech industry will need to continue to stand up to the anti-liberal values of these “woke” people before they cause too much harm.


> a far-left “woke” interpretation of the world that I view as of immediate threat to our way of living, democracy and free speech.

WTF dude.

The core argument as to what you say is a "woke" interpretation of the world is to reshape the world in such a way that inherent privilege (for example being rich, being white, being male) does not completely predetermine anyone's outcomes. So that you can still "make it" even if you don't start out as rich, if you're not white, or male. Basically, if you want to boil it down to slogans, it's "the American Dream for everyone". Or, equal opportunities – not equal outcomes.

I'm not sure why that is a immediate threat to democracy, free speech, or your way of living.


It seems like all of the actions to "rectify" the aforementioned issues clearly involve threats to democracy and free speech.

For example, the far-left "woke" interpretation has historically made use of and encouraged "cancel culture" to punish people that disagree with that worldview, which deprives them of their right to free speech. This is not imagined -- many people's lives have been destroyed because of this. Free speech is not just the First Amendment in the US... it is a tradition upheld in a plethora of ways, public and private.

In addition, not everybody agrees with the theory of inherent privilege, as it's often used to overlook or devalue the hard work of others with whom one disagrees.

Many people of all backgrounds are frightened by what they are seeing of the far-left "woke" view.


I'm not sure how anyone can have this view of the world and keep a straight face.

However, here

> It seems like all of the actions to "rectify" the aforementioned issues clearly involve threats to democracy and free speech.

you at least acknowledge that there is a problem. So we just need to put our heads together and try to solve these problems... right?

Also, as a child of (at least some) privilege it takes a lot to see and understand the privileges you have. I hope you see yours, and I hope that you would want other people to have the same privileges that you have.


Freedom of speech isn't freedom from consequences.

The only people that are 'frightened' by equality are those that have historically benefited from the lack of equality.


I’ve heard this stock answer many many times before. “Freedom of speech isn’t freedom from consequences.”

By this token, an authoritarian state has freedom of speech - say what you want, but don’t complain when you’re imprisoned or shot for it.

This political slogan does not logically imply that all consequences people choose to deal out are therefore somehow justifiable. Specifically, it does not give people permission to take action to curtail freedom of speech, which is precisely what is happening.


> "Freedom of speech isn't freedom from consequences."

It's kind of surprising to see that even the direct example of Ms. Gebru having just experienced "consequences" hasn't jolted people into realizing that that tired old phrase is problematic precisely because it cuts both ways.


Umm, I think you need to do more research. Have you seen the toxic behavior she has demonstrated toward respected people in the industry? She may be smart and pushing for good things, but her approach, especially with people she disagrees with, is toxic, and she did this to herself.


Both you and the parent invite flamewar. IMO the parent has a good point even if you remove this from what he said.


I think when people pull an episode of a TV show featuring an Asian man wearing make-up to play a dark elf due to fear of reprisal from race crusaders, we're in a bad place.


In the 90s and early 2000s it was quite common for TV shows that offended Christians in some way to be censored or cancelled by their network. This was due to letter writing campaigns by a small minority of Christians.

Is it fair of me to call Christianity a threat to democracy?


It isn't the Christianity so much as the ignorance, but yeah.


Google has deleted an entire track from a Zappa album (Sheikh Yerbouti) listing for reasons known only to itself. Aren’t we already there?


As a woman of color I see it as self-serving bullshit. I too can cry discrimination whenever someone disagrees with me and pretend to be working to make the world more fair. But the truth is, people in Ethiopia (where she was born), for instance, don't give a shit about AI and bias in facial recognition systems. If these people were so woke and concerned about underprivileged communities, they would tackle the problem at its root, not pick the easy fruit that is SV tech boys being scared of offending someone.

TL;DR: For someone relying so heavily on their race and where they came from, she is as privileged as the people she's criticizing.


If you are not sure why, then please read about what has happened before when far-left revolutions occurred based on these concepts of supposed “privilege”.

For example in Lenin’s Soviet Union. People were judged by their class membership rather than their individual behavior.

This ideology of group-membership based society is harmful to everything I consider liberal and free.

Don’t judge people by their appearance, judge them by their character.


It's generally difficult and requires a strong spine to stand up for causes that you believe are important, particularly when doing so involves personal costs in terms of opportunity.

Suffragettes battled hard to allow women to vote, for example. While reading your comment I wondered whether you would have considered their movement to be "woke" at the time.

For their part, Google likely truly does try hard to behave ethically because it tends to be good for their image and business.

But it may also be true that there are dangers and risks involved in AI research that Timnit (as an expert in the field) believes have the capacity to perpetuate inequality on a long timescale.

You state that a sense of urgency around that indicates an ideology. I'd suggest that almost everyone who participates actively in the kind of liberal democracy you defend requires some kind of ideology to guide their decision-making.

The most trustworthy people may adjust and refine their ideology when faced with contradicting facts and evidence, and to do so they may need to understand and reason about those counter-arguments.

There seems to be an underlying sense in some of these threads that the financial and technological setbacks that the tech industry might suffer as a result of adopting more ethical policies and listening to employee concerns wouldn't be worth the cost.

There is less discussion and optimism, for some reason, about what the benefits of a happier and more transparently equitable work environment would be.

Speaking of ideologies, I think this hints at a sense of company loyalty and perhaps national loyalty, with a possible fear that criticisms may be a form of subversion, accidental or malign.

Those loyalties and suspicions may help the participating groups succeed, or could equally lead to their failure if the surrounding environment changes. That is, perhaps, the market at work.


I think you are taking 'stop talking' to mean censor all debate. That's clearly not the primary meaning in the context.

But maybe you think that we still need to debate whether or not we need to hire more women?

And even if you argue that we do NOT need to take action now to bring more gender equality into tech - who is stopping you from making that argument?

Judging by all the comments on the HN threads it seems there are plenty of voices expressing opinions.

I assume that this chorus is mostly men - 86% perhaps?


No. She was accused of doing shoddy research and was fired for that reason, not for her point of view; therefore, evaluating the research is the only relevant issue. I welcome this second opinion on the paper, and in fact this article changed my opinion. I really am blown away by the energy cost of building these models and would love to hear that the numbers in the paper are just way, way off. I realize that very few models like this are built, but as Gebru points out, that also introduces bias and the problem of verifying the integrity of the process. I don't have to agree that NLP is racist, or that capitalism is racist, to find value in this paper. And it certainly is true that what many take to be inevitable in technology is in fact a choice, so attacking and questioning fundamental assumptions is a very important part of moving into the future.


[flagged]


The stupidity of cancel culture and of college kids screaming "shame" at a history professor for an entire lecture because he didn't spend enough time pontificating on white privilege throughout the ages have made a mockery of civil rights activism. How do they expect to change people's minds when they're constantly making enemies?


What I see is that a lot of mainstream liberals are waking up to how problematic and alienating progressives are (even to the point of drawing criticism from former President Obama https://www.nytimes.com/2019/10/31/us/politics/obama-woke-ca...) and are pushing back against their poor behavior.

If you're a liberal who is tired of "wokeness" (I am and a minority to boot), let people like the parent know that you're not on their side, both through your words and through your voting.


I’ve developed a severe case of “wokeness” fatigue. At this point if I see the - social - media trying to drum up outrage for some “woke” cause, I just roll my eyes and scroll away.


The summarized sections of the paper seem very, very weak to me.

1. Environmental footprint of technology must always be considered as a trade-off for what you get in return. Why do we spend energy on a giant render farm for Pixar? Because the cinematic artwork is worth that environmental cost for many people. Obviously we should pursue improvements in environmental outcomes, but that is not a goal unto itself in a vacuum apart from all other side effects of a technology. Is it worth ~5 car-lifetimes to train GPT? I would say overwhelmingly yes. It reminds me of an anecdote from The Beginning of Infinity by David Deutsch, where some ethicists argued about whether it was a useful human endeavor to invent color TV monitors back when they first hit the market. Why would you need to spend resources creating something besides black and white TV? Yet today and for decades, color monitors are used as critical tools to diagnose diseases and save lives.

2. Nobody is required to accept “wokeness” vocabulary, and indeed one of the signs of a crank or a quack is making up a fiefdom of new vocabulary and collecting rent in the form of social authority for the validity and requirement of that new vocab. Nobody is required to be on the cutting edge of how activist language changes, and it seems like a disingenuous way to try to make a cottage industry out of one’s own expertise in rapid changes to activist language. As long as researchers are stating what corpus is used, and making tools available that allow peer reviewers to audit that corpus, they are meeting their obligations to their professional field and to society. We should be happy that language researchers would produce lots of papers and lots of models on many sets of corpora, and as the cost of training these models gets cheaper, and the cost of curating the corpora gets cheaper, we can expect to see a better variety of curated large corpora, domain-specific corpora, and other things.

3. Researcher opportunity cost is perhaps the most ridiculous objection. Researchers are free agents to decide what they want to study. In most PhD programs, especially in machine learning, the project selection is entirely up to the student. If Timnit wants there to be different research priorities, well, news flash, but she is only one of eight billion. Unless she wants to raise money to give as research grants that tie the researchers to her pre-decided methods of inquiry, she really has nothing to say here.

4. Inscrutable models - this is the only one where there is any point. If the models can produce harmful outcomes and they are inscrutable, then debugging or safeguarding them is a real problem. But this has been true for almost all types of computer science algorithms. Of course we should work on methods that synthesize clarity from the prediction mechanism of large neural nets, but that is also not a criticism of neural nets. That’s just a need for more technology.

Overall the main points of this paper seem full of themselves, arrogant and overly self-important, especially with wildly ridiculous connections to “wokeness” vocabulary.

Given the immediate nuclear option of the ultimatum and email that Timnit sent, I suspect the full text of the paper would be even more unacceptable.

Google has plenty of bullshit issues with the way it treats employees and transparency of decision making. Rejecting this publication approval was not one of them.


[flagged]


I am honestly disappointed to see this kind of response at the top of comments here. Every time we have some sort of social issue come up, people here come in to pick apart the person involved, and I think it’s one of the bigger reasons why some maintain a generally negative view of Hacker News. Can you imagine being Timnit and stopping by this thread, where the top comment, in colorful language, reduces you to a social activist fraud? A comment coming from someone who hasn’t even read the full paper? One that ignores the job you were hired to do (which seems precisely to draw out the social issues of AI) and lambasts it?

To you, the commenter, yourself: you seem to be exceptionally harsh on her based on the most minimal information, which you’ve repurposed to fit your claim of this being “useless things that we shouldn’t care about”. Do you know how much energy goes into training a model? I certainly didn’t. Do you think that showing that AI is picking up discriminatory language is something worth looking into? I certainly think having someone look into it would be useful, yes. And Google apparently thought so too. Claiming that these things are idiotic and attacking the author of this paper with accusations of bad faith and calls for censure is…very extreme.


> I think it’s one of the bigger reasons why some maintain a generally negative view of Hacker News.

Twitter already exists, we don’t need another place online to spend all day spreading false positivity.


It was my impression that Twitter was mostly about spreading false negativity. Anyway, I would prefer we discuss the content of the paper and what it implies for large models.


The issue isn’t false positivity. It’s clear argument, politeness, precision and a willingness to assume the best of your interlocutor.


I’m not asking you to spread false positivity, I’m asking you to not read quotes from a rereading of a paper by an author who is not the person we’re discussing and then use it to complain that a researcher is useless and that her entire field is useless.

A comment that started off with “it seems like Timnit Gebru may have overestimated the impact…” is not false positivity.


Think of the blowback if the paper was published!


> Can you imagine being Timnit and stopping by this thread, where the top comment, in colorful language, reduces you to a social activist fraud?

If it is accurate what does it matter?


[flagged]


If you’re referring to her spat on Twitter with Yann LeCun, I am aware of it although not particularly well versed in its details. Certainly not well enough that I could claim it was “toxic”, although maybe that would be an accurate classification of her behavior. Regardless, her toxic behavior elsewhere is not an excuse for toxic behavior here.


If her behavior is "toxic", who has it harmed?


Have you read her exchange with LeCun and the impact it had? Edit - Also, if her behavior online is like that, imagine what it's like inside Google? Have you thought about that? Clearly her email to the Brain group was an issue here; I wonder how it compared to her exchange with LeCun?

One final edit: even her co-authors were not ready to publish the paper (per the article), so why are censorship, suppression, etc. even being discussed? This whole thing has been blown out of proportion and Google likely did the right thing, purely based on her behavior.


You mean when Lecun left Twitter explicitly saying people should stop critiquing Gebru?

https://twitter.com/ylecun/status/1277372578231996424

saying:

"Following my posts of the last week, I'd like to ask everyone to please stop attacking each other via Twitter or other means. In particular, I'd like everyone to please stop attacking @timnitGebru and everyone who has been critical of my posts."


Was I pointing out what LeCun said or her specific way of handling that interaction? It's the latter that's toxic. If people were attacking Timnit for that interaction, why is that? Clearly she did not handle it very well.


I think if she is to be judged on whether she was being toxic to Lecun or not, Lecun's opinion is relevant. Why do you feel it's not? And if his opinion isn't relevant, why should we care about yours?

I also find your question about why people attacked her, as if people on the internet always have good reasons to attack others, to be an incredibly silly appeal to the bandwagon fallacy.


LeCun is way more of a professional than she is. He handled it the way it should be handled; do you think LeCun, as a respected individual in the industry, is going to come out and say anything other than what he did?

Now go and look at how she handled it and how she argued with LeCun. Then she also threatens people if she does not get her way. There is ample evidence for her behavior. Does she behave like this all the time? Probably not, of course. But the fact that she is openly behaving like this is a clear data point. If you cannot see that this behavior is unsuitable in a work environment, then let's hope we never work together.


They were continuing to work on the paper for publication, which, if you've never written an academic paper, is a lengthy process. There's a heck of a difference between not being done and not wanting a rough draft circulating vs being asked to withdraw the paper entirely.


Given that she seems relatively popular within Brain - more importantly with her collaborators and direct manager - why should we assume she fostered a toxic work environment with those around her? Rather, it appears the people who wanted her out were leadership and a set of anonymous PR/IP reviewers she does not know.


Her behavior is clear evidence of how she interacts with people. You can be a brilliant jerk and still get along with some people. In a team environment, however, there is no place for this. Her exchange with LeCun, her threatening words, and the way she puts forward her arguments all point to it. Even if she was in the right here, she could easily have dealt with it without doing what she did.

So yes, there is more evidence for her toxic behavior than there is for her ability to deal with disagreements professionally and work in a team environment.

TBH, I think this whole review thing is just a pretext for Google letting her go. They were probably looking for a reason, and her threatening (aka toxic) approach was just what Google needed.


Again, I repeat: if her team had issues with her, her direct manager would know, and her team wouldn't be heaping constant praise on her.

The people who removed her are executives several skips above her. That is unusual.

You keep bringing up LeCun, but LeCun did not get harmed as you imagine - he only conceded to her point because she was frankly the expert in the topic of discussion and he finally acknowledged that.


Can you link to sources for your conclusions?

Because that's not how I have seen it. She threatened to leave. If anyone on my team had an issue, and their response was 'Do X, Y, Z or I'll leave', even if they were in the right, I would not align with them, purely because that is not the right behavior. You can still leave any company without threats.

Also we have not seen the email to the brain group. I wonder how that compared to her exchange with Lecun and her threatening nature.

LeCun was not harmed? How did she contribute to a healthy debate there? Please also link me, because all I saw was an aggressive tone towards someone she disagreed with (even if she was in the right). Again, her behavior (not whether she was right) is the problem here. You can be right, but still be an asshole.

Also, how do you know some people on her team didn't have problems with her? Do you have her perf reports? I believe her direct manager is not the only person who can fire her. She can be on good terms with her direct manager and still behave unacceptably towards higher-ups.


1. https://twitter.com/timnitGebru/status/1334877710477385728

2. How was LeCun harmed? As far as I'm concerned, he accepted the feedback. And just so we're clear on what happened then - Gebru was only one of many with qualifications who were aggressively critical because of his abdication of research responsibilities. I don't know why you're so focused on Gebru in particular.

3. Yes, if this was about performance, her direct manager who oversees 300 people would know.

4. So yes, glad to see we agree that higher-ups, who she likely has too little interaction with to supposedly be exhausted by her behavior, stepped in to get rid of her.


1. Again, managers are not the only people who can fire someone, nor do they have to wait for someone’s direct manager to do it. If someone’s behavior is not acceptable, any authorized person in the company can fire them. Your link is irrelevant to the facts.

2. Do you really think LeCun would have left Twitter if he had found that conversation enjoyable? Do you discount the many other people who found her interactions with LeCun horrible? Did any of the other people behave as arrogantly towards LeCun as she did? Again, no point made.

3. See 1

4. See 1

So, you can either stop ignoring the fact that her own behavior led to this, or you can continue to think that many other respected people in the industry, who act far more professionally than she does, are just against her because they don’t support her views or think Google is racist.


That’s not the point. Being removed by executives you rarely interact with because they want you to retract a paper is extremely suspicious. And the community has responded with disappointment.

You are the only person postulating the intention of her firing. I am merely calling out your assumptions because the consensus from the entire community involved has declared otherwise.

Most other people who found her refutations horrible are outsiders who don’t fully grasp the topic LeCun and Gebru and everyone else was alluding to.

Who are the respected people in the industry who are against her? Not even LeCun, who has said he thinks her work is important. There are no facts in your post at all.


Again, you are assuming that this incident was the only thing that led to her firing. If she behaves like this in public, how does she behave and interact with individuals inside the company? It doesn't matter whether she interacts with senior management. If I saw someone I never interacted with harass someone, as a manager, should I just let her direct manager deal with it? Absolutely not. HR would fire that person if it was deemed against expected management behaviour. Google even explicitly stated that she was let go over behavior inconsistent with what is expected of a manager.

You don't have to grasp a topic to know whether someone is debating respectfully. I don't have to be a doctor to know that a person is an asshole if they act like an asshole towards me.

I think let's just agree to disagree. Focus on the behavior, not what she is writing about in her papers. And imagine someone acted like that towards you in a workplace. Imagine you are a CEO or a manager and someone comes to you and says 'Do x, y, or z, otherwise I'm leaving'. I'd be like, 'wtf, sure, if that's how you deal with disagreements, cya'. Everyone is replaceable, especially toxic people.


I am only working with what has been corroborated by Google, her teammates, and Dean. There has been no indication whatsoever, thus far, by any account - anonymous or not - of her interactions OR behaviors being problematic in any sense with any single individual. Now, you have gone as far as making up imaginary hypotheticals and blatant accusations of harassment - accusations that not a single person has made. You are not a person who argues facts - only personal biases and feelings about how you perceive a Twitter conversation went down - one that ended peacefully.

How can you objectively comment on her behavior if you have never interacted with the person ever?

This has been discussed in prior threads. A manager is well within his rights to act upon his decisions emotionally. Employees are well within their rights to express concern. If I worked in a workplace with severe safety issues, I would most definitely issue an ultimatum for my personal occupational security. Managers DO NOT impulsively fire individuals unless they are mentally unable to navigate through a conflict. It's far more nuanced than that.


Based on previous discussions, she bullied Facebook's AI chief, Yann LeCun, who is apparently a respected expert in the field, off of Twitter in an attempted cancellation.


Lecun's last post on twitter was telling people not to attack her, so I'm not sure that characterization holds.


LeCun is far more of an adult and a professional than she is. I don't expect any other response from LeCun. Now, let's turn the tables: have you ever seen a response like that from the person in question here?


>-- Has she calculated the oposite alternatives? People driving to libraries, to search for something?

Driving to the library, that is the alternative to training these models, really?

> -- Ok, basically she is pissed as AI is picking normal language, that people use, and not using the vocabulary that is in vogue on certain political circles. Basically censure, and forced speech.

If you had a model trained on a large corpus of data from the pre-Civil War southern American states, it would have been deeply racist, and would even view black people as possible property. If you had one that was trained on data from the 1950s, it would be less racist but still problematic when viewed by people today. Is there really something special about today that removes these kinds of concerns for a model trained on current data?

> I think social engineering b.s. should be kept as far away as possible from science. This is turning true ai research into a masquerade to push certain political agendas.

It seems to me that it would be impossible to do any social science research, and specifically any research on racism, with this kind of attitude.

Some of the concerns brought up in the paper seem less consequential than others. The pollution one in particular seems weak to me; that doesn't make it false, and fair enough that it was brought up. I find the racism issue a lot more problematic, and it is something I've run up against working on deep learning solutions myself. Even if a model worked fine for my group and most of our customers, that is definitely not fun, and it is something practitioners should consider when building these things.


> If you had a model trained on a large corpus of data from the pre-Civil War southern American states, it would have been deeply racist, and would even view black people as possible property. If you had one that was trained on data from the 1950s, it would be less racist but still problematic when viewed by people today. Is there really something special about today that removes these kinds of concerns for a model trained on current data?

I think this argument applies not just to machine learning, but to learning in general. Any kind of knowledge-acquisition process is going to be biased by the environment in which it occurs. That goes not just for digital neural networks, but also those in our human brains, operating on the same racist data the ML models are. If that means we shouldn’t do machine learning, it also means we shouldn’t do human learning either.

Of course, the preceding is absurd. A more reasonable take is that we should adjust the objective function of our learning processes to try to account for the effects of biases. We try to do that subjectively as any decent person operating in a biased society should, but our ML models can do it more accurately. In fact, I’d argue that such techniques are necessary to more carefully analyze and build evidence describing the effects of those biases. They can provide insights that will even improve our ability to correct for biases in the real world.
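
To make "adjust the objective function" a bit more concrete, here is a minimal toy sketch of one way that can be done: a plain logistic regression trained with an extra penalty on the gap between group-average scores (a crude demographic-parity style term). The data, the groups and the choice of penalty are all made up for illustration; this is not anyone's actual method, least of all the paper's.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train(X, y, group, lam=0.0, lr=0.1, steps=2000):
        """Minimize cross-entropy + lam * |gap between group-average scores|."""
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = sigmoid(X @ w)
            grad = X.T @ (p - y) / len(y)          # cross-entropy gradient
            s = p * (1 - p)                        # d sigmoid / d z
            g0, g1 = group == 0, group == 1
            gap = p[g1].mean() - p[g0].mean()
            d_gap = ((X[g1] * s[g1][:, None]).mean(axis=0)
                     - (X[g0] * s[g0][:, None]).mean(axis=0))
            grad += lam * np.sign(gap) * d_gap     # subgradient of |gap|
            w -= lr * grad
        return w

    # toy data: feature 1 is correlated with group membership and with the label,
    # so an unpenalized model scores the two groups very differently
    n = 1000
    group = rng.integers(0, 2, n)
    X = np.c_[rng.normal(size=n), group + rng.normal(scale=0.5, size=n)]
    y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

    for lam in (0.0, 5.0):
        p = sigmoid(X @ train(X, y, group, lam=lam))
        print(f"lam={lam}: group score gap = {p[group == 1].mean() - p[group == 0].mean():+.3f}")

With lam=0 the model happily uses the group-correlated feature and scores the two groups very differently; raising lam trades some raw accuracy for a smaller gap, which is exactly the kind of knob the comment above is talking about.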


> It seems to me that it would be impossible to do any social science research, and specifically any research on racism, with this kind of attitude.

I think this sort of veiled personal attack resorting to baseless extrapolation is not a productive line of public discourse.

You should take a step back and think about a) the field of study, b) how the main argument completely ignores the basis of said technical field and what it studies, c) how the argument being made rests on the idea that a self-anointed elite should have the right to manipulate the public to force it to fall in line with its goals and desires.


> I think this sort of veiled personal attack resorting to baseless extrapolation is not a productive line of public discourse.

I thought that my comment was fitting to the tone of the text I replied to, but fair enough, maybe I shouldn't have included that paragraph.

> You should take a step back and think about a) the field of study, b) how the main argument completely ignores the basis of said technical field and what it studies, c) how the argument being made rests on the idea that a self-anointed elite should have the right to manipulate the public to force it to fall in line with its goals and desires.

It is unclear to me what this means, is it arguing that studying bias in AI and specifically deep learning is not germane? Is it arguing that it is not acceptable to institute policies and actions to create a level playing field for minorities and repressed groups because that would be social engineering, and forcing the common man to do the elite's bidding? If that is what it means, I have to disagree; I find both of those things worthwhile.


> I thought that my comment was fitting to the tone of the text (...)

Yes, that was one of the problems.

> It is unclear to me what this means, is it arguing that studying bias in AI and specifically deep learning is not germane?

Let me make it clear for you so that a) we are able to talk about things objectively, b) your options for continuing to use veiled personal attacks are curtailed.

Either your goal is to model reality and real life, or your goal is to model your idealization of what you feel real life should be.

If you pick option #2 then your model does not reflect real life.

Bias is by definition the way the model returns results that don't match the real world and real life, in frequency and in proportion.

If your goal is to use your model to manipulate and control society based on your own personal criteria, by manipulating it to return results that distort the real world and real life, then call it something else, because bias is not it.


I'm not OP, but what is "real life"? In cases where some communities/countries have (to quote from the article) a "smaller linguistic footprint online", is trying to control for this a form of manipulation of society or a way to fix a bias? Can you actually choose a sampling measure without making some sort of value judgement?


> I'm not OP, but what is "real life"?

In the context of creating models, it's representative data collected from statistically significant observations of a population, for starters.

> In cases where some communities/countries have (to quote from the article) a "smaller linguistic footprint online", is trying to control for this a form of manipulation of society or a way to fix a bias?

In modelling there is no such thing as control. There's the input data and there's the model generated from input data.

If you are looking for a model that is expected to represent a property intrinsic to a specific community then you use data collected from that community to generate that model. That's it.

Models generalize, and reflect the norm. They are like that by design. That's their point.

If your plan is to have a model that does not reflect the input data but instead forces your biases regardless of the input data then your goal is not to model reality but to distort it to comply with your personal goals.


> Either your goal is to model reality and real life, or your goal is to model your idealization of what you feel real life should be.

> If you pick option #2 then your model does not reflect real life.

These deep learning models built by corporations are not scientific models; they are engineering solutions, built to solve problems. Reflecting the real world is only useful if it furthers what the company wants to solve. If they, for example, remove swear words from their training set, that will make the model a less accurate reflection of the world, but more useful for building solutions - probably a trade-off they would be happy with. We've also seen examples of risk-scoring applications for felons that seem to end up doing racial profiling, because that is what the data seem to indicate makes sense. But that's deeply problematic, runs counter to laws in some places, and seems ethically problematic (https://www.theverge.com/2020/6/24/21301465/ai-machine-learn...).

> Bias is by definition the way the model returns results that don't match the real world and real life, in frequency and in proportion.

Getting a good data set without bias is hard, even if you crawl the whole internet like Google does. Not everything is on the internet, and there are systematic drivers that make some parts of the human condition overrepresented (English, science, the views of the affluent and educated) and some underrepresented (small languages, the discourse of people behind the Chinese Great Firewall, the poor). So just getting a ginormous data set does not fix bias.

> If your goal is to use your model to manipulate and control society based on your own personal criteria, by manipulating it to return results that distort the real world and real life, then call it something else, because bias is not it.

Positive bias is absolutely something that we use, and while it might seem sinister, it does not have to be. The example I'm most familiar with is facial recognition technology. Most groups building that end up with a model that is better for some groups than others. Asian research groups often end up with models that do well with Asians and worse with whites, while European groups usually end up with the reverse. In some sense these results do reflect the reality of these groups: most people in Europe are white and most people in Asia are Asian, so it is not surprising that the training sets end up like that. But no one is happy with these kinds of results, and everyone wants to fix them.

To bring it back to speech and text models, let's say you are building a customer service solution incorporating a deep learning model. The reality might be that your current customer service representatives treat black customers (or people who use "black" dialects) worse than people who sound white. An accurate model built on this data set will then also do that. But is that acceptable? I hope most companies would want to fix that, and be fine with adding some positive bias to their solution.
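
For what it's worth, a tiny sketch of one common way to add that kind of deliberate counter-bias is to reweight the training examples so an underrepresented group counts as much as an overrepresented one. The numbers and group names below are made up; this is just to illustrate the idea, not anyone's production setup.

    import numpy as np
    from collections import Counter

    def balanced_weights(groups):
        """Weight each example by total / (n_groups * count(group)), so every
        group contributes equally to the loss regardless of how much data it has."""
        counts = Counter(groups)
        total, k = len(groups), len(counts)
        return np.array([total / (k * counts[g]) for g in groups])

    # e.g. a corpus where one dialect dominates the crawl
    groups = ["majority"] * 950 + ["minority"] * 50
    w = balanced_weights(groups)
    print(w[0], w[-1])                    # ~0.53 for majority, 10.0 for minority
    print(w[:950].sum(), w[950:].sum())   # both groups now contribute 500.0 each

These weights would then be passed as per-example weights to whatever loss or sampler you train with.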


> I think this sort of veiled personal attack resorting to baseless extrapolation is not a productive line of public discourse.

As opposed to the more direct “approach” in the parent post?

> Ok, so basically she is a bullshit social engineer, masquerading as an 'AI ethics researcher'


The social engineering and woke culture stuff seems like a moot point honestly.

If the large language model analyzes everything, it will veer towards normal. It doesn’t seem so far fetched to think they can build something which also tracks and understands language trends and slang, and how to appropriately use this “special vocabulary”.

I don’t know much about how these models work, but I would hope something trained on the entire internet is smart enough to know to speak to me in modern English, not Civil War English or Shakespeare English.


Sometimes researchers investigate boring questions like "what is the carbon emissions impact of doing this thing?" Just because you don't think the question is interesting doesn't mean the answers aren't relevant to somebody, or that finding some answers isn't increasing our understanding of the field in some way.

There are all kinds of research papers; they aren't always describing some new clever trick no one thought of before. Sometimes it's just "hey, we had this question we wanted to know the answer to and so we did some investigation and some math and figured it out." As long as they're questions that some other people somewhere are asking, there's no reason why a respectable conference or journal wouldn't accept such a paper. It's up to them and the reviewers to decide whether the content is a good fit for that venue. If it isn't, it'll get rejected; no reason for Google to involve themselves in the decision.

How rigorously the authors treated these questions in the paper is something we don't know without being able to see the paper itself.
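
For a sense of what answering such a question involves, the usual approach is a back-of-the-envelope estimate along the lines popularized by Strubell et al.: hardware power draw times training time times datacenter overhead, converted to CO2 via the grid's carbon intensity. Every number below is a placeholder I made up, not a figure from the paper we can't read.

    def training_emissions_kg(accelerators, kw_each, hours, pue=1.1, grid_kg_per_kwh=0.4):
        """Energy (kWh) = devices * per-device power * hours * datacenter PUE;
        emissions (kg CO2e) = energy * grid carbon intensity."""
        energy_kwh = accelerators * kw_each * hours * pue
        return energy_kwh * grid_kg_per_kwh

    # e.g. 512 hypothetical accelerators at 0.3 kW each, running for two weeks
    print(f"{training_emissions_kg(512, 0.3, 24 * 14):,.0f} kg CO2e")

The interesting part of such a paper is how carefully those inputs are measured, which is exactly what we can't judge without the paper itself.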


Yeah it seems like her paper was light in technical substance and high in social justice activism. Pointing out that google emits carbon is worth consideration in the appropriate venues but it doesn’t really advance the state of the art.


We haven't read her paper, just an article about it. We can't read it, because google won't let us. That you don't see that as a problem is a little concerning.


The article we are using as reference is a summary of her paper and is literally titled “we read [her] paper” from MIT Technology Review.

What do you suggest might the article be getting wrong about her paper? Or are you suggesting that the information contained about the paper in the article is inherently invalid and should be ignored altogether?


Additionally, according to TFA, it's not Google who doesn't want the paper published: "Bender asked us not to publish the paper itself because the authors didn’t want such an early draft circulating online". Though they were happy to present it at a conference just one day later.


I'm saying that saying her paper was "light in technical substance" when we haven't read it is problematic. Any article such as this is going to be light on technical substance in comparison to the paper itself.


The article states her paper focused on the dangers of large language models: “Environmental and financial costs,” “Massive data, inscrutable models,” and “Research opportunity costs.”

The dangers of large language models are an interesting topic, but that’s not AI research and it doesn’t advance the state of the art of AI.

When it comes to technical substance in the field of AI, her paper is indeed lacking in that unless the article completely failed to mention something that would directly qualify as original AI research.


Her job was AI ethics.


That is a good point. It says her field is “AI ethics,” not “AI.” So I have to admit that her paper was in her field.

I also have to say that “AI” is a vastly different area of study than “ethics.” Very strange that Google has these organized in the same hierarchy.


You have to point out the alternative costs though... if non-AI-trained search returned worse results, would that waste more energy?

She didn't... which is the hallmark of bad research.


How do you know?

You're reading a non-technical summary of the paper, and taking it as gospel.

I think I'm going to wait to see the paper before deciding whether or not it's good.


Yes, I agree. A substantive analysis of carbon emissions should look at multiple countries, multiple industries, multiple companies, and all the outputs from those companies to give a good picture of what types of outputs are the most inefficient w.r.t. carbon emissions.


> "One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms."

This was just glossed over and maybe slightly off-topic, but what kind of shifts in language did these two movements in particular effect?

Stuff like #MeToo was about raising awareness of sexual abuse, particularly in the entertainment industry; I can't really think of how it forced people to change the words they use.

The best example I can think of is people opting not to use "master/slave"... which is much more debatable and far removed from "core tenets" of the BLM movement like police reform and dealing with systemic racism.

I guess what I am trying to say is that when it comes to language, things are a lot less black/white (heh), and we should be careful about people who set themselves up as arbiters. Sort of a who-guards-the-guardians issue, I suppose.


Her previous papers look pretty serious to me, along with her resume. I see nothing to suggest she's not qualified.

https://ai.stanford.edu/~tgebru/


What do you mean by pretty serious? I looked at three: two are about better documenting ML models/data, and one is about inferring the political leanings of an area from the types of cars parked in it. None seem particularly groundbreaking to me...


Honestly? I don't have much experience with ML, and in particular with issues of bias in ML (though I have a general understanding of a lot of the former), so I don't really feel qualified to evaluate her more recent work, but her older stuff definitely looks like what I'd expect from a machine learning practitioner, so I don't think attacking her as someone who only understands social sciences (which I perceived the parent as doing) is fair.


Her work on identifying bias in facial recognition has won quite a number of awards.

Ground-breaking might be a strong word but it does move the industry forward.


Do you have a link to that paper? I'd like to read it, though I wasn't thrilled by the way she handled (or didn't) Yann LeCun.


You can see some of the recognition on her Wikipedia page:

https://en.wikipedia.org/wiki/Timnit_Gebru


[flagged]


The paper referenced ("Gender shades: Intersectional accuracy disparities in commercial gender classification") has been cited 1000+ times per her Google Scholar page (https://scholar.google.com/citations?user=lemnAcwAAAAJ). For a 2-year old paper, this is easily a top 1% most cited paper.

Take for example a retrospective look at 2017 NeurIPS papers done in 2019 (https://archive.is/wip/77YrB).

You can disagree with how she and/or Google has handled this whole situation, but please do not denigrate work that has been cited (https://scholar.google.com/scholar?oi=bibs&hl=en&cites=14954...) by papers accepted at the most competitive/prestigious ML conferences.

EDIT: I also do not see how in good faith you can say that VentureBeat, a company who makes the bulk of its revenue from running conferences catering to C-suite execs who can shell out thousands of dollars for a ticket, is "leftist".


Are you really making the argument that people with money can’t be leftists? Do you know liberals are on average richer? https://www.cnbc.com/2019/09/19/economic-divide-in-the-us-is...

Most executives of tech and news companies would bend over backwards just to show how leftist they are in 2020.

(I’m only replying to the part of your comment that answers mine)


"Left" and "right" carry little meaning when it comes to analyzing the divide between people. The "with/without money" bit is of a much higher order than the global "left/right" bit to me.

I don’t really care about the paper author’s fate, but it seems unsettling to me that she is discussed more than the paper itself.


In the United States, liberal and leftist are distinct terms.

Even someone like Ben Shapiro recognizes a difference between liberals and leftists (https://twitter.com/benshapiro/status/966081078166421504).

Silicon Valley types would hardly be described as leftists. Numerous studies have been done on the attitudes of Silicon Valley founders and execs (https://www.vox.com/2015/9/29/9411117/silicon-valley-politic...). The distinctions are dramatic.

We see that, on average, tech founders are less likely than even Democrats generally (not just progressives) to support:

* Banning the Keystone XL pipeline (60% vs 78%)

* The individual healthcare mandate (59% vs 70%)

* Labor unions being good (29% vs 73%)

This is to say, the average Silicon Valley type, particularly the C-suite exec or founder, tends not to be on the left wing of the Democratic party.

During the 2020 Democratic primary, even the Silicon Valley billionaires who are openly Democratic-leaning donated to candidates who were not on the left of the field (i.e., not Elizabeth Warren or Bernie Sanders) (https://www.cnbc.com/2019/08/13/2020-democratic-presidential...):

* Eric Schmidt -> Cory Booker and Joe Biden

* Reed Hastings -> Pete Buttigieg

* Marc Benioff -> Cory Booker, Kamala Harris, and Jay Inslee

* Reid Hoffman -> Cory Booker, Kirsten Gillibrand, Amy Klobuchar

* Jack Dorsey -> Andrew Yang, Tulsi Gabbard

* Ben Silbermann -> Pete Buttigieg

I'm engaging with you in good faith, and because I was intrigued that in a previous comment you mentioned that you live in Spain (though who's to say you're not a US ex-pat). But calling US tech companies "leftist" is a stretch at best.


I see, they are the same thing to me, hence the confusion.


> I see nothing to suggest she's not qualified.

Your argument sounds like a red herring. Qualified for what? I didn't notice the OP making any remark on the author's qualifications.

And by the way, you should think things through before pulling appeals to authority to try to defend the credibility of researchers. See James Watson, a Nobel laureate and a founder of modern genetics, and his comments on race and gender.


> I didn't notice the OP making any remark on the author's qualifications.

Did you not read the first sentence of OP's post?

"Ok, so basically she is a bullshit social engineer, masquerading as an 'AI ethics researcher'"


If you take a step back and read the snippet you cherry-picked, I believe you'll be able to understand that it referred to how the author is trying to pass opinions on social engineering under the guise of research on AI ethics.

That does not mean at all that the author lacks qualifications.


When you accuse someone of pretending to be an "AI ethics researcher" you are stating that they lack the qualifications to hold that title.

Pretty obtuse to suggest otherwise.


[flagged]


Keen to understand what "masquerading as" is doing in that sentence if it's not meant to imply "pretending to be"?


masquerade /ˌmɑːskəˈreɪd,ˌmaskəˈreɪd/

pretend to be someone one is not.


Let's look at the dictionary, shall we?

https://www.merriam-webster.com/dictionary/masquerade

"an action or appearance that is mere disguise or show"

As in social engineering masquerading as AI research?

Honestly, is there any room for doubt at this stage?

I don't expect honest and well-meaning people to continue insisting on this at this point. It feels like these attempts to distort what was said are simply attempts to attack the OP based on misrepresentations of what he did say.


> If you take a step back and read the snippet you cherry-picked

Cherry-picked? It's literally the first sentence, the central thesis of OP's entire "argument". If they can't get that down on paper correctly, they're not worth taking seriously.

> You would do well to work on your literacy and reading comprehension skills.

Ascribing unsubstantiated sentiments to someone's post in lieu of what they actually said doesn't fall under literacy or reading comprehension, sorry. It's actually the polar opposite of that.

Why are you claiming to know OP's intentions (despite their own words) better than everyone else here? Is OP your alt account?

> I don't expect honest and well-meaning people to continue insisting on this at this point. It feels like these attempts to distort what was said are simply attempts to attack the OP based on misrepresentations of what he did say.

It's extremely bizarre that you're questioning the literacy of people when you're projecting your own unsupported sentiment onto OP's post while simultaneously claiming that people who are directly quoting the OP's post verbatim are "misrepresenting" what OP said.

You're being very disingenuous here (as well as borderline manipulative) by trying to shut down any dissent against your flawed position by questioning the good faith of the people doing nothing but directly quoting the OP's words.


>If they can't get that down on paper correctly, they're not worth taking seriously.

Amusingly, the OP got shirty at someone* further down this thread for calling them names, but with an unfortunate textual blunder which exposed the hypocrisy of their own name calling (not to mention total lack of self awareness).

I pointed this out but got downvoted to oblivion of course. Classic goose / gander politics.

* (and rightly so btw, that kind of conduct has no place on HN)


This is a shameful comment to read on this matter regarding a real, well-known, and serious researcher. It’s the worst kind of tech-bro middlebrow dismissal that makes no interesting comment on the content in question beyond some ill-informed shallow analysis, and assumes that years of making half-baked shitty web-apps have somehow gifted you insight into AI ethics that’s enough to decide what research is “real” and what’s not - such that anyone who isn’t aligned with your high-school level of meta-philosophy is best ignored.

Your goal here wasn’t to make any useful point about how it was actually a reasonable decision for her to resign/be fired from Google. It wasn’t to comment on the process or difficulty of having these discussions at scale or in public, or even respond to the specific points raised in the article. These would all be valid discussions to have. It was specifically a personal attack based on a paper you haven’t actually read intended to demonstrate how much better you are than everyone else.

Absolutely done with this shit.


I would love to hear of alternative tech-oriented communities that aren't dominated by the types you describe.


I think you’re making a weaker version of this argument than you need to, because you’re framing it in terms that oppose the politics of half of the public, when you really only need to argue against a much more specific set of ideas.

For example:

> Ok, basically she is pissed as AI is picking normal language, that people use, and not using the vocabulary that is in vogue on certain political circles. Basically censure, and forced speech.

I think a stronger version of this argument, which can appeal more broadly, would be: an automated system that is constantly being trained on data from social media can more quickly pick up new trends in human speech than a manually-programmed expert system. Furthermore, even if humans are constantly updating an expert system, the biases inherent in the composition of the company-hired set of experts will limit the representation of minority trends in the data. On the other hand, a model examining the entire data set can pick these up automatically. For example, I can easily get AI Dungeon (powered by GPT-3) to communicate using language and concepts that are only present in a minority community that I’m part of that I’ve rarely seen represented in literature.

When arguing against Timnit Gebru or their ilk, you don’t need to just give them the support of liberals. We actually make up the majority of techies, so it’s a bad position to put yourself in if you want your cause to succeed.


> Furthermore, even if humans are constantly updating an expert system, the biases inherent in the composition of the company-hired set of experts will limit the representation of minority trends in the data.

Exactly this. Framing it in terms of alternate options makes AI the more reasonable choice.

Google AI appears to have drawn unnecessary attention to themselves over a relatively benign paper. Most of the arguments in this paper have already been discussed elsewhere and could be refuted in simple terms.


That really depends on the context you think about this paper in. If you see it as critical race theory activism (which I do), then it's not benign at all; on the contrary, it's deeply damaging that Google has well-paid people who think they should push their own political agendas through an organization that underlies almost everyone in the West's daily lives.


If you see AI as a technological amplifier, it could backfire the other way around. I.e. that the status quo is kept longer than necessary because it is so baked into the technology itself, not only the people who wield it.

I think it is precisely for these reasons this debate is so important, because it is truly not a clear cut improvement in my opinion.


I sincerely think that the CEOs of large companies are much less powerful than one would naturally think. They have money and power, but much of it is based on a group of people sympathetic to some pretty extreme views approving of them.

We call it “FU money,” but these guys aren’t satisfied with being able to say “FU” and still live comfortably; they want to be in charge of things that are seen as significant.

I believe it will always take more than money to get thousands of largely independent adults to do what you want.


> This is turning true ai research into a masquerade to push certain political agendas.

Spot on. Naive scientism is used as a cloak to advance political agendas, instead of searching for truth:

"When a domain becomes respected source of Truth, there is incentive for hijackers. There is a huge window of opportunity (from decades to centuries) until the public opinion figures it out. Many (not all) still trust Harvard credentials, as Cardinal credentials in the past." [0]

This is the trend for "Believe in Science", a magical catchphrase invoked to put trust in frauds like Steven Pinker and Paul Krugman:

https://books.google.com/ngrams/graph?content=believe+in+sci...

[0] https://twitter.com/_benoux_/status/1335175823620497413


Calling Steven Pinker a fraud is way over the top. He’s published a large number of extremely serious papers, including the paper introducing the ngrams dataset you just linked to.


He got challenged on his statistics [0], and didn't engage, correct or retract. This is antithetical to scholarly behaviour. Pinker is an ideologue and absolutely "the Mozart of bullshit vendors".

[0] https://www.academia.edu/26772813/The_Decline_of_Violent_Con...


This is a co-written paper. It's interesting you focus on her, instead of suggesting that all of the 8 authors are social engineers, pissed etc.


Maybe because the whole issue is about her?


Do you think the employer wasn't aware they were hiring an activist?


I feel Google was being open-minded, but the other party was not, and was actually being malicious...

Basically: instead of reform, and fixing what's broken, the aim is to tear it down, as it is just another means of oppression... (yes, that's the mindset of the movement).

Her research looks to be mean-spirited, and made to discredit Google, so it can be blackmailed into whatever the ideology wants.

It is not meant to fix/improve the situation, but to be used as political ammo. Hence it is just a political paper/propaganda, and not proper AI research.


Hiring an activist and then firing them before a paper is published can also be viewed as expecting them to be your activist. Maybe Google wanted to be part of the anti-establishment movement as long as its part focused on competitors.

I find the sequence of decisions made by Google problematic.

Until they describe in intimate detail what their management has been planning to filter, they are effectively marking all research from their institution as carrying this unknown bias. Like a tobacco lab that blocks all results against their products. That this paper was shoddy means nothing unless someone would have caught a Google-positive piece as shoddy and actually stopped it at this stage of the process. I would guess not.

The right thing to do was let it be published, get external critique and end the relationship if most people with no money in the game thought it was bad science.


Was this outcome that difficult to predict based on the hiring interviews, and based on prior articles and public statements? In other words, would this come as a surprise to a half decent hiring manager and hiring committee?


Being an activist isn't the problem; her toxic behavior is. I'm not sure Google was aware of her toxic behavior and the problems it would cause.


Has she calculated the oposite alternatives?

They didn't say they shouldn't make them, they said "prioritize energy efficiency".

normal language, that people use

As they said, language changes. Do we want a model that acts like the average person of the last 20 years? Or do we want a model that acts like a person who has just been through 2020?


> Do we want a model that acts like the average person of the last 20 years? Or do we want a model that acts like a person who has just been through 2020?

You're free to use the model you believe is best for your goals.

If you use real-world data from the last 20 years to generate the model, then your model will reflect an amalgamation of the real-world data collected in the last 20 years.

If you prefer to focus on last month then go ahead. You'll have less real world data and seasonal patterns will have a deeper impact and the model will be less robust, but go right ahead.

If, instead, you want a model that is not generated from real world data and instead just uses your own artificial data that is blatantly and openly manipulated to reflect your personal biases and political preferences then just call the spade a spade.


While I understand what you are trying to say (the manuscript in question exhibits low empiricism), I don't think you fully appreciate the different kinds of research that exist (you don't have to agree with it, but it would be more helpful to take it appropriately, according to how serious the study is).

Please take a look at (Wieringa, R., Maiden, N., Mead, N., & Rolland, C. (2005). Requirements engineering paper classification and evaluation criteria: A proposal and a discussion. Requirements Engineering, 11(1), 102–107) available here: http://www.cse.chalmers.se/~feldt/advice/wieringa_2006_re_pa... Specifically read the section 3 (which is on p. 4). There you will find that research in software engineering can be broadly grouped into the following categories: Evaluation research, Proposal of solution, Validation research, Philosophical papers, Opinion papers, Personal experience papers.

While I agree that the usual way to deal with the criticism of the work is to redo the work with better evaluation (and thus, more strongly supported arguments), I think that in any ethics research you'd need more than just publications that fall under Evaluation research (which is what you are likely referring to as "true AI research").


That's a textbook example of an ad hominem attack, masquerading as a critique of the paper.


Google’s algorithms will social engineer whether Google likes it or not. They might as well be intentional about it.


> Has she calculated the oposite alternatives? People driving to libraries, to search for something?

What has this got to do with anything? How is driving to a library the "oposite" of AI?

We had very effective search engines for decades before AI.


People probably thought they had very efficient libraries for decades before search engines.


I concur. IMO her works are more opinion pieces than science.


[flagged]


hmm... not sure what you are talking about, but you are an anonymous person, with no history... resorting to bullying and name calling... not sure how you are contributing to the conversation.


Just flag these comments and move on.


> Ok, basically she is pissed as AI is picking normal language, that people use, and not using the vocabulary that is in vogue on certain political circles. Basically censure, and forced speech.

The thing is: The language that people use is often enough offensive: racial slurs, sexism and other forms of discrimination, plus seemingly innocent "code words" like "globalist".

AI developers should know about these problems and keep them in mind while developing training data for AI models. The consequence of ignoring such bias is continued discrimination - the most obvious and low-tech reminder being the "racist soap dispenser" which had only been trained on white people's skin and failed to dispense soap for people of color.

The danger in such discriminatory models is that while it's an annoyance when a soap dispenser does not dispense soap for a person of color, with algorithms running more and more of our daily lives the impact is far higher: whether it's being refused credit or affordable insurance because an AI "thinks" that the applicant being black and living in a poor, black neighborhood is grounds enough for that, or "predictive policing" AIs sending massive police responses to everyday calls, this directly hits people's survival, without human oversight.

That is why ethics in AI are so important and why "social justice activism" is so desperately needed!


I think your “social justice” cause(s) would be much better served if you didn’t make up problems where there are none (what’s the issue with “globalist”??) and didn’t project ill intent where there is none (the soap dispenser wasn’t “racist”, it was just “bad”).

It’s just basic intellectual honesty, sorely missing in the SJW circles.


The soap dispenser clearly didn’t have racist thoughts. But it effectively worked worse for black people, so it has racially discriminatory effects. I don’t think there’s much to argue about here.

Also, I believe it’s best not to accuse your interlocutor of lacking intellectual honesty unless the evidence is very clear. Perhaps they just see the world differently to you.


"Algorithm is biased => we should fix it" is reasonable, but it is also a misrepresentation of the argument this whole thread is about, which is more along the lines of "algorithm is biased => researchers are racist => the company that employs them is racist => the whole country is racist", which definitely is driven more by ideology than reason.

I’m happy about people viewing the world differently and having an honest, good-faith discussion about the issues. But I’ll criticise intellectually dishonest arguments, like the motte-and-bailey you just pulled here.


> researchers are racist

I never argued for that. My point is that researchers (like everyone!) have biases, which may in the end manifest as racism or other forms of discrimination. To prevent these biases from manifesting, (AI) ethics experts are needed as oversight.

> the company that employs them is racist

To take the example of the "racist soap dispenser" again: while the individual people who have developed it may not have been racist, they have failed to think outside their biases and ask themselves "can people with non-white skin use the product?". Also, management has failed here, because more diverse staff would have resulted in at least one internal test candidate of color noticing "hey, I can't get soap!".

The result of this lack of diversity and out-of-the-ordinary thinking was that a person of color was rightfully offended at not being dispensed soap. Therefore, the company acted racist even if it never intended to do so.

> the whole country is racist

Just short of half of the country recently elected a President who openly spouted white-supremacist conspiracy theories. For half the people in the US racism is not a dealbreaker for the highest office it has to offer!


> Just short of half of the country recently elected a President who openly spouted white-supremacist conspiracy theories.

He was elected by the votes of ~27% of eligible voters, and about ~20% of the population, not “just short of half the country”.

You could say it was a little less than half of the people who felt it was worthwhile to vote in a system where votes matter so little that getting the most of them doesn't mean you win, but then the story is about alienation from the electoral system more than support for or even indifference to racism.


> Just short of half of the country recently elected a President who openly spouted white-supremacist conspiracy theories. For half the people in the US racism is not a dealbreaker for the highest office it has to offer!

... which is a bit different from saying that half the country is racist. Perhaps they thought the alternative was worse.


> For half the people in the US racism is not a dealbreaker for the highest office it has to offer!

For the other half, sexism isn’t. (Biden selected his vice-president on the basis of sex.)


OK, dude - if you really think I'm arguing in bad faith, I will just leave you to it!


> what’s the issue with “globalist”??

In a certain context it just means Jews. And they didn't write racist soap dispenser, they wrote "racist soap dispenser".

Do look inwards.


> what’s the issue with “globalist”

It's a white-supremacist dog whistle for "Jew": https://www.haaretz.com/us-news/.premium-how-did-the-term-gl...

> the soap dispenser wasn’t “racist”, it was just “bad”

For a person of color who wants to wash their hands and sees that it dispenses soap for white people but not for them, it is yet another piece of everyday racism.


I think that “dog-whistle” is a dog-whistle for wokeist people who prefer to insult others as racists instead of making actual rational arguments.

As for “globalist”, I don’t see an issue with criticising globalism and those who promote it, be they Jews or others. Clearly a lot of Jews (in Israel) are very “localist”, and I suppose there are some that are “globalist” as well. So?

Just because a view is also supported by some or other “vile” people (that are almost always a tiny minority, if disproportionately loud and/or salient), doesn’t make that view invalid or immoral.


Globalist probably can be used as a dog-whistle, but we do need a word for people who support free trade, free migration, borderless finance, etc. That's a quite common position. Indeed, the Haaretz article you link to is considerably more ambiguous than just saying "globalism is a bad word and must be banned".


> The language that people use is often enough offensive

Please define "often". And "offensive" (as in to whom). And of course, you also need to factor in whether being offended is justified.

As an extreme counterexample, a deeply racist person would be offended by a mixed couple. And racists on both sides currently are. I see no reason whatsoever to pander to either side on this, so just there being someone who is or claims to be offended is no guide.

If you find "globalist" offensive, then you are the problem. And the vast majority of Americans find this type of PC culture and social justice activism offensive, including the vast majority of "marginalized" groups.

https://www.theatlantic.com/ideas/archive/2018/10/large-majo...

In fact, the only group marginally in favor is the white, rich and highly educated. Hmm...


[flagged]


If it's OK to assume that Jeff Dean cannot understand certain things because of his ethnicity, is it also OK to assume that people of other ethnicities cannot do or understand certain things? Is this the kind of culture we should be encouraging?


> other ethnicities cannot do or understand certain things?

I don't think that those ethnicities have been in the same position of power as the one represented by Dean, so your point is moot.


> I presume hetero

?

> coming from (I also presume) at least a middle-class family

?

That's quite a lot of presumptions.

I note in passing the asymmetry of the use of homo and hetero – the former is offensive, the latter is informal, interesting no? How'd that come about?

––

[1] https://dictionary.cambridge.org/dictionary/english/homo

[2] https://dictionary.cambridge.org/dictionary/english/hetero


Wow. The pro-google push is really visible in this thread


You can look through my history and realise I've been nothing but critical of Google for their privacy and product lifecycle issues. Not sure I ever weighed in on their anti-union topics, but I would have been against them there too.

The point is, I'm not a Google shill.

Still, in this case, on a factual level, the only real dispute is whether this exchange:

"Do X or I quit" "Ok, your final paycheck is in the mail and IT will be in touch to organise equipment returns, effective now"

Is "accepting a resignation" or "firing". Neither side is disputing that this is how it went down.

On an ethical level, again, I'm no fan of Google, but Timnit Gebru's previous public actions don't paint her in a good light, while Jeff Dean's don't include anything notable enough to sway my opinion one way or another on his ethical trustworthiness.

So based on that, I (and many others) do end up siding with Google. Is that a pro-Google push? Is someone co-ordinating this? If they are, they haven't contacted me. Don't mistake the fact that Google is often unpopular here for the idea that no Google action can be supported here without interference.


Thanks for the clarity.


TBH, I don't think so. I think she is toxic and there is ample evidence for it. It's normal for people to side with Google here, IMO. Would you want to work with a brilliant jerk?


> I understand the concern over Timnit’s resignation from Google - Jeff Dean

That's just outright false. She was terminated, effective immediately.

Does anyone else find themselves losing a lot of respect for Jeff Dean in all this?


> That's just outright false. She was terminated, effective immediately.

Without taking sides (I don't know the full story well enough to tell who is objectively in the wrong here), that's not completely true.

She basically gave an ultimatum - meet my demands or I can work on a last date.[1] It is quite common at big tech companies that once you resign - officially or unofficially - you can be asked to leave effective immediately. It usually happens when you work in critical areas, are moving to a competitor, or are a disgruntled employee.

Could this have been better handled? Maybe, but no matter who was in the right or wrong, she wasn't technically fired.

[1] https://twitter.com/timnitGebru/status/1334343577044979712


Threatening to resign is not resigning. Especially as part of a negotiation, and especially when the threat was to "work on" a last date.

She was fired.


If you threaten to resign, I don't think you should complain if the other side says "ok".


But it's still up to you to resign at that point, or to back down.

They can't accept the threat itself as a resignation. That's not how threats work.


I really don't understand this logic.

Firstly, there is ample evidence of her toxic behaviour online (see the exchange with LeCun on Twitter; absolutely horrible). So it's clear she has issues with how to collaborate and communicate.

She then goes on to threaten the company she works for. Like, wtf? Irrespective of what you want to do in response, act like a professional. If you were a CEO/manager etc. and had someone with toxic behavior come in and threaten you if you didn't comply with their demands, wouldn't you go 'ok, see ya'? I certainly would. Everyone is replaceable, especially if you're toxic when it comes to dealing with situations you don't agree with.


> If you were a CEO/manager etc. and had someone with toxic behavior come in and threaten you if you didn't comply with their demands, wouldn't you go 'ok, see ya'? I certainly would.

Which is fine, but that's firing them for acting up. It doesn't really matter what the demands are.


She threatened resignation in a fairly obnoxious and toxic way. Google called her bluff and accepted.

No, I haven’t lost respect.


Googlers are waaay too comfortable with their message board system, and if they stay there long enough and keep interacting with it, they'll get burned.


Dr Gebru's research AFAICT shines a valuable light on some interesting ethical problems. Yet the response so far from Google and even HN has been one of censorship and suppression of Dr Gebru's free speech. How can we reconcile free speech and Google's right of review to her research?


Not sure the details of her employment arrangement, but usually work you do for your company belongs to the company and you don't have any right to freely publish it. Seems she was able to publish it anyway, so that's pretty good, isn't it? She's got more than the usual helping of free speech. Where's the censorship or suppression?


It was published? Where?


As a white man, when my work is rejected, I assume it's because my work needs improvement. This is an advantage of being a white man. When things don't go my way, I don't need to wonder if it's a macro-aggression, an act of systemic discrimination, or suppression of a marginalized voice. I don't envy my colleagues who navigate these waters.

Furthermore, even if my manager or employer makes it an explicit priority to promote more women or PoC, I don't question whether my own gender or skin color will ultimately work against me at promotion time. I grew up in a time and environment where I didn't need to question that, so I tend not to think about it. Even if I did think about it, it's a culturally inappropriate question to ask, so my only option is to keep my head down and work harder.


As a minority who grew up in the developing world with actual problems, I never once even heard of these "waters" until immigrating to the most wealthy and powerful country in the world.

Such is the product of a country which hasn't had any real problems in half a century.



