
Agreed; but this isn't just a Google problem. Seems to me like a lot of SF (and SF-inspired) "big tech" wants to be known for their "wokeness"[1], which leads to hires like Timnit and other "politically-outspoken" people, which in turn leads to situations like this, James Damore, and other individuals/situations that amount to workplace political activism.

I am wholly uncomfortable with any discussion of politics in a workplace environment like a mailing list. I am more than fine with employees choosing to associate politically outside the workplace and workplace spaces, no matter how radical I find their views. This is in itself political - it supports the status quo - but anything else is inviting dissent and combativeness, and situations like these will keep happening.

That said, I've not seen the email outlining her conditions "or else", but I feel I'd have very much taken the same stance as you, given the surrounding coverage of what was in that email. Ultimatums to your employer don't often go well. And perhaps this is a good thing for her, because she may leave for a place that better suits her.

-

[1] my derisive use of this term is not aimed at actual efforts at inclusiveness (those are good) but at surface-level attempts that end up feeling performative, at best.




The problem here is that AI and race now intersect in non-trivial ways. It’s like treating privacy as political discourse when you are scanning emails for marketing purposes. There’s a line past which not having the discussion is itself a political stance.


This. And since certain companies/industries pose a "systemic risk," failing to self-regulate now might invite external regulation later.

For example, legislation on AI, predictive models, and facial recognition in policing.

Who knows, we could see limits on the use of imperfect models in other areas.

After all, there is no "market solution" for ethics.

https://mathbabe.org/2013/11/12/there-is-no-market-solution-...


This.

It was literally her job to conduct such research. It's not politics, it's ethics (tethics).


Ethics is not apolitical, though. Everything is political. Claiming that ethical concerns should affect business decisions is certainly political. We can wish to “not discuss politics in the workplace”, but then it isn’t really possible to have any discussion at all.


I think you are confusing her previous work (which was on race) with the work this article is discussing.

The research paper in question was about AI and natural language processing and its carbon footprint and all that. Nothing to do with race or a political agenda.

Google has also done a ton of work on running its datacenters on renewable energy, so I think they are definitely on board with making tech have a smaller carbon footprint.
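For a sense of scale on the footprint question: the usual back-of-envelope accounting multiplies hardware power by training time and datacenter overhead, then by the grid's carbon intensity. A minimal sketch, where every constant is an illustrative assumption rather than a figure from the paper:

    # Rough CO2 estimate for training one large language model.
    # Every constant here is an illustrative assumption.
    gpu_count = 64            # accelerators used for training
    gpu_power_kw = 0.3        # average draw per accelerator, in kW
    training_hours = 24 * 30  # roughly one month of training
    pue = 1.1                 # datacenter power usage effectiveness
    kg_co2_per_kwh = 0.4      # assumed grid carbon intensity

    energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
    co2_tonnes = energy_kwh * kg_co2_per_kwh / 1000
    print(f"{energy_kwh:,.0f} kWh, ~{co2_tonnes:.1f} t CO2")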


This is not true. She does discuss how some countries and regions of the world use the Internet less, or not at all, and thus Transformer/BERT models are trained on the linguistic habits of the most affluent parts of the world.


It was still an extremely scattergun approach. I'm moderately familiar with NLP, but nothing in that paper really had traction overall; it was more a list of mildly bad things than anything huge.


This is true. And the research raised an important point which was worth discussing. But it seems, based on the totality of the information, that Google received an ultimatum demanding that it take particular action with respect to the issue. That’s going beyond an ethics researcher’s role.


The ultimatum was about the review process and not the paper per se. I do think the ultimatum was a bad idea, but I don’t think I’d fire an employee over one. I’d just have told them no on their demand.

That said, a blind internal review process that apparently blocked a paper for the first time in Google’s history probably deserves more explanation than simply “we will take your resignation”.


They didn't fire her. She said "I will quit if you don't meet these demands" and they were unwilling to meet the demands.

If she didn't threaten to quit, I can't imagine they would've done anything other than tell her no. But she gave them an ultimatum and they chose which side they would be on.


Google didn't hire Gebru for her "wokeness" or for whatever petty internal political issues drive discussions here on HN. They hired her because they are a massive AI company operating on a multi-billion-user scale that will very likely produce massive impacts on our society, impacts that we can't even imagine today. They understand at an intuitive level that what they're doing is risky and (perhaps at minimum) they understand there's a risk that it will provoke a backlash. To insulate themselves, they wanted a credible AI ethics researcher, even at the cost of her being potentially a bit difficult to manage.

Now, I don't fully understand what happened in this incident, but I think there's a chance that it's far more impactful than some internal workplace spat. To put it succinctly: businesses like Google are playing cards for thousand-dollar bills, and HN is interested in debating their favorite game of nickel poker.


I really like this train of thought. I want to push on the idea of an AI-focused corporation “insulating” itself from backlash. I think what’s happening here is an example of the human agents inside of a corporation acting human, counter to the interest of the organization within which they find themselves. This particular spat might not necessarily be the tipping point, but it looks like history is starting to overtly dance with the impacts AI will have, as you stated.

The current imperatives of a profit-focused corporation are not aligned with human interests. This has been clear since the time of fascist Germany in the 1940s. There are examples of corporations aligning themselves with that government solely because of the stability that a strong government structure provided. In hindsight, it is clear that corporations provided Hitler et al. with the economic powerhouses that enabled him and other sympathetic leaders to construct frameworks that were not in accordance with modern ideas of human rights. Especially under the imperative of shareholder primacy in the US, that motivation is stronger than ever. In China, the imperative is strengthening government power.

A development I’m personally watching is how AI will fit into that tension between human-interests and corporate interests. Alphabet’s leaders seem to be attempting to steer that corporation closer in alignment with human interests, although the profit motive is an unshakeable goal. Should the US government take a more active role in the development and control of AI, or should we mainly allow the market to pursue AI within the framework of profit motive?

Offhandedly, I am wary of allowing unrestrained pursuit of AI development, and so departments like the one Gebru was leading take on a historically vital role. I also feel a tension between that wariness and my awe, as a layman, at the possibilities that AI might unlock for humanity. That leaves me with the question: who should be in charge of artificial intelligence? From our perspective in 2020, I know Bostrom’s and Musk’s concerns about the future of AI seem far-fetched, but if those negative outcomes have even a small probability of coming true, we should spend time considering them.


>Seems to me like a lot of SF (and SF-inspired) "big tech" wants to be known for their "wokeness"

It's just a fashion. In 10-20 years it will be as dead as being a hippie was in 1985.


It's more than politics. It's more about moralizing things, as in Animal Farm's "Humans are bad, animals are good". If it is politics, it is the worst kind of politics, as history in Cambodia/China/Europe has repeatedly shown.


The performative sense could have to do with a lack of personal connection to it. It’s frustration with something not wholly understood, especially when those deemed “performative” express strong conviction. I’m trying to understand how one could imagine someone would risk imprisonment, bodily injury, or death just for performance.


Exaggerating risk of “imprisonment, bodily injury, or death” is a hallmark of the “woke”.



