Hacker News
Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases (theguardian.com)
29 points by biorach 12 months ago | 70 comments



> “The thing that was very confusing to me as an immigrant was that liberal type of racism,” she says. “People who sound like they really care about you, but they’d be like: ‘Don’t you think it’s going to be hard for you?’

This is why I object to teaching kids about racism in schools. Maybe the teacher was racist. Or maybe she got good grades—which isn’t hard to do in American high schools—but her teachers didn’t think she was one of the exceptional kids who could go on to excel in the field as a career. If you aren’t socialized to think of yourself as the target of racism, the natural response is to probe further. “But my grades are good!” The explanation that comes next might lead to learning and growth, or at least renewed motivation (“I’ll show him!”). But if a kid has been socialized to perceive such interactions in terms of racism, they can’t learn from such conversations. After all, it’s pointless to argue with a racist, and you can’t learn anything valuable about yourself from one.

I moved to Virginia from Bangladesh back in the 1980s, to a school and neighborhood that was nearly 100% white. At the time, the part of northern Virginia where we lived was solidly Republican. But my parents never taught me about “racism,” and it was a tremendous blessing. I never wondered whether the ordinary negative interactions of life were racially motivated. Was that kid being a bully because I was brown, or because I was a bookworm? What good would it have done me to recognize it anyway?

I’m extremely worried about how the discourse is going to affect my kids. It’s impossible to protect your kids these days from folks like Gebru.


> ‘Don’t you think it’s going to be hard for you?’

I found this curious when reading the article and expected more context as to why this was racist. Why did Gebru consider it to be due to her race or immigrant status? I got this comment throughout my whole life too, even though I don’t share Gebru’s background or demographic factors. Were my teachers racist? Or were they concerned? Or was the subject just really hard? Or were they classist against my poverty?

It seems more info is needed and attributing this to racism (or sexism or xenophobia) seems like an unsupported conclusion. Although it seems Gebru felt it was racist and significant enough to mention in the background for this article.

I think this is what makes measuring microaggressions so difficult. It’s rare to confirm the intent of the person who did something that is perceived as a microaggression, but the recipient feels and infers an intent regardless.

I don’t know how to fix this but I want to.

I remember once remarking to a coworker that they had a nice car. It was a new 7-series and looked really nice. I thought nothing of it as it was just small talk and I frequently make such small comments to note something positive.

A few months later, after the George Floyd murder, we had a large Zoom call where people were recounting persistent and systematic racism. My coworker shared an example of what he called everyday racism: someone had said he had a nice car, and he assumed it was because he was black and supposedly couldn’t afford such a nice car. He seemed really hurt, and spoke for 5 minutes about it and about how he worked really hard to afford the car and how his spouse had a high income as well. He didn’t name me as the person who made the comment, but it sounded like my small comment.

I was so confused as to how all that could be inferred from “nice car.” I don’t know him well enough to ask about it and clear it up. I felt bad he was hurt, even though I just intended it as a small comment recognizing a nice car from a coworker and would have made the comment regardless of his race. So I don’t think I did anything wrong but still feel bad. And there’s no way I could predict such a response.

I don’t compliment coworkers’ cars any longer.


> is perceived as a microaggression but the recipient feels and infers an intent.

> I don't compliment coworkers' cars any longer.

I'm starting to think these "micro aggressions" are just a way to keep a culture war going, because some groups need a cause to fight for (for political reasons).

It reminds me of the whole master vs main debate.

A (black) engineer colleague of mine told me about his team's effort to change master to main. The whole initiative was started by a rainbow-haired (white) PM and, since it was what they believed to be a highly visible and easy fix, grew to a team of 5. All non-technical PMs, of course. They ended up producing a "manifesto of inclusive software" where they listed every word they considered offensive and what it should be replaced with, and made a very public announcement regarding the change.

The only response to their email was my (black) colleague asking if the branch renaming could be postponed until after a release, because he didn't know what it might break in the build and release automation if "master" was hard-coded somewhere.

This apparently started a lengthy thread between him and the 5 PMs where they explained to him that the reason he wasn't supportive of the change was because of the "systemic and cultural racism" he apparently internalized.


> This apparently started a lengthy thread between him and the 5 PMs where they explained to him that the reason he wasn't supportive of the change was because of the "systemic and cultural racism" he apparently internalized.

Precious.


[flagged]


And yet non-white people do agree on things like white privilege and microaggressions. Non-white people overwhelmingly support politics that reflect a view that white people are racist and privileged. It's seen in things like this poll: https://www.pewresearch.org/politics/2019/12/17/views-on-rac....

If the majority of non-white people[1] support those politics, how is it a conspiracy by liberal whites? It sounds like it's affirmatively popular to reengineer society.

[1]: https://www.usnews.com/news/health-news/articles/2022-11-16/...


The poll you link addresses people “who say white people benefit from advantages in society that black people do not have.” That’s self-evidently true: for example, white people have more wealth than black people. They are much more likely to have college degrees. Etc. You could also say “WASPs benefit from advantages in society that Appalachians don’t have” and it would be equally true.

But a majority of non-whites also oppose racial preferences for college: https://www.pewresearch.org/short-reads/2022/04/26/u-s-publi..., and jobs: https://www.pewresearch.org/social-trends/wp-content/uploads.... That includes a majority of black people in both cases. The majority of black and Latino people also don’t find common “micro aggressions” offensive. https://contexts.org/blog/who-gets-to-define-whats-racist

Racial preferences and quotas, along with language policing, are the most prominent of the concrete policies championed by white liberals who use phrases like “white privilege,” and most minorities support neither.


The poll I linked shows a difference between white, Hispanic, and Black perceptions of whether or not whites benefit from advantages in society. I don't see any definition of what a societal advantage is; it may be income. My point was more that races don't agree on whether whites have societal advantages, which is very serious if policies are based on that premise. Things like university admissions, scholarships, and job applications do not favor whites, so this point is debatable, I think. I would argue that being non-white is now a societal advantage and being white a disadvantage. Anti-white racism is publicly acceptable.

Non-white people may not care about "microaggressions," but for racial quotas at schools I think it is much less clear[1]; polls seem to go back and forth. Polls aside, my point is that even if what you are saying is true, non-white people overwhelmingly support the party that discusses microaggressions and quotas. Why should the polls be trusted? I believe there are certain liberal white elites who benefit from this, but it's surely not just them, and surely non-white people are aware of what they are supporting. Another interpretation is simply that there is indifference to radicalism, or even varying degrees of support for it.

[1]: https://www.nbcnews.com/meet-the-press/meetthepressblog/poll...


> The poll I linked shows a difference in thought among white vs Hispanic vs Black perceptions of whether or not whites benefit from advantages in society. I don't see any definition of what a societal advantage is

In the absence of a definition, people’s reading of the question may reflect political polarization, but not necessarily about how a multiethnic society should work. For example, I think the constitutional design of a limited, federalized government was killed a century ago by non-Anglo immigrants. Certainly, black people generally believe in a robust federal government that can protect them from racism, and both they and Hispanics support big government with substantial investments in education and healthcare that can help reduce disparities. But that doesn’t mean they necessarily support the social and cultural rituals around race advanced by white liberals, or the more extreme policies such as racial preferences.

For that reason, it’s important to distinguish between “affirmative action” and “racial preferences.” Originally, “affirmative action” meant taking affirmative steps to end explicit racial discrimination that was known to be happening. Today, it can encompass things like taking affirmative steps to ensure that universities and companies are marketing openings to qualified applicants of various races. Obviously that is a policy with pretty broad support.

But poll after poll shows that minorities oppose using race as a factor in the actual admissions and hiring decisions. In California, Prop 16 was defeated in every single majority Hispanic county: https://thehill.com/opinion/education/526642-hispanics-shock.... Racial preferences, and similar policies such as “diversity requirements” (i.e. quotas) are a core approach for white liberals that most minorities reject.


> by non-Anglo immigrants

I believe these now make up the majority of the elite white liberals.

> and both they and Hispanics support big government

see https://en.wikipedia.org/wiki/Historical_racial_and_ethnic_d...

Looks like one particular party is a beneficiary and drafter of immigration policy.

> But that doesn’t mean they necessarily support the social and cultural rituals around race advanced by white liberals,

What about black professors and activists like Ibram Kendi? Blacks appear to have supported racial quotas with respect to Prop 16. White liberals are certainly part of it, but not all of it. I don't see any reason to say that black elites are being led by whites rather than speaking of their own accord. Brandon Johnson, the mayor of Chicago, is a good example of a radical race activist. He was most popular in the areas with the highest black population and won the black community by the widest margin. That suggests radical agendas are not just shared by white liberals but have support in the black community.

> Racial preferences, and similar policies such as “diversity requirements” (i.e. quotas) are a core approach for white liberals that most minorities reject.

If there was really such opposition to racial quotas and other racial preferences, why do they continue to support the party that puts race at the center of everything it does? Several attempts have recently been made at establishing quotas for medicine, grants, etc., only to be challenged in court. This was no surprise. It's contradictory to support that party, especially since it seems to focus particularly on black issues and expects other non-white people to play along.


[flagged]


> The people selected to be “representative” of the group aren’t actually a representative sampling of the group.

I believe white liberals are involved in it, yes. But I also believe elite black radicals have their own goals and can sustain their own power. In general, elites don't reflect the norm, so black elites have different goals than the average black person. A similar pattern is seen in South Africa: black elites run the country while the average person is impoverished. It would be hard to argue that SA is run by white liberals, though.

The majority of large cities have black mayors, and a number of teachers' unions are run by blacks. It's hard to see how white liberals can even benefit from things like not enforcing laws and taxes targeted at wealthy whites: https://www.theguardian.com/us-news/2023/apr/01/chicago-mayo.... Right now the relationship with white liberals is symbiotic. But in my opinion this is a strange relationship.

Speaking of SA, racial retribution is a politically popular narrative that demagogues can take advantage of. It's not unprecedented; it happened repeatedly in ethnically diverse African countries over the last few decades. I think this is the general pattern that ethnic conflict in a democracy leads to, and the one the US is headed towards.

A lot of the Hispanic and Asian population came in the last 30 years or so and it would be hard to argue they are a victim of racism in the US but it is disappointing that they are siding with the pro ethnic conflict side.

> This is the white-liberal-dominated selection process at work.

This can happen in a selection process because the white american mind is propagandized to believe it is subconsciously racist and an evil oppressor so generally white people give deference to outspoken blacks to avoid being accused of racism. A small number of radicals can overpower a majority in that sense.


Wow. Totally wrong lesson learned. Go refresh yourself on some hardcore problems and then replay your situation again.

A more standard response may be: “oh that was me. Your car is better than mine. Sometimes you know, a compliment is just a compliment.”

I’m currently reading Hunting Eichmann. Really nasty stuff went down, as many know. A very strong reminder that so many of these first-world problems, micro-aggressions, and sympathy-seeking woe-is-me public sob stories are just crap. Seriously?? He felt bad over a compliment about his car??


> So I don’t think I did anything wrong but still feel bad. And there’s no way I could predict such a response.

I feel bad for both of you. This anecdote seems to capture everything wrong with the current climate of discussion on these topics in the US right now.


> I don’t compliment coworkers’ cars any longer.

It's unfortunate, but for you to feel guilty and stop over this seems unreasonable. There are myriad traps I can imagine you'd be afraid of falling into. Like: "Oh, I want lunch and I fancy some fried chicken. Oh wait, if I invite [black colleague], will they be offended at the suggestion to go for that food?"

Communication goes through filters, as this cheap graphic shows, and I don't think it's something you can fix on your side:

https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2F...


To clarify, I don’t feel guilty over my actions and didn’t stop complimenting out of virtue. I stopped to avoid causing potential harm, or even worse, being labeled a racist, and all the time that would consume.

I also never suggest fried chicken for lunch. I love fried chicken. I make a pretty good fried chicken. I’m just too timid and over-leveraged to risk having it perceived incorrectly.


I find it fascinating that in the middle of a hype cycle here, where people really central to AI are saying we should be worried (Altman, Hinton, etc.), there's still just outright animosity towards people who are actually interested in studying these problems. It's also very far from a theoretical problem. Did you know that crash test dummies are men? Literally, until extremely recently crash test dummies were all designed to be 50th-percentile men[1]. Surely they test women dummies too? Oh yeah! They put a small woman dummy in the passenger seat. But still, they've presumably measured women and designed something appropriate? Nope! The small female dummy is defined as a scaled version of the male dummy. It's genuinely astonishing how short-sighted engineers can be in designing technology, and it's wild how angry they get when anyone tries to actually fix it.

[1]: https://www.sciencedirect.com/science/article/pii/S000145751...
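The "scaled version" point can be made concrete. A minimal sketch of geometric scaling, with illustrative numbers of my own (not actual Hybrid III dummy specifications):

```python
# Sketch of "scaled-dummy" anthropometry: derive a small-female dummy from a
# 50th-percentile male one by pure scaling. Constants are illustrative
# assumptions, not real crash-test-dummy specs.

MALE_STATURE_CM = 175.0       # assumed 50th-percentile male stature
MALE_SEATED_HEIGHT_CM = 91.0  # assumed male seated height
MALE_MASS_KG = 78.0           # assumed 50th-percentile male mass

def scaled_dummy(target_stature_cm: float) -> dict:
    """Lengths scale linearly with stature; mass scales with its cube.
    This is exactly the shortcut criticized above: it preserves male
    proportions and mass distribution rather than measuring women."""
    s = target_stature_cm / MALE_STATURE_CM
    return {
        "stature_cm": round(MALE_STATURE_CM * s, 1),
        "seated_height_cm": round(MALE_SEATED_HEIGHT_CM * s, 1),
        "mass_kg": round(MALE_MASS_KG * s ** 3, 1),
    }

# A hypothetical "small female" dummy derived from the male one:
print(scaled_dummy(150.0))
```

The output is just a shrunken man: every proportion is the male one, which is the complaint.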


> I find it fascinating that in the middle of a hype cycle here, where really central people to AI are saying we should be worried (Altman, Hinton etc.), there's still just outright animosity towards people who are actually interested in studying these problems.

Note that the abstract speculative future concerns Altman and Hinton are concerned with are very different from the immediate and present issues Gebru, her former Google colleague Margaret Mitchell (now at Huggingface), and others in the AI ethics space are raising.


Absolutely, I think actually her concerns are much more direct, measurable and actionable. It didn't take a genius to point out "maybe we should crash test something other than a male crash test dummy", but it was much more impactful and actionable than "let's figure out how to test self-driving cars" (to which the answer is still "let's just drive them and hope")


Exactly. Same problem with facial recognition, which simply didn't work on anyone with dark skin. The bigger the models, the less obvious the bias becomes, but it's still there. Gebru did good work quantifying that, but Google shut it down.

And it's sad that many of the comments here are reactionary and hostile towards Gebru. Why is the community here so bad? I thought it's supposed to be full of smart people with computers?


> hostile towards Gebru

Her papers aren't really high quality. Her carbon footprint calculations had all of Google's compute running on jet fuel. [0]

In her "Gender Shades" paper she completely ignored the existence of the Asian race. That's a pretty big limitation for a paper named in such general terms. In another place she labels Asians as "white adjacent".

She can't argue about technical subjects (here's an example [1]) because she isn't technical.

[0] https://www.technologyreview.com/2020/12/04/1013294/google-a...

[1] https://syncedreview.com/2020/06/30/yann-lecun-quits-twitter...


I don't know if you just... hope people don't read your citations. But as far as I could see from your citations you're not making sensible claims.

Your first citation: the article talks about her citing someone else's work about carbon footprint. So firstly, they're not her calculations - the paper is by Strubell[1]. Secondly, the paper absolutely does not assume the compute is running on jet fuel! It calculates the total energy to train a model, uses US EPA data to show how much CO2 that would be in practice, and then compares that to how much CO2 a flight emits. Honestly, I don't know where on earth you got this rubbish about running Google compute on jet fuel.

Next, you claim she isn't technical; she has a Masters in Electrical Engineering, a PhD in Computer Vision, and has had a fantastic career. Again, in the actual link that you cite she is described as "technical co-lead of the Ethical Artificial Intelligence Team". I just don't know what you think you're doing, citing things that directly contradict your claims.

If you think that Gebru is terrible, I would thoroughly recommend you talk about her less, because your attacks on her are so poor it lends her credibility. I'm fairly undecided on her - I literally don't know enough to judge, but your arguments are very poor.

[1]:https://arxiv.org/abs/1906.02243


Google datacenters are run off green power, so it's deceptive to compare energy costs there to jet fuel. It implies that the carbon impact is the same, which isn't true, because not all energy is the same.

As for being technical or not, well, none of her work seems particularly interested in technology. It's all grievance studies stuff. Having academic credentials counts for nothing because universities are so obsessed with race ideology they'd happily award black females computing-related qualifications on the back of nothing. It supplies no credibility.


I don't really see the point of getting deep into this, the logic is very clear in the peer reviewed paper. The energy consumption to train new models is marginal extra electricity consumption and so they've cited the EPA's numbers for CO2 emissions per Watt-hour of electricity. It's a perfectly fair comparison. Yes, you can caveat all sorts of "But we could do this with green energy" but that's not what the paper was about.
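For what it's worth, the conversion in question is simple arithmetic. A sketch with illustrative constants I'm assuming for the example (not the paper's exact figures):

```python
# Strubell-style conversion: energy consumed during training -> CO2 emitted.
# Both constants below are illustrative assumptions, not the paper's values.

US_GRID_LBS_CO2_PER_KWH = 0.954  # assumed US average grid emissions factor
DEFAULT_PUE = 1.58               # assumed datacenter overhead multiplier

def training_co2_lbs(power_draw_kw: float, hours: float,
                     pue: float = DEFAULT_PUE) -> float:
    """Estimate CO2 (lbs) for a training run: power * time * datacenter
    overhead (PUE), converted with an average-grid emissions factor."""
    energy_kwh = power_draw_kw * hours * pue
    return energy_kwh * US_GRID_LBS_CO2_PER_KWH

# Hypothetical example: a 10 kW cluster training for 80 hours.
print(round(training_co2_lbs(10, 80), 1))  # → 1205.9
```

The "green energy" objection amounts to swapping in a different emissions factor for the last line; the marginal-consumption argument is about which factor is the right one.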

>none of her work seems particularly interested in technology.

Honestly! Her PhD in computer vision seems pretty technological to me.

Fine, if the position you want to hold is that all universities are race ideology centres, great; again, it reflects more on you than on anyone else.


The logic is clear but incorrect. It's not a correct comparison because it assumes a watt-hour used by Google is the same as an average watt-hour. They actually buy enough renewable power for all their DCs (just not always locally). However, admitting that the CO2 impact of Google training LLMs is zero would reduce the impact of her paper, so she just makes this deceptive comparison. This is one of the reasons her paper was squashed, right? It effectively pretends all the efforts made by her colleagues on greening their DCs doesn't exist. Not very nice.

I looked up her PhD:

https://stacks.stanford.edu/file/druid:xg519hx1735/thesis_ad...

In her own words it "pertains to using large scale publicly available images to gain sociological insight" and is about "visual computational sociology". The primary reported finding is that you can predict things like racial demographics based on what brands of cars are being driven around. It's consistent with a primary interest in society and not technology.


Let's cut through your bullshit

You said

> none of her work seems particularly interested in technology

Here is a very representative quote from her PhD thesis which you clearly did not really look up

> Augmenting Existing Adaptation Algorithms with Attribute Loss
> We can augment any existing adaptation algorithm with our attribute based losses to perform adaptation at the attribute as well as the class level. Here, we describe how we apply our method to [116]. To use our method with [116], we add the domain confusion and...

It's clear you will talk any amount of dishonest nonsense to make your point


The thread here started with the claim that "she has a Masters in Electrical Engineering, a PhD in Computer Vision, and has had a fantastic career".

It's reasonable to expect given that description for her PhD to be actually in computer vision. You know, a new SOTA on some well known benchmark or something. Some new general technique useful to anyone who deploys computer vision. Even she doesn't claim that's the case, the dissertation title claims it to be visual computational sociology which would be a different field. Her findings are pure sociology, there's no outcome here which is about computer vision (beyond "hey CV seems to work").

The original point was she doesn't seem very interested in technology. I don't see how a PhD that is at least 50% sociology disproves this point, it clearly supports it. And that's before dealing with the general dishonesty that pervades woke universities; who knows if the thesis can even be taken at face value? A place like Stanford wouldn't hesitate to misrepresent things for equity purposes.


Honestly, this is just silly. I'm sure you've got some very interesting thoughts about the way that Google manages its data centre energy demand. It's not relevant to the paper. In fact, if you read the paper you'd know your whole schtick about Google is irrelevant, because their models ran on TPUs, whose power characteristics aren't public information and therefore aren't in the paper.

You're quite literally making nonsensical complaints.


No, it's the other way around. The complaint being made by you and Gebru is nonsensical.

The exact power characteristics of TPUs don't matter because as I keep repeating and you guys keep desperately trying to ignore, Google years ago committed to buying renewable power 1:1 for the consumption of their datacenters. It doesn't matter how that splits between CPU, TPU and GPU, it's all covered by them.

Therefore, Google's computations net out to zero carbon emissions.

Most organizations don't do this. On average, a kilowatt hour of consumption is not from renewable sources. Therefore, Gebru's use of average emissions would be valid at other companies but not at Google, which means she was lying in order to gain impact.

Honestly, this point isn't hard to understand. It's difficult to escape the feeling that if she was a white man, it would all be clearly received.


[flagged]


I don't think right wing media talks about Gebru's paper much one way or another.

Please avoid the shallow dismissals. Unless you'd like every post you make to get a reply like, "wow guardian reader much? let me guess, you think the world is gonna end in 5 years?". It's not very interesting to read. Debate the points or don't bother.


> nothing because universities are so obsessed with race ideology they'd happily award black females computing-related qualifications on the back of nothing

Just look at the point you made!

This is exactly what right wing media talks about constantly. Not about Gebru, but that someone like Gebru is where she is not because of merit. Your point is completely baseless and my dismissal of it isn't shallow.

Where do you suspect your point, that I quoted above, comes from? Does it come from your own personal experience? Studies and statistics? Or somewhere else? My point is that it's baseless and comes from right wing media. We are all biased based on what we read and hear. Points made without anything to back them are shallow, not those critiquing them.


The point isn't baseless because universities and corporations themselves - not "right wing media" - constantly talk publicly about how desperate they are to hire more women and blacks, and sometimes even how they're explicitly removing merit as criteria. The media thing you've come up with there is entirely your own projection; I never mentioned media and don't read or watch American media as I don't even live there. You don't need to though, to understand what institutions are doing because they say so in their own press releases and even research papers.

Here's one such case from within the last month. Some academics did a study on themselves trying to find misogyny and were surprised to find pervasive and systematic bias against hiring men.

https://news.ycombinator.com/item?id=35762380

Absolutely nobody outside of hard-left bubbles is surprised by this because it's exactly what they constantly say they will do e.g.

https://www.universityaffairs.ca/news/news-article/universit...

Besides, think about how biased your framing is. The problem here isn't that it's a right wing talking point. The problem is that the left wing media refuse to tell anyone what's happening because it aligns with their ideological agenda. The fact that universities - places that claim to be merit based - are systematically doing the opposite actually is news. The left censoring discussion of it in forums they control does not make it a "right wing talking point".


The actual paper [0] that Gebru wrote referenced Strubell's climate impact paper [1]. Neither of them makes the assumption that Google compute runs on jet fuel. They compare training the models to flying, driving, etc.

In the "Gender Shades" paper [2], as you call it, she doesn't use race but the dermatologist-approved Fitzpatrick skin type scale in the analysis as a proxy, which of course covers all people, including Asians.
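For anyone who hasn't read it, the paper's audit boils down to per-subgroup error measurement rather than one aggregate number. A minimal sketch with made-up data (not the paper's benchmark or code):

```python
from collections import defaultdict

def accuracy_by_subgroup(records):
    """Gender-Shades-style audit: group predictions by (skin_type, gender)
    and report accuracy per subgroup, exposing disparities that a single
    aggregate accuracy number would hide. `records` holds (skin_type,
    gender, predicted, actual) tuples; skin types here follow the
    Fitzpatrick I-VI scale, binned into light (I-III) and dark (IV-VI)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for skin_type, gender, predicted, actual in records:
        key = (skin_type, gender)
        totals[key] += 1
        hits[key] += (predicted == actual)
    return {key: hits[key] / totals[key] for key in totals}

# Hypothetical audit data: aggregate accuracy is 75%, but the breakdown
# shows the errors concentrate in one subgroup.
sample = [
    ("I-III", "male", "male", "male"),
    ("I-III", "female", "female", "female"),
    ("IV-VI", "male", "male", "male"),
    ("IV-VI", "female", "male", "female"),  # misclassification
]
print(accuracy_by_subgroup(sample))
```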

The kind of bullshit (a technical term) response you gave is common among reactionaries who don't read, and can't do science.

[0] https://dl.acm.org/doi/pdf/10.1145/3442188.3445922

[1] https://arxiv.org/pdf/1906.02243.pdf

[2] http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a...


> she doesn't use race but dermatologist approved Fitzpatrick Skin Type in their analysis as a proxy which of course covers all people including Asians.

Erasing Asians from stats when convenient has been a staple of the CRT movement for quite some time (when, for example, their existence breaks down stats "proving" racism against non-whites). The term "white-adjacent" has become one of the dog whistles of the movement.


But that's not what the paper does. Asians have many skin colors from light to very dark (compare China, Korea, and Japan to Thailand, Vietnam, and India). I'm not sure what you are talking about here. I don't see how it erases Asians since Asians are a very diverse group of people in appearance.

In regards to CRT, CRT has always included Asians, for example, studying the Chinese Exclusion Act in the US. Many people who criticize CRT (mostly on the right) have no idea what it is.

It seems to me that the anti-CRT crowd (mostly right-wing reactionaries) are making shit up to bring this kind of important AI research down.


> In regards to CRT, CRT has always included Asians

Quite the opposite.

https://unherd.com/2022/02/anti-racism-betrays-asian-student...


That article was about an expensive magnet school where by definition, the kids are privileged. Tuition is $30k per year, or with room/board $50k.

https://www.goodschoolsguide.co.uk/schools/500658/thomas-jef...

The demographics of the school are indeed mostly Asian, and 97% of the pupils pay full tuition. Which means you have to be really well off to afford it.

https://schoolprofiles.fcps.edu/schlprfl/f?p=108:13:::NO::P0...

Is this article really about merit? Or is it actually about wealth?


> Same problem with facial recognition which simply didn't work on anyone with dark skin.

That's an urban legend btw. The issue was actually that the person for whom it failed had the camera pointed straight up with a bright light behind him, so the image was totally washed out and lacked saturation. The software failed in the same way if you had a dark background and then shone a bright light into a white person's face. The algorithm needs contrast to work.
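The contrast point is easy to demonstrate. A hypothetical quality gate of the kind a detection pipeline might run, as a plain-Python sketch with a made-up threshold (no real library API implied):

```python
def is_usable_for_detection(pixels, min_std=20.0):
    """Crude image-quality gate: a washed-out frame (strong backlight,
    low saturation) has little luminance variance, so any contrast-based
    detector fails regardless of the subject. `pixels` is a flat list of
    0-255 luminance values; `min_std` is an arbitrary example threshold."""
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    return variance ** 0.5 >= min_std

washed_out = [250, 252, 248, 251] * 100  # backlit, blown-out frame
well_lit = [40, 90, 150, 210] * 100      # frame with usable contrast
print(is_usable_for_detection(washed_out), is_usable_for_detection(well_lit))
# → False True
```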

Also, people on this forum really should know better than to insinuate that a program working imperfectly is caused by racism. As if programmers sit around finding ways to troll black people by deliberately making face recognition not work. Even the idea of it is absurd. When people make such insinuations, it reinforces the widely held belief that accusations of racism are invariably bullshit, which can lead to a boy-crying-wolf problem.


"AI threatens to deepen the dominance of a way of thinking that is white, male,"

I stopped reading there.

I'm white I'm male I have 0 advantages because of that. Always the same lies everywhere. I'm tired of it.


Well, if you stop reading any time anyone suggests you might have advantages you're not aware of, then it's not surprising you don't think you have any advantages.


Is it possible to disprove that "AI deepens the way of thinking that is white, male"? I don't think it is, for obvious reasons. It's perfectly reasonable to stop reading when you encounter an argument like that.


I remember at the time Gebru basically went on leave until all her complaints were addressed, which can be considered soft quitting at a company like Google.


> I'm white I'm male I have 0 advantages because of that. Always the same lies everywhere. I'm tired of it.

Even if it were true how would you know?


>> Even if it were true how would you know?

I think a better question would be "How would YOU know?"


There's a funny thing about this claim. These models are now trained by a kind of reinforcement learning with feedback from humans. Who are those humans and where are they?

https://slate.com/technology/2023/05/openai-chatgpt-training...


You have to be able to see the problem with this reasoning right?

https://twitter.com/nathanwpyle/status/1031008855210123264


I'm not white.

I don't believe that my "way of thinking" is meaningfully different from the way of thinking of a white person, language and cultural differences accounted for.

Actually, I firmly believe that implying that is profoundly racist. It's small wonder that I see the most dangerously racist bullshit being parroted in progressive circles.


The point is that people can treat you differently based on their perception of you, mediated by apparent ethnicity. Your way of thinking doesn't factor in.


> "AI threatens to deepen the dominance of a way of thinking that is white, male,"

This was the claim, and I reject this notion. The worst atrocities humans inflicted upon one another were derived from a fundamental belief that the "others" being oppressed were meaningfully different from the oppressors, and that it was therefore OK to oppress them.

And about people treating you differently based on their perception of you, that is a different claim. And that is just basic human nature. Everyone (man or woman, white or black) judges based on appearances, especially when that is the first (and sometimes only) information you have to navigate.


OK I was responding to someone else about their claim that being white had never benefited them. This is an entirely different conversation I don't know why you started it with me. I don't disagree with you so far.


That statement you made seemed reflexively progressive to me rather than an actual thought out position.

Question: why does everything have to boil down to a morality lesson? Why can't the white guy from the other thread have a little gripe? What's the purpose of scolding him, telling him that his opinion is wrong, and then treating him like a kid by sending a link to a cartoon that casts him as a snobbish, indifferent predator?


The comment you replied to was commenting on the same claim I addressed, just on a different perspective (speaking on the lack of perceived advantages).

I fail to see how my reply was misplaced. I'm basically on a subthread commenting on the same claim I chose to address.

> I don't disagree with you so far.

Good. This was a good conversation then. Have an excellent day.


Do you see the problem with your cartoon tutorial? Correct me if I'm wrong, but your lesson is a good example of begging the question. Of the two fallacies, the whipped white guy's is not nearly as bad as yours. His is merely anecdotal. Yours rhetorically casts that sad sack as a predatory animal.

Please stop helping.


They mean to say patriarchy and white supremacy. The fight is against the system. I don't know why so many people don't use better terminology that doesn't exclude allies.


It is a little ironic to think that people patronized by giant monopolistic megacorporations are fighting "against the system".


There are lots of systems. You don't have to fight them all at the same time.


> After her departure, Gebru founded Dair, the Distributed AI Research Institute, to which she now devotes her working time. “We have people in the US and the EU, and in Africa,” she says. “We have social scientists, computer scientists, engineers, refugee advocates, labour organisers, activists … it’s a mix of people.”

Virtue signal by donating to my slush fund.


Amazing that these people don't even pretend. What do refugee advocates and labour organisers have to do with AI research? Nothing? Like La Sombrita, where an attempt to design a bus stop somehow became connected to female empowerment. The left asks for money to do X and then happily and very publicly spends it on their usual ideological goals, even though they have nothing to do with X.

Wonder when this crosses the line and becomes some sort of fraud. Did they tell people they'd spend grants on AI research? A lot of their stuff seems very close to political advocacy and lobbying. They flat out state they hired activists and advocates. A 501(c)(3) is supposed to have tight controls on political operations, though.


Large models are indeed dangerous, because they threaten to concentrate power in the hands of a small number of tech companies with the resources to do the training, and because, once those models exist, they devalue human labor.

Some technologies are leveling. It may be debatable, but I've heard it argued that the musket was one of these. The musket was cheap. It rendered expensive mounted plate-armored aristocratic warriors obsolete. Down came the whole feudal hierarchy necessary to keep those horses fed.

These AI models are the opposite. They do not spread power around; they centralize it. They increase dependence.

So in many ways she's right.

What I would like to ask, though, is -- instead of deepening the intersectional cuts that were used to dismember Occupy, what can a technologist do to both reduce centralization and increase solidarity? Is it possible to have a broad-based populism that isn't particularist? What technologies could be in its arsenal?

Possibly AI (which is really compression) can be part of this, if it enables offline, disconnected "compressed internets" to exist off-grid, like solar panels?

How do we reduce dependence on the platforms? The different ethnic factions wouldn't fight each other if they were more self-sufficient. If they weren't made to compete with each other for access to a small number of economic chokepoints. How do we destroy the chokepoints? "Fuck your Suez, your Panama -- we have airplanes."


Here is Jeff Dean's email about the situation when it happened

https://www.platformer.news/p/the-withering-email-that-got-a...

Hi everyone,

I’m sure many of you have seen that Timnit Gebru is no longer working at Google. This is a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible AI research as an org and as a company.

Because there’s been a lot of speculation and misunderstanding on social media, I wanted to share more context about how this came to pass, and assure you we’re here to support you as you continue the research you’re all engaged in.

Timnit co-authored a paper with four fellow Googlers as well as some external collaborators that needed to go through our review process (as is the case with all externally submitted papers). We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues. We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.

Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.

Given Timnit's role as a respected researcher and a manager in our Ethical AI team, I feel badly that Timnit has gotten to a place where she feels this way about the work we’re doing. I also feel badly that hundreds of you received an email just this week from Timnit telling you to stop work on critical DEI programs. Please don’t. I understand the frustration about the pace of progress, but we have important work ahead and we need to keep at it.

I know we all genuinely share Timnit’s passion to make AI more equitable and inclusive. No doubt, wherever she goes after Google, she’ll do great work and I look forward to reading her papers and seeing what she accomplishes.

Thank you for reading and for all the important work you continue to do.

-Jeff


Classy response to a not so classy person.


This is honestly a very respectable and transparent communication.


> The Ethiopian-born computer scientist lost her job after pointing out the inequalities built into AI.

I am glad they included that at the start of the article so I know to completely skip reading it!


You're doing yourself no favours. Even if you disagree with her you should inform yourself of her line of reasoning.

I think the guardian article is a reasonable write up of the issues.


> lost her job after pointing out the inequalities built into AI.

I think that this is a simplistic and incorrect summary of why she no longer works for Google.

It would be as accurate as if the article started with “lost her job after violating Google’s publishing clearance procedures and sending out inaccurate and accusatory emails to internal listservs”


I don't disagree with her. My gripe is with the article itself. If the reporter openly misrepresents (read: lies about) obvious facts, I don't see how their take on more nuanced issues would be... nuanced and unbiased.

She quit/was fired for submitting a paper without following company procedure. Saying she lost her job after pointing out AI ethics issues paints a completely different picture and is not productive at all.


Sorry, it wasn't clear from your statement you had access to information which invalidated the article. Could you be explicit what they got wrong please?


I edited my comment with more details.


I read the article. While it does mention the keywords promised on the title, it's also very long winded as it recounts many of her personal life experiences.

Honestly, a very boring read. Brings nothing new to the discussion, and at times feels just shy on gushing about the subject matter.

As for her being fired from Google... Google being Google, they're probably in the wrong. But it is not surprising in the slightest that this sort of activist employee ends up being sacked when they start to be troublesome in their activism. Hell, I wonder why Google even hired her in the first place. My guess is that they hoped that by receiving a paycheck, she would be neutered.


Her life's biggest highlight is being sacked by Google, to whom she had given an ultimatum that, to me, was unfair.


Ethics employees are the lowest on the Bullshit jobs scale.

They bikeshed all day, just regurgitating old stories and what-ifs.

They never do anything quantifiable, they don't have the technical skills.

Unlike middling bullshit jobs that merely do nothing, they drag down the employees around them.

Case in point here, and as we can see, Google is currently losing the AI battle.


Except Timnit Gebru did quantifiable research on AI bias and was punished for it.



