Hacker News

Google (and Google Brain in particular) benefited immensely from Gebru's work, and her being affiliated with them. This can be seen in how often they (including Jeff Dean) pointed to her work to show they were taking questions of fairness and bias in AI algorithms seriously.

The flip side is that you have to respect the researcher, who frankly has more credibility than you do as an organization in this space.

Even for corporate researchers, demanding (through an HR procedure? bizarre) that researchers retract a paper without any discussion or opportunity to revise it (which is still possible, since the paper is only in review!) is highly unusual. Doing that to a researcher whose work you loudly (and seemingly proudly) advertise is insulting to her and to the broader team they hired. And doing it over a failure to cite literature? Unheard of, and clearly indicative of something else going on. This seems like some kind of turf war.

You can read the abstract of the paper in question here. It is incredibly anodyne, though it obviously does take a critical view of Google Brain's work (BERT in particular): https://www.reddit.com/r/MachineLearning/comments/k69eq0/n_t...




> it is incredibly anodyne

I don't get why anyone is surprised that Google doesn't want a paper saying ‘we should spend less money on ML’ published under their name, when the whole company is ML driven and pushing for growth there.

What other large company would allow that sort of thing?


I'm not sure where you get 'we should spend less money on ML' from the abstract. If anything it suggests investing more in ML, from data curation to architecture development, which would fit perfectly within Google's general aim of presenting itself as thoughtful about how to use the capabilities it's developing.


> We end with recommendations including weighing the environmental and financial costs first

‘Train for less time,’

> investing resources into curating and carefully documenting datasets rather than ingesting everything on the web

‘on less data,’

> carrying out pre-development exercises evaluating how the planned approach [...] supports stakeholder values

‘while spending less’

> and encouraging research directions beyond ever larger language models

‘on smaller models.’


Choosing worthwhile projects doesn't mean spending less.


> Google (and Google Brain in particular) benefited immensely from Gebru's work

Explain


As one example, Dean references her work here: https://www.bbc.com/news/business-46999443


Presumably there would have been time for discussion if the paper had not been submitted a day before the deadline.


There's still time for discussion! The paper's only in review, and they would have had to make revisions anyway. I'd say it's management that issued an ultimatum to the research team.


Once you submit your paper, it is visible to experts you probably know (or at least know of). The double-blind process does not work when the subject matter, writing style, and citations correspond closely to specific labs or researchers. You stake your reputation, as well as that of your lab or company, from the moment you submit.

I say this from personal experience: I once submitted to a conference at the last minute and was violently chewed out by my bosses over things they disagreed with in the paper.


> The paper's only in review

You submit a final version for review. You cannot change a paper significantly after it has been accepted.


Conferences of this kind would not allow major revisions unless they were requested by the conference's reviewers.


The discussion around deadlines is moving the goalposts. That is exactly what submission deadlines are for; there is still time for discussion.


At the moment, Timnit's work is nothing but decoration.

Unless underrepresented people actually attain the social and economic status they deserve, these so-called fairness and ethics studies are just building a mansion on empty air.

I am also wholly convinced that the rich and powerful progressives know this truth very well. They are happy to indulge these self-righteous elite intellectuals, as long as those intellectuals never roll up their sleeves and actually help the underprivileged rectify the system.





