Hacker News

The problem here is that “criticizing ethical biases in a company’s core AI technology” is pretty much what an “ethical AI” group is supposed to do, even if it conflicts with the company’s profit-making imperative. Once Google started drawing lines around which technologies it was politically correct to criticize (or, perhaps, “slander”), that group simply became part of PR and marketing. And that’s fine! But academic journals should treat that as a conflict of interest when reviewing or editing papers from that group, and academic “ethical AI” folks are probably not going to want to work there, because it’s definitely not a disinterested actor at that point. (The “quit vs fired” issue basically comes down to an interpretation of whether Google could “short circuit” an ultimatum and go right to “accepting an implicitly-offered resignation.” That’s … a fine point of labor law, and probably one not easily settled short of litigation.)



Here's a tip for you: if what you're doing conflicts with your employer's profit-making, don't expect to be doing it for very long.


Let’s say a company hired safety inspectors, and the inspectors found legitimate safety issues that would be expensive to fix. If the company fired the inspectors, would you really be so glib about the profit-making imperative? Likewise if a software company hired security specialists who discovered severe bugs in its flagship product. Or if Bell Labs had fired a physicist who discovered a flaw in transistor design.

It’s really no different with ethics. Gebru was hired by Google to study issues of ethics in AI. That was her literal job description. She was not hired to put a positive spin on Google’s business. She was hired as an objective researcher.

I seriously doubt you would actually defend the idea that (say) private-sector scientists or mathematicians should be expected to toe the company line even if they have a legitimate scientific objection: this attitude would be a disaster for the company in the long term, even if “ignore all bad news from the nerds” means they might make more profit in the short term.


I think there's still a portion of people who believe (or at least want to believe) Google's old "don't be evil" mantra. You're right in the general sense, but for a time it seemed like Google might buck the trend a little.


You cannot make changes while cutting off your own hands. The fact that Google took the step of trying to change its AI practices by hiring her in the first place was a huge risk for them. Then all they asked was that she not ruin the business, and she went for the nuclear option and ran to the press about it.

As I see it, her inability to navigate the politics of the situation showed she was incapable of doing the job anyway. She should have been building up her presence and recruiting people who believed in the cause. Instead she upended the table and left the second there was friction between her and business interests.





