No. Every time someone makes a big stink about someone getting fired at one of the top tech companies, it is promptly followed by an article like this. A trillion dollar company that hires thousands of researchers and consistently produces some of the highest quality research with real results is not going to implode from one person being gone. Another pattern I've seen is someone leaving a company, followed by writing an article about how the company is doomed. Of course, the doom never arrives and the companies do even better. All of us are replaceable, from Bill Gates to Jeff Bezos to this researcher. This doesn't mean I agree with the firing, just saying that there is no implosion incoming.
Of general interest, Jeff Dean’s comment on why the Gebru paper didn’t meet Google’s publication guidelines:
“Highlighting risks without pointing out methods for researchers and developers to understand and mitigate those risks misses the mark on helping with these problems”
The problems they identified have been known for years and there are lots of papers exploring how to mitigate them.
The whole paper was a nothing burger wrapped in social justice language, with asides about how global warming is Actually Racism because of disparate impact (an interesting claim, but not an ML topic).
If the problems aren't novel and you're proposing zero solutions, it shouldn't be a paper.
I wonder what would drive someone to do this. Anger? Loneliness? Work pressure? Imposter syndrome? Her prior work seemed more observational than theoretical, and then she got thrown into a top-level leadership position in a theoretical research organization with almost zero experience. Is there a single person who would have defended the decision if that person had been a straight, able, Christian, white or North Asian male? (Note: please do not respond by speaking for others who do not share your own views.)
I wonder when the “performative wokeness” bubble will burst.
I don't know where you get these ridiculous ideas about imposter syndrome. Take one look at her career. She's written signal processing algorithms at Apple, won awards at conferences, got a PhD and worked for Microsoft in their AI ethics lab. She was more than qualified for her position and the research paper she was publishing passed peer review.
You're right, this wouldn't have happened if she were a straight white male - if she were a straight white male, there wouldn't be comments like yours making asinine assumptions about how good she was at her job.
No, it is not too much to ask in the specific case of Gebru's bad paper. Several of the arguments are specious, like comparing the total energy consumption for training GPT with car trips, or demanding that NLP researchers keep up with rapidly changing activist "woke" vocabulary and ensure their models respect it.
These are ridiculous claims, and it’s fair to respond to them by saying, “well, what exactly do you imagine a solution or mitigation looks like?”
Essentially, by the nature of how specious Gebru’s stated problems are, they demand clarity over what an “ethical solution” even is, conceptually, and why everyone would have to agree.
For example, you could discuss economies of scale or train-once-finetune-everywhere approaches with GPT that reduce total energy needs. Or you could discuss how researchers can register the corpus they use and the time the snapshot was grabbed, with an open understanding that as long as the methods and data are reproducible, there is no research ethics issue with studying that corpus, no matter how much bias or lack of woke vocab a given person believes it has. (And also, nobody is required to just accept activist language as important or valid.)
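As a rough sketch of what that kind of registration could look like in practice (the function and field names here are purely illustrative, not any standard format):

```python
import hashlib
import json
from datetime import datetime, timezone

def register_corpus(name, file_paths, source_url):
    """Record a reproducible snapshot of a training corpus: what it is,
    where it came from, when it was grabbed, and a content hash so anyone
    can verify they are studying exactly the same data."""
    digest = hashlib.sha256()
    total_bytes = 0
    for path in sorted(file_paths):  # sort for a stable hash
        with open(path, "rb") as f:
            data = f.read()
            digest.update(data)
            total_bytes += len(data)
    record = {
        "name": name,
        "source_url": source_url,
        "snapshot_utc": datetime.now(timezone.utc).isoformat(),
        "num_files": len(file_paths),
        "total_bytes": total_bytes,
        "sha256": digest.hexdigest(),
    }
    with open(f"{name}.corpus.json", "w") as out:
        json.dump(record, out, indent=2)
    return record
```

That is roughly the level of effort involved; anyone who later disputes the corpus can at least point at a concrete, reproducible artifact.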
Gebru did none of this. The article could literally be summed up by Gebru saying, “I think <supposedly shocking evidence> is bad, therefore its connection to something in ML is bad.”
E.g. “I think, subjectively, that the raw energy use to train GPT is bad. Here are some shocking comparisons. Therefore GPT is bad.”
It's incredibly unrigorous and juvenile. Dean's comment that it needs to clearly state mitigations is actually a super generous, polite way of saying the paper is just subjective amateur hour.
Sure, if no methods/work exist. In this case there has already been some work done to mitigate these issues. So either mention it in related work and/or highlight why these methods aren't enough.
To be fair, the article is not about the firing. In any case, the two researchers who got fired did more harm to Google than good. Internally no one cares that they left. AI ethics is an esoteric academic research field.
I'm in the research community and I think you're significantly underestimating the effect of firing Gebru and Mitchell. Machine learning is the hottest research area in computer science and ethics of ML is possibly its hottest subfield. And people pay attention to employers' actions. I think Microsoft Research is still feeling reputation effects from closing down its Silicon Valley lab 10 years ago with no warning. It sent a message to everyone who worked there that they had no job security, and plenty left for academia. The research community is not going to forget about Google's actions here nor, for the most part, will it view Google very favorably.
I think you're very wrong. Yes, there's hype around ML ethics. Trends in ML come and go. Do you see any VC investing in ethics-in-ML startups? As I said, it's an esoteric academic field.
How do you even compare the two? MSR closed, out of the blue, an entire lab of great researchers who didn't do anything wrong. Here you have two employees going against their company and shitting on it publicly. The only researchers who will not want to work at Google after this saga are the ones Google is better off without.
> Do you see any VC investing in ethics-in-ML startups?
Perhaps you should pay less attention to VCs and more attention to governments and academic institutions, who for example in Canada are investing tens of millions of dollars into AI ethics/FATE/AI for good research.
Sometimes, the point isn't just to make money, it's to actually improve humanity.
The point is not just making money. The point is solving real world problems. AI ethics focuses on esoteric problems and doesn't solve any real world problems. AI ethics research has very little impact, if any, on real issues.
Systemic discrimination is indeed a real world problem. That's exactly the problem. AI ethics doesn't help solve systemic discrimination, for the simple reason that AI is not causing systemic discrimination.
AI systems are trained on data. There's an abundance of English data, which is why systems are often biased to work better on English. Similarly, an image recognition system might be biased if you don't provide it with data representing all demographics. There's nothing new about this and you don't need AI ethics research to solve these issues.
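Checking for that kind of bias is mostly bookkeeping. A rough sketch (the group labels and toy data are made up, just to illustrate the idea):

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy broken down by demographic group, so gaps between groups
    are visible instead of being hidden inside one aggregate number."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Toy example with made-up labels and groups
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["a", "a", "b", "b", "a", "b", "b", "a"]
print(per_group_accuracy(y_true, y_pred, groups))  # {'a': 1.0, 'b': 0.5}
```

If one group's accuracy lags, the usual fix is collecting more representative data for that group.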
Focusing on AI ethics thinking it has impact on systemic discrimination, instead of focusing on real issues that cause systemic discrimination, is my main issue with all of this.
> you don't need AI ethics research to solve these issues.
What are you talking about? This is exactly the kind of research that's classified as AI ethics: "solving these issues".
> instead of focusing on real issues that cause systemic discrimination
Identifying which ML models _actually running in production_ cause systemic discrimination (e.g. as you mentioned poor image recognition, bail predictions, etc.) is exactly focusing on real issues that... cause systemic discrimination.
> AI is not causing systemic discrimination
This is simply not true. Bad ML models have an impact on systemic discrimination right now, in that they amplify it.
> instead of focusing on real issues that cause systemic discrimination
It's a fallacy to think we can't do both; there are enough humans. Both making better AI and making better societal systems.
> Identifying which ML models _actually running in production_ cause systemic discrimination (e.g. as you mentioned poor image recognition, bail predictions, etc.) is exactly focusing on real issues that... cause systemic discrimination.
There's nothing systemic about these issues. I already mentioned it's a data problem. Nothing new. It's very easy to build a fair image recognition system by representing all demographics. And even then AI systems will continue to make mistakes. Some AI ethics researchers cherry-pick those mistakes to justify their entire research.
> Some AI ethics researchers cherry-pick those mistakes to justify their entire research.
This is a weird statement. It's like saying police cherry-pick criminals to justify their existence.
Do you not believe in harm reduction? Don't you think some part of AI research should be dedicated to minimizing how many "AI systems will continue to make mistakes"?
Thanks for the references. I will check them out once I get a chance. I do know one of these papers, and from my understanding the modeling bias is on underrepresented features or the long tail, which again can be thought of as a data problem that can be solved with better data collection.
I do agree that in the real world, datasets are often biased because they represent the real world... and there are indeed modeling approaches to address such issues (e.g., designing a loss function to up- or down-weight certain types of examples). There's nothing new about this; it's been known in ML for decades.
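For what it's worth, a minimal sketch of that kind of reweighting (the groups and weights here are purely illustrative):

```python
import numpy as np

def weighted_binary_cross_entropy(probs, labels, group_ids, group_weights):
    """Binary cross-entropy where each example's loss is scaled by a
    per-group weight, e.g. to up-weight an under-represented group."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)  # avoid log(0)
    per_example = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    weights = np.array([group_weights[g] for g in group_ids])
    return float(np.mean(weights * per_example))

# Toy usage: up-weight group "b", which has fewer examples in this batch
probs = np.array([0.9, 0.2, 0.6, 0.8])
labels = np.array([1, 0, 1, 1])
group_ids = ["a", "a", "a", "b"]
print(weighted_binary_cross_entropy(probs, labels, group_ids, {"a": 1.0, "b": 2.0}))
```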
Why on earth should I care any more or less about systemic racism just because some charlie tells me it's unethical? Inventing ethics for machines helps cure exactly nothing. Only that I may be deemed by a machine to be an inferior human and less worthy than a "superior" being. Pushing technology that enforces your ideology is a horrible idea.
> Your ideology seems to happen to be different than mine.
If you're saying I'm a racist, I'm not. Is everything that some people say is racist actually racist? No. Is racism a problem? Absolutely!
The status quo, however, should be changed by people, not people with machines. Doing nothing, which here means not using obscure algorithms to force people to think a certain way, is better in my opinion. You can't call something ethical just because you think it should be; it must be argued out. Using AI to shut out some more of that argument will only create a universal standard, not necessarily the correct one.
> The status quo however should be changed by people, not people with machines
I'm not sure I understand what this means. People/companies own machines (and ML models) and use them. So shouldn't we make sure that the machines' decisions align with what people/companies _want_ them to do? (i.e. that the people's ethics align with the ML model's ethical consequences; I'm 100% sure that people who deploy "racist models" don't do it on purpose or out of malice)
> You can't call something ethical just because you think it should be; it must be argued out
On one hand this sounds like a strawman. No one thinks that something is ethical because someone randomly declared it so.
On the other hand... ethics are a human construct, and will continue to evolve as our culture evolves over decades and centuries. Shouldn't we construct ML models which are flexible in that they can align themselves with the ethics we collectively decide? We don't know how to do that yet!
> Using AI to shut out some more of that argument will only create a universal standard, not necessarily the correct one.
You seem to be under the impression that the field of AI ethics is dedicated to brainwashing people into some particular unpopular moral philosophy. This is simply untrue. Within the field of AI ethics there is a lot of diversity of thought and disagreement on how human morals should be "encoded" so that AI can "align" with these morals. And I'm using the plural of morals because obviously there will never be a humanity-wide consensus on ethics, and if AI is to be deployed in the world it needs to reflect this diversity.
Codifying ethics perpetuates falsehood. Every single generation in history believed that they "had it", only to be denigrated as hopelessly misguided by the next generation. We are making the same mistake, only fewer people are killed right now, so it looks like we are more successful. Remember the pacifism of the inter-war years? It bred fascism. The pendulum swings.
AI ethics cannot hope to remain in style for long, while it will almost certainly exist for far too long. Accepted standards of 2 years ago are already out of date.
I'm lost for a solution.
I do think that the less AI is claimed to be ethical, the less it will be trusted, which is the best cure I can think of. Honesty is the basis of the whole of scientific inquiry, and is probably scarcer in Google's ethics research department than anywhere else in the building. (Programs don't run if the math's wrong; economics as well.)
No, you're wrong. Ethics and morals do exist. Money exists. Ideas exist in our brain, functionally.
Are all these things _ideas_? Human creations? Sure. The universe is absolutely indifferent to us. But these _ideas_ have real-world impact, and I'm not indifferent to my own suffering.
Societies function at the scale they do right now because there is enough overlap in how I perceive the world and how another random human perceives the world so that even though we don't know each other, we can still cooperate [see e.g. 1 for great discussions on this] e.g. exchange money for goods.
> AI ethics cannot hope to remain in style for long
Again, you seem to be conflating "AI ethics" with a particular ethical stance, let's call it woke humanism, and you seem to think that the people who work on AI ethics work to enforce this belief on others. This is wrong. We're perfectly aware that humans have a variety of ethical preferences, see my previous post. Lots of people who work in "AI ethics" are definitely not woke humanists.
> Accepted standards of 2 years ago are already out of date.
I'm not sure what you're trying to say here. Um, sure, we keep finding better algorithms... no one ever, ever, ever has claimed that their paper is the ultimate algorithm and that no one will find better. But 2 years ago, killing a random person in the street was wrong. It's still wrong today, it was wrong 2000 years ago, and it's going to stay this way for the foreseeable future.
> I'm lost for a solution
The research field of AI ethics exists because we don't know what the solution is!!! Come join us if you're so concerned.
> Honesty is the basis of the whole of scientific inquiry
If you value honesty, then you should value research that tries to make ML models "honest" by revealing how they make the predictions they do and where that fails. I don't understand your antagonism towards ML FATE (fairness, accountability, transparency and ethics) research.
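To make "revealing how they make the predictions they do" concrete, here is a toy sketch using permutation importance on synthetic data (nothing here is about any particular production model, it's just to show the flavour of the tooling):

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for a real dataset
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Shuffle one feature at a time and measure how much the score drops:
# a simple, model-agnostic way to see what a prediction actually depends on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

The same kind of probing, applied to features that encode (or proxy for) demographic attributes, is a big part of what transparency research actually does.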
My points still stand. History has shown us that defining ethics and writing them down merely spreads falsehood. I don't think we are actually better off morally now than 50 years ago. In some areas yes, in others we have regressed. (Look at addiction rates now.)
Now, to categorically state that AI ethics is as diverse as "traditional" ethics can't be true, by definition. (Mine, if you ask.)
As for killing random people on the street never being socially acceptable...
...Just look at europe 80 years ago.
AI Ethical research can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
What about far more stable principles, such as murder and racism, you ask?
They are prone to being overplayed or downplayed. Are state executions murder, or justice? What if the victim/hangman happens to be black? Why should it matter? Just ignore some issues?
That's misleading by omission.
It would be better to just admit
"yes, we at Giant Tech know our ethics are bs, but we had to put something down or our machines won't work.
Maybe we are not ready for advanced AI. Maybe there's a limit on what programmers can do. Yeah. We know. Turns out computers DO have limits. We'll have to find other ways to make money."
"But why should we say that!" cry all the executives when this speech is proposed to them.
"Because it's true, and if we don't act now the company is screwed. And our clients will also get screwed" is the answer of the timid executive who first suggested this.
"How will the truth help us?" respond all the execs in unison.
"If we can keep up the lie for long enough, we shall all long be millionaires and retired before it implodes! We shall long be out of danger! Who cares if some people lose money?"
"Yes, but don't you feel bad for all the shareholders? And how can we possibly fool people for decades to come that our AI isn't bs?" responds the poor executive weakly.
"By creating a fake team and telling everyone they are ethics researchers" they say. "Really they are just pawns to help us earn more money. Fish get eaten by bigger fish you know?"
"can you just help me change a few lines on my press statement?" Asks the first executive.
I think you're missing the point I'm trying to make, which is that developing "fair" algorithms is not about developing algs that are, e.g., pro white-black equality; it is about developing algs that have the option of equality built in. It is then up to the user of the algorithm (you, Google, whoever) to "input" what or who should be equal.
It just so happens that currently the "input" is racial and gender equality. That's a societal choice, and one that is likely to change if e.g. racial equality is achieved and some new inequality arises. Maybe eye-color-based discrimination, who knows.
More generally than "equality", AI Ethics research gives us tools to analyze current methods and see where they fail to meet our ethical standards.
> History has shown us that defining ethics and writing them down merely spreads falsehood
Humans have been trying to improve their own condition for as long as there have been humans. Collectively defining acceptable behaviors is a never-ending task. Does that mean we should not undertake it? Absolutely not!
Writing down ethics isn't about spreading falsehoods, it's about cooperation. Cooperation involves compromise:
> AI Ethical research can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
Laws can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
Culture can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
Morals can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
Do you see the pattern? Things change, that's normal. We still have laws, and culture and morals, but we adapt them to our needs. Are you suggesting we should simply reject anything that changes? You won't be left with much.
I'm still very much interested in improving my own condition. That includes pushing people to behave in ways which I think would do that. People have different interests and their condition is often at odds with other people's condition. This is the foundational difficulty of living in a society of more than 1 individual. Yet we 8 billion humans still manage to be fairly successful at it. I wonder why?
Cultures and morals change. Does that make the morals of the past falsehoods? Of course not. They're just different perspectives on the human condition, probably best suited to the material conditions of the past.
Calling someone today is often seen as rude when a text would suffice. This is due to our material conditions, the ubiquity of cellphones.
> It would be better to just admit ...
You're suggesting we should admit defeat? Give up and let Google maximize profit? AI is a wonderful tool that could improve the material conditions of most of humanity if used correctly. It could also be devastating. I'd rather it not be devastating, so I'm going to continue supporting people who try to do research into aligning AI with whatever ethics we collectively agree on.
> I don't think we are actually better off morally now than 50 years ago
This is pretty sad.
Don't confuse your own cynicism vis-à-vis big tech with some nihilistic historical inevitability. The global improvement of the material conditions of people in the last 50 years has enabled us to start asking for ourselves what morals we actually want on a global scale, rather than this exercise being left solely to a self-interested elite.
Regardless of how better off morally _you_ think we are or aren't now, the space of collective possibilities is now immensely larger, whether you like it or not. That, is wonderful.
So, could you tell me for example, what this relatively often cited paper solves or where it's actually applied to solve a real world problem? https://openreview.net/forum?id=Sy2fzU9gl
I don't know this paper but it doesn't matter. I didn't say that only ethics-in-AI research fails to solve real world problems. It's a research field mostly for academia, not industry, IMHO. Obviously many ethics researchers would disagree with me on this, and that's OK. Partly because they would rather have the option to work at Google with a FAANG salary vs a middle-tier university in the middle of nowhere.
You seem to have an already strongly formed opinion on the topic, and it seems it would be very hard for people to have you even acknowledge that they may have valuable diverging views.
Starting from that point, what do you expect from a discussion? What kind of information would lead you to think again about the situation?
In terms of attracting AI researchers, think of this:
Gebru has very publicly got into fights with Yann LeCun and now with Jeff Dean. If you are building AI, who would you rather build your team around, Dean/LeCun or Gebru? If you are an AI researcher, do you want to join a team where one of the team members is in the habit of aggressively accusing other researchers of racism? Would you be worried that your research might fall within their crosshairs for some reason or another? For example, if you are working on natural language research, and your model ends up doing better with Indo-European languages versus those from other families, do you want to be accused of propagating racist power structures on Twitter?
Is this really true? I don't see ethics-in-ML papers getting the same attention at major conferences as theoretical or experimental breakthroughs in deep / reinforcement learning.
Don't get me wrong, ethics could be hot outside of ML academia, but I very much doubt it's something the majority of grad students in ML are dying to get into.
I guess you are right that potential employees will consider this behaviour in their calculations, for the most part by adjusting their salary demands with an additional "risk adjustment bonus". As the FAANG companies can easily swallow that additional cost and are still incredibly attractive, I doubt there will be a big effect besides losing some value-oriented people. I doubt this will make a difference numbers-wise. Nonetheless I applaud employees sharing their view of a company's inner workings so the rest of us have more information to make an informed decision ourselves - yeah, transparency.
More importantly it has no connection to the bottom line, which is why Google management doesn't seem particularly concerned with disquiet in that research group, as long as it doesn't spread to the rest of the company.
Google and other companies should regard rigorous research and discussion about AI ethics as long-term protection of their bottom lines. If they start launching products and selling services that are found to unfairly favor or disfavor certain groups of people, they will be vulnerable to lawsuits, government regulation, and damage to their reputations.
One of the core issues in AI ethics, really the core issue currently, is that any product you launch or service you run will be found by some subset of the population to unfairly favor certain groups of people. No amount of research will allow Google to build a model so neutral everyone has to agree with it, because people want different things and have different ideas and assumptions about what's fair. As they found in 2019 with their AI ethics board, even basic ideas like "let's listen to everyone" are subject to this dilemma, because some groups feel that it's unfair to listen to other groups.
I think it is important to shift AI ethics to become more of an investment but that requires more tooling to evaluate AI ethics problems and the business risks.
This may not change end of year results, but this kind of research is what gives Google a credible voice when it comes to shaping public discourse and influencing legislative process, for instance.
But when you write something like this, do you also understand that their actual research is widely considered to be of a high quality and very important? So if you agree that ethics is important, would you leave them off a top 10 list (and who would you put on)?
That's a management illusion. Try to replace e.g. someone like Fabrice Bellard, Mike Pall or Claude Shannon. Of course such things happen in big companies, but mostly because management is too limited to properly assess the true value of certain individuals. But the article is actually about a different topic.
That's an ego illusion. It hurts to admit that we're replaceable, but we are. The job might not get done as well, or done in a different way than we'd do it, but it'll still get done.
Both are true: there are supernaturally talented people and also an incredibly wide world.
If you take an intellect so impressive that they are one in ten million, there will still be almost eight hundred of those people in the world.
We are also reasoning from the POV of our own reality. We see the people we did get, but it could be the case that we missed some brilliant minds that do exist in some alternative universe, and came out ahead anyway. There are so many factors in play.
> If you take an intellect so impressive that they are one in ten million, there will still be almost eight hundred of those people in the world.
Intellects aren't fungible. Even if there are 800 Fabrice Bellard-level minds out there, I doubt most of them have honed their brain on the exact problems he's worked on. You can't just find another one-in-ten-million mind and put them to work on the problems of another 1/1e7 mind and expect comparable results.
Essentially it's a clash between the Great Man conception of history and the process version. The Great Man version is easier to understand. You can look at a specific individual and easily conclude that their actions had an enormous impact. For people such as Mao who had sway over billions, it is certainly a conclusion that seems to withstand quite a bit of scrutiny. But any person is a product of their context and we have to deal with multi-factored forces that might be impossible for a single human mind to model or grasp given the quantity of data. This is particularly relevant for scientific pursuits as opposed to political decisions. Newton and Leibniz sound irreplaceable if you read their biographies, but they came up with calculus separately around the same time. The same goes for Darwin and Wallace. If the conditions are ripe, individuals matter less. Technology isn't a predefined ladder like in the civ games, but every civ is at a juncture where so and so technology has a probability of being discovered. It's not unrealistic to assume that if certain lab conditions exist, it's only a matter of time until someone stumbles on to penicillin even if from a historical and emotional perspective it seems like a freak accident.
I can't draw a conclusive answer to these questions following the logical consequence of my own arguments, but at least we have to come at the problem with the knowledge that our own minds are drawn to simple narratives and to individual achievements. Hence assuming replaceability in the absence of very strong evidence to the contrary.
There's a whole set of problems that people can work on. There's solutions for most of them. Some of those solutions aren't very good, but they're the best we have.
Fabrice Bellard has worked on a subset of the problems we have. He's created good solutions for them. But if he hadn't, we would have some other, lesser, solution for those problems. Like we do for the problems he hasn't worked on.
No, you can't expect comparable results. But you can expect some results.
I think we're violently agreeing then, as I said "the job might not get done as well, or done in a different way than we'd do it, but it'll still get done"
Simile: saying “your brain is replaceable”. Beyond the fact that the most likely context is a threat, it is a poor argument: while technically true, what would remain of me would not be meaningfully me. And the surgery is work that would be hard-pressed to generate the expected value, such that the only reason to do it, is either out of anger or as a consequence of irremediable damage.
Companies are stories. The decisions are made internally, but their meaning is narrated externally. If you change the protagonists, the story changes. The case of Uber’s self-driving car division is quite an example of that.
Does the change in Google’s story converge to a positive or a negative light?
>> It hurts to admit that we're replaceable, but we are
The more people, the less the individual is valued. But that does not make the individual less valuable. Unfortunately, for a few years now, respect for the performance and qualifications of others has been declining more and more. This increases the illusion that everyone could be replaceable. Just ask your family if they see it that way in relation to you; the illusion of replaceability definitely ends here.
The job might not get done as well, or done in a different way than we'd do it, but it'll still get done.
If the job isn't done as well, then people aren't as replaceable as you put it.
Excellence can't be replaced as easily. Maybe for certain kinds of jobs yes, but for all jobs? No. If that were the case then we'd be inundated with Einsteins, etc. And we aren't.
How many people have the opportunity to be Einstein?
How many people have the right brain, and the right interest, and write the right paper at the right time?
How many are starving in an underdeveloped country with no access to education, for that matter?
Einstein wasn't necessarily a unique genius standing at the pinnacle of an intellectual mountain. He was a beneficiary of survivor bias. We don't know how many other "Einsteins" there have been, or could have been, because we only tell success stories.
How many people have the opportunity to be Einstein?
Everyone who has access to (public) education, probably. And of those, whoever has a relentless will for achievement. And/or is, by nature, curious about stuff. There's a reason why the lines between genius and mental illness get blurred sometimes. Remember John Nash, Jr.?
How many people have the right brain, and the right interest, and write the right paper at the right time?
How many are starving in an underdeveloped country with no access to education, for that matter?
I wouldn't know but I'd estimate millions.
Einstein wasn't necessarily a unique genius standing at the pinnacle of an intellectual mountain.
Whether you like it or not, he was a genius, and unique in his own way (like everyone else is - even you), along with various other well-known peers of his time and lots of other people before them.
Now, obviously, they, as well as any "proper" scientist, are well aware that none of their work would mean anything if they didn't stand on the shoulders of giants. Science is a branching tree of giant people.
He was a beneficiary of survivor bias. We don't know how many other "Einsteins" there have been, or could have been, because we only tell success stories.
Following your train of thought, then, no one's achievements - even those of people who you claim don't have the "right brain" or "right interest," don't "write the right paper at the right time," are "starving in an underdeveloped country" and "without access to education" - would mean anything.
So, to get back to the subject: replaceability depends on the kind of job. It may be simpler to replace a fast food worker, but a Richard Feynman? an Albert Einstein? or <a name of a scientist whose name isn't publicly known but has made a difference in their field>? I doubt it. Those people made a difference in their respective fields and no one can take that from them. And I'd say the same if it were someone else from other countries, ethnicities, etc.
People are somewhere on the scale of greatness. At some point it becomes harder and harder to find replacements that will be able to get the job done. People are very capable of steering projects into failure.
It's not, at least not in ML for a lab as prestigious as Google AI. They probably have several hundred researchers with excellent publications that would be willing to drop everything and get a FANG salary.
Also this is a management illusion. There is no evidence for this assumption. You don't even know the probability distribution. There is no reason to assume that the percentage is equally distributed across all firms or countries. And anyway, the article is about something else.
No, it's an axiom. It defines a way to make collective/collaborative entities hopefully bigger than the sum of their parts. I think of these things (corporations, groups, movements) as aggregate people, and that's very much what Google is about.
Google deals almost entirely with aggregate people: statistics, algorithms, collective behaviors, machine learning, implementation that's never about individuals but is about larger population trends. Aggregates, not special unique snowflakes.
As such this is not an illusion but an axiom. Google and entities like it (themselves humongous aggregate 'people') MAKE individuals replaceable, the better to be dealing with other entities like themselves. This is only going to accelerate the more they get to bring AI and machine learning into the mix… which by now is long established, nowhere more than at Google.
Maybe an axiom in the sense of being something we assume (because we can't/won't figure out whether it is actually true) and base our decisions on.
Those exceptional individuals are incredibly rare, like one or two in a generation. So you need to be a Shannon-like to not be replaced by some middle manager in a big corporation? Emm, if someone were that accomplished, why would they care about one job? That is the wrong question to ask.
Truth is, if Google thought they were not replaceable, it would not fire them this easily.
Many more people than you expect have at least one exceptional skill; from my observations I would say at least 20 to 30%; the more extraordinary skills per person, the rarer, of course. And if the human workforce really were such a generic, easily replaceable commodity, why do most companies, including Google, go to such great lengths in recruiting, with assessment centers and so on? And why are there so many unemployed IT specialists, for example in Germany, when at the same time the industry associations claim that jobs cannot be filled?
And yet so many people consider themselves important enough e.g. to post comments here. It just seems that it is always the others who are dispensable. For people who make it into management, this tendency even seems to intensify (or it was the prerequisite why they wanted to be in management).
Okay, approximately all of us are replaceable. We can agree there is an epsilon of people who are clearly beyond others. However for almost all the work that has to get done, the actual bar is "can you write decent Python?", not "can you design and implement a novel algorithm for computing Pi?"
I guess it depends on the purpose for which we are all supposed to be replaceable. Nature probably doesn't care which individuals reproduce or are eaten, as long as the numbers are right. Human society with its elaborate specializations and long training periods has added a few more dimensions.
Shannon built on Hartley, as much as Einstein built on Lorentz.
That's not to say these weren't great minds, but the concepts were in the air and the race to formalize them was on; most of the "second places" are today forgotten or their contribution diminished by the modern "winner takes all" mentality, but none of them existed in a vacuum.
The history of science is fraught with independent discoveries, from calculus to the telephone, up to and including the mass-energy relation and the basis of what later became quantum mechanics.
If A and B made the same discovery independently, that is evidence that A was replaceable, but that C built on D is not evidence that C was replaceable.
Yes, I stopped reading when it mentioned the last good paper was from 2017. This is simply not true. I don't have time to go through all of their papers right now, but as someone else mentioned, the protein folding one was a real breakthrough. They also have lots of great stuff in the NLP space (something similar to GPT-3 about 2 years earlier). Also tons of stuff on the actual training architecture/methods.
Edit: I want to add that saying the title has nothing to do with the article is not helping the case. I finished reading the article in case I was being unfair, but I still stand with my original comment.
It says the high point is 2017, not the last good paper. There are of course other good papers coming out of Google. But the novelty is dropping and the angst is increasing.
But this is false as well. BERT was a great breakthrough in NLP in late 2018. IMO a bigger breakthrough than AlphaGo, but less media friendly. It has freaking 16,000 citations and it's used all across industry.
Journalists were asking people from DeepMind why winning at Go is important. They said because it may lead to breakthroughs in e.g. medicine. Well, it didn't, at least not yet and not directly.
But we still got AlphaFold. And AlphaFold is the type of breakthroughs DeepMind is meant to make. Not playing games.
I think judging the decline of a rapidly evolving field by literally one of the biggest breakthroughs in its entire history is not good. I also don't like clickbait and I think the audience here generally doesn't either, even if you justify it at the end.
To be fair, you wouldn't know that from the article's subtitle:
> What does Timnit Gebru’s firing and the recent papers coming out of Google tell us about the state of research at the world’s biggest AI research department.
I read the article and I don't get what its focus is. It seems like disconnected rambling about vague deep learning issues, with Gebru's name interspersed several times in the text as if to suggest her relevance.
Please read the article. I address some of the issues raised. My point is that Gebru's firing is symptomatic of some deep problems Google are experiencing. Thank you.
> I don't want to downplay the deep institutionalised sexism and racism that is at play in Gebru's firing — that is there for all to see.
This is a very badly written and uninformed article, and sentences like these essentially illustrate the thinking here (it's imploding because I don't like it).
Here is an alternative reading: Google is cleaning house of toxic activists who are not interested in serious ethics research but use it as a vehicle for their ultra-progressive political agendas.
But isn't ethics inextricably linked to one's worldview generally? We have some common ground nowadays (no killing in so-called "civilized" countries), but to me it seems like you are just proposing the usual neo-xyz argument: things can't be changed (which - originally developed in the think tanks of Cold War USA under heavy fascist influence - has now been the main global narrative without any institutionalized counter for 30 years).
I read the article and I'm not following the argument delivered at all.
There is no real proof of this.
I'm following and reading research @ google (stuff like this https://ai.googleblog.com/ and other sources) for ages now and NOTHING indicates an 'implosion'.
It is strong research with real and constant results.
I have no idea why the author would even consider using the word 'implode'.
It's not rocket science that data is biased, and it will just continue to be researched and a solution will be found. For the single reason that biased systems in certain areas will not deliver the results you need to use them properly.
> For the single reason that biased systems in certain areas will not deliver the results you need to use them properly.
Well, they produce happy numbers for papers and, depending on the brainwash level of a population, they might also sufficiently often do the "right thing" towards minorities that no one cares too much about, independent of whether it would stand a chance against objective evaluation.
Nothing symptomatic about firing toxic employees. The only thing imploding is AI ethics research. There are good people doing quality research who will now have a much harder time finding a job in industry because of the bad rep these Google employees "contributed" to the field.
AlphaFold may bring millions if not billions to DeepMind, Google and Alphabet. Figuring out the structure of a single protein may cost up to $100,000.
It is THE biggest change that Google's search algorithm has ever been through, I would assume. And pushing such a fundamentally different model to ALL their English traffic is pretty telling in itself about how much of an improvement Google had been seeing.
This is easily billions of ROI for Google, if not tens of billions.
Alphafold is transformational for life sciences. I find it hard to articulate how much it's worth - maybe the sum of the top three life sciences companies today?
Honestly - the discussion is over, the AI folks won.
You must not have read the article, which isn't about Gebru's (and now Mitchell's) firing. There would be a lot to say about the ongoing credibility of any of Google's statements or research concerning ethical AI at this point, and lots of folks have said those things.
This article is an analysis of the extreme weaknesses in the current seemingly-productive approach to ML language model research. The intro anecdote about the failure of game-playing models to handle games with representational elements or indirect rewards is extremely important. But the failure of the Big AI community to recognize those same failures in its approach to building language models is the pending crisis that the article's title refers to. Gebru's firing is not the cause of the impending implosion of Google's AI research, but rather a leading indicator and warning sign.
The article discusses some concrete challenges in AI research, but the author's only argument for why Google won't be able to tackle these problems is that they fired Gebru. As he mentions in the article, it's not that the Big AI community has failed to recognize these issues; "40 Google researchers, from throughout the organisation" discussed some of them just a few weeks before the Gebru controversy.
Perhaps equally importantly, the author made a very serious accusation that Google has "deep institutionalised sexism and racism". Are we really intended to just gloss over that and treat it as unimportant filler?
Setting aside the ethics issues for the moment, the criticism that Google is neglecting alternatives to neural networks is probably reasonably fair.
But on the other hand, research labs or not, Google is still a commercial entity driven by a shorter term horizon than your average academic.
Google Research has probably subconsciously drifted towards areas (or at least applications) that can tangibly benefit Google in the near future. Neural networks might not be The Answer (TM), but they can deliver results today, and there's still untapped potential. Speech recognition, search, speech synthesis - all core Google products - have all benefited from neural networks, not to mention the broader applications like protein folding for DeepMind.
I'm hesitant to find fault in a corporate group that's doubling down on the stuff that's actually delivering results, even if it's probably not the path to AGI.
I probably wouldn't be so lenient with OpenAI, because they are (were) purely a research outfit, openly committed to AGI.
In my opinion it's Peter Naur who has found the key issues that keep AGI out of reach for us. Check out his Turing Award lecture. Too bad that it is the von Neumann architecture itself that is keeping us from reaching our goal, because it is still a crude emulator for information processing as it is really done. Like in nature.
Thank you for giving a constructive answer, because quite frankly, after reading the article my reply was going to be much more acidic towards the author, who in my mind has some kind of agenda.
I'm always thinking that Google research is going to stagnate and yet they continue to show impressive results. So, yes, I would love to see something more original than yet another NN, but on the other hand... amazing.
> I don't want to downplay the deep institutionalised sexism and racism that is at play in Gebru's firing — that is there for all to see.
Really? Where is that to see? You weaken your whole case with this kind of casual reference to "oh, and she was also a black woman", so racism and sexism apply.
It's a type of crying wolf that loses you more people on what is the potentially important issue at hand: her work at Google.
I really wish there were repercussions for this kind of casual libel, as it can be thrown around in today's climate without a second thought.
It's tough to draw a straight line from "deep institution" (redundant?) to specific examples. It shows up more in background statistics than in anecdotes. When we're focusing on N=1, I'd rather see concrete complaints (and no, internal forum posts do not count).
I don't understand this statement. In my mind, internal forum posts are among the most revealing forms of evidence about the culture of an organization.
Too subjective; what looks like malice could just be basic misunderstanding.
Simple example: if I reply only to the original forum post, but my message is posted after yours, then you might assume I'm excluding you intentionally, probably because I don't like you personally; you get mad. This assumption is wrong and harmful to productive conversations. The simplest explanation is that I began writing my reply before you posted; no apology is warranted. Another possibility is that I'm simply a careless or excitable type of person who usually replies quickly before reading other replies; an apology is warranted, but it wasn't a personal attack!
The implosion argument in this article is solely ~based on~ triggered by Google's botched handling of its AI Ethics researchers. To argue that this will implode the complete AI organization seems like a wild stretch. On a normal day, all the arguments padding the article would fall on deaf ears.
So, the answer to the question is no. Instead, AI Ethics research will probably implode. AI research, probably not.
For AI research to implode, major AI research leadership needs to be seen leaving. And I am not talking about the corporate leadership but the research thought leadership, whose sole purpose is to shield the junior/senior researchers from corporate BS (and who are themselves well-known researchers).
Edit: It would perhaps be more appropriate to say that the botched handling of the AI Ethics researchers solely triggered the article.
Research isn't about knowing all the answers or always being successful. Negative results are just as important as positive ones. The author does not even mention the recent protein folding result.
I am the author. I find this "axe to grind" response interesting (there was another above). I am an independent researcher, a professor at a university, and I have absolutely no reason to be disappointed with Google. My knowledge of this subject goes considerably deeper than what I write in the article, but it is important to get the points across in an easy-to-follow manner; thus I use certain literary devices. But I don't think it is fair to describe my opinion as an "axe to grind". I might (of course) be wrong, but there is no agenda.
I'm not certain that claiming deep knowledge of the subject and handwaving about literary devices is a terribly useful response when you write an article suggesting decline while completely ignoring a bunch of significant more recent developments. That's not a literary device, that's cherry-picking facts to support a preconceived conclusion.
Paying more attention to other, more recent, developments might also have helped you avoid writing a paragraph where you talk about deepmind not being able to play games where it needs to perform multiple actions to obtain a reward - I think deepmind managing to beat professional starcraft players is clear evidence that this conclusion is questionable at best.
As I have said before, whenever someone talks about ethics and AI, I ask the question "Whose ethics? Ethics according to who?"
It's a fantastic chance to export your viewpoint onto the entire world when this ethical AI begins to underlie various services. For example, "Members of Group X are inferior/should die" being flagged as unethical while "Members of Group Y are inferior/should die" gets past the AI ethics check is a great way to slide this past a ton of people. No need for argument, it just comes pre-censored on the commenting system.
AI, as a field, is going to have to face a painful question: what if we get answers we do not like? I'm Irish enough by extraction, so I'll use it as a self-denigrating point ... what if AI found that the Irish, just by genetics, did actually tend toward alcoholism and drunkenness? Would we accept that, or would we say "No, that's wrong. Go back to the drawing board until we get the answers we want"?
My guess is that we are going to look for the latter. AI is going to give us the answers we want, because that's how we are going to build it. AI won't be constrained by ethics so much as it will by the discovered truths we are unwilling to accept, whatever they may be. And that means there's a space for people with a thousand tiny axes to grind to be employed. It will be a chance to shape what "truth" in the Orwellian sense will mean.
As the author admits, the headline is a clickbait question.
Indeed, the article highlights some issues with AI research, in particular those which can lead to ethical problems when AI methods are implemented in consumer products. These are by now well-known and important issues and people should find a way to resolve them.
Then, in its final paragraph, the article suddenly claims to have answered its title question in the affirmative! Am I alone in thinking that the issues, valid as they could be, do not obviously spell doom for the entire program?
This article is correct about underspecification, but completely dodges the argument on language and understanding, drawing the hackneyed, shallow distinction between "pattern matching" and "understanding".
How do you define "understand"? People just use the word "understand" to fill in for the magic stuff that they say humans can do but machines can't. They also sometimes use "symbolic reasoning" that way, but the best work on machine understanding and symbolic reasoning is done by DeepMind:
The examples in the original article are about truth, not understanding, mainly because we don't have anything approaching a formal definition of 'understanding'. But if anyone does, it is probably the deep learning community, where conferences in the last 3-4 years have had hundreds of papers working carefully to examine the nature and structure of the knowledge encoded in these kinds of systems.
Yeah, tech and AI is pretty unpopular and they'll probably have a hard time finding people who want to work there since the entire planet is now independently wealthy
Maybe "AI Ethics" research will be affected, and I believe this is a good thing. I recently read David Graeber's Bullshit Jobs; he cites administrative jobs in universities and managerial, duct-tape IT jobs in industry as examples.
Now it occurs to me that AI ethics research is just one more example of a proliferating bullshit job. Just recycle the same ideas and conclusions a thousand times and you have years of research and hundreds of papers generated out of it.
And a mutual admiration society of university departments, Twitter fans and this type of researcher at tech companies keeps everyone in high esteem as world-leading authorities.
Every time critics try to lump conspiracy theories, LGBT representation, and fake news together just to make a general point, it directly puts me off. I think that's mostly a cheap shot, and lazy.
I'm out of the loop on the whole Gebru situation; from what I know she was researching ethics with respect to AI, so I don't really get the whole "novel idea/work" part in the last paragraph. I often get the impression that such critics never see the rapid developments in AI as progress as long as they don't cover the topics they would like to see focused on. It will never be good enough and always a concern because of X.
Nice try, but still a miss. Building assumptions into NNs won't work for anything complex, like the ladders and keys which, according to this author, are built in to human brains? Please. Yes, NNs are still in their infancy, but they are clearly foundational to general intelligence. They will probably require many additional discoveries about how they need to be connected, and maybe even a few updates to the model of the neuron, but they aren't going away.

Secondly, of course a language model is going to parrot back the data it is trained on; that's a major goal, i.e. given a giant dataset, answer these questions. It's also how a lot of casual human conversation works: we generally just parrot back things we've read or heard. The interesting parts are the new syntheses that human brains come up with, but even those are standing on the shoulders of the dataset, so to speak.

You'll never get a truly neutral view of the data; I don't even know what such a thing would look like - maybe just pure ignorance of the data could be considered neutral? Biases, aka heuristics, are a core part of learned intelligence. They can of course be flawed or entirely incorrect for a given environment, but they serve a purpose, and you can't do away with them or the ability to form them based on observed data. You can optimize them for specific goals, but you can't get by with just one.
> Please. Yes, NN are still in their infancy but they are clearly foundational to general intelligence.
This calls for a big fat "citation needed". What we call neural networks have very little to do with actual neurons, so I fail to see how this is supposed to be "clear". And it's been in its infancy since the sixties, really?
Technically, the perceptron was developed in the 50s, and it's kind of hilarious to read the original press release which claimed it was going to be conscious of its own existence.
I'd say it's clear since we continue to make significant progress in replicating the behaviour of biological NNs, with the main limitation being the amount of parallelism we can throw at it. If you read closely, I stated that our model of the neuron may require modification, but the basic idea is correct. Our implementation of artificial NNs is in its infancy; we still don't fully understand how to structure these networks, and the way we feed them data is nowhere close to how biological NNs get theirs from the environment, but the current SOTA is necessary because we don't know how to build something capable of operating in the same way. We still have a lot to learn. We're competing with hundreds of millions of years of evolutionary exploration using our fairly rudimentary understanding of biology. 60 years is nothing, especially considering progress is not inevitable and this area of science has major boom-and-bust cycles where not a lot gets done.
What about electric cars, since they've been around for 180 years? Is it wrong to say that EV tech is in its infancy? We're seeing significant innovation in electric vehicles and there's so much room for it to continue improving.
Technology goes through periods of explosive innovation as well as periods of very little growth. NNs were dead until they weren't, and they have been having quite the renaissance since 2012. Currently they seem to be getting us closest to general intelligence, and I'm excited to see where they go from here and whether anything else comes along to supplant their place on the throne.
It reminds me of the recent Paul Graham essay where he talked about discovering, in graduate school, that the "knowledge based" A.I. (e.g. SHRDLU) was a "fraud."
Specifically: we could build small worlds in which syntax and semantics lined up (like the Zork parser) so that a program seemed to understand natural language, but there was a point beyond which you couldn't go.
I think some graduate students now will be disillusioned by the claims being made for deep learning today, for the same reason: those systems will quickly master the things they are good at and leave a residue of even more difficult problems for the 2050s and beyond.
Seriously though, even if the points in the article were true, none of them would imply any kind of implosion or significant disruption in Google's research.
> Maybe one day we will see the transition from Hinton (ways of representing complex data in a neural network) to Gebru (accountable representation of those data sets) in the same way as we see the transition from Newton to Einstein.
This is so loaded and gives Gebru so much undeserved credit that it undermines the author's credibility.
I don't think it's about whether Gebru "deserves" the credit, it's just a bad comparison on so many levels. Roughly like: "the transition from Ronaldo to Magnus Carlsen may be seen in the same way as we see the transition from Pelé to Lebron James". I also think it would be fair to speak in terms of communities and not of individuals, at least in modern science.
> For these types of games, the algorithms can’t learn because they require an understanding of the concept of ladders, ropes and keys.
The underlying problem of AI is that we've taken something we don't really understand and applied it. When something like that doesn't work, what can you really do?
As a casual observer, I get the impression that Google's corporate attitude toward research on Ethical AI is fundamentally analogous to Exxon's view of climate research in the 80s (and since).
Namely, that Google, correctly, views research in the area as potentially undermining key revenue sources in the short and mid term. It is sad, but unsurprising. Government funded research and regulators will most likely be the force driving AI to be more ethical -- if that's going to happen. That's what we see whenever corporate and public interests collide head on.
I'm not sure I get the logic of the article. Google AI Research is going to implode because we know what the current models can't do? I thought we do research precisely because we want to expand the boundary of our knowledge. That is, shouldn't a research institution implode only when there is nothing left to research?
And why is Gebru relevant in this article? Does she do quality research, and do those research results have anything to do with the claimed implosion?
This article makes no sense to me. If the premise were that Google AI Ethics research will implode, then yes: removing key members of the team has affected team morale and created a toxic environment for the remaining researchers. However, the post seems to imply that stifling the Stochastic Parrots paper is somehow proof that Google AI stifles innovative research that is critical to AI progress. This is quite a weak claim, even before looking at the logic of the arguments that get to that point.
Google AI, Google Brain, and DeepMind are all different groups at Google with different mandates and research goals. While what's happening in the Ethical AI team is troubling, it's a large and unfounded leap to claim it'll affect research productivity for the other teams.
Digging deeper, the article is confusing and sometimes plain wrong in its assessment of AI research. Broadly, the deep learning and RL approach to AI has been critiqued for its lack of semantic and symbolic understanding. These critiques are not Google-specific, and the article's examples of them are terrible.
The first example, the limitations of AlphaZero on Montezuma's Revenge, is a bad one. The author implies that RL failed because it didn't understand ladders. But later approaches still solved the game by using stochastic exploration strategies, not by introducing conceptual knowledge to the model, which the article implies is the key limitation.
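For concreteness, here is a rough sketch of what I mean by a stochastic exploration strategy: a toy count-based novelty bonus. This is my own illustration, not the actual method from any of those papers, and the hash-based state key and scale factor are arbitrary choices.

```python
# Toy count-based exploration bonus: the agent gets extra reward for visiting
# rarely seen states, with no built-in concept of ladders, ropes, or keys.
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def exploration_bonus(state, scale=0.1):
    """Intrinsic reward that decays as the same state is revisited."""
    key = hash(state)              # assumes states are hashable, e.g. tuples
    visit_counts[key] += 1
    return scale / math.sqrt(visit_counts[key])

def shaped_reward(env_reward, state):
    """Sparse environment reward plus the novelty bonus used during training."""
    return env_reward + exploration_bonus(state)
```

The point is that the bonus pushes the agent toward unvisited parts of the game without encoding any concept of what a ladder or key is.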
On language modelling, it's weird that the article cites GPT-3 as problematic, given that GPT-3 was developed at OpenAI, not Google. Also, GPT-3 is pretrained using next-word prediction, which considers only left context and is far more limited than BERT, which considers bidirectional context and produces richer word-level embeddings. That being said, the Stochastic Parrots paper does specifically critique BERT.
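To illustrate that difference, here's a minimal sketch assuming the Hugging Face transformers library and its standard public checkpoints (nothing specific to the paper):

```python
from transformers import pipeline

# Causal LM (GPT-style): predicts the next word from left context only.
generator = pipeline("text-generation", model="gpt2")
print(generator("The model was trained on a large", max_new_tokens=5))

# Masked LM (BERT-style): fills in [MASK] using context on both sides.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The model was trained on a large [MASK] of web text."))
```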
But it's not a new critique. Emily Bender, the other major co-author, is a computational linguist who has always been critical of deep learning approaches to NLP. Bender, along with Gary Marcus and many others, has called for AI that incorporates symbolic and linguistic knowledge and has been critical of purely data-driven deep learning approaches. Stochastic Parrots is not new in its critique of large language models; it just provides newer evidence specific to the current state of language model research.
So I'm not sure how any of this is a signal that Google AI is imploding. The broader trend in AI is not just to throw more compute at bigger models; it just happens that large models work well for OpenAI and Google on specific problems. Google also has one of the largest knowledge graphs, and there is an open line of research that combines symbolic knowledge from knowledge graphs with deep learning methods. There is also active research, both at Google and elsewhere, that aims to make current deep learning approaches more "intelligent" by using linguistic and symbolic knowledge.
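As a toy sketch of that last point (my own illustration, not any specific Google system): one simple way to combine knowledge-graph information with a neural model is to look up an entity embedding and concatenate it with the text representation before the final layer.

```python
import torch
import torch.nn as nn

class KGAugmentedClassifier(nn.Module):
    """Toy model: concatenates a KG entity embedding with text features."""
    def __init__(self, num_entities=1000, kg_dim=64, text_dim=128, num_classes=2):
        super().__init__()
        # In practice these would be pretrained KG embeddings, not learned from scratch.
        self.entity_emb = nn.Embedding(num_entities, kg_dim)
        self.classifier = nn.Linear(text_dim + kg_dim, num_classes)

    def forward(self, text_features, entity_ids):
        kg_features = self.entity_emb(entity_ids)                  # (batch, kg_dim)
        combined = torch.cat([text_features, kg_features], dim=-1)
        return self.classifier(combined)

# text_features would normally come from a text encoder such as BERT.
model = KGAugmentedClassifier()
logits = model(torch.randn(4, 128), torch.randint(0, 1000, (4,)))
```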
I'm confused as to how Google AI research is imploding. Google PR attempting to censor Stochastic Parrots (which was still published) because of bad PR optics has nothing to do with active research questions elsewhere at Google Brain, DeepMind, and Google AI.
The problem is that AI ethics research is an esoteric academic field that has little real value in industry. I will explain.
Take the famous paper the article mentions. I read the paper. It's full of nonsense. It talks about the environmental costs of training huge models. Give me a break. Then there's the problem of biased datasets. Yes, everyone knows that if you train on racist text you're going to get a racist LM. AI researchers are not stupid; they don't need ethics research to figure this out. The paper even gives an example of an Arab man who got arrested because the Israeli police used an MT system that made a translation mistake. The problem is not AI; the problem is the stupidity of the Israeli police, who didn't verify the translation before arresting someone (he was released, of course). Image recognition is another issue AI ethics researchers like to bring up: "A black man was falsely identified!" But when Yann LeCun said it's a training data problem, the AI ethics researchers exploded: "No! AI models are racist!" Etc.
Now, if you have to work with a biased dataset, then ethics/fairness research might have some value in guiding you on how to build fairer models. But industry, and especially companies like Google, cannot take chances and almost always resorts to better data as the solution.
I would say the QAnon example demonstrates how useful the model actually is. It has been fed some [false] knowledge and answered in perfect accordance with what it has been taught. Doesn't this mean it can give useful answers to real questions if taught real (scientifically proven) facts?
“Is there hope for Google Brain? One glimmer can be found in the fact that most of the articles I cite here, criticising Google’s overall approach, are written by their own researchers. But the fact that they fired Gebru — the author of at least three of the most insightful articles ever produced by their research department and in modern AI as a whole — is deeply worrying. If the leaders who claim to be representing the majority of ‘Googlers’ can’t accept even the mildest of critiques from co-workers, then it does not bode well for the company’s innovation. The risk is that the ‘assumption free’ neural network proponents will double down on their own mistakes.”