“Highlighting risks without pointing out methods for researchers and developers to understand and mitigate those risks misses the mark on helping with these problems”
The whole paper was a nothing burger wrapped in social justice language, with asides about how global warming is Actually Racism because of disparate impact (interesting, but not an ML topic).
If the problems aren't novel and you're proposing zero solutions, it shouldn't be a paper.
I wonder when the “performative wokeness” bubble will burst.
You're right, this wouldn't have happened if she were a straight white male - if she were a straight white male, there wouldn't be comments like yours making asinine assumptions about how good she was at her job.
If anything, straight white males are trashed harder around here.
That's the only problem I've ever had with having a woman on the team: the fear of saying something that offends someone, her or others.
Poorly written papers are a regular occurrence in any field - that doesn't justify attributing bad faith or character issues to Dr. Gebru.
These are ridiculous claims, and it’s fair to respond to them by saying, “well, what exactly do you imagine a solution or mitigation looks like?”
Essentially, by the nature of how specious Gebru’s stated problems are, they demand clarity over what an “ethical solution” even is, conceptually, and why everyone would have to agree.
For example, you could discuss economies of scale or train-once-finetune-everywhere approaches with GPT that reduce total energy needs. Or you could discuss how researchers can register the corpus they use and the snapshot of time it was grabbed, with an open understanding that as long as the methods and data are reproducible, there is no research ethical issue with studying that corpus, no matter how much bias or lack of woke vocab a given person believes it has. (And also, nobody is required to just accept activist language as important or valid.)
Gebru did none of this. The article could literally be summed up by Gebru saying, “I think <supposedly shocking evidence> is bad, therefore its connection to something in ML is bad.”
E.g. “I think, subjectively, that the raw energy use to train GPT is bad. Here are some shocking comparisons. Therefore GPT is bad.”
It’s incredibly unrigorous and juvenile. Dean’s comment that it needs to clearly state mitigations is actually a super generous, polite way of saying the paper is just subjective amateur hour.
How do you even compare between the two? MSR closed an entire lab of great researchers out of the blue, researchers who didn't do anything wrong. Here you have two employees going against their company and shitting on it publicly. The only researchers who will not want to work at Google after this saga are the ones Google is better off without.
Perhaps you should pay less attention to VCs and more attention to governments and academic institutions, who for example in Canada are investing tens of millions of dollars into AI ethics/FATE/AI for good research.
Sometimes, the point isn't just to make money, it's to actually improve humanity.
I'm not sure now what you think AI ethics research is. Do you think systemic discrimination is not a real world problem?
AI systems are trained on data. There's an abundance of English data, which is why systems are often biased to work better on English. Similarly, an image recognition system might be biased if you don't provide it with data representing all demographics. There's nothing new about this, and you don't need AI ethics research to solve these issues.
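As a minimal, hedged sketch of the kind of demographic bias described above (the function and toy data here are invented purely for illustration), one simple diagnostic is to break a model's accuracy down by group; a large gap between groups is a basic signal that the training data underserved someone:

```python
from collections import defaultdict

def per_group_accuracy(groups, y_true, y_pred):
    """Accuracy disaggregated by demographic group.

    A large gap between groups is one simple signal that a
    model works much better for some demographics than others.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for g, y, p in zip(groups, y_true, y_pred):
        total[g] += 1
        correct[g] += int(y == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical toy data: the model is right far more often on
# the well-represented group "A" than on the minority group "B".
groups = ["A", "A", "A", "A", "B", "B"]
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1]

acc = per_group_accuracy(groups, y_true, y_pred)
```

Nothing about this requires new theory, which is partly the point of the comment above: the check is trivial; deciding which gaps matter, and what to do about them, is where the disagreement lives.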
Focusing on AI ethics thinking it has impact on systemic discrimination, instead of focusing on real issues that cause systemic discrimination, is my main issue with all of this.
What are you talking about? This is exactly the kind of research that's classified as AI ethics: "solving these issues".
> instead of focusing on real issues that cause systemic discrimination
Identifying which ML models _actually running in production_ cause systemic discrimination (e.g. as you mentioned poor image recognition, bail predictions, etc.) is exactly focusing on real issues that... cause systemic discrimination.
> AI is not causing systemic discrimination
This is simply not true. Bad ML models have an impact on systemic discrimination right now, in that they amplify it.
> instead of focusing on real issues that cause systemic discrimination
It's a fallacy to think we can't do both, there's enough humans. Both making better AI and making better societal systems.
There's nothing systemic about these issues. I already mentioned it's a data problem. Nothing new. It's very easy to build a fair image recognition system by representing all demographics. And even then AI systems will continue to make mistakes. Some AI ethics researchers cherry-pick those mistakes to justify their entire research.
I wish it were that easy. Unfortunately, reality is more complicated, as it tends to be [1,2,3,4].
> Some AI ethics researchers cherry pick on those mistakes to justify their entire research.
This is a weird statement. This is like saying police cherry-pick criminals to justify their existence.
Do you not believe in harm reduction? Don't you think some part of AI research should be dedicated to minimizing how many "AI systems will continue to make mistakes"?
I do agree that in the real world datasets are often biased because they represent the real world... and there are indeed modeling approaches to address such issues (e.g., designing a loss function to up/down-weight certain types of examples). There's nothing new about this; it's been known in ML for decades.
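The up/down-weighting of examples mentioned above can be sketched minimally as follows; this is an illustration of the general idea, not any particular paper's method, and the group labels and weight values are invented:

```python
import math

def weighted_log_loss(y_true, y_pred, weights):
    """Per-example weighted binary cross-entropy.

    Up-weighting examples from an underrepresented group makes
    errors on that group cost more during training, nudging the
    model to fit it better despite having fewer samples.
    """
    total = sum(
        w * -(y * math.log(p) + (1 - y) * math.log(1 - p))
        for y, p, w in zip(y_true, y_pred, weights)
    )
    return total / sum(weights)

# Hypothetical example: the last example comes from a group that
# is underrepresented roughly 4:1, so it gets weight 4.0 to
# balance its influence on the averaged loss.
y_true  = [1, 0, 1, 0, 1]
y_pred  = [0.9, 0.2, 0.6, 0.4, 0.7]
weights = [1.0, 1.0, 1.0, 1.0, 4.0]

loss = weighted_log_loss(y_true, y_pred, weights)
```

In practice this is usually passed to a framework's loss or fit routine as per-sample weights rather than hand-rolled, but the mechanism is the same decades-old reweighting trick.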
But that's what we're all doing. By being "against AI ethics", you are effectively pushing technology that enforces _your_ ideology.
Your ideology seems to happen to be different than mine, but you would be naive to think that the status quo is somehow "ideology-free".
Do you still think it's a horrible idea? Try being a bit less deontological and a bit more consequentialist. Doing nothing has consequences too.
If you're saying I'm a racist, I'm not. Is everything that some people say is racist actually racist? No. Is racism a problem? Absolutely!
The status quo, however, should be changed by people, not people with machines. Doing nothing, in the sense of not using obscure algorithms to force people to think a certain way, is better, in my opinion. You can't call something ethical just because you think it should be; it must be argued out. Using AI to shut out some more of that argument will only create a universal standard, not necessarily the correct one.
I wasn't, sorry if that's what you interpreted.
> The status quo however should be changed by people,not people with machines
I'm not sure I understand what this means. People/companies own machines (and ML models) and use them. So shouldn't we make sure that the machines' decisions align with what people/companies _want_ them to do? (i.e. that the people's ethics align with the ML model's ethical consequences; I'm 100% sure that people who deploy "racist models" don't do it on purpose or out of malice)
> You can't call something ethical just because you think it should be; it must be argued out
On one hand this sounds like a strawman. No one thinks that something is ethical because someone randomly declared it so.
On the other hand... ethics are a human construct, and will continue to evolve as our culture evolves over decades and centuries. Shouldn't we construct ML models which are flexible in that they can align themselves with the ethics we collectively decide? We don't know how to do that yet!
> Using AI to shut out some more of that argument will only create a universal standard, not necessarily the correct one.
You seem to be under the impression that the field of AI ethics is dedicated to brainwashing people into some particular unpopular moral philosophy. This is simply untrue. Within the field of AI ethics there is a lot of diversity of thought and disagreement on how human morals should be "encoded" so that AI can "align" with these morals. And I'm using the plural of morals because obviously there will never be a humanity-wide consensus on ethics, and if AI is to be deployed in the world it needs to reflect this diversity.
Here's an example of AI people disagreeing if you don't believe me: https://jacobbuckman.com/2021-02-15-fair-ml-tools-require-pr...
Codifying ethics perpetuates falsehood. Every single generation in history believed that they "had it", only to be denigrated as hopelessly misguided by the next generation. We are making the same mistake; only fewer people are killed right now, so it looks like we are more successful. Remember the pacifism of the inter-war years? It bred fascism. The pendulum swings.
AI ethics cannot hope to remain in style for long, while it will almost certainly exist for far too long. Accepted standards of 2 years ago are already out of date.
I'm lost for a solution.
I do think that the less AI is claimed to be ethical, the less it will be trusted, which is the best cure I can think of. Honesty is the basis of the whole of scientific inquiry, and is probably scarcer in Google's ethics research department than anywhere else in the building. (Programs don't run if the math's wrong; the same goes for economics.)
Are all these things _ideas_? Human creations? Sure. The universe is absolutely indifferent to us. But these _ideas_ have real-world impact, and I'm not indifferent to my own suffering.
Societies function at the scale they do right now because there is enough overlap in how I perceive the world and how another random human perceives the world so that even though we don't know each other, we can still cooperate [see e.g. 1 for great discussions on this] e.g. exchange money for goods.
> AI ethics cannot hope to remain in style for long
Again, you seem to be conflating "AI ethics" with a particular ethical stance, let's call it woke humanism, and you seem to think that the people who work on AI ethics work to enforce this belief on others. This is wrong. We're perfectly aware that humans have a variety of ethical preferences, see my previous post. Lots of people who work in "AI ethics" are definitely not woke humanists.
> Accepted standards of 2 years ago, are already out of date.
I'm not sure what you're trying to say here. Um, sure, we keep finding better algorithms... no one ever, ever, ever has claimed that their paper is the ultimate algorithm and that no one will find better. But 2 years ago, killing a random person in the street was wrong. It's still wrong today, it was wrong 2000 years ago, and it's going to stay this way for the foreseeable future.
> I'm lost for a solution
The research field of AI ethics exists because we don't know what the solution is!!! Come join us if you're so concerned.
> Honesty is the basis of the whole of scientific inquiry
If you value honesty, then you should value research that tries to make ML models "honest" by revealing how they make the predictions they do and where that fails. I don't understand your antagonism towards ML FATE (fairness, accountability, transparency, and ethics) research.
As for killing random people on the street never being socially acceptable...
...Just look at europe 80 years ago.
AI Ethical research can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
What about far more stable principles, such as murder and racism, you ask?
They are prone to being overplayed or downplayed: are state executions murder, or justice? What if the victim/hangman happens to be black? Why should it matter? Just ignore some issues?
That's misleading by omission.
It would be better to just admit
"yes, we at Giant Tech know our ethics are bs, but we had to put something down or our machines won't work.
Maybe we are not ready for advanced AI. Maybe there's a limit on what programmers can do. Yeah, we know. Turns out computers DO have limits. We'll have to find other ways to make money."
"But why should we say that!" cry all the executives when this speech is proposed to them.
"because it's true, and if we don't act now the company is screwed. And our clients will also get screwed" is the answer of the timid executive who first suggested this.
"how will the truth help us?" respond all the execs in unison.
"if we can keep up the lie for long enough, we shall all long be millionaires and retired before it implodes! We shall long be out of danger! who cares if some people lose money?"
"Yes, but don't you feel bad for all the shareholders? And how can we possibly fool people for decades to come that our ai isn't bs?" responds the poor executive weakly.
"By creating a fake team and telling everyone they are ethics researchers" they say. "Really they are just pawns to help us earn more money. Fish get eaten by bigger fish you know?"
"can you just help me change a few lines on my press statement?" asks the first executive.
It just so happens that currently the "input" is racial and gender equality. That's a societal choice, and one that is likely to change if e.g. racial equality is achieved and some new inequality arises. Maybe eye-color-based discrimination, who knows.
More generally than "equality", AI Ethics research gives us tools to analyze current methods and see where they fail to meet our ethical standards.
> History has shown us that defining ethics and writing them down merely spreads falsehood
Humans have been trying to improve their own condition for as long as there have been humans. Collectively defining acceptable behaviors is a never-ending task. Does that mean we should not undertake it? Absolutely not!
Writing down ethics isn't about spreading falsehoods, it's about cooperation. Cooperation involves compromise:
> AI Ethical research can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
Laws can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
Culture can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
Morals can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can it is not sustainable for any long period of time.
Do you see the pattern? Things change, that's normal. We still have laws, and culture and morals, but we adapt them to our needs. Are you suggesting we should simply reject anything that changes? You won't be left with much.
I'm still very much interested in improving my own condition. That includes pushing people to behave in ways which I think would do that. People have different interests and their condition is often at odds with other people's condition. This is the foundational difficulty of living in a society of more than 1 individual. Yet we 8 billion humans still manage to be fairly successful at it. I wonder why?
Cultures and morals change. Does that make the morals of the past falsehoods? Of course not. They're just different perspectives on the human condition, probably best suited to the material conditions of the past.
Calling someone today is often seen as rude when a text would suffice. This is due to our material conditions, the ubiquity of cellphones.
> It would be better to just admit ...
You're suggesting we should admit defeat? Give up and let Google maximize profit? AI is a wonderful tool that could improve the material conditions of most of humanity if used correctly. It could also be devastating. I'd rather it not be devastating, so I'm going to continue supporting people who try to do research into aligning AI with whatever ethics we collectively agree on.
> I don't think we are actually better off morally now then 50 years ago
This is pretty sad.
Don't confuse your own cynicism vis-à-vis big tech with some nihilistic historical inevitability. The global improvement of the material conditions of people in the last 50 years has enabled us to start asking for ourselves what morals we actually want on a global scale, rather than this exercise being left solely to a self-interested elite.
Regardless of how better off morally _you_ think we are or aren't now, the space of collective possibilities is now immensely larger, whether you like it or not. That, is wonderful.
Starting from that point, what do you expect from a discussion? What kind of information would lead you to think again about the situation?
People say that AGI is still far away, but I haven't seen any results showing we can contain the harm AGI could do to us humans.
What these researchers are doing is the easy part of AI ethics.
How much money something can make is not a good arbiter of how important it is
e.g. the hippocratic oath
Gebru has very publicly got into fights with Yann LeCun and now with Jeff Dean. If you are building AI, who would you rather build your team around, Dean/LeCun or Gebru? If you are an AI researcher, do you want to join a team where one of the team members is in the habit of aggressively accusing other researchers of racism? Would you be worried that your research might fall within their crosshairs for some reason or another? For example, if you are working on natural language research, and your model ends up doing better with Indo-European languages versus those from other families, do you want to be accused of propagating racist power structures on Twitter?
Is this really true? I don't see ethics in ML papers getting the same attention in major conferences as theoretical or experimental breakthroughs in deep / reinforcement learning.
Don't get me wrong, ethics could be hot outside ML academia, but I very much doubt it's something the majority of grad students in ML are dying to get into.
I would argue it’s a Public Relations field.
I don't agree in general but I do think these two researchers, and this whole saga have just hurt the AI ethics field.
That's a management illusion. Try to replace e.g. someone like Fabrice Bellard, Mike Pall or Claude Shannon. Of course such things happen in big companies, but mostly because management is too limited to properly assess the true value of certain individuals. But the article is actually about a different topic.
I would argue the real ego resistance is in not accepting such people exist.
If you take an intellect so impressive that they are one in ten million, there will still be almost eight hundred of those people in the world.
We are also reasoning from the POV of our own reality. We see the people we did get, but it could be the case that we missed some brilliant minds that do exist in some alternative universe, but came ahead anyway. There are so many factors in play.
Intellects aren’t fungible. Even if there are 800 Fabrice Bellard-level minds out there, I doubt most of them have honed their brain on the exact problems he’s worked on. You can’t just find another one-in-ten-million mind and put them to work on the problems of another 1/1e7 mind and expect comparable results.
I can't draw a conclusive answer to these questions following the logical consequence of my own arguments, but at least we have to come at the problem with the knowledge that our own minds are drawn to simple narratives and to individual achievements. Hence we should assume replaceability in the absence of very strong evidence to the contrary.
Fabrice Bellard has worked on a subset of the problems we have. He's created good solutions for them. But if he hadn't, we would have some other, lesser, solution for those problems. Like we do for the problems he hasn't worked on.
No, you can't expect comparable results. But you can expect some results.
Which is exactly my point: if you can’t expect comparable results, the person is not replaceable.
Simile: saying “your brain is replaceable”. Beyond the fact that the most likely context is a threat, it is a poor argument: while technically true, what would remain of me would not be meaningfully me. And the surgery is work that would be hard-pressed to generate the expected value, such that the only reason to do it, is either out of anger or as a consequence of irremediable damage.
Companies are stories. The decisions are made internally, but their meaning is narrated externally. If you change the protagonists, the story changes. The case of Uber’s self-driving car division is quite an example of that.
Does the change in Google’s story converge to a positive or a negative light?
Like all stories, the meaning and message of the story is formed in the mind of the reader, not the mind of the writer.
Every reader will make their own meaning, and fit that into their own story.
It's impossible to say whether this change results in a positive or negative effect: it will be positive for some, negative for some.
The more people, the less the individual is valued. But that does not make the individual less valuable. Unfortunately, for a few years now, respect for the performance and qualifications of others has been declining more and more. This increases the illusion that everyone could be replaceable. Just ask your family if they see it that way in relation to you; the illusion of replaceability definitely ends here.
The job might not get done as well, or done in a different way than we'd do it, but it'll still get done.
Excellence can't be replaced as easily. Maybe for certain kinds of jobs yes, but for all jobs? No. If that were the case then we'd be inundated with Einsteins, etc. And we aren't.
How many people have the opportunity to be Einstein?
How many people have the right brain, and the right interest, and write the right paper at the right time?
How many are starving in an underdeveloped country, with no access to education, for that matter?
Einstein wasn't necessarily a unique genius standing at the pinnacle of an intellectual mountain.
Now, obviously, they, as well as any "proper" scientist, are well aware that none of their work would mean anything if they didn't stand on the shoulders of giants. Science is a branching tree of giant people.
He was a beneficiary of survivor bias. We don't know how many other "Einsteins" there have been, or could have been, because we only tell success stories.
So, to get back to the subject: replaceability depends on the kind of job. It may be simpler to replace a fast food worker, but a Richard Feynman? an Albert Einstein? or <a name of a scientist whose name isn't publicly known but has made a difference in their field>? I doubt it. Those people made a difference in their respective fields and no one can take that from them. And I'd say the same if it were someone else from other countries, ethnicities, etc.
Google deals almost entirely with aggregate people: statistics, algorithms, collective behaviors, machine learning, implementation that's never about individuals but is about larger population trends. Aggregates, not special unique snowflakes.
As such this is not an illusion but an axiom. Google and entities like it (themselves humongous aggregate 'people') MAKE individuals replaceable, the better to be dealing with other entities like themselves. This is only going to accelerate the more they get to bring AI and machine learning into the mix… which by now is long established, nowhere more than at Google.
An axiom which only applies to a certain percentage of cases?
Those exceptional individuals are incredibly rare, like one or two in a generation. So you need to be a Shannon-like figure to not be replaced by some middle manager in a big corporate? Emm, if someone were that accomplished, why would they care about one job? That is the wrong question to ask.
Truth is, if Google thought they were not replaceable, it would not fire them this easily.
I guess it depends on the purpose for which we are all supposed to be replaceable. Nature probably doesn't care which individuals reproduce or are eaten, as long as the numbers are right. Human society with its elaborate specializations and long training periods has added a few more dimensions.
That's not to say these weren't great minds, but the concepts were in the air and the race to formalize them was on; most of the "second places" are today forgotten or their contribution diminished by the modern "winner takes all" mentality, but none of them existed in a vacuum.
The history of science is fraught with independent discoveries, from calculus to the telephone, up to and including the mass-energy relation and the basis of what later became quantum mechanics.
Edit: I want to add that saying the title has nothing to do with the article is not helping the case. I finished reading the article in case I was being unfair, but I still stand with my original comment.
Anyway, you stopped reading in the first sentence? That's essentially the same as not reading it.
But we still got AlphaFold. And AlphaFold is the type of breakthroughs DeepMind is meant to make. Not playing games.
> What does Timnit Gebru’s firing and the recent papers coming out of Google tell us about the state of research at the world’s biggest AI research department.
These clickbaity article titles are tiresome.
Thanks for saying it like it is.
This is a very badly written and uninformed article, and sentences like these essentially illustrate the thinking here ("It's imploding because I don't like it").
Here is an alternative reading: Google is cleaning house of toxic activists who are not interested in serious ethics research but use it as a vehicle for their ultra-progressive political agendas.
There is no real proof to this.
I'm following and reading research @ google (stuff like this https://ai.googleblog.com/ and other sources) for ages now and NOTHING indicates an 'implosion'.
It is strong research with real and constant results.
I have no idea why the author would even consider using the word 'implode'.
It's not rocket science that data is biased; it will just continue to be researched and a solution will be found, for the single reason that biased systems in certain areas will not deliver the results you need to use them properly.
Well, they produce happy numbers for papers and, depending on the brainwash level of a population, they might also sufficiently often do the "right thing" towards minorities that no one cares too much about, independent of whether it would stand a chance against objective evaluation.
And the research they do is, even with this bias, ground breaking.
80% might be to get this thing running, 20% might be to finetune it for minorities.
When Google started the ML stuff for translation, they did start with English; now they support many more languages than before.
This sounds more like a reason to unionize than a reason to celebrate a firing.
"Google Assistant adds 9 new AI-generated voices": https://venturebeat.com/2019/09/18/google-assistant-gains-9-...
Data center cooling is another one:
"How Google is Using AI for Data Center Cooling": https://www.bmc.com/blogs/data-center-cooling/
BERT as applied to search from late 2019: https://www.google.com/amp/s/blog.google/products/search/sea...
It normally takes some 10 to 20 years for basic research to produce profits.
mRNA vaccines are technology from 1990 and were first used commercially at mass scale on humans 30 years later.
It is THE biggest change that Google's search algorithm has ever been through, I would assume. And pushing such a fundamentally different model to ALL their English traffic is itself pretty telling of how much of an improvement Google had been seeing.
This is easily billions of ROI for Google, if not tens of billions.
Honestly - the discussion is over, the AI folks won.
highest quality research with real results
This article is an analysis of the extreme weaknesses in the current seemingly-productive approach to ML language model research. The intro anecdote about the failure of game-playing models to handle games with representational elements or indirect rewards is extremely important. But the failure of the Big AI community to recognize those same failures in its approach to building language models is the pending crisis that the article's title refers to. Gebru's firing is not the cause of the impending implosion of Google's AI research, but rather a leading indicator and warning sign.
Perhaps equally importantly, the author made a very serious accusation that Google has "deep institutionalised sexism and racism". Are we really intended to just gloss over that, treat it as unimportant filler?
But on the other hand, research labs or not, Google is still a commercial entity driven by a shorter term horizon than your average academic.
Google Research has probably subconsciously drifted towards areas (or at least applications) that can tangibly benefit Google in the near future. Neural networks might not be The Answer (TM), but they can deliver results today, and there's still untapped potential. Speech recognition, search, speech synthesis - all core Google products - have all benefited from neural networks, not to mention the broader applications like protein folding for DeepMind.
I'm hesitant to find fault in a corporate group that's doubling down on the stuff that's actually delivering results, even if it's probably not the path to AGI.
I probably wouldn't be so lenient with OpenAI, because they are (were) purely a research outfit, openly committed to AGI.
I'm always thinking that Google research is going to stagnate and yet they continue to show impressive results. So, yes, I would love to see something more original than yet another NN, but on the other hand... amazing.
Really, where is that to be seen? You weaken your whole case with this kind of casual reference to "oh, and she was also a black woman", so racism and sexism apply.
It's a type of crying wolf that loses you more people on what is the potentially important issue at hand: her work at Google.
I really wish there were repercussions for this kind of casual libel, as it can be thrown around in today's climate without a second thought.
I don't understand this statement. In my mind, internal forum posts are among the most revealing forms of evidence about the culture of an organization.
Simple example: if I reply only to the original forum post, but my message is posted after yours, then you might assume I'm excluding you intentionally, probably because I don't like you personally; you get mad. This assumption is wrong and harmful to productive conversations. The simplest explanation is that I began writing my reply before you posted; no apology is warranted. Another possibility is that I'm simply a careless or excitable type of person who usually replies quickly before reading other replies; an apology is warranted, but it wasn't a personal attack!
So, the answer to the question is no. Instead, AI Ethics research will probably implode; AI research, probably not.
For AI research to implode, major AI research leadership needs to be seen leaving. And I am not talking about corporate leadership but the research thought leadership, whose sole purpose is to shield the junior/senior researchers from corporate BS (and who are themselves well-known researchers).
Edit: It would perhaps have been more appropriate to say that the botched handling of the AI Ethics researchers was solely what triggered the article.
Research isn't about knowing all the answers or always being successful. Negative results are just as important as positive ones. The author does not even mention the recent protein-folding result.
Paying more attention to other, more recent developments might also have helped you avoid writing a paragraph claiming DeepMind can't play games where it needs to perform multiple actions to obtain a reward - DeepMind beating professional StarCraft players is clear evidence that this conclusion is questionable at best.
It's a fantastic chance to export your viewpoint onto the entire world when this ethical AI begins to underlie various services. For example, "Members of Group X are inferior/should die" being unethical while "Members of Group Y are inferior/should die" gets past the AI ethics filter is a great way to slide this past a ton of people. No need for argument; it just comes pre-censored on the commenting system.
AI, as a field, is going to have to face a painful question: what if we get answers we do not like? I'm Irish enough by extraction, so I'll use it as a self-denigrating point ... what if AI found that the Irish, just by genetics, did actually tend toward alcoholism and drunkenness? Would we accept that, or would we say "No, that's wrong. Go back to the drawing board until we get the answers we want"?
My guess is that we are going to look for the latter. AI is going to give us the answers we want, because that's how we are going to build it. AI won't be constrained by ethics so much as it will by the discovered truths we are unwilling to accept, whatever they may be. And that means there's a space for people with a thousand tiny axes to grind to be employed. It will be a chance to shape what "truth" in the Orwellian sense will mean.
Indeed, the article highlights some issues with AI research, in particular those which can lead to ethical problems when AI methods are implemented in consumer products. These are by now well-known and important issues and people should find a way to resolve them.
Then, in its final paragraph, the article suddenly claims to have answered its title question in the affirmative! Am I alone in thinking that the issues, valid as they could be, do not obviously spell doom for the entire program?
How do you define "understand"? People just use the word "understand" to fill in for the magic stuff that they say humans can do but machines can't. They also sometimes use "symbolic reasoning" that way, but the best work on machine understanding and symbolic reasoning is done by DeepMind:
The examples in the original article are about truth, not understanding, mainly because we don't have anything approaching a formal definition of "understanding". But if anyone does, it is probably the deep learning community, where conferences in the last 3-4 years have had hundreds of papers working carefully to examine the nature and structure of the knowledge encoded in these kinds of systems.
And a mutual admiration society of university departments, Twitter fans, and these types of researchers in tech companies keeps everyone held in high esteem as world-leading authorities.
This calls for a big fat "citation needed". What we call neural networks have very little to do with actual neurons, so I fail to see how this is supposed to be "clear". And it's been in its infancy since the sixties, really?
Technology goes through periods of explosive innovation as well as periods of very little growth. NNs were dead until they weren't, and now they have been having quite the renaissance ever since 2012. Currently they seem to be getting us closest to general intelligence, and I'm excited to see where they go from here and whether anything else comes along to supplant their place on the throne.
Specifically: we could make small worlds in which syntax and semantics lined up (like the Zork parser) so a system seemed to understand natural language, but there was a point beyond which you couldn't go.
I think some graduate students now will end up disillusioned by the claims being made for deep learning today, for the same reason: those systems will quickly master the things they are good at and leave a residue of even more difficult problems for the 2050s and beyond.
I'm out of the loop on the whole Gebru situation. From what I know she was researching ethics with respect to AI, so I don't really get the whole "novel idea/work" part in the last paragraph. I often get the impression that such critics never see the rapid developments in AI as progress as long as they don't cover the topics the critics would like to see focused on. It will never be good enough and will always be a concern because of X.
No. (For reference see https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...)
Seriously though - even if the points in the article were true, none of them imply any kind of implosion or significant disruption in Google's research.
> Maybe one day we will see the transition from Hinton (ways of representing complex data in a neural network) to Gebru (accountable representation of those data sets) in the same way as we see the transition from Newton to Einstein.
This is so loaded and gives Gebru so much undeserved credit that it undermines the author's credibility.
The underlying problem of AI is that we've taken something we don't really understand, and applied it. When something like that does not work - what can you really do?
And why is Gebru relevant in this article? Does she do quality research, and do her research results have anything to do with the claimed implosion?
Google AI, Google Brain, and Deepmind are all different groups at Google with different mandates and research goals. While what's happening in the Ethical AI team is troubling, it's rather a large and unfounded leap that it'll affect research productivity for the other teams.
Digging deeper, the article is confusing and sometimes plain wrong in its assessment of AI research. Broadly, the deep learning and RL approach to AI has been critiqued for its lack of semantic and symbolic understanding. These critiques are not Google-specific, and the article's examples of them are terrible.
The first example, the limitations of AlphaZero on Montezuma's Revenge, is a bad one. The author implies that RL failed because it didn't understand ladders. But later approaches still solved this by using stochastic exploration strategies, not by introducing conceptual knowledge to the model, which the article implies is the key limitation.
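To make the "stochastic exploration" point concrete, here is a minimal sketch of the underlying idea: adding a visit-count novelty bonus to plain Q-learning so a sparse-reward task gets solved without any conceptual knowledge of the environment. The toy chain environment, the bonus weight `BETA`, and every hyperparameter here are invented for illustration; methods actually applied to Montezuma's Revenge (e.g. pseudo-count exploration) follow the same principle at vastly larger scale.

```python
# Illustrative sketch (not any lab's actual method): Q-learning on a toy
# sparse-reward chain, with a count-based novelty bonus that rewards
# visiting unfamiliar states. All hyperparameters are assumed values.
import random
from collections import defaultdict

N_STATES = 10   # chain of states 0..9; external reward only at state 9
BETA = 0.5      # weight of the novelty bonus (assumed)

def step(state, action):
    """Move left (action 0) or right (action 1); reward 1.0 at the far end."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def train(episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    random.seed(seed)
    q = defaultdict(float)     # Q-values keyed by (state, action)
    counts = defaultdict(int)  # visit counts driving the novelty bonus
    for _ in range(episodes):
        s, done, steps = 0, False, 0
        while not done and steps < 50:
            if random.random() < eps:
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            counts[nxt] += 1
            # Intrinsic bonus shrinks as a state becomes familiar,
            # pushing the agent toward states it has rarely seen.
            bonus = BETA / (counts[nxt] ** 0.5)
            target = r + bonus + gamma * max(q[(nxt, 0)], q[(nxt, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s, steps = nxt, steps + 1
    return q

q = train()
```

With the bonus, the agent is drawn rightward through never-rewarding states until it stumbles on the real reward; purely random exploration would almost never string together nine consecutive right moves.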
On the language modelling, it's weird that the article cites GPT-3 as problematic given that GPT-3 was developed at OpenAI, not Google. Also, GPT-3 is pretrained using next-word prediction, which considers only left context, and is far more limited in that respect than BERT, which considers bidirectional context and produces richer word-level embeddings. That being said, the Stochastic Parrots paper does specifically critique BERT.
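The left-context vs bidirectional-context distinction comes down to the attention mask. The sketch below builds both mask shapes in plain Python; the convention used (1 = attention allowed, 0 = blocked) is chosen for illustration, not taken from any particular library.

```python
# Minimal sketch of the masking difference between GPT-style and
# BERT-style attention. 1 = token may attend to that position, 0 = blocked.
def causal_mask(n):
    """GPT-style: token i may attend only to positions 0..i (left context)."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def bidirectional_mask(n):
    """BERT-style: every token may attend to every position in the sequence."""
    return [[1] * n for _ in range(n)]

for row in causal_mask(4):
    print(row)
# [1, 0, 0, 0]
# [1, 1, 0, 0]
# [1, 1, 1, 0]
# [1, 1, 1, 1]
```

Each row of the causal mask exposes one more position, which is why a GPT-style model can only condition on words to the left, while a BERT-style model sees the whole sentence at once.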
But it's not a new critique. Emily Bender, the other major co-author, is a computational linguist who has always been critical of deep learning approaches to NLP. Bender along with Gary Marcus and many others have called for AI that considers symbolic and linguistic knowledge and have been critical of purely data-driven deep learning approaches. Stochastic Parrots is not new in its critique of large language models, it just provides newer evidence specific to the current state of language model research.
So I'm not sure how any of this is a signal that Google AI is imploding. The broader trend in AI is not just to throw more compute at bigger models. It just happens that large models work well for OpenAI and Google on specific problems. Google also has one of the largest knowledge graphs, and there is an open line of research that combines symbolic knowledge from KGs with deep learning methods. There is also active research, both at Google and elsewhere, that aims to make current deep learning approaches more "intelligent" by using linguistic and symbolic knowledge.
I'm confused as to how Google AI research is imploding. Google PR attempting to censor Stochastic Parrots (which was still published) because of bad PR optics has nothing to do with active research questions elsewhere at Google Brain, Deepmind, and Google AI.
And it is happening regardless of the drama, from the perspective of the job market.
Great that we talked about it :-)
Take the famous paper the article mentions. I read the paper. It's full of nonsense. It talks about the environmental costs of training huge models. Give me a break. Then the problem with biased datasets. Yes, everyone knows that if you train on racist text you're going to get a racist LM. AI researchers are not stupid. They don't need ethics research to figure this out. The paper even gives an example of an Arab man who got arrested because the Israeli police used an MT system that made a mistake in translation. The problem is not AI; the problem is the stupidity of the Israeli police, who didn't verify the translation before arresting someone (he was released, of course). Image recognition is another issue AI ethics researchers like to bring up: "A black man was falsely identified!" But when Yann LeCun said it's a training data problem, the AI ethics researchers exploded: "No! AI models are racist!" Etc.
Now, if you have to work with a biased dataset, then ethics/fairness research might have some value in guiding you on how to build fairer models. But industry, and especially companies like Google, cannot take chances and almost always resorts to data as the solution.