I'd totally agree with you. That would indeed be ridiculous. But... It's strange... each time a new argument pops up, I dig into the new detail, and surprise: it seems to have a straightforward, boring answer. "This is a pretty standard paper. Google wouldn't have been hurt reputation-wise by letting it through. And we should probably be thinking more about energy usage and bias. She didn't namedrop all relevant research, but there doesn't seem to be anything here to demand a retraction over."
It only gets stranger when you take this into account, too. From the journal reviewer:
However, the authors had (and still have) many weeks to update the paper before publication. The email from Google implies (but carefully does not state) that the only solution was the paper's retraction. That was not the case.
In some parallel universe, Google could re-hire her, she and Jeff could sit down and hammer out the paper, send the updated version, and there would still be two weeks to make even more edits. Isn't the point of the edit window to address these problems?
What really got my attention, though, was that she informed everyone months ago that she and her coauthors were writing this paper. She wasn't working on some hit piece of a research paper. It's just ... a standard survey of the current ML scene circa 2021. I read the abstract and go "Yup, we use a shitload of energy. Yup, we should have better tools for filtering training data -- I've wanted this for myself. Where's the bombshell?"
For all the fuss people are making, you'd expect the paper to be arguing that we should stop doing AI for the betterment of humanity, or something weird. But it's nothing like that.
The entire course of action you suggest could have happened were it not for Timnit going public of her own volition, accusing her employer and coworkers of unethical behavior in the process, and encouraging sympathetic colleagues to apply external political and judicial pressure on Google. What for, so she can publish a review paper disregarding relevant internal feedback?
She made a lot of fuss and burnt a lot of bridges. Nobody forced her to do so.