From what I read it sounds like Timnit cleared the general idea of the paper 2 months before. The paper itself seems to have been submitted for approval 1 day prior to the deadline. Another HN commenter says that submitting papers for approval "hours" before the deadline is common at Google.
Thank you. I’m not sure what to think. It seemed absurd to submit a paper for internal academic review one day prior to a major deadline. Yet today we’re hearing that it’s common (I can believe that; standard big company stuff) and that Google hadn’t been doing academic reviews at all until very recently, possibly not until this incident.
So it feels like, suddenly, the cornerstones of the arguments against her are vaporizing before our eyes. This could go badly for El Goog unless they stop making official statements on the matter.
Saying nothing would have been better than giving a convenient post for all the former Google employees to come out of the woodwork and say “That’s not true! Google never did academic reviews; they solely checked whether business IP was being exposed.”
It’s ironic that people are painting her as unprofessional in that context; I’d be frustrated too, if that’s really the situation.
> It seemed absurd to submit a paper for internal academic review one day prior to a major deadline
I don't think there is internal academic review at Google of the sort Jeff implied.
Timnit seems pretty clearly in the right here. As an AI researcher at a competitor, this impacts my desire to join Google in the future. I imagine these sort of PR disasters hurt their standing in the academic labor market.
"My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review."[1]
"In all of my time at Google AI, I never heard of pubapproval being used for peer review or to critique the scientific rigor of the work. It was never used as a journal, it was an afterthought that folks on my team would usually clear only hours before important deadlines. We like to leave peer review to the conferences/journals' existing process to weed out bad papers; why duplicate that work internally?"[2]
Do you have examples? The closest I've seen is the person I replied to, who said a key person in their less-academic department spiked papers he thought weren't interesting enough.[1]
I'm interested to know too. Although I treat my ML work professionally, I haven't worked at a large AI lab, so I had no idea whether academic review was common inside DeepMind/OpenAI/Facebook AI, or whether researchers generally did their own thing -- i.e. whether the researcher and their co-authors are the main judges of their own academic integrity.
And in hindsight, it seems dumb to think it'd be any other way. Of course the researchers are their own judge internally; that's what external reviewers are for. You submit your paper for academic review at a journal, and the journal's reviewers are in charge of reviewing it.
Would you really want to mess with that dynamic if you're a big company? It's been a tried-and-true way to do science for more than a century. It's also a recipe for failing at science, as many will attest. But being allowed to fail at science is a key part of doing science. It would be terrible if we only published papers that were completely correct in every detail, because it would mean everyone is playing it safe rather than pushing the boundaries. The most interesting work is usually on the frontier of some new idea.
When the news broke, I didn't give it a second thought. "Oh, Jeff is saying that there's an academic review process. Yeah, obviously Google would have something like that. And what's this -- she sent the paper one day before the submission deadline? That's almost giving them the middle finger. Yeah, pretty clear-cut firing."
... But when you think back on it, none of that adds up. Researchers are paid to do research. Having some manager insist that you namedrop every relevant paper from the last decade would certainly be rigorous, but not necessarily productive. Sure, you can argue that maybe she should have talked about X or Y. But then you could also write your own paper making that point.
I'll admit, I didn't think highly of her. All I knew was that she liked to stir up drama. Why won't she just keep quiet and do her job like everyone else? Yet now it seems like she was doing her job. And if I ask myself how I'd react in that situation -- some middle manager forcing a bogus new "review" process and demanding we retract a paper we put several months of work into, for reasons other than "You're revealing Google's IP" -- then my thoughts would be: (a) where were you during the two months I've been writing this and asking for feedback? (b) what are you trying to accomplish here, and is this really how a world-class institution treats the process?
Every company is different. And at Google scale, different teams are different. But now it's looking pretty bad. They certainly had grounds to fire her, and for many folks perhaps that's enough, but as a researcher I'm thinking "Why did Google try so hard to retract her paper anyway?" They keep dancing around that. And the article certainly doesn't address it:
> Our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.
Why? That's what publishing is for! Do reviewers just say "Oh, this is from Google" and click the "approve" button? Maybe, but the whole point is for people to read a paper and decide for themselves whether it's mistaken. This whole "keep it under wraps for six months until it's polished perfectly, for no reason other than prestige" is... well, rather a grim-sounding idea.
Outsiders can't know what insiders know. But we can piece together a picture from the information we're getting. And this reads like some manager tried to double down, and she called him on it each time. After four or five rounds of that, it's headline news and Google looks like it went nuclear without very solid reasons.
From my time in a corporate research lab: the formal review process ran in parallel with the venue's own review, and its goal was to amend the paper to keep internal details from spreading, with those amendments folded into the camera-ready version. People would also freely review each other's papers, but the goal there was to increase the initial acceptance rate. That said, our lab notably, and perhaps exceptionally, mirrored the structure of an academic lab with corporate funding, even though all the researchers were employed by the corporation.
Google's story would not fly where I worked, or where I work now.
What does this even mean? The power and resource disparity between Google and an individual researcher is so vast that this can in no way go badly for Google.
The idea that a few HN commenters are disillusioned with Google is just another Tuesday for them. They literally do not care... no business of this size and magnitude does. The general public will never hear about this, and if they do, they won't understand it, and if they do, then they aren't the general public.
You're not wrong. But like it or not, HN is the newspaper of our time. For many of us, anyway. So stuff like this tends to percolate in unexpected ways. And as a hiring manager, you get no feedback when people decide not to apply to your company, so it's probably better not to kill your hiring momentum.
If it seems absurd that anyone would turn down a job at DeepMind, well... Let's just say, in my experience, prestigious institutions tend to come with a pile of downsides that everyone puts up with (because prestige) but no one really talks about (because no reason). If you care about shipping results quickly -- some researchers do (or at least I do) -- then the idea of joining a big company is already worrisome. Like you're a professional rower, happily rowing along and navigating wherever you want to go, and then you're asked to become a galley rower: https://youtu.be/TyzQ-bVaqPU?t=294
There's no substitute for Google-scale work. (Working on TPUs would be a dream, IMO; where else could you possibly build those?) But if you join Google as a researcher, it sounds like your ideas have to (a) pass through their internal academic review, then (b) pass through a journal's review, and only then can they be published to the wider scientific community for comment. (b) is painful but possibly worthwhile, with arxiv serving as a bucket to catch everything else. But why roll your own internal review process? And why is Google trying to micromanage what researchers are allowed to publish?
I know we're probably missing a lot of the story. But on the other hand, Jeff has now given an official side of that story, so it's not like they didn't have a chance to set expectations.
It is common to submit for approval hours before the deadline. However, if the pubapproval process finds something that needs to be redacted, you have to withdraw the paper. That's basically the risk you're taking.
Since most people are frantically working on their papers until the day of (or the hour before) ML conference submission deadlines, the "final" version of the paper may very well have been submitted the day before the deadline.
Someone in the ML community posted the abstract and provided feedback, which seems to indicate that the paper followed the typical review cycle for conference papers.