I knew @colah was amazing (his neural net articles come to mind), but this is a whole other level of awesome.
Who the heck is Jeff Dean, and why should I care?
One minor nitpick: the event that it rains next week is probably fairly correlated with the event that it rains this week (in particular, both are correlated with the season), so I don't think this is a great example of independent variables. Maybe you could separate by distance: the event that you wear a t-shirt vs. the event that it rains in city Y vs. the event that it rains in city Z.
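A quick simulation makes the point concrete. This is a hypothetical sketch (the seasonal probabilities are made up): two weeks of rain share a seasonal driver, so the joint probability exceeds the product of the marginals, which is exactly what independence forbids.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Simulate rain in two consecutive weeks sharing a seasonal driver."""
    this_week, next_week = [], []
    for _ in range(n):
        wet_season = random.random() < 0.5   # shared seasonal driver
        p = 0.7 if wet_season else 0.1       # per-week rain probability
        this_week.append(random.random() < p)
        next_week.append(random.random() < p)
    return this_week, next_week

a, b = simulate()
p_a = sum(a) / len(a)
p_b = sum(b) / len(b)
p_ab = sum(x and y for x, y in zip(a, b)) / len(a)

# Independence would require p_ab ≈ p_a * p_b; the shared season
# makes the joint probability noticeably larger (~0.25 vs ~0.16).
print(p_ab, p_a * p_b)
```

Rain in two distant cities, by contrast, could plausibly be modeled with independent seasonal drivers, which is why separating by distance is a better example.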
He explains his personal mission, doing research distillation to reduce research debt, in this essay: https://distill.pub/2017/research-debt/
I don't think academia is going to contribute to this mission, as it's mostly governed by a culture of making work look ridiculously difficult so that people won't find it approachable and will therefore give you a lot of credit for doing impressive work.
Academics don't get rewarded for confusing each other; they have to create research that their peers appreciate, understand, and value.
There's plenty of incentive to dress something up as more complex and impressive, using more high-powered machinery than needed: impressing reviewers, qualifying for a particular conference, convincing the funding body that your pet technique can actually be useful, etc.
I don't think you'd find many academics that dispute this, though hopefully they'd think a lot of people are trying to do good research.
It'd be like open-source software if it were done as a hobby, but a lot of academics get public funding.
I suspect you'd find thousands of academics who dispute your claims. Nobody is going to claim perfection, but the problems you're describing are not systemic and basically absent from anyone with a career.
I should have chosen my words more carefully; we can probably find 1000s of academics to dispute a lot of perspectives on academia.
> are not systemic and basically absent from anyone with a career.
So, leaving aside criticism of any particular field, and to take just one pretty big cross-discipline trend, there's this 'replication crisis' thing: https://en.wikipedia.org/wiki/Replication_crisis
(wiki links to plenty of reputable sources).
I find it difficult to reconcile that with the idea that incentives to over-inflate claims were 'basically absent from anyone with a career'.
Maybe we think everyone was just really bad at statistics?
Again, by no means is academia perfect. But insisting that its primary problem is that grant writers are a pack of pathological liars is both incorrect and unhelpful in solving actual problems, that's all.
And yeah. I would say that most people would benefit from regular refreshers on statistics... in or out of academia.
I didn't say the replication crisis led to bad incentives. I cited it as evidence that bad incentives exist.
> But insisting that its primary problem is that grant writers are a pack of pathological liars
I didn't say remotely that.
Academics have incentives to make their work sound overly impressive, and many are playing this game. That doesn't mean every grant writer is a pathological liar, but it's very naive to think they aren't incentivised to oversell findings, and many do.
It feels a bit awkward to chime in, but I think it's important, because the narrative here is a little off (although I'm very grateful for everyone's kind comments):
Working at Google wasn't a trade-off where I got financial support in exchange for not publishing. Rather, working at Google was an amazing enabler, which supported me in doing work I believed advanced important public interests for five years. That actually includes the blog posts being discussed here, which I wrote while an intern on Brain and for which I received feedback from many researchers there.
To be concrete, some of the things I did while at Google included:
* Writing expository blog posts explaining important ideas in machine learning (eg. this blog post).
* A five-year thread of research on neural network interpretability: how can humans understand how neural networks make decisions? (eg. DeepDream, Feature Vis, Lucid, Building Blocks)
* Co-founding a scientific journal for machine learning that values exposition and allows interactive diagrams.
* Working on early TensorFlow and writing some of the initial tutorials on it.
* Articulating concrete safety concerns about modern machine learning systems, organizing a cross-institutional paper on them, and later representing Google on the Partnership on AI's Safety-Critical AI working group.
* Designing and teaching the introductory course for the Google Brain/AI Residency.
* Miscellaneous ML research, all published in open access venues.
That's basically the majority of what I did at Google. I always felt empowered to work on things that I thought were important for the world, and in many cases felt like I had leverage I wouldn't have had individually. I think that was exceptionally generous of Google, and it also speaks a lot to the environment Jeff created for those of us on Brain. (I realize Google is a large company, and other people's experiences may be very different.)
I do think you're right that the number of short expository blog posts I wrote declined, especially after my first two years at Google. A big part of that is that I focused more on a small number of more ambitious projects (eg. Distill, Building Blocks). Another big part is that I started doing more non-individual work: teaching, mentoring, editing, debugging social issues.
To be clear, I actually left Google last fall. I now work at OpenAI, where I lead the new Clarity team, which works on neural network interpretability -- basically, can we take a trained neural network and turn it into something like code that a human can understand? Leading a team means that I do even less individual research and writing. Sometimes I miss being an individual contributor, but I think it's the right call. I get to build an environment where others can focus on a kind of research I think is really important for the world, and to teach and support them.
I'm deeply interested in these topics, but I'm not so good at them. I have gotten much further than I ever would because of efforts like these.
Thanks for being you.
One thing we're thinking a lot about is how you can transition from "what" a neural network represents to "how it mechanistically does that". I hope we'll publish on that in the next couple of months.
By the way, while I'm flattered by the esteem it implies, I'd generally rather people not refer to us as the "Olah Team", just as you wouldn't refer to Google Brain as "Jeff Dean Research" or MILA as "Yoshua Bengio Institute" or OpenAI as "Sutskever AI", etc. I think the academic culture of branding groups after the PI is kind of unhealthy.
My teammates are doing the hard work and I see my position as a team lead as just to serve and support them. :)
Happy for Chris's decision to be at the big G doing his research, but it seems every researcher/engineer is obsessed with having the big tech companies on their resume, which is such a shame.
But he obviously chose Google.
Unfortunately, the Fellowship only lasts for two years. Towards the end, it was actually super stressful. How could I support myself while continuing to work on what I thought was important? And if I wanted to live in the Bay Area, how could I even get immigration status to work and live there?
Getting to work at Google Brain at this point let me continue to do what I was doing before -- many of my blog posts, including the one being discussed here, were written while I was at Brain.