Visual Information Theory (2015) (colah.github.io)
543 points by less_penguiny 28 days ago | 45 comments



Love this article. It makes statistics enjoyable and accessible. Most of Olah's older posts are also really good, especially the one on manifolds and neural networks [1].

[1] https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/


You might like to check out Distill [1]. It's a journal with a limited amount of content for now, but of very high quality, and C. Olah is one of the editors.

[1] https://distill.pub/


Nice post: the author has a good communication/teaching style and a firm grasp of the material. The visuals help make some of the concepts more intuitive. Bookmarked.


On an unrelated tangent, the way the page formats when printing is one of the best I've seen. No weird navigation cruft, and it seems there's even a style defined that sets book-style margins, such that the left and right margins alternate in width.


Thanks! This particular article went through pretty extensive feedback with colleagues, where I'd print the article out, share it, and get handwritten feedback. This necessitated investing in print formatting. :)


Incredible guy; he really is the 10x-everything type: https://colah.github.io/cv.pdf


Wow, his internship host was Jeff Dean?

I knew @colah was amazing (his neural net articles come to mind), but this is a whole other level of awesome.


> Wow

Who the heck is Jeff Dean, and why should I care?


It's no accident. Olah is living a life and leading a movement dedicated to doing what you noticed.

https://distill.pub/2017/research-debt/


Love how the probability distributions are presented. I wish those diagrams had been in the material when I was first learning probability; they would have communicated the concepts so much faster and more easily.


Nice article. For those who are more interested in mosaic plots, statisticians have already done a lot of work on this. For R there are many nice solutions, e.g. the strucplot framework, which allows you to visualize complicated relationships between multiple qualitative variables (https://www.jstatsoft.org/article/view/v017i03).
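
If you want something similar without leaving Python, statsmodels ships a basic mosaic plot (far less featureful than strucplot). A minimal sketch, with made-up counts:

    import matplotlib.pyplot as plt
    from statsmodels.graphics.mosaicplot import mosaic

    # Joint counts for two qualitative variables, weather x clothing.
    counts = {('rain', 'coat'): 40, ('rain', 't-shirt'): 10,
              ('sun', 'coat'): 15, ('sun', 't-shirt'): 35}

    # Tile areas are proportional to the joint probabilities,
    # exactly like the area diagrams in the article.
    mosaic(counts, title='Weather vs. clothing')
    plt.show()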


Love this blog post!

One minor nitpick: the event that it rains next week is probably rather correlated with the event that it rains this week (in particular, both are correlated with the season), so I don't think this is a great example of independent variables. Maybe you could separate by distance: the event that you wear a t-shirt vs. the event that it rains in city Y vs. the event that it rains in city Z.
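
For reference, independence means P(A and B) = P(A) * P(B). A toy Python sketch with made-up numbers shows how the seasonal clustering breaks that:

    # Made-up numbers, purely for illustration.
    p_rain_this_week = 0.30
    p_rain_next_week = 0.30

    # If the two events were independent, the joint probability would be:
    p_both_if_independent = p_rain_this_week * p_rain_next_week  # 0.09

    # Rainy weeks cluster by season, so the observed joint probability
    # could easily be higher, say:
    p_both_observed = 0.18

    # The mismatch is exactly the failure of independence.
    print(p_both_observed == p_both_if_independent)  # False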


The referenced 1948 paper was recently cited by Max Hodak at the Neuralink presentation. Pretty amazing piece of work!


There's a follow-up on the properties of English which is also good fun: https://www.princeton.edu/~wbialek/rome/refs/shannon_51.pdf
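
You can reproduce a crude first-order version of that estimate in a few lines of Python (single-letter frequencies only; Shannon's paper goes much further, using n-gram context and human prediction):

    import math
    from collections import Counter

    # Any English sample works; longer text gives a better estimate.
    text = "the quick brown fox jumps over the lazy dog"
    counts = Counter(text)
    total = sum(counts.values())

    # Empirical entropy: H = -sum(p * log2(p)) over letter frequencies.
    entropy = -sum((n / total) * math.log2(n / total)
                   for n in counts.values())
    print(f"{entropy:.2f} bits per character")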


The visual presentation in this article is very similar to how small children are taught multiplication, distributivity, and simple series sums like the triangular numbers.



This visualization is also very useful when trying to understand Bayes' theorem.
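
For example, the diagrams show the posterior as just a slice of the joint distribution, renormalized. The same calculation in Python, with illustrative numbers only:

    # Prior and likelihoods, made up for illustration.
    p_rain = 0.25                # P(rain)
    p_coat_given_rain = 0.80     # P(coat | rain)
    p_coat_given_sun = 0.10      # P(coat | no rain)

    # Total probability of the evidence: P(coat).
    p_coat = (p_coat_given_rain * p_rain
              + p_coat_given_sun * (1 - p_rain))

    # Bayes' theorem: P(rain | coat) = P(coat | rain) * P(rain) / P(coat).
    p_rain_given_coat = p_coat_given_rain * p_rain / p_coat
    print(f"P(rain | coat) = {p_rain_given_coat:.2f}")  # ~0.73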


bookmarked




Chris Olah is first author on these papers in distill.pub:

- https://distill.pub/2018/building-blocks/
- https://distill.pub/2017/feature-visualization/
- https://distill.pub/2016/augmented-rnns/

He explains his personal mission, doing research distillation to reduce research debt, in this essay: https://distill.pub/2017/research-debt/


I wish more people were interested in research distillation. Recently I've been searching for intuitive explanations of different areas of math, and while there are some great blogs that do so, the lack of such materials is indeed obvious.

I don't think academia is going to contribute to this mission, as it's mostly governed by a make-it-look-ridiculously-difficult-so-people-won't-find-it-approachable-and-thus-give-you-much-credit-for-doing-impressive-work culture.


Your hostile attitude to the people putting the work in is disrespectful and inappropriate. It's like the people who make entitled demands of open source software authors.

Academics don't get rewarded for confusing each other; they have to create research that their peers appreciate, understand, and value.


That's naive. Academia isn't some perfect system with perfect incentives.

There's plenty of incentive to dress something up as more complex and impressive, using more high-powered machinery than needed: e.g., impressing reviewers, qualifying for a particular conference, convincing the funding body that your pet technique can actually be useful, etc.

I don't think you'd find many academics that dispute this, though hopefully they'd think a lot of people are trying to do good research.

It'd only be like open source software if it were done as a hobby; a lot of academics get public funding.


The things you describe are common among unsuccessful academics. It is not common to see a researcher build a career on pissing off all the funding agencies and hoodwinking a project into a conference for which it is not appropriate.

I suspect you'd find thousands of academics who dispute your claims. Nobody is going to claim perfection, but the problems you're describing are not systemic and basically absent from anyone with a career.


>I suspect you'd find thousands of academics who dispute your claims.

I should have chosen my words more carefully; we can probably find 1000s of academics to dispute a lot of perspectives on academia.

> are not systemic and basically absent from anyone with a career.

So, leaving aside criticism of any particular field, and to take just one pretty big cross-discipline trend, there's this 'replication crisis' thing: https://en.wikipedia.org/wiki/Replication_crisis (wiki links to plenty of reputable sources).

I find it difficult to reconcile that with the idea that incentives to over-inflate claims were 'basically absent from anyone with a career'.

Maybe we think everyone was just really bad at statistics?


The replication crisis is a real example of an incentive problem rampant in the current system. However, it stems from a disproportionate emphasis on original research. Accusing it of leading to the things you talked about is a non sequitur.

Again, by no means is academia perfect. But insisting that its primary problem is that grant writers are a pack of pathological liars is both incorrect and unhelpful in solving actual problems; that's all.

And yeah. I would say that most people would benefit from regular refreshers on statistics... in or out of academia.


> Accusing it of leading to the things you talked about is a non sequitur.

I didn't say the replication crisis led to bad incentives. I cited it as evidence that bad incentives exist.

> But insisting that its primary problem is that grant writers are a pack of pathological liars

I didn't say remotely that.

Academics have incentives to make their work sound overly impressive, and many are playing this game. That doesn't mean every grant writer is a pathological liar, but it's very naive to think they aren't incentivised to oversell findings, and many do.


Hello! Thanks for pointing out that my CV is out of date. You're right that none of my work over the last two years is listed there, and I should probably fix that.

It feels a bit awkward to chime in, but it seems important, because I think the narrative here is a little off (although I'm very grateful for everyone's kind comments):

Working at Google wasn't a trade-off where I got financial support in exchange for not publishing. Rather, working at Google was an amazing enabler, which supported me in doing work I believed advanced important public interests for five years. That actually includes the blog posts being discussed here, which I wrote while an intern on Brain, receiving feedback from many researchers there.

To be concrete, some of the things I did while at Google included:

* Writing expository blog posts explaining important ideas in machine learning (e.g. this blog post).

* A five-year thread of research on neural network interpretability: how can humans understand how neural networks make decisions? (e.g. DeepDream, Feature Vis, Lucid, Building Blocks)

* Co-founding a scientific journal for machine learning that values exposition and allows interactive diagrams.

* Working on early TensorFlow and writing some of the initial tutorials on it.

* Articulating concrete safety concerns about modern machine learning systems, organizing a cross-institutional paper on them, and later representing Google on the Partnership on AI's Safety-Critical AI working group.

* Designing and teaching the introductory course for the Google Brain/AI Residency.

* Miscellaneous ML research, all published in open access venues.

That's basically the majority of what I did at Google. I always felt empowered to work on things that I thought were important for the world, and in many cases felt like I had leverage I wouldn't have had individually. I think that was exceptionally generous of Google, and it also speaks a lot to the environment Jeff created for those of us on Brain. (I realize Google is a large company, and other people's experience may be very different.)

I do think you're right that the number of short expository blog posts I wrote declined, especially after my first two years at Google. A big part of that is that I focused more on a small number of more ambitious projects (e.g. Distill, Building Blocks). Another big part is that I started doing more non-individual work: teaching, mentoring, editing, debugging social issues.

To be clear, I actually left Google last fall. I now work at OpenAI, where I lead the new Clarity team, which works on neural network interpretability -- basically, can we take a trained neural network and turn it into something like code that a human can understand? Leading a team means that I do even less individual research and writing. Sometimes I miss being an individual contributor, but I think it's the right call. I get to build an environment where others can focus on a kind of research I think is really important for the world, and I get to teach and support them.


Any interesting work coming out of the "Olah Team"? I tried googling but found nothing specific, just Distill articles and your Twitter feed...


We're still ramping up. So far, our only publication has been co-authoring Activation Atlases with our wonderful colleagues at Google (https://distill.pub/2019/activation-atlas/).

One thing we're thinking a lot about is how you can transition from "what" a neural network represents to "how" it mechanistically does that. I hope we'll publish on that in the next couple of months.

By the way, while I'm flattered by the esteem it implies, I'd generally rather people not refer to us as the "Olah Team", just as you wouldn't refer to Google Brain as "Jeff Dean Research" or MILA as the "Yoshua Bengio Institute" or OpenAI as "Sutskever AI", etc. I think the academic culture of branding groups after the PI is kind of unhealthy.

My teammates are doing the hard work, and I see my role as team lead as simply to serve and support them. :)


I feel like someone who would write an article like "Research Debt" is someone who truly gets it. Someone who sees the enormous upside potential that can be unlocked through just investing in quality.

I'm deeply interested in these topics, but I'm not so good at them. I have gotten much further than I ever would because of efforts like these.

Thanks for being you.


Maybe you weren't aware he moved to https://distill.pub/ ?


Not trying to make a judgement about the lack of output or anything. I appreciate the work he did and the ideas he published. Mostly I'm commenting on the observation that outside researchers being bought by Google doesn't seem to produce the greatest effect on the open-source idea-sphere. But he's free to do whatever he wants, of course.


Didn't think you were making a judgement, but I don't understand your comment about him abruptly stopping, since he never stopped.


This actually happens to a lot of folks. I don't necessarily blame them. It seems a bit unreasonable to expect a person to keep putting out good material forever. But I can only imagine that Google provides enough opportunities to express one's creative output internally to suppress the need to publish externally...


It’s sad that people have less time to contribute to public knowledge when hired by big companies, but saying that someone “has peaked” is just mean. I am sure you’d find it easy to say no to Google money.


What a toxic comment. Many people would consider DeepDream and Inceptionism to be the spark that ignited a lot of creative work in deep learning, and Christopher Olah wrote about them while at Google AI.


Disagree, I don't find this a "toxic" comment. The parent makes a point: these researchers/engineers have such potential, and instead they get scooped up by the big tech companies, which hoard the talent so that nobody can compete with them.

I'm happy with Chris's decision to be at the big G doing his research, but it seems every researcher/engineer is obsessed with having the big tech companies on their resume, which is such a shame.


Working in research for a big tech company means you are much less subject to the “publish or perish” requirements of a university. It often isn't a must-have resume point, but it actually gives you a different kind of freedom.


He was awarded a Thiel Fellowship, which gave him more freedom than big tech to research what he wants and make an impact, and he could literally go anywhere.

But he obviously chose Google.


Hello! The Thiel Fellowship was a wonderful period for me, and gave me a lot of freedom to work on different topics. I'm very grateful for it.

Unfortunately, the Fellowship only lasts for two years. Towards the end, it was actually super stressful. How could I support myself while continuing to work on what I thought was important? And if I wanted to live in the Bay Area, how could I even get immigration status to work and live there?

Getting to work at Google Brain at this point let me continue to do what I was doing before -- many of my blog posts, including the one being discussed here, were written while I was at Brain.


Olah isn't even at Google anymore.


Based on ~20 minutes or less of reading/reflection on this post, or something more?


At least in terms of public output. He hasn't published much in the last few years, unlike the years when his income came from the fellowship (judging by blog output, his Twitter, and his academic CV). I can't say what he's doing inside Google, of course, but it's a shame it's not ideas and knowledge that can be enjoyed by us, the lowly outsiders.


Please stop reposting the same false information.


Agreed, very unfortunate and such a waste; extraordinary talent should go towards startups and open source software, not the big FAANG companies.



