A profile of Geoffrey Hinton (2018) (torontolife.com)
105 points by tosh 22 days ago | 36 comments



> Hinton has said that when he was growing up, his mother gave him two choices: “Be an academic or be a failure.” His family tree is branch-breakingly weighted with scientists. His great-great-grandfather was George Boole, founder of Boolean logic, familiar to anyone who has done a “Boolean search.” One of George Boole’s sons-in-law was Charles Howard Hinton, Geoffrey’s great-grandfather, a mathematician and sci-fi writer who coined the concept of a “tesseract” (a four-dimensional object we can see in the 3-D world as a cube—well known to all readers of the classic children’s novel A Wrinkle in Time), and who ended up in the U.S.

Interesting fact from reading this that normally gets neglected in media.


It's actually quite common.

One of the most tried-and-true routes into the best academia-oriented colleges (Harvard, MIT, Caltech, etc.) is to win one of the big science fairs. I am not sure what they are called or who their sponsors are these days, but they used to be sponsored by Westinghouse, Intel, and Siemens. Anyway, if you look at the bios of winners or semifinalists, they almost always have a close relative in academia. In these competitions that is a massive competitive advantage, since your relative can get you into labs, give you access to equipment, and steer you toward fruitful projects that they quietly guide from the background.

Same with a lot of the international olympiads: very often the kids come from intense tiger-parent families that are already in academia, or that do everything they can to get their kids the best physics/math/whatever tutors (the main exceptions are usually kids in large metro areas going to top magnet schools).

And then, of course, once you're in these top academic institutes, your relatives can further help you network and get into the best labs, show you how to land the best fellowships and academic internships to set you up for top grad schools, and so on. Plus, a top olympiad or science fair credential gives you a big edge when competing against others who don't have one.


I agree that academics at research universities can provide more opportunities for their children, but they are also bequeathing genes for higher IQ to their children, and I'd guess that the genetic influence is at least as strong as the environmental one. Adoption studies have generally found that the academic achievement of adoptees is more correlated with the educational level of their biological parents than with that of their adoptive parents.


That's true; however, here I am speaking about the highest level of competition for the best path into academia (one of the most competitive careers in the world). Large IQ studies aren't as relevant, since the population of people seriously considering academia who have the ability to act on it is skewed toward the far right tail of the distribution.

I ran into so many roadblocks when I started seriously looking into academia at 16-17. If you don't have top science fairs or olympiads, your chances of getting into top academic undergrad programs are already at risk, which has negative compounding effects down the line compared to the positive compounding effects for those who do, especially if you are a white or Asian male. Close relatives of academics and people going to top magnet schools in major metro areas have huge head starts compared to everyone else. Having a good IQ is just one of many prerequisites for going into academia; accomplishment-oriented credentials are much more valuable in your early career than things like regular standardized test scores.


The bit about the competitions specifically doesn’t ring true for me.

Many friends from grad school have tenure-track jobs. None of them, as far as I know, participated in these contests. I would bet that things effectively reset once you start grad school and again once you’re in a faculty job.

It is true that academic recognition tends to snowball. Winning your nth grant or fellowship is exponentially easier than the first one “because you have a track record.”


I don't understand. The highest level of competition is for thousands of spots a year in the Ivy League/MIT/Caltech/Stanford/etc., versus dozens of notable competition winners per state, at best. Round up all the science fair, math olympiad, etc. winners, and I'd be surprised if they filled out a single graduating class at Caltech, let alone all the (much larger) other schools.

I don't think the math adds up.


> especially if you are a white or asian male

As far as whites go, alumni preference is typically as strong a weighting as affirmative action in undergrad admissions, or stronger; and segregation wasn't outlawed until around 50 years ago, meaning there are fewer alumni for non-whites to draw on, even though many of these schools received federal money during segregation.


His aunt was the first foreigner to get a "Chinese green card" (permanent residency in China). She and her husband were a very famous couple in China a long time ago.

https://en.wikipedia.org/wiki/Joan_Hinton


Charles Howard Hinton is actually a very prominent figure in the history of eccentric mathematicians. Anyone who read Rudy Rucker's book about the fourth dimension in the 90s knew about the earlier Hinton's bigamy and superstitions.

https://books.google.ie/books?id=8J0djs-FK_8C&pg=PA65&lpg=PA...

http://www.rudyrucker.com/blog/2009/06/08/alicia-boole-charl...


I don't like how they oppose deep learning with "logic-based AI." When Hinton got his great results, he wasn't competing against logic-based AI but rather against other machine learning approaches like SVMs.


I don't think he was competing. ANNs and Boltzmann machines in the '80s seemed to be an exploratory field, not really part of statistical ML. At the time, people were indeed competing with logic-based AI. SVMs came later, and even then, I believe, NNs had a different history and were not motivated by competition with SOTA techniques like SVMs. It was when the training successes came, after the mid-2000s, that ANNs started being called "ML." At least that's my memory.


Indeed, he wasn't competing. In fact, he wasn't moving; he kept doing what he had always been doing until, luckily, something he had nothing to do with changed and allowed NN methods to work better. So I view him as essentially standing still, frozen in time, like a clock stuck at 6 o'clock. And, just like that clock, he would inevitably be correct (at least twice a day). He had nothing to do with the tools (more memory and faster CPUs) that made his methods work; he just kept doing the same thing over and over until, one day, by accident, something important changed: his lab bought newer, faster computers.

He's not exactly Louis Pasteur is what I'm saying!


He's still the co-inventor of Boltzmann machines, and one of the people who kept the field alive.


We call this "skating to where the puck is going to be".


I call it "doing the same thing over and over again, hoping someday you get lucky." I think there's a commonly-used expression for that behavior.

Hinton got lucky: something he had nothing to do with changed, making him look like a stoic hero.

A winning strategy or just a lazy path?


Well, Hinton himself makes a big deal of this in his Turing award speech. See: https://fcrc.acm.org/turing-lecture-at-fcrc-2019


Me neither. In fact, both will be important components of future AI. I like to see deep learning as intuition, and logic-based AI as formal reasoning.

Intuition allows systems to avoid search problems, to some extent. Formal reasoning introduces structure.


Sure, but for someone who does not know what an SVM is, the big-picture takeaway is that a particular version of "soft" statistical learning (deep learning) succeeded where the "hard logic" approach failed.


Just asking: hard logic failed in what area?


Computer vision would be the most likely example. It's quite hard to come up with "hard rules" for distinguishing cats from dogs.


Carrots are vegetables. Vegetables are edible. Is this a picture of something edible? Having a bunch of propositions in a database is not much help if you can't relate them to messy sensory input. It turns out that making sense out of mess was a hard problem. Eventually, logic* will have to be brought in again, but at a higher level, to help out.

* Of course, Hinton's approach is based on logic too, but in the form of probability, calculus, etc.
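The gap described above can be sketched in a few lines: once you have symbols, chaining propositions is trivial, but nothing in the rule base says how to get from raw pixels to "carrot" in the first place. (A toy illustration under my own made-up facts and rule names, not anyone's actual system.)

```python
# Toy forward-chaining over propositions: easy once symbols exist.
facts = {"is_carrot"}  # hypothetical symbol; producing it from pixels is the hard part
rules = [
    ({"is_carrot"}, "is_vegetable"),   # carrots are vegetables
    ({"is_vegetable"}, "is_edible"),   # vegetables are edible
]

# Fire rules until no new fact is derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("is_edible" in facts)  # True: carrot -> vegetable -> edible fires

# The step the database never addresses: mapping messy sensory input
# (an array of pixel values) to the symbol "is_carrot".
```

The inference loop answers "is this edible?" instantly, but only because "is_carrot" was handed to it as a symbol; that perception step is exactly where the hard-logic approach stalled.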


But with probe metrics and other experiments we are retroactively explaining why the big breakthroughs actually work, and these explanations can inspire and fix the big breakthroughs by putting them on more rigorous foundations. In the end you get a formal explanation of the conditional probability that, given a carrot with 6DoF and such-and-such parameters (age, size, ...) in such-and-such lighting conditions, you observe that probability distribution over pixel values.


Speech recognition is one such case. We thought that by breaking down the incoming speech into phonemes we would be able to get words from the bottom up. That never worked too well, whereas statistical speech recognition has been much more effective.


[flagged]


Money, engineering talent, and virtually infinite resources to continue research


I always find it so difficult to read popsci biographies of my own expert subjects. Hinton did not invent neural networks. Hinton is an amazing researcher and scholar to be sure, but this is clearly hyperbole.

Edit: It turns out I misread the article; it merely suggests his chosen approach was neural networks, not that neural networks were his idea. Honestly, I think it's written ambiguously by intent.


> In the late ’50s, a Cornell scientist named Frank Rosenblatt had proposed the world’s first neural network machine. It was called the Perceptron, and it had a simple objective—to recognize images.

This is right in the article. They don't attribute neural networks to Hinton—they just note that he worked with them.



It describes how he had the idea as a teenager, inspired by a friend's description of holograms. In the next paragraph, the idea had already been tried as the perceptron, and had been disproven. It's ambiguous because that's how ideas are; the same idea will come to more than one person quite frequently.


> the same idea will come to more than one person quite frequently

This seems to happen as often as not. And sometimes the ones in the not category are that way for a reason!


A lot of such articles exaggerate many of the specifics in favor of a dramatic storyline. I find that I enjoy them most when I treat them like based-on-real-events documentaries: not taking them too seriously, but picking up interesting little facts and ideas along the way.


Except you won't know which are facts and which are dramatized factoids.


And yet we merrily accept stuff they write outside our expertise :)


This is called the Gell-Mann Amnesia Effect[0]. People believe things about stuff they don't know, even if the source is wrong about stuff they do know.

[0] https://en.wikipedia.org/wiki/Michael_Crichton#GellMannAmnes...


I was reading somewhere that there are evolutionary drivers behind our cognitive biases. I wonder if this is because to cooperate in social settings you have to rely on other people's expertise.


It's amazing how often people mention Gell-Mann Amnesia on Hacker News. Seems like Crichton's strategy of trying to make the term more popular by attaching a famous scientist's name to it paid off.


To be honest, I first learned about it on HackerNews a couple of years ago. Someone apparently reposted a link a few hours ago: https://news.ycombinator.com/item?id=20789446



