HN'ers may have a good laugh at these, taken from Yann LeCun's page. LeCun has made it a tradition to post Hinton jokes in the style of Chuck Norris jokes (or, more appropriately, Doug McIlroy ones).
A few will recall that neural networks all but died out in the US after Minsky's damning book. Hinton gave us backpropagation, one of the foundational pillars of feed-forward neural net algorithms. With his new thrust on what's called "deep belief networks," he is challenging his own early seminal contribution to the field. Not often do you see researchers throw away such huge swathes of their own work and start again on the same problem. Unless you are Niklaus Wirth, of course.
Some background is necessary to get the inside jokes, but I have tried to minimize the requirement.
Geoff Hinton doesn't need to make hidden units. They hide
by themselves when he approaches.
Geoff Hinton discovered how the brain really works.
Once a year for the last 25 years.
Markov random fields think Geoff Hinton is intractable.
Geoff Hinton can make you regret without bounds.
Geoff Hinton doesn't need support vectors. He can
support high-dimensional hyperplanes with his pinky.
All kernels that ever dared approaching Geoff Hinton woke up convolved.
The only kernel Geoff Hinton has ever used is a kernel of truth.
After an encounter with Geoff Hinton, support vectors become unhinged.
Geoff Hinton's generalizations are boundless.
Geoff Hinton goes directly to third Bayes.
Re: Downvotes. I seem to have touched a nerve. Yes, I agree humor is frowned upon here, and I largely agree with that, but when it's not humor for humor's own sake, I sometimes make an exception. Not everyone will know who Hinton is, but they may gauge that he is someone important from these fun anecdotes. Of course you are free to like it or dislike it; no hard feelings either way.
> With his new thrust on what's called "deep belief networks," he is challenging his own early seminal contribution to the field
I don't know if I agree with that, he still uses backprop. Backprop has always been known to have problems when you scale to millions of connections, and his work on RBMs/DBNs is really quite old. What was novel more recently was showing that the contrastive divergence step need only be performed once, rather than 100 times, while the performance remained similar. The networks are generally still 'fine tuned' with backprop.
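To make the CD-1 point concrete, here is a minimal NumPy sketch of one contrastive divergence update for a binary RBM. This is a hypothetical illustration, not Hinton's actual code; all names and hyperparameters are invented. The key line is the single reconstruction step: instead of running the Gibbs chain many steps (the "100 times" above), the chain is truncated after one step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_vis, b_hid, v0, lr=0.1):
    """One CD-1 step for a binary RBM (hypothetical minimal sketch)."""
    # Positive phase: hidden probabilities and a sampled hidden state
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: ONE reconstruction step -- this is the "1" in CD-1
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Approximate gradient: <v h>_data - <v h>_reconstruction
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid

# Toy data: random 6-bit binary patterns
v = rng.integers(0, 2, size=(20, 6)).astype(float)
W = 0.01 * rng.standard_normal((6, 4))
b_vis = np.zeros(6)
b_hid = np.zeros(4)
for _ in range(100):
    W, b_vis, b_hid = cd1_update(W, b_vis, b_hid, v)
```

In a DBN, stacks of these RBM layers are trained greedily one at a time, and the whole network is then fine-tuned with backprop, as the comment above describes.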
Still, the focus on generative networks (not sure if that's the right term still, been a while) and single layer training is fairly recent even if the concepts are quite old.
Mostly agreed; one difference I would like to highlight is that errors are not always backpropagated across all the layers. In addition to contrastive divergence, the breakthrough has been that you can get away with unsupervised learning (as with autoencoders) in the layers.
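The greedy, layer-by-layer unsupervised scheme can be sketched as follows. This is a hypothetical minimal version using tied-weight sigmoid autoencoders with plain gradient descent (all names and hyperparameters are invented for illustration): each layer learns only to reconstruct its own input, so no error is propagated back through earlier layers; supervised backprop fine-tuning would normally follow.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(X, n_hidden, lr=0.5, epochs=200):
    """Train one tied-weight autoencoder layer to reconstruct X.

    Only this layer's weight matrix is updated; earlier layers are frozen.
    Biases are omitted to keep the sketch short.
    """
    n_in = X.shape[1]
    W = 0.1 * rng.standard_normal((n_in, n_hidden))
    for _ in range(epochs):
        H = sigmoid(X @ W)        # encode
        R = sigmoid(H @ W.T)      # decode with tied (transposed) weights
        err = R - X               # reconstruction error
        # Gradient of squared error w.r.t. W: decoder term + encoder term
        dR = err * R * (1 - R)
        dH = (dR @ W) * H * (1 - H)
        W -= lr * (X.T @ dH + dR.T @ H) / X.shape[0]
    return W, sigmoid(X @ W)

# Greedy stacking: each layer trains on the previous layer's codes
X = rng.random((50, 8))
W1, H1 = pretrain_layer(X, 5)   # first layer sees the raw data
W2, H2 = pretrain_layer(H1, 3)  # second layer sees first-layer codes
```

Note how the second call never touches `W1`: that locality is exactly what makes the pretraining cheap compared with backpropagating through the whole stack.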
On the comment about RBMs being new: I have come to accept that if one looks hard enough, almost all things are old; only the name changes!
Sigh. I don't know whether I'm alone in being saddened that many of the excellent thinkers of our time, who once might have spent their lives probing the very limits of our comprehension in their field, now end up comfortably turning their minds to the service of (in the long term) rather short-sighted commercial ends.
I can imagine that this sentiment is probably against the prevailing HN mood, but I've been thinking a lot lately about how certain kinds of thought and investigation are only enabled and supported by certain structures. The point of a university used to be to stand as the highest pinnacle of thought: a place where people could have everything not related to intellectual pursuit taken care of, so they could devote themselves to the pursuit of greater knowledge. Now that seems to have been replaced by a corporate campus.
It's a cliche, of course, but how many breakthroughs of meaning are we missing out on because the brightest and best of our generation are now no longer seeking after truth, but instead seeking after a way to make more people click on an ad? Ah well, it's late, and as ever I'm post-sober. Best of luck to him and his team - I've enjoyed his lectures and papers. I hope they may continue, but I sadly suspect not.
The common lament about Google being just a place that optimizes ad clicks is overused, and lazy thinking.
I don't even think it's a conclusion that's the result of actual thought anymore, just reverb.
Google is a big company. It would not even be worth it or effective to have every single engineer dedicated to this ad click business. There's a team for that, and there are lots of other teams, for lots of other interesting things. Many of them do serious intellectual research.
My impression of Google's leadership, employees, and choice of projects often leads me to believe that they sell ads so they can continue making really cool stuff - not that they try to come up with cool stuff so that they can sell ads.
I also disagree with the whole premise that goal-oriented or profit-driven intellectual study is in some way less pure, less worthy, less boundary-pushing, or more short-sighted than purely academic study. How does the initial incentive for a thought affect whether or not the conclusions from that thinking are new knowledge about our universe? It doesn't.
As long as he's not sacrificing what he does to become a paper pusher, which he isn't doing, I don't see how it's different. And I think ultimately the closer his study is to being manifested in real things that we use, the faster his knowledge will be refined towards the path of truth, and the faster humanity will actually learn and benefit.
The true end of Bell Labs was when Lucent started financing customer purchases. They were carrying dot com bubble debt on their books as if it were a solid asset. When the music stopped they had spent their actual assets to the brink of destruction.
I wasn't suggesting that Hinton or anyone else should go to Bell Labs or PARC these days. I was just pointing out that back in the day, nobody would have been disappointed or considered it selling out for a notable figure to leave academia for one of those corporate research centers. For all that Google may aspire to have that kind of reputation for research, they don't have it, yet. So some doubting is allowable. However, I can't think of any other for-profit employer that seems to be as willing and able as Google to build that kind of legacy.
To the extent that they don't have that kind of credibility (I think I agree with your assessment) whatever the eventual result of Hinton joining the team should say something very strong about whether or not they will. If he finds that he isn't getting what he wants out of it, I think we can expect him to leave. If he ends up staying it is evidence that Google can hold the interest of the sort of people you would expect to find at places like Bell Labs.
> The common lament about Google being just a place that optimizes ad clicks is overused, and lazy thinking. ... many of them do serious intellectual research
Now that's lazy thinking. No one accused Google or Verizon of not having "many people" who do serious research.
Personally, I see absolutely no problem with what he did; he has a family and his future to worry about. After all these years he deserves to make a lot of money and not have to worry about money anymore. Those who criticize him for "selling out" would probably sell their start-up to just about anyone for FU money.
What I find not true, especially in the past year or two, is this:
"My impression of Google's leadership, employees, and choice of projects often leads me to believe that they sell ads so they can continue making really cool stuff - not that they try to come up with cool stuff so that they can sell ads."
Part of my job requires me to surf without ad-blockers and with Chrome. There's no trick in the book they do not use to make you click on ads or sign up for their services. Even Chrome, the "open source" supposed savior of the web, now has ads. It totally changed my perception of them, and it's not as if Google was struggling to pay the electricity bill. As a commercial enterprise, Google has the right to do that; however, they cannot have their cake and eat it too.
> Now that's lazy thinking. No one accused Google or Verizon or of not having "many people" that do serious research.
By assuming that every single person who joins Google is somehow being used to optimize ad clicks, the implication is that they are abandoning whatever serious intellectual research they could have otherwise been doing.
> There's no trick in the book they do not use to make you click on ads or sign for their services.
This doesn't counter my point. The fact that their ad click team is actually doing its job doesn't mean anything about the greater motivation of the company.
"he has a family and his future to worry about. After all these years he deserves to make a lot of money and not have to worry about money anymore."
Academic salaries are low compared to salaries in the private sector, when equated for things like years of training, status in one's field, etc. Here are some facts:
- Google clearly has a self-interest in being a leader in machine learning technologies
- Geoff Hinton is a world recognized leader in this very field
- Google appears not JUST to be interested in selling ads ... but as many posts have said, they seem to have the ability to run in-house R & D in a somewhat more open way than most companies in the private sector
Here is some speculation:
- faced with the chance of working at Google on the very same topics one is already working on, but receiving 10x? 50x? even 100x? the salary, wouldn't you do it?
- think of it this way: if Geoff Hinton's Google pay is a "mere" 5 million per year, then all he has to do is work under these terms for TWO YEARS and he will have made 67 YEARS' worth of a $150,000-per-year salary (probably a median senior academic salary in Canada)
- let's stop criticizing and start getting excited about what's coming down the pipe
- and hope that google makes it all publicly available, and not just secret sauce
I agree, but on the other hand I'm impressed that Google services are so tolerant of ad blockers. I've never even seen a single ad on YouTube, for example, although they could easily "trick" the ad blockers, or even worse, block the content when ad blocking is detected. Then again, this might all change if a large enough fraction of users starts using adblock...
You have this impression of universities that I don't believe has been true for at least 100 years, if not more.
Research professors at schools have been mostly funded by either commercial or military interests for a very long time now. Even in non-direct cases, universities have "tech transfer" offices that are commercializing stuff, and continued funding often depends on that.
If you are going to claim this model is bad and that it produces bad results, you are going to have to offer at least a little evidence that the models that existed before this, worked, and were better for humanity.
I might agree with the second part in some ways, but I don't think they supported anywhere near the amount or level of research that occurs now.
Research is definitely more goal directed than it used to be, but this is not totally good or totally bad, it's just "different".
The Bayh-Dole act is less than 100 years old, but it is in my view one of the major causes of the phenomenon you describe.
(It should also be, incidentally, taught as one of the great examples of unintended consequences: by all accounts it was designed to increase the independence of universities by letting them keep the monies they got from commercialization. Then somewhere along the way policy-makers realized Bayh-Dole was a great excuse to encourage universities to do tech transfer. And now essentially every grant that doesn't have some tech transfer proposal is almost automatically in trouble when compared against those that do.)
Get over this already. Everyone who's been paying attention to the company for the last, say, five years, knows that most of what they do doesn't in fact revolve around ads. Just because the company has a boatload of money and data doesn't make it inherently evil.
I read the book "In The Plex". A lot of academicians joined Google mainly because they wanted to get an actual product/service out into the hands of people instead of just doing research or writing papers.
I have a different reaction than you. It's good to see brilliant minds stepping outside of academia and joining a commercial entity to actually build something.
I would argue that Google has had the largest impact on information dissemination since the invention of pen and paper.
Advertising is just a means for providing comfortable livelihoods to these excellent people. There's no better place for these people to be working. We got some spectacular output when some bright folks ended up working together in a phone company, better than any university. Why not Google?
> Advertising is just a means for providing comfortable livelihoods to these excellent people.
I'd say it's a bit more than that; what Google has done in advertising has benefitted millions of consumers and businesses. I'm not sure why people tend to look down on some of the work they do in advertising as somehow being less worthy than working on GMail or Search!
Not the means; it's the end, and everything feeds into it. Especially search with their "updates," Chrome, Android, UI, etc. Hinton has his work cut out for him: make users click on ads! Looks like playing with colors has already been milked.
"I would argue that Google has had the largest impact on information dissemination since the invention of pen and paper."
Because they are a search engine used by most people. Now, however, Google is replacing information with ads and with sub-par information that benefits their bottom line. That hurts creators, readers, and society in general.
To be honest... kind of. They release papers but most of the layers at the bottom are Google proprietary. Certainly the source code is proprietary. Hell, even most of the information about GFS is proprietary, despite having a successor. From an academic perspective, their disclosure is kind of crap.
Shows that Cassandra is based on the Bigtable paper, among others (HBase, I believe, was as well).
Their disclosure is apparently good enough to get folks going ...
FWIW: in reading 20 years of compiler papers, I've had more luck getting the code behind papers from commercial companies than from academics. It's become somewhat better over time, but plenty of universities still seem completely unwilling to release the code behind their papers.
You're looking at this from an engineer's perspective. Yeah, great, we can build Cassandra -- who cares? Google papers have tons of performance benchmarks -- how is it scientific if none of those are reproducible?
Am I defending academics who can't or won't turn over all their data? No, of course not, but most of them don't try to pretend you can do science and still be proprietary.
P.S. I'm not talking about all companies. Intel and MSR turn over a bunch of stuff. I'm talking specifically about Google.
It's always been interesting to me that someone publishing a paper in computer science (especially someone working at a University) would not want to open source the code that produced those results. It's the best kind of repeatable experiment.
I recently went looking for any code published by Hinton, since I've been doing some neural network research as a hobby, but was unable to find any at all.
Speaking from personal experience, a lot of academics see programming as a hurdle: an unpleasantry that must be overcome as quickly as possible before going back to the really interesting bits.
This tends to result in very poor code, since the people who write it do not practice their craft. And very poor code isn't usually released because people are embarrassed.
I'm certainly generalizing here, but this seems to be the trend as I see it. It is a sorry state of affairs, and I try to do my best to encourage publishing code with the people I work with. (Code reviews are great.)
On the contrary, I think these are symptoms of machine learning coming of age [or getting industrialized]. I am not sure "short-sighted commercial ends" are the main motivator for the likes of Prof. Hinton to make the shift; rather, it is the availability of vast amounts of data and the opportunity to understand them at a scale that was not previously possible.
I also predict the reverse shift will happen within a few years: once the interesting fundamental research problems have been tackled, such people might move back to universities. If that happens, it is indeed a healthy process of academia and industry complementing each other.
I think the biggest difference is that the knowledge gained in the context of a university is open and easily built upon by anyone else.
The saddest part of corporations replacing academia won't be realized for decades: the death of knowledge becomes tied too closely to profitability.
The decision of what to do with one's life rests entirely with him or her. But we should ask, as a society: what are we striving for? The same goes for how our society (I can only speak for the US) looks at teaching as a profession.
Not being the biggest fan of Google anymore, I was somewhat saddened when I first read the news.
But after a second thought, I can understand why he did it and if I were in his shoes I would do the same.
I don't think it has anything to do with money. For a connectionist, it is a dream to have access to the massive number of CPUs that Google has. Add to that all the incredible, smart people on Google's team, and you see his decision was a no-brainer. I feel very happy for him.
For what it's worth, I was thinking the same thing. I don't feel this way about Google hiring programmers in general, but I think at this point they've hired some significant double-digit percentage of the world-class machine learning professors in the world, and as I understand it those people don't publish (many) papers anymore.
Thanks, I didn't know about that. But you'll probably agree that it's unimpressive: 270 publications (and that appears to be all-time, going back 10 years) for a team of 515 researchers. In academia, you'd expect several per year per researcher.
It remains to be proved that a university is particularly better at innovation than anything else. At least in Computer Science, most major historic breakthroughs happened in military departments and commercial laboratories (Bell Labs, Xerox PARC), and those are still strong today; for example, many of the most-cited CS publications come from Microsoft Research. Google faces some of the biggest challenges in Machine Learning and has some of the brightest people in the field; I doubt they will be looking for "ways of making people click on an ad". The advantage of a commercial or military laboratory is the constant flow of new problems needing to be solved, problems whose solutions can bring everyone huge visible benefits. It is emotionally stimulating, in comparison to the somewhat vacuous atmosphere of a university's thinking for thinking's sake.
This includes textbooks ("Handbook of Applied Cryptography", "Reinforcement learning: an introduction") and introductory articles ("A tutorial on hidden Markov models"). Also, ironically, paper 11 in "Academia" is about Google.
Anyway, there are strong incentives at a university to produce papers, to the point that this is one of the top priorities for every university worker. So I don't wonder that universities are the best at producing papers that get cited; I just wanted to emphasize that even in that area, industry's contributions are not bad. There is also a lot of innovation happening in industry that is never published as papers, since in many places there is no incentive to write one up after doing something of value.
I think you should reconsider your take on that paper 11. It's not "about" Google: it is the cornerstone work that led to the creation of Google, and it was written by two academics (PhD candidates at Stanford). In your previous post, you mentioned "the somewhat vacuous atmosphere" of university thinking. But you need to step back and realize that this paper shows that the university environment led to the creation of the key ideas behind Google, and thus I see no justification for calling it a "vacuous atmosphere".
In fact, I believe what Google is doing so well (and what is attracting great academic researchers like Hinton) is the creation of the rich and rewarding atmosphere mimicking that of classic academia. (And at the same time, cuts to education and research budgets are helping to erode the benefits of a life in the academy.)
Please don't get the impression that I am being condescending toward universities; I respect and admire a lot of the work being done there. I just think that not all innovation can happen there. Google uses a lot of theory, but they also act in the real world and face real-world problems that can inspire new research and new theories, whereas at a university you are, to a large extent, cut off from such stimulus.
The problem with this type of research in particular is that these deep learning networks shine when they are trained for a very long time across many machines, and on huge quantities of data. In other words, serious engineering muscle is necessary that goes beyond what a few graduate students can do in a University setting.
> The point of a university used to be to stand as the highest pinnacle of thought: a place where people could have everything not related to intellectual pursuit taken care of, so they could devote themselves to the pursuit of greater knowledge.
Sounds good on paper, but I'd question if that was ever actually true in reality anyway.
> Now that seems to have been replaced by a corporate campus
And? Is research less valuable because it's done by a corporation? Look at how much basic research IBM sponsors and how much Bell Labs and others have sponsored in the past. If anything, I would go the opposite direction and lament the decline in corporate spending on fundamental research. I'd like to see more academics go into the business world.
"Startups are the new graduate school" - can't remember where I heard that quote, but I think there's something to it.
Commercial motivations often drive academic achievements. Why assume it is the end of his contributions? This situation seems similar to Guinness' practice of hiring all the best graduates from Oxford and Cambridge.
I worry that much of his groundbreaking work will now not be released to the public, or at least not until it is no longer groundbreaking, with corporate interests looked after first and foremost. I don't fault the guy for this move, it certainly makes sense, but I hope we still get access to his research and future discoveries.
Something really interesting must be happening with AI at Google, because in the past few months both Ray Kurzweil (the best-known proponent of the singularity) and Geoff Hinton (the crazy-talented individual who invented deep belief networks built from stacked "restricted Boltzmann machines") have joined the company.
I think it's pretty clear what's so interesting: Google has some of the largest datasets ever assembled, on a hugely diverse range of subjects. If you want to play with that data, you've got to work for Google. Probably the only other place you get to play with that much data is the NSA, and their goals are a bit less fun than Google's.
There's also the fact that Google likely has more computing power than any other organization worldwide, which is what seems to be the attraction for Hinton, at least according to the last sentence of his post.
One of the most striking aspects of the research led by Dr. Hinton is that it has taken place largely without the patent restrictions and bitter infighting over intellectual property that characterize high-technology fields.
“We decided early on not to make money out of this, but just to sort of spread it to infect everybody,” he said. “These companies are terribly pleased with this.”
I doubt Google will stop Hinton from publishing, and I think both will still benefit greatly from that arrangement. Larry and Sergey published "The Anatomy of a Large-Scale Hypertextual Web Search Engine" before they were even incorporated, and it doesn't seem to have hurt them much.
In a way, yes. Research in industry has the advantage that you don't have to "waste" your time applying for grants to fund your own research group. A lot of people seem to live under the illusion that professors actually spend most of their time doing research; they are managers of research groups more than anything else, and growing those groups is hard work that takes up close to all of your time (then add teaching to this). You are unlikely to have more than a few permanent researchers under you, and these researchers will most likely be trying to leave within a few years to get professorships of their own.
Add to this that it is an enormous pain to solve engineering problems at a university: you usually have to hand them to students, limiting what you can produce and maintain. The best I have ever seen for a group was a single full-time software engineer, and this was at arguably the most prestigious university in the world. And don't get me started on the poor incentives to produce good software in academia, even though such software can be essential to making research possible.
I don't know Hinton in person, but I can imagine that at his age, both the possibility of something new and the promise of strong engineering and financial backing for a large group are enormously tempting (also, do they force professors into retirement or start denying them grants in Canada?). Oh, and to those pointing out the potential salary: if my knowledge of the financial situation of tenured professors is generalisable, I would be surprised if personal finances meant much at this point.
Google is slowly becoming the equivalent of a Silicon Valley think tank. :P
Not that it's a bad thing. Lots of smart people did interesting things at IBM, just as people at Microsoft Research are doing some great things - albeit usually without anyone seeing the work in progress.
Don't get me wrong: Google is much better than its "predecessors" in that field, e.g. Microsoft, Apple, IBM.
But in the end it is only one company, with its capital traded on the stock market, with boards and everything else. And while Google now is half good/half evil, who can guarantee that in the future, when Larry and Sergey go away, the company won't be ruled by a Larry Ellison clone?
PS: While Google has a pretty decent portfolio of open source and papers floating around, it is mostly "marginal software" for Google...
The core of its knowledge about the business is still a secret. What would the world and the cloud economy be now without Doug Cutting, who did it all by himself?
In one sentence: it is only ONE company, ruled by the capital market and profits, and that's pretty scary. That's all :s
As I am not a contractor or affiliated with MSR in any way, I can back you up. In my field MSR is light years ahead of Google when it comes to research impact. Google has been gaining a stronger presence and they are generally seen as more innovative. But for those of my colleagues that want an industry research job, MSR and Google are both very attractive, the old stable one and the new kid in town.
(Disclaimer: I work for Microsoft Research Asia) On the other hand, what Google does get around to publishing is often very strong. Outside of universities, and since IBM lost/dropped/??? most of its researchers, we are the only two big companies left that have strong ties to research.
OK, mea culpa. The research part of Microsoft is pretty decent; even I have stopped by and read some of those good papers. :)
But the point I was making was about open culture, and since Google publishes much more (and heavier-weight) open source software than Microsoft, I tend to think they are ahead of the curve, at least in that matter.
But with the current trend toward online education models, our academic institutions are under threat, at least the classic model we used to know. So it is even more dangerous if corporations become the only safe place for big minds to continue their work.
So while companies are getting more open, in this hypothetical future I think that is not enough.
Companies, by their own standards, don't have to think about society's greater good. They can, but they don't have to, contrary to what academia and republics were created for.