

How a Toronto professor’s research revolutionized artificial intelligence - dataminer
http://www.thestar.com/news/world/2015/04/17/how-a-toronto-professors-research-revolutionized-artificial-intelligence.html

======
fchollet
Highly glorifying, badly credited article that reads like a state propaganda
piece. Hinton's work is amazing and his contribution has been immense, but
this article is a joke.

Such science "journalism" is seriously hurting the field, including the
researchers that are lionized in these articles. It also makes large groups of
researchers who've been working hard on advancing the field think they don't
matter in the eyes of the public, and that can be quite demoralizing.

That "unsupervised cats" paper from 2012 had nothing new in it at the time,
although it was a feat of parallelization on a cluster of CPUs. Three years
later its (uninteresting) results are not used anywhere. But Google used it as
a PR piece at the time, and hundreds of journalists have been presenting it as
some kind of game-changing breakthrough. It continues to this day.

~~~
canistr
Keep in mind that this is the Toronto Star, a local paper here in Toronto aimed
at a broad audience well outside of tech.

For a country and city that have a heavy inferiority complex and suffer greatly
from the "Brain Drain" of technical talent to the US, this is one of those
articles that's meant to inspire, not an article for Canadians/Torontonians to
puff their chests.

So I wouldn't be so critical about its so-called "propaganda" aspect.

------
carlosgg
The course he taught on Coursera in 2012 is archived if anyone's interested.

[https://www.coursera.org/course/neuralnets](https://www.coursera.org/course/neuralnets)

------
Quanticles
I'm surprised by all of the backlash on here against the article. Look at the
ImageNet competition: when it first started, the entries were very different,
hand-designed algorithms; then all of a sudden networks trained with backprop
dominated.

If you're interested in this area, my company is designing hardware
accelerators for these algorithms. Our versions of the algorithms have a few
extra twists to account for the analog nature of the hardware. I'm probably a
bit biased, but I think this is a really interesting place to be working right
now.

Below is what I posted on this month's Who is Hiring thread:

Isocline - Austin, TX - Software Engineer for High Performance Computing and
Modeling

We are looking for two people - one interested in neural networks and one
interested in GPS.

We are developing microchips that yield a 10-1000x improvement in performance
& energy-efficiency compared to digital ASICs, GPUs, and FPGAs. We are a
bootstrapped company and are fully funded through mid 2016. Patents pending.

$70K – $120K Salary

0.5% – 1.2% Equity

Full Job Description: [https://angel.co/isocline/jobs/38767-software-engineer](https://angel.co/isocline/jobs/38767-software-engineer)

Company website:
[http://isoclineengineering.com/](http://isoclineengineering.com/)

Email me directly if you do not have an AngelList profile

------
FallDead
Who is going to click on an article that isn't super hyped nowadays anyway?

Toronto professor contributes to AI, a subfield of computer science.

Vs

How a Toronto professor changed the world with deep learning.

Which one would you click? This article is framed for an audience that doesn't
exactly care about technical details.

------
Fando
Incredible. It is exciting to see which areas of life AI will optimize in the
future. The medical company that is teaching AI to interpret patient charts is
an excellent example. I wonder if it would be possible to build a deep AI for
interpreting law, effectively replacing the research work that lawyers do. It
seems law has a methodical structure that AI could learn to navigate:
recognizing past precedents, performing wide cross-referencing, exploiting
existing loopholes in the system, and validating judicial decisions.

------
whitten
Is this news because "neural networks" now go by a new buzzword, "deep
learning", and Google is paying attention?

~~~
Animats
It's news because that's the cold place neural networks went to wait out the
"AI winter".

It's amazing that neural networks actually work now. They've been around since
the 1960s, and for four decades they sucked. The algorithms are only a little
smarter today; it's mostly sheer compute power applied to code that fits on one
screen of Matlab.
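
(To make "one screen" concrete, here's a rough sketch in NumPy rather than
Matlab, purely illustrative and not code from the article or from Hinton's
group: a complete two-layer network trained with plain backprop fits
comfortably on one screen.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn XOR with one hidden layer of sigmoid units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, chain rule by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Plain gradient-descent updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```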

~~~
murbard2
Not so much the computing power as a few tricks:

- proper initialization of weights (should have been obvious, but no one
really saw it)

- rectified linear units and dropout
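
(For anyone who hasn't met those terms, here are rough NumPy sketches of each
trick; the helper names and the layer sizes are mine, purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)


def he_init(fan_in, fan_out):
    # "Proper initialization": scale random weights by the fan-in so activations
    # neither explode nor vanish as the network gets deeper (the He-et-al.
    # variant commonly paired with ReLUs).
    return rng.normal(scale=np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))


def relu(z):
    # Rectified linear unit: cheap, and its gradient is 1 for positive inputs,
    # so it doesn't saturate the way sigmoids do.
    return np.maximum(0.0, z)


def dropout(h, p=0.5, training=True):
    # Dropout: randomly zero activations during training and rescale the rest,
    # so no single unit can be relied on too heavily (a strong regularizer).
    if not training:
        return h
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)


# Putting the pieces together for one hidden layer on a fake mini-batch.
W = he_init(784, 256)
x = rng.random((32, 784))
h = dropout(relu(x @ W))
```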

~~~
UmDieWelt
Also, a lot more data to train on.

------
abc_lisper
Can somebody point to good (and up-to-date) resources for learning more about
neural networks?

