In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.84, which is only 0.1 percent worse and 1.2x faster than the current state-of-the-art model. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art.
To translate that: they built and trained an RNN to design neural networks. These machine-designed networks nearly match the best human-designed networks on an image-recognition benchmark, and outperform the best human-designed systems on a text-understanding benchmark.
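For the curious, the core training loop is plain REINFORCE. Here is a minimal sketch of that idea — not the paper's actual controller (which is an RNN emitting a long sequence of architecture tokens), but the same mechanism on a toy problem. The filter-count choices and the reward table are hypothetical stand-ins for "train the child network and measure validation accuracy":

```python
# Toy REINFORCE loop: a "controller" holds a softmax over a few architecture
# choices, samples one, receives a reward, and nudges the log-probability of
# the sampled choice in proportion to (reward - baseline).
import math
import random

random.seed(0)

CHOICES = [16, 32, 64]        # hypothetical filter-count options
logits = [0.0, 0.0, 0.0]      # controller parameters, one logit per choice

def toy_reward(filters):
    """Stand-in for 'train the child net, measure validation accuracy'."""
    return {16: 0.70, 32: 0.80, 64: 0.90}[filters]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

baseline = 0.0   # moving-average baseline reduces gradient variance
lr = 0.2
for _ in range(2000):
    probs = softmax(logits)
    idx = random.choices(range(len(CHOICES)), weights=probs)[0]
    reward = toy_reward(CHOICES[idx])
    advantage = reward - baseline
    # REINFORCE: gradient of log p(sampled action) w.r.t. the logits
    for j in range(len(logits)):
        grad = (1.0 if j == idx else 0.0) - probs[j]
        logits[j] += lr * advantage * grad
    baseline = 0.9 * baseline + 0.1 * reward

best = CHOICES[max(range(len(CHOICES)), key=lambda j: logits[j])]
print(best)   # the controller drifts toward the highest-reward choice
```

In the paper the "action" is an entire architecture description and each reward costs a full child-network training run, which is why the search needed hundreds of GPUs, but the update rule is this simple.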
Without a relevant job position, knowing how to implement deep learning is a buzzword trick for Medium thought pieces, or for getting $$ in funding from venture capitalists for a generic "AI" startup whose workings no one actually understands.
I used to teach at a data science bootcamp where many of the students got hired by big companies.
I've also been running a deep learning startup for the last few years and have hired quite a few people.
Many of our team don't have PhDs but can still write backprop code, even for complex modules like Inception, among other things. A lot of my students didn't have PhDs either.
A few of us (me included) are self-taught. I've also coauthored the largest O'Reilly book on deep learning.
One piece of advice I would offer is to build something that differentiates you from the rest. Many of these "Medium thought pieces" you're talking about are actually very cool applications of deep learning. If you want to get hired for these kinds of roles, demonstrate that you understand how to build things with deep learning. The litmus test I would look for is "I trained a net from scratch and innovated in x way." Honestly, talent that can do well at software engineering as well as deep learning is rare. I'm not convinced a PhD is a hard requirement.
I get that recruiters at these larger companies definitely tend to look for the buzzwords and often can't tell the difference, so it's definitely harder going the traditional route.
Tech hiring also tends to be as much a networking thing as it is buzzword bingo, no matter what field you're in. If you can network a bit and build something cool that demonstrates an understanding of deep learning, I don't see the problem.
I am hesitant to recommend your book to a true practitioner because of the knowledge assumed in the math section. A better treatment of the mathematics would assume the reader has little to no background but is intelligent enough to learn, from the ground up, the specific use cases of the mathematics for the deep learning techniques presented in the book. See http://www.deeplearningbook.org/ for a better treatment of the math review: it is more thorough and makes fewer assumptions about the math background of the reader.
I would love to recommend your book to a practitioner, but I'm afraid the math section (in the version I reviewed) would scare them off, or they would get little out of it.
This makes sense. However, any given topic will always have prerequisites. It is recursive and dangerous to assume otherwise, because knowledge builds on previous knowledge. Gaps in those prerequisites should be an exception handled by the reader, not by the author, because handling them in the book penalizes everyone who doesn't have that gap.
I understand authors wanting their books to be self-contained and inclusive, bringing everyone up to speed, but this brings up awful college memories of waiting for the one person who didn't know matrix multiplication to ask a question in a class that was not about linear algebra. That person was the exception, and instead of learning it on his own time, he was willing to penalize everyone.
Similarly, in the context of books, this is why 600-page books are the norm, with the same first 400 pages "bringing everyone up to speed" (100 pages for a Python introduction, 70 pages for elementary linear algebra, etc.).
The overlap is staggering, and it is safe to assume that a 600-page book does not cost the same as a 200-page book. In other words, everyone is paying the price for the one guy who wants to do sexy machine learning/deep learning/pattern recognition but doesn't want to bother looking up the Jacobian on his own. We're paying for 400 pages we'll never read.
A large percentage of books caters to the beginner/neophyte, even though being a beginner is a relatively short phase for someone with a long road ahead. There's an assumption of non-evolution, an everlasting tutorial 0. Imagine how frustrating it would be if every item in the world were designed for crawling babies, disregarding the fact that they're on their way to becoming adults.
The interviews the authors give paint the picture that this book is for the "practitioner". If Chapter 1 is meant as a brief review, then don't advertise the book to the complete beginner. Either make the book for the practitioner or don't. And if you do, then don't pretend to serve introductory math that the unfamiliar reader will actually read and understand; it fails at that purpose. So either make that chapter useful for the practitioner, or leave it out and assume the mathematically prepared reader already knows it. Maybe put it in an appendix and let us get to the meat quicker. It honestly does not take much space to define what a matrix is, give an example, define matrix multiplication, give examples, and so on. The same applies to basic definitions and examples of derivatives. These are mechanical procedures anyone can learn, and it wouldn't take much extra room to include some thoughtful examples. Maybe I should write an "introductory group theory" textbook and start discussing geometric group theory two pages in, if we want to get into not serving an intended audience's purpose.
I like what the authors are doing. I'm on their side, but I'm making suggestions that could serve a wider audience.
The book is meant to contain simple examples oriented towards engineers building applications rather than deriving backprop.
The book isn't called "the definitive guide" for a reason ;)
In my opinion, it wouldn't be much effort to define what these mathematical objects are and show basic examples with basic computations to solidify the concepts. The notion of gradient descent and derivatives (or partial derivatives) isn't that difficult to understand and could easily be explained in a page or less.
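To illustrate the kind of one-page explanation being suggested (the function and numbers here are my own illustrative choices, not from the book): gradient descent repeatedly steps opposite the derivative. For f(x) = (x - 3)^2, the derivative is f'(x) = 2(x - 3), and the minimum is at x = 3:

```python
# Gradient descent on a one-variable function: follow the negative slope.
def f_prime(x):
    return 2.0 * (x - 3.0)   # derivative of f(x) = (x - 3)^2

x = 0.0     # starting guess
lr = 0.1    # learning rate (step size)
for _ in range(100):
    x -= lr * f_prime(x)     # step opposite the slope

print(round(x, 4))  # -> 3.0, the minimizer of f
```

That's the whole idea; training a neural net is the same loop with f replaced by the loss and f' computed by backprop over many parameters at once.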
For example when you discuss the Outer Product:
"This is known as the “tensor product” of two input vectors. We take each element of a column vector and multiply it by all of the elements in a row vector creating a new row in the resultant matrix."
It would be nice for the beginner to see an example of this, and, as stated, it wouldn't take much space in the book to provide one. I think these sorts of things would differentiate your book from others. If you made it more friendly, more "practitioners" would be willing to read and use it end to end.
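For instance, the outer product described in the quoted passage worked out concretely — each element of the column vector multiplies the entire row vector, producing one row of the result (the numbers below are just an illustration):

```python
# Outer ("tensor") product of a column vector and a row vector.
def outer(col, row):
    # Each element c of the column vector scales the whole row vector,
    # yielding one row [c*r for r in row] of the resulting matrix.
    return [[c * r for r in row] for c in col]

print(outer([1, 2, 3], [4, 5]))
# [[4, 5], [8, 10], [12, 15]]
```

A three-element column times a two-element row gives a 3x2 matrix — exactly the "new row per column element" picture the passage describes, and it fits in a third of a page.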
I would maybe rephrase this as "Machine learning only recently became mainstream, and now everyone wants in".
If you are talking about, say, recruiters: they will always tend to piggyback on buzzwords. They don't really learn the technology themselves. Requiring a PhD, and some of the other things being talked about here, is a general "data science problem".
I can't count how many candidates I've seen get turned down for jobs because they just went through the traditional HR funnel. Your best bet, as I said earlier, is to network.
The worst parts of getting a deep learning job are the same ones that plague every tech position out there.
Uhh... no. I have been doing this for five years or so. I don't have a PhD; most people we have hired don't have PhDs. Some write NIPS papers (very different from a "Medium thought piece") in their spare time. What we optimize for is relevant experience and the ability to do more than just throw a framework at something. That is highly correlated with having worked on this for a while, or with having strong math skills. Guess what? Some of the people who have those skills have a PhD. Some, not all.
These can either be horizontal plays or product focused. For the latter it doesn't matter as much. For full stack developers domain knowledge is usually a lot more helpful there.
For horizontal plays this can matter a bit more. I run a well funded deep learning startup and we are starting to hire full stack developers next year.
I have thought about this a bit, and at a minimum we would be looking for people who have dealt with some basic machine learning before. Much of the deep learning work we do involves displaying some sort of output from a neural network (e.g., various ways of visualizing a choice a neural net makes). Being able to visualize clusters is also important (this would be d3). The other part would be a basic understanding of how to communicate with a data pipeline of some kind. We are Java-based, but I imagine a lot of startups would be Python-based here.
I thought it was a negative to have a PhD in SV?
Even outside of "hot" research topics, large companies and startups doing technically interesting things recruit heavily out of top PhD programs. Many companies even have different hiring processes for Ph.D. candidates, even for job positions that don't require or recommend a Ph.D., which suggests those companies evaluate Ph.D. candidates differently (and therefore view them as a different sort of asset).
I just added a few comments and constant names.
Do we know the person who drew those digits, so we can ask what the artist had in mind when making this masterpiece? And even then, someone might have been trying to draw a "2" and the end result looks more like a "3".
I think some of the test cases simply don't have a definitive answer, and trying to reach 100% accuracy is a misguided effort.