As a busy programmer who gets exhausted at night from the mental effort my day job requires, I feel like I will never be able to catch up at this rate.
Are there any introductory materials for this field? Something I can read slowly on weekends that gives an overview of the fundamental concepts (primarily) and basic techniques (secondarily), without overwhelming the reader with the more advanced or complicated techniques (at least at the beginning).
I'd really appreciate any recommendations.
Improved algorithms have been devised since it was written, see
and, in particular,
If you're a beginner, don't start with deep nets. Start with basic data analysis.
Fundamentally, these models are just large compositions of multiplications and simple nonlinearities, computed over and over.
You can construct some here: http://playground.tensorflow.org
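To make "compositions of multiplications" concrete, here is a toy two-layer network forward pass in NumPy. The shapes and random weights are made up purely for illustration; it's just each layer as a matrix multiplication followed by a nonlinearity:

```python
import numpy as np

# A toy two-layer network: each layer is a matrix multiplication
# followed by a simple nonlinearity, applied over and over.
# All shapes and weights here are arbitrary, for illustration only.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))    # 4 samples, 3 input features
W1 = rng.normal(size=(3, 5))   # first layer weights
W2 = rng.normal(size=(5, 2))   # second layer weights

h = np.maximum(0, x @ W1)      # hidden layer: ReLU(x W1)
y = h @ W2                     # output layer (no nonlinearity)
print(y.shape)                 # (4, 2)
```

Training is then just nudging `W1` and `W2` so the outputs match targets, which is what the playground animates.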
It feels like many balls are still up in the air in deep learning, and the dust will likely settle at some point. The tried and true will remain and its essence will emerge, while the rest will sink to the bottom.
Crossing my fingers for a library or API to do the grunt work for me.
Got into NLTK, used the built-in sentence tokenizer and word tokenizer, then WordNet POS tagging to remove proper nouns, added some more cleanup code, and I had something passable within two days.
Now at this point I couldn't write a POS tagger to save my life, but it was cool seeing code you wrote over two evenings run over 30k books just like that (which still took a week, but ah well).
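The actual pipeline above used NLTK's tokenizers and POS tagger; as a rough stdlib-only sketch of the same steps (the regexes and the capitalization heuristic for proper nouns are crude stand-ins, not what NLTK actually does):

```python
import re

def sentences(text):
    # Crude sentence split on punctuation; NLTK's punkt tokenizer
    # handles abbreviations etc. far better.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(sentence):
    return re.findall(r"[A-Za-z']+", sentence)

def strip_proper_nouns(sentence):
    # Heuristic stand-in for POS tagging: drop capitalized words
    # that are not sentence-initial.
    toks = words(sentence)
    return [w for i, w in enumerate(toks) if i == 0 or not w[0].isupper()]

text = "Alice met Bob in Paris. They talked for hours."
for s in sentences(text):
    print(strip_proper_nouns(s))
```

With NLTK you would swap in `nltk.sent_tokenize`, `nltk.word_tokenize`, and `nltk.pos_tag`, filtering out tokens tagged `NNP`/`NNPS`.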
And when I think about people who are not even familiar with machine learning, they really need to buckle up and spend serious time catching up with the technology that's making history today.
But now is really a good time to start. Only a handful of people in the whole wide world are masters of DL, and anyone with skills in it is in high demand. And it's not just about a job: it is really cool to play with. I really feel like I'm doing something heavy.
Improving GANs https://arxiv.org/abs/1606.03498
Improving VAEs http://arxiv.org/abs/1606.04934
Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks http://arxiv.org/abs/1605.09674
Generative Adversarial Imitation Learning http://arxiv.org/abs/1606.03476
The last one seems especially exciting; I expect imitation learning would be a great approach for many robotics tasks.
ImageNet has 1,034,908 labeled images. In a hospital setting, you'd be lucky to get 1000 participants.
That means those datasets really show off the power of unsupervised, semi-supervised, or one-shot learning algorithms. And if you set up the problem well, each increment in ROC AUC translates into lives saved.
Happy to point you in the right direction when the time comes—my email is in my HN profile.
Even outcomes data, such as procedures performed and diagnoses across multiple visits, can easily be obtained for millions of patients on a national scale. My research involves applying deep learning to these datasets.
In my limited experience, EHRs usually aren't set up to handle structured labeling of something like an image. There are lots of different fields for unstructured text entry. Then the only label left is the billing code, which ends up being a poor choice of label, since the hospital often bills for what it can get reimbursed, not for what you actually had.
E.g., you know from image metadata that it's a chest x-ray of patient #1234 from 2012-03-04. Then you automatically check the patient's EHR near that date: do they have lung cancer (Y/N), broken ribs (Y/N), TB (Y/N), etc., and make your image labels based on that. How diagnoses are codified, though, differs significantly between medical systems; I have no idea how it works in US EHRs.
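That auto-labeling idea can be sketched in a few lines. The in-memory `ehr` dict, the condition names, and the 30-day window are all hypothetical stand-ins; real EHR schemas and code systems (ICD-9/ICD-10, SNOMED, etc.) vary widely between institutions:

```python
from datetime import date, timedelta

# Hypothetical, simplified EHR lookup: diagnosis labels per patient
# with the date each was recorded. A real system would query coded
# diagnoses (ICD, SNOMED, ...) from the institution's EHR database.
ehr = {
    1234: [("lung_cancer", date(2012, 3, 1)),
           ("broken_rib",  date(2011, 7, 20))],
}

def label_image(patient_id, image_date, window_days=30):
    """Binary image labels from diagnoses recorded within
    window_days of the image date."""
    window = timedelta(days=window_days)
    nearby = {code for code, d in ehr.get(patient_id, [])
              if abs(d - image_date) <= window}
    return {c: (c in nearby) for c in ("lung_cancer", "broken_rib", "tb")}

print(label_image(1234, date(2012, 3, 4)))
# {'lung_cancer': True, 'broken_rib': False, 'tb': False}
```

The chest x-ray from 2012-03-04 picks up the lung cancer diagnosis recorded three days earlier, but not the old rib fracture.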
We are using data provided by AHRQ HCUP and some internal datasets, with TensorFlow for ML.
I'm imagining something where you take a corporate DB and reduce it down to a model. That model can then be shared with third parties and used to generate unlimited amounts of test data that looks like real data without revealing any actual user info.
I'm glad you are since I'm using it myself, but I haven't used any other frameworks so I'm wondering if I should expect more people to head in this direction, or spend time learning others.
However, the technique does not seem to have a generative interpretation.
My guess is that your brain is creeped out by an uncanny-valley-like effect. The images are plausible in their structure so part of your visual system is happy, but the causality is not there, so your brain is thrashing around looking for meaning that is missing.
Using larger images makes your code run much slower while giving only slightly better results, so people usually use tiny images. All their outputs are 32×32.
Q: What's a generative model?
A: Well, we have these neural nets and...
Ugh. I understand the excitement for one's own research but if the point is to make these results accessible to a wider audience then it's important not to get lost in the details, at least not right away. IMO, there's very little here in the way of high-level intuition. If I did not already have a PhD, and some exposure to ML (not my area), I would probably find this article entirely indecipherable. Again, paraphrasing:
Q: OK, so I understand you want to create pictures that resemble real photos. And you really like this DCGAN method, right?
A: Yes! See, it takes 100 random numbers and...
Come on guys. You can do better.
It is not. While it's a big, growing field, it's really a narrow audience that can be expected to understand this, far from everyone in the field. How intuitive the writing appears is subjective; I'm sure I don't understand a word of it, and not just for lack of intuition.
Maybe you can do better as well? Which is to say, effectively communicating something technical to a diverse audience is difficult, let's not be unnecessarily derisive.
There's nothing especially derisive in my assessment. I don't think the content is bad, just boring. I also think it's too technical for a non-specialist audience.
> Maybe you can do better as well?
My first criticism is that generative models are not something specific to neural nets but that's not obvious from the article.
My second criticism is that their explanations are overly mechanical. In the case of DCGAN the article begins by talking about parameters and magic numbers; i.e. they explain how the thing works rather than what it does, at an intuitive level.
Notice that on the Wikipedia page for generative models, there is a lot more than variational methods.
In some kinds of logic this is impossible, but isn't that how reports are usually written? The what is the conclusion of the how. Many papers omit the mechanics completely and get criticized for that, too.
It's talking about machines gaining the power of imagination. How is that boring?
It does nothing of the sort! If anyone comes away with this conclusion I would say the article has failed entirely. Which, btw, is my whole point: there's no over-arching intuitive explanation of what generative models are, why they're interesting or even what concrete problem they're solving.
From their perspective, it's hard to put such information in an accessible format. Try explaining Redux, for example, to a person who has no idea what functional programming is. How would you do it?