Why doesn't this just move the optimization problem? Aren't you now just optimizing your DeepRL network rather than the network you're trying to optimize?
In "normal" machine learning this is basically hyperparmater optimization for a given dataset (eg, the depth of a random forest, XGB parameters, the best random seed/jk )
In this case it tests different combinations of operators on a known dataset to see what performs best. So it is optimizing the prediction network.
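As a rough analogy (not the paper's actual method), here is what "try combinations on a fixed dataset and keep the best" looks like as a plain random hyperparameter search; the dataset and search space here are just placeholders:

    import random
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Known, fixed dataset; the search just tries parameter combinations
    # and keeps whichever scores best.
    X, y = load_digits(return_X_y=True)
    space = {"max_depth": [3, 5, 10, None], "n_estimators": [50, 100, 200]}

    best_score, best_params = -1.0, None
    for _ in range(10):
        params = {k: random.choice(v) for k, v in space.items()}
        score = cross_val_score(RandomForestClassifier(**params), X, y, cv=3).mean()
        if score > best_score:
            best_score, best_params = score, params
    print(best_params, best_score)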
(Also this isn't DeepRL, it's a deep neural network. I think that was a typo)
Also it seems different from more traditional hyperparameter optimization because it makes novel cells. So the structure of the network isn't limited to our existing library of layers/cells.
It's entirely true that these are combinations that humans haven't (and probably wouldn't) come up with.
I don't want to underplay this. "It's similar to hyperparameter search" makes it sound like it isn't interesting or novel, which is untrue. I completely believe it is a revolutionary way to build software (so much so that I quit my job, raised funding, and am working on a similar space of problems).
But it isn't doing something like inventing new math operations similar to the existing operators which humans put together to form cells/layers. It is rearranging and choosing those operators in new ways.
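To make "rearranging existing operators" concrete, here is a minimal sketch of sampling a cell from a fixed library of primitive ops; the op set and cell structure are invented for illustration, not taken from the paper:

    import random
    import torch.nn as nn

    # A small library of existing building blocks (operators humans already wrote).
    PRIMITIVES = {
        "conv3x3": lambda c: nn.Conv2d(c, c, 3, padding=1),
        "conv5x5": lambda c: nn.Conv2d(c, c, 5, padding=2),
        "maxpool3x3": lambda c: nn.MaxPool2d(3, stride=1, padding=1),
        "identity": lambda c: nn.Identity(),
    }

    def sample_cell(channels, n_ops=4):
        # The search picks which ops to chain and in what order;
        # it never invents a new mathematical operation.
        names = [random.choice(list(PRIMITIVES)) for _ in range(n_ops)]
        return names, nn.Sequential(*(PRIMITIVES[n](channels) for n in names))

    names, cell = sample_cell(channels=16)
    print(names)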
Thanks for sharing!
Speaking as an Australian, knowing which common spiders aren't poisonous (e.g., daddy long-legs, huntsmen, little jumpy guys) is general knowledge, as is knowing the really bad ones (funnel-webs, redbacks). If you're bitten by something else, you generally assume it's somewhat poisonous and pay close attention to how you're feeling.
This kind of approach would have more impact for snakes, whose venom can vary significantly -- although hanging around to snap a picture seems dangerous. Perhaps a suggestion system based on salient features?
The app helps people decide whether it's a harmless spider or something that requires urgent medical attention.
In fact, since the numbers are so small, widespread use of such an app seems like it may actually increase the number of deaths: instead of seeing an unidentified spider, feeling intense pain, and getting treatment, people may get a false classification and so endure the pain without prompt-enough treatment ("the app says it's just a [something other than a funnel-web], I'll be fine!").
Funnel-webs are terrifying.
Not saying a doctor is perfect, though I find they're often better than they get credit for.
I wonder how different bites present themselves - whether there is enough differentiation to provide a conclusive level of feedback.
The example there is 7 lines (counting the NN description as one line). That's using an (easy) pre-existing dataset too, and a primitive neural network.
That's roughly the same as in Python using something like the fast.ai library. I think that comes to 4 lines (not including data wrangling or inputs):
    data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
    learn = ConvLearner.pretrained(arch, data, precompute=True)
    learn.fit(0.01, 2)  # train; exact learning rate/epochs may differ
    log_preds = learn.predict()
Also note that this AutoML version uses zero lines of code.
Data Turks is manual labeling.
There are active learning and related algorithms where you trace the boundary of your classifier and pass examples along that boundary to be manually labeled (as they are the ones the classifier is most unsure about).
But there is nothing "auto" about this - it's just being smart about where to deploy the manual labor.
It's worth noting that highlighted sections in Medium articles probably aren't great summaries (they are more a representation of important points - which is a useful thing to predict as well).
For example, many summarizer systems are trained on the single-line summaries that accompany news articles. There have been attempts to use Tweets as summaries for linked articles too.