Hacker News | L2R's comments

Researcher here (in ML): our field is full of noisy results because of this issue. Everyone talks about it, but you can't get around the fact that you can no longer get away with a low publication count.


The issue is that it takes away a significant amount of utility without adding any. Volume/brightness control is slightly better, but you can get the same effect by holding down the physical button. Some users like to use Caps Lock as Ctrl, and this is an instance of the Touch Bar forcing a loss of utility with no gain.

Function keys are used heavily in IDEs, and forcing a long reach for fn + F# keys doesn't add any utility either.

I have yet to see a situation where the touchbar adds some utility that merits the loss in other areas for users.


Based on the results, you have to train a large number of architectures to identify the right subnetwork.
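For anyone curious what "identifying the subnetwork" looks like mechanically: the usual step is magnitude pruning, i.e. keeping only the largest-magnitude weights and zeroing the rest. A minimal NumPy sketch (the function name and keep fraction are illustrative, not from the paper):

```python
import numpy as np

def magnitude_prune_mask(weights, keep_frac=0.25):
    """Return a binary mask keeping the largest-magnitude weights.

    Weights surviving the mask define the candidate subnetwork;
    the rest are zeroed out before retraining.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_frac * flat.size))
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
mask = magnitude_prune_mask(w, keep_frac=0.25)
print(int(mask.sum()))  # 4 of 16 weights survive
```

The expensive part is that you repeat train/prune/reset many times across many initializations, which is why so many architectures end up being trained.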


There are approaches to keep activations stable despite the depth (SELU, for example).


Nothing wrong with an SVM. How else would they create a decision boundary for classifying patients? The choice of the polynomial kernel is interesting, but I don't think it causes any issues given the data.


I see, so basically instead of intuiting a simple threshold (e.g. >X% change), they apply an SVM, which is able to discover more accurate thresholds (and error ranges). Do you have any suggested resources for learning more about SVMs?

I guess my question comes from the observation that these advanced statistical techniques such as machine learning haven't been around for long and yet medicine has often created decision boundaries, presumably just looking at the data and making a reasonable cutoff. Is all the extra effort in a case like this worth the time investment?


Learning about them: https://en.wikipedia.org/wiki/Support-vector_machine.

That will tell you SVMs are ancient (linear version dates back to 1963), and that what they do here isn’t really machine learning, but something similar to linear regression: just as linear regression finds the best (in some strict mathematical sense) line describing a set of points, this finds the best (in a similar mathematical sense) line splitting two sets of points.
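To make the analogy concrete, here's a tiny sketch using scikit-learn (which wraps LIBSVM under the hood); the two toy clusters are made up for illustration:

```python
# Two linearly separable 2-D clusters; a linear SVM finds the
# maximum-margin line between them, much like least squares finds
# the best-fit line through a set of points.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 0], [0, 1],    # class 0
              [3, 3], [4, 3], [3, 4]])   # class 1
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))  # → [0 1]
```

Swapping `kernel="linear"` for `kernel="poly"` gives the polynomial-kernel variant used in the paper, where the separating boundary is curved rather than a straight line.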

For software, take a look at https://www.csie.ntu.edu.tw/~cjlin/libsvm/. Easy to use, fairly flexible, with a Java applet you can play with.


SVMs are as old-school ML as it gets. They guarantee maximum separation at the decision boundary. However, they don't scale very well to higher-dimensional data. The standard approach used to be to apply a dimensionality-reduction technique like PCA to preprocess the data before feeding it into the SVM.
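That classic PCA-then-SVM pipeline is a two-liner in scikit-learn. A sketch on the bundled digits dataset (the component count and train/test split here are arbitrary choices for illustration):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # 64-dimensional inputs
# Reduce to 16 dimensions before the SVM sees the data.
clf = make_pipeline(PCA(n_components=16), SVC(kernel="rbf"))
clf.fit(X[:1000], y[:1000])
acc = clf.score(X[1000:], y[1000:])
print(acc)  # high accuracy despite the 4x dimensionality reduction
```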

This is all before deep learning.


Exactly. Perhaps the paper could have given a clearer message if the abstract had characterized SVMs as a quadratic optimization technique instead of as machine learning?


Spot on. Just a heads up: there is a decent amount of work on using convolutions to condense the initial representations, which can reduce computation time much like your max pooling does. A lot of these tasks can be handled via hyperparameter search over CNNs, so you can easily reach parity with a CNN-LSTM approach using the same number of parameters.
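For concreteness, here's a NumPy sketch of the condensing step: a stride-2 convolution shortens the sequence by the same factor as a stride-2 max pool, but with learned weights (the kernel values below are just illustrative):

```python
import numpy as np

def strided_conv1d(x, kernel, stride=2):
    """Valid 1-D convolution with stride; condenses the sequence by
    the same factor a stride-2 max pool would, with learned weights."""
    k = len(kernel)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride : i * stride + k], kernel)
                     for i in range(out_len)])

x = np.arange(16, dtype=float)        # toy sequence of length 16
kernel = np.array([0.25, 0.5, 0.25])  # an illustrative learned filter
condensed = strided_conv1d(x, kernel, stride=2)
pooled = x.reshape(-1, 2).max(axis=1)  # stride-2 max pool for comparison
print(len(condensed), len(pooled))  # 7 8
```

In a CNN-LSTM, the LSTM then runs over the shorter condensed sequence, which is where the compute savings come from.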

