achompas 564 days ago

First: great post! Don't let my comments deter you--I'm only sharing what I think is some constructive criticism.

I think this should be titled "why use kernels," as the gain here comes from using an RBF kernel (and not just an SVM). Kernel-based L2/Tikhonov regression (a.k.a. kernel ridge regression) could perform just as well, although it might require more memory to train; see the sketch below.
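
Here's a quick, hedged sketch of that claim. The dataset, the hyperparameters, and the regress-on-+/-1-labels trick are my choices (and sklearn.kernel_ridge.KernelRidge postdates older scikit-learn releases, so treat this as illustrative, not as what the post ran):

    import numpy as np
    from sklearn.datasets import make_circles
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # A toy set that is not linearly separable in its natural 2-D space:
    X, y = make_circles(n_samples=600, noise=0.1, factor=0.4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3,
                                              random_state=0)

    # RBF-kernel SVM:
    svc = SVC(kernel='rbf', gamma=1.0).fit(X_tr, y_tr)

    # Kernel ridge regression on +/-1 labels; classify by the sign of the fit:
    krr = KernelRidge(kernel='rbf', alpha=0.1, gamma=1.0).fit(X_tr, 2 * y_tr - 1)

    print('SVC accuracy:', svc.score(X_te, y_te))
    print('KRR accuracy:', np.mean((krr.predict(X_te) > 0) == y_te))

KRR solves a dense n-by-n linear system over all training points, which is where the memory caveat comes from.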

Other thoughts:

* your training set is 66% of the entire set, not 80%

* you use "degree=0.5" when creating the sklearn.svm.SVC, but since you don't specify the kernel type it defaults to RBF, which doesn't accept a degree arg (degree only applies to the polynomial kernel). See [0] for more, and the first snippet after this list for a repro.

* you should motivate the kernel transformation more thoroughly: by mapping the data into a higher-dimensional space, you hope to find a separating boundary that doesn't exist in the natural space. The mapping is chosen so that we only ever need inner products between pairs of mapped vectors, which lets us map to ANY space (even ones of infinite dimension!!) as long as we can compute those inner products (see the second snippet after this list).

(There are some properties of these higher-dimensional spaces, such as the fact that they're Hilbert spaces, that make the above possible, but it's late and I can't remember the details off the cuff.)
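
To make the degree/kernel point concrete, here's a hedged repro -- the parameter values other than degree=0.5 are mine:

    from sklearn.svm import SVC

    # What the post does: kernel defaults to 'rbf', so degree is irrelevant;
    # depending on the scikit-learn version it's silently ignored or rejected
    # when you call fit.
    clf_post = SVC(degree=0.5)

    # Probably intended: either a polynomial kernel, where degree applies...
    clf_poly = SVC(kernel='poly', degree=3)

    # ...or an explicit RBF kernel, whose width is tuned via gamma instead:
    clf_rbf = SVC(kernel='rbf', gamma=0.5)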
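
And a sketch of the kernel-trick point: the SVM never materializes the mapped vectors (for RBF the implicit space is infinite-dimensional); it only ever consumes pairwise kernel values. You can check this by handing SVC a precomputed Gram matrix -- dataset and gamma are again my choices:

    import numpy as np
    from sklearn.datasets import make_circles
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    X, y = make_circles(n_samples=200, noise=0.05, factor=0.3, random_state=0)
    gamma = 1.0

    # Let SVC compute K(x, z) = exp(-gamma * ||x - z||^2) internally:
    implicit = SVC(kernel='rbf', gamma=gamma).fit(X, y)

    # Same model, but we supply the n-by-n matrix of kernel values ourselves:
    K = rbf_kernel(X, X, gamma=gamma)
    explicit = SVC(kernel='precomputed').fit(K, y)

    # Should print True (modulo floating-point ties): only the kernel values
    # mattered, never explicit coordinates in the mapped space.
    print(np.array_equal(implicit.predict(X), explicit.predict(K)))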

[0] http://scikit-learn.org/dev/modules/generated/sklearn.svm.SV...



