
A kernel is basically a function that maps data from one dimensional space into another. The kernel trick mentioned here is the computational reduction from O(d^2) to O(d): one doesn't need to calculate the dot products of all the extended basis vectors, only a kernel matrix (the so-called Gram matrix) built from the dot products of the original vectors, which costs only linear time per entry. It seems almost everybody talking about SVMs and kernels gets this wrong.
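
To make the cost argument concrete, here is a minimal NumPy sketch (the degree-2 polynomial kernel is my own example choice): the explicit feature map phi produces an O(d^2)-dimensional extended vector, while the kernel trick gets the same value from a single O(d) dot product in the original space.

    import numpy as np

    d = 500
    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(d), rng.standard_normal(d)

    # Explicit degree-2 feature map: all pairwise products x_i * x_j,
    # i.e. an O(d^2)-dimensional extended vector.
    def phi(v):
        return np.outer(v, v).ravel()

    k_explicit = phi(x) @ phi(y)   # O(d^2) work
    k_trick = (x @ y) ** 2         # O(d) work, same value

    print(np.allclose(k_explicit, k_trick))  # True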



Actually, you've got it wrong.

A kernel function k(x, y) is a function that calculates phi(x)^T phi(y) for some phi. That means it calculates the dot product of two data points in a higher-dimensional space; it does not perform the transformation itself.
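
A small sketch of that point (the Gaussian/RBF kernel is my choice of illustration, not from the comment above): its feature space is infinite-dimensional, so phi(x) can never be written out, yet k(x, y) = phi(x)^T phi(y) is still cheap to evaluate.

    import numpy as np

    def rbf_kernel(x, y, sigma=1.0):
        # Equals phi(x)^T phi(y) for an infinite-dimensional phi;
        # the transformation itself is never carried out.
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    print(rbf_kernel(x, y))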

That implies that kernels do not have to work on the Gram matrix. Kernels can be something completely different, e.g. Fisher kernels.

(What I wrote is based on Bishop's book and others.)



