Hacker News | bwbellmath's comments

We re-explore the path kernel result from Domingos and address some errors and limitations in his approach, deriving an exact and practical kernel representation.


See our re-examination of the kernel equivalence. Path kernels exactly measure how models learn as their understanding of the data improves during training, and this can be expressed in terms of the gradients with respect to each training input: https://arxiv.org/abs/2308.00824
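A minimal sketch of the idea (my own toy setup, not the paper's code): for a model f_w(x) trained by gradient descent, the path kernel between two inputs accumulates, over training steps, the inner product of the model-output gradients with respect to the weights, K_path(x, x') = Σ_t ∇_w f_{w_t}(x) · ∇_w f_{w_t}(x'). Here I use a one-layer tanh model and squared loss purely for illustration:

```python
import numpy as np

def f(w, x):
    # toy model: one weight per input dimension, tanh output
    return np.tanh(w @ x)

def grad_f(w, x):
    # d/dw tanh(w.x) = (1 - tanh(w.x)^2) * x
    return (1.0 - np.tanh(w @ x) ** 2) * x

def path_kernel(X, y, x_a, x_b, lr=0.1, steps=100):
    """Accumulate K_path(x_a, x_b) while training on (X, y) by gradient descent."""
    w = np.zeros(X.shape[1])
    k = 0.0
    for _ in range(steps):
        # accumulate the gradient inner product along the optimization path
        k += grad_f(w, x_a) @ grad_f(w, x_b)
        # full-batch gradient-descent step on squared loss
        g = np.mean([(f(w, xi) - yi) * grad_f(w, xi) for xi, yi in zip(X, y)], axis=0)
        w -= lr * g
    return k, w

X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0.5, -0.5])
k_ab, w = path_kernel(X, y, X[0], X[1])
```

For this toy model the kernel of an input with itself stays positive throughout training, while orthogonal inputs contribute nothing to each other, which is the "layered understanding per training iteration" intuition in miniature.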

We believe that all neural networks are effectively an SVM or, more generally, a reproducing-kernel architecture that implicitly layers the understanding contributed during each training iteration. Do you have any comments on the RKHS or RKBS context for transformers?

