bart1ett's comments | Hacker News

After trying this out with the Fourier implementation above, swapping MLP/Attention Linear layers for KANs (all of them, or even just a few layers) produces diverging loss. KANs don't require normalization for good forward-pass dynamics, but they may be trickier to train in a deep net.
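For anyone who wants to try the same thing, here is a rough sketch of what I mean by "swapping Linear layers for KANs". The FourierKANLayer below is my own naive version (a learned 1-D Fourier series per input/output pair), not the exact code from the implementation above, and the gridsize/init choices are arbitrary:

    import torch
    import torch.nn as nn

    class FourierKANLayer(nn.Module):
        """Drop-in replacement for nn.Linear: each output is a sum of
        learned 1-D Fourier series over the inputs."""
        def __init__(self, in_dim, out_dim, gridsize=8):
            super().__init__()
            self.gridsize = gridsize
            # Fourier coefficients: (cos/sin, out_dim, in_dim, gridsize)
            self.coeffs = nn.Parameter(
                torch.randn(2, out_dim, in_dim, gridsize)
                / (in_dim * gridsize) ** 0.5
            )

        def forward(self, x):
            shape = x.shape[:-1]
            x = x.reshape(-1, x.shape[-1])                     # (N, in_dim)
            k = torch.arange(1, self.gridsize + 1, device=x.device)
            arg = x.unsqueeze(-1) * k                          # (N, in_dim, grid)
            y = torch.einsum("nig,oig->no", torch.cos(arg), self.coeffs[0]) \
              + torch.einsum("nig,oig->no", torch.sin(arg), self.coeffs[1])
            return y.reshape(*shape, -1)

    def swap_linear_for_kan(module):
        """Recursively replace every nn.Linear (MLP and attention
        projections alike) with a FourierKANLayer."""
        for name, child in module.named_children():
            if isinstance(child, nn.Linear):
                setattr(module, name,
                        FourierKANLayer(child.in_features, child.out_features))
            else:
                swap_linear_for_kan(child)

Calling swap_linear_for_kan on a transformer block (or only on its MLP) and then training with the existing optimizer setup is the experiment described above.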


Note that KANs use LBFGS, which is a second-order optimization method. My experience with second-order methods suggests that switching to simple gradient descent often leads to divergence.
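For context, torch.optim.LBFGS is driven through a closure that re-evaluates the loss, unlike plain SGD/Adam. A toy comparison of the two loops (the model, data, and hyperparameters here are placeholders, not KAN code):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
    x, y = torch.randn(256, 2), torch.randn(256, 1)
    loss_fn = nn.MSELoss()

    # Quasi-second-order: LBFGS calls the closure (possibly several
    # times per step) to re-evaluate loss and gradients
    opt = torch.optim.LBFGS(model.parameters(), lr=0.1, max_iter=20)

    def closure():
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        return loss

    for _ in range(10):
        opt.step(closure)

    # Plain first-order loop: the setup where, in my experience,
    # divergence tends to show up
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()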

