
A non-linearity that works much better than ReLUs - seesawtron
https://www.youtube.com/watch?v=Q2fLWGBeaiI&feature=youtu.be
======
seesawtron
Could one benefit from this by replacing ReLU with sine activation functions in
all DNN (conv.) architectures? If I understood correctly, in this paper they
only show that it works better in MLP-based (non-convolutional) deep neural
architectures?

Traditional ReLU-CNNs do perform better than sine-MLPs in one of the
tasks (Supp. Table 4).
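
For anyone wondering what the swap would look like in practice, here is a
minimal PyTorch sketch of an MLP with ReLU replaced by a sine activation. The
Sine module, the w0 frequency scale, and the make_mlp helper are my own
illustration, not code from the paper, and sine networks generally also need a
suitable weight initialization to train well:

    import torch
    import torch.nn as nn

    class Sine(nn.Module):
        """Elementwise sine activation; w0 scales the input frequency (assumed value)."""
        def __init__(self, w0: float = 30.0):
            super().__init__()
            self.w0 = w0

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.sin(self.w0 * x)

    def make_mlp(in_dim: int, hidden: int, out_dim: int) -> nn.Sequential:
        # Same layout as a ReLU MLP, with Sine() where nn.ReLU() would normally go.
        return nn.Sequential(
            nn.Linear(in_dim, hidden),
            Sine(),
            nn.Linear(hidden, hidden),
            Sine(),
            nn.Linear(hidden, out_dim),
        )

Whether the same drop-in swap helps inside convolutional architectures is
exactly the open question above.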

------
ai_ja_nai
Sounds very interesting, although it comes with big prerequisites regarding the
kind of problems they are addressing.

