
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
http://people.eecs.berkeley.edu/~bmild/fourfeat/
======
memexy
> We show that passing input points through a simple Fourier feature mapping
> enables a multilayer perceptron (MLP) to learn high-frequency functions in
> low-dimensional problem domains. These results shed light on recent advances
> in computer vision and graphics that achieve state-of-the-art results by
> using MLPs to represent complex 3D objects and scenes. Using tools from the
> neural tangent kernel (NTK) literature, we show that a standard MLP fails to
> learn high frequencies both in theory and in practice. To overcome this
> spectral bias, we use a Fourier feature mapping to transform the effective
> NTK into a stationary kernel with a tunable bandwidth. We suggest an
> approach for selecting problem-specific Fourier features that greatly
> improves the performance of MLPs for low-dimensional regression tasks
> relevant to the computer vision and graphics communities.

I don't know what some of those words mean, but I had been wondering why neural
networks did not have Fourier transform blocks. Convolution becomes point-wise
multiplication when the functions are decomposed into their frequency
components, so it seemed like an obvious thing to do. It's good to see I wasn't
insane.
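
For anyone who wants to see that fact concretely, here's a quick NumPy sanity check of the convolution theorem (the array length and values are arbitrary):

```python
import numpy as np

# Circular convolution in the "spatial" domain equals point-wise
# multiplication in the frequency domain, up to an inverse transform.
rng = np.random.default_rng(0)
n = 64
a = rng.standard_normal(n)
b = rng.standard_normal(n)

# Circular convolution via explicit summation.
explicit = np.array(
    [sum(a[k] * b[(i - k) % n] for k in range(n)) for i in range(n)]
)

# Same result via FFT: multiply point-wise in frequency space, transform back.
via_fft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

print(np.allclose(explicit, via_fft))  # True
```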

> In this paper, we train MLP networks to learn low dimensional functions,
> such as the function defined by an image that maps each (x, y) pixel
> coordinate to an output (r, g, b) color. A standard MLP is not able to learn
> such functions (blue border image). Simply applying a Fourier feature
> mapping to the input (x, y) points before passing them to the network allows
> for rapid convergence (orange border image).

Technically, a Fourier feature mapping is not a Fourier transform (it evaluates
sinusoids of fixed random frequencies at each input point rather than
decomposing a signal into its spectrum), but the idea of giving the network a
frequency-based representation of its inputs seems to hold up.
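
For reference, the mapping the paper uses is γ(v) = [cos(2πBv), sin(2πBv)], where the rows of B are random frequencies drawn from a Gaussian whose scale σ tunes the kernel bandwidth. A minimal NumPy sketch (the σ value and feature count here are illustrative, not values from the paper):

```python
import numpy as np

def fourier_features(coords, B):
    """Map low-dimensional coords of shape (N, d) to (N, 2m) features.

    gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)], with B an (m, d) matrix
    of random frequencies, as in the paper's Gaussian mapping.
    """
    proj = 2 * np.pi * coords @ B.T  # (N, m) projections onto frequencies
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Example: map (x, y) pixel coordinates in [0, 1]^2 before an MLP.
rng = np.random.default_rng(0)
sigma = 10.0                               # bandwidth; tuned per problem
B = sigma * rng.standard_normal((256, 2))  # 256 random 2-D frequencies
xy = rng.random((4096, 2))                 # dummy pixel coordinates
features = fourier_features(xy, B)         # shape (4096, 512)
```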

Fun paper. Thanks for posting.

