
Holographic Neural Architectures (2018) - headalgorithm
https://arxiv.org/abs/1806.00931
======
surak
I wonder if this should really be called a holographic network:

"we are projecting all training examples into a single bounded dimension. As
with VAEs, we also combine the input information with an optimized prior.
However, we treat the prior as a separate input to the network. Because the
network has very little information from the training examples, it must
complement it with an accurate general representation of the training set.
Because these representations are continuous, multi-dimensional, and represent
the whole training set, we call them ‘Holographic Representations’ and the
architectures capable of generating them ‘Holographic Neural Architectures’
(HNAs)."

This seems to me to be very similar to what has always been done in
regressions on complex data.
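
As I read the quoted setup, the mechanism is: each example gets a small bounded
encoding, and a separate learned prior vector is fed to the network alongside
it. A minimal numpy sketch of that forward pass (all dimensions and weight
shapes here are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the paper does not specify these here.
D_IN, D_PRIOR, D_HID, D_OUT = 4, 8, 16, 3

# The "optimized prior" is a shared parameter vector, trained jointly
# with the weights and fed to the network as a separate input.
prior = rng.normal(size=D_PRIOR)
W1 = rng.normal(size=(D_IN + D_PRIOR, D_HID)) * 0.1
W2 = rng.normal(size=(D_HID, D_OUT)) * 0.1

def forward(x_encoded):
    """Combine a per-example bounded encoding with the shared prior."""
    h = np.tanh(np.concatenate([x_encoded, prior]) @ W1)
    return h @ W2

x = rng.normal(size=D_IN)   # one example's low-dimensional encoding
y = forward(x)
print(y.shape)  # (3,)
```

Since the per-example encoding carries little information, everything else
has to come from the prior and the weights, which is the point surak is
questioning: that is also what the learned parameters of any regression do.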

~~~
marmaduke
> it must complement it with an accurate general representation of the
> training set

This in particular smells off; it sounds magical. It could only work when the
sort of general representations the network knows how to complement with
already happen to match the data.

