

On Eigenfaces: Creating ghost-like images from a set of faces - dusenberrymw
http://mikedusenberry.com/on-eigenfaces/

======
theoh
There was a generalisation of the eigenface technique to 3D published at
SIGGRAPH about 15 years ago.
[http://gravis.cs.unibas.ch/Sigg99.html](http://gravis.cs.unibas.ch/Sigg99.html)

One of the authors is still working on refining their approach, by the looks
of it:
[http://gravis.cs.unibas.ch/projects.html](http://gravis.cs.unibas.ch/projects.html)

------
stared
The (nomen omen) spectral look is an artifact of using negative values. It's
nice to see non-negative components instead -
[http://www.quantumblah.org/?p=428](http://www.quantumblah.org/?p=428). They
are both more accurate and more human-interpretable (at the cost of
computational efficiency).
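
For anyone who wants to compare (assuming the non-negative decomposition in
question is something along the lines of NMF), here's a rough sketch using
scikit-learn's Olivetti faces - the dataset choice and parameters are just
illustrative:

    import matplotlib.pyplot as plt
    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.decomposition import NMF, PCA

    X = fetch_olivetti_faces().data  # 400 images, 64x64, flattened

    # PCA components (eigenfaces) can go negative -> the ghostly look;
    # NMF components are non-negative -> parts-based and easier to read.
    pca = PCA(n_components=16).fit(X)
    nmf = NMF(n_components=16, init='nndsvd', max_iter=500).fit(X)

    fig, axes = plt.subplots(2, 16, figsize=(16, 2))
    for i in range(16):
        for row, model in enumerate((pca, nmf)):
            axes[row, i].imshow(model.components_[i].reshape(64, 64),
                                cmap='gray')
            axes[row, i].axis('off')
    plt.show()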

~~~
dusenberrymw
Great contribution and interesting read. I'll certainly be checking this
method out in more depth!

------
sabalaba
Here's an animation of an autoencoder learning filter weights. It's
interesting that they end up looking similar to the eigenfaces.

[https://lambdal.com/images/autoencoder-learning-face-filters.gif](https://lambdal.com/images/autoencoder-learning-face-filters.gif)

~~~
chestervonwinch
It's not completely by chance. There's an old paper [1] showing that if the
activation functions are well approximated by just the linear term of their
Taylor expansions, then the optimal encoding and decoding weights are the
same as those given by PCA.

There are probably newer results on this topic, I'm sure.

However, I will say that I've trained some autoencoders on toy datasets like
those found in scikit-learn, and the subspaces learned by the autoencoder
were often similar if not identical to those found through PCA. For example,
if my input vectors were in R^n (with n > 3) and I restricted an autoencoder
to 3 hidden units, the encoding matrix of the autoencoder would span the same
subspace as the first 3 principal component directions.

[1]: [http://oucsace.cs.ohiou.edu/~razvan/courses/dl6900/papers/bourlard-kamp88.pdf](http://oucsace.cs.ohiou.edu/~razvan/courses/dl6900/papers/bourlard-kamp88.pdf)
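
If anyone wants to check that subspace claim for themselves, here's a rough
numpy sketch of a linear autoencoder trained with plain gradient descent -
the data and hyperparameters are invented, and scipy's subspace_angles does
the comparison:

    import numpy as np
    from scipy.linalg import subspace_angles

    rng = np.random.default_rng(0)
    n, d, k = 500, 5, 3

    # anisotropic Gaussian data so the top principal directions stand out
    X = rng.normal(size=(n, d)) * np.array([5.0, 4.0, 3.0, 0.5, 0.1])
    X -= X.mean(axis=0)                      # PCA assumes centered data

    # top-k principal directions via SVD of the centered data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = Vt[:k].T                           # (d, k)

    # linear autoencoder x -> xA -> xAB, trained with gradient descent
    A = rng.normal(scale=0.1, size=(d, k))   # encoder weights
    B = rng.normal(scale=0.1, size=(k, d))   # decoder weights
    lr = 1e-3
    for _ in range(10000):
        R = X - X @ A @ B                    # reconstruction residual
        A += lr * (X.T @ R @ B.T) / n        # descent step on squared error
        B += lr * (A.T @ X.T @ R) / n

    # principal angles near zero => encoder spans the PCA subspace
    print(np.degrees(subspace_angles(A, pcs)))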

------
cafebeen
An interesting post in need of a reference to past work:

[http://www.mitpressjournals.org/doi/abs/10.1162/jocn.1991.3.1.71](http://www.mitpressjournals.org/doi/abs/10.1162/jocn.1991.3.1.71)

Nearly 25 years old with 13k citations, so it's pretty well studied...!

~~~
dusenberrymw
Yeah that's a great paper, and I definitely used it to learn more while I was
writing this post up. If anyone else wants it, here's a direct link:
[http://www.cs.ucsb.edu/~mturk/Papers/mturk-CVPR91.pdf](http://www.cs.ucsb.edu/~mturk/Papers/mturk-CVPR91.pdf)

I really should add a section of resources that I found useful. Thanks!

------
ilzmastr
Classic homework problem, nicely done. Today they use Viola-Jones.

For those wondering why PCA works (self-plug):
[http://ilyakava.tumblr.com/post/95691347612/demystifying-pca](http://ilyakava.tumblr.com/post/95691347612/demystifying-pca)
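
Short version, for the impatient: the first principal component is the
direction that maximizes the variance of the projected data, and it falls out
of the eigendecomposition of the covariance matrix (equivalently, the SVD of
the centered data). A tiny numpy sanity check (the data here is made up):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 4))
    X -= X.mean(axis=0)

    # eigenvectors of the covariance matrix...
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues

    # ...match the right singular vectors of the centered data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    top_pc = Vt[0]
    print(np.allclose(np.abs(eigvecs[:, -1]), np.abs(top_pc)))  # True

    # and that direction beats random unit directions on projected variance
    dirs = rng.normal(size=(1000, 4))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    print((X @ dirs.T).var(axis=0).max() <= (X @ top_pc).var())  # True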

------
ArekDymalski
Very interesting. I'd like to see something like that for sound/music - how
audio evolves after several cycles of encoding and recovering.
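
One caveat if the codec is plain PCA: the encode/decode step is an orthogonal
projection, so everything after the first cycle is a no-op - you'd need a
nonlinear or lossy re-encoding for the audio to keep evolving. A rough sketch
on a synthetic signal (all parameters invented):

    import numpy as np
    from sklearn.decomposition import PCA

    # synthetic "audio": a sum of sines, chopped into 256-sample frames
    t = np.arange(2**16) / 44100.0
    signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
    frames = signal.reshape(-1, 256)

    pca = PCA(n_components=8).fit(frames)
    cycle1 = pca.inverse_transform(pca.transform(frames))
    cycle2 = pca.inverse_transform(pca.transform(cycle1))

    # projections are idempotent: the second pass changes nothing
    print(np.allclose(cycle1, cycle2))  # True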

------
ibebrett
Isn't this one of the homework assignments in the Stanford/Coursera ML
course? I feel like this is not really original content.

~~~
jamessb
It doesn't contain any new ideas, no - there are many other tutorials about
eigenfaces with example code, such as:

[http://jeremykun.com/2011/07/27/eigenfaces/](http://jeremykun.com/2011/07/27/eigenfaces/)

[http://nbviewer.ipython.org/github/rcquan/sklearn-practice/blob/master/pca_eigenfaces.ipynb](http://nbviewer.ipython.org/github/rcquan/sklearn-practice/blob/master/pca_eigenfaces.ipynb)

The Wikipedia article
([https://en.wikipedia.org/wiki/Eigenface](https://en.wikipedia.org/wiki/Eigenface))
also contains a MATLAB implementation.
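
For reference, the core of the technique fits in a few lines of numpy - a
minimal sketch along the lines of those tutorials (scikit-learn is used only
to fetch the face data; k=50 is arbitrary):

    import numpy as np
    from sklearn.datasets import fetch_olivetti_faces

    X = fetch_olivetti_faces().data          # (400, 4096): 64x64 faces
    mean_face = X.mean(axis=0)
    X_centered = X - mean_face

    # eigenfaces = top right singular vectors of the centered data matrix
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    k = 50
    eigenfaces = Vt[:k]                      # (k, 4096)

    # project a face onto the eigenface basis and reconstruct it
    weights = eigenfaces @ X_centered[0]     # k coefficients
    reconstruction = mean_face + weights @ eigenfaces

    err = np.linalg.norm(reconstruction - X[0]) / np.linalg.norm(X[0])
    print(f"relative reconstruction error with {k} eigenfaces: {err:.3f}")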

------
murbard2
Ghost-like faces? It's as if they are... spectral

(•_•) ( •_•)>⌐■-■ (⌐■_■)

~~~
dusenberrymw
Well played...

