
Deconstruction with Discrete Embeddings - hamilyon2
https://r2rt.com/deconstruction-with-discrete-embeddings.html
======
cs702
Very nice!

I've been having similar thoughts for a while (and have also done some
non-public research along similar lines).

For those who are mystified or confused by the OP's title, the idea is to
train DNNs to map continuous objects to discrete embeddings, i.e., to map
each object to _a list of learned entities drawn from a learned
vocabulary_, with each entity representing a different "feature" or
"category."

Perhaps a better term to describe what the DNN is doing is _unembedding
dense/continuous embeddings_. Once the DNN is trained, we feed it embedded
objects (e.g., objects embedded in a grid of pixels) and it outputs a list
of features/categories that describe each object.
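
With the (hypothetical) layer sketched above, that "unembedding" is just a
forward pass plus an argmax per slot:

    bottleneck = DiscreteBottleneck(dim_in=128, num_slots=8, vocab_size=32)
    codes = bottleneck(torch.randn(4, 128))  # (4, 8, 32) one-hot slots
    tokens = codes.argmax(-1)                # (4, 8) integer "words"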

Obviously, those discrete DNN-learned features/categories necessarily have
less representational capacity than continuous (floating-point) ones, but
they enable all sorts of interesting applications that are otherwise
impractical or infeasible. The author barely hints at some of these
possible uses.
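
One example of the kind of use I mean (again my illustration, not from the
post): the discrete codes are hashable, so you can build exact-match
indexes over objects:

    # Group objects by their discrete code for exact-match lookup,
    # something continuous embeddings can't support without approximation.
    index = {}
    for obj_id, code in enumerate(tokens):
        index.setdefault(tuple(code.tolist()), []).append(obj_id)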

Highly recommended reading if deep representation learning is of interest to
you!

