
A Different Approach to Low-Rank Matrix Completion - highd
http://www.highdimensionality.com/2016/04/24/a-different-approach-to-low-rank-matrix-completion/
======
xyzzyz
When I did mathematics, I used to think that I worked in the very esoteric field of
secant varieties[1]. Funny how I now see it pop up everywhere. Anyway, here's
a geometric interpretation of the author's approach:

In the space of all matrices, the set of rank-one matrices is called a Segre
variety[2]. The Segre variety is cut out by the 2x2 minors of a generic matrix,
which is what the author's post is all about. What he's effectively trying to
do is, given a plane (the set of possible matrices having some given values),
find a point in the intersection of the Segre variety and that plane. The way
he does it is by intersecting that plane one by one with the large varieties
cut out by a single minor, and noting that all points in the intersection must
lie in a hyperplane not already containing the plane of possible matrices.
This reduces the dimension of the plane of possible matrices, since they must
lie in both the original plane and the constraint hyperplane. After enough
steps we're left with a single point. The clever thing here is finding a good
order of intersections, so that at each step we can actually find such a
hyperplane.
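A minimal sketch of the rank-one case (my own illustration, not the author's code): each vanishing 2x2 minor with three known entries is a linear constraint that pins down the fourth entry, so we can propagate known entries until the matrix is filled. Unknown entries are marked with NaN, an interface I'm assuming for the example:

```python
import numpy as np

def complete_rank1(M):
    """Fill unknown (NaN) entries of a rank-1 matrix via vanishing 2x2 minors.

    For a rank-1 matrix, every minor M[i,j]*M[k,l] - M[i,l]*M[k,j] vanishes.
    When three of the four entries are known, that equation is a hyperplane
    determining the fourth: M[k,l] = M[i,l] * M[k,j] / M[i,j].
    """
    M = M.astype(float).copy()
    rows, cols = M.shape
    changed = True
    while changed:
        changed = False
        for i in range(rows):
            for k in range(rows):
                for j in range(cols):
                    for l in range(cols):
                        # If M[k,l] is unknown but the other three minor
                        # entries are known (and the pivot is nonzero), solve.
                        if (np.isnan(M[k, l])
                                and not np.isnan(M[i, j]) and M[i, j] != 0
                                and not np.isnan(M[i, l])
                                and not np.isnan(M[k, j])):
                            M[k, l] = M[i, l] * M[k, j] / M[i, j]
                            changed = True
    return M
```

Whether this terminates with every entry filled depends on which entries were observed (the "good order of intersections" above); the sketch just propagates greedily until nothing changes.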

Now, rank-k matrices form the k-th secant variety of the Segre variety, which
is cut out by the (k+1) x (k+1) minors, so for their part 2 I expect them to
use larger minors.
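A quick numerical illustration of that determinantal fact (my own check, assuming a generic example matrix): a rank-2 matrix has some nonvanishing 2x2 minors, but all of its 3x3 minors vanish.

```python
import numpy as np
from itertools import combinations

# A generic rank-2 matrix: the sum of two rank-1 outer products.
B = (np.outer([1., 2., 3., 4.], [1., 0., 2., 1.])
     + np.outer([0., 1., 1., 2.], [3., 1., 0., 2.]))
assert np.linalg.matrix_rank(B) == 2

# Some 2x2 minor is nonzero (so B is not rank 1)...
assert abs(np.linalg.det(B[np.ix_([0, 1], [0, 1])])) > 1e-9

# ...but every 3x3 minor vanishes, as rank <= 2 requires.
for rows in combinations(range(4), 3):
    for cols in combinations(range(4), 3):
        assert abs(np.linalg.det(B[np.ix_(rows, cols)])) < 1e-9
```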

[1] -
[https://en.wikipedia.org/wiki/Secant_variety](https://en.wikipedia.org/wiki/Secant_variety)
[2] -
[https://en.wikipedia.org/wiki/Segre_embedding](https://en.wikipedia.org/wiki/Segre_embedding)

~~~
highd
Wow, that's a very elegant connection! I'll have to do some more research on
Secant varieties.

------
jdfr
Whoa! Quite similar to the "magic" behind compressed sensing (as noted by the
authors of the paper linked by the OP):

[https://en.wikipedia.org/wiki/Compressed_sensing](https://en.wikipedia.org/wiki/Compressed_sensing)

Nice to see that similar mathematical tools are being developed for similar
problems.

Edit: typo

------
stared
Can anyone comment:

- if the results of the two methods are the same?

- how about numerical stability? (tricky matrix algorithms often have poor
stability)

~~~
highd
Results should be the same as with the convex optimization technique on the
"well-defined" entries, though numerical stability is a real concern that I
haven't looked at yet. Empirically I haven't had any issues, but if entries of
the singular vectors get sufficiently close to zero you can definitely imagine
some problems. I don't think that's unique to this approach, either.

