
‘Outsiders’ Crack 50-Year-Old Math Problem (2015) - 18nleung
https://www.quantamagazine.org/computer-scientists-solve-kadison-singer-problem-20151124/
======
chatmasta
Spielman was my algorithms Prof in college and legitimately one of the
smartest guys in terms of raw intelligence I’ve ever met. He was a great
teacher, able to explain the most complicated of problems in language that I
could intuitively understand and get excited about. Before that class I had a
lot of problems with the math sides of CS (still definitely do), but he really
made the subject approachable for me. I started the class getting C’s, but
after visiting his office hours nearly weekly, by the end of the semester I
was getting 100s on my problem sets.

That class, and the way he taught it, convinced me that advanced math, whilst
hard for me, is not beyond my grasp, or really anyone’s. It’s just a matter of
breaking problems down into manageable chunks and recognizable sub-problems, a
skill at which Spielman is especially adept.

I knew he was smart then (he had just received a $500k MacArthur “genius”
grant), but reading this article really put into perspective how effective he
is and how lucky I was to have him as a professor. The same knack for breaking
down problems that helped me succeed in his class seems to be what enabled him
to solve this problem.

~~~
anitil
I remember watching a (maybe MIT?) YouTube lecture where someone described
talking to Claude Shannon the same way. You'd talk about some idea and he'd
simplify and throw things away, until you were left with a kernel of an idea
so simple it seemed obvious.

------
ddinh
Quick note that the article is from 2015, and the MSS result dates back to
2013.

There's a blog post by Nikhil here that explains the proof of the discrepancy
result that implies Kadison-Singer:
https://windowsontheory.org/2013/07/11/discrepancy-graphs-and-the-kadison-singer-conjecture-2/

The main result is a theorem that replaces a Chernoff bound (which says that a
random partition has discrepancy logarithmic in the number of vectors, with
high probability) with a bound on the discrepancy that is independent of the
number of vectors, at the cost of replacing "with high probability" with
"there exists".
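For concreteness, here is the shape of the theorem being described, stated from memory (so consult the paper for the exact form and constant):

```latex
% Marcus–Spielman–Srivastava (2013), roughly: given vectors
% v_1, \dots, v_m \in \mathbb{C}^d with
\sum_{i=1}^{m} v_i v_i^{*} = I
\qquad\text{and}\qquad
\|v_i\|^2 \le \delta \quad \text{for all } i,
% there EXISTS a partition \{1,\dots,m\} = S_1 \cup S_2 such that
\Bigl\| \sum_{i \in S_j} v_i v_i^{*} \Bigr\|
\le \frac{\bigl(1 + \sqrt{2\delta}\bigr)^{2}}{2},
\qquad j = 1, 2.
```

Note that the bound depends only on delta (the largest squared norm of a single vector), not on m, which is exactly the "independent of the number of vectors" part.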

The proof uses some really beautiful techniques from the geometry of
polynomial roots and is a pretty fun read:
https://arxiv.org/abs/1306.3969

------
ericand
"He guessed that the problem might take him a few weeks. Instead, it took him
five years"

Perhaps the key to their success was typical programmer work estimation ;)

More seriously, I think that if you believe something is easy for you, or that
you are almost there, you may be more willing to put in the necessary time
than if you knew from the outset that it would take five years.

"Spielman realized that he himself might be in the perfect position to solve
it. “It seemed so natural, so central to the kinds of things I think about,”
he said. “I thought, ‘I’ve got to be able to prove that.’”"

~~~
garmaine
Related but different advice: build the habit of learning things by trying to
figure it out yourself. Richard Feynman did this when learning quantum
mechanics, and invented the Feynman diagram as a result. While not a different
model or theory per se, the diagram made apparent the connection to functional
integrals and suggested a different method of solution that made previously
intractable problems (relatively) simple.

Funny story: he “invented” this technique in his school days and only found
out that it wasn’t standard during the Manhattan Project, when a colleague
complained that a problem wasn’t tractable and Feynman went to the blackboard
to illustrate the “obvious” solution.

Sometimes optimal solutions aren’t found because nobody bothered to look,
assuming, if they think about it at all, that any remaining work must be more
complex than the already known suboptimal solution.

~~~
JadeNB
> Related but different advice: build the habit of learning things by trying
> to figure it out yourself.

This can be a wonderful habit, or it can create cranks. As you mention,
Feynman used it to great success, but he was Feynman. I had a few classmates
in graduate school—I nearly was one, until my advisor set me straight—who
couldn't make any progress because they couldn't take any results for granted;
they wouldn't do _anything_ until they could prove _everything_. This is
intellectually admirable, but a sure recipe for stagnation (at least for me
and these classmates).

~~~
zentiggr
I think we can generalize the last two comments to: when you look at a
problem, learn about it and its structure until you think you can resolve part
of it. Keep repeating that until you get there. Assume that what everyone
knows is good until your next steps push you to reexamine assumptions, and
don't be reluctant to do so.

A happy middle ground between 'must build the stack myself' and 'too orthodox
to ever question the masters'.

~~~
JadeNB
> Assume that what everyone knows is good until your next steps push you to
> reexamine assumptions, and don't be reluctant to do so.

This is a fantastic summary. I like it a lot, and may steal it for future use.

------
bllguo
Great story, and it reminds me of a thought I've had. There's just so much
knowledge out there - and growing - that researchers have to be increasingly
specialized. Isn't it inevitable that 4 years of undergrad + 1-2 years of grad
school classes won't prepare you sufficiently to do novel research? Imagine
when people have to be in school for something like 10 years before they know
enough to do "real" work! And people will necessarily have to become more and
more specialized, making situations like the one in the article even rarer.

Just a thought.

~~~
Invictus0
This is a real phenomenon by the name of credential creep:

https://en.wikipedia.org/wiki/Credentialism_and_educational_inflation#Credential_creep

~~~
AnimalMuppet
It has very little to do with credential creep. The problem is that, in order
to do work at the frontier of a field, you first have to _get_ to the
frontier: you have to learn what's going on there, which means learning
everything required to _actually understand_ it. That learning is taking
longer and longer. That's not credential creep; that's creep in how far the
frontier is from the foundations.

~~~
jfoutz
That sounds like an abstraction problem. I’m totally willing to accept that
I’m crazy, but my gut feeling is that there is a lot of technical debt in how
fields are represented, which makes it hard to move to the frontier.

~~~
ggggtez
In this case, you're probably just wrong. Consider for half a second how many
programmers fail FizzBuzz, and then think about how much work it takes to get
from that to the frontier of computational science (i.e. not just installing
another JavaScript library, but actual computation).

There's a lot of low hanging fruit out there, but you still have to actually
know how to find it.
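For anyone unfamiliar, the FizzBuzz screening exercise alluded to above is small enough to state in full. A minimal sketch in Python (the thread names no language, so this is just an illustration):

```python
def fizzbuzz(n: int) -> str:
    """Classic screening exercise: multiples of 3 print 'Fizz',
    multiples of 5 print 'Buzz', multiples of both print 'FizzBuzz',
    everything else prints the number itself."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Print the first 15 values on one line.
print(" ".join(fizzbuzz(i) for i in range(1, 16)))
# → 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```

The point of the exercise is not the problem itself but that it filters for the ability to translate a trivial spec into working code at all.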

~~~
jfoutz
I want to point out that Maxwell originally had some 20 equations for
electromagnetism that wound up being 4. I suspect that many fields have rules,
but those rules aren't the most concise representation.

It will always be hard to get to the frontier, but the path there should be
easy. We can get years and years of explorers by building good roads.
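For reference, the four equations in question, in the modern vector-calculus form (due largely to Heaviside, in SI units):

```latex
% Maxwell's equations, modern differential form:
\nabla \cdot \mathbf{E} = \rho / \varepsilon_0
\qquad
\nabla \cdot \mathbf{B} = 0
\qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
\qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

The compression came not from new physics but from better notation: vector calculus absorbed the componentwise bookkeeping of the original formulation.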

