
Sensitivity Conjecture Resolved - traderjane
https://www.scottaaronson.com/blog/?p=4229
======
svat
The actual conjecture that was proved has a very elementary formulation: if
you consider all n-bit words and take _any_ set of more than half of them
(i.e. at least 1+2^(n-1)), then this set will contain some word for which
there are at least √n other words in the set that differ in exactly one bit
position.

For example (with n=64), if you take any set of 1+2^63 (or more) 64-bit words,
then for some word in this set, there will be at least 8 other words in the
set that each differ from this word in only one of the 64 bit positions.

To illustrate, here's a brute-force verification in Python for the n=4 case,
that tries all (16 choose 9) = 11440 subsets:

    
    
    import itertools

    n = 4
    m = 2  # = sqrt(n) for n = 4

    def degree(word, s):
        """How many words in s differ from word in exactly one bit position."""
        return sum(1 for w in s if bin(word ^ w).count('1') == 1)

    words = range(2**n)  # all n-bit words
    for s in itertools.combinations(words, 1 + 2**(n-1)):
        assert any(degree(word, s) >= m for word in s)
    

Note that the test fails (for 48 out of the 11440 sets) with m=3 in place of
m=2, so the bound is tight in this case.
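The tightness claim can be checked the same way. This sketch counts, among all 11440 subsets, the ones whose maximum degree stays below 3 — i.e. the sets on which the m=3 version of the assertion would fail (the comment above reports 48 of them):

```python
import itertools

def max_degree(s):
    """Largest number of words in s at Hamming distance 1 from any single word in s."""
    return max(sum(1 for w in s if bin(v ^ w).count('1') == 1) for v in s)

n = 4
words = range(2**n)
# Count the size-(1 + 2^(n-1)) subsets whose maximum degree is below 3,
# i.e. the subsets on which the m=3 assertion would fail.
failures = sum(1 for s in itertools.combinations(words, 1 + 2**(n-1))
               if max_degree(s) < 3)
print(failures)  # the comment above reports 48
```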

------
edwintorok
The proof is fairly short too: "Within the preprint, the proof itself is about
a page and a half.

Whenever there’s an announcement like this, ~99% of the time either the proof
is wrong, or at any rate it’s way too complicated for outsiders to evaluate it
quickly. This is one of the remaining 1% of cases. I’m rather confident that
the proof is right. Why? Because I read and understood it. It took me about
half an hour. If you’re comfortable with concepts like induced subgraph and
eigenvalue, you can do the same."
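To get a feel for the linear-algebra core: the preprint builds a signed adjacency matrix A_n of the n-cube satisfying A_n² = n·I, so its eigenvalues are ±√n; Cauchy interlacing then forces any induced subgraph on 2^(n-1)+1 vertices to have a large top eigenvalue, hence large maximum degree. A pure-Python sketch of the recursive construction (as I understand it from the preprint) that checks the key identity:

```python
def signed_adjacency(n):
    """Signed adjacency matrix of the n-cube, built recursively:
    A_1 = [[0,1],[1,0]],  A_n = [[A_{n-1}, I], [I, -A_{n-1}]]."""
    a = [[0, 1], [1, 0]]
    for _ in range(n - 1):
        d = len(a)
        top = [row + [1 if i == j else 0 for j in range(d)]
               for i, row in enumerate(a)]
        bot = [[1 if i == j else 0 for j in range(d)] + [-x for x in row]
               for i, row in enumerate(a)]
        a = top + bot
    return a

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

n = 4
a = signed_adjacency(n)
sq = matmul(a, a)
# A_n^2 == n * I, so every eigenvalue of A_n is +sqrt(n) or -sqrt(n)
assert all(sq[i][j] == (n if i == j else 0)
           for i in range(2**n) for j in range(2**n))
# Each row has exactly n entries equal to +-1: same support as the
# ordinary hypercube adjacency matrix, only the signs differ
assert all(sorted(map(abs, row)).count(1) == n for row in a)
```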

------
espeed
Earlier today when I was looking for potential correspondences between
Interval Graph Theory [1] and Flow Graph Theory [2] (Reducible Flow Graphs), I
happened upon this 2017 blog post and paper [3] by Jacob Steinhardt: "Does
Pigeonhole Degrade Gracefully?"...

    
    
    The celebrated pigeonhole principle says...

    If we have disjoint sets S_1, S_2, ..., S_m, each a subset of [n]
    and each of size s, then m (the number of sets) is at most ⌊n/s⌋.

    Suppose that instead of being disjoint, we require that every
    pair of sets has intersection at most 1. How much can this
    change the maximum number of sets m? Is it still roughly n/s,
    or could the number be much larger?

    Interestingly, it turns out that there is a sharp phase
    transition at s = √n.
    

https://theorydish.blog/2017/05/30/does-pigeonhole-degrade-gracefully/
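The "much larger" side of that phase transition has a classical witness: the lines of the affine plane over F_q. With n = q² points and sets of size s = q = √n, you get m = q² + q sets that pairwise share at most one point, far more than the n/s = q that disjointness would allow. A small sketch for q = 5 (this is the standard construction, not code from the post):

```python
q = 5                    # a prime, so arithmetic mod q is a field
# Points of the affine plane AG(2, q): n = q^2 = 25 of them.
# Lines: y = a*x + b for each slope a and intercept b, plus verticals x = c.
lines = [frozenset((x, (a * x + b) % q) for x in range(q))
         for a in range(q) for b in range(q)]
lines += [frozenset((c, y) for y in range(q)) for c in range(q)]

assert len(lines) == q * q + q          # m = 30 sets ...
assert all(len(l) == q for l in lines)  # ... each of size s = q = sqrt(n)
# Any two distinct lines meet in at most one point:
assert all(len(l1 & l2) <= 1
           for i, l1 in enumerate(lines) for l2 in lines[i + 1:])
# Disjoint sets of size 5 in 25 points allow only n/s = 5 sets;
# relaxing to intersection <= 1 jumps the count to 30.
```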

[1] Interval (graph theory)
https://en.wikipedia.org/wiki/Interval_(graph_theory)

[2] Flow Graph Theory
http://infolab.stanford.edu/~ullman/dragon/w06/lectures/dfa3.pdf

[3] Does robustness imply tractability? A lower bound for planted clique in
the semi-random model (2018) - Jacob Steinhardt, Stanford
https://arxiv.org/pdf/1704.05120.pdf

------
traderjane
Link to preliminary publication (PDF):
http://www.mathcs.emory.edu/~hhuan30/papers/sensitivity_1.pdf

------
tildedave
I listened to _Right Ho, Jeeves!_ this year from Librivox
(https://librivox.org/right-ho-jeeves-by-p-g-wodehouse/) and I'd
second his recommendation for it. Sort of a perfect novel.

------
ggg2
Any practical applications outside of the complexity-field echo chamber?

~~~
graycat
Maybe not now. But there's a sense in which that should be expected. Here's
the situation:

There are some biggie problems out there having to do with combinations of
discrete things, optimization with such things, etc. For example, this
challenge led to the question of P versus NP. Bluntly, there is a big supply of
potentially tasty nuts out there that we are unable to crack or poor at cracking.

If we want to crack those nuts, e.g., have something "useful", then likely we
have to back off from a direct attack via some one bright idea, technique,
paper, and chunk of code. Instead, we have to, say, build a _foundation_,
circle around the problem and look for entering wedges, construct some tools,
look for related, seemingly promising smaller results that in time we
could build on to make progress to the real goal of being good at cracking
those tasty nuts.

So, right, maybe we are not sure yet that some result will be useful -- that's
sad but just part of the challenge of cracking those nuts. They are tasty but
tough nuts to crack. Moreover, such results are about the best we see; we
should take, and take seriously, the best we do see.

So, that's part of how research works, for much of the best we know how to do,
in pure and applied math. Yes, I know; I know; they don't always make this
aspect of research fully clear in freshman calculus!!!

~~~
sn41
I agree. It is wrong to wag fingers and dismiss this as impractical. Lower-bound
theory is about adversaries against _all possible algorithms_, not just the
analysis of a given algorithm. So progress is expected to be slow.

George Pólya, in his book "How to Solve It", encourages us to ask: "What is
the easiest problem that you cannot solve yet?" Most theoretical results tend
to answer this question, and that principle is a good guideline for theoretical
research.

