Hacker News

I'm sorry, I still don't understand the practical applications of that paper. Does it mean we can now write a "smart" function that takes booleans as input, has low sensitivity, and gets results much faster than if we just iterated over the booleans?



While I generally think that looking for practical applications of research is nonsense, this one seems to be in an obviously useful area.

It's all about different ways of measuring how good a function is at scrambling its input data. I'm guessing that if you want to break (or make) a hash or cryptosystem, you would use these measures on various aspects of it to look for weaknesses or some such.

This particular proof seems to be saying that the measure called sensitivity will give you answers similar to those of a bunch of other measures.

On the one hand, that's disappointing (a measure that gave totally different results might illuminate whole new ways of attacking/strengthening your crypto). On the other hand, it is encouraging, because if a whole bunch of very different measures agree, then that's a sign that they are on to something real.
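To make "similar answers" concrete: Huang's result says a function's polynomial degree is at most the square of its sensitivity, deg(f) ≤ s(f)², so the two measures can't diverge by more than a polynomial factor. Here's a brute-force sketch in Python (my own illustration, not code from the paper) that computes both measures for small Boolean functions:

```python
from itertools import product, combinations

def sensitivity(f, n):
    """Max over inputs x of the number of single-bit flips that change f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(
            f(tuple(b ^ (i == j) for j, b in enumerate(x))) != f(x)
            for i in range(n)
        )
        best = max(best, flips)
    return best

def degree(f, n):
    """Degree of the unique multilinear real polynomial representing f,
    found via Moebius inversion: c_S = sum over T subset of S of
    (-1)^(|S|-|T|) * f(indicator of T)."""
    for size in range(n, -1, -1):
        for S in combinations(range(n), size):
            c = 0
            for k in range(size + 1):
                for T in combinations(S, k):
                    x = tuple(1 if i in T else 0 for i in range(n))
                    c += (-1) ** (size - k) * f(x)
            if c != 0:          # highest-degree monomial with nonzero coefficient
                return size
    return 0

# Example: OR on 3 bits has sensitivity 3 (at the all-zeros input)
# and degree 3, comfortably within deg(f) <= s(f)**2.
OR3 = lambda x: int(any(x))
print(sensitivity(OR3, 3), degree(OR3, 3))
```

Both routines are exponential in n, which is fine for toy functions; the point of the theorem is the relationship between the two numbers, not an efficient algorithm for computing them.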


This was a result that everyone expected to be true, but no one was able to prove: our mathematical tools weren't good enough. Now they are. It's like weightlifting: as your mathematics becomes “stronger”, it becomes capable of doing more things. The applications will come. See this Quanta magazine article and quotes in it from researchers about the problem: https://www.quantamagazine.org/mathematician-solves-computer...

> But no one could prove it. […] “People wrote long, complicated papers trying to make the tiniest progress,”[…] this power should yield new insights about complexity measures. “It adds to our toolkit for maybe trying to answer other questions in the analysis of Boolean functions,” […]

etc.


It's not my field, so there actually may be practical applications, but in general mathematicians don't really care directly about applications (unless they do). The result stands on its own merit (and may either help in applications later, or simply as a beautiful result, or may later be useful in the proof of some dependent theorem that does have applications).

But essentially a large part of why this is a big deal is that it was a "long"-unproven conjecture that smart people had looked at over the years without solving, one that was expected to require heavier machinery, and that all of a sudden got a really simple proof.


I’d like to be corrected if I’m wrong, but this seems to be about arranging data to be better processed in quantum computing, to retrieve parity information. I wonder if this has application to McEliece codes, or future communications.

I hope so.


no. math people just like to math.



