Surprise computer science proof in combinatorics (quantamagazine.org)
267 points by theafh 11 months ago | 146 comments



I feel like Quanta just has Terry Tao on speed dial at this point, and that he loves to answer the call. They always ask him about this stuff for their math articles and he always gives great answers.

> That Kelley and Meka managed to spot the strength of once-overlooked ideas shows the often fitful nature of mathematical progress — a quality that to Tao is more of a blessing than a curse. “It’s not always the case that math just gets harder and harder and harder,” he said. “Thank God.”


Hey nothing wrong with that! Huge fan of Terry Tao. Something magical about how their brains work.


It definitely wasn't a complaint, haha.



Plus much shorter write-up from Bloom and Sisask: https://arxiv.org/abs/2302.07211


I think you mean this one: https://arxiv.org/abs/2302.07211


automatic conversion to web page available at https://ar5iv.labs.arxiv.org/html/2302.07211

(more info about the conversions at https://ar5iv.labs.arxiv.org)


TIL. Thank you for this.

https://ar5iv.labs.arxiv.org/



Seems to only be working for papers in TeX?


Yes, thank you!


The first half page finally made me understand the problem. I didn’t get it from the article.


The statement in the article:

> a limit on the size of a set of integers in which no three of them are evenly spaced

This misses a key detail. You can trivially find arbitrarily large such sets, e.g. take the first however many powers of 2: 1, 2, 4, 8, 16, ...

The missing constraint is that the set of integers must be a subset of { 1, 2, ... , N }.
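To make that concrete, here's a tiny Python check (my own illustration, not from the article or the paper) for three-term arithmetic progressions; it confirms that powers of 2 never contain one, which is why restricting to a subset of {1, ..., N} is the whole point.

    def has_3ap(s):
        """Return True if s contains a, a+b, a+2b for some b > 0,
        i.e. three distinct, evenly spaced elements."""
        s = set(s)
        return any((x + z) % 2 == 0 and (x + z) // 2 in s
                   for x in s for z in s if x < z)

    print(has_3ap({2**k for k in range(20)}))  # False: powers of 2 are AP-free
    print(has_3ap({1, 2, 3}))                  # True: 1, 2, 3 is a 3-term AP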


I thought it was pretty clear from this:

> Erdős and Turán wanted to know how many numbers smaller than some ceiling N can be put into a set without creating any three-term arithmetic progressions.


[flagged]


True, but we wouldn't be discussing this without the journalism in the first place


Just state the result. 15-year-olds can understand big-O notation and arithmetic progressions. Instead we get a long article after which nobody understands what is actually going on.


What nonsense. Would you have even heard of this paper if not for Quanta?

Also, no mathematician or computer scientist I know has anything negative to say about Quanta.


I do not really understand the unconditional love for Quanta. Sure, it's better that it exists than not, but I find the articles vague and mostly about people/institution name-dropping. "So-and-so from the prestigious MIT said <this is a tremendous result!>". Cool, I guess?

Take this one:

- No clear explanation of the problem to solve. Could have given an example or something to hammer out what an arithmetic progression of 3 numbers is.

- No detail about the actual form of the previous bound and the new one.

- Not much detail about the actual technique. I get that it becomes very technical very quickly. But that is the actual job of a science/math journalist, to distill this. "They used a well-known technique of increasing density, etc." If it is well known, why not try to describe how it works?

I wonder what someone like 3blue1brown would make of this.


> I wonder what someone like 3blue1brown would make of this.

Although I am not Grant Sanderson (3blue1brown creator), I would wager very good money that he would strongly approve.

I am a research mathematician, I do read Quanta, and I've been interviewed for them also. Overall my impression is that they do a good job of making at least something of contemporary research mathematics accessible to the general public. Most people have very little concept of what we do.

It is a notoriously hard task, it is a vitally important one, and it is one that too few people are attempting. Quanta does the best job of it of any publication I know, and for that I am very grateful.


I had a bit of trouble grappling with this.

What is the largest AP-3-free set for, say, the integers from 1 to 100? What is the largest N where we've computed it explicitly / how expensive is that to do?


According to the OEIS [0], the set has size 27; you can find its terms in the Dybizbański reference [1]. Presumably, it's been computed up to N = 211, where the B-file ends.

[0] https://oeis.org/A003002

[1] https://doi.org/10.37236/2061
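For small N you can reproduce these values yourself by backtracking. A rough sketch (mine, not the much smarter search behind the OEIS records):

    def largest_ap3_free(n):
        """Size of the largest subset of {1, ..., n} with no 3-term arithmetic
        progression (OEIS A003002), by plain backtracking; only viable for small n."""
        best = 0

        def extend(start, chosen):
            nonlocal best
            best = max(best, len(chosen))
            # Prune: even taking every remaining candidate cannot beat the best so far.
            if len(chosen) + (n - start + 1) <= best:
                return
            s = set(chosen)
            for y in range(start, n + 1):
                # y may be added only if it completes no progression (2a - y, a, y)
                # whose two smaller terms are already in the set.
                if all(2 * a - y not in s for a in chosen):
                    extend(y + 1, chosen + [y])

        extend(1, [])
        return best

If I'm reading the OEIS table correctly, largest_ap3_free(14) should return 8 (achieved by {1, 2, 4, 5, 10, 11, 13, 14}), but the brute force blows up long before N = 100.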


Raghu and I had desks next to each other when we were students - we all knew he was brilliant!


Yes, I saw the article in Quanta.

Issue here: Click-bait headlines. Uh, I go to the Quanta Web site ~once a day. I respect Quanta enough not to get torqued at any of their headlines. Also there I pay attention to what fields of content (usually science) and not much to the headlines. For the fields, I tend to pass over medical, social, psychological, and biological sciences and stay with math, physics, cosmology, and engineering, and it is easy enough to guess the field of the content without getting torqued at the headlines. Soooo, net, at Quanta I don't object to their headlines.

For the article under discussion, yup, it is in the field of "combinatorics". I bumped into that field while scheduling the fleet at FedEx, in grad school, and in some business problems since. Result: Combinatorics is one heck of a challenge, both in the theory of the pure math (and the field can quickly become pure math, e.g., number theory) and also in the applications in business. In practice we squeak by with a little in theory and a lot in intuitive heuristics and relatively blind enumeration (that exploits powerful computing).

So, it would be nice, maybe a biggie in various respects, to have some major progress in combinatorics.

So, in such an attack, sure, it would be nice for someone some afternoon to have some big insight, bigger than heap sort instead of bubble sort, the fast Fourier transform instead of the direct approach, of error correcting codes instead of just three copies, etc. So, here is an invitation: Get a sharp, soft pencil, a big, soft eraser, a pad of paper, lean back, put feet up, and have some of the needed big ideas!!! All are invited and welcome!!!

After some decades, we are still looking for that big insight!! So, what now? How to modify the attack?

One way is to chip away at the problem wherever we can see we can get something new and apparently relevant, connected, or just anywhere in combinatorics -- or just follow the standard good research criteria of "new, correct, and significant" but at least in sight of the ballpark of combinatorics.

So, with that approach, the results in the current Quanta article are an example of the desired and welcome progress.


> Though both Bloom and Sisask had other pressing matters to attend to at the time they received the email — Bloom had just adopted a puppy, while Sisask was in the middle of moving — they quickly set to work verifying the new paper.


I'm a practical person. What's the point of spending "time" (brain cycles) on such problems? Is it just a mathematician's version of brain teaser?

What useful development comes out of such?


Neural networks were first prototyped in the 1950s [1]. How practical was it to study them back then?

Aleksandr Lyapunov's theory of stability for dynamical systems, which forms the bedrock for much of control theory, was first proposed in 1892 and went unnoticed until the 1930's [2]. How practical was it for Lyapunov to study that topic?

It's virtually impossible to predict which mathematical tools will be "practical" or "useful" in the future. The point is that if someone needs the tool down the road, they'll have access to the solution due to the efforts of these mathematicians.

[1]: https://en.wikipedia.org/wiki/Neural_network#History

[2]: https://en.wikipedia.org/wiki/Lyapunov_stability#History


Many other fields too.

Mendel's discoveries weren't appreciated until after he died. When Narinder Singh Kapany invented fiber optic cable in the 50s he had no idea it would lead to revolutions in computer networking, gastroenterology, and other fields.


And the entire works of Church and Turing…


Or the guy who just wanted to extend the works of Aristotle, and created binary...


Leibniz did that, if I remember correctly, although the Stoics deserve the credit for propositional logic (which can be reduced to a collection of truth tables). And contrary to popular belief, Boole actually extended Aristotle's syllogisms (reducing the conventional syllogisms to computation using matrices of 1s and 0s) rather than creating today's binary logic.


> It's virtually impossible to predict which mathematical tools will be "practical" or "useful" in the future

On a long enough timeline, you're right! But if we limit ourselves to, say, the next 5,000 years, it gets a lot easier to accomplish. ;)


How does limiting the timeline to 5,000 years enhance our ability to predict the future?


The more you limit your timeline, the easier it is to predict the future. That's why we only have 10-day weather forecasts.


Good point, maybe a better way of phrasing it would have been "It's virtually impossible to predict which tools will _not_ be useful" :)

Fast Fourier transforms, complex numbers, complexity classes, etc. are clearly useful. It's much harder to rigorously claim that a result will _never_ be practical or useful.

Just to be clear, I think it's fair to say that a result isn't _immediately_ useful. But who can say whether or not it will have a more practical application down the road? Only time will tell.


There is a huge chance these advancements could be invented, maybe in a more efficient form, when the need and the time come.


Proofs (both results and techniques) are a mathematician's ammo. It is not good strategy to wait for a war to start before building ammo.


And then you open the ammo storage when you need it and see that 99.9% of it is irrelevant garbage.

Math is like code: technical debt makes moving forward much harder.


Mathematics continually surprises us when seemingly disparate areas suddenly become linked, sharing some sort of relationship that was not appreciated until much later.

Some think mathematics is something like a physical law, or property of the universe. So figuring out such things may be like fundamental physics research. Hard to quantify the value, but there is value. Seemingly unrelated developments in mathematics have had a habit of helping to solve problems in physics in the past, at least. I don't think anyone really expected the exciting new mathematics of statistics in the 18th century, to one day be relevant to physics. But it would be.


> Some think mathematics is something like a physical law, or property of the universe

my kid asked me if math was invented or discovered. i said "invented", i hope i'm right considering he views me as some all knowing oracle of information. just a little bit of pressure there.


Computers were invented, but you could also say that we discovered how universal computation works. It’s not like there’s some fundamentally different form of computation that we could have invented instead. Even more so in math, the truths in math are discovered, not invented.


The mathematical realm is a purely human construct.

There's no such thing as "one" in nature: there's not "one" apple. An apple is a monstrously complex collection of cells and dynamic chemical processes, it's not "one".

The concept of numbers is a tool that humans use to categorize, summarize and model the world around us.

The "truths" we discover in math are only true in the mathematical realm, which is an inexact model of the physical realm. Useful, but undeniably a human construct.

So yes, math is invented.

Though we constantly discover new ways to use it in the physical world.


Just as one (no pun intended) example, the fact that there are exactly five platonic solids in three-dimensional space is not something that is invented. It is discovered. And facts like that govern what is possible in the physical world, for example what kind of crystalline atomic structures can form. There is nothing “invented” about that.

The fact that the physical space we live in has three dimensions also isn’t invented. These numbers, three and five here, are not a human invention. They correspond to facts we discovered, and they represent truths that are independent of human existence or inventions.

Maths is exactly about that, discovering the fundamental truths that, based on logic, we find couldn’t have been any other way.


> The fact that the physical space we live in has three dimensions also isn’t invented

The concept of "three orthogonal dimensions" is a pretty good model of the physical reality that is space.


It's a bit of a difficult concept to wrap our head around because we've been taught to associate "number symbols" and names with "things" since we were toddlers, but they absolutely aren't the same thing.

There's no such thing as "three", other than a categorization that we, as humans, apply to model the physical phenomenon.

We've developed language and symbols over time to more precisely model the things we observe, but it's important to remember that our symbols are different from the things they are invented to represent.

It's really hard to separate this abstract concept from our observed reality when they are so tightly coupled - the idea of "three dimensional space" is a model we've constructed to categorize and represent, in a convenient way, observed phenomenon that is too difficult to talk about in other terms.

We didn't "discover" that there are three dimensions, for example. We developed language and a mental framework for categorizing space in such a way that we could more easily reason about it and communicate our reasoning effectively with others.

We didn't "discover" Ohm's law. We created a set of units, names and concepts that would allow us to model the relationship in an analytic way, between other concepts such as current, voltage and resistance. Ohm's law isn't not a fundamental law of nature. Above all else, it's a series of struggles with language to describe things we observe. "Ohm" as a unit of resistance, "Voltage", "Amperage" - all of this was invented in order to be able to reason about things we were seeing, and it seems to be a good mental model and tool for getting things done, but it is very different than the actual reality.

We didn't "discover" the number three - we created the concept of numbers and refined their meaning over time. For a very long time, in most civilizations there wasn't a concept of "zero" in number symbols. We needed a way to communicate that concept though, so invented a symbol to represent it.

Number systems and the symbols that represent them have varied wildly throughout human history, and there appears to be a neurological attachment in the human brain that associates quantity with physicality (specifically fingers) [1], which makes grokking this concept especially difficult. When we became sophisticated enough that using our fingers wasn't sufficient to reason and communicate, we started using bone tally marks as a quantity representation - but all of these things, fingers, tally marks, glyphs, are just symbols that humans have created over time to imperfectly model physical phenomena.

It's no different than any other sort of naming. We look at differences between birds, and attach a symbol of "hawk" to one, and "eagle" to another. There's no such thing as "hawk" or "eagle" in nature, these are human invented symbols that help us categorize and communicate in a short-hand way, differences between species we observe. Numbers are the same, but we've also invented a bunch of rules for manipulating those symbols to communicate more advanced concepts: addition, subtraction, etc...

Now it's true, of course, that we discover new things all the time. Frequently we need to invent new ways of describing them. We discovered, by observing the stars, that planets follow elliptical orbits. We couldn't describe that well using the tools we had, so we invented calculus.

[1] https://en.wikipedia.org/wiki/History_of_ancient_numeral_sys...


Math fundamentally isn’t about the “names”, it’s about the “things”. We are doing it because we want to know about the “things”. The “names” are only incidental. Saying that math is invented and not discovered gives the impression that it is building fantasy castles in the sky, whereas it really is about gaining knowledge about absolute and objective truths, truths that exist independent from any human inventiveness or creativity, or from the choice of “names”. The “names” are merely a vehicle, a tool for that purpose.

To take an example from computer science, we discovered that comparison sorting algorithms have a time complexity of Ω(n log n). This discovery is independent from the notation we use to express it, and independent from the programming languages we use and independent of the choice of sorting algorithm. It is a fundamental fact of the matter, and there is nothing invented about it. Any alien intelligence would end up with exactly the same insight about sorting.
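For readers who haven't seen it, the standard argument behind that bound (my addition, not part of the original comment) is a counting one: a comparison sort must distinguish all n! input orderings, and a binary decision tree of height h has at most 2^h leaves, so

    2^h \ge n! \;\Longrightarrow\; h \ge \log_2(n!) \ge \tfrac{n}{2}\log_2\tfrac{n}{2} = \Omega(n \log n).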

Or to give a more hands-on example: If you have two, three, or four balls of the same size, you can arrange them such that each ball touches all of the other balls (a tetrahedron shape in the case of four balls). But you can’t do that with five or more balls. Convincing yourself that that’s true is an essential example of doing math. And it is true regardless of what “names” you use. You can also use apples and oranges if they are approximately spherical and of the same size (or, alternatively, “look the same from all angles”).

For these reasons, when chasd00 tells their kid that math is invented, not discovered, they are misrepresenting what math is really about. The above example with balls could be one way to illustrate to a kid how math isn’t invented. (I’m sure there are much better examples, this is just from the top of my head.)


> algorithms have a time complexity of Ω(n log n)

If you don't understand how this is a human construct for human cognition and information processing, I don't have a way to explain it to you. I would recommend reading more about number theory and philosophy.

Reading about number theory helped me understand these concepts, and I recall having a similar resistance to the ideas at the time. Specifically set theory and "zero" as a proxy for the empty set, and "one" being the set that contains the empty set, etc...

Math is an imperfect mechanism for representing the physical realm, invented by humans for that purpose. It has truths that are only true in the mathematical realm and have no bearing or representation in the physical realm.

As an example, I suspect you'd have a difficult time pointing to a physical "real" example of the square root of negative one. It's called the imaginary plane for a reason.

> gives the impression that it is building fantasy castles in the sky

That's exactly what it is, as most mathematicians will tell you.

That doesn't imply however, that it's not useful.

Imaginary numbers (to continue the example) are quite handy for dealing with signal processing, among other things. Which doesn't change the fact that they are a human invention - one that stumped societies for hundreds, if not thousands of years (the Greeks struggled mightily with square roots for this reason, for them math and philosophy were not separate things). We only really got past it by inventing the symbol i and tossing all the gnarly stuff in it like a waste bin, because most of the time for practical applications it's factored out.

> absolute and objective truths

These are most appropriately the realm of religion, not of physical sciences.


>truths that exist independent from any human inventiveness or creativity

This is not a fact and is the oldest debate in mathematics. All of it is just as likely (if not more) to be the way the brain models things than some objective truth of “reality.”


> which is an inexact model of the physical realm

Mathematics has nothing to do with the real world.

The set of all true statements provable with a set of axioms is determined only by the axioms. All the true statements are already true whether we know they are or not; in that sense math is like exploring the unknown land that exists outside of the physical realm. It's discovered.


I agree with you, except that math does have a relationship with the physical world.

Math was invented and refined over time in order to find ways to model, communicate and reason about our observed physical phenomenon.

I understand though, that there is a ton of subject matter within the field that is completely divorced from the "real" world.


While counting apples might be a useful example for teaching children, they're hardly the only thing there is to count. The putative abstraction isn't even in the number "one", the abstraction you're complaining about is in the chosen category of "apple". In other words, the fact that something can be comprised of many things does not change how many of that thing there are, nor does it reveal some deep unnaturalness of numbers. It at best casts doubt on our delimitation of objects.


Neat point, but I do think the abstraction of "one" fails us regularly.

Even in the supposedly simple idea of counting apples.

To continue to (ab)use the apple example, tell me how many apples this is [1].

Sure, there's a problem with the "apple" category/symbol (to your point) but similar exceptions and confounding examples are abundant throughout messy physical reality.

So I think my point stands, "one" is an imperfect model of reality. Useful, but inexact.

Given that "one" may be the most basic fundamental axiom of mathematics, as more abstractions are built upon it the accuracy with which those concepts reflect the physical world declines.

Hence the invention of fractions, etc...

[1] https://www.dailymail.co.uk/news/article-2433619/Siamese-app...


> There's no such thing as "one" in nature: there's not "one" apple. An apple is a monstrously complex collection of cells and dynamic chemical processes, it's not "one".

> The concept of numbers are a tool that humans use to categorize, summarize and model the world around us.

By the same argument, aren't apples invented too?


That is a good question for a kid.

Ask him back, if it's discovered, where was it "stored" all this time.

If it's invented, does it mean that other potential sentient species will have a different math than ours ?


That's a really insightful question for a kid to ask! Not sure I agree with your answer though, for me that's the kind if thing it would be hard to prove either way (ironically).


Given the fact the rules exist whether we know about it or not, I'm team discovered


Definitions (and axioms) are invented. Theorems are then discovered.


You missed a great opportunity to answer a simple "yes". ;)


not to nitpick, but there's a ton of stuff in mathematics that is just defined out of convenience or convention; it's not natural law. Rather, a better definition would be that mathematics is a description of reality, rather than law itself.


Isn't it the same? Laws of physics describe how the physical world works, similarly to "laws of math" that describe how the mathematical world works. In both cases we tend to define our laws in a way that's easy for humans to use and understand.


I've spent a lot of time and brain cycles over the last decade writing software, entire products actually, that now sit as motionless bits on a disk somewhere, unused and useless.

In the process of creating them I had a lot of fun, honed my existing skills and picked up new ones, broke out of various boxes I had been stuffed into in terms of how to architect systems, and just generally worked out my creative muscles.

I've also written a lot of software that is useful and used, and in a practical sense makes (lots of) money.

It's meaningless to break out the time and cycles spent on one Vs the other. The necessary skills required were enabled by one another. Moreover it would have been to a large extent impossible to determine a priori which would be which.

Am I an impractical person because I've spent time and cycles on the former?


Yep, 'fun training' is one of those things where you never know if it will pay off, but because you know the information now, there is a much higher chance it is going to pay off for you.


A hash table's double hashing [1] creates an arithmetic progression. Improving the bounds of this theorem could lead to faster, smaller hash tables. This is the first wild speculation I could come up with.

[1] https://en.wikipedia.org/wiki/Double_hashing
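For anyone who hasn't seen it, here's a minimal sketch of that probe sequence (my own toy example; the hash functions below are made up purely for illustration):

    def probe_sequence(key, table_size, h1, h2, limit=None):
        """Double hashing: the i-th probe is (h1(key) + i * h2(key)) mod table_size,
        so the probe indices form an arithmetic progression mod table_size."""
        start, step = h1(key) % table_size, h2(key) % table_size
        for i in range(limit or table_size):
            yield (start + i * step) % table_size

    # Toy example: with a prime table size and a nonzero step, every slot is visited once.
    print(list(probe_sequence(42, 11, h1=lambda k: k, h2=lambda k: 1 + k % 5)))
    # -> [9, 1, 4, 7, 10, 2, 5, 8, 0, 3, 6]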


> What's the point of spending "time" (brain cycles) on such problems?

What's the point of anything?

> Is it just a mathematician's version of brain teaser?

All of mathematics can be characterised as "brain teasers". Make of that what you will.

> What useful development comes out of such?

I'm not qualified for this particular example. However, it seems quite number-theoretic, and number-theory has given us such "practical" things as asymmetric encryption and universal computation (Turing's famous paper from 1936 was titled "On Computable Numbers", after all).


> What's the point of anything?

Not that I disagree with you, but some things have an impact on people's quality of life: nutrition, healthcare, and psychological well-being.


> ... but some things have an impact on people's quality of life: nutrition, healthcare, and psychological well-being.

I would perhaps classify that as "immediate" impact, not "lack of impact".

Number theory has driven computer science, calculus, statistics, and so on.

CS, calculus, stats... they all drive (for example) "nutrition". For example, how do you know what a "vitamin" is? How do you measure it? How do you quantify its effects on the body (and mind)? A whackload of analysis DEPENDS on number theory for us to understand almost everything about the natural world!


It's a big world, not everyone needs to be a farmer/doctor/psychologist so some people who have other skills do other things. Sometimes those other things help. A lot of time they don't. For better or worse we live in a society.


I'd say there's a strong case that this has a positive impact on the psychological well-being of anyone who's interested in number theory.

Perhaps not that many people in the world understand the formulation of the problem, but then not many people "get" abstract art, either.


Psychological well being is definitely improved if people could work on something they like and get praised/rewarded for it. And I believe that is way people could get interested in something. Best engineers I know of, had got into their field because they are driven by solving problem just for the sake of it.


So? What is the point of living?

:-)


Farming provides food. Predicting the probability of water spontaneously combusting, while interesting, provides much less food.


You know what does though? Irrigation. You know what allows you to optimally irrigate so you don't waste money that you could be spending either on growing more crops or your family? Maths.

Only a fool pretends the ground floor of a house is worthless because they live on the second floor.


I'm not saying math is useless. I'm saying there must be tiers for topics in how useful they are, such that someone can say "Hey, that's pointless!" without one being able to follow with "Therefore all things are pointless" - or that at least allow me to rank niche topics by their usefulness.

Irrigation is very important. Now, is irrigation maths more important to us as a species than Terence Tao's work on spontaneously combusting water? Probably! Would it be more useful to have PhDs be funded in optimizing food routes between countries after climate change over game theory applications for blackjack? Probably!


Unfortunately, no. Maths is so vast that what might seem a trivial and silly brain teaser can turn out to unlock a massive problem in a seemingly completely unrelated subfield of mathematics, and we won't know until someone discovers that link.

What someone might call "a silly little brain teaser" today could actually result in a breakthrough paper weeks (or centuries) from now in a different subfield because someone far smarter than us realized that part of the problem they were working was actually analogous to a number theory related problem that was simplified, even a tiny bit, by this solution. (Hell, Nash built his entire career on spotting those kind of links and then telling other mathematicians to focus on working out the individual pieces)

Maths plays out over "we don't even know how long or short" time scales. What's the use? We don't know, it's probably completely useless. Until someone suddenly realizes that it's not.


>Now, is irrigation maths more important to us as a species than Terence Tao's work on spontaneously combusting water? Probably! Would it be more useful to have PhDs be funded in optimizing food routes between countries after climate change over game theory applications for blackjack? Probably!

I imagine you would be equally annoyed at Euler in 1736 when he was wasting his time with bridge brain teasers (and invented graph theory in the process) instead of solving bubonic plague or optimizing irrigation. Science just doesn't (in general) work the way you propose.


And the majority of science sits in empty libraries with maybe a single citation by a postdoc trying to write enough papers to get the next postdoc...


Sounds like you forgot that literally everything is niche until a postdoc or tenured mathematician goes "yo hold up, this tiny proof (helps) solve(s) X". And now their name's in the science history books.


OP didn't say all math was useless, just this math, which yes is most likely completely useless.


The interesting thing about the question is that there really aren't any right or wrong answers.

People pointing out that maths is full of advancements that had no immediately identifiable use at the time, but that came to be useful later, is correct. Yet it doesn't even begin to answer the OP's question.

I doubt very much that many people choose to pour their lives into endeavours that they don't particularly enjoy just because some hypothetical person at some hypothetical future point in time might hypothetically find a hypothetical use (hypothetically ;P).

The answer is that "value" presupposes the question "valuable to who and why?"

Newton invented Calculus because he had an immediate use for it. Other mathematicians pour themselves into solving problems because they enjoy it and find a lot of reward in the prospect of solving a previously unsolved problem. Both are "valuable", just to different people for different reasons.


Yeah, it's fine for mathematicians to amuse themselves; the problem is when they demand salaries to do that and taxpayers like OP rightfully ask "what's in it for me?" And when the answer is "IDK, but maybe in a century we might have a problem this math is useful in solving", then it's not surprising that no one wants to fund pure math research. It's not the 20th century anymore, when math research was going to meaningfully improve someone's life through the invention of things like electrical devices.


Yeah, if you force other people to pay for something then you had better offer them an attractive value proposition. Though public funding of mathematics and other sciences is not what I thought we were discussing :)


In which world do you think mathematicians are raking in taxpayer money? Mathematics requires very little funding: a blackboard, some chalk, a pen and paper, a desk, some coffee. That's it.


That's a pretty myopic view of history there, champ.

The pragmatic, practical perspective here is that funding the egg heads has had incredible outcomes (and it's so cheap too), so don't let the simpletons shake the golden goose down just because they don't understand anything they can't fuck, fight, or eat.


You described my thinking very well, and much better than I. Thank you


They were trying to solve a real world problem using this kind of math, and then decided it'd be easier to improve upon the math itself than to continue to pound at the problem - so by definition, I'd say you're absolutely wrong here, or they never would have started this proof in the first place.


Until it's not, because someone suddenly realizes that a seemingly completely unrelated problem in a completely different subfield that's holding up a major proof is actually analogous to a problem where this result removes a bunch of roadblocks.

That's the problem with math from a "so what is this good for?" perspective: we don't know yet, but we sure have a litany of instances where seemingly useless proofs had a profound impact anywhere from weeks to centuries later.


>Predicting the probability of water spontaneously combusting, while interesting, provides much less food.

Unless you can make a prediction that water will combust in low-energy conditions, in which case you can use this combusting water to generate excess power. Then use that power to compress nitrogen into ammonia, and then use that product as fertilizer.

The British show Connections went over how completely different things sometimes 'connect' and bring fruitful new ideas.

The problem space of reality has emergent behaviors that are not (easily?) predictable. Sometimes you have to iterate large portions of it.


What is the point of providing food? Surviving? Existing? Is that the highest achievement? Is remaining breathing for a few more seconds the greatest goal?


I have a friend working on lidar for a military contractor. He uses these exact theorems to design beam pulses for measuring distances of asteroids and stuff. The idea is to construct beams so that you can tell by the autocorrelation function the exact distance away an object is. It turns out that these weird kinds of integer sequences can help you design beams that give the autocorrelation function this property.
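To give a flavor of that kind of sequence design, here is a sketch I'm adding (it uses a Sidon set, where all pairwise differences are distinct - a cousin of the progression-free sets in the article, not the exact same object):

    def autocorrelation(on_positions, length):
        """Aperiodic autocorrelation of a 0/1 pulse train with pulses at on_positions.
        For a Sidon set, every nonzero shift overlaps in at most one position,
        so the zero-shift peak stands out sharply - handy for ranging."""
        on = set(on_positions)
        x = [1 if i in on else 0 for i in range(length)]
        return [sum(x[i] * x[i + d] for i in range(length - d)) for d in range(length)]

    # The Sidon set {0, 1, 3, 7} in a window of length 12:
    print(autocorrelation([0, 1, 3, 7], 12))
    # -> [4, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0]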


This sounds really interesting, thanks for sharing.


It's normally extremely hard to guess the applications of this kind of research. Having said that, almost all modern cryptography relies on maths that was not obviously applicable at the time.


>In late 2021, Kelley and Meka were analyzing the chances that a team of players in a certain cooperative game would be able to win, a standard type of computer science problem. It occurred to them that techniques from research on the size of progression-free sets might be helpful. But they found it easier to directly study those techniques than to apply them to the cooperative game. “My best idea for how to make progress on this problem [was] to actually improve the tool itself, not to use it in a more clever way,” Kelley said.

I wish they went into more detail about how this problem was relevant to the other problem.


I don't know what specific problem Kelley and Meka were working on, but the connection between arithmetic progression free sets and computer science (especially communication complexity) is somewhat well established. See for example this paper[1] which gives new constructions of "corner-free sets" (which are closely related to 3 arithmetic progression-free sets) by thinking about a specific communication protocol.

[1] https://drops.dagstuhl.de/opus/volltexte/2021/14276/pdf/LIPI...


Most of the other comments focus on how and why theory research often leads to "practically useful" outcomes down the line, so I'll instead say that not all research has to be practical. Sometimes people do things just because they're interesting and fun.

Why do people play board games, or video games, or sports? Why do people read fiction? Why do people have hobbies which strictly distract from productive work time? The answer to any of these questions is the same as the answer for "Why do people do theoretical research?": it's fun.


The flippant answer is https://abstrusegoose.com/504


Any mention of Lobachevsky requires this:

https://youtu.be/gXlfXirQF3A


Eh, someone just suggested that a bound of this form might break some cryptographic assumptions for finite fields of (large) prime order. (This is not obvious to me, but it's a point that I think is worth considering, and already would show that such a result is useful.)

It's very hard to know how these things end up being useful. Even if a specific piece of knowledge isn't useful by itself, it can lead to things that are useful down the line. It's also very hard to know which ones will be, or which ones will lead to actually useful results; for example, think of work on propositional logic, which ended up being the bedrock of computation, work on elliptic curves (cryptography), work on PDEs (physics), quantum physics (transistors), photonics (communications), etc.

Some fields have pretty "fast-to-application" times, but it's not always clear which ones will _not_ yield useful results. Plus, and here's the real, honest answer: all of this shit is very fun! Why not do it?

I'm also a (somewhat) practical person, and that guides many of the problems I try to solve, but sometimes there's just such a tantalizing puzzle that it's hard to resist! It's pretty damn hard to "only" work on "useful" stuff and not be just an ok academic/programmer/etc.; you have to let yourself get nerd-sniped into doing random things. With very high probability, these little "side-quests" end up being very useful down the line, for one reason or another.


There is no direct practical reason. Other people are giving you the long term outlook view, why it is valuable in aggregate, but if you were aiming for immediately practical results neither math nor CS would fit the bill.

I can't speak for others but for me math is kind of like a brain teaser on crack. It is immensely satisfying to think about and I would imagine most mathematicians get a similar kick out of it. Not a mathematician myself mind you.


Some number theory results about sequences and progressions can be applied to create more efficient algorithms to approximate the solutions to multi-dimensional integrals that come up in physics (especially solid state physics on lattices).

Solid state physics problems of that sort are used in the process of engineering new materials (e.g. a more efficient dielectric for capacitors, a magnetic material that can store magnetic memory data at higher temperatures without destabilizing, a higher temperature superconductor, etc). Idk if this particular result has such an application, but I wouldn't be surprised if it did.

In addition to solid state physics, "special functions" show up in many areas of applied math and physics, including the design, calibration, and data analysis of/from MRI machines and antennas. Many special functions including the hypergeometric functions can be approximated more efficiently using fancy number theory algorithms.

There is also a broad category of physics and chemistry problems that are best solved by Monte Carlo simulations, and sometimes Monte Carlo approximations can be made using many fewer iterations by choosing a set of points within the sample space that satisfy number theoretic constraints.
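One toy flavor of that idea, as a rough sketch (a Kronecker/Weyl point set; real lattice rules choose integer generating vectors by much more careful number-theoretic criteria, and nothing here is specific to the new result):

    import math

    def kronecker_points(n, alphas=(math.sqrt(2), math.sqrt(3))):
        """Quasi-Monte Carlo sample: the k-th point is (frac(k*a) for a in alphas).
        With suitable irrational alphas these points fill the unit square far more
        evenly than independent random draws - an equidistribution fact from number theory."""
        return [tuple((k * a) % 1.0 for a in alphas) for k in range(1, n + 1)]

    def qmc_integral(f, n=10_000):
        """Average f over the Kronecker points to estimate its integral on [0, 1]^2."""
        pts = kronecker_points(n)
        return sum(f(x, y) for x, y in pts) / n

    # Example: the integral of x*y over the unit square is 0.25.
    print(qmc_integral(lambda x, y: x * y))  # roughly 0.25; such low-discrepancy
    # points typically converge faster than plain Monte Carlo for smooth integrands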

Also, physics models often inspire machine learning models so it's not impossible that some niche ML algorithm could eventually benefit from a faster algorithm that uses this result or a derivative of it.

None of these directly answer your question about a definitive application for this particular number theory result, but I hope it gives you an idea of what it could potentially be used for. The closest analogy I know of is that certain multi-dimensional Monte Carlo integral approximations can be done orders of magnitude more quickly if you choose lattice points whose coordinates satisfy some similar-ish constraints to the ones described for this problem.


Understanding the fundamental nature of numbers and sets will allow a person to avoid working on solutions to their "real world" problems which would be precluded by that nature.

Take cryptography, for instance. Why large prime numbers? Why elliptic curves? If you have proven results about the fundamental mathematics behind factoring products of large primes, then you have a basis for thinking that a cryptographic system based on large primes will be reasonably secure.

Many of the developments you use everyday are based on the understanding of just such "brain teasers" :-).


Lazy loading solutions doesn't work when there's no library of solutions to pull from.

Humanity already tried thinking it was better than the selfless pursuit of knowledge; that path leads nowhere good.


Before cryptography, all of number theory was considered just one brain teaser after another. Without those brain teasers we wouldn't have secure private communications.


Sometimes, discovering these proofs makes the math easier. If we know X is the case for all Ys, and X is easier to compute, you can use X where you would have used Y before. Or if you know X is true for all Ys, you can skip the part where you have to verify that your Y has property X, because we now know it must have it.

And it's math all the way down in computers. So anything that makes math easier or quicker has the potential to affect anything a computer can process.


Let's just hope that the politicians who control the flow of resources are not as shortsighted as you, or we are doomed. The computer and all its software wouldn't exist without this "boring and pointless" mathematics from long before.


Understanding mathematics before we understand what problems it solves happens all the time. Maybe nothing will come of this, but because we can't know that, it's worth researching.

People didn't believe there was non-Euclidean geometry and not long after the taboo was broken, we had General Relativity. Einstein was a genius, but he did not invent the math for GR. Mathematicians like Riemann, Minkowski, Hilbert, Ricci, Levi-Civita and others created the mathematics for it first.

Finding the area under a curve or the tangent line of an arbitrary curve were considered two unconnected brain teasers until Calculus was invented by Newton and Leibniz and applied to gravity by Newton.

The "Math First" approach has done more than enough to prove its continued viability and without it you wouldn't have been able to type that here.


For a long time certain mathematicians took great pride in studying the most useless part of their field, prime numbers. Without their studies we would not have modern cryptographic techniques (RSA, SSL, HTTPS). And no bitcoin either.


>What useful development comes out of such?

Mathematicians aren't concerned with that at all. If something they do ends up being useful to science and engineering, then wonderful. But it's not the point.


All fundamental research is odd like that. Someone has to make significant progress on a 'useless' topic before some semblance of usefulness starts becoming visible.


This has significant implications for randomness and entropy.

I can already see it being useful for verifying entropy and for verifying anonymization of statistical data.


Well, you can now use this as a tool: if you can reduce part of a problem to creating a progression-free set, you can use this result to say something about that problem's complexity.

The ways I've seen combinatorics (and graph theory!) interact with software engineering is typically showing that a problem is too expensive to solve at scale.


A practical person doesn’t usually discover or invent something which turns out to be practical in the long run.


We're mortals; the only relevant question is "is it a better use of brain cycles than X".

Considering that vastly more brain cycles are devoted to making people watch one more TikTok video, I think this kind of brain-scratching is more useful than much of what most of us do.


The point is to advance knowledge of mathematics. Knowledge is a practical pursuit in and of itself.


My master's thesis was about a problem that I could only find an NP solution for, but some of the underlying mathematical problems have not been studied/solved in depth yet, so I'm patiently waiting for a solution.



Maybe the question is what useful developments haven’t come from mathematical play originally, and can’t trace their lineage to abstract brain teasers?


One can see it as peeking behind the veil to view the underlying structure of the universe we live in. It permits certain things and does not permit others.


The gist of the insight from exercise is that one is conditioned for the unpredictable in later situations, even if the exercise itself seemed pointless.


Ever heard of culture? I bet by "useful" you mean "profitable".


The fallacy is thinking that this was the puzzle. This is just one piece in a puzzle that's so big no one person can see the whole thing, and that we're using as a blueprint for large parts of modern civilization while still figuring it out. We don't even know whether we've completed most of the puzzle, or whether the puzzle is actually vastly larger than what we've already put together.

If you can't see why this is useful, then great: that's literally what it means to be a layman. But that also means that you don't know enough about the subject field for the real answer (the one that explains what other maths this affects) to satisfy your question because you won't know why _those_ things would matter.

So you're going to get the layman's answer: "because this shows that any work already performed in the assumption of its truth is now known to be valid", which is critically important because a lot of what maths is used for in real life is based on assumptions about the properties of numbers. Now others can stop wasting their time trying to determine whether this aspect of number theory can be relied upon, and under which circumstances, and trying to find all expressions of it in both software and hardware in order to determine whether those might have problems on a case-by-case basis. And at the same time, this result allows others still to move forward with work that is known to rely (in part!) on this property either being true or false, for which it was previously uncertain whether it would even be worth the effort because it was unknown whether the results would turn out to be useless because no one knew whether the basis it relied on was flawed.

As for what useful development comes out of things like this:

*points at the computer you're using to comment on HN threads, and every single application on it that you've ever used*

That.


Please stop with the click bait headlines :(


"Theorem shows how 3-term arithmetic progressions inevitably arise in large sets"

"Improved upper bound on density of integer sets containing no subset a,a+b,a+2b"


Ok, we've unstunned the mathematicians and added combinatorics to the title above.


Any headline containing the words "bombshell", "destroys", "shocking" or things of that ilk is ALWAYS poorly written fluff, because editors are generally competent.

If there was an actual bombshell that would be the headline!

It wouldn't be "Surprise Computer Science Proof Stuns Mathematicians" it would be "Scientist solves Fermat's Last Theorem" or something


I mean, not every big result is FLT. This headline actually tells me, a non-mathematician, much more than something more accurate like "Strong Bounds for 3-Progressions". I have no context for interpreting that phrase at all, but I can appreciate the one chosen (if it's true, which it is).


this is actually untrue. yes it’s a decent signal of bollocks, but this is a perfectly interesting and well-written article

those headlines and similar clickbait techniques are annoying, but they’re seen as a necessary evil by plenty of “good quality” content creators and editors

this is especially evident on youtube, where it appears as if you literally cannot gain mass success without surprising facial expressions in your video thumbnails

if you think I’m exaggerating, open Youtube in a private tab and see how many of the recommended videos don’t have someone pulling an unusual facial expression in them

plenty of these videos are of perfectly good quality, but in order to succeed they have to follow a shadowy pattern


I always thought a set of satirical articles like "Mayor Slams Opponents on Support of Budget Cuts" - which then go into said mayor being in custody for assault charges, or other similarly overly literal interpretations of the original meaning - would be good fun. Well, it probably exists; I just haven't seen it yet :).

It's great that headlines aren't using the overly technical description (which honestly doesn't help much either), but it is a bit depressing that the reason for doing so is "nobody clicks those", not "it could be made clearer to the average person". So we end up with headlines so vague you have no clue what the article could actually be about, yet written to ensure they sound interesting enough for you to click. After that they don't really care what you get out of it; you've loaded another article and ads instead of leaving. This article actually bucks the trend a bit by being decent enough with what content is in the body, at least.


I mean, this is a pretty big result in combinatorics, solving a 70-year-old problem.


Then the headline should have said "combinatorics" instead of the off-brand Buzzfeed "stuns mathematicians".


Did you read the article? The headline is pretty accurate, if not pretty abstract.


It's click bait because it contains zero information about what was actually done.


It's only bait if you bite on it, as you did. I did as well, because I could see it was a Quanta link, and found it quite interesting. Would I have clicked on it if the title was "The Kelley--Meka bounds for sets free of three-term arithmetic progressions?" Probably not. So in this case, I consider myself fortunately baited.


So wait, you're saying that people should just accept that headlines are uselessly hyperbolic, and either (A) click every one of them to see if they're worth your time or (B) never click any, to avoid "biting" and getting "baited"?

Eminently practical approach, but we can hope for a better world where this isn't necessary...


It seems that there is actually (C) somewhere in there, which would be "apply some judgement" i.e. look at the source, and maybe (D) let someone else on HN bite on it and get a synopsis there.


Well for (C) I don't have time to memorize the "clickbaityness" of every single news outlet on the internet (of which there are probably hundreds of thousands), so that's a non-starter in reality. For (D) that seems like a practical solution, but now I have to read all of the top-level comments (of which there may be many), to see if one of them tells me if it's clickbait, before clicking on the article?

Hey, you know what, while we're assigning WORK to other HN members, why don't we just ask that people submitting science news do a little digging and find a non-clickbait headline? Why is that unacceptable, while your assertion is acceptable?


i feel like you’ve lost sight of what “bait” is


it may be clickbait but it does provide information about the crossing over of fields, which is interesting, particularly when those fields are computer science and maths


Ok, but what's the use of this data set? Like how does this improve something in the world?

Edit: why disagree... with me asking questions?


"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


I'm not posting a shallow dismissal. I'm legitimately asking what this applies to since I didn't see anything in the article about potential application. It's fine if there aren't any potential applications too, just like the videos/articles about Rubik's Cubes solving - the people/solutions are impressive even if it doesn't have any practical applications.


I would say the main marker of shallow dismissal in your comment was the "Ok, but" (and to some extent the "Like how"). If you say you were asking out of curiosity, I believe you, but in that case your comment pattern-matched to the opposite of your intent, because a question asked out of curiosity would not normally include those markers and would not normally be phrased this way.


Hmm, interesting. I'll try to avoid that pattern of speech in the future.


You must be confused; the title specifies computer science and mathematicians - the default assumption here should be that if it is useful, no one is going to realize it for some decades.


> Ok, but what's the use of this data set?

Same use as the rest of mathematics: Entertainment for those interested in the topic.


This is why America's obsession with trying to boost math scores is possibly a waste of time and money. No amount of math literacy will ever approach anything like this. Those who are gifted and motivated enough will learn the material anyway and there are enough professors who can teach it.

Professionals are so far above and beyond anything done by non-professionals. It's not like that with reading or writing, where knowing how to read means you can in theory appreciate almost any book.

I think more emphasis should be on learning the basics, which enough kids find hard enough to do. No reason to try to make high schoolers learn algebra 1 & 2.


The authors of the paper are academics in university CS departments with heavy math backgrounds (one with a B.S. in mathematics, another was a visiting professor at MIT dept of mathematics). I don't understand what you mean or how it relates to the article or surrounding circumstances.


>there’s no point planting seeds because plants will grow anyway

did you have a bad maths teacher at some point?



