Simple Set Game Proof Stuns Mathematicians (quantamagazine.org)
303 points by retupmoc01 on May 31, 2016 | hide | past | web | favorite | 82 comments



> The paper soon set off a cascade of what Ellenberg called “math at Internet speed.” Within 10 days, Ellenberg and Dion Gijswijt, a mathematician at Delft University of Technology in the Netherlands, had each independently posted papers showing how to modify the argument to polish off the original cap set problem in just three pages.

This is a very interesting trend I've noticed. I wanted to cite a few papers on arXiv in one of my own research papers recently, but my advisor commented that none of the articles had been peer reviewed (since arXiv is a preprint server). I told him that in the last year alone, six papers on arXiv follow a "research trail" (i.e. a paper is put on arXiv in May that builds on results from a February paper that builds on results from December, etc.), and that the most recent peer-reviewed article in a published journal is so far behind the state-of-the-art that completing my paper without any mention of the arXiv works would put me significantly behind the rest of the field.

Of course, these papers all relate to math and computer science — whether a new algorithm or proof works is (usually) immediately evident upon implementation, and the papers on arXiv include the complete algorithm and often link to the author's code. Peer-reviewing their work yourself often takes no longer than a half hour or so (unlike, say, a research article in materials science, where a complete replication study could take over a year).


That has been common in physics for a long time, too. For hot new topics, the conversations are often happening through preprints rather than peer-reviewed articles. (The papers are usually eventually published in peer-reviewed journals, though.)


Building on research seems like it's a form of peer review, especially in math where you'd need to understand the entire proof before making incremental improvements to it.


While this can be true it's also common to have "assuming X is true" where X is some very complicated hypothesis.

We saw this with Fermat's Last Theorem and it was with a great sigh of relief that it was finally proven in the 90s. If the inverse was true then entire fields of Mathematics would have collapsed.


FLT is a poor example. Very little depended on it, and if it weren't for its colorful history, it wouldn't ever have gotten the exposure it did.

For a really good example, see https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_existenc...


> If the inverse was true then entire fields of Mathematics would have collapsed.

Honest question: what would've been the consequences of this?


Maybe this is more true for research that assumes the truth of the Riemann Hypothesis than research that assumed the truth of Fermat's Last Theorem?

Maybe there was some significant body of research before 1995 that assumed the Taniyama–Shimura–Weil conjecture, a more powerful statement that implied Fermat's Last Theorem and was ultimately proven as a way of proving it.

https://en.wikipedia.org/wiki/Modularity_theorem


The consequence is that a whole body of work winds up coming with an asterisk until people figure out what they can and can't trust. Papers may be looked at for inspiration, but won't be quoted for results. Eventually some of it gets proved properly, and the rest is abandoned. After that the older papers become mere historical curiosities.

A reasonably recent example of this is the https://en.wikipedia.org/wiki/Italian_school_of_algebraic_ge....

A possible place where this could happen is the classification of finite simple groups. It has been "proven", but the proof is long, technical, and was never adequately reviewed. Lots of papers these days start off using the classification in interesting ways. However, there is an open program to produce an actual reviewed proof. If, in the process of doing that, we found that the original result was wrong, there would be a fairly large project to figure out the consequences.

See https://en.wikipedia.org/wiki/Classification_of_finite_simpl... for more.


But when the results are useless anyway, it doesn't really matter if they are right or wrong...they just may be speculative of some alternate universe, or may still contain ideas that are applicable elsewhere.


Prime numbers used to be useless when first researched (edit: during the previous two centuries, when their properties were studied). We don't always know in advance what will turn out to be useful.


You say, "Prime numbers used to be useless when first researched," but when the Middle-Kingdom Egyptians were doing their initial research on prime numbers, they needed them for the algorithms they used to calculate with fractions. These were used in the Rhind Papyrus to calculate things like the volumes of granaries. You could hardly have picked a worse example.


No, he picked a famous and perfect example. He just didn't specify it well enough.

Over the last 2 centuries, number theorists developed the theory of large prime numbers. The numbers that they were dealing with were so large that they had no conceivable use in describing the physical universe.

Famously one prominent number theorist, G. H. Hardy, wrote A Mathematician's Apology, a book describing and justifying his life. In it he famously described his field as being utterly useless with no practical applications.

Then cryptography came along, and the mathematics of finding large prime numbers, and factoring hard-to-factor large numbers, turned out to have practical applications of great importance!


Please don't blame me for refuting what he did say, instead of what he would have said if he'd known what he was talking about.


From my point of view, both of you demonstrated a lop-sided knowledge of math history.

Clearly you know more about the ancient history and origins. I'd be willing to bet that you know that the ancient Greeks knew 2500 years ago what prime numbers were, had proved that there were infinitely many of them, had algorithms like the Euclidean algorithm for finding the greatest common divisor, had proven unique factorization AND had demonstrated that sqrt(2) was not a fraction. We don't actually know how much farther a lot of the knowledge goes.

On the other side he had obviously encountered cryptography, and knew that a whole lot of the necessary number theory dates back to Gauss, 200 years ago. https://en.wikipedia.org/wiki/Disquisitiones_Arithmeticae is the origin of concepts like modular arithmetic, quadratic residues, and so on. But he was not familiar with the ancient history predating that, or else he could not have thought that the study of primes only goes back 200 years!

He could have avoided the problem on his side by Googling for what he was going to say before saying things with glaring and obvious errors. Very few of us are so careful.

You could have avoided the problem on your side by giving him the benefit of the doubt and assuming that he's probably not a complete idiot, then trying to figure out what he might have meant. You might or might not have figured out "cryptography", but you could have at least made your post in the form of a much more pleasant question. However that is fairly rare to find, and doubly so online.

As for me, I'm just lucky enough to know both halves of the history, so could easily sort it out.


Ben, I'd've thought you'd known me long enough to know that I'm familiar with RSA and the history of number theory. (Maybe I misremember; you seem to have started doing Perl after I left clpm.) I read A Mathematician's Apology last year (most of it, anyway), and my friend Nadia keeps publishing papers that factor large numbers of RSA keys in practice, the latest being CacheBleed. Your understanding of cryptography is surely deeper than mine — the most I've ever done myself is write an implementation of SRP — but it's not as if I haven't heard of the field.

I had thought that it was common knowledge that (small) prime numbers had a lot of practical uses (mental arithmetic in general, arithmetic with fractions, including vulgar fractions, gear train design, that kind of thing), but apparently I was wrong. It turns out that lots of people don't know about this. So my inference (that only a complete idiot would not know this, that he did not know this, and that therefore he was a complete idiot) was ill-founded.

And so I came out looking like some kind of ignorant, arrogant know-it-all. I really appreciate the feedback, Ben. Natanael_L, I'm sorry I was such a dick to you.


I am familiar with your name, but hadn't tracked you well enough to remember anything more than, "He knows Perl." I left Usenet about a year before I learned Perl, so I was never in clpm.

So you just failed to register the cryptography reference and then backsolve to what he really meant. If that's the worst thing that you did last month, then you're a better person than I...


I meant exactly what he mentioned. How was I supposed to know you were going to bring up a far older example which I hadn't heard of?


Yup. An example from machine learning: This paper, "Weighted Residuals for Very Deep Networks"

https://arxiv.org/abs/1605.08831

from May 28, cites this paper, "Wide Residual Networks"

https://arxiv.org/abs/1605.07146

from May 23. Less than a week between them! And it's with good reason too, because they want to compare with state of the art on a certain problem/dataset, and that improved five days ago.

Both of these papers are about very basic advances, in a sense: If you have a residual network implementation lying around (and there are plenty of open source ones), it's trivial to implement the improvements they propose. So it's not as crazy to cite a 5 day old paper as it seems.
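For readers who haven't seen one, the residual idea really is that small: a block computes y = x + F(x), where F is a little subnetwork and the identity shortcut is what makes very deep stacks trainable. A minimal NumPy sketch with toy shapes and weights (not either paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W1, W2):
    """Toy residual block: output = x + F(x), where F is a two-layer net.
    The identity shortcut means the block can fall back to a no-op."""
    h = np.maximum(0, x @ W1)  # hidden layer with ReLU
    return x + h @ W2          # skip connection adds the input back

x = rng.standard_normal((4, 8))          # batch of 4 inputs, width 8
W1 = 0.1 * rng.standard_normal((8, 16))  # toy weights, small init
W2 = 0.1 * rng.standard_normal((16, 8))
y = residual_block(x, W1, W2)
```

With zero weights the block reduces to the identity, which is exactly why tweaks like weighting or widening the residual branch are a few-line change on top of any existing implementation.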


> whether a new algorithm or proof works is (usually) immediately evident upon implementation

I would argue that this is a dangerous claim, because an algorithm working generally is different from proof that it always works, and this can lead to serious issues. I would say that this is one of the big reasons peer review exists!


I'm not familiar with academia but I was surprised that the discovery was made by three mathematicians at three different far flung universities. Is that type of collaboration also being accelerated by the internet?


Collaborators will often meet at conferences or similar, and then go out of their way to attend the same conferences. There's a big difference between the fast in-person work that lays the groundwork, and the refinement that goes into finishing the paper. The internet is way more useful for the latter, imho. It's much easier to share ideas quickly on a blackboard...


The situation could be even better if the proposition and proof could be written in a machine verifiable form using a proof assistant like Coq.

Potentially you could have a section in a maths journal in which 'papers' (in the form of computer-readable propositions and proofs) are immediately reviewed for correctness by a computer. Post-publication, humans may then assess the impact / significance by voting (perhaps with propositions from significant conjectures configured to instantly go to the top of the significance rankings).
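As a tiny illustration of what a machine-checkable 'paper' might contain, here is a trivial proposition and proof in Lean (shown for concreteness; Coq works similarly):

```lean
-- A machine-verified proposition: addition on naturals is commutative.
-- The proof checker validates this mechanically; no human referee is
-- needed to establish correctness, only to judge significance.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```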


A lot of the history of mathematics is summed up in the last paragraph of this very interesting article. "The fact that the cap set problem finally yielded to such a simple technique is humbling, Ellenberg said. 'It makes you wonder what else is actually easy.'" Mathematical progress comes about by some mathematician noticing a pattern that makes solving a formerly difficult problem easier than it used to be. (For this purpose, innovations like place-value decimal numeral notation fit the preceding statement.) Of course the hard thing is being the first human being to figure out the easy way to solve the problem, and then to embed that understanding in a technique that other problem-solvers can learn. But the increasingly rapid collaboration among mathematicians these days (mentioned in a comment posted earlier) speeds the uptake of new techniques and makes it more likely that new understanding will rapidly spread among mathematical problem-solvers.


I did really well in school, and am a professional developer now, and my wife utterly destroys me at this game.

She can find sets in real time as the cards are being dealt, while singing along with music, it's maddening.


Could it be somehow related to better peripheral vision?

I met a woman once who could gather four-leaf clovers nearly instantly, at the same spot where my male friends and I had spent minutes searching.

I've seen written in some places (but could not find a worthy source) that women might have better peripheral vision due to food gathering in prehistoric times, but this might just be some old sexist construct (if anyone knows a good source to confirm or refute this, please tell me).


Funny anecdata: in my circle of friends, the trait that best predicts Set skill isn't IQ or gender or anything like that, it's musical ability.

This came up at a party. Six people playing Set, all of us from different cities, and the three that took band in high school utterly dominated the three that didn't. The reason? Field trips. The Band kids had all played Set for hours on buses, the rest of us had only played it once or twice and lost interest.


I like how you introduce a correlating attribute in the beginning of your comment, get me thinking about the relationship between musical ability and set-identifying ability, then clobber whatever hypothesis I was putting together just as fast with the confounding "field trips" variable.

Nice work :)


Thank you! I should be writing clickbait headlines.

"Will this STUNNING mathematical breakthrough in a children's card game finally make Kim leave Kanye?"


But why is set so popular on band trips, but not football trips?


I'd guess that the players on a football team would be reviewing the playbook all the way up until kickoff. Meanwhile the band isn't going to be practicing their performance on the bus.


I bet it's closer to the pattern matching part of the brain that counts objects at a glance. This is a different skill than counting with words.


From https://en.m.wikipedia.org/wiki/Subitizing

"Subitizing is the rapid, accurate, and confident judgement of numbers performed for small numbers of items. [...] The accuracy, speed, and confidence with which observers make judgments of the number of items are critically dependent on the number of elements to be enumerated. Judgments made for displays composed of around one to four items are rapid, accurate and confident."


I don't know if that's a male/female trait. I literally find four-leaf clovers so often/fast that I don't even consciously know how I do it. I'll just reach down and grab them.

I'm a male btw


As a kid I would find them all the time. These days if I end up standing around a clover patch with nothing to do, I always look for them but never find them. I can't imagine I had the patience to look for them as a kid for very long, so I feel that my ability must have gone downhill OR that the patches I'm by now just don't have as many of them. I was always convinced that the ones at the house I grew up at had an abundance of four (or higher) leafed clovers and that patches at school, for example, were relatively poor.

Perhaps that ability decline is just my current reluctance to get down on my hands and knees and leaf through them by hand.


My mother is like this with four-leaf clovers. We can be walking together and having a conversation, and she'll stop mid-stride and pick up a four-leaf clover that any normal person would have had to stop and hunt for. I definitely did not inherit this skill - I can't recall ever having found one.


quoting https://news.ycombinator.com/item?id=10954918 :

> Go players activate the brain region of vision, and literally think by seeing the board state. A lot of Go study is seeing patterns and shapes... 4-point bend is life, or Ko in the corner, Crane Nest, Tiger Mouth, the Ladder... etc. etc.

> Go has probably been so hard for computers to "solve" not because Go is "harder" than Chess (it is... but I don't think that's the primary reason), but instead because humans brains are innately wired to be better at Go than at Chess. The vision-area of the human's brain is very large, and "hacking" the vision center of the brain to make it think about Go is very effective.


My girlfriend is amazing at finding 4 leaf clovers, but I destroy her in Set...so they may not be related skills


Our friend is so jarringly good at Set and Scrabble that nobody will play with her anymore. She has to play people on the internet and they frequently disconnect mid-match.


If you want to lose friends and be disowned from your family - learn to play Monopoly by the actual rules and follow this strategy: http://imgur.com/a/vX3zm

I can confirm that it works and nobody wants to play Monopoly anymore. Instead we play more fun board games.


This strategy works when one person plays by the complicated rules and everyone else plays for fun. When you go pull out some BGG game, you'll have the same problem: the board game geek rules lawyer playing strategy and trouncing players who play for fun.


This is why it's better to go to a YouTube channel like Rahdo Runs Through [0] than BGG's ranking page. Rahdo plays with his wife and they prefer games with a lower interaction level than the typical BGG forum power user. This means strategies based on blocking the other players' access to the resources they need are not included or emphasized in the games they play.

Though you will find exceptions, of course. The current number one game on BGG is Pandemic Legacy [1], a purely cooperative game.

[0] https://youtube.com/user/rahdo

[1] https://boardgamegeek.com/boardgame/161936/pandemic-legacy-s...


To be fair - Monopoly is a partially luck-based strategy game. And while having fun is certainly a big part of board games - so is winning and winning is pretty fun for most people.

I see it as a split between "casuals" and "competitive" players. Competitive players find fun in attempting to win or giving it their best shot while casual players see winning as a bonus and just want to have fun. Which may or may not involve playing with the best strategies or giving it their best.

I also agree entirely with lmm. The best games are games that are fun even when losing. If you can't have fun while losing (eg: Monopoly) then I don't consider it a good game.


Monopoly is a particular case where it's important to play with the actual rules if you play for fun - the common deviations from the rules make the game much, much more drawn out than intended, beyond the point where everything is already decided but you get to slowly slog it out through the motions and it becomes boring. "As intended", Monopoly has a rather rapid endgame after the (arguably more fun) initial and middle part.


In any competitive game the majority of players will be losing. So a good game is one that's fun even when you're losing.


That's awesome, thanks. Also introduced me to Georgism which I naturally relate to Piketty. Now I've got a lot of reading to do.


> and they frequently disconnect mid-match.

Speaking solely of Scrabble, this happens pretty often regardless of skill level. ;-) I try to keep 3 games going at a time, but my last 5 have quit in the first three turns! And I'm using a matchmaking algorithm, so they're players of my approximate skill level too!


My family is ultra-competitive and this used to be one of our go to games until my sister mastered it. My brother-in-law wrote an android game to help her practice and after about a month she was down to sub 1 minute clears. We had to give up the game after that.


Is the android app available for download anywhere?


He got a cease and desist from the manufacturer so he took it down unfortunately.


I'm also a professional software engineer who did very well in high school and at the university and, similarly, my wife consistently beats me at this type of game.

Granted, my wife is well-educated and I believe she is smarter than I am, but it's like she immediately sees the solution while I see the input and have to work out the solution. I usually attribute this to my deficient color vision, since that's the property that usually trips me up. Alternatively, I've wondered if I'm focusing too hard and if the best approach might be reacting on instinct and relying on my subconscious to solve the problem before I'm consciously aware of having parsed the input.


there are many forms of intelligence!


Reminds me of Sir Ken Robinson's TED talk


Sounds like she would be an amazing NFL QB if she can also throw a football :)


My wife also is amazing at this game and destroys me every time :)


Synesthesia? Perhaps her brain sees those sets and identifies them in such a way that it takes no conscious effort? A group might appear in a different color, or "taste" or feeling, for example...

https://en.wikipedia.org/wiki/Synesthesia


Nice to read about mathematics that involves Set! I found out about Set here on Hacker News several years ago, when I read Peter Norvig's blog post on the odds of not finding a set in the 12 cards [1]. I had never heard about the game before, but tried it and really liked it.

The simplicity of the rules, the mathematical nature of it, and the fact that adults and kids can play together makes it such a great game. I continued with the analysis of finding the odds of no set in each round of play, and had fun doing so. It seems very difficult to find an analytical solution (and quite beyond me), but simulation was a nice project that gave some good insights into how the odds vary over the rounds played [2].

[1] http://norvig.com/SET.html

[2] https://henrikwarne.com/2011/09/30/set-probabilities-revisit...
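The no-set probability is easy to estimate by Monte Carlo once you model cards as vectors in F_3^4: three cards form a set exactly when every attribute sums to 0 mod 3. A minimal sketch of such a simulation (my own toy version, not Norvig's or Warne's actual code):

```python
import random
from itertools import combinations, product

# The 81 Set cards: 4 attributes, each with 3 values, as vectors over F_3.
DECK = list(product(range(3), repeat=4))

def is_set(a, b, c):
    """Three cards form a set iff each attribute is all-same or
    all-different, i.e. each coordinate sums to 0 mod 3."""
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

def has_set(cards):
    return any(is_set(*triple) for triple in combinations(cards, 3))

def p_no_set(n_cards=12, trials=20_000, seed=1):
    """Estimate the probability that a random deal contains no set."""
    rng = random.Random(seed)
    misses = sum(not has_set(rng.sample(DECK, n_cards)) for _ in range(trials))
    return misses / trials
```

With enough trials the 12-card estimate comes out near Norvig's figure of roughly 3.2%.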


the similar but easier game "spot it!" works better for playing with kids, i find. (i prefer it as an adult too - i thought i'd love set, but playing it feels more like work than fun to me.)


There's some interesting analysis of the math behind Spot It! on stackoverflow [1].

That post mentions the Fano Plane, which, incidentally, I first read about in the book How Not to Be Wrong by Jordan Ellenberg, a mathematician quoted in the article about Set that started this thread. In the book, he uses the Fano Plane to explain how to pick numbers for a specific kind of lottery.

[1] http://stackoverflow.com/questions/6240113/what-are-the-math...


thanks, that was a beautiful thread! also a really good diagram explaining the projective plane.


For those who check the comments first: Not clickbait. Title is literally accurate.


Oh come on,

The title and the article seem to be essentially accurate but are written in a breathless "we'll make this math stuff exciting, darn it" tone that some might like but which, as a math person, I find somewhat grating.

I think science/math writing of twenty or forty years ago was still reasonably effective, and with that we could get by with a headline such as "An impressive result in combinatorics that uses the 'polynomial method'"

Edit: I appreciate that quantamagazine.org is doing a bunch of math articles. But we gotta admit it's injecting about every bit of "this is simply amaaaazing" rhetoric it can muster, which detracts somewhat from at least my enjoyment of it and actually makes it harder to figure out what's happening (from an MA-level perspective).


As a grad student in math I read a lot of lifeless papers. Seeing the exact opposite is fun. I thought it was a good article. Why does math need to avoid breathlessness if we seem to embrace it with, say, a million breathless articles about Stephen Curry's latest game? It only seems bad because there's not a culture of excitement over math like there is for basketball, so it stands out. But if we want there to be such a culture, maybe we need to start by taking every bit of excitement we have and stretching it to the fullest.


Martin Gardner was very good at that, and I think Erica Klarreich (one of the main Quanta math journalists and the author of this particular article) is good at it too: she likes to focus on the history and context of a problem, describing mathematicians' expectations and hopes for a solution, and then how progress was made and what consequences it may have.

I guess there's some risk that many of these stories fall into similar patterns (something was hard and mathematicians worked on it for a long time, they didn't expect it to be solved soon, now it's been solved and people are impressed and see various applications), but it's nice to see the details and hear from some people in the field in their own words.

I feel like Klarreich tends to give more details than Gardner did when writing about current research; maybe that partly has to do with the web form, because she can include links to the actual papers and supplementary materials. It's also a nice counterpoint to the enthusiasm that her articles convey because the applications and consequences are specific; it's not like journalism that says "maybe now we'll get a pony/interstellar spaceflight/spooky quantum communications/faster computers!" without really showing how the discovery is going to enable that.

The fact is that mathematicians are excited by unexpected progress in math research, so hopefully other people can be too! :-)


Even if mathematicians were literally or figuratively stunned by the headline, it's a clickbait headline because it gives you no information about the content of the article, while trying to entice you to click through and read it with a big claim.

Example: Clickbait headline: "This underdog just won a major political office!" Regular headline: "Truman defeats Dewey"


If you think about it, you'll realize that in order to phrase my post that way in the first place, I must already have thought that the headline was written in the form of clickbait, otherwise it makes no sense.


I wouldn't say it literally stunned anyone, but it's certainly a very (figuratively) impressive result.


According to Google, the average human male foot is 26.3 cm, or 0.86 feet. So I suppose I am literally 7 feet tall.


Wonderful piece; what I like is that I remember this game being taught to me by the mathematics professors at my school, and I enjoyed the game but didn't ask any deeper questions about the game, such as its mathematical properties. Now here I see this way of looking at the game, perhaps even the reason for its existence, and it's like rediscovering an old friend.


To me the most exciting part is that this may lead to a faster fast matrix multiplication algorithm, which would be huge. The current bound is O(n^2.372...). They could very feasibly hit the theoretical limit of O(n^(2+ε)).


Maximal size of cap sets for games with up to six attributes -

2 4 9 20 45 112

https://oeis.org/A090245
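The first couple of terms are small enough to check by exhaustive search over F_3^n, where a cap set is a set of points with no three distinct points summing to zero mod 3. A brute-force sketch (my own, only feasible for n ≤ 2 since it enumerates all subsets):

```python
from itertools import combinations, product

def max_cap(n):
    """Size of the largest cap set in F_3^n: no three distinct points
    x, y, z with x + y + z == 0 in every coordinate mod 3.
    Enumerates all 2^(3^n) subsets, so only usable for tiny n."""
    pts = list(product(range(3), repeat=n))
    m = len(pts)
    best = 0
    for mask in range(1 << m):
        chosen = [pts[i] for i in range(m) if mask >> i & 1]
        if len(chosen) <= best:
            continue  # can't improve on the current record
        if all(any((a + b + c) % 3 for a, b, c in zip(x, y, z))
               for x, y, z in combinations(chosen, 3)):
            best = len(chosen)
    return best
```

max_cap(1) and max_cap(2) reproduce the first two terms, 2 and 4; the later terms (9, 20, 45, 112) need much cleverer search than this.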


Another good link concerned with this problem is from OEIS:

https://oeis.org/A090245

"Maximum numbers of cards that would have no SET in an n-attribute version of the SET card game"

The OEIS links appear to have been updated to reflect the recent breakthrough of Ellenberg and Gijswijt, including their unified paper arXiv:1605.09223 from May 30.


For your entertainment... http://3e.org/set


I don't understand the Alice imagery (nor the witch, not sure if that is also Alice imagery somehow) -- any ideas?


The witch appears to be the Red Queen. (Rotate her head about 135 degrees ccw and you get what is close to the symbol for a chess queen).


Can anyone explain how these polynomials are generated? Are the constants or exponents based on the card characteristics? And if it is not understood why they work, then how can we use them in a proof?


Again with the Ramsey theory! Every day, I'm reading about some amazing new result in Ramsey theory! It's like the microservices of discrete mathematics! Enough already!


I've been bumping into it a lot, too. I think it started with the ZFC-independent Turing machines a while back.


IIRC "Elegance in mathematics is directly proportional to what you can see in it and inversely proportional to the effort required to see it" -- S. Eilenberg


   > a different design with four attributes — color (which can be red, purple or green),
Was this game invented to annoy the colour-blind?


It was invented based on the file-marking symbols used by a geneticist for genetics research:

http://www.setgame.com/founder-inventor

https://web.archive.org/web/20130313190253/http://www.setgam...

You can play a variant of Set in a single color (which the rules suggest is easier -- but it's also more feasible for color-blind players).


iOS app of the referenced game: https://appsto.re/us/EXmoU.i

Physical version of the game is available, I suggest adding it to your collection.


Variants exist on the web as well, e.g., http://thebreretons.com/trifecta/



