This is a very interesting trend I've noticed. I wanted to cite a few papers from arXiv in one of my own research papers recently, but my advisor commented that none of the articles had been peer reviewed (since arXiv is a preprint server). I told him that in the last year alone, six papers on arXiv have followed a "research trail" (i.e. a paper is put on arXiv in May that builds on results from a February paper that builds on results from December, etc.), and that the most recent peer-reviewed article in a published journal is so far behind the state of the art that completing my paper without any mention of the arXiv works would put me significantly behind the rest of the field.
Of course, these papers all relate to math and computer science — whether a new algorithm or proof works is (usually) immediately evident upon implementation, and the papers on arXiv include the complete algorithm and often link to the author's code. Peer-reviewing their work yourself often takes no longer than a half hour or so (unlike, say, a research article in materials science, where a complete replication study could take over a year).
We saw this with Fermat's Last Theorem, and it was with a great sigh of relief that it was finally proven in the 90s. Had it turned out to be false, entire fields of mathematics would have collapsed.
For a really good example, see https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_existenc...
Honest question: what would've been the consequences of this?
Maybe there was some significant body of research before 1995 that assumed the Taniyama–Shimura–Weil conjecture, a more powerful statement that implied Fermat's Last Theorem and was ultimately proven as a way of proving it.
A reasonably recent example of this is the https://en.wikipedia.org/wiki/Italian_school_of_algebraic_ge....
A possible place where this could happen is the classification of finite simple groups. It has been "proven", but the proof is long, technical, and was never adequately reviewed. Lots of papers these days start off using the classification in interesting ways. However, there is an open program to produce an actual reviewed proof. If, in the process of doing that, we found that the original result was wrong, there would be a fairly large project to figure out the consequences.
See https://en.wikipedia.org/wiki/Classification_of_finite_simpl... for more.
Over the last 2 centuries, number theorists developed the theory of large prime numbers. The numbers that they were dealing with were so large that they had no conceivable use in describing the physical universe.
One prominent number theorist, G. H. Hardy, famously wrote A Mathematician's Apology, a book describing and justifying his life. In it he described his field as utterly useless, with no practical applications.
Then cryptography came along, and the mathematics of finding large prime numbers, and of factoring large numbers that are hard to factor, turned out to have practical applications of great importance!
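To spell out the asymmetry: testing whether an enormous number is prime is cheap, while factoring the product of two such primes is believed to be hard, and RSA-style cryptography lives in that gap. A minimal sketch of the standard Miller-Rabin primality test in Python (my own illustration, obviously not from Gauss or Hardy):

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    # Miller-Rabin test: fast even for numbers with hundreds of digits.
    # (Primality testing is cheap; factoring is believed hard.)
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

# Finding a ~256-bit probable prime takes a fraction of a second:
p = random.getrandbits(256) | 1
while not is_probable_prime(p):
    p += 2
print(p)
```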
Clearly you know more about the ancient history and origins. I'd be willing to bet that you know that the ancient Greeks knew 2500 years ago what prime numbers were, had proved that there are infinitely many of them, had algorithms like the Euclidean algorithm for finding the greatest common divisor (sketched below), had proven unique factorization AND had demonstrated that sqrt(2) is not a fraction. We don't actually know how much further back a lot of that knowledge goes.
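For reference, the Euclidean algorithm is short enough to write down in full; a modern Python rendering of the idea (my own illustration) is just:

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: the gcd is unchanged when the larger
    # argument is replaced by its remainder modulo the smaller.
    while b:
        a, b = b, a % b
    return a

assert gcd(252, 105) == 21  # 252 = 2*2*3*3*7, 105 = 3*5*7
```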
On the other side he had obviously encountered cryptography, and knew that a whole lot of the necessary number theory dates back to Gauss, 200 years ago. https://en.wikipedia.org/wiki/Disquisitiones_Arithmeticae is the origin of concepts like modular arithmetic, quadratic residues, and so on. But he was not familiar with the ancient history predating that, or else he could not have thought that the study of primes only goes back 200 years!
He could have avoided the problem on his side by Googling what he was about to say before posting something with glaring and obvious errors. Very few of us are so careful.
You could have avoided the problem on your side by giving him the benefit of the doubt and assuming that he's probably not a complete idiot, then trying to figure out what he might have meant. You might or might not have figured out "cryptography", but you could have at least made your post in the form of a much more pleasant question. However that is fairly rare to find, and doubly so online.
As for me, I'm just lucky enough to know both halves of the history, so could easily sort it out.
I had thought it was common knowledge that (small) prime numbers have a lot of practical uses (mental arithmetic in general, arithmetic with fractions, including vulgar fractions, gear train design, that kind of thing), but apparently I was wrong. It turns out that lots of people don't know about this. So my inference (only a complete idiot would not know this; he did not know this; therefore he was a complete idiot) was ill-founded.
And so I came out looking like some kind of ignorant, arrogant know-it-all. I really appreciate the feedback, Ben. Natanael_L, I'm sorry I was such a dick to you.
So you just failed to register the cryptography reference and then to backsolve what he really meant. If that's the worst thing you did last month, then you're a better person than I...
One paper, from May 28, cites this paper, "Wide Residual Networks", from May 23. Less than a week between them! And with good reason, too: they want to compare with the state of the art on a certain problem/dataset, and that state of the art had improved five days earlier.
Both of these papers are about very basic advances, in a sense: if you have a residual network implementation lying around (and there are plenty of open-source ones), it's trivial to implement the improvements they propose. So it's not as crazy as it seems to cite a 5-day-old paper.
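To illustrate the scale of the change: a basic pre-activation residual block is about a dozen lines in PyTorch, and the "wide" variant mostly means picking a large channel count and putting dropout inside the block. This is my own simplified sketch, not the authors' code; the name WideBlock and the channel count 160 are made up for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WideBlock(nn.Module):
    """Pre-activation residual block with dropout, in the spirit of
    Wide Residual Networks. "Widening" is just a large channel count;
    everything else is the standard residual recipe."""

    def __init__(self, channels: int, dropout: float = 0.3):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.drop = nn.Dropout2d(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.drop(out)  # dropout between the convs
        out = self.conv2(F.relu(self.bn2(out)))
        return x + out  # identity shortcut: the "residual" part

block = WideBlock(channels=160)         # "wide" = many channels
y = block(torch.randn(8, 160, 32, 32))  # output shape matches input
```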
I would argue that this is a dangerous claim, because an algorithm working in practice is different from a proof that it always works, and conflating the two can lead to serious issues. I would say that this is one of the big reasons peer review exists!
Potentially you could have a section in a maths journal in which 'papers' (in the form of computer-readable propositions and proofs) are immediately checked for correctness by a computer. Post-publication, humans could then assess the impact/significance by voting (perhaps with propositions resolving significant conjectures configured to go straight to the top of the significance rankings).
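Proof assistants can already handle the correctness half of that. A toy Lean 4 example (purely illustrative): the theorem statement plays the role of the paper's claim, and the kernel mechanically accepts or rejects the proof term, no referee needed for correctness:

```lean
-- A machine-checkable "paper": the statement is the claim, the term
-- after := is the proof, and the checker accepts or rejects it.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```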
She can find sets in real time as the cards are being dealt, while singing along to music; it's maddening.
I once met a woman who could gather four-leaf clovers nearly instantly, at the same spot where my male friends and I had spent minutes searching.
I've seen it written in a few places (but could not find a reputable source) that women might have better peripheral vision due to food gathering in prehistoric times, but this might just be some old sexist construct (if anyone knows a good source to confirm or refute this, please tell me).
This came up at a party. Six people were playing Set, all of us from different cities, and the three who took band in high school utterly dominated the three who didn't. The reason? Field trips. The band kids had all played Set for hours on buses; the rest of us had only played it once or twice and lost interest.
Nice work :)
"Will this STUNNING mathematical breakthrough in a children's card game finally make Kim leave Kanye?"
"Subitizing is the rapid, accurate, and confident judgement of numbers performed for small numbers of items. [...]
The accuracy, speed, and confidence with which observers make judgments of the number of items are critically dependent on the number of elements to be enumerated. Judgments made for displays composed of around one to four items are rapid, accurate and confident."
I'm a male btw
Perhaps that decline in ability is just my current reluctance to get down on my hands and knees and leaf through them by hand.
> Go players activate the brain region of vision, and literally think by seeing the board state. A lot of Go study is seeing patterns and shapes... 4-point bend is life, or Ko in the corner, Crane Nest, Tiger Mouth, the Ladder... etc. etc.
> Go has probably been so hard for computers to "solve" not because Go is "harder" than Chess (it is... but I don't think that's the primary reason), but instead because human brains are innately wired to be better at Go than at Chess. The vision area of the human brain is very large, and "hacking" the vision center of the brain to make it think about Go is very effective.
I can confirm that it works, and nobody wants to play Monopoly anymore. Instead we play more fun board games.
Though you will find exceptions, of course. The current number one game on BGG is Pandemic Legacy, a purely cooperative game.
I see it as a split between "casual" and "competitive" players. Competitive players find the fun in trying to win, or at least giving it their best shot, while casual players see winning as a bonus and just want to have a good time, which may or may not involve playing the best strategies.
I also agree entirely with lmm. The best games are the ones that are fun even when you're losing. If you can't have fun while losing (e.g., Monopoly), then I don't consider it a good game.
Speaking solely of Scrabble, this happens pretty often regardless of skill level. ;-) I try to keep 3 games going at a time, but my last 5 opponents have quit within the first three turns! And I'm using a matchmaking algorithm, so they're players of approximately my skill level, too!
Granted, my wife is well-educated and I believe she is smarter than I am, but it's as if she immediately sees the solution while I see the input and have to work out the solution. I usually attribute this to my deficient color vision, since color is the attribute that usually trips me up. Alternatively, I've wondered whether I'm focusing too hard, and whether the better approach might be reacting on instinct and relying on my subconscious to solve the problem before I'm consciously aware of having parsed the input.
The simplicity of the rules, the mathematical nature of it, and the fact that adults and kids can play together make it such a great game. I went on to analyze the odds of there being no set in each round of play, and had fun doing so. It seems very difficult to find an analytical solution (and quite beyond me), but simulation was a nice project that gave some good insights into how the odds vary over the rounds played.
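For anyone who wants to try the easy half of this, a minimal Monte Carlo sketch looks something like the following (illustrative code, not the original project). It estimates the chance that a fresh 12-card deal contains no set; tracking the odds round by round, as described above, would additionally require removing found sets and re-dealing:

```python
import itertools
import random

# A Set card is a 4-tuple over {0, 1, 2}: one value per attribute
# (number, shading, color, shape). The full deck has 3^4 = 81 cards.
DECK = list(itertools.product(range(3), repeat=4))

def is_set(a, b, c):
    # Three cards form a set iff, for every attribute, the values are
    # all equal or all different -- equivalently, they sum to 0 mod 3.
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

def has_set(cards):
    return any(is_set(*t) for t in itertools.combinations(cards, 3))

def no_set_probability(n_cards=12, trials=20_000, seed=1):
    rng = random.Random(seed)
    misses = sum(not has_set(rng.sample(DECK, n_cards))
                 for _ in range(trials))
    return misses / trials

print(no_set_probability())  # roughly 0.03: ~3% of 12-card deals
```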
That post mentions the Fano Plane, which, incidentally, I first read about in the book How Not to Be Wrong by Jordan Ellenberg, a mathematician quoted in the article about Set that started this thread. In the book, he uses the Fano Plane to explain how to pick numbers for a specific kind of lottery.
The title and the article seem to be essentially accurate but are written in a breathless "we'll make this math stuff exciting, darn it" tone that some might like but which, as a math person, I find somewhat grating.
I think the science/math writing of twenty or forty years ago was still reasonably effective, and with that style we could get by with a headline such as "An impressive result in combinatorics that uses the 'polynomial method'".
Edit: I appreciate that quantamagazine.org is doing a bunch of math articles. But we have to admit it's injecting about every bit of "this is simply amaaaazing" rhetoric it can muster, which detracts somewhat from my enjoyment of it, at least, and actually makes it harder to figure out what's happening (from an MA-level perspective).
I guess there's some risk that many of these stories fall into similar patterns (something was hard, mathematicians worked on it for a long time and didn't expect it to be solved soon, now it's been solved and people are impressed and see various applications), but it's nice to see the details and hear from people in the field in their own words.
I feel like Klarreich tends to give more details than Gardner did when writing about current research; maybe that partly has to do with the web format, since she can include links to the actual papers and supplementary materials. It's also a nice counterpoint to the enthusiasm her articles convey that the applications and consequences are specific; it's not like journalism that says "maybe now we'll get a pony/interstellar spaceflight/spooky quantum communications/faster computers!" without really showing how the discovery would enable that.
The fact is that mathematicians are excited by unexpected progress in math research, so hopefully other people can be too! :-)
Clickbait headline: "This underdog just won a major political office!"
Regular headline: "Truman defeats Dewey"
2, 4, 9, 20, 45, 112
"Maximum numbers of cards that would have no SET in an n-attribute version of the SET card game"
The OEIS links appear to have been updated to reflect the recent breakthrough of Ellenberg and Gijswijt, including their unified paper arXiv:1605.09223 from May 30.
> a different design with four attributes — color (which can be red, purple or green),
You can play a variant of Set in a single color (which the rules suggest is easier -- but it's also more feasible for color-blind players).
A physical version of the game is available; I suggest adding it to your collection.