Monumental (if correct) advance in number theory posted to ArXiv by Yitang Zhang
1303 points by gavagai691 on Nov 7, 2022 | 417 comments
Yitang Zhang, the mathematician behind the 2013 breakthrough on bounded gaps in primes, posted to the arxiv today a result which (if correct) comes close to proving the nonexistence of Landau--Siegel zeros: https://arxiv.org/abs/2211.02515.

To give a sense of the scale of this claim: If correct, Zhang's work is the most significant progress towards the Generalized Riemann Hypothesis in a century. Moreover, I think this result would not only be a more significant advance than Zhang's previous breakthrough, but also constitute a larger leap for number theory than Wiles' 1994 proof of Fermat's Last Theorem (which was, in my opinion, the greatest single achievement by an individual mathematician in the 20th century).

Some discussion / explanation of Siegel zeros and Zhang's claim can be found here:

https://old.reddit.com/r/math/comments/y93a86/eliundergradua...

https://mathoverflow.net/questions/433949/consequences-resul...

An account of Zhang's remarkable story (and his previous breakthrough) can be found here. Famously, prior to his breakthrough, he worked at Subway and lived in his car:

https://www.newyorker.com/magazine/2015/02/02/pursuit-beauty




Two additional notes:

1. Zhang posted an attempt at solving this problem in 2007 that he later more or less admitted was flawed: https://mathoverflow.net/questions/131221/yitang-zhangs-2007.... But having spoken with mathematicians who are intimately familiar with Zhang's previous work, I think there is good reason to be optimistic nevertheless. First, the idea behind Zhang's proof is similar to the zero-repulsion ideas appearing in known results about Siegel zeros, and is thus reasonable. Second, Zhang seems to have matured late, and unlike the flawed 2007 paper, his 2013 paper on bounded gaps in primes is meticulously written. He came a long way between those two papers, and he may have come even further since then.

2. Zhang is 67 years old. If the paper is correct, then Zhang constitutes a strong counterexample to G.H. Hardy's famous claim that "mathematics is a young man's game," and nobody alive today could say, as Hardy did, that "I do not know an instance of a major mathematical advance initiated by a man past fifty."


It should be noted that Zhang was a math prodigy when he was young, around 13 years old. However, because of the Cultural Revolution in China, school education was halted for a decade; his parents were purged, and he was sent down to the countryside, so he could not study at school but was forced to work in the fields and factories as "re-education." It was only a decade later, after universities re-opened following the Cultural Revolution, that he managed to get into university; by then he was already 23 when he started his bachelor's degree.

Note that universities could accept people who had not attended school if they passed the university entrance exams, because so many people had been unable to attend school while the schools were closed and the teachers purged during the Cultural Revolution.

I would say he "matured" later mainly because he did not have the right opportunities: he could not go to high school, and after his university graduation he had no good opportunities either, because many good professors had been purged during the Cultural Revolution. So he left for the US for a better life.

Source: https://www.newyorker.com/magazine/2015/02/02/pursuit-beauty

And I quote from the above source which is from a 2015 New Yorker interview with Zhang:

'I asked Zhang, “Are you very smart?” and he said, “Maybe, a little.” He was born in Shanghai in 1955. His mother was a secretary in a government office, and his father was a college professor...As a small boy, he began “trying to know everything in mathematics,” he said. “I became very thirsty for math.”...The [Cultural] revolution had closed the schools. He spent most of his time reading math books that he ordered from a bookstore for less than a dollar.'

As well:

'...when he was fifteen he was sent with his mother to the countryside...where they grew vegetables. His father was sent to a farm in another part of the country. If Zhang was seen reading books on the farm, he was told to stop...After a few years, he returned to Beijing, where he got a job in a factory making locks. He began studying to take the entrance exam for Peking University, China’s most respected school: “I spent several months to learn all the high-school physics and chemistry, and several to learn history. It was a little hurried.” He was admitted when he was twenty-three.'


The professional math world is full of smart but delusionally ambitious people who do things like focus all their energy on the Jacobian conjecture and the Riemann hypothesis. Most crash out never finishing their doctorates (because these problems are too hard and working on them does not provide what it takes to survive professionally). Zhang is an example of such a person. What is very unusual about him is not that he continued to work on such things anyway, rather that he eventually found some measure of success. What I infer from his story is that he is tremendously stubborn and genuinely oblivious to ordinary material feedback. Evidently he has some talent too, but that's not the unusual part of his story.

Said another way - I've known quite a few people like him to a point - with the difference that none of the others ever produced good mathematics, much less solved a major problem.


He gave a talk three days ago explaining his result, in which he remarked: “When the paper was posted online just a few days ago, many people who don’t focus on mathematics didn’t understand it, thinking that it was the Landau-Siegel zeros conjecture solved, and some even thought that it proved the Riemann Hypothesis is wrong. Actually, I don’t have this ability. I only partially solve the Riemann hypothesis within a certain range. If I say I overturned Riemann Hypothesis, few people would believe it.”[1]

Maybe he loves what he's doing, and that's the root of being stubborn and "genuinely oblivious to ordinary material feedback". Although love or passion can be overrated, or too general to describe his attitude toward problem-solving, I think people can't be merely stubborn; there's a drive that holds them to a higher standard.

[1]https://pandaily.com/mathematician-yitang-zhang-confirms-par...


> people who do things like focus all their energy on the Jacobian conjecture ... Most crash out never finishing their doctorates (because these problems are too hard and working on them does not provide what it takes to survive professionally).

In Zhang's case, I believe his doctoral thesis actually "proved" the Jacobian conjecture... but the thesis relied on an incorrect result from his advisor's own paper (presumably at the advisor's guidance).


I think Zhang's previous result was good enough to rebuff Hardy's claims.

Actually, I think the reason math is more or less a young people's game is that once someone becomes super successful and famous, it's psychologically difficult to retain the previous mental state and push out similar results.




Just curious: how many examples are there of a FIRST major discovery after, say, 40? I think I spotted a few.


This was an interesting one - a proof by a 67 year old retiree that nobody in the field knew about for 2-3 years after because they didn't read their email.

https://www.quantamagazine.org/20170328-statistician-proves-...


Marjorie Rice. Housewife.

https://www.quantamagazine.org/marjorie-rices-secret-pentago...

Her discoveries were first mentioned in a 1988 magazine article, when she was 65.


The discovery of these pentagon tilings is an interesting achievement and evidence of an exceptional mind, but it's very far from a major advance in mathematics.


"I think Zhang's previous result was good enough to rebuff Hardy's claims."

I agree.


age is just a number; some people may have a prejudice against larger numbers, and that, to me, seems irrational


Might come off as political, but Americans need to throw off the yoke of British intellectual affectations, especially pre-WWI ones.


I think you're conflating Hardy's thoughts as a mathematician, with the politics of his country of origin. The two, as far as I know, weren't really connected


Huh? Could you expand on this?

Disclosure: a Brit who does not see Americans oppressed by compatriot affectations.


The structure of the modern education system is what Britain designed to run an effective empire before modern communication systems existed: every student made to operate independently as an arm of the British empire with standardized knowledge and skills, like cogs in an enormous, beautiful machine.


Hardy may have a point, on average. And it’s probably because of the responsibilities factor, i.e. older people have kids, families, departments to run, etc., that take them out of the game. If I did math with this level of intensity I would not have time for both without a partner willing to make quite some sacrifices.


I remember from articles about his earlier primes work that his wife lived in San Jose i.e. on the other side of the country from him. They didn't really go into it but neither of them seemed upset by this.

That house price appreciation must just be that good.

(Also, it was reported he "worked at a Subway" but IIRC he was actually the accountant for a friend's Subway franchise.)


He did work behind the counter sometimes.

Source: https://yewtu.be/watch?v=88Q2v6FTSBI 49:03 - 49:56


I disagree that math has to be learned when you are young.

Intelligent people will end up learning something profound when they are young. If they find something else interesting enough at a later stage in their life, they can transfer that learning.

Leibniz did not start his training in math until his mid-twenties:

> Thus Leibniz went to Paris in 1672. Soon after arriving, he met Dutch physicist and mathematician Christiaan Huygens and realised that his own knowledge of mathematics and physics was patchy. With Huygens as his mentor, he began a program of self-study that soon pushed him to making major contributions to both subjects, including discovering his version of the differential and integral calculus.


That rejoinder has not been true for a while now. Laszlo Babai's graph isomorphism work is another recent example.


Regarding #2, I think Andrew Wiles already disproved that conjecture, solving Fermat at 41 or thereabouts, but Zhang is certainly another nail in its coffin.


> 2. Zhang is 67 years old. If the paper is correct, then Zhang constitutes a strong counterexample to G.H. Hardy's famous claims that "mathematics is a young man's game" and nobody alive today could say, as Hardy did, that "I do not know an instance of a major mathematical advance initiated by a man past

I think the actual truth is more like, "big breakthroughs mainly happen early in one's career". Most mathematicians start their careers young, therefore they publish breakthroughs while young. Zhang started quite late so his innovations are later in his life, but still early in his career.

And it makes sense, everyone has a slightly unique way of thinking, and long-standing problems will only yield to unique thinking. Eventually someone will come along that has just that right type of unique thought process that will find a hole to solve such a problem.


True. I am wondering whether this trend will push the Fields Medal (https://en.wikipedia.org/wiki/Fields_Medal), the most prestigious award in mathematics, to change its requirement that recipients be 40 or younger.


They didn't break the rule for Andrew Wiles after he proved Fermat's Last Theorem, which was arguably the most notorious unsolved problem in all of mathematics.

So I expect them to stick to their rule.


It's also worth noting that average life expectancy has increased by roughly 20 years since G.H. Hardy first published that claim, so it would be extra worrisome if we didn't have any counterexamples.


I'd like to think that there's much more socioeconomic diversity among present day scientists, compared to those back in the day of Hardy.

Not to mention that up until the 1910s, life expectancy was under 50.

Being able to work in academia as a tenured professor/researcher probably resulted in a drastically different life expectancy, compared to being forced to do something else. I think it's safe to say that Zhang would have been forced to live as a peasant, had he been born 120 years ago.


It's also unclear how much of Zhang's recent work is recent ideas, vs ideas he had decades ago but only were made presentable recently.


Raoul Bott is another good example of someone switching from (electrical engineering I believe?) to math in his 40s.


Think Knuth and a few others are good exceptions to the rule.


I can't resist saying one last thing about Siegel zeros: number theorists REALLY would like for this result to be correct because the possibility of Siegel zeros is unbelievably annoying. I mean mathematicians are supposed to enjoy challenges / difficulties, but Siegel zeros are just so recurrently irritating. The possibility of Siegel zeros means that in so many theorems you want to write down, you have to write caveats like "unless a Siegel zero exists," or split into two cases based on if Siegel zeros exist or don't exist, etc.

But here is the worst (or "most mysterious," depending on your mood..) thing about Siegel zeros. Our best result about Siegel zeros (excluding for present discussion Zhang's work), namely Siegel's theorem, is ineffective. That is, it says "there exists some constant C > 0 such that..." but it can tell you nothing about that constant beyond that it is positive and finite (we say that the constant is "not effectively computable from the proof").*
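For reference, here is the standard statement of Siegel's theorem (in LaTeX notation; this is the textbook form, so treat it as a sketch rather than Zhang's formulation):

  For every $\varepsilon > 0$ there exists a constant $C(\varepsilon) > 0$ such that,
  for every real primitive Dirichlet character $\chi$ modulo $q$,
  $$ L(\sigma, \chi) \neq 0 \quad \text{for} \quad \sigma > 1 - \frac{C(\varepsilon)}{q^{\varepsilon}}. $$

The catch is that no known proof yields a computable value of $C(\varepsilon)$ once $\varepsilon$ is small (effective constants are known only for exponents around $1/2$ and larger).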

So then, if you try to use Siegel's theorem to prove things about primes, this ineffectivity trickles down (think "fruit of the poisoned tree"). For example, standard texts on analytic number theory include a proof of the following theorem: any sufficiently large odd integer is the sum of three primes. However, the proof in most standard texts fundamentally cannot tell you what the threshold for "sufficiently large" is, because the proof uses Siegel's theorem! In this particular case, it turns out that one can avoid Siegel's theorem, and in fact the statement "Any odd integer larger than five is the sum of three primes" is now known https://en.wikipedia.org/wiki/Goldbach%27s_weak_conjecture. But it is certainly not always possible to avoid Siegel's theorem, and Zhang's result would make so many theorems which right now involve ineffectively computable constants effective.
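As an aside, that now-effective statement is easy to spot-check by brute force over a small range; a toy Python sketch (assuming the sympy library for the prime sieve):

  from sympy import primerange

  N = 10_000
  primes = list(primerange(2, N))
  prime_set = set(primes)

  def three_primes(n):
      # search for p + q + r = n with p <= q <= r all prime
      for p in primes:
          if 3 * p > n:
              break
          for q in primes:
              if q < p:
                  continue
              if p + 2 * q > n:
                  break
              if (n - p - q) in prime_set:
                  return (p, q, n - p - q)
      return None

  assert all(three_primes(n) is not None for n in range(7, N, 2))
  print("every odd n with 7 <= n <", N, "is a sum of three primes")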

*Why is the constant not effectively computable? Because the proof proceeds basically like this. First: assume the Generalized Riemann Hypothesis. Then the result is trivial, Siegel zeros are exceptions to GRH and don't occur if GRH is true. Next, assume GRH is false. Take a "minimal" counterexample to GRH, and use it to "repel" or "exclude" other possible counterexamples.


>I can't resist saying one last thing

Please, keep going. This is good reading.


In that case, you might find interesting these two short explanations I posted to Reddit about Siegel zeros (the second is a continuation of the first) :)

https://old.reddit.com/r/math/comments/y93a86/eliundergradua...

https://old.reddit.com/r/math/comments/ymlacu/professor_yita...

The class number formula, mentioned in the second comment, is one of the craziest "bridge results" in all of math (meaning a result that connects two seemingly disparate areas). The class number formula connects the values of Dirichlet L-functions at s = 1 (Dirichlet L-functions are complex functions related to the distribution of primes in arithmetic progressions), to class numbers of number fields. (Remember that the value of Dirichlet L-functions at 1 is exactly what the question of Siegel zeros concerns.)

To give a crash course on what some of those words mean:

1. A number field is what you get when you take the rational numbers, and you throw in the roots of some polynomials to get a bigger object where you can still do all of the usual arithmetic operations, in the same way that we throw in the roots of x^2 + 1 (namely, i, -i) into the real numbers to get the complex numbers.

2. The ring of integers is the right notion of the "integers" in that number field. (That is, rational numbers : integers = number field : ring of integers in that number field.)

3. The class number of a number field tells you how close you are to having unique factorization into primes holding in the ring of integers of that number field*. If the class number is 1, then you have unique factorization; if the class number is 1000, then you are very far from it.

What this connection means is that you can prove things about regular old primes in arithmetic progressions (in the integers) by proving things about these exotic / abstract primes (in rings of integers of number fields), and vice-versa.
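To make the bridge concrete: in the simplest (imaginary quadratic) case, the class number formula reads as follows (LaTeX notation; this is the standard form, quoted from memory, so treat the exact constants as indicative):

  For an imaginary quadratic field $K = \mathbb{Q}(\sqrt{D})$ of discriminant $D < 0$,
  $$ L(1, \chi_D) = \frac{2 \pi \, h(K)}{w_K \sqrt{|D|}}, $$
  where $\chi_D$ is the quadratic character attached to $K$, $h(K)$ is its class
  number, and $w_K$ is the number of roots of unity in $K$ ($w_K = 2$ unless
  $D = -3$ or $D = -4$).

So a lower bound on $L(1, \chi_D)$ (i.e., pushing Siegel zeros away from $s = 1$) translates directly into a lower bound on the class number, and vice versa.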

Anyway, as a result of the class number formula, there are a lot of results about class numbers that are ineffective because of Siegel's theorem too, e.g., https://en.wikipedia.org/wiki/Brauer%E2%80%93Siegel_theorem. Zhang's result (if correct) would make all of those effective, too.

*While in the integers, it is true that every number factors uniquely into a product of primes, this is unfortunately not true in more general contexts. In fact, algebraic number theory basically began with a mistaken proof of Fermat's Last Theorem, which was mistaken precisely because it assumed that unique factorization always holds in this more general context, which is not true. (If unique factorization did always hold, then that proof of FLT would have been correct.)
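The classic example of this failure (in LaTeX notation):

  $$ 6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}) \quad \text{in } \mathbb{Z}[\sqrt{-5}], $$

where all four factors are irreducible and no two differ by a unit, so 6 has two genuinely different factorizations. Correspondingly, $\mathbb{Q}(\sqrt{-5})$ has class number 2 rather than 1.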


I'm not a mathematician, but the story of Yitang Zhang desperately makes me want this paper to be correct.

> Prior to getting back to academia, he worked for several years as an accountant and a delivery worker for a New York City restaurant. He also worked in a motel in Kentucky and in a Subway sandwich shop. A profile published in the Quanta Magazine reports that Zhang used to live in his car during the initial job-hunting days.

https://en.wikipedia.org/wiki/Yitang_Zhang


While not common, I know of other Chinese diaspora in North America with similar stories.

Many graduates from post Cultural Revolution China left the country in 90's and found the language barrier (and accompanying discrimination) too high to overcome. The man delivering your fried rice might have a PhD.


An unfortunate parallel: the best physics professor (unfortunately, just an adjunct) I ever had was a Russian immigrant. He quit teaching to go run a blini restaurant because it paid way more :(


My calculus professor from university in the late '90s had earlier on won a Lottery Visa to the US. As I understood it he took advantage of it, only to find out that while there he could only find very menial jobs.

He came back to Romania and to Bucharest, which was really good for me and for my then colleagues because he was an excellent professor.


This is so true and still the case. I've seen so many highly skilled immigrants passed over for politics and image reasons... Academia is ruthless but for the wrong reasons.


I had so many Russian mathematics professors. They dominate the field, for obvious historical reasons.


Why is this :( though? The market is working and filling a market opportunity. Maybe he likes running his own business? Unless he was being discriminated against for being a Russian immigrant and treated unfairly?


Because the world would be better off having people of that calibre making progress on outstanding problems in maths than having one more mediocre restaurant in it


ok, you're insulting the guy's restaurant and the market disagrees with you, apparently


Contrary to what you may believe, the market has failure modes. Just because something is difficult to monetise doesn't mean that it's not valuable. Fundamental research is one such thing. It can open up vast realms of possibility for all of humanity, but in a way that's difficult, if not impossible, to privatise. And the benefits may take years, decades or even centuries to pay off. Thus there is very little incentive for any market actors to invest in them.

But if Newton had decided to open an inn instead of work on the Principia the world would have been far worse off for it. Possibly centuries behind where we are now.

I will apologise for calling the restaurant mediocre. I have no idea about the quality of the food, but if you think that's what this hinges on then you've missed the point. That as good a restaurant as it may be, it's not worth losing a mathematician of his calibre over.


I get it, but I do get some satisfaction out of observing when justifications of perceived value meet market realities. I don't think it's bad, I think it's interesting, and the person made the choice that was theirs to make. That's economic freedom.


I'm not sure what you're deriving satisfaction out of. That the market prevented someone from being able to earn a living and fulfilling their potential at the same time? I'm genuinely confused


I don't believe there's enough data in the original comment to conclude that the market prevented them from earning a living in physics. It said they could make more money running a restaurant. If it's the case that they could not earn a living at all, and it's due to discrimination (which I asked about), then I agree it's not good. But if it's the case that they could make a living in physics but chose instead to open a restaurant because it paid better, then yes, I think it's good.


It's late, but he made poverty wages as an adjunct.


Based on the provided information, I judge it more likely that their passion was maths and physics but they were prevented from pursuing it by certain economic realities, than that they had more interest in running a restaurant than in engaging in research. You may disagree.


Eastern Europeans can relate too. Lots of people with PhDs, from finance to physics and philosophy, were or at least started like that in Western Europe too.


> worked in a Subway sandwich shop

FTA:

In Kentucky, he became involved with a group interested in Chinese democracy. Its slogan was “Freedom, Democracy, Rule of Law, and Pluralism.” A member of the group, a chemist in a lab, opened a Subway franchise as a means of raising money. “Since Tom was a genius at numbers,” another member of the group told me, “he was invited to help him.” Zhang kept the books.

Quite a different feel to that characterization.


Douglas Prasher who cloned the green fluorescent protein became a courtesy cab driver at a dealership, until he was brought back to the lab by Roger Tsien. https://www.discovermagazine.com/mind/how-bad-luck-and-bad-n...


Someone needs to make a movie about his life, or at least a documentary.


There is a documentary: Counting from Infinity - https://www.youtube.com/watch?v=sBADiHU_0Wg


There is a documentary, but I can't attest to its quality (I haven't watched it yet): http://www.zalafilms.com/films/countingabout.html.


Hmm... thanks, it looks quite good.

But why is streaming rental for 24 hours? Why can't we do rentals for two weeks for videos? Is there a good reason to make it so difficult to stream? I don't want to watch it a million times. I just want to make it through once, but it takes me several sittings typically to finish a movie.


I was thinking about watching this documentary two years ago. That sounds like the reason that I decided not to.



> But why is streaming rental for 24 hours?

Money / capitalism.


I hope to see a movie about Perelman, too.


Sadly, they would probably make it a schmaltzy Oscar-grab of a movie, like they did with Ramanujan's story.


The Eternal Triangle, With Brigitte Bardot playing part of hypotenuse.


Good luck finding a movie about Isaac Newton or Einstein, or literally anyone whose value to the world is more than pretending to be someone else or lying a lot, let alone a very interesting mathematician no one recognizes. (Ok I know there’s a few movies about folks like Turing and Nash, but it’s pretty slim pickings).


Einstein is probably a terrible example here because in his day he was regarded as a pop culture icon so there’s plenty about him in film [1]. That’s not even considering that he’s had plenty of excellent biographies written about him. Agora with Rachel Weisz is an excellent one about Hypatia.

Could we do better as a society? Maybe. To do that, though, you'd have to bring these stories into film classrooms and get students inspired. But first you have to make the stories inspiring, and a lot of them aren't thaaat inspiring except for the advancements they provided to the field, and that's harder to convey. Maybe Leibniz and Newton would be a good rivalry on the screen.

[1] https://www.themoviedb.org/keyword/8689-albert-einstein/movi...


To my knowledge there’s only one dramatization of his biography which is this:

https://www.dogomovies.com/einstein-and-eddington/movie-revi...

My point is even Einstein doesn’t get the same treatment as, say, a politician or actor does as far as coverage goes. There’s more out there about Elvis than Newton.


Newton as a human being was insanely interesting. Einstein as well. I would pick Hooke and Newton's relationship before Leibniz, fwiw.


This whole theme is sort of incompatible with movies. Most people like to relate to the characters in the plot. The "genius guy" doing something we can't understand has very little appeal as a story.

Same thing goes for mathematics that is too "deep". Most people could not care less about prime numbers. Yes, they drive important cryptographic procedures. But we care about cryptography. We care that the message gets to its destination "safely". Whether that's done using prime numbers, imaginary numbers, geometric numbers, or fantasy numbers does not matter.


See, I disagree. They tend to live very interesting lives, not simply because of their achievements; I suspect their achievements and how interesting their lives are are inextricably intertwined. They tend not to fit convention very well and live at odds with society, in a way not too dissimilar to, say, a Bob Dylan or Jim Morrison, but without the music and poetry. You're right, of course, that folks like to watch movies about people they are aware of and whose contributions they understand. But I'd also point out that society overall is a lot better educated than in the past; more people than ever would find scientists, mathematicians, and engineers easy to identify with and would understand their contributions, and those people also control an incredible amount of the available disposable income.

I think the real issue is the producers and directors and business folks in the film industry are not in that camp. They’re more likely to make a biography of Robin Williams than Paul Dirac, and they decide what gets made.


There have been many, many films made about Srinivasa Ramanujan. I'd particularly recommend the 2015 one, The Man Who Knew Infinity, I found it super charming.


There's an excellent series in which Newton plays a large part, called The Baroque Cycle. I'd love a TV adaptation of it, although you'd have to gut much of academic stuff in it and just focus on the (excellent) plot


Yes, Neal Stephenson is such a versatile writer. The Baroque Cycle is definitely recommended, although not his best work.


I've seen a few biopics or dramatizations of famous people - notably, The Imitation Game about Alan Turing, which was all right, and The Theory of Everything, which was supposedly about Stephen Hawking's work but was more about his relationships - it was based on the memoirs of his ex-wife. But both were dramatizations from secondhand information at best; a biopic of someone from further back, like Einstein, will be even trickier.

I mean would a biopic about Einstein touch on his many relationships (including his cousin) and the drama around that?


Also, biopics are terrible. I can't think of a single one that is worth it, especially if you consider that for many of the subjects, you could just watch a documentary on the person anyway, often with real footage and interviews with them. Biopics are usually just cash grabs and weird Hollywood flexes about mimicking someone else. The fact that they do so well at the awards ceremonies speaks to this.


I find them useful for teaching my 8 year old something about the person behind the name. After watching Einstein and Eddington (a pretty OK movie) she has a better appreciation for who he is beyond a name. Obviously you learn almost nothing about his research, and it's a very tiny slice of Einstein's extraordinarily interesting life.


Is that better than just watching a documentary?


And the movies about figures like these that do exist seem to have roughly the same story, relying heavily on the "socially inept genius" trope.


N Is a Number: A Portrait of Paul Erdős is a decent enough documentary about a specific mathematician.

Colors of Math breezes across six contemporary mathematicians.


There is a TV show about Einstein; well, only the first season is about him.


Which one?


I can recommend "The Strangest Man: The Hidden Life of Paul Dirac".


It’s only in book form, no?

There are tons of excellent books on mathematicians, engineers, and scientists. Just virtually no visual media.


Why does everything have to be a movie?


It's one of the few working ways to get modern societies to learn something, but a catchy musical can work too.


you don't learn anything useful in 2h of heavily romanticized Hollywood fiction


I am updating my assumptions about Subway Sandwich Artists. When they seem unfocused on the artwork at hand it may be because they are thinking more important thoughts.


As an older person currently working on a PhD, this guy was and is something of a hero to me. He has an interesting life story. He was very into math at an early age, so he's different from people like me who got interested in it later in life, but he's also different in that his family was sent down to the countryside in China. I remember reading a lot about him a few years ago and relating to some of the professional difficulties he had and also to some of his ways of thinking and approach to doing math.

I never expected to see his name in a context like this again. I'm glad he's still being himself and working hard on what he loves.


He and his parents were victims of Mao's Cultural Revolution (like millions of others), and he was forced into agricultural labour instead of attending high school. An amazing Cinderella story. A real academic hero.


> like millions of others

Including Xi Jinping.


Unfortunately, Xi is repeating past mistakes.


At least Xi is much more assertive (not aggressive) in safeguarding his country's territorial integrity, whereas Mao didn't give a hoot and let his country's land be grabbed by its neighbors left and right.


Would love to hear your story about getting a PhD as an older person. Have you written anything about it that's public? If not, would it be possible to contact you? Thanks!


You've got to read this article by his sister: https://zhishifenzi-com.translate.goog/depth/character/480.h...


Thanks for that link. Wish I could read Mandarin Chinese so I could read the original too.

Clearly a flawed person. Not sure why he blew off his family so hardcore for so many years. Too bad to hear he was arrogant as a kid too, although a lot of smart kids are. Some of them turn out to be Peter Thiel, luckily this guy just wanted to work on math.

Anyway, I wish he had been better to his parents. On the other hand, he needed that big breakthrough to save his life as a mathematician: until that point, he was just an adjunct lecturer with no stability at all. Life is weird and complicated and we don't have full control of the choices we make, some choices can seem really hard to some people. I'm not excusing his behavior toward his family but I would be interested to know why he made that choice.


What the article doesn't say is why Yitang would not come back to visit his family. Not sure if it was his own decision (which makes sense, as he had lost connection with his family for many years) or some political reason (China not allowing him to go home).


Thanks for the link. It was indeed a very touching story. Considering his involvement with the pro-democracy group, it is most likely he wasn't allowed to go back to China. Of course this cannot be mentioned in an article published in China.


Yitang never publicly mentioned this, but he was likely granted a green card under the Chinese Student Protection Act of 1992. He was also involved with groups that supported the democratic movement in China. This explains why he was unable to visit when his father passed away, and the obstacles he faced when he attempted to apply for a Chinese visa after becoming a US citizen. He simply would not have been allowed to enter China had he not made the twin prime breakthrough. It was quite fortunate that in the end he was able to visit his mom just a few months before she passed away.


Wow, from the 2015 article:

[A journal reviewer of his famous paper says]: "You should be careful. This guy posted a paper once, and it was wrong. He never published it, but he didn't take it down, either." The reader meant a paper that Zhang posted on the Web site arxiv.org, where mathematicians often post results before submitting them to a journal, in order to have them seen quickly. Zhang posted a paper in 2007 that fell short of a proof. It concerned another famous problem, the Landau-Siegel zeros conjecture, and he left it up because he hopes to correct it.

Looks like he might have lived up to that!


If the proof is right, Zhang is in contention for greatest living mathematician with seven papers total, and on the basis of two of them (and should win all the major prizes he has not won yet and is still eligible for: Abel, Wolf, etc.). Would truly live up to Gauss' motto: "pauca, sed matura!"


Even more incredible is that his own advisor refused to write him letters of recommendation upon graduation [1]

  After graduation, Zhang had trouble finding an academic position. In a 2013 interview with Nautilus magazine, Zhang said he did not get a job after graduation. "During that period it was difficult to find a job in academics. That was a job market problem. Also, my advisor [Tzuong-Tsieng Moh] did not write me letters of recommendation." ... Moh claimed that Zhang never came back to him requesting recommendation letters. In a detailed profile published in The New Yorker magazine in February 2015, Alec Wilkinson wrote Zhang "parted unhappily" with Moh, and that Zhang "left Purdue without Moh's support, and, having published no papers, was unable to find an academic job".

  In 2018, responding to reports of his treatment of Zhang, Moh posted an update on his website. Moh wrote that Zhang "failed miserably" in proving Jacobian conjecture, "never published any paper on algebraic geometry" after leaving Purdue, and "wasted 7 years of his [Zhang's] own life and my [Moh's] time".
[1] https://en.wikipedia.org/wiki/Yitang_Zhang


Yes, if you want to see something incredible (in both the literal sense and the usual sense), read https://www.math.purdue.edu/~ttm/ZhangYt.pdf (by Moh).


> For some 10 years, I had recommended 100 mainland Chinese students to the department and all accepted by the department. I am always indebt to the trust of my judgements by the department. Only very few of them misbehaved, bit the hands which fed them, none of them intended to murder their parents/friends, almost all of them performed well and became well-liked.

No murderers, great success!


It's a reference to Brendt Christensen.


I’ve looked it up and wished I had not.

( Interestingly, every summary of the case in media and Wiki stops listing the evidence against him at his secretly taped confession to a girlfriend - confession that included some things absolutely not confirmed. The most convincing evidence to my eyes is the victim’s DNA in the blood found under the carpet and elsewhere there it has survived cleaning efforts. This is not mentioned anywhere except in the court recordings: https://news.wttw.com/sites/default/files/article/file-attac... . Kinda sad what is convincing these days ).


Probably a reference to Yongfei Ci actually (https://dailyillini.com/news-stories/2014/06/20/yongfei-ci-s...)

UIUC is not having a great track record wrt grad student murderers o_O


I doubt it, as the non-bolded portions were written in 2013, 4 years prior to the murder of Yingying Zhang.


It's tragic how the relationship dynamics between Moh and Zhang almost resulted in the total write-off of Zhang and a loss of genius/talent, with nothing left but bitterness and animosity.

I'm glad Zhang was able to find success despite his initial setbacks and from what it seems like in his recent interviews also let go of his bitterness/resentment (holding something like that in your heart can only ever hold you back). And though the power dynamics here were clearly unequal, I don't think it's fair to blame Moh entirely for what happened at Purdue.

I think it's important to remember Moh is also human with all the complexity that comes along with that. In reading his published statement, even though there is no direct apology to Zhang, I sense that he does genuinely regret how things turned out.

Perhaps one day, Zhang and Moh will be able to meet again and resolve/rekindle their relationship.


In the earlier version I saw (I guess it consists of the non-bold parts), he didn't mention as much negative stuff about Zhang. His claim that Zhang "want to be famous all the time" I regard with suspicion.


Yeah I started reading that from the Wiki citation. Yikes. Academia is brutal.


Man, it's so weird and pathetic

All of these guys are probably a hundred times smarter than me or most of the other code monkeys working for the FANGMAN, but they're all squabbling over little 5-figure scraps of grant money.


https://en.wikipedia.org/wiki/Sayre%27s_law

  In any dispute the intensity of feeling is inversely proportional to the value of the issues at stake


I think it’s their egos they’re squabbling over, not grant money.


Zhang evidently doesn't care about money at all. The same is true for many professional mathematicians. Caring about money makes it difficult (not impossible) to do anything deep.


Wow thanks so much. That is indeed "incredible" on many levels.


I think the word "eligible" is problematic, since it is actually alluding to age discrimination, which is what the Fields Medal practices and which should not be allowed.


One can't really take papers down from arXiv anyway.


Well, you can update them with a disclaimer that points out known issues.


I just read an article by Yitang Zhang's little sister (in Chinese). It touched my soul. A must-read (translated from Chinese using Google translation): https://zhishifenzi-com.translate.goog/depth/character/480.h...

Part of the translation is not so perfect and you have to use your best guess sometimes. The Chinese version is a masterpiece.

In addition, here is an article from Yitang Zhang's PhD advisor. Not so nice: https://www.math.purdue.edu/~ttm/ZhangYt.pdf


Could you provide a link to the original (Chinese) article?



谢谢! (Thank you!)


Regarding Zhang's age: most Fields medalists these days are very close to the 40-year cutoff. Wiles himself got just a special silver plaque because of it, which I think is outrageous age discrimination. Mind you, he didn't get the lesser award because his results were a bit less spectacular than those of the winners; rather, they were so groundbreaking he couldn't be ignored, but because he was just over 40 they had to downgrade the award (I guess in order not to be in conflict with the rules of the organization that awards the Fields Medal).

This is part of a general trend. As the world population ages, the number of people being able to achieve world-class performance later in life increases. One can see this spectacularly in sports, where older and older people win medals (for example: https://www.npr.org/2022/02/12/1080338798/older-athletes-bre... )

The Fields committee would probably do well to update their statutes, in order not to be so out of step with the times...


My understanding is that the Fields medal was always meant to be awarded to someone young and on their way up. But then it became the medal, the Nobel prize of math.

Maybe we just need a new Nobel in math.


The hope is that the Abel Prize becomes the "Nobel Prize of Mathematics", as it is directly modeled after it.


> Most Fields medalists these days are very close to exceeding 40 years

They are usually spotted at a much younger age, and they get the medal when they are almost 40 because after that it will be too late. I.e. if there are several deserving mathematicians you give the medal to the one where the clock is about to run out, since the others will have more shots at it.


I first found out about his story from The New Yorker (https://www.newyorker.com/magazine/2015/02/02/pursuit-beauty) when the issue came out. I almost burst into tears.

I just wish for him to continue to enjoy mathematics, no matter whether this paper is flawed or not.


It's a fantastic piece. The New Yorker has some really in-depth writing.



I need an “Explain Like I’m 5” for Landau--Siegel zeros.

This sounds like a hard task as I couldn’t find anything online that does it :(


Hey, this comment was my attempt at an ELIUndergraduate (5 is probably a little too ambitious for this topic). I hope it may be helpful!

https://old.reddit.com/r/math/comments/y93a86/eliundergradua...


Thanks!


There is a function called the Riemann Zeta function which is defined as an infinite series ZETA(s) = 1/1^s + 1/2^s + 1/3^s + ...

For certain complex number inputs s, this function ZETA(s) returns zero. (Strictly speaking, the series above only converges when Re(s) > 1; the zeta function is its extension, by "analytic continuation," to the rest of the complex plane.) Riemann's hypothesis states that every nontrivial zero has real part Re(s) = 1/2, with the imaginary part Im(s) some non-zero value (the first zero occurs at Im(s) = +/- 14.135). As far as we've checked with computers, all nontrivial zeroes have Re(s) = 1/2. We are interested in these "zeros" because we can use them to construct a harmonic function (think overlapping waves) which tells us how the prime numbers are distributed.

A Siegel zero is a potential counterexample: a real zero that could theoretically occur close to s = 1 (i.e. not on the line Re(s) = 1/2). Siegel zeros arise in the study of Dirichlet L-functions, which are a generalized version (i.e. a superset) of the Riemann zeta function.

If Zhang's result is correct, it simplifies the problem space for finding Riemann zeros, and thus for understanding the distribution of primes.
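If you want to poke at this numerically, here is a minimal Python sketch (assuming the mpmath library, which has built-in routines for the zeta function and its zeros):

  from mpmath import mp, zeta, zetazero

  mp.dps = 25                 # work with 25 significant digits
  rho = zetazero(1)           # first nontrivial zero, ~ 0.5 + 14.1347...j
  print(rho)
  print(zeta(rho))            # ~ 0, up to rounding error
  print(zeta(2))              # pi^2/6 ~ 1.6449..., for comparison

Note that the zero really does sit on the critical line Re(s) = 1/2, as the Riemann Hypothesis predicts.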


There are moments I'm pretty proud of my intellect and what I have achieved with it. Reading, and not comprehending, even the very basics of proofs like Zhang's is a good reality check in that regard.


I am pretty confident that I will never in my lifetime fully understand stuff like this (not the symbols themselves, but the overall meaning of each term and why it is like that): https://i.snipboard.io/by4tsH.jpg


For the meaning, you just have to retrace back to where the things were defined, just like in programming. I am a mathematician, and I do not understand anything in the linked screenshot either (other than big O notation, which many people here should actually know!). FWIW, the author’s preference for Greek letters is rather excessive for my personal taste.


The Greek characters in the screenshot are standard for the subject. The letters "chi" and "psi" (in that order) are the preferred letters for denoting Dirichlet characters, and zeros of L-functions are always denoted by "rho."


See, this is what I meant. Not only reading in the paper where X was defined, but having this in-depth knowledge of why a certain symbol was used and how it came to be, knowledge that probably not even all full-time mathematicians who have studied their entire lives have.


I see, this makes sense. Probably gets easier when one gets used to it.


Unless they're deeply ingrained in the literature, I find preferences for curly letters quite irritating (especially some older typefaces); dyslexic or not, I genuinely cannot read them sometimes. It's a big blocker on my productivity.

I sometimes rewrite them using A, B, C then try to understand them. This procedure should be automatic but unfortunately TeX is ancient.


Ok, but there is a difference between looking up definitions and understanding something.


Agreed; it's the same in programming. You could see that f = (a, b) => a ^ (10 + b), but not understand what it actually means and why 10 is there and not 9.

Another example is the use of commonly known constants, such as pi, i, or e. No paper will define those constants and explain in depth what they mean, so you need prior knowledge or external resources. I think for such complex papers you will find many cases where extensive prior knowledge is needed.


Maybe because you haven't tried to understand it?

Can't be harder than learning the meaning behind these characters: https://www.pandatree.com/book/DiaryofWorm.jpg


I don't mean the symbols themselves, but how each term was defined and the history behind it.

It's like watching a Marvel movie and not only knowing the plot of the current movie but also the deep history of each character and their relationships with other characters.

I assume the paper didn't come out of nowhere and it's based on "the shoulders of giants".


Pretty sure it's much harder than Chinese. The depth and abstraction of the concepts involved and the density of these concepts in these lines vastly eclipses anything in any natural language


It would take a long path to understand it... and the path would need to be filled with good resources.


I'm sure 3b1b will make an explanation video :)


I mean, I got as far as things like the square root symbol and power-of. I've scanned the comment section and links, and there's still little to nothing I understand, lol. Something with prime numbers.


Flicking through this paper is very humbling and awe-inspiring. Having studied in this field when I was younger, I have just enough understanding to appreciate the insane amount of thought represented here. Kudos.


Can someone explain the decimal constants that are used throughout the proof? For example, on page 52. It's rare to see these kinds of numbers used in mathematical proofs, but I'm sure they were chosen for good reasons.


Unfortunately, nobody can explain anything like this right now. The paper was posted today, is 111 pages long, and it will likely take even professional mathematicians around a year to understand / check it completely.


Often in math papers there is a series of constants appearing in estimates, each of which depends, sometimes in nonobvious ways, on those appearing earlier. Some authors like to give these constants genuine numerical values in order to help keep track of the dependencies between them and, in particular, to make sure that they aren't accidentally treating two different C's as the same. In delicate analytic proofs this sort of thing is particularly important - subtle missed dependencies are one of the places where serious proof attempts can go wrong in hard to find ways. I'm not saying this is what Zhang is doing, but it's one possibility.


I'm not an analytic number theorist, but my understanding is that the proof requires some values to be 0.5 + some positive epsilon to be correct, and for the sake of clarity, these somewhat arbitrary constants are chosen to get to the result.


0.502 and 0.504? If so, they're introduced as parameters back on page 10. I can't tell you much more than that!


“One does mathematics because one has to, and if it is appreciated, all the better!”

–Karen K. Uhlenbeck


Is the 2022 in the bound there because it's the sharpest, or because he happened to have the freedom to pick it equal to the year of publication?


Not sharpest; he says he thinks it can be improved, but not down to 1 (if it could be improved down to 1, that would prove the nonexistence of Landau--Siegel zeros).


Aw man we need a SoME2 video about this breakthrough https://youtu.be/cDofhN-RJqg


Thanks for the ELI-not-a-mathematician. I am super excited for those involved, and understand none of it :-).


go boilermakers! unfortunate that he left on bad terms, but cool to see alma mater. his advisor is a prick tho lmao had him one sem


Alternative link for the New Yorker article: https://archive.is/TVj2A


Is Yitang Zhang this century's Ramanujan?


Not really, he seems a grinding kind of mathematician, vs. Ramanujan who wouldn't bother with proofs.


> While he never went to school at UK, this will not be his first time on campus. Following his graduate work at Purdue University, Zhang lived in Lexington for a few years and managed the finances at a local Subway franchise. During that time he would also spend hours in the mathematics library at UK, reading journals on algebraic geometry and number theory.

A real-life Good Will Hunting, his backstory is incredible.


This sounds like the Margaret I. King library building, AKA the Science & Engineering library. Otherwise, I'm not sure which library they're referring to. Interestingly, there was a Subway on campus too. I wonder if it was that one, by chance. Small world.

source: UK is my alma mater.


IIRC, in the 90s there was a Subway in the same complex that housed the Kentucky Arcade, but maybe it was a Blimpie's. Maps show it as a seven minute walk to the King Library. My hope is that I crossed path with this genius at some point in between epic battles of Tekken 3 or NBA Jams. I grew up in Lexington.


That's NumberZhang!


Haha, that made my day! (It’s a reference to That Mitchell and Webb Look).


He first talked about this breakthrough at a Peking University alumni meeting, and a preprint was circulating in the field before he submitted it to arXiv (https://drive.google.com/file/d/1vTLoh_Cpw6Zr436rD7FnjzND6ng...). It's a 111-page proof.


When I first read the wiki article it came across as something generated by machine learning.

After spending way too much time on this: is it basically that, theoretically, in specific cases you might get a rogue result? But when working with numerical methods or statistics you already sort this out at the set level. No?


Is there a real world outcome to this result?


There are many, many, many consequences for prime numbers--which to me are concrete enough to be interesting (and are way more concrete than what 99% of mathematicians work on!).

On the other hand, I doubt this proof will help you to build a faster gizmo or something in the real world... especially since it's proving something we really think is true (a consequence of the Generalized Riemann Hypothesis), and for real-world applications you can just assume the thing that we think is true is actually true (even if we haven't been able to prove it for a century). (E.g., you don't need to prove that factoring is hard to use cryptography for practical purposes...)


I think it's fair to say that this could lead to certain new things becoming known about the distribution of primes. This could have implications for cryptographic algorithms that depend on prime numbers being hard to find.


Careful, prime numbers are not hard to find.

Like, try

  openssl prime -generate -bits 2048
Congratulations, you just found a prime that is big enough for every cryptographic protocol that uses prime numbers (not counting unusual and non-deployed post-quantum proposals).

Some number theory research may impact the security of cryptosystems, but not all results do.


That doesn't quite generate a prime. It generates a number that's a prime with a very high probability. There's always a small chance that it's not prime but it's good enough given the tradeoffs needed to verify that it's prime with 100% certainty.
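Right. For anyone curious what "prime with a very high probability" means mechanically, here is a minimal Python sketch of the Miller-Rabin test, the standard probabilistic primality test (real libraries such as OpenSSL use a tuned variant of this idea plus trial division; this is an illustration, not their actual code):

  import random

  def is_probable_prime(n, rounds=40):
      # Miller-Rabin: write n - 1 = d * 2^r with d odd, then check
      # random bases. A composite n survives one round with
      # probability at most 1/4, so 40 rounds leave error < 4**-40.
      if n < 2:
          return False
      for p in (2, 3, 5, 7, 11, 13):
          if n % p == 0:
              return n == p
      r, d = 0, n - 1
      while d % 2 == 0:
          r += 1
          d //= 2
      for _ in range(rounds):
          a = random.randrange(2, n - 1)
          x = pow(a, d, n)
          if x in (1, n - 1):
              continue
          for _ in range(r - 1):
              x = pow(x, 2, n)
              if x == n - 1:
                  break
          else:
              return False    # this base witnesses that n is composite
      return True             # probably prime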


Yeah, sorry perhaps I shouldn't have made such a specific sounding claim. I'm not an expert on this topic, but was pretty sure I had heard from reputable sources in the past that the Riemann Hypothesis had some bearing on the distribution of prime numbers. And it feels safe to say that this could have practical implications for cryptography. But maybe I should just leave it to the experts :).


Small correction: cryptographic algorithms don't depend on "prime numbers being hard to find", as they are not hard to find. Say you want to sample a 1024-bit prime. If you sample a random 1024-bit integer, it will be prime with probability roughly 1/(1024 ln 2), i.e. about 1 in 710. This is a consequence of the prime number theorem [1]

Some crypto (namely, RSA) depends on composite numbers being hard to factor, which is a different problem.

[1]: https://en.wikipedia.org/wiki/Prime_number_theorem
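A quick empirical check of that density, as a sketch (assumes the sympy library; it uses 256-bit numbers instead of 1024-bit so it runs in seconds, and samples only odd candidates, which doubles the density):

  import math
  import random
  from sympy import isprime

  bits, trials = 256, 2000
  hits = sum(isprime(random.getrandbits(bits) | 1) for _ in range(trials))
  print("observed prime fraction:", hits / trials)
  # prime number theorem: density near 2^bits is ~ 1/(bits * ln 2);
  # restricting to odd numbers doubles it
  print("predicted:", 2 / (bits * math.log(2)))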


RSA depends on large prime factors being hard to recover from their product.


Yes, that is what "factoring a composite number" means, and that's what I said in the last sentence.


Crucially, the factors involved have to be very large.


If it were just that, it would be trivial to break. It's the fact you generate the key after a modulus operation that makes it difficult to recover.


Nope. The public key is not reduced mod anything in RSA. The large semiprime the user calculates is emitted explicitly as part of the public key and is used as a modulus in RSA operations.

Factoring that large integer directly yields the user's private key.


Yep.. should not have done that from memory. I had the private exponent in mind and forgot about the simplicity of the actual "RSA problem."


> depend on prime numbers being hard to find

That depend on large semiprimes being hard to factor.


No, you can't make a startup off this result; now get back to optimizing ads.


How about a SaaS log parsing startup?


Just a cultural observation: I wonder how many Chinese geniuses are behind the iron firewall of China. With the size of the Chinese population and its STEM focus, it's likely that within two decades Chinese dominance of the sciences, technology, and later culture (language, art, etc.) will be ensured.


Cheeky of him to use -2022 in the exponent reflecting the year of publication

> It is possible to replace the exponent −2022 in Theorem 1 by a larger (negative) value if the current arguments are modified, but we will not discuss it in this paper.


I can't judge the math directly, but I do know that my local Subway makes good, healthy sandwiches full of vegetables and other things. So that's a plus on his resume.


still blown away that he was my calc II professor in college.


Should I be suspicious that the exponent in the equation is the current year? LOL

I still need to read the article. The letter from Moh was interesting.


Yes, you should :)


Another one for the books. It reminds me of the saying that whether the research is flawed or otherwise, only time will tell.


He's one of my personal heroes!!! Go Zhang!


Subway needs to use this guy in an ad.


Sorry for the ELI5 request (I checked the Reddit link also). Any immediate implications of this?


Complete aside: it is interesting that he announced his result at Peking University, given that he has been vocal about his dislike of the Chinese Communist Party.


Big if true


Appropriately concise.


I was gonna comment this too goddamn


I like the link to old.reddit. New reddit is still such a sluggish site..


Is this something a proof assistant could be used to help validate?


In theory, yes, but in practice proof assistants are only just starting to become practically useful for research-level mathematics. Validation/verification would likely require an enormous amount of proof engineering to build up the theorems required to verify a proof of this complexity. See the Liquid Tensor Experiment postmortem [1] for the results of a similar undertaking.

[1]: https://leanprover-community.github.io/blog/posts/lte-final/
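To give a flavor of what machine-checked mathematics looks like, here is a toy Lean 4 statement (assuming Lean 4 and its core library; it has nothing to do with Zhang's paper, which would require formalizing vastly more analytic number theory first):

  -- A machine-checked proof that addition of natural numbers commutes.
  -- `Nat.add_comm` is the existing core-library lemma; the proof applies it.
  theorem my_add_comm (a b : Nat) : a + b = b + a :=
    Nat.add_comm a b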


What does this enable?


True


yo


Math used to be a young person's game, but it now requires so much knowledge just to get to the frontiers of human knowledge, let alone make a dent in uncharted territory, that results are being obtained later and later in life - once mathematicians have had time to accrue sufficient knowledge while still being sharp enough to make the intellectual leap.

The sad part is that, as the trend continues, we may reach a point where a mathematician's intellectually productive life is not sufficient to contribute anything novel, statistically speaking. And as the population seems to be close to peaking, we will also have fewer chances of exploring the extremes of mathematical dexterity.

Perhaps we could then rely on computer assisted theorem provers. Or life extension, as long as intellectually productive years are also increased. Or we will need to focus and specialize kids earlier on.


> [...] but it now requires so much knowledge just to get to the frontiers of human knowledge, not to speak of making a dent in uncharted territory, that results are being obtained later and later in life.

That's not really the case here. Zhang spent 10 years out of school working in fields in the cultural revolution, and didn't start college until he was 23. After his PhD, he couldn't get an academic job for 8 years and ended up delivering food and working as an accountant. He only got a part time lectureship after that. He wasn't made a professor until his big proof at the age of 58.

It seems incredibly likely he'd have been able to prove his big results sooner had his life gone otherwise.


> It seems incredibly likely he'd have been able to prove his big results sooner had his life gone otherwise.

Why do you think that? It sometimes takes years (!) until you get used to a certain mathematical theory. And how long it takes is a very personal thing.

Remember von Neumann, who said that we don't learn new mathematics, but only get used to it.

I think the only thing one can say is incredibly likely is that Zhang, while not formally employed as a mathematician, kept thinking about mathematics (rather common for people with mathematical training, though many just stick to occasional problem solving).


> Zhang spent 10 years out of school working in fields in the cultural revolution, and didn't start college until he was 23. After his PhD, he couldn't get an academic job for 8 years and ended up delivering food and working as an accountant.

During that time he definitely spent less time on maths than he could or would have in a better situation.

The Cultural Revolution's oppression in particular was a monumental waste of human lives. See https://en.wikipedia.org/wiki/Cultural_Revolution#Death_toll for the death toll alone.


A self-made man :)


> but it now requires so much knowledge just to get to the frontiers of human knowledge, not to speak of making a dent in uncharted territory, that results are being obtained later and later in life, when mathematicians have had time to accrue sufficient knowledge while still being sharp enough to make the intellectual leap. The sad part is that as the trend continues we may reach a point where a mathematician's intellectually productive life is not sufficient to contribute anything novel, statistically speaking.

IMO, an underappreciated dynamic across the board of human endeavours.

With increasing complexity comes the need for more time to understand and master anything.


That's a very big claim, perhaps true only if human endeavor only ever builds linearly. But I don't think that is true. Certainly not in the arts or music: e.g. the Beatles did not need to ingest the entire corpus of Mozart or even Scott Joplin to be highly productive, although it certainly helped that they were extremely open-minded about all types of music.

A second example is that of recent advances in virtualization, where the last decade of advances in things like cgroups, namespaces and containers were all done by people who I assert had no training in IBM MVS/zOS and therefore weren't building on what went before (imho, to their detriment).


The Beatles themselves didn't need to ingest the entire corpus of past musicians to be productive and inventive, but they did stand on the shoulders of giants musically (vs being born in some prior century), and they were in the right time and place to be part of a growing scene whose smarts exceeded those of any of its individual participants. This is the nature of 'scenius' and of golden ages: a lot of ideas are already teed up in the collective consciousness.


> only if human endeavor only ever builds linearly.

A lot of mathematics is cumulative, though.

And even if you can go further by going thinner (specialising more), maybe breakthroughs require lateral thinking and connections that are predicated on not being too specialised, but having breadth also. If that is the case, sooner or later we might be in trouble.


It's also a major failure in didactics. It feels like very little of the new knowledge since the twentieth century has been truly digested for easy teaching. Why isn't general relativity taught in elementary school? It should be possible.


FWIW, I notice how remarkably badly I was taught math at school.

When I started university I really noticed how bad it had been; the jump forward was really noticeable.

For example, at school they would show you a couple of brief, simple explanations of derivatives or integrals and then start straight in on the formulas.

At university I had a teacher who started with the history of mathematics and why these tools were invented, making a point about their primarily practical origins.

To explain things, he could usually come up with real-life instances of application, and much more often there were intuitive or geometric interpretations of the techniques, given even before the explanation itself started, so that you had an intuitive idea and a visualization of what you were trying to achieve.

After that, I realized that to learn math, the first thing is to develop an intuitive, non-mathy idea of what you are doing and only later formalize it.

At school and high school they taught it as little more than memorizing tables and applying formulas.

Talking about Spain, btw.


This! Identical situation in the UK in my experience. Why maths is taught completely separated from its history and its purpose is baffling. Well, actually it's not: it's because it serves the teachers. You hammer the formulas and patterns into the kids so they can pass the exams. Your school gets good grades and the principal is happy because he can now market his school as successful, and the government inspectors are happy because the school is hitting its metrics. Meanwhile, the kids haven't actually learned anything. As soon as the exam is passed they forget the formulas and go on with their lives.

We should be explaining the story of maths and how it has benefitted society. We should be asking kids about their interests and then showing them how mathematical tools can be used in those areas. We need to show kids how maths is tied to real life rather than just presenting them with boring formulas to memorise.


> because it serves the teachers

I don't think this is the case. I think it's because the teachers themselves don't have a good grasp on "the story of maths and how it benefitted society".

There is a silly meme about asking high school maths teachers "how will we use this in life", and imo it's not that there isn't a good response, but rather that answering requires a good understanding of the ways math is actually used. Few high school teachers have themselves used the math they teach for anything other than academic exercise. Someone trained in control theory or in using physics equations can make things that appear almost magical using maths, and if they are talented, they can find a way to explain it to laypeople. However, people with that combination of talents are desired by just about everybody, from universities to companies, and high schools simply have no way to compete (not least because teaching high schoolers is a soul-crushing job for bureaucratic reasons).


Yeah this is a very good point. The amount of people who basically go from school to university to school teacher is alarmingly high and bad for society in my opinion. Grown adults who have never known anything but the school system teaching the next generation. It’s ridiculous. I do still think the original point about hitting metrics is a major contributing factor though along with low pay and underfunding.


When I came across the (obvious, in retrospect) visual explanation for why (a + b)(a + b) == a^2 + 2ab + b^2, I was blown away by how simple it was, and left wondering why on earth I had had to just memorise that formula in school.


This is exactly the kind of intuition I am talking about. Much easier to explain like that.


By "visual explanation" do you mean imagining a square whose sides are length (a + b), and breaking it up into smaller squares and rectangles?

Or do you mean multiplying out the terms:

  (a + b)(a + b) = a^2 + ab + ba + b^2 = a^2 + 2ab + b^2

?


The former, yes.
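
For anyone who hasn't seen it: picture a square of side (a + b), cut into four regions whose areas must sum to the area of the whole square:

        a     b
     +-----+-----+
   a | a^2 | ab  |
     +-----+-----+
   b | ab  | b^2 |
     +-----+-----+

  Total area: (a + b)^2 = a^2 + 2ab + b^2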


It's interesting that some people find that way easier to remember. For me, multiplying out the terms seems faster, probably only because I've practised so many times in my life that I can picture the algebra in my head. It also seems more general, as it's a technique which you have to know anyway. I guess it comes down to being more geometrically minded vs algebraically minded. School should try to cater to both!


I wouldn't use it to calculate, but I would use it to explain why the calculation is what it is.


I see no reason to believe that general relativity could be taught in elementary school. It requires advanced undergraduate or graduate mathematics, not to mention that the physics itself is quite difficult (to put it mildly).

To put the blame on didactics seems to miss the more important factor that humans just aren't that intelligent, save the one-in-a-million genius who might have the intellectual capacity to learn something so difficult at such a young age.


Interesting! As other commenters have pointed out, didactics is an important aspect of how humanity responds to such a dynamic.

Otherwise, I'd say our two thoughts are connected. With increasing difficulty in understanding new progress, there could be an inertial tendency to overemphasise the importance of old knowledge because it's comforting/easier/pragmatic for teachers and parents.


Agreed, it feels like the days of maverick engineers leading huge efforts are mostly over. Will there be any more Kelly Johnsons or Gene Kranzes in our lifetimes?


Engineering is different from math. A large part of the reason engineering innovation is lacking in a particular field is entrenched players, heavy bureaucracy, and government regulation; see, for instance, spaceflight and EVs.


Tools can be used for this purpose. New tools can unlock new abilities for engineers. I'm not sure if it applies to math, but I don't see any reason why it shouldn't.


There is still plenty of amazing engineering going on, for example EUV development at ASML.


This sentiment is humorous, as I'm an optimist apparently. More children will learn more quickly. Applying AI to education should supercharge the smartest.


> Applying AI to education should supercharge the smartest.

What does that mean?


I don't know, but it also ignores that the smartest people (whatever that means) can have executive function issues that are orthogonal to their mental capacity.

Additionally, the belief that intelligence alone will improve the world is misguided imo. You need empathetic, intelligent people.


As we do not yet have artificial intelligence, making confident statements about what its effects might be is surely rather rash.


You must be an optimist if you think what passes for AI right now could in any way help education. It's an incredibly useful tool, but not in the way you almost certainly are thinking.


"The sad part is that as the trend continues we may reach a point where a mathematician's intellectually productive life is not sufficient to contribute anything novel, statistically speaking."

People talk about this a lot. While I think it could happen in certain subdisciplines (it already takes essentially an entire PhD's worth of time to learn all the necessary background to be an algebraic geometer, so most algebraic geometry PhD students publish nothing besides their thesis during their PhD studies), it can never happen to mathematics as a whole. If one part of math gets too deep, you can always go somewhere else, where the water is still "shallow."


I’m not so sure. The same argument would apply to theoretical physics in 1960. Circa 2023, there are remarkably few shallow parts of physics.

Math as a whole may last longer, but this list reminds us how far we’ve come in a mere few millennia: https://usercontent.irccloud-cdn.com/file/SaI50Q1d/166786520...

On the timescale of civilization, it seems less and less likely that lone mathematicians can revolutionize the field.

We’re fortunate to have been born so early, relatively speaking.


> it seems less and less likely that lone mathematicians can revolutionize the field.

Which inspires the question: how much can cutting edge math be parallelized?


But you can manufacture new areas of mathematics: take, for example, Conway's Game of Life, and then prove theorems about it.


Physics is limited by having to represent phenomena in our physical world simply.

Mathematics is not just a small integer multiple larger than this.


This is almost correct.

It's not that math is vastly larger (or more sophisticated) as a field; both fields are infinitely large in many senses. Rather, the number of respectable starting points where you can do interesting things is much larger in math, orders of magnitude larger.


There are plenty of unexplored things in physics too; the key word is "respectable".

Looking from the outside, physics suffers a lot from fashion/hot trend tendencies, where you need to be doing the "hot" thing to make the jumps necessary to the coveted Tenure Track — and otherwise, you get kicked out.


Then you spend your whole PhD reinventing a wheel that has a different name in the 30-year-old textbook from the next field over, and neither your peers nor your professor have any awareness of that.


Or, as Juergen Schmidhuber likes to point out, in the same field.


I suspect computers can also help us get deeper. Stuff like computer algebra systems.

Maybe some CAS-assisted work gets us into feedback loops allowing us to go indefinitely, as in a technological singularity.

But the "shallow" part is also quite wide.

You can teach people what you've learned forever, for instance.


do you have experience with CAS? i would love to learn how to use CAS to write proofs more effectively


CAS can help with the more mechanical parts of a proof. I use it often to quickly check whether something can be rewritten into something else I want. The sad part is that even if the CAS doesn't find a solution, that doesn't mean there is no solution; it just didn't find one. But it can save a lot of work if the first thing you do is a quick check: if you're lucky, you've just saved yourself a lot of work.

I don't know what field you are in so it may or may not be helpful to you.


One very common and simple use case is looking for counterexamples. If your conjecture states, for example, that "all matrices with property X also have property Y", then it is quick and easy to generate a whole bunch of matrices with property X and check that they all have property Y. Of course that doesn't actually prove anything, but it can be used to disprove a statement and save you a bunch of time chasing down a dead end.
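
A minimal sketch of that workflow in Python with numpy (the claim tested here, "every symmetric matrix is invertible", is a deliberately false stand-in so that the search actually finds something):

  import numpy as np

  rng = np.random.default_rng(0)

  def random_symmetric(n):
      a = rng.integers(-2, 3, size=(n, n))
      return a + a.T  # property X: symmetric

  # Hunt for a symmetric matrix that is singular, i.e. property Y fails.
  for trial in range(10_000):
      m = random_symmetric(3)
      if abs(np.linalg.det(m)) < 1e-9:
          print("Counterexample found:\n", m)
          break
  else:
      print("No counterexample in 10,000 trials (which proves nothing!)")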


To be honest, my experience is limited to double-checking my algebra with the free Wolfram Alpha. I need it maybe a few times a year.


> If one part of math gets too deep, you can always go somewhere else, where the water is still "shallow."

Yes, but the shallow areas aren't very interesting, which is why people work in the deep areas.


Most of the now-deep, now-interesting areas were once shallow and uninteresting.


There are also occasional realignments where the deep stuff gets shallower. It takes a lot of rickety scaffolding to get to a new place, and occasionally the finished product stands fine on its own.

The simplest example that comes to mind is that you can learn group theory without really needing to know anything about Galois theory. I also imagine there's a lot of good math that has shed vestigial physics...


Isn't this evidence that the process has already started to occur? The difficulty of making progress in certain areas of math pushes mathematicians to the shallower areas where it's easier to make a contribution. Progress in the former will stall, and at some point the shallower areas will become less shallow and the same phenomenon will recur.

Maybe we will keep discovering new shallow areas forever, but I suspect this is not the case. In any case this is a phenomenon that I think will play out over the next few hundred years: not sufficiently impactful in the next few decades, but more and more noticeable.


Another approach to the finite-mathematician-lifespan problem might be to develop new foundations that are closer to the edge. I expect there's a logical universe in which sets are bizarre and hard to construct but objects which would take a modern mathematician years to grok are convenient and simple to work with.


My favorite thought game along these lines is imagining an alien civilization where permutations are the fundamental object instead of the natural numbers.


Maybe food is plentiful enough that they never need to bother with its quantity, but there's some strobing energy source in their environment which periodically walks important chemicals through an ultimately cyclical series of states.

So like, the food is poisonous on strobes 3 and 8 of a 14 strobe cycle, and understanding that is the key to staying alive in their environment.


Homotopy type theory?


This approach reminds me of what the geometric algebra folks are trying to do.


>Math used to be a young person's game, but it now requires so much knowledge just to get to the frontiers of human knowledge, not to speak of making a dent in uncharted territory

>The sad part is that as the trend continues we may reach a point where a mathematician's intellectually productive life is not sufficient

Hmm. I had never thought of it like this. Is it possible for human knowledge to become so advanced in a single subject that it takes a person's entire life to learn just that one subject? That's already true of something like the human body, which is why doctors specialize in organs or regions of the body. The more human knowledge expands, the more individuals specialize into increasingly smaller niches.


This is a growing problem in many fields, IMHO. I've been wondering for a while whether it's an inherent flaw in knowledge: if new knowledge can't supplant older knowledge in a highly compressed, reduced form as things progress, we're building up so much information/knowledge in any given field that at some point it may be quicker to simply rediscover a result than to search the prior work for it.


This is where longevity research a la Harold Katcher comes in and allows for super centenarian geniuses to shape the future of humanity.


But what if there are necessary prerequisites for making significant advances in longevity, which themselves are so complex that they require more than a human lifespan to master? We could be at a dead end unless the computers start doing the work. And even then, perhaps the same impediment applies to the development of an AI that can bootstrap other AIs into getting the job done.


Harold Katcher: “The Illusion of Knowledge”.


If you could live two or three lifetimes, would you really want to spend them all on the same thing?

Feels like attention would become the limiting factor.


Even with optimal compression, there is still a finite minimum size.


Think of it like Rust eventually supplanting C++ once the ecosystem and libraries are complete enough for people to make the change.

The cognitive burden reduces via the new foundations not requiring deep understanding of the old ones to be able to make new steps forward.


This is not a good example, as Rust does not build on C++, it simply replaces it as the foundation. Of course, the development of Rust is built upon lessons learned from C++.

But there is no universal law that says that the foundations of a field must be simple enough for one human to understand them in 70 years. It may well be that the simplest possible statement of a field of knowledge is still too complex for a single human to understand it in a normal life-span (not to mention that our capacity for storing information is limited - you can't actually continually learn new things for 70 years without forgetting much of what you learned initially).

Even today, you could spend literally your entire life trying to learn everything we know about the human body and you would almost certainly die before having learned everything. Now, fortunately, there is plenty of real work, both as a doctor and as a researcher, that can be done by focusing on just one aspect of the body (say, the circulatory system) and having only relatively shallow knowledge about the rest. Still, there is the possibility already that a mind that could build on all of the deep knowledge we have could come up with new ideas in medicine and biotech that we are unable to because of this silo-ing.


> This is not a good example, as Rust does not build on C++, it simply replaces it as the foundation.

No, that's exactly the point. You can thereby drop the cognitive load of C++ and build greenfield on the Rust foundations.


Here is a short story that covers that idea: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/


Thanks for posting it!

I've read it, lost it, and have been searching for it for a few years now.


I think we're kind of becoming a multicellular organism.

There's a limit to human knowledge, but we are getting more and more efficient at communication, both with other humans and with machines.

Even without something like Neuralink we are creating abstractions and interfaces allowing us to quickly connect our work with others'. E.g., especially with ML, you can lazy-load all necessary context.


While I get what you're saying, and clearly there is "more stuff", I think we shouldn't discount how progress can make things simpler on this front. Proofs we learn are usually much more refined than the original versions. Notational improvements and more interesting abstractions mean we can cover more ground more quickly.

Another angle here, I remember seeing a study where some children were just... taught algebra. Like given high school algebra classes in 3rd grade, and kids were able to absorb all that abstract reasoning "just fine" (according to the study).

Of course there's only so much abstraction that can be done, but I think we shouldn't assume we are at the end of history on much of anything (except for parsing algorithms)


To add to your comment, quaternions predate rotational matrix operators by a considerable amount (1843 vs not exactly clear, ~1900 with Peano or ~1920 with Weyl), despite quaternions being much more challenging to manipulate. There are definitely simpler ways to view the same things.

There was a cottage industry of exotic hypercomplex numbers that disappeared when linear algebra matured to eclipse them.

In fact, Maxwell's Equations were originally derived with quaternions.


I am somewhat unconvinced that quaternions are more challenging, or at least that they are a worse way to think about the issue:

https://eater.net/quaternions

And speaking of Maxwell's Equation"s":

http://www.av8n.com/physics/maxwell-ga.htm#sec-preview


Well, a rotation matrix doesn't require doing 2 half rotations, and doesn't require reaching into the 4th dimension in such a way that it gets perfectly cancelled out. It doesn't require abstract analogies about cubes with strings glued to them or people holding coffee cups.

With some familiarity with linear algebra, it's easy to derive the formula for constructing a rotation matrix: you just have to think about what the operation does to the axes. The derivation for quaternion rotation is far more abstract, by virtue of the operation we actually care about involving a sandwich of multiplications with unclear four-dimensional meaning. There are no hyperspheres with a rotation matrix.

Augmenting your space to handle not just rotations and scaling but also translations is easy for matrices: it just requires a homogeneous coordinate, and you get 4x4 matrices with intuitive columns.

Augmenting quaternions to handle translations requires the 8-dimensional dual quaternions.

I definitely like geometric algebra, it's a very nice continuation of topics in linear algebra and makes it clear why things like normals behave differently from standard vectors. But I don't use it every day. I use standard linear algebra every day.
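
To illustrate the "columns are where the axes land" derivation, a small numpy sketch (rotation about z by 90 degrees plus a homogeneous translation; the specific numbers are arbitrary):

  import numpy as np

  t = np.pi / 2  # rotate 90 degrees about the z axis

  # Each column of R is simply the image of one basis vector.
  R = np.column_stack([
      [np.cos(t),  np.sin(t), 0.0],   # where x-hat lands
      [-np.sin(t), np.cos(t), 0.0],   # where y-hat lands
      [0.0,        0.0,       1.0],   # z-hat is unchanged
  ])

  # Homogeneous 4x4: the same rotation plus a translation column.
  T = np.eye(4)
  T[:3, :3] = R
  T[:3, 3] = [1.0, 2.0, 0.0]          # translate by (1, 2, 0)

  p = np.array([1.0, 0.0, 0.0, 1.0])  # the point (1, 0, 0)
  print(T @ p)                        # ~ [1, 3, 0, 1]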


>Math used to be a young person's game, but it now requires so much knowledge just to get to the frontiers of human knowledge, not to speak of making a dent in uncharted territory, that results are being obtained later and later in life [...] The sad part is that as the trend continues we may reach a point where a mathematician's intellectually productive life is not sufficient to contribute anything novel, statistically speaking

Nonsense.

First, the frontier of mathematics is far closer than you think. Number theory is one of the oldest branches of mathematics, going back thousands of years. There are many more branches that have originated in the latter half of the 20th century. And new ones are coming into existence all the time.

Second, we don't need to focus and specialize; we need to do the opposite. Mathematics is about seeing connections and patterns. We need to teach philosophy and critical thinking, and we need to give people exposure to the vast, unexplored (and fun) universe of mathematics so that they can head toward the frontiers and push them, rather than spending half a lifetime going in well-trodden directions.

Finally, undergraduates are still producing new results, every year, in numerous REU programs around the US alone. What gives?

The problem isn't that math is so well-studied that you need so much education to do it. The problem is that we aren't teaching people to do math; we teach them about math that they may or may not use elsewhere.

We don't encourage them (or give them space) to play, experiment, explore, wonder, ask questions, venture into the unknown, and be surprised by what they see (outside the aforementioned REUs).

So many people don't get to even start doing mathematics until their third year of graduate school, simply because that's when the structure of our education allows them to.

None of that is necessary. We do it that way because professors are underfunded and mentorship is not rewarded (publish or perish), among other things.

The situation is due to structural problems in academia, not mathematics, humanity, or the advances we made.

Signed, —your neighborhood mathematics PhD


> undergraduates are still producing new results, every year, in numerous REU programs around the US alone.

Speaking as a former undergraduate math student producing “new” results in REU programs, all but a few of these results are completely negligible. There’s still a genius here and there though.


The same can be said about the entire field of mathematics and science in general.


I think it will never get to that. As the frontiers expand, new ways of travel to the frontier are opened up, and the old meandering roads that took so long are left to history.


I haven't spent much time thinking about this, but were Einstein's contributions a combination of intellect and "shallow" or relatively quickly reachable frontiers? I.e., a very intelligent person working on problem sets that were approachable without the level of overhead required these days?

I.e., today's world wouldn't support an individual having such an impact on most math/scientific disciplines?


In a mathematical sense, special relativity is not particularly difficult. Undergraduate physics students often learn it by year two as it doesn't require much more than calculus and linear algebra. That's not at all to downplay Einstein's genius. If it weren't him, I'm sure others would've gotten there eventually, but he was the first there, and his insights were primarily physical (assuming both the constancy of the speed of light in all reference frames and that the laws of physics are the same in all inertial frames of reference).

General relativity was much more difficult. It took Einstein about a decade to develop it and he had to learn differential geometry in order to do so. This work was undoubtedly more profound and required much more advanced mathematics merged with deep insights such as the equivalence principle [0]. This was what made Einstein so successful: the combination of mathematics with physical insights that no one else had put together at the time.

[0] https://en.wikipedia.org/wiki/Equivalence_principle
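
To make the "calculus and linear algebra" point concrete: the mathematical heart of special relativity is the Lorentz transformation, a linear change of coordinates between inertial frames (standard one-dimensional form):

  t' = \gamma (t - v x / c^2),   x' = \gamma (x - v t),   \gamma = 1 / \sqrt{1 - v^2 / c^2}

Nothing there goes beyond first-year calculus and linear algebra, in stark contrast to the differential geometry general relativity demands.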


When it comes to physics, I think the issue is more that we are at the limit of what we can reasonably test. It's like trying to figure out how the body works before the invention of the microscope.

Newton's gravity was known to be incomplete for many years, just from looking at Mercury. It took carefully devised experiments studying light over hundreds of years to lead us to Maxwell's equations in the 1860s, which became the basis for Einstein's special relativity in 1905. You don't need a billion-dollar particle accelerator to come up with Maxwell's equations (two of them trace back to Gauss!). Twenty or thirty years (and talent!) studying calculus or physics could get you somewhere.

Today we know Einstein's theory of gravity is incomplete, but the only places it is known to break down are inside a black hole (good luck running a test in a black hole) or at very tiny scales where gravity's effect is minimal. Today, I don't know how you would even come up with a competing theory without having millions to spend on a particle accelerator. While Einstein famously just ran a thought experiment on what should happen if the speed of light is constant for all observers, most "discoveries" are now made by smashing particles in billion-dollar tubes.


> good luck running a test in a blackhole

Theoretically speaking, it's not that hard to run a test inside a black hole (getting to one is the hardest part that we know of). It's communicating the results to anyone else that's "slightly" harder.

Jokes aside, in physics we have a problem both with what can be tested and on the theory/mathematics side. Even for a relatively simple technical problem, we don't have any good way of solving the actual equations of a slightly complex system in GR, QM, or even Newtonian mechanics. We are always relying on numerous approximations and simplifying assumptions, and some of these could themselves lead us astray in some cases.


One day we will be able to download knowledge to our computer-assisted brains, and math will become a young man's game again.


I would add that we now have new tools to enrich mathematicians' work: new software tools and computing power. It is not about AI; AI is only a special case of, or a name for, tools that can solve cognitive tasks or ASSIST humans in them.

Age is (almost) not a limitation with the right tools, and will be even less of one in the near future.


> we may reach a point where a mathematician's intellectually productive life is not sufficient to contribute

Arguably the same could have been said in the 1700s, when mean life expectancy was about 40, and people might have worried that at some point nobody would be able to make scientific advances within a lifespan. Yet here we are, living into our 80s and beyond, making accomplishments in our 50s and 60s.

So we could solve this a few ways:

- Continue to extend the length of human life. Living to 150 will be the norm in a few generations.

- Create AGI that are smarter than us and make the advances for us

- Genetically engineer humans to be twice as smart, and learn as much as an average 30-year-old in 15 years


Or just make mathematical education more efficient, which seems like the Occam's Razor answer to me.


Regarding population decline vs. extreme dexterity... there's surely a good bit of room to improve our ability to discover and foster the dexterous extremes. If "nurture" plays any fraction of a role in the process, there are too many paths through life in this world that would still deny someone the opportunity to achieve their potential dexterity. As a baseline, I'm just talking about abject poverty / famine / war and other horrors that would easily divert someone from even the opportunity to be spotted as a prodigy.

So even if world population declines for a while, I suspect it'd be possible to enjoy more mathematical talent in the coming decades / centuries.


Scott Alexander has an excellent short story on this subject: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/


This is a big problem in all scientific fields. No one can hold all the relevant information in their head. I expect that there are various "easy" and worthwhile results to be had in many fields if someone could combine and integrate large swaths of known results. But this is just not feasible for a human mind.

Over time I think we need to either create an AGI or enhance ourselves (via genetic engineering, brain-computer interfaces, eternal youth, or other means) to overcome these fundamental human limitations.


While there is a boundless, growing amount of prior work, from an (interested) outsider's point of view there also seems to be progress on discoverability and on formalisations of how it is expressed. Building on the abstractions of others seems like it would provide an environment for _more_ novel work.

The abstractions need to be understood, but potentially only to a level of n-1. Isn't this one of the core principles of proofs?


I can't pretend to be anywhere near the frontiers of math but aren't some contributions about providing a unifying framework or simplifying re-expression of something that was already known in a way that makes it more comprehensible? Is there not always hope that the right new perspective or innovation in one generation will allow subsequent generations to get up to speed faster?


The point is that the low-hanging fruit is finite and is picked continuously, so sooner or later it runs out. Reinventing something is what I'd consider low-hanging fruit that just wasn't recognized for some time.


It's not really plausible that such a process could continue forever. It may be plausible that there are some better foundations to various fields of math that could reduce the size of the proof for Fermat's last theorem by, say, 100 fold - but it's not plausible that it could ever be fitted onto the side of a page with the right abstraction. It's not plausible that, even in the far future, you'll be able to learn the information needed to derive formulas you'd learn in post-graduate maths today after a 1 week course in 5th grade.

On the other hand, it's extremely plausible that math as an abstract idea could build arbitrarily complex structures that would be of interest if we were able to learn to model them - so, almost necessarily, within the limitations of current human cognition and/or lifespan, there will be some structures that it is impossible for a human to learn.

Of course, in some arbitrary future we may find that those limitations of human cognition/lifespan can themselves be overcome; if that's the case, then the argument changes significantly.


This effect clearly exists. It suffices to check how physics and math were expressed in the papers that discovered them and how they are currently represented to notice that sometimes organizing knowledge in the right framework and simple notational changes can be sufficient to power up our ability to make new advances.

The question is, are we dedicating enough effort to these endeavors? Is rewriting known math under a different guise sufficient to survive the publish-or-perish attrition? And will these efforts keep returning results?

I suspect the answer is no to the first two questions, and I don't know the answer to the third.


Wouldn't Scholze be a counterexample to the thesis that young people can't make fundamental contributions?

I would say he supports the idea that only young people are able to change the foundations of maths, while older people can still use the existing techniques to push the border, especially in those super-technical fields where experience and knowledge are an advantage.


Maths has always been an entertainment for aristocrats and philosophers.

Only at a very few exceptional times in history has it been used for something else, the largest exceptions being the industrial and scientific revolutions, and these usually did not require breakthroughs: just applying existing knowledge.

It's never been a young person's game.


This doesn't account for growing literacy and numeracy in the developing world; we might be nearing peak population, but we are still ages away from a globally educated population with both the resources and the education to start contributing to academia/research.


Similar situation with energy generation. The low-hanging fruit is pretty much gone; current output relies heavily on advancing up a tech ladder. If we somehow have to start over, it might not be possible to make it back.


We detached this subthread from https://news.ycombinator.com/item?id=33513127.


We will have super advanced AGI exploring the frontiers of Math and Physics. Check out the recent Lex Fridman pod with Andrej Karpathy. Fascinating discussion.


Definitely a potentially meaningful reason as to why we should invest more heavily into life extension.

I don’t foresee AI overtaking human cognition for at least a decade or two.


Not nitpicking, just informing: population is currently projected to peak around 2080-2100 at around 10.2-11.4B, depending on the estimate!


You forget, we are becoming cyborgs. Computers will allow us to move much faster, work as a team, and see the future


Perhaps math-focused hot-housing at a younger age could provide an extension?

(genuine question, as I don't know anything about it)


> Perhaps we could then rely on computer-assisted theorem provers.

Or AI, particularly if we figure out AGI.


Or we innovate elsewhere.


On a similar theme, Scott Alexander's short story Ars Longa, Vita Brevis https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/

Analogously, I've heard transhumanists make the argument that, because fluid intelligence declines as crystallized intelligence grows, we have yet to see a human mind at its full power.


There's a great SSC story about this concept: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/




Brings to mind Gould's quote - "I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops."

https://www.goodreads.com/quotes/99345-i-am-somehow-less-int...


I think about this often. Having studied mathematics, it's a field where there is a huge range of talents, even at 18 (and younger). There are genuinely gifted people who stride ahead, but the sad reality is that even at top institutions most of the undergraduate cohort will be merely averagely clever people who have received great (often private) tuition.

In a field where finding every genius really matters because of the difficulty of expanding the frontiers, it's heartbreaking to realise most just never get the opportunity of a good mathematical education in the first place.


I don't think this is the whole story. Of course, it is a tragedy when people who had undeniable raw talent are neglected or passed over and don't get to achieve as they might. It's also sad and disappointing when we observe structural injustices in our society.

However, isn't it possible that giving great teaching resources to people who aren't already at the top level in terms of raw talent, allows them to succeed at a very high level? If so, it's not a zero sum game - the 'averagely clever people who have received great tuition' and who go on to succeed might be a sign of the education system working, not its failure. We might focus more on making sure a wider group of 'averagely clever people' get such good tuition, rather than on the finding and promotion of undiscovered geniuses.

Two examples that I often think about in this context: firstly the Budapest school of mathematics in the early 20th century. A hugely disproportionate number of the world-class mathematicians from that era were directly taught by Fejer and/or part of his seminar and mentee group. (He stands out as a supervisor to a far greater extent than any mathematician stands out from the pack for his own achievements.) Is it possible that while one or two of this group might have been 'born geniuses', it also includes several who might otherwise have been second-rate, but who became world-class because of the influence both of Fejer and of their high-performing colleagues?

Secondly, George Harrison of the Beatles. In the second half of the Beatles' recording career, he wrote some of their best-loved songs, several of which are regarded as absolute classics. Did the Beatles happen to contain three inherently gifted songwriters, by sheer coincidence? Or is it more likely that working for a decade alongside two extremely talented and successful songwriters nurtured and elevated Harrison far beyond what might otherwise have been expected?


> rather than on the finding and promotion of undiscovered geniuses.

There's a catch in it as well, which is that you cannot really identify a genius up-front. A genius is, by definition, someone who thinks differently. A lot of people think differently. But the thing with the genius is that he thinks differently and correctly. The latter part won't be obvious until after the fact.

When Alex Ferguson started out as a Manchester United manager he probably already was a genius, but it wasn't obvious until many years later.

Same with Sergey Brin and Larry Page. They probably had a genius vision for the company right from the start. But it's only in retrospect that we recognize that it was, in fact, a genius vision.

Finding a genius is, I think, almost by definition, impossible.


Alex Ferguson might have been a good football manager, but he wasn't a genius - and I say that as a Man United supporter. He didn't advance football in any significant way, and even towards the end of his career he was somewhat naive from a tactical perspective.

He was just a good man-manager in what is a sea of managerial mediocrity - football management is largely restricted to ex-pro-footballers who can't do anything else. Rinus Michels was a genius, Arrigo Sacchi was a genius - they actually advanced the sport.


On a similar note, Page and Brin were obviously smart and driven and all, but applying linear algebra to the search problem was arguably one of the things that was "in the air". It is hard to judge, because what seems obvious ex post was (obviously) not obvious ex ante. But, again, eigenvalue decomposition/SVD (and linear algebra more generally) - you throw it at the Netflix problem, you throw it at image compression, you throw it at anything really, something's gonna stick.

It's an interesting counterfactual: without Page, when would PageRank have come around? The idea that the stationary distribution of a Markov chain (under certain conditions) is given by the eigenvector for the largest eigenvalue, which is 1, is certainly decades old, if not a century.
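
For the curious, the core of the idea fits in a few lines of Python/numpy. This is a toy sketch with a made-up four-page link graph (not Google's actual implementation):

  import numpy as np

  # links[i][j] = 1 means page j links to page i (hypothetical tiny web).
  links = np.array([
      [0, 0, 1, 0],
      [1, 0, 0, 0],
      [1, 1, 0, 1],
      [0, 0, 1, 0],
  ], dtype=float)

  M = links / links.sum(axis=0)  # column-stochastic transition matrix
  n, d = M.shape[0], 0.85        # d is the usual damping factor
  G = d * M + (1 - d) / n        # "random surfer" teleportation term

  r = np.full(n, 1.0 / n)        # start from the uniform distribution
  for _ in range(100):           # power iteration converges to the
      r = G @ r                  # eigenvector for eigenvalue 1
  print(r)                       # the PageRank scores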


Yes, and also don't forget that the research field of Rajeev Motwani (their Ph.D. supervisor and an FFF investor) was randomized algorithms, so thinking about random walks was a bread-and-butter thing in his group, probably not something that required genius. Rajeev was a co-author of several seminal papers on the search engine, worked intensively with Google, and joined their board. According to Prabhakar Raghavan, Sergey Brin acknowledged in a BBC Radio interview in 2009 that Google might not have been possible without Rajeev. (Tragically, he passed away in 2009 after falling into his own pool; he had alcohol in his blood and could not swim.)


You start to lose correlation with IQ really, really quickly as you move away from feats of solving established math problems to football management and entrepreneurship. Solving established math problems is more like taking an IQ test than any other activity I can think of.


> Same with Sergey Brin and Larry Page. They probably had a genius vision for the company right from the start.

That seems inconsistent with reports that they tried to sell early to Yahoo! for $2m.


I think the Beatles included three inherently gifted songwriters by sheer coincidence. No doubt the apprenticeship helped, but Harrison was already writing songs like Taxman and experimenting with Indian classical music on Love You To in 1966.


It's not either/or, it's both. They were gifted, but they also had the time, money and motivation to work on their skills and hone their craft. It's not by coincidence that in literally every community that talks about how to level up their skills, the first response is always "practice practice practice".


By 1966, George Harrison had been in a group (with various names) with Lennon and McCartney for 8 years; their songwriting partnership had been actively writing songs for 5 or 6 years, and they had written almost all the songs for 6 or 7 albums, several of which are regarded as among the best albums of all time.


> If so, it's not a zero sum game - the 'averagely clever people who have received great tuition' and who go on to succeed might be a sign of the education system working, not its failure.

The failure is that mediocre students who come from an environment which helps them (good school, private tutors, stable home life) can be successful, while "naturally gifted" yet disadvantaged students can't.

The basic fact that parental income is still the best predictor of educational achievement should tell you everything you need to know.


> parental income is still the best predictor of educational achievement

Is that controlled for genetic influence though? Suppose smart people 1) have high income, 2) have smart kids.

Disentangling the various causal factors is substantially harder than your quote above suggests.


I read a study that specifically addressed this by looking separately at adopted kids. I'll try to look for it and post a link when I'm at my computer.


> The failure is that mediocre students who come from an environment which helps them (good school, private tutors, stable home life) can be successful, while "naturally gifted" yet disadvantaged students can't.

I think you have rephrased what I said to again push the idea that it's a zero sum game. If mediocre students can be successful, given the right educational inputs (for some definition of 'successful' and 'mediocre'), isn't this a huge opportunity? Perhaps we should improve society as well as social justice by trying to promote mediocre students from poorer backgrounds to this same level of success, rather than by working on the problem of undiscovered genius.

> The basic fact that parental income is still the best predictor of educational achievement should tell you everything you need to know.

My whole point is that this does not tell you everything you need to know. If low-parental-income and high-parental-income students had exactly the same potential to succeed, the fact that parental income is a good predictor of achievement might be telling you either (or any combination) of two things: that wealthier students are achieving beyond what should be expected (and so we are ending up with incompetent people in very senior positions because they did not get them on merit), or that poorer students are achieving below where they should be (and so we are wasting the potential of talented people). These are really different problems! And probably have quite different solutions.


I certainly agree that talented students from disadvantaged backgrounds ought to be helped. But thinking parental income is the causal factor behind the observed correlation of student success with income may be mistaken.

Two other hypotheses are that the money itself is not important, but rather the familial transmission of certain cultural traits (which also often lead to high incomes), and/or that, similarly, the causal factor is genetic transmission of some cognitive or personality traits. Note that a factor being genetic does not mean it is immutable - for instance, there could be a genetic factor that leads to students doing poorly in school because they are intolerant of sitting still for hours in desks, which would become irrelevant if one stopped requiring them to sit still for hours.

A mistaken idea of the causal explanation for the correlation with income would lead to interventions that don't actually work.


This is the problem right here. The solution is that maths teaching needs to be brought online, Duolingo/Brilliant style (with in-person teacher support, obviously). We can completely standardise the curriculum and do A/B testing to find out the most efficient way to get the information into students' heads.

The data could be analysed further to see if there were certain mathematical learning styles (I suspect there are). One cohort of students might be better off on the visual/geometry heavy track, one might be better on the abstract track etc.

It should be initiated at the national government level and mandated to use in schools but the actual building of the application should be contracted out to a company from Silicon Valley/Roundabout/$COUNTRY_TECH_HUB.

Until this happens or something similar we’re just going to continue having this grossly unequal and unfair state of affairs where access to good mathematics education is complete geographical potluck for anyone whose parents can’t afford tuition. The amount of wasted potential is staggering.


Given that even our first-tier colleges and universities still have no real idea how to teach math in a way that ensures that students will internalize and understand it, this isn't surprising. I think one of the most curious things about the modern world is that centuries into working with higher math, we still do not have good ways to teach it.

Most students with creative insights that could help advance the field tend to wind up one of two ways: 1) they slog through the existing math education system, but it crushes their creativity and insight, leaving them mostly useless husks (this pit catches most mathematicians, IMO), or 2) they become disgusted by the incomprehensibility of higher mathematics and abandon the field entirely (this huge chasm of a pit prevents almost all engineers and scientists from ever becoming more than moderately competent in mathematics and methods). We need to create a viable, replicable third way; today it seems to require exceptional personalities in both student and teacher.


I think you mean tutelage instead of tuition there (I could be wrong)


Very nice quote.

I also like to think that intelligence is everywhere, but the persistent pursuit of truth is difficult. Zhang embodies a man determined to work on his particular pursuit.

I hope I can be as persistent over time.


We detached this subthread from https://news.ycombinator.com/item?id=33516152.


This is EXACTLY what I'm thinking about. Tons of talent was wasted. To fix this we have to change the global political climate.


I mean Alan Turing proved that even a demonstration of your abilities isn't enough to avoid persecution.


But what about the sweatshop owners?


There was a scientist who did his best stuff while he was working as a patent clerk.


Patent clerk (at least back then) is a more skilled job than it sounds, as it's someone who actually reviews the patents rather than just a general office dogsbody, so you have to understand the technology and, to some extent, the science in those patents. It's not totally unrelated to being a scientist, and definitely a less surprising early-career job than delivery driver.


A patent clerk now is a technical job as well. It's a flaw in people's thinking that "clerk" implies something low-rank.

If someone tells you they were a "clerk" for a Supreme Court judge, don't be surprised when later in life they have an impressive legal career.


Pretty sure all the SC justices clerked in their early careers. I'm not in law, but I think it's the expected path.


We detached this subthread from https://news.ycombinator.com/item?id=33513275.


Ironically he didn't patent E=MC^2 and now everyone says it scot-free.


> he didn't patent E=MC^2 and now everyone says it scot-free

Laws of nature (and descriptions thereof) aren't patentable.

https://www.bitlaw.com/source/mpep/2106_04_b.html#:~:text=Th....


But he did learn to split the beer atom: https://www.youtube.com/watch?v=Ra7oqFqj9uU


That’s US law though (no idea if Germany had it any different at the time, but it’s a different jurisdiction).


Germany outlawed Einstein.


Interesting phrasing for wanting to genocide him and all the other Jews. Einstein got out in time; six million others didn't.


I'm sorry for the others, friend. I am here to laugh with you.


Hakuna matata! Downvotes are just a karma payment coming due.


No, but his image is licensed.


Do you know how much a patent clerk makes?


You’re missing the reference


I don't think they are, I think they're stating the difference between being a patent clerk in the early 1900s and working at Subway while living in your car is substantial.


You've never worked in the private sector... They expect results.


For those of you who don't watch many movies:

https://www.youtube.com/watch?v=RjzC1Dgh17A


Thank you, my faith in HN has been restored!


In this case, he makes a difference!



I didn't know how to make the links clickable. Thanks!


We'll do that.


The “lack of progress” in philosophy comes down to this. By the time of Socrates or Confucius, there had already been a lifetime’s worth of philosophical thought done that you needed to grapple with. Everyone since then has necessarily had to let something slip through in order to move forward at all.


I think we sometimes understate philosophy's progress: it's certainly not measurable in decades, but the world of 2022 looks very different (in terms of philosophical priors and their consequences) than the world of Socrates or Confucius. A handful of examples that come to mind:

* (Nearly) everybody on our planet lives under a government whose fundamental structure and right to power comes from modern (meaning 17th century) political philosophy.

* Everybody posting on this board is using a machine that only existed in the mind games (and works) of math philosophers just over a century ago.

* The philosophical foundations for contemporary science are very young, practically within living memory. Popper died less than 30 years ago!


Did those things appear because philosophers talked about them or did philosophers talk about them because they appeared?


I can easily imagine an Earth with no philosophers after say Aristotle that is functionally identical to our current one.


Minus evidence-based science, free markets, government of the people, freedom of religion and the end of slavery. Yep, exactly the same.


Might as well attribute all of that to Christianity. That claim probably holds as much water.


Attributing science and freedom of religion to Christianity? That's a bold thought.


And a correct one. There is a reason these things did not develop until when and where they did...


Pinning concepts like "science" and "freedom" to a specific era is a silly endeavor. But if you had to, neither would originate in Christian Europe: science as a concept predates Christianity by at least 800 years; the thing we call "freedom" today didn't take shape until 1600 years into Christian Europe's development.

Christianity, like just about every religion, has made invaluable contributions to philosophy. But it's ahistorical, to an embarrassing degree, to attribute such broad concepts and bodies of inquiry to it.


Wow. I guess you're not familiar with any philosophers other than Aristotle, otherwise you could never make such a statement.


Please elaborate


There should be a rule where you're not allowed to phrase a question as a false statement that insults a lot of people...


Without philosophy, there is no Hegel, without Hegel there is no Marx, without Marx there is no Lenin, without Lenin, there is no Stalin, without Stalin there is no Cold War, without the Cold War there's probably not a war in Ukraine right now, and without a war in Ukraine I wouldn't be worried about my gas bill. I think that's pretty concrete.

(Just a single, rather obvious example. Easy enough to find others.)


Each of the examples I gave has direct origins in philosophy. It’s not even clear what the alternative would be for Hobbes and Popper; for Frege you could call him a mathematician instead, I suppose, if you ignore all of the other philosophy he did.


The causality here is hard to prove, or even to untangle. I think the French Revolution was far more influential on the state of governments today. I also think that science was influenced not so much by Popper as by Fisher and Neyman. People tend to use whatever tool proves successful, and to be influenced by power and theatre, not by fragile logical constructs without empirical value, produced from the armchair.


Do you think the French Revolution happened in a vacuum? Without the ideals of Enlightenment philosophy it would have amounted to nothing more than mob violence (if it had even gotten started at all).


Good point, I agree that it did not. Were those ideas, however, progress, or a tool that the Committee of Public Safety used because it fit their agenda?

Was this philosophy finding truth that guided policy, or was it making up rhetorical arguments in a power struggle on behalf of the rising middle class and its policy?

In the absence of criteria for non-empirical truth that are independent of the very non-empirical truths they are criteria for, I tend to believe the latter.

In addition to this inherent weakness, philosophical arguments are not for the masses anyway. Not even philosophers usually agree on what they mean or entail. How could those idea(l)s have guided the revolution as more than superficial self-justification?

Remember that the enemies of the French Revolution had their philosophers too, they just lost.

Historical events usually have very mundane causes: class struggles, geopolitics, etc. Ideas may play a part too, but I believe not in the sense that they are the horses in front of the cart.


Well, the original point was whether philosophy had any influence on world events, not whether that influence was good or not.

To me, the idea that ideas don't play a major role in world history and politics seems odd. There's nothing more fundamentally human than trying to understand the world around you and wanting to shape it in a certain way. Of course material conditions are important; the French Revolution wouldn't have happened had the state not gone bankrupt. But it also wouldn't have happened in that way without Enlightenment ideas. Very basic principles of the revolution, some of which were kept by Napoleon and by subsequent generations and still live on today, were formulated by those Enlightenment thinkers. Just consider the declaration of human rights.


I understood the thread roughly this way: "philosophy makes progress; the evidence is that the modern world has adopted the more advanced rules of the Enlightenment and of science." And my reply was that this is highly questionable.

I’m saying that the Enlightenment ideas arose because the rising middle class needed justifications in their power struggle, not that the ideas formed a middle class that then acted upon them.

I agree with you that humans need ideas to tell a story of themselves. And that story better paint them as good! But that story is usually not the driving force.

So I fail to see progress, as I don’t see a criterion by which to measure progress in general. (Aristotle’s logic is a tool; there I do see how to compare it to FOL and judge which is better or worse for specific use cases.)

As for human rights: do I agree with them? Yes! Can I justify them? No! Do I know exactly where to draw the lines when two rights collide? No! Is it important to justify them in a philosophical way? I doubt it.


I don’t see any direct origins. Karl Popper was a small child when Einstein came up with relativity. Hobbes was just a reactionary apologist of absolute monarchy. And Frege’s philosophical contributions were world-changing as long as we can pretend that propositional calculus is philosophy.


I don’t think this is going to be productive, but to be clear: the claim isn’t that science didn’t exist before Popper, or that Popper somehow gets credit for all of science. It’s that Popper has produced the best description of science’s epistemic underpinnings thus far, allowing science to progress without imploding under the demands of positivism. In other words: when we “do” science, the epistemic model we use is less than a century old.

The claim that propositional calculus is in the domain of philosophy and mathematics is not controversial in either domain. Nor is it unusual: the history of philosophy is the history of discharging subjects once they develop a field of their own.


> It’s that Popper has produced the best description of science’s epistemic underpinnings thus far, allowing science to progress without imploding under the demands of positivism

I am not sure what that claim means precisely. Popper saved science from… a certain group of philosophers?

> when we “do” science, the epistemic model we use is less than a century old

The epistemic model is whatever the scientist in question uses. One can be a Popperian, another can claim they are guided by God, and someone else would just say “shut up and calculate.”

Popper didn’t like Bohr and his Copenhagen interpretation from the philosophical standpoint yet he didn’t deny that Bohr was a good physicist.

> The claim that propositional calculus is in the domain of philosophy and mathematics is not controversial in either domain. Nor is it unusual: the history of philosophy is the history of discharging subjects once they develop a field of their own.

The context of this discussion is that philosophy progresses over time. You cannot say that it progresses just because we put the label “philosophy” on every new thing that isn’t formalized yet. In that trivial sense, of course, philosophy has made great progress, especially in its subfield of natural philosophy, nowadays known as science. But did Frege really stand on the shoulders of Hegel, Spinoza and Kant and improve on their ideas?


> Popper has produced the best description of science’s epistemic underpinnings thus far

Ok, sure.

> allowing science to progress without imploding under the demands of positivism

How is giving a description of something necessary for its continued progress? Were people not having sex before evolutionary psychologists and biologists elucidated how and why?


You might be interested in the fact that Einstein himself claimed that he might not have stumbled upon relativity if it hadn't been for David Hume:

https://aeon.co/essays/what-albert-einstein-owes-to-david-hu...


I second your other two points but I think this one is inaccurate:

> * (Nearly) everybody on our planet lives under a government whose fundamental structure and right to power comes from modern (meaning 17th century) political philosophy.

A large number of countries are ruled by dictators and their close supporters. Sometimes they position themselves within the boundaries of a more or less modern ideology but ultimately care only about being the one in command and use that ideology as lipstick. This is a straight continuation of prehistory, not something stemming from 17th century.


I meant more the structure of and justification for the state and sovereign, not liberal ideology. You’re absolutely right that a large number of people don’t live under the latter.


What is most annoying is: many of the dictators were installed by "democracies" so that these can continue their unsustainable lifestyles, at the expense of extreme suffering elsewhere.

...and I disagree with the parent's parent's point on progress in philosophy: all three examples are not progress of philosophy but at best influence or application of it, and in the last case did not involve trained/professional philosophers (e.g. Turing was a mathematician/chemist turned cryptanalyst/computer scientist).


We measure philosophy's success relative to finding the meaning of life. On the other hand, everybody agrees that computers have advanced enormously without yet achieving AGI.


“We” don’t do anything of the sort. The meaning of life is a Douglas Adams joke; philosophy’s scope extends slightly further.


The measure of success is different from school to school, if any exists at all in a given school. And there is no lack of schools.


We detached this subthread from https://news.ycombinator.com/item?id=33515219.


Which lack of progress? The few encounters I've had with logic recently (in a philosophical sense, although it feels more "mathy") were all with more recent works, certainly more recent than a large part of the math I use!


[flagged]


"Cultural Revolution" is actually a very apt name for what happened. One of the core concepts of it was the destruction of the "four olds": old ideas, old culture, old customs, and old habits. They were horrifyingly efficient at it, resulting in what was one of the single-largest destructions of cultural heritage in history, and almost certainly the largest one perpetuated by one's own people (albeit the original iconoclasts probably give them a run for their money).


Everyone in Asia did or is doing the same thing.

Meiji did this in Japan, to the point where many Buddhist temples were burnt down (thankfully not as bad as Mao) and the whole of Japan's culture was considered obsolete. KMT boys were doing the same thing in China before Mao. Nehru and his Congress cretins, who inherited the British state, did the same, and India continues these "destroy Bharata" policies to this day.

In fact, since a lot of Asian culture is tied to India, one might call this an extension of the millennia-long war on India from the West, but now in the realm of ideas. Not surprising that occidental academia has a deep hatred for India's old culture and everyone who imbibes it (at the same level as medieval anti-semitism, from my own experience). They also have the victim complex of Islam (and Xtianity, to a lesser extent) supporting them now.


Westernization is not the same thing as throwing away ALL your cultural heritage and adopting Marxist ideology to the letter...

You might argue it's a difference in degree, but if the Cultural Revolution had actually succeeded (or not been reverted), it would have been the difference between zero and non-zero cultural heritage left.


Please do not take HN threads on generic ideological tangents.

https://news.ycombinator.com/newsguidelines.html


For anyone else who was curious: the Reagan version of this quote used "I'm", not "I am". https://www.reaganfoundation.org/ronald-reagan/reagan-quotes... I mention this not to be nitpicky, but just because googling the quote with "I am" didn't work well.


It's not really remarkable, because governments rarely come up with nasty names. Even then, "cultural revolution" isn't an inherently positive name. Culture can be good or bad, and so can revolutions.


Of course they don't. But what's remarkable, and the U.S. is no exception here, is that the names they come up with are quite often the diametric opposite of what effects they actually have. The "Inflation Reduction Act of 2022" is but one recent example.


> It's the old "I'm from the government and I am here to help" joke.

What else is the government for? I understand it's a popular Reagan talking point but I don't really grok it.

If you assume the govt doesn't work for us, particularly if you're in politics, it becomes a self-fulfilling prophecy.


This is probably not the best thread for a political discussion, but I will take a shot. Nobody disputes that governments should be a force for good. Governments obviously can do both good and bad. Many atrocities like the Cultural Revolution came about under the pretext of trying to help. If you think the government has veered into doing bad, it is natural to want to reform it and make it stop doing that. The talking point and metaphor simply convey a belief that some of the government's work is bad and should be stopped. It's a lot more complicated, but that's the five-year-old explanation of it.


> the talking point and metaphor simply conveys a belief that some of the governments work is bad and should be stopped

This is a nuanced and thoughtful position. The statement "the nine words you never want to hear are 'I'm from the government and I'm here to help'" is not nuanced and thoughtful. It doesn't suggest governments do good and bad; it positions governments (and their agents) as incapable of doing good, implying that someone from the government trying to help you must not actually be helping, for whatever reason.


I don't know what to tell you. It's not a literal statement. At best it is a heuristic, and more often a platitude.

It's like "don't talk to cops" or "corporate HR works for the company, not you."

If you were to take it at face value and debate against it, you would be committing the fallacy of secundum quid. If you want to have a real discussion about how much good and/or bad the government does, you need to find a real human to have that discussion with. The saying alone is not a complete, self-contained philosophy.

https://en.m.wikipedia.org/wiki/Secundum_quid#:~:text=Secund...


Government that limits its own power by law is humanity's greatest political invention.

Anyone can make a government that sometimes does good. The challenge is to create a government that cannot do unfathomable evil.


The idea is that government is inherently bad. But no government is worse.

People in government will abuse their positions as much as possible.

So limiting government as much as possible is the goal.

A good example is the pandemic. Governments saved a small number of people by collapsing economies, quite possibly killing more people than were saved. There is a very good argument that the best thing governments could have done is "nothing".


Government is the worst possible way to solve any problem. Therefore, we should only let it solve those problems that absolutely cannot be solved any other way. There are plenty, and this is not a statement that all government is bad, but that it needs to be tightly constrained.

People constantly rail about the evil wrought by big business, and they're right to do so, but what is the government (in the US anyway), but the biggest (and perhaps most corrupt) business of all? Except it's not constrained by the requirement to turn a profit, so it doesn't suffer the bad consequences of failure, because you can always print more money.

And you're absolutely right about the pandemic response.


> To give a sense of the scale of this claim: If correct, Zhang's work is the most significant progress towards the Generalized Riemann Hypothesis in a century.

Thanks, that totally failed to give any sense of scale.


It's the moral equivalent of making major headway against P!=NP; or, proving that there are no global hidden variables in QM; or, that there's a clear path ("just engineering") to room-temperature semiconductors.


>room-temperature semiconductors

I suppose you mean superconductors. Semiconductors are well within the room-temperature regime :)


the path is just really, really clear


I can't decide if this is my favorite autocorrect, or least favorite. I'll let it stand — mostly because it's too late to edit!


I feel like saying this is similar to making progress on P!=NP is not accurate (extreme disclaimer: I have no formal math training). My understanding of P!=NP is that an answer to that has strong implications for the nature of the concept of determinism _of reality_, let alone most cryptography and lots of other stuff too. From my quick scan of the GRH wikipedia article, it doesn't appear that the GRH has the same widespread almost philosophical implications as P!=NP.


I think P != NP is extremely interesting, but I wouldn't say it has strong implications for the nature of the concept of determinism or reality. I think that the idea of a Turing machine / the notion of computability has deep philosophical implications, but even that I wouldn't say has implications for "the nature of reality."

If you think that prime numbers are interesting, then I can tell you that GRH is the single most central conjecture in the study of prime numbers. Personally, I think prime numbers are some of the most fundamental and intrinsically interesting objects in pure math, but of course, this is subjective!
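
For concreteness, here is one standard way to state it (my paraphrase in the usual notation, not anything specific to Zhang's paper). For a Dirichlet character chi, write

    L(s, chi) = \sum_{n=1}^{\infty} chi(n) / n^s

GRH asserts that every nontrivial zero rho of L(s, chi), i.e. with 0 < Re(rho) < 1, satisfies Re(rho) = 1/2. A Landau--Siegel zero would be a real zero of such an L-function extremely close to s = 1, so ruling those out eliminates the worst conceivable violation of GRH.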


>has deep philosophical implications, but even that I wouldn't say has implications for "the nature of reality."

What would be an example of a "deep philosophical" implication that has no bearing on the "nature of reality"?


>Moreover, I think this result would not only be a more significant advance than Zhang's previous breakthrough, but also constitute a larger leap for number theory than Wiles' 1994 proof of Fermat's Last Theorem (which was, in my opinion, the greatest single achievement by an individual mathematician in the 20th century).

Was the rest of the context not more enlightening?

E.g., it would constitute a larger leap than proving Fermat's Last Theorem, which was "the greatest single achievement by an individual mathematician in the 20th century".


After a glimpse at the Generalized Riemann Hypothesis wiki page, I'm afraid there's no football-stadium or Olympic-pool comparison for us.


I wish for a time when "(if correct)" would be unnecessary, because correctness could be checked in a fraction of a second by something like Coq and such a machine-checked proof accompanied every known theorem.
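
As a taste of what that could look like, here is a minimal Lean 4 sketch (assuming mathlib is available; the lemma Nat.exists_infinite_primes is mathlib's, though the exact import path may drift between versions) where a classical statement about primes is verified mechanically:

    import Mathlib.Data.Nat.Prime.Basic

    -- Machine-checked: for every n there is a prime p with n ≤ p.
    -- The kernel verifies this proof in a fraction of a second.
    example (n : ℕ) : ∃ p, n ≤ p ∧ Nat.Prime p :=
      Nat.exists_infinite_primes n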



