gavagai691's comments

"Save for Maynard, a 37-year-old virtuoso who specializes in analytic number theory, for which he won the 2022 Fields Medal—math’s most prestigious award. In dedicated Friday afternoon thinking sessions, he returned to the problem again and again over the past decade, to no avail. At an American Mathematical Society meeting in 2020, he enlisted the help of Guth, who specializes in a technique known as harmonic analysis, which draws from ideas in physics for separating sounds into their constituent notes. Guth also sat with the problem for a few years. Just before giving up, he and Maynard hit a break. Borrowing tactics from their respective mathematical dialects and exchanging ideas late into the night over an email chain, they pulled some unorthodox moves to finally break Ingham’s bound."

This quote doesn't suggest that the only thing unorthodox about their approach was using some ideas from harmonic analysis. There's nothing remotely new about using harmonic analysis in number theory.

1. I would say the key idea in a first course in analytic number theory (and the key idea in Riemann's famous 1859 paper) is "harmonic analysis" (and this is no coincidence because Riemann was a pioneer in this area). See: https://old.reddit.com/r/math/comments/16bh3mi/what_is_the_b....

2. The hottest "big thing" in number theory right now is essentially "high dimensional" harmonic analysis on number fields https://en.wikipedia.org/wiki/Automorphic_form, https://en.wikipedia.org/wiki/Langlands_program. The 1-D case that the Langlands program is trying to generalize is https://en.wikipedia.org/wiki/Tate%27s_thesis, also called "Fourier analysis on number fields," one of the most important ideas in number theory in the 20th century.

3. One of the citations in the Guth–Maynard paper is the following 1994 book: H. Montgomery, Ten Lectures on the Interface Between Analytic Number Theory and Harmonic Analysis, No. 84, American Mathematical Society, 1994. There was already enough interface in 1994 for ten lectures, and judging by the number of citations of that book (I've cited it myself in over half of my papers), there is much more interface than just that!

What's surprising isn't that they used harmonic analysis at all, but where in particular they applied harmonic analysis and how (which are genuinely impossible to communicate to a popular audience, so I don't fault the author at all).

To me your comment sounds a bit like saying "why is it surprising to make a connection." Well, breakthroughs are often the result of novel connections, and breakthroughs do happen every now and then, but that doesn't make the novel connections any less surprising!


"It has been proved all the zeroes are within a narrow strip centred on the line and you can make the strip as arbitrarily narrow as you like."

Nothing close to this is known.

The nontrivial zeros of zeta lie within the critical strip, i.e., 0 <= Re(s) <= 1 (in analytic number theory, the convention, going back to Riemann's paper, is to write a complex variable as s = sigma + it).* The Riemann Hypothesis (RH) states that all nontrivial zeros of zeta are on the line Re(s) = 1/2. The functional equation implies that the zeros of zeta are symmetric about the line Re(s) = 1/2. Consequently, RH is equivalent to the assertion that zeta has no zeros for Re(s) > 1/2. A "zero-free region" is a region in the critical strip that is known to contain no zeros of the Riemann zeta function; RH is equivalent to the assertion that Re(s) > 1/2 is a zero-free region.

The main reason we care about RH is that it would give essentially the best possible error term in the prime number theorem (PNT) https://en.wikipedia.org/wiki/Prime_number_theorem. A weaker zero-free region gives a weaker error term in the PNT. The PNT in its weakest, ineffective form is equivalent to the assertion that Re(s) >= 1 is a zero-free region (i.e., that there are no zeros on the line Re(s) = 1).
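To make the dependence concrete, here are the standard textbook error terms side by side, in LaTeX (these are well-known facts, nothing specific to Guth–Maynard):

    % Error term in the PNT under different zero-free regions (standard facts):
    \pi(x) = \operatorname{Li}(x) + O\!\bigl(x \, e^{-c\sqrt{\log x}}\bigr)
        \quad \text{(classical zero-free region)}
    \pi(x) = \operatorname{Li}(x) + O\!\bigl(\sqrt{x}\,\log x\bigr)
        \quad \text{(assuming RH)}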

The strongest zero-free region currently known for zeta is the Vinogradov–Korobov zero-free region. This is the best explicit form of Vinogradov–Korobov known today: https://arxiv.org/abs/2212.06867 (a slight improvement of https://arxiv.org/abs/1910.08205).
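For reference, the Vinogradov–Korobov region has the following shape (standard form; c is an absolute constant): zeta has no zeros with

    \sigma \ge 1 - \frac{c}{(\log t)^{2/3}\,(\log\log t)^{1/3}}
        \quad \text{for } t \text{ sufficiently large}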

I think your confusion stems from the fact that approximately the reverse of what you said is true. That is, the best zero-free regions we know get arbitrarily close to the Re(s) = 1 line (i.e., become increasingly "useless") as the imaginary part tends to infinity. Your statement suggests that the region we know contains the zeros gets arbitrarily close to the Re(s) = 1/2 line (which would be amazing). In other words, rather than our knowledge being about as close to RH as possible (as you suggested), our knowledge is about as weak as it could be. (See this image: https://commons.wikimedia.org/wiki/File:Zero-free_region_for.... The blue area is the zero-free region.)

* I don't like this convention; why is it s = sigma + it instead of sigma = s + it? Blame Riemann.


Thank you! I periodically remember this and for years I've been meaning to try to find out for sure whether I had somehow just misunderstood completely. Very pleased to know for sure that I had and that the Riemann Hypothesis remains a genuine mystery.


You don't think that a model is something abstract? Abstract doesn't have to imply nonphysical in the sense that people think souls or God are nonphysical. I mean abstract in the sense that language, mathematics, or a sketch are abstract.

To expand on this: I think models are representations, and whether or not something is a model depends in some way on human minds. (In particular, it depends on whether something would be understood by a human mind to be a representation.)

I don't think that any correlation between physical systems qualifies one as a model for the other. Your definition as written would include any two things that are connected causally, or have a common cause, as models for one another. One problem (though not the only one) that I have is that your definition removes any mention of human minds.

In particular, I think "representation" is, broadly speaking, some kind of correspondence relationship between linguistic or pictorial things (where I include mathematics as "linguistic") and physical reality, and "a representation" is some linguistic or pictorial thing that corresponds to reality. I think that a model is a kind of representation.

A model is a kind of representation where for convenience and tractability, certain aspects of reality are left out or "abstracted away" (deliberately), with the goal of understanding the real world by understanding the simpler representation of the real world.


> You don't think that a model is something abstract?

Models definitely don't have to be abstract. For example, researchers will talk about studying a disease or the effectiveness of a treatment in a "mouse model"[1].

That model is an actual concrete mouse that is being used as a model of a human. It's not abstract in the sense of language, mathematics, or a sketch, and they do the research by looking for the physical effects on the mouse model and drawing an analogy to what would correspondingly happen in a human.

[1] e.g., https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7121329/


> You don't think that a model is something abstract?

They certainly can be, but they don't have to be. An orrery, for example, is not abstract.

> whether or not something is a model depends in some way on human minds

Well, yeah, but what exactly is that "some way"? That is exactly the complication I was trying to avoid.

Maybe the word "model" is too loaded and I need to pick a different word. I'm open to suggestions.


Well, it is an empirical question whether or not matter is continuous and infinitely divisible, or discrete. Our best theories tentatively suggest that matter is discrete (though it is hard to say how much confidence to put in that, or how we could really know either way).


While interesting, that doesn't matter for what I said.

An example of a key question is whether mathematical constructions can use the absolute truth of statements about other mathematical constructions, even if those absolute truths are not themselves something that can be settled by any algorithm.

If they can, then we get all of the weirdness of infinite set theory, such as that there must be more reals than rationals. If they can't, then all of mathematics could fit within a countable universe, and things like Cantor's diagonalization proof just demonstrate a Halting-problem kind of self-reference in the definitions of the real numbers.

And this brings us to my point. It doesn't matter whether the universe is continuous and measured to finite precision, or discrete. The set of measurements that we could potentially ever make within this universe is finite (though large). The set of measurements that could be made in principle from the principles that we can discover within this universe is countable. And we have no way to produce an oracle that can always decide truth or falsity. Therefore reality cannot encompass the actual existence of the uncountable infinities that ZFC claims must exist.


I agree that this is a problem with the above definition of model, analogous to the (IMO fatal) problems with https://en.wikipedia.org/wiki/Deductive-nomological_model#We....


That has to do with causality, which is a complicated topic that I was trying very hard to avoid at this point in the exposition. But maybe that was a mistake.


No, that's not exactly a sci-fi concoction. In special and general relativity, there are three dimensions for space and one dimension for time, and this is not of merely "incidental" importance to special/general relativity; thinking of the universe as (curved) four-dimensional spacetime is a pretty essential shift in perspective in these theories.

But "dimension" is something mathematical. I would say it doesn't quite make sense to say "is the fourth dimension time" in the same way as it wouldn't make sense to say "is the fifth an apple?" The same way that numbers can refer to different things in different contexts (including in the context of different scientific theories), dimensions can correspond to different things in different contexts. For example, statistics and machine learning heavily use "high dimensional" mathematics, but there the "dimensions" would correspond to different variables you are trying to predict or explain. E.g. if you were trying to predict chance of heart attack from 1000 different factors, then you would have 1000+1 total "dimensions," and in that case the "fourth dimension" might be "cigarettes smoked per week" (rather than time).


Contextuality of dimension exists even within a specific scientific theory. In relativity, the direction you call time might contain some component of the direction I call space. This implies notions like simultaneity are not well defined in a universal context.


Sure. Seating Pokemon trainers and Pokemon, where each trainer brings their Pokemon.

Idea stolen from this video on Gale-Shapley, which is also typically presented in a "gendered / heteronormative" way (but doesn't have to be): https://www.youtube.com/watch?v=fudb8DuzQlM.
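For concreteness, here is a minimal Gale-Shapley sketch in that framing (the names and preference lists are my own illustrative choices, not from the video):

    # Gale-Shapley stable matching: trainers "propose", Pokemon accept or trade up.
    def gale_shapley(proposer_prefs, acceptor_prefs):
        # rank[a][p]: position of proposer p in acceptor a's list (lower = better)
        rank = {a: {p: i for i, p in enumerate(prefs)}
                for a, prefs in acceptor_prefs.items()}
        free = list(proposer_prefs)          # proposers not yet matched
        next_choice = {p: 0 for p in free}   # index of next acceptor to try
        match = {}                           # acceptor -> proposer
        while free:
            p = free.pop()
            a = proposer_prefs[p][next_choice[p]]
            next_choice[p] += 1
            if a not in match:
                match[a] = p
            elif rank[a][p] < rank[a][match[a]]:
                free.append(match[a])        # displaced partner becomes free
                match[a] = p
            else:
                free.append(p)               # rejected; p proposes again later
        return match

    trainers = {"Ash": ["Pikachu", "Eevee"], "Misty": ["Eevee", "Pikachu"]}
    pokemon  = {"Pikachu": ["Misty", "Ash"], "Eevee": ["Ash", "Misty"]}
    print(gale_shapley(trainers, pokemon))   # {'Eevee': 'Misty', 'Pikachu': 'Ash'}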

I mean there are so many possible examples, right? Anything where you have pairs of partners: bridge, tennis doubles, etc.


My background in philosophy is "studied a decent amount of it in college," so I certainly could be wrong here. But to me the paragraph that you are describing as "just plain wrong" seems relatively measured, consistent with the SEP article, and consistent with what I learned about logical positivism through my philosophy courses and textbooks (e.g., Schwartz's Brief History of Analytic Philosophy, Godfrey-Smith's Theory and Reality, Ney's Metaphysics, Devitt and Sterelny's Language and Reality). Here is said paragraph:

"As in most philosophical movements, not all positivists agreed with each other, but they generally agreed that if you couldn’t verify something, it was meaningless."

Here is Ney on positivism (p. 121):

"Logical positivists differ on what verification must involve in any particular case, but a pillar of logical positivism was the view that there are two basic kinds of verification: by analytic and by synthetic methods."

Or Godfrey-Smith (p. 27):

"I turn now to the other main idea in the logical positivist theory of language, the verifiability theory of meaning."

You can also look at Devitt and Sterelny's discussion of verificationism, or the section on logical positivism in Schwartz. They are pretty consistent with the quoted paragraph from Philosophy Bro.


"The sad part is that as the trend continues we may reach a point where a mathematician's intellectually productive life is not sufficient to contribute anything novel, statistically speaking."

People talk about this a lot. While I think it could happen for certain subdisciplines (it already takes essentially an entire PhD's worth of time to learn all the necessary background to be an algebraic geometer, so most algebraic geometry PhD students publish nothing besides their thesis during their PhD studies), it can never happen to mathematics as a whole. If one part of math gets too deep, you can always go somewhere else, where the water is still "shallow."


I’m not so sure. The same argument would apply to theoretical physics in 1960. Circa 2023, there are remarkably few shallow parts of physics.

Math as a whole may last longer, but this list reminds us how far we’ve come in a mere few millennia: https://usercontent.irccloud-cdn.com/file/SaI50Q1d/166786520...

On the timescale of civilization, it seems less and less likely that lone mathematicians can revolutionize the field.

We’re fortunate to have been born so early, relatively speaking.


> it seems less and less likely that lone mathematicians can revolutionize the field.

Which inspires the question: how much can cutting edge math be parallelized?


But you can manufacture new areas of mathematics: invent something like Conway's Game of Life, for example, and then prove theorems about it.


Physics is limited by having to represent phenomena in our physical world simply.

Mathematics is not just a small integer multiple larger than this.


This is almost correct.

It's not that math is vastly larger (or more sophisticated) as a field; both fields are infinitely large in many senses. Rather, the number of respectable starting points where you can do interesting things is much larger in math, by orders of magnitude.


There are plenty of unexplored things in physics too; the key word is "respectable".

Looking from the outside, physics suffers a lot from fashion and hot-trend tendencies, where you need to be doing the "hot" thing to make the jumps necessary to reach the coveted tenure track; otherwise, you get kicked out.


Then you spend your whole PhD reinventing a wheel that has a different name in a 30-year-old textbook from the next field over, and neither your peers nor your professor has any awareness of that.


Or, as Juergen Schmidhuber likes to point out, in the same field.


I suspect computers can also help us get deeper. Stuff like computer algebra systems.

Maybe some CAS-assisted work gets us into feedback loops allowing us to go indefinitely, as in a technological singularity.

But the "shallow" part is also quite wide.

You can teach people what you've learned forever, for instance.


Do you have experience with CAS? I would love to learn how to use a CAS to write proofs more effectively.


A CAS can help with the more mechanical parts of a proof. I use one often to quickly check whether something can be rewritten into something else I want. The sad part is that even if the CAS doesn't find a solution, that doesn't mean there is no solution; it just didn't find one. But it can save a lot of work if the first thing you do is a quick check: if you're lucky, you just saved yourself a lot of effort.
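For example, here is a minimal sketch of that "check first" workflow using SymPy (sympy.simplify and sympy.expand_trig are real SymPy functions; the identities are just toy examples):

    import sympy as sp

    x = sp.symbols('x')
    # simplify() returning 0 confirms the rewrite; a nonzero result is
    # inconclusive -- the CAS may simply have failed to find it.
    print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2 - 1))                     # 0
    print(sp.simplify(sp.expand_trig(sp.cos(2*x)) - (1 - 2*sp.sin(x)**2)))  # 0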

I don't know what field you are in so it may or may not be helpful to you.


One very common and simple use case is looking for counterexamples. If your conjecture states, for example, that "all matrices with property X also have property Y," then it is quick and easy to generate a whole bunch of matrices with property X and check that they all have property Y. Of course that doesn't actually prove anything, but it can disprove a statement and save you a bunch of time chasing down a dead end.
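As a hedged sketch (the claim being tested is my own toy example, not anything from the thread): random search quickly refutes "every symmetric matrix with positive entries is positive definite."

    import numpy as np

    rng = np.random.default_rng(1)
    for _ in range(1000):
        A = rng.uniform(0.1, 2.0, size=(3, 3))
        A = (A + A.T) / 2                  # symmetric, entries still positive
        eigs = np.linalg.eigvalsh(A)
        if eigs.min() <= 0:                # not positive definite
            print("counterexample found:\n", A, "\neigenvalues:", eigs)
            break
    else:
        print("no counterexample found (which proves nothing)")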


To be honest, my experience is limited to double-checking my algebra with the free Wolfram Alpha. I need it maybe a few times a year.


> If one part of math gets too deep, you can always go somewhere else, where the water is still "shallow."

Yes, but the shallow areas aren't very interesting, which is why people work in the deep areas.


Most of the now-deep, now-interesting areas were once shallow and uninteresting.


There are also occasional realignments where the deep stuff gets shallower. It takes a lot of rickety scaffolding to get to a new place, and occasionally the finished product stands fine on its own.

The simplest example that comes to mind is that you can learn group theory without really needing to know anything about Galois theory. I also imagine there's a lot of good math that has shed vestigial physics...


Isn't this evidence that this process has already started? The difficulty of making progress in certain areas of math pushes mathematicians to the shallower areas, where it's easier to make a contribution. Progress in the former will stall, and at some point the shallower areas will become less shallow and the same phenomenon will recur.

Maybe we will keep discovering new shallow areas forever, but I suspect this is not the case. In any case, this is a phenomenon that I think will play out over the next few hundred years: not sufficiently impactful in the next few decades, but more and more noticeable.


I think P != NP is extremely interesting, but I wouldn't say it has strong implications for the nature of the concept of determinism or reality. I think that the idea of a Turing machine / the notion of computability has deep philosophical implications, but even that I wouldn't say has implications for "the nature of reality."

If you think that prime numbers are interesting, then I can tell you that GRH is the single most central conjecture in the study of prime numbers. Personally, I think prime numbers are some of the most fundamental and intrinsically interesting objects in pure math, but of course, this is subjective!


>has deep philosophical implications, but even that I wouldn't say has implications for "the nature of reality."

What would be an example of a "deep philosophical" implication that has no bearing on the "nature of reality"?

