The Riemann Hypothesis (utexas.edu)
294 points by gballan 37 days ago | 87 comments



Here's a quite friendly elucidation by the inimitable Avi Wigderson, from: https://www.ias.edu/ideas/2009/wigderson-randomness-pseudora...

> Let’s elaborate now on the connection (explained on the cover of this issue) of the Riemann Hypothesis to pseudorandomness. Consider long sequences of the letters L, R, S, such as

> S S R S L L L L L S L R R L S R R R R R S L S L S L L . . .

> Such a sequence can be thought of as a set of instructions (L for Left, R for Right, S for Stay) for a person or robot walking in a straight line. Each time the next instruction moves it one unit of length Left or Right or makes it Stay. If such a sequence is chosen at random (this is sometimes called a random walk or a drunkard’s walk), then the moving object would stay relatively close to the origin with high probability: if the sequence was of n steps, almost surely its distance from the starting point would be close to √n. For the Riemann Hypothesis, the explicit sequence of instructions called the Möbius function is determined as follows for each step t. If t is divisible by any prime more than once then the instruction is Stay (e.g., t=18, which is divisible by 32). Otherwise, if t is divisible by an even number of distinct primes, then the instruction is Right, and if by an odd number of distinct primes, the instruction is Left (e.g., for t=21=3x7 it is Right, and for t=30=2x3x5 it is Left). This explicit sequence of instructions, which is determined by the prime numbers, causes a robot to look drunk, if and only if the Riemann Hypothesis is true!
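
If you want to watch this walk yourself, here's a rough Python sketch (my own, not from the article; the naive trial-division Möbius function below is only meant for small n):

  import math

  def moebius(t):
      # Naive trial division: 0 if some prime divides t more than once ("Stay"),
      # otherwise +1 / -1 for an even / odd number of distinct prime factors ("Right" / "Left").
      count, d = 0, 2
      while d * d <= t:
          if t % d == 0:
              t //= d
              if t % d == 0:
                  return 0
              count += 1
          d += 1
      if t > 1:
          count += 1
      return 1 if count % 2 == 0 else -1

  n = 100000
  position = sum(moebius(t) for t in range(1, n + 1))   # this is the Mertens function M(n)
  print(position, math.sqrt(n))   # RH <=> |M(n)| grows slower than n^(1/2 + eps) for every eps > 0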


> monomer-dimer problem

Oh hey, I did my undergrad thesis on that! It generates neat looking graphics:

https://imgur.com/a/Z6hySAw


Looks like an X-ray pic of a system on a chip.


I did a Masters in Biophysics; the topic was diffusion of proteins in cell membranes, which usually show random walks (albeit restricted to compartments or showing distinct speeds (bound/unbound?)). That graph does not look like a random walk, so the Riemann Hypothesis is false?


The graphic comes from certain assumptions that do not hold in the general case. Specifically, that of non-isotropic fragmentation and arbitrary fragmentation depth.


Something's going on in there.


> (e.g., t=18, which is divisible by 32)

Did they mean 3x3x2 instead of 32? There is no way 18 is divisible by 32 in any common sense of divisible.

> This explicit sequence of instructions, which is determined by the prime numbers, causes a robot to look drunk, if and only if the Riemann Hypothesis is true!

So, do they look drunk for large walks? That sounds like something that is easily computed for tens of thousands of steps.


It has been computed for at least 10^16 steps, and it sure does look drunk, but sadly that does not constitute a proof.

https://en.m.wikipedia.org/wiki/Mertens_function


They meant 3^2 = 9.


what does "look drunk" actually mean tho? it's a bit of a weird property...


The drunken man drifts away from where he started (any point in time can be labelled as "start"): after n steps, his typical distance from that point is only about √n, even though he has walked n steps in total. And the direction is random. This works for 1D motion as well. Actually, in 1D and 2D the drunken man is certain (probability 1) to hit his starting point again at some point; in 3D the probability of ever returning drops below 1, and it keeps falling in higher dimensions.
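
A tiny simulation makes the √n behaviour visible (a throwaway sketch, nothing rigorous):

  import math, random

  def walk(n):
      # n coin flips of -1/+1; return the final distance from the origin.
      return abs(sum(random.choice((-1, 1)) for _ in range(n)))

  n, trials = 10000, 200
  avg = sum(walk(n) for _ in range(trials)) / trials
  print(avg, math.sqrt(n))   # the average distance lands within a small constant factor of sqrt(n)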


For 1D, it's trivial to see that a side scroller with side-strafing controls (Space Invaders, not Asteroids) will return to the center with probability 1 if L and R appear with equal probability, in Euclidean geometry.


As the parent poster quotes, drunk looks like:

> if the sequence was of n steps, almost surely its distance from the starting point would be close to √n.

Of course that raises the question "What is close?"


Mathematics is a uniquely beautiful field to me. The commutative property has always struck me as special in its own way. 2 x 3 = 3 x 2 feels so obvious, but multiplication is really just addition, and 2 + 2 + 2 = 3 + 3 is far less intuitive, yet states the very same claim.

Most fascinating to me is that many theories are effectively 1-way functions. Entire branches of mathematics have been developed to prove otherwise trivially stated claims. It is something to marvel at, if from a safe distance.


The notion that multiplication is ("just") repeated addition is the hardest misconception to overcome; it blocks progress in mathematics.

Repeated addition is an algorithm that works when you have a non-negative integer argument. Imagining that the algorithm defines the operation limits conception.


I think the big conceptual leap in understanding math is getting beyond the idea that it’s about counting concrete objects in general, and realizing that you can create new sorts of abstract structures that may or may not have anything to do with reality and rules for working with them.


True! I was enlightened in class when we defined addition and multiplication on three random characters then checked if the commutative property holds.

Turned out you can do whatever you want... It's just that usually you want addition to be commutative, and usually you want to use an infinite set of symbols called numbers.
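
You can replay that classroom exercise in a few lines; a throwaway sketch with a made-up table (not the one from my class):

  from itertools import product

  symbols = "abc"
  # An arbitrary "multiplication" table on three symbols, filled in however we like.
  times = {
      ("a", "a"): "a", ("a", "b"): "c", ("a", "c"): "b",
      ("b", "a"): "a", ("b", "b"): "b", ("b", "c"): "c",
      ("c", "a"): "c", ("c", "b"): "a", ("c", "c"): "b",
  }

  commutative = all(times[(x, y)] == times[(y, x)] for x, y in product(symbols, repeat=2))
  print(commutative)   # False for this table: a*b = c but b*a = a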


> hardest misconception to overcome

This seems like an exaggeration.


More accurately, it’s a colloquialism.


The number of people who fail to overcome it is close to the number of people who never get a grasp of what mathematics is about. Ask them. It comes up all the time.


Yes, but your inference that having a concept of multiplication as repeated addition leads people to “not grasp mathematics” is backwards.

I have seen little evidence that people who become interested in mathematics have an unusually difficult time “overcoming” this viewpoint.


Their difficulty letting go of repeated addition leads them to consider mathematics opaque, and they lose interest.


I would like to understand what multiplication is in your view (and, I guess, in advanced mathematics).


The comment to which you replied says that thinking about multiplication as "just" repeated addition is problematic, so let's look at that.

Consider 3 x 2. If we take that approach, it seems ok - we understand it to mean "add together 3 2's" - 2 + 2 + 2, which gives the correct answer of 6.

What about -3 * -2? What does it mean to add a negative number of times?

What about pi * pi? What does it mean to add something pi times?

What about the matrix M * the matrix N?

etc. The parent's point is that this repeated addition thing is just an algorithm one can use to calculate a multiplication for some operands, specifically whole numbers, not a general definition of the operation of multiplication.


If we want to define π×π, the best we can do is some kind of algorithm for generating an approximation to π (e.g. as a continued fraction or as a positional decimal fraction) to any desired degree, and then an algorithm for multiplying such approximations. We can prove some bounds on the error introduced by our multiplication algorithm, and that gives us a way of approximating the product to any desired precision.

For example if we want to know π^2 to about 3 decimal digits (~9.87), we can start by using an approximation of π to about 4 decimal places (~3.142) and then multiplying the two decimals.
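
Here's a little sketch of that "approximate, multiply, bound the error" idea using rational enclosures (the bounds on π below are just typed in by hand, good to about six decimals):

  from fractions import Fraction

  # Rational lower and upper bounds on pi, correct to ~6 decimal places.
  lo = Fraction(3141592, 1000000)
  hi = Fraction(3141593, 1000000)

  # Both bounds are positive, so squaring preserves the enclosure.
  lo2, hi2 = lo * lo, hi * hi
  print(float(lo2), float(hi2))   # pi^2 is guaranteed to lie between these two numbers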

For rational numbers, multiplication algorithms are usually built on breaking a number down into constituent pieces, multiplying every pair of pieces from the two multiplicands, and then adding up all of the partial products.

Matrix multiplication has the additional complication that the elementary terms involved (entries in different places in the matrix) cannot be added to each other. But the basic procedure is still the same: break the two multiplicands down into basic units for which we already know a multiplication table, compute all of the partial products, then sum them up.


I don't agree that a definition of multiplication that includes transcendentals such as pi must be numerical. In fact, to the extent that numerical approximations are approximations _of_ something, the thing they would be approximating is the actual value of the operation of multiplication being applied to pi and pi. The only reason we're talking about algorithms at all was to distinguish between them and definitions.


Well, that’s a philosophical rather than mathematical question. I don’t really believe in a concept of “actual value” outside of the context of computations (though I don’t mind conceding it as a matter of convenience and social convention, since the distinction almost never matters for practical purposes). I am not an expert, but mathematicians have investigated this, https://en.wikipedia.org/wiki/Computable_analysis

We can treat π purely symbolically if we like, but as soon as we want to do anything useful with it we need some kind of approximation or algorithm.


I am familiar with the constructivist family of ideas. Obviously I agree this a question of philosophy, specifically the philosophy of mathematics. (So not "rather than", since foundations of mathematics is a branch of mathematics.) And because adopting a constructivist approach to mathematics means adopting different ideas about what a mathematical definition _is_ or _can be_, I have to say that it's a bit disingenuous to introduce this context only after engaging in the above discussion.

For example, if someone asks you to explain Euclid's proof of the infinitude of the primes, and you say that Euclid did not provide any such proof and nothing more, I think it's quite disingenuous. It would be more proper to say, from a constructivist view, the argument Euclid made isn't a valid proof, and then either explain the proof in the logical context in which it was made or decline to.

In this case, the point of discussion was separating the definition of multiplication from an algorithm implementing it. It's quite unfair to silently take a position that a mathematical definition without an algorithm isn't valid or meaningful and then on that basis argue that only numerical approximations to transcendentals have meaning.

So many common mathematical concepts such as "the integers" have no meaning in a constructivist approach that it's not sensible to engage in mathematical discussion without establishing that one's fundamental basis of approach varies so widely from the common one.


> What about -3 * -2?

My son actually asked me that a while back (or rather, “why does a negative times a negative make a positive?”), and I honestly didn’t have a very satisfying answer. The best I could come up with was to go back to the definition of multiplication as repeated addition and then start with multiplying/repeated-adding a negative number by a positive number: that would work backward on the number line and give you a negative number. Then, since multiplication is commutative, a positive number multiplied by a negative number must behave the same way: multiplying a positive by a negative causes the negative to move backwards. So, if multiplying a positive by a negative moves it the “other way”, multiplying a negative by a negative must move the first number to the right.

It works, but it’s not as intuitive as I would have liked - I did better with why negative exponents are 1/x^n and why the angles in a triangle add up to 180 degrees.
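
Another route for the negative-times-negative question, using only distributivity and "anything times 0 is 0" (this is just the standard ring-axiom argument spelled out, taking (-3)*2 = -6 as already established by repeated addition):

  0 = (-3) * 0
    = (-3) * (2 + (-2))
    = (-3)*2 + (-3)*(-2)
    = -6 + (-3)*(-2)

So (-3)*(-2) has to be whatever you must add to -6 to get 0, namely 6.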


The following page explains this one pretty well I think, skip to the "A proof" at the bottom: http://mathforum.org/dr.math/faq/faq.negxneg.html


Easy.

-3 x -2: multiply signs first: +, remainder: 3 x 2. Repeat 2 three times and add: 2 + 2 + 2 = 6, remainder: 0

3.141 * 3.141: multiply signs first: +, remainder: 3.141 * 3.141. Repeat 3.141 three times and add: 3.141 + 3.141 + 3.141 = 9.423, remainder: 3.141 * 0.141

shift decimal: 3.141 * 1.41. Repeat 3.141 once and add: 3.141, shift onto result: 9.423 + 0.3141 = 9.7371, remainder: 3.141 * 0.41

shift decimal: 3.141 * 4.1. Repeat 3.141 four times and add: 12.564, shift onto result: 9.7371 + 0.12564 = 9.86274, remainder: 3.141 * 0.1

shift decimal: 3.141 * 1. Repeat 3.141 once and add: 3.141, shift onto result: 9.86274 + 0.003141 = 9.865881, remainder: 0

result: 9.865881

check back with a calculator: 3.141 * 3.141 = 9.865881

Multiplication can be represented as a number of additions plus shifting the results of the substeps over the decimal point.
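
Roughly the same idea as a Python sketch (my own; it handles only non-negative decimal strings, scales to integers once instead of shifting at every digit, and the sign step from above would bolt on in front):

  def mul(a, b):
      # a, b are decimal strings like "3.141". Scale each to an integer and
      # remember how many decimal places were shifted away.
      def to_int(s):
          whole, _, frac = s.partition(".")
          return int(whole + frac), len(frac)

      x, dx = to_int(a)
      y, dy = to_int(b)

      # Repeated addition: add x to itself y times.
      total = 0
      for _ in range(y):
          total += x

      # Shift the decimal point back by the total number of places.
      shift = dx + dy
      digits = str(total).rjust(shift + 1, "0")
      return digits[:-shift] + "." + digits[-shift:] if shift else digits

  print(mul("3.141", "3.141"))   # 9.865881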


No one is arguing that one cannot implement multiplication algorithms using addition. In fact, one can do it with just NAND. But the definition of multiplication isn't in terms of NAND or addition.


Multiplication can have a number of meanings, but there are two important ones:

- multiplication as function composition, eg the product of two matrices is the linear transform obtained by applying the right matrix and then the left (onto a column vector)

- multiplication as “the operation that distributes over addition”: in the theory of rings and algebras, if you have one operator “+” which forms an abelian group, then for any operator “•” such that (a+b)•(c+d) = ac+ad+bc+bd for all a, b, c, d, you have a ring (if • is associative) or an algebra (otherwise), and you can apply all kinds of structure theorems.

This is very oversimplified but it’s important to remember that you can multiply lots of things which don’t look like integers at all.
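
The composition view is easy to poke at numerically. A quick sketch with two 2x2 matrices I made up (a rotation and a shear), showing that the matrix product means "apply one map, then the other", and that the order matters:

  def matmul(A, B):
      # (A times B) applied to a vector equals applying B first, then A.
      return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

  def apply(M, v):
      return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

  R = [[0, -1], [1, 0]]   # rotate 90 degrees
  S = [[1, 1], [0, 1]]    # shear

  v = [2, 3]
  print(apply(matmul(R, S), v))          # [-3, 5]
  print(apply(R, apply(S, v)))           # [-3, 5] -- same thing
  print(matmul(R, S) == matmul(S, R))    # False: this multiplication is not commutative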


I read somewhere that as long as there's only addition, mathematics can remain both consistent and complete. The moment the multiplication operation is introduced, Gödel's incompleteness theorems come along. Do I understand correctly? If so, what exactly happens when multiplication comes in?


Limited to a numeric sense, multiplication is an operation which began life as repeated addition, and which had to then be extended in various ways as we extended our number systems. We know, for instance, that integral multiplication x * y provides the area of a rectangle with sides of length x, y. This applies without modification to the rationals and reals as well, so we know from this that multiplication remains consistent without having to rely on the concept of repeated addition.

Similar extensions happened with negative arguments. At a certain point, pure intuition vanishes, and instead you are left with the task of simply creating a set of rules that define a consistent mathematics. Often times, the approach you take is limited by this very requirement; our mathematics is only consistent if a negative multiplied by a negative results in a positive, for instance.

This topic hints at the gap between applied mathematics rooted in concrete reality, as it were, and theoretical mathematics, which of course deals purely with abstract structures.


The problem is that there's different types of multiplication. The fact that multiplication over the Reals can be thought of as "iterated addition" is a consequence of commutativity. But mathematicians often generalize multiplication to various contexts as an arbitrarily-defined transformation, which may or may not be commutative. E.g. multiplication over the Reals is commutative; multiplication over Complex Numbers is commutative; multiplication over Quaternions is non-commutative.

Incidentally, "exponentiation is just repeated multiplication", right? Try using iteration to solve 0^0.

  lim{x -> 0} (0^x) = 0
  lim{x -> 0} (x^0) = 1
The "iteration mindset" doesn't always generalize cleanly.


It doesn't, but isn't that, in this case, only because you math gals and guys simply define x^0 to be one? There is indeed no logical reason from a geometry standpoint, but you needed a definition.


(x^0) = (1) makes sense because lim{y -> 0} (x^y) = (1).

E.g. (10^0.001) = (1.00230524...)


On-point.

Exponentiation is not like multiplication because it's not associative.

(2^3)^4 != 2^(3^4)


For that you want to check out Group Theory.


>2 x 3 = 3 x 2 feels so obvious

You just got used to it because you learned it very young. I'm pretty sure it wasn't that obvious when you learned it.


If I remember correctly, in my school multiplication was explained in terms of rectangles made of unit squares. Commutativity was literally visible to the naked eye then.


It's intuitively obvious geometrically, yet conceptually tricky to grasp algebraically. Or, more precisely, it's trivial to guess whether an arithmetic operation is commutative or not, but tricky to guess correctly for all the operations a child knows.


You can also define the natural numbers as successors: Z, S(Z), S(S(Z)), etc. You can then prove that a * b = b * a, given that you have defined the + and * functions.
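
Here's a sketch of that successor construction, just checking small cases mechanically (which is of course not a proof; the real proof goes by induction):

  Z = ()                                     # zero
  def S(n): return (n,)                      # successor

  def add(a, b): return b if a == Z else S(add(a[0], b))
  def mul(a, b): return Z if a == Z else add(b, mul(a[0], b))

  def num(k): return Z if k == 0 else S(num(k - 1))

  # mul is commutative on every pair we try here.
  print(all(mul(num(a), num(b)) == mul(num(b), num(a)) for a in range(6) for b in range(6)))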


When my kids started school I was surprised to learn that at age 4 it is not obvious that the number of items on a whiteboard remains constant if all the items are moved to a different part of the board. So what seems obvious, probably is something we just don't remember learning.


2 + 2 + 2 = 3 + 3 becomes much more obvious when you think of a 3x2 rectangle and rotate it by 90 degrees to a 2x3 rectangle.


but then isn't this just a tautological argument? why does multiplying the sides of a rectangle give you the area?


Multiplication in the form we know it arose from practical needs of area measurement, not as some abstract operation in semirings and rings.

Multiplying the sides gives you the area because multiplication was invented/discovered to do that.


you don't need to invoke a concept of area to justify this. you can just think of a grid of dots. since multiplication is repeated addition, each row represents one step of the repeated addition.

2x3 = 3 + 3

...

...

3x2 = 2 + 2 + 2

..

..

..

since rotation doesn't change the number of dots, we expect 2x3 and 3x2 grid of dots to contain the same number of dots, which proves that multiplication is commutative.


Because of little squares being added.


More interesting, in more general contexts multiplication is not commutative.

For example, if we apply the same pair of 3-dimensional rotations in opposite orders, we generally get different results.

Scaling and planar rotation (in combination, a.k.a. “complex numbers”) are conveniently among the types of commutative multiplication.

> multiplication is really just addition

This is a misleading summary. That multiplication of integers per se can be re-expressed as addition (or if you like, as counting) depends on the basic parts involved being very simple and uniform. The basic multiplication table for integers is just

   × | -1  0  1
  –––––––––––––
  -1 |  1  0 -1
   0 |  0  0  0
   1 | -1  0  1
Since every other integer is just some sum of these basic parts, and multiplication distributes over addition, that covers it. To multiply two integers, first break each one down into some sum of a collection of –1, 0, and 1, then look up each partial product in the basic multiplication table above, and finally sum up all of the results. This process boils down to counting. Since the basic integer multiplication table is commutative, so is integer multiplication in general.
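
Here is that "break into ±1s, look up partial products, count" procedure as a literal (and deliberately inefficient) Python sketch:

  TABLE = {(-1, -1):  1, (-1, 0): 0, (-1, 1): -1,
           ( 0, -1):  0, ( 0, 0): 0, ( 0, 1):  0,
           ( 1, -1): -1, ( 1, 0): 0, ( 1, 1):  1}

  def parts(n):
      # Write n as a sum of |n| copies of +1 or -1.
      return [1 if n > 0 else -1] * abs(n)

  def mul(m, n):
      # Distributivity: multiply every part of m by every part of n, then add everything up.
      return sum(TABLE[(a, b)] for a in parts(m) for b in parts(n))

  print(mul(-7, 6), mul(6, -7))   # -42 -42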

But other kinds of numbers have richer structure based on a richer multiplication table of basic elements.


Well, I am not sure if you've come across some of Bertrand Russell's work, but he described mathematics as possessing what he called "supreme beauty". The older I get, and I am not a mathematician by any stretch of the imagination, the more I get what he means. This stuff is surely a gift for all of us.


3Blue1Brown has a nice video on it: https://www.youtube.com/watch?v=sD0NjbwqlYw


I really wish the Riemann-Zeta Function were more often explained in terms of a prime number sieve. It's actually not particularly difficult to follow and the connection between the function and the distribution of primes would be completely obvious.


This.

In his book on the topic, William Stein did not get around to the connection to the product of primes until page 121.


If you've got enough math background to follow it, Harold Edwards' Riemann's Zeta Function is a gem of a book and available inexpensively from Dover. There is an English translation of Riemann's paper at the end of the book. I spent a worthwhile few weeks of spare time working my way through the paper ( and a lot of pencil & paper to work "between" the steps in the paper -- math is not a spectator sport ) with a longish diversion diving into the gamma function along the way.


Math should be a spectator sport! ;)


I think "don't try to prove the Riemann hypothesis" is only part of the iceberg that includes "you may want to prefer theory building over problem solving" and "we're not getting any medals here". It's interesting that this was written by one of the category theoretic schools; the category theorists that I studied under are quite wary of things like the RH. After all, Saunders Mac Lane never won a Field's medal. I am not throwing shade, but it's exceedingly difficult to try to judge (any) mathematician's "worth" in the way a prize or medal does in popular media.


Every now and then I try to delve into the frightening world of math. Then I see something like this, and start to feel very tired.

Then I think, “My hair is already falling out. Do I need something like this to accelerate the process?”


As a mathematician, I have noticed that people that like to build something that, for example, can fly, start from a paper plane; they don't get discouraged because they can't yet build a 737 aircraft. However, in math, you need a lot of experience before you can even judge whether a problem is in fact a 737 and not a paper plane (and even then, you can be mistaken). I often see students discouraged because of this and that's why I suggest taking it slow from the beginning.

> Every now and then I try to delve into the frightening world of math.

To want to understand is to be human. :)


Honestly, I think the near-constant exaltation of problems like the Riemann Hypothesis, P=NP, Fermat's Last Theorem, etc. does the field more damage than good. Many of these theorems would have dubious application to anything practical even if they were unquestionably proven. Consequently, it frequently gives a lay observer the impression that mathematics is all about number theory and tackling pointless puzzles.

Going into undergrad I was briefly discouraged from going into mathematics because this was the impression I got. They're interesting to think about, but I didn't want my future to be firmly situated in inapplicable theory.

I say this knowing there is plenty of work to be done in applied mathematics, especially in trying to simplify the understanding of complex problems. I'd like to see more glorification of moderately hard problems which take more time to explain but are well within the grasp of the people who start working on them, than of easy-to-explain problems which will likely never be within the grasp of anyone.


Thank you for your elegant reply. I really appreciated it because I am a non-engineer with a 3d printer, so your paper plane example was quite relevant.

“Tinkering” with mathematical concepts is fundamentally different from the kind of tinkering I can do, but looking from the outside in.. I’ll probably pick up a calculus textbook one day :)


For what it's worth, most mathematicians find the Riemann hypothesis frightening as well. There are much lower level aspects of math to be explored without climbing the proverbial Mt. Everest of math problems.


I disagree with this. I don’t think the Riemann Hypothesis is “harder” than various other unsolved problems like Yang-Mills or Navier-Stokes. Most professional mathematicians would succeed just as much or as little as the particular subset who actively work on the Riemann Hypothesis, if they chose to study it. I think it’s more that in terms of instrumental rationality / goal achievement, working on the Riemann Hypothesis doesn’t suit very many people. There are other trade-offs with funding, applications, lifestyle goals, etc., so it’s just not for everyone.

But there’s no way at all that “most mathematicians find the Riemann Hypothesis frightening” as you suggest.


The other unsolved problems are frightening as well, insofar as such an informal emotional word has any consensus meaning; perhaps "if I had funding to spend my whole life on this problem, there's a good chance I'd make very little progress."


Agreed, I think frightening carried a connotation that I didn't intend. My intention was to state that problems like the Riemann hypothesis are of a level of difficulty such that mathematicians believe these problems cannot be solved without a major fundamental leap forward in our understanding. Erdos expressed a similar frustration toward the Collatz conjecture when he stated "Mathematics may not be ready for such problems".

So with respect to the original comment I was replying to, there are other areas of mathematics that one can study that are well understood and not so opaque as the Riemann hypothesis.


I'm satisfied to know that there are people who need something like this.


Mathematicians usually have more or less okay hair. Something to do with the blood flow from all that thinking.


If this interests you, I would strongly recommend reading http://www.riemannhypothesis.info/2014/10/tossing-the-prime-... which explains the relation between random walks and the RH in a surprisingly easy-to-understand fashion.


My absolute favorite numberphile video is how pi occurs in a peculiar way with Riemann Zeta/Mandelbrot. Maybe it only amazes me because I don’t have a PhD in math, but it just seemed so strange how pi shows up in this video.

https://youtu.be/d0vY0CKYhPY


Why does everybody think that, just because math ought to be nice, such a nice hypothesis ought to be true? Isn't it just a form of survivorship bias that we observe only the nice side of math? What if this hypothesis holds for all N < 10^10^10^467+17, and then suddenly it doesn't? Perhaps to make a breakthrough in math (and physics) we need to consider the possibility that reality can be ugly and counterintuitive, and that beyond a certain complexity level, math and physics cannot be described by nice formulas.


I think that's a misconception. Mathematicians are keenly aware that there are plenty of examples in history where many people "believed" (hoped? expected?) that some result might be true because it would be "beautiful", but it turned out to be false. Or where numerical evidence suggested something, only for it to turn out to be wrong in the end. And where the first counterexample only exists beyond a huge, numerically infeasible bound (see e.g. https://en.wikipedia.org/wiki/Skewes%27s_number)

And hence a well-known suggestion when tackling a hard problem is to try finding a proof on even days, and a counterexample on odd days...

And indeed, lots of people tried (and still try) to find counterexamples to, or otherwise disprove, the Riemann Hypothesis. However, there are indeed many, many results and heuristics that give a strong suggestion that the RH might be true -- far more than mere numerical results computing zeroes of the Zeta function. Of course none of them constitute a proof; but this really goes far beyond a simple hope for "beauty" in the theory.


If it's ugly, mathematicians lose interest. So, only the beautiful stuff remains in the set of things taught by mathematicians.

There could be beautiful stuff concealed by a layer of ugly that mathematics never breaches.


So if RH is proven, what actually changes? As far as I know, there are tons of theorems that already presuppose RH to be true. There wouldn't suddenly be an insight into how to find larger primes, for example.


The method used to solve RH would almost certainly contribute substantially to our understanding of mathematics and give new directions to tackle other questions in mathematics. For example, the resolution of the Poincare Conjecture by Perelman involved making progress with the technique of "Ricci flow" which is still an active area of research both in itself and for tackling other problems. The Poincare Conjecture per se is often not involved in these Ricci flow questions even though the proof of the conjecture is used.


Back in the 20th century there was a push, led notably by David Hilbert, to ensure that maths was consistent, that everything could be proven from a finite set of axioms, and that everything was built on a rigorous logical foundation.

In practice it may not change much, but more philosophically speaking, it's dangerous to take something for granted without a proof. If we have many ideas building off an unproven idea, and it's later shown to be false, a lot of those theorems would be based on a false premise. This doesn't necessarily mean the propositions themselves are incorrect, but it would mean that they are unproven.

It's more to satisfy the rigour in maths, and also to pose a challenge to mathematicians. In the journey to proving RH a lot of new ideas could be found that could prove useful, too.


Proving some things may become easier (as an example, there are already more than a few papers with proofs that rest on RH ... and then additional effort goes into finding RH-less proofs).

A silly example (from memory) using another rather non-obvious result.

1. Assume 2^(1/4) = X/Y for integers X, Y (assume the quartic root of 2 is rational)

2. Raise both sides to the 4th power: 2 = X^4 / Y^4

3. Multiply both sides by Y^4: Y^4 + Y^4 = X^4

4. QED, quartic root of 2 is irrational, by contradiction with Fermat's last theorem.


It's the same story as with many other deep and important math problems: practically, not much would change. Many papers already assume RH; we would just have confirmation that their results actually hold. What would change is that we would likely gain novel and powerful methods from the proof. This is what most of the buzz is about. For example, the proof of Fermat's Last Theorem introduced novel connections between subfields of mathematics.


1. I'm a big fan of John Baez.

2. I'm getting the impression from this article that solving the Riemann Hypothesis is similar to solving P=NP in that a solution can be used to attack RSA encryption.


> P=NP in that a solution can be used to attack RSA encryption.

Note that 1) P=NP does not necessarily give rise to any polynomial algorithm that solves an NP problem. The proof would establish the existence of one such algorithm, but it might well never be found (which is the current status quo). 2) Even if it were polynomial, it could still run longer than the heat death of the universe: O(n^10000000) would still be a polynomial runtime, for example. The second reason is why Donald Knuth thinks that P=NP might be possible.


Isn’t there some (highly impractical) algorithm which dovetails through different Turing machines, in a way that has an asymptotically optimal runtime for a given problem, just with really terrible constants?

I thought we knew an algorithm that, if P=NP, would solve NP problems in P time (but with absurd constants), and otherwise solves them in worse than polytime.

But I could be remembering this totally wrong.


> (highly impractical)

Forget everything practical - we are in deep theoretical waters here! There are thousands of problems with a polynomial solution where you still go for the heuristic because the polynomial algorithm is way too slow.

> Isn’t there some (highly impractical) algorithm which dovetails through different Turing machines, in a way that has an asymptotically optimal runtime for a given problem, just with really terrible constants?

Optimal it might be, as long as optimal does not mean polynomial. Otherwise you would read about it in the newspapers ;) I don't know of such an algorithm, but "trying out different Turing machines" gives me a strong gut feeling of "not polynomial".

> I thought we knew an algorithm that, if P=NP, would solve NP problems in P time, (but with absurd constants), and otherwise solves the problems is worse than polytime.

Is that the algorithm you are referring to? Sounds like what Turing proposed once. The interesting branch is P=NP, since then you could answer really, really interesting things in P. Theoretically - and if indeed P=NP ;)


this doesn't say the exact algorithm (I haven't found it), but this stack overflow answer talks about it : https://stackoverflow.com/questions/5107140/what-is-meant-by...


No, they don't give rise to attacks, because the lower bound for NP-complete algorithms is monstrously huge.

People tend to treat P as if it meant roughly O(n^4), because all the P algorithms they know fit that. But that's not because P is everywhere that small; it's because higher-degree P algorithms are just as infeasible on practical computers, for human timespans, as O(e^n) will ever be (as long as you choose a reasonably large N), so they aren't studied much in practice.


I've been making a serious attempt at solving it but I'm not a mathematician. Even still I have a few good leads yet to pursue, and I learned a ton about the practice of mathematics that I never would've learned otherwise. (Wish I could share my leads, but I kinda want the money and glory... :) )

The article is spot on. I've had so many moments where the math looks so fishy that it seems like R has to equal 1/2 (ie the hypothesis is true), but I just don't have the facts to prove it. In particular, it's really hard to evaluate the infinite sums you find working through the problem. I actually believe there's a good chance the hypothesis is false, but we'll see someday.


> Wish I could share my leads, but I kinda want the money and glory... :)

Well, like Prof Baez says, unless you've solved other major open problems before, it's probably unwise to believe you'll be the one to crack it. You are likely to gain more by sharing your leads and seeing what mathematicians have to say.


Since you have no money or glory in math yet, your better bet is to cash in your leads so you end your life with at least one bit.

Impressively, almost every sentence in your comment was already addressed by the OP linked Twitter thread of warnings.



