Four weeks pass, but nothing happens. Half a year later, the devil shows up again – in a rather gloomy mood.
"I’m sorry", he says. "I couldn’t prove the Riemann hypothesis either. But" – and his face lightens up – "I think I found a really interesting lemma…"
Zagier described the one that Ono gave him as "a cute problem about the asymptotic behavior of certain polynomials involving Euler's partition function, which is an old love of mine and of Ken's—and of pretty much any classical number theorist."
"I found the problem intractable and I didn't really expect Don to get anywhere with it," Ono recalls. "But he thought the challenge was super fun and soon he had crafted a solution."
That sounds like a potentially useful problem-solving trick - pull a small but key problem out of your bigger problem, and then nerd-snipe your friends into solving it.
Both methods have yielded a lot of progress in mathematics. For example, solving a cut-down version of Fermat's Last Theorem gave us the theory of ideals in algebra. Relatedly, Wiles's work on the modularity theorem ultimately led to the actual proof of FLT.
True, and that's why we like to generalize things. It's usually easier to prove that a mathematical object belongs to some class of structures with certain properties than to prove the object itself has all those properties.
Of course, as you've noted, it's not always like that at all. It's significantly easier to prove many things about finite-dimensional vector spaces over fields than about infinite-dimensional modules over rings.
Solving mathematical problems is a lot like working a sponge. It's often useful to alternate between expanding and contracting the scope of your problem.
- Someone on Reddit (https://old.reddit.com/r/math/comments/brgp5z/mathematicians...)
By coincidence I toyed with the problem a bit this past week. Pretty difficult to tackle with a "basic" math toolbox (calc, trig, algebra, and some discrete math/number theory of the sort taught in many CS undergrad curricula). Much like Fermat's Last Theorem, this looks like a simple problem that might require much higher-level math to solve.
Here’s one version of the proof: https://www.math.lsu.edu/~mahlburg/teaching/handouts/2014-72...
And, a related, very simple and elementary proof of Chebyshev's theorem that there is always a prime number between n and 2n: https://en.wikipedia.org/wiki/Proof_of_Bertrand%27s_postulat...
It was easy for me to follow along.
The world would be a better place if more people had the same approach to problem-solving, IMHO.
- reframe the real part of a complex input s as 1/2 + k for -1/2 < k < 1/2, since we're only looking at the critical strip 0 < Re(s) < 1. Then show k has to be zero for the math to work out, so that the real and imaginary parts of the function's value at s are both zero. This eventually failed because I can only show that Re(s) = 1/2 works out relatively "nicely", so that you could more easily pick Im(s) to find zeroes; I can't rule out that a different real value would still let the math work out so you could find a zero in the first place.
- come up with a formula to sum the series. I don't have much math know-how in this regard, beyond some tricks I learned in a discrete math/algorithms class, so I quickly ran out of ideas. Pairing sum terms together (like you can do to prove the sum of a telescoping series, or of the natural numbers from 0 to N) doesn't seem to work.
- The original sum formula looks like sum(n=1, infinity, ((-1)^(n+1))/(n^s)), where s is the complex input. I tried to turn it into a continuous function by using -cos(pi*n) for the numerator and then taking a definite integral, but the result is nasty before you even plug in the integral bounds. There's an imaginary term as the second argument of the gamma function, which I don't understand how to handle.
- show that zeroes with two different real parts, say at r1 + ci and r2 + di for some imaginary parts c and d, force the ratio r1/r2 = 1; since we know r1 = 1/2 works, that would mean 1/2 is the only possible real part. Much like the summing stuff above, I couldn't figure out how to simplify things enough to even get an equation in terms of either r1 or r2.
- show that two zeroes on either side of the line s = 1/2 + iy cannot both exist, or at least that if one exists then the other must not, which contradicts the notion of having two zeroes on either side of the line, one with real part in (0, 1/2) and the other in (1/2, 1). I don't have a good way of actually attacking this idea, though.
There are some other ideas I tried, but the main problem is I can't simplify the sum. The sum itself plus the fact there's exponents in every denominator makes it especially hard to simplify algebraically, or at least I don't know how to do it.
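For what it's worth, the sum in question is the Dirichlet eta function, eta(s) = sum over n of (-1)^(n+1)/n^s, which converges for Re(s) > 0 and satisfies eta(s) = (1 - 2^(1-s)) * zeta(s). Here's a quick numerical sketch (my own illustration, nothing from the thread) of playing with its partial sums; averaging two consecutive partial sums is the standard first acceleration trick for alternating series:

```python
from math import log, pi

def eta_partial(s: complex, terms: int) -> complex:
    """Approximate the Dirichlet eta function sum_{n>=1} (-1)^(n+1) / n^s.

    For an alternating series the true value lies between consecutive
    partial sums, so returning the midpoint (S_N + S_{N+1}) / 2 is far
    more accurate than either partial sum alone.
    """
    s_n = 0j
    for n in range(1, terms + 1):
        s_n += (-1) ** (n + 1) / n ** s
    s_next = s_n + (-1) ** (terms + 2) / (terms + 1) ** s
    return (s_n + s_next) / 2

# Two known closed-form values: eta(1) = ln 2 and eta(2) = pi^2 / 12.
print(abs(eta_partial(1 + 0j, 10_000) - log(2)))
print(abs(eta_partial(2 + 0j, 10_000) - pi ** 2 / 12))
```

Near a nontrivial zero such as s = 1/2 + 14.1347i the same partial sums converge much more slowly, which makes the "can't simplify the sum" frustration above easy to reproduce numerically.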
It's not clear to me what could be undecidable about the original proposition; either the continuation matches at the points in question, or it doesn't, and if it doesn't, how is it a continuation?
Maybe this line of argument fixes the flaws in the original BBM paper in some way - I'm not qualified to comment - but I sort of doubt it.
That being said, it would be kind of narratively satisfying if Riemann is undecidable. There are many novel theorems from the past two decades which are conditionally true assuming Riemann is true. Riemann itself has become a useful tool for conditionally proving new theorems. If Riemann is undecidable, it would imply a great deal of mathematics has been developed which compartmentalizes the undecidability of other theorems.
However, the paper is clearly a serious attempt by someone inside academia who works in quantum computing, which is a related and heavily mathematical field.
You may have just given me a few hours of weekend reading material. :)
The basic idea is to prove that a certain function L(N) doesn't grow too quickly. It's true that appropriate bounds on L(N) imply RH.
But Eswaran's arguments for this point are full of handwaving, and in this field handwaving is just not good enough. L(N) is a sum of terms +1 and -1, and he claims that it behaves like a series of coin tosses, but the specific ways he argues it's like a series of coin tosses are not enough for the conclusion he needs.
E.g., in section 5, on page 13, he states a theorem which says: you can pair up the +1 and -1 terms in that series, so that for each +1 there's a -1 and for each -1 there's a +1. This is certainly true, but it doesn't imply that the series is random-like or that the partial sums L(N) don't grow much faster than sqrt(N) (which is the condition he needs). E.g., consider the series that goes ++-++-++-++-++-++- etc., whose partial sums obviously grow like N/3. This satisfies that "pairing" condition just fine. (Pair the first + with the first -, the second + with the second -, etc. There are infinitely many of both, which is all you need to make that work.)
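To make this concrete, here's a quick sketch (my own illustration) checking both claims for the repeating ++- pattern: its partial sums grow linearly even though the pairing condition is trivially satisfiable.

```python
def pattern_sum(n: int) -> int:
    """Partial sum of the sequence + + - + + - ... after n terms."""
    return sum(1 if k % 3 != 2 else -1 for k in range(n))

# Each block of three contributes net +1, so the sum after 3k terms is
# exactly k, i.e. about N/3 -- nothing like sqrt(N) growth. Yet every -1
# can be paired with a +1, since there are infinitely many of each sign.
for k in (1, 10, 100):
    assert pattern_sum(3 * k) == k
print(pattern_sum(300))  # 100
```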
In appendix V, he tries a different approach. (The fact that he gives two different "proofs" of the same key result, and both are handwavy, is a Very Bad Sign.) There's lots here that looks wrong, but here's one thing: he claims to prove that if |L(N)| ~ N^a for some a then we must have a=1/2, and having given his proof he claims to have proved that in fact |L(N)| ~ N^1/2. But these are very different propositions. Another possibility is that we don't have |L(N)| ~ N^a for any a at all, which in fact is true because it's known that infinitely often |L(N)|<=1 and infinitely often |L(N)|>=sqrt(N).
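For anyone following along: L(N) here is the summatory Liouville function, L(N) = lambda(1) + ... + lambda(N), where lambda(n) = (-1)^Omega(n) and Omega counts prime factors with multiplicity. A small sketch (my own, via a smallest-prime-factor sieve) that computes it and shows |L(N)| <= 1 already occurs at small N:

```python
def liouville_upto(limit: int) -> list[int]:
    """lam[n] = (-1)**Omega(n) for 1 <= n <= limit."""
    spf = list(range(limit + 1))          # smallest prime factor of each n
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                   # p is prime
            for m in range(p * p, limit + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (limit + 1)
    lam[1] = 1
    for n in range(2, limit + 1):
        lam[n] = -lam[n // spf[n]]        # lambda is completely multiplicative
    return lam

lam = liouville_upto(100)
L = [0]
for n in range(1, 101):
    L.append(L[-1] + lam[n])
print(L[2], L[10])  # both are 0: |L(N)| <= 1 happens at small N already
# Complete multiplicativity gives lambda(2n) = -lambda(n):
assert all(lam[2 * n] == -lam[n] for n in range(1, 51))
```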
The lectures appear to be about his alleged proof rather than about RH in general. In any case, I would not trust someone capable of making the errors in his RH paper to teach any research-level material in pure mathematics.
You have been reading almost the very first version of the paper.
The correct version of the paper is:
The basic idea is to prove that the Liouville (lambda) series of length N behaves like coin tosses or as a random walk for very large values of N.
The "alternate proofs" that you talk about are just alternative proofs of certain theorems that are necessary for the proof of RH. Their existence does not mean I do not believe in the previous proof.
The reply to your misgivings can be found by studying Items (3) and (4) in the Summary below. But before you peruse them, I suggest that you trust me, because I am a serious researcher, and whatever I have found I have openly written about in my papers, Project Notes and lectures (I have been open and free with my ideas). If you think that I am a non-serious philanderer then you can save yourself the trouble by not reading what I have done. I am of the firm belief that to begin to understand someone's new idea one must first trust him!
Most of my work has been uploaded to ResearchGate, and there have been thousands of reads/downloads. The work is outlined in the summary below:
SUMMARY OF KUMAR ESWARAN'S WORK ON RIEMANN HYPOTHESIS
1) THE MAIN PAPER (Version May 2018):
2) HIS INVITED TALK AT IIT MADRAS (May 14, 2019)
3) Along with this Invited Talk, an Extended ABSTRACT outlining the Crucial Steps of the Proofs, with References to Notes, is given; see:
4) He has made many additional Notes on his work in the following Link
Project Notes and Replies to various comments on RH
His answers to various comments can be found by clicking "Project Log" in the above Link.
5) Additionally, Prof. Vinayak Eswaran took it upon himself to give a series of Seven Lectures addressed at the level of undergraduates. His motivation for doing this is given in his INTRODUCTION, which may be read first:
END OF SUMMARY
I thank You Once again for your interest.
(i) Your sequence is cyclic, and there cannot be any periodicity in the lambda sequence. (This is proved in the paper.)
(ii) The sequence that you chose grows as N/3; this means that F(s) could not be extended to the left beyond Re(s)=1. In fact, if such a sequence were possible then we would have a disproof of RH.
(iii) The sequence does not have the statistics of a coin toss. See Appendix VI of the Main Paper, whose link is in the summary. The chi-squared statistics will be different.
(iv) I have given a second Proof of equal probabilities:
which does not depend on mapping (but only uses induction):
This second proof should overcome your misgivings.
To repeat: I suggest you read Item (3) in the "SUMMARY" link I provided; it outlines the main argument of the proof and the basic steps.
Thank You for your interest.
As you say, your argument depends on having the Liouville series behave like a series of independent coin tosses. But you never really show that it does. In your "Appendix IV" you argue, e.g., that if m,n are coprime then lambda(m),lambda(n) are "independent". Strictly speaking, that's nonsense: independence is a relationship between _random variables_ and lambda(m),lambda(n) are not random variables at all. There might be some theorem that says something like "if you consider a long enough series of these, then their statistics are very close to those you'd get from independent random variables", but your argument doesn't show anything at all like that.
(You do argue -- the argument seems clearly unsound to me, but the conclusion, or something like it, is probably correct -- that for fixed L and large enough N you can't deduce lambda(N) from lambda(N-1),...,lambda(N-L). But this has nothing at all to do with the relevant notion of statistical independence.)
Why is it unsound? Well, several things look wrong with it. Here's one of them: you say, in effect, that you can only work out lambda(N) from earlier values if you have lambda(some divisor of N). But clearly that isn't true; e.g., if N is a multiple of 3 then you have lambda(N) = lambda(2N/3).
The "arithmetical proof" in Appendix V is exactly the same as before, and remains unsound for the same reason: it doesn't distinguish between "if L(n) ~ Cn^a then a=1/2" and "L(n) ~ Cn^1/2". The second of these is in fact clearly false, as I said before, because for some n L(n) is very close to zero.
The errors being made here are not, it seems to me, the sort of superficial ones that just indicate glitches in the exposition of a basically-sound underlying argument, and that need patching over. They indicate fundamental misunderstandings of how this stuff works.
A couple of other wrong things -- but, to be clear, the point isn't that a few specific statements are wrong, but that you don't have a firm grasp of just what it is you need to prove:
In section 3.1 of your "pathway" document you say that it will be enough if the function lambda satisfies three conditions: (1) +1 and -1 equally common, (2) sequence not periodic, (3) can't predict new values of lambda from previous values. This is simply not true. For instance, consider a sequence defined as follows. Start with the sequence +-++--++++---- etc. (i.e., runs of length 1,1,2,2,4,4,8,8, etc.) This has properties 1 and 2, and it "almost" has property 3. Now perturb it by flipping each sign with probability 1/8 (either actually randomly, or using a deterministic-but-random-like sequence -- e.g., if you consider that the actual lambda(n) form such a sequence then you can flip the sign of the n'th term iff lambda(3n), lambda(3n+1), lambda(3n+2) are all +1). This still satisfies properties 1 and 2, and now clearly satisfies property 3 as well. But it doesn't at all satisfy the condition you need, of growth no faster than about sqrt(n): if you pick n = 3*2^k - 2 (i.e., at the end of a run of +) then the sum is approximately n/4.
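The construction above is easy to simulate. A quick sketch (my own; the flips here are genuinely random with a fixed seed, rather than the deterministic lambda-based rule, which doesn't change the growth rate):

```python
import random

def doubling_runs(num_runs: int) -> list[int]:
    """Signs in runs of lengths 1,1,2,2,4,4,...: + - ++ -- ++++ ---- ..."""
    seq = []
    length, sign = 1, +1
    for _ in range(num_runs):
        seq.extend([sign] * length)
        if sign == -1:
            length *= 2   # length doubles after each +run/-run pair
        sign = -sign
    return seq

random.seed(0)
seq = doubling_runs(30)
perturbed = [-x if random.random() < 1 / 8 else x for x in seq]

# At n = 3*2^k - 2 (the end of a run of +'s) the unperturbed partial sum
# is exactly 2^k = (n+2)/3; flipping each sign with probability 1/8
# multiplies its expectation by 3/4, so the sums still grow like n/4.
k = 12
n = 3 * 2 ** k - 2
s = sum(perturbed[:n])
print(n, s, s / n)
assert 0.15 * n < s < 0.35 * n   # linear growth, nothing like sqrt(n)
```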
In section 5.2 of your main paper, you give two slightly more refined versions of the "twin" argument from older versions of the paper: it's not just that we can pair up numbers so that lambda(n),lambda(n') have opposite signs, but that we can divide the positive integers into subsets (the "towers") each of which has alternating signs. Or: ... into subsets which we can pair up with one another so that corresponding numbers in paired subsets have opposite signs. But these conditions, even taken together, don't have the consequence you want. Suppose we have any sequence of +,- at all, provided it has infinitely many of both signs; it can be decomposed into subsets satisfying those conditions, as follows. We start with all "towers" (of course they aren't "towers" in precisely your sense) T1, T2, ..., empty, and a "next sign needed" for each tower of +,-,+,-,+,-, etc. Now go through our sequence in order: n=1,n=2,n=3,..., assigning each number in turn to the "highest-priority" tower it's eligible for. Highest-priority means smallest value of (tower number) times (1 + number of numbers already assigned to tower). I claim we always assign infinitely many numbers to each tower. If not, consider the towers that only ever get finitely many numbers, and suppose tower k is the one for which (tower number) times (1 + max numbers ever assigned) is minimal. Then once n is large enough tower k will have highest priority, and we have both infinitely many + and infinitely many -, so tower k will get another number assigned to it, contradiction.
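The greedy assignment described above can be sketched in code (my own rendering of the argument, with towers created lazily rather than all present up front; applied here to the ++- sequence, whose partial sums certainly don't stay near sqrt(n)):

```python
def assign_to_towers(signs: list[int]) -> list[list[int]]:
    """Greedily assign indices 1..len(signs) to 'towers' whose assigned
    signs must alternate +, -, +, -, ... Tower t (1-based) holding c
    numbers has priority t * (1 + c); smaller value = higher priority."""
    towers: list[list[int]] = []
    for n, sign in enumerate(signs, start=1):
        # a tower needs + when it holds an even number of entries, else -
        eligible = [t for t, tw in enumerate(towers)
                    if (1 if len(tw) % 2 == 0 else -1) == sign]
        if eligible:
            best = min(eligible, key=lambda t: (t + 1) * (1 + len(towers[t])))
            towers[best].append(n)
        else:
            # a fresh tower's first needed sign is +; for this input a '-'
            # always finds an eligible tower, since +'s are in surplus
            assert sign == +1
            towers.append([n])
    return towers

seq = [1 if k % 3 != 2 else -1 for k in range(300)]  # the ++-++-... sequence
towers = assign_to_towers(seq)
# Every index lands in exactly one tower, and each tower alternates +,-,+,-
assert sorted(i for tw in towers for i in tw) == list(range(1, 301))
assert all(seq[i - 1] == (1 if j % 2 == 0 else -1)
           for tw in towers for j, i in enumerate(tw))
print(len(towers), [len(tw) for tw in towers[:5]])
```

So a sequence whose sums grow like N/3 still decomposes into alternating "towers", which is the point: the tower conditions don't constrain the growth of the partial sums.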
I profoundly disagree with what you say about trust. "Nullius in verba" (https://en.wikipedia.org/wiki/Nullius_in_verba)! Being a serious researcher doesn't mean that your work is exempt from checking or immune to serious error. I'm happy to trust that you mean what you say and are trying to get the mathematics right, but no one is entitled to be trusted to have actually got it right.
(Incidentally, "philanderer" doesn't mean what I think you think it means. A philanderer is a man who is in the habit of flirting with (or more-than-flirting with) a lot of women. Perhaps you're looking for a word like "dilettante"?)
To your point: knowing lambda(N) (N large) and possibly lambda(N-1), lambda(N-2), ..., lambda(N-K) for some fixed K, it is not possible to predict ALL the other lambdas higher than N. This is proved in the paper and can be intuitively understood because one cannot predict when the next prime occurs. Thus lambda(N) behaves like c(N), where c(N) is the Nth toss of a coin-tossing experiment.
It is only necessary to prove that lambda(N) behaves (approximately) like a coin toss for large N. As explained, this requirement comes from Littlewood's theorem (proved in the paper). It is not necessary that the lambdas be perfectly random; of course they are not. For example, lambda(2n) = -lambda(n), but if you choose n very large, say n = 10^100, then 2n will be very far away from n. Thus the 'randomness' in a very large sequence of lambdas is still practically undisturbed; the unpredictable occurrence of new primes also contributes toward 'randomness', especially for a very long sequence of length N. To prove RH we only need this behaviour for N near infinity, to satisfy Littlewood's Theorem. (Appendix VI actually verifies this phenomenon over very large sequences.) In a paragraph below I show how, if M is large, it is impossible to predict lambda(M+1) knowing only lambda(M), without actually calculating it by using all the info on primes below Sqrt(M).
Regarding the argument in your paragraph where you construct an example of a sequence of lambdas and flip their signs according to your specific rule: this is not permissible. You must remember that lambda(n) takes the value +1 or -1 depending on how many factors the particular integer n has. So you cannot 'flip' their signs arbitrarily, or as per your prescription, without actually demonstrating that your prescription does not violate the rules of arithmetic or the rules of factorization. The lambdas are arithmetical functions and obey some rules. I suggest that you study the proof of unpredictability (independence); it will convince you. NOTE: If you have a method of predicting lambda(n) from (say) its k previous values (k fixed), then this (hypothetical) method should work for ALL n; clearly it then means there exists some function f(x1,x2,...,xK) which predicts the value according to the hypothetical method. The paper proves that such a function cannot exist (this is my second proof of independence; see page 13, para 2(a) of the Main Paper, or slides 54-58 in my IITM Lecture).
In the next paragraph (where you talk about towers), you again "suppose" the lambdas in the towers to have some properties - you cannot suppose these things without actually proving that your "suppositions" do not violate the rules of factorization and arithmetic that govern the values of lambda(n).
Each tower contains integers whose lambdas alternate + - + - etc. I know that there is an infinite number of towers; this does not make any difference to the proof - they are all countably infinite. The important thing is that each integer occurs only once, in some tower, and does not occur in any other tower.
To prevent ourselves from being inveigled by Cantor-like arguments (which impose "pre-suppositions" on the properties of the lambdas), I suggest that we get out of this "mapping" mindset. I have given an independent proof (which should be studied) of equal probabilities which only uses induction and the assumption that each even integer is followed by an odd integer which precedes another even integer (i.e. the number of even integers is equal to the number of odd integers). For the latter reason, I believe this proof is water-tight. This proof is given in:
Regarding your last point, I agree I was just loosely stating matters when I put things like: "L(n) ~ Cn^1/2"; (sorry!).
The actual rigorous requirement for the proof of RH is that, as N tends to infinity, one must have |L(N)| < C*N^(0.5 + e), where e is small but positive. The inequality must hold in the limit for very large values of N, not at a particular value, so Littlewood's condition actually specifies a bound; you can read the statement of Littlewood's Theorem given in the paper.
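In symbols, the requirement (the well-known equivalence the argument relies on) is:

```latex
\mathrm{RH} \iff \text{for every } \epsilon > 0:\quad
\Bigl|L(N)\Bigr| = \Bigl|\sum_{n \le N} \lambda(n)\Bigr| < C_\epsilon\, N^{\frac{1}{2}+\epsilon}
\quad \text{for all sufficiently large } N.
```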
The better way to intuitively understand why the lambdas behave like coin tosses is to consider the following argument:
In Appendix VI (I strongly recommend that you read Appendix VI in its entirety), I have taken very large sequences of consecutive lambdas and actually shown, by doing a chi-squared fit comparing each sequence with a binomial sequence of the same size, that they are indistinguishable (statistically speaking) from coin tosses. In fact, the sequence of the first 176 trillion consecutive lambdas is indistinguishable from coin tosses. My paper gives an explanation (reason) for this phenomenon, and this reason leads to the proof of RH.
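For the curious reader, the kind of statistic being described can be sketched at a much smaller scale (the first million lambdas, nothing like the 176 trillion mentioned above; the sieve and the simple two-cell chi-squared test are my own illustration, not the paper's computation):

```python
def liouville_upto(limit: int) -> list[int]:
    """lam[n] = (-1)**Omega(n), via a smallest-prime-factor sieve."""
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, limit + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (limit + 1)
    lam[1] = 1
    for n in range(2, limit + 1):
        lam[n] = -lam[n // spf[n]]   # lambda is completely multiplicative
    return lam

N = 1_000_000
lam = liouville_upto(N)
plus = sum(1 for n in range(1, N + 1) if lam[n] == 1)
minus = N - plus
# Pearson chi-squared against a fair coin: sum over cells of (obs-exp)^2/exp.
chi2 = (plus - N / 2) ** 2 / (N / 2) + (minus - N / 2) ** 2 / (N / 2)
print(plus / N, chi2)
assert 0.49 < plus / N < 0.51   # the signs are very nearly balanced
```

Of course, near-balance of + and - says nothing by itself about how fast the partial sums L(N) grow, which is the point of contention in this thread.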
As far as I can tell, I think I have answered your queries; the answers are also contained in greater detail in Lectures 5 to 7 of the Lecture Series of Vinayak Eswaran, whose link is given in the last paragraph of this note: "SEVEN LECTURES". I very strongly suggest you study them; then you can read the papers and Notes contained in "THE SUMMARY" and my "Pathway" (see my previous write-up for the links on ResearchGate under my name).
Yes, "dilettante" would have been the correct word. By using the word "trust" I only meant that you should trust me just for the duration of time it would take you to read my arguments and to study carefully my papers and Notes (as suggested in the previous para) to the very end, with an open mind. Of course, after this is done (but not before), I expect that my reasoning will be subject to the closest and strictest scrutiny.
I thank You once again for all the interest you have taken.
Since this blog will be read by others, I just wish to say (for the curious non-expert) that my brother Vinayak Eswaran (formerly a professor at IIT Kanpur and presently a professor at IIT Hyderabad) has taken it upon himself to upload a series of Seven Lectures entitled "Seven Lectures on the Proposed Proof by Kumar Eswaran of the Riemann's Hypothesis". See Link:
These lectures are set at a level which can be understood by a STEM undergraduate student.