Hacker News
Mathematicians revive abandoned approach to the Riemann Hypothesis (phys.org)
91 points by new_guy 24 days ago | 33 comments



A mathematician has spent years trying to prove the Riemann hypothesis – without success. Finally, he decides to sell his soul to the devil in exchange for a proof. The devil promises to deliver a proof within four weeks.

Four weeks pass, but nothing happens. Half a year later, the devil shows up again – in a rather gloomy mood.

"I’m sorry", he says. "I couldn’t prove the Riemann hypothesis either. But" – and his face lightens up – "I think I found a really interesting lemma…"


The idea for the paper was sparked two years ago by a "toy problem" that Ono presented as a "gift" to entertain Zagier during the lead-up to a math conference celebrating his 65th birthday. A toy problem is a scaled-down version of a bigger, more complicated problem that mathematicians are trying to solve.

Zagier described the one that Ono gave him as "a cute problem about the asymptotic behavior of certain polynomials involving Euler's partition function, which is an old love of mine and of Ken's—and of pretty much any classical number theorist."

"I found the problem intractable and I didn't really expect Don to get anywhere with it," Ono recalls. "But he thought the challenge was super fun and soon he had crafted a solution."

That sounds like a potentially useful problem-solving trick - pull a small but key problem out of your bigger problem, and then nerd-snipe your friends into solving it.


Definitely. Two fairly standard techniques in mathematics are solving a cut down version of a problem and lifting that to a full solution, and solving a more general version and specializing to a particular case. Sometimes, amazingly, the more general problem is easier to solve, perhaps because fewer irrelevant details stand in the way.

Both methods have yielded a lot of progress in mathematics. For example, attacks on cut-down versions of Fermat's Last Theorem got us the theory of ideals in algebra. Relatedly, Wiles's work on the modularity theorem ultimately led to the actual proof of FLT.


> Sometimes, amazingly, the more general problem is easier to solve, perhaps because fewer irrelevant details stand in the way.

True, and that's why we like to generalize things. It's usually easier to prove a mathematical object belongs to another class of structure which has certain properties than to prove the object itself has all those properties.

Of course as you've noted it's not always like that at all. It's significantly easier to prove many things about finite-dimensional vector spaces over fields versus infinite-dimensional modules over rings.

Solving mathematical problems is a lot like working a sponge. It's often useful to alternate between expanding and contracting the scope of your problem.


"This method doesn't work and Ken knows that, not sure wtf this article is. At most, this idea can show that the zeroes off the critical line are density zero, which would be a huge result, but could never prove RH. Again, Ken knows this (he says as much when he talks about it) so wtf is this article?"

- Someone on Reddit (https://old.reddit.com/r/math/comments/brgp5z/mathematicians...)


Neat. I mean, I don't understand the mathspeak since I stopped taking math after calculus in college, but it's neat to see fun approaches turn into viable tactics for finding a solution.

By coincidence I toyed with the problem a bit this past week. Pretty difficult to tackle with a "basic" math toolbox (calc, trig, algebra, and some discrete math/number theory that's taught in many CS undergrad curricula). Much like Fermat's Last Theorem, what looks like a simple problem may require much higher-level math to solve.


Man, HN is harsh as hell. Yes, if a proof of RH is found it will probably use advanced tools, but finding a proof with elementary tools is not totally out of the question. Consider the Prime Number Theorem, which can be seen as a baby version of RH: experts in the field such as Hardy were quoted as saying an elementary proof would never exist. Then, 50 years after Hadamard and de la Vallée Poussin found the original proofs, Selberg and Erdős published proofs based on elementary methods. No one can say it's worthless for someone to try the same for RH.


I've taken math up to calculus and linear algebra and enjoy watching 3B1B proofs. Is that enough to follow the elementary Prime Number Theorem proof? I've been interested in understanding that for a while. Any good links?


You can probably slog through it with that level of knowledge, if you've got a good grasp of summation notation.

Here’s one version of the proof: https://www.math.lsu.edu/~mahlburg/teaching/handouts/2014-72...

See also:

https://www.math.columbia.edu/~goldfeld/ErdosSelbergDispute....

https://www.math.columbia.edu/~goldfeld/ErdosSelbergDispute....

And a related, very simple and elementary proof of Chebyshev's theorem that there is always a prime number between n and 2n: https://en.wikipedia.org/wiki/Proof_of_Bertrand%27s_postulat...
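That one is also fun to sanity-check numerically. A quick Python sketch (trial-division primality; the function names are just mine for illustration):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def bertrand_witness(n):
    """Return a prime p with n < p < 2n (Bertrand/Chebyshev: exists for every n > 1)."""
    return next(p for p in range(n + 1, 2 * n) if is_prime(p))
```

Of course this checks only finitely many cases; the charm of the proof in the link is that it needs nothing beyond binomial coefficients.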


You will need to learn some complex analysis, which is very beautiful. Ash has a very good chapter on the Prime Number Theorem as a reward for learning complex variables: https://faculty.math.illinois.edu/~r-ash/CV.html You don't need to learn the whole book; some knowledge of contour integrals and analytic continuation is sufficient to understand the proof, so you could just start with the last chapter and fill in the gaps as you go. (If you go through the book linearly you will encounter things like normal families that aren't required for your stated goal. The normal-family material is even more beautifully amazing IMHO, but the presentation here may be too abstract to grok and could discourage one from soldiering on.)


Visual Complex Analysis by Tristan Needham is an incredibly approachable textbook on the subject.


Check out the book “Prime Obsession”

It was easy for me to follow along.


Notice also the decidability of primality in polynomial time (the AKS algorithm), proved in 2002 by Manindra Agrawal and his computer science students Neeraj Kayal and Nitin Saxena, which reportedly led Bombieri (one of the leading experts on number theory) to say that number theory may have gone astray in theoretical complexity in some fields.


That's very interesting, do you know offhand where Bombieri said that?


I agree, I'm not saying a simple proof is ever out of the question either. But my failed attempt makes me wonder how much of a creative burst a typical person would have to have in order to come up with a simple solution in this case.


Don’t conflate “elementary” in the GP post with “simple.” Elementary in this context just means that the proof does not use tools from complex analysis (i.e. it only uses theorems from real analysis, which is basically freshman calculus on steroids). The elementary proof of the prime number theorem is not easy by any means.


I didn't mean you were the one implying an elementary proof is unobtainable, that was directed at some of the other replies to your comment like "oh really?". I agree that it's enlightening to give these things a try.


Oh really? :-D


Well, parent tried to understand the problem, used the tools at hand to solve it without preconceptions, and realized it was more complex than it seemed.

The world would be a better place if more people had the same approach to problem-solving, IMHO.


Not saying that there is anything wrong with it. It is still funny though :-) It is like somebody saying they tried to go to the moon, but after walking for an hour they discovered it was harder to get there than they initially thought.


Yes! I had a lot of ideas but didn't get anywhere, including:

- reframe the real part of a complex input s as 1/2 + k for -1/2 < k < 1/2, since we're only looking at the critical strip 0 < Re(s) < 1. Then show k has to be zero for the math to work out, so that the real and imaginary parts of the function's value at s are both zero. Eventually failed because I can only show that Re(s) = 1/2 works out relatively "nicely", so you could more easily pick Im(s) to find zeroes; I can't prove that a different real value wouldn't still let the math work out so you could find a zero in the first place.

- come up with a formula to sum the series. I don't have much math know-how in this regard beyond some tricks I learned in a discrete math/algorithms class, so I quickly ran out of ideas. Pairing sum terms together (like you can do to prove the sum of a telescoping series, or of the natural numbers from 0 to N) doesn't seem to work.

- The original sum formula looks like sum(n=1, infinity, ((-1)^(n+1))/(n^s)) where s is the complex input. I tried to turn it into a continuous function by using -cos(pi*n) for the numerator and then use a definite integral, but the result is nasty before you even plug in the integral bounds. There's an imaginary term as the second argument for the gamma function, which I don't understand how to handle.

- suppose there are zeroes with two different real parts, r1 and r2, at r1 + ci and r2 + di for some imaginary parts c and d. Then show the ratio r1/r2 = 1; since we know r1 = 1/2 works, 1/2 would be the only possible real part. Much like the summing stuff above, I couldn't simplify things enough to even get an equation in terms of either r1 or r2.

- show that two zeroes on either side of the line Re(s) = 1/2 cannot both exist, or at least that if one exists the other must not, contradicting the assumption that we'd have two zeroes straddling the line, one with real part in (0, 1/2) and the other in (1/2, 1). I don't have a good way of actually attacking this idea though.

There are some other ideas I tried, but the main problem is I can't simplify the sum. The sum itself plus the fact there's exponents in every denominator makes it especially hard to simplify algebraically, or at least I don't know how to do it.
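For what it's worth, the sum is at least easy to play with numerically. The alternating series above is the Dirichlet eta function, eta(s) = (1 - 2^(1-s)) * zeta(s), so it vanishes at the nontrivial zeros, and its slowly converging partial sums can be accelerated by repeatedly averaging them (Euler summation). A rough Python sketch (the term count and the sample points are my own choices):

```python
import math

def eta(s, terms=120):
    """Dirichlet eta at complex s via iterated averaging of partial sums."""
    partial = []
    total = 0j
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) / complex(n) ** s
        partial.append(total)
    # repeatedly average adjacent partial sums (Euler summation);
    # the lone survivor is the accelerated estimate of the limit
    while len(partial) > 1:
        partial = [(a + b) / 2 for a, b in zip(partial, partial[1:])]
    return partial[0]

# sanity check: eta(1) = ln 2
# near the first zeta zero, s = 0.5 + 14.134725i, eta is essentially 0,
# while nudging the real part to 0.6 gives a clearly nonzero value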


I've seen a paper[1] that claims RH, as stated, is undecidable, and then proposes to prove that an analytic continuation of it has the required property. The idea seems to be that you can choose your continuation to have or not have the property.

It's not clear to me what could be undecidable about the original proposition; either the continuation matches at the points in question or it doesn't, and if it doesn't, how is it a continuation?

Enlightenment welcome.

[1] http://phys.lsu.edu/~fmoxley/r1.pdf


I've been following the development of the Bender-Brody-Muller approach to the Riemann hypothesis because I knew one of the authors in a previous life. I don't think it's taken that seriously by number theorists [0][1]. The paper was written by physicists, who sometimes play a little fast and loose with mathematical rigour.

Maybe this line of argument fixes the flaws in the original BBM paper in some way - I'm not qualified to comment - but I sort of doubt it.

[0] https://arxiv.org/abs/1704.02644 [1] https://math.stackexchange.com/questions/2211278/riemann-hyp...


I'm not familiar with that particular paper, but I'm aware of a similar line of attack from a decade or so ago. Off the top of my head I can tell you that we know Riemann is provably false if it is false[1]. I'm not aware of any comparable result for the affirmative, but personally I'm a bit skeptical of the undecidability argument.

That being said, it would be kind of narratively satisfying if Riemann were undecidable. There are many novel theorems from the past two decades which are conditionally true assuming Riemann is true; Riemann itself has become a useful tool for conditionally proving new theorems. If Riemann is undecidable, it would imply a great deal of mathematics has been developed that compartmentalizes the undecidability of other statements.

1. https://mathoverflow.net/questions/79685/can-the-riemann-hyp...


Not having read the paper, I’m skeptical, both because of the author’s relative lack of credentials (he’s a physicist with all of 5 publications, from what I can see), and because the core idea seems too simple.

However, the paper is clearly a serious attempt by someone inside academia who works in quantum computing, which is a related and heavily mathematical field.

You may have just given me a few hours of weekend reading material. :)


Nice detailed lectures on RH, along with a suggested proof, are here: https://www.youtube.com/playlist?list=PLRsxymPrOKUAk3eXhK9FZ...


I have looked at Eswaran's paper that purports to prove RH and I'm pretty sure it's all wrong. (Currently-latest version of paper is here: https://arxiv.org/pdf/1609.06971v10.pdf)

The basic idea is to prove that a certain function L(N) doesn't grow too quickly. It's true that appropriate bounds on L(N) imply RH.

But Eswaran's arguments for this point are full of handwaving, and in this field handwaving is just not good enough. L(N) is a sum of terms +1 and -1, and he claims that it behaves like a series of coin tosses, but the specific ways he argues it's like a series of coin tosses are not enough for the conclusion he needs.

E.g., in section 5, on page 13, he states a theorem which says: you can pair up the +1 and -1 terms in that series, so that for each +1 there's a -1 and for each -1 there's a +1. This is certainly true, but it doesn't imply that the series is random-like or that the partial sums L(N) don't grow much faster than sqrt(N) (which is the condition he needs). E.g., consider the series that goes ++-++-++-++-++-++- etc., whose partial sums obviously grow like N/3. This satisfies that "pairing" condition just fine. (Pair the first + with the first -, the second + with the second -, etc. There are infinitely many of both, which is all you need to make that work.)
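That pairing objection is easy to see concretely; a tiny Python illustration (mine, not anything from the paper):

```python
# the pairing condition alone doesn't force sqrt(N) growth:
# the periodic sequence + + - + + - ... pairs every -1 with a +1
# (infinitely many of each), yet its partial sums grow like N/3
seq = [1, 1, -1] * 10_000

sums, total = [], 0
for x in seq:
    total += x
    sums.append(total)

N = len(seq)
assert sums[-1] == N // 3   # linear growth, nothing like sqrt(N)
```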

In appendix V, he tries a different approach. (The fact that he gives two different "proofs" of the same key result, and both are handwavy, is a Very Bad Sign.) There's lots here that looks wrong, but here's one thing: he claims to prove that if |L(N)| ~ N^a for some a then we must have a=1/2, and having given his proof he claims to have proved that in fact |L(N)| ~ N^1/2. But these are very different propositions. Another possibility is that we don't have |L(N)| ~ N^a for any a at all, which in fact is true because it's known that infinitely often |L(N)|<=1 and infinitely often |L(N)|>=sqrt(N).
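(Tangentially, for anyone who wants to experiment: lambda(n) and the partial sums L(N) are cheap to compute with a smallest-prime-factor sieve. The sketch below, including the spot checks, is my own illustration, not anything from the paper.)

```python
def liouville_upto(N):
    """Return lam with lam[n] = Liouville lambda(n) for 1 <= n <= N."""
    spf = list(range(N + 1))            # smallest prime factor of each n
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:                 # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (N + 1)
    lam[1] = 1
    for n in range(2, N + 1):
        # lambda is completely multiplicative with lambda(p) = -1,
        # so stripping one prime factor flips the sign
        lam[n] = -lam[n // spf[n]]
    return lam

N = 100_000
lam = liouville_upto(N)
L = [0]                                 # L[n] = lambda(1) + ... + lambda(n)
for n in range(1, N + 1):
    L.append(L[-1] + lam[n])
```

E.g. lam[1:11] comes out [1, -1, -1, 1, -1, 1, -1, -1, 1, 1], so L(2) = L(10) = 0, matching the "infinitely often |L(N)| <= 1" side; the whole question is about how large the excursions in between can get.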

The lectures appear to be about his alleged proof rather than about RH in general. In any case, I would not trust someone capable of making the errors in his RH paper to teach any research-level material in pure mathematics.


Hi, thanks for taking the time to explain in detail why you think the proof is wrong. I think this kind of discussion will go a long way toward finding the solution. I will pass the comment on to the prof; he will like that someone took the time to critique. So far, most of the time all he gets is, "the proof is not valid because no one has done it so far" :).


Thanks a great deal for your comments.

You have been reading almost the very first version of the paper. The correct version of the paper is: https://www.researchgate.net/publication/325035649_The_Final...

The basic idea is to prove that the Liouville (lambda) series of length N behaves like coin tosses or as a random walk for very large values of N.

The "alternate proofs" that you talk about are just alternative proofs of certain theorems that are necessary for the proof of RH. Their existence does not mean I do not believe in the previous proof.

The Reply to your misgivings can be found by studying Items (3) and (4) in the Summary below. But before you peruse them, I suggest that you must Trust me, because I am a serious researcher and what ever I have found I have openly written about them in my papers and Project Notes and my lectures (I have been open and free with my ideas). If you think that I am a non-serious philanderer then you can save yourself the trouble by not reading what I have done. I am of the firm belief, that to begin to understand someone's new idea one must first trust him!

Most of my work has been uploaded to ResearchGate and there have been thousands of reads/downloads. The work is outlined in the summary below:

SUMMARY OF KUMAR ESWARAN'S WORK ON RIEMANN HYPOTHESIS

1) THE MAIN PAPER:IS: (Version May 2018) https://www.researchgate.net/publication/325035649_The_Final...

2) HIS INVITED TALK IN IIT MADRAS (Date May 14 2019) https://www.researchgate.net/publication/333185552_Invited_L...

3) Along with this Invited Talk an Extended ABSTRACT outlining the Crucial Steps of the Proofs and References to Notes are given see: https://www.researchgate.net/publication/331889126_The_Pathw...

4) He has made many additional Notes on his work in the following Link Project Notes and Replies to various comments on RH

His answer to various comments can be found by clicking 'Project Log" in the above Link.

-----

5) Additionally Prof Vinayak Eswaran took it upon himself to give a series of Seven Lectures addressed at the level of undergraduates.. His motivation for doing this is given in his INTRODUCTION which may be read first: https://www.youtube.com/playlist?list=PLRsxymPrOKUAk3eXhK9FZ...

END OF SUMMARY

I thank You Once again for your interest.

Regards KE


P.S.: The sequence that you have given cannot happen because of the following reasons:

(i) It is cyclic and there cannot be any periodicity in the lambda sequence. (This is proved in the paper)

(ii) The sequence that you chose grows as N/3; this means that F(s) cannot be extended to the left beyond Re(s) = 1. In fact, if such a sequence were possible then we would have a disproof of RH.

(iii) The sequence does not have the statistics of a coin toss. See Appendix VI of the Main Paper whose link is in the summary. The chi-squared statistics will be different.

(iv) I have given a second proof of equal probabilities, which does not depend on mapping but only uses induction:

https://www.researchgate.net/publication/324828748_A_Simple_... This second proof should overcome your misgivings.

To repeat: I suggest you read Item (3) in the "SUMMARY" link I provided; it outlines the main argument of the proof and the basic steps.

Just to put things in proper perspective: the fact that the lambda sequence behaves like coin tosses has actually been verified by computation (chi-squared fitting) over very long sequences, including the sequence of the first 176 trillion integers! See Appendix VI of the Main Paper for details.

Thank You for your interest. KE


The link to Item (4) was missed; it is given below: (4) https://www.researchgate.net/project/The-Dirichlet-Series-fo...


The version I linked to was (so far as I could tell) the latest on the arXiv, posted a couple of years after the first version. I've now looked at the latest version and it seems to me it still has much the same problems as before, just hidden a little better.

As you say, your argument depends on having the Liouville series behave like a series of independent coin tosses. But you never really show that it does. In your "Appendix IV" you argue, e.g., that if m,n are coprime then lambda(m),lambda(n) are "independent". Strictly speaking, that's nonsense: independence is a relationship between _random variables_ and lambda(m),lambda(n) are not random variables at all. There might be some theorem that says something like "if you consider a long enough series of these, then their statistics are very close to those you'd get from independent random variables", but your argument doesn't show anything at all like that.

(You do argue -- the argument seems clearly unsound to me[1], but the conclusion, or something like it, is probably correct -- that for fixed L and large enough N you can't deduce lambda(N) from lambda(N-1),...,lambda(N-L). But this has nothing at all to do with the relevant notion of statistical independence.)

[1] Why is it unsound? Well, several things look wrong with it. Here's one of them: you say, in effect, that you can only work out lambda(N) from earlier values if you have lambda(some divisor of N). But clearly that isn't true; e.g., if N is a multiple of 3 then you have lambda(N) = lambda(2N/3).

The "arithmetical proof" in Appendix V is exactly the same as before, and remains unsound for the same reason: it doesn't distinguish between "if L(n) ~ Cn^a then a=1/2" and "L(n) ~ Cn^1/2". The second of these is in fact clearly false, as I said before, because for some n L(n) is very close to zero.

The errors being made here are not, it seems to me, the sort of superficial ones that just indicate glitches in the exposition of a basically-sound underlying argument, and that need patching over. They indicate fundamental misunderstandings of how this stuff works.

A couple of other wrong things -- but, to be clear, the point isn't that a few specific statements are wrong, but that you don't have a firm grasp of just what it is you need to prove:

In section 3.1 of your "pathway" document you say that it will be enough if the function lambda satisfies three conditions: (1) +1 and -1 equally common, (2) sequence not periodic, (3) can't predict new values of lambda from previous values. This is simply not true. For instance, consider a sequence defined as follows. Start with the sequence +-++--++++---- etc. (i.e., runs of length 1,1,2,2,4,4,8,8, etc.) This has properties 1 and 2, and it "almost" has property 3. Now perturb it by flipping each sign with probability 1/8 (either actually randomly, or using a deterministic-but-random-like sequence -- e.g., if you consider that the actual lambda(n) form such a sequence then you can flip the sign of the n'th term iff lambda(3n), lambda(3n+1), lambda(3n+2) are all +1). This still satisfies properties 1 and 2, and now clearly satisfies property 3 as well. But it doesn't at all satisfy the condition you need, of growth no faster than about sqrt(n): if you pick n = 3*2^k - 2 (i.e., at the end of a run of +) then the sum is approximately n/4.
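To make the base sequence of that counterexample concrete, here is a sketch of the unperturbed runs only (the 1/8-probability sign flips merely dilute the spikes, from roughly n/3 down to roughly n/4):

```python
# signs in runs of length 1,1,2,2,4,4,8,8,...: + - ++ -- ++++ ---- ...
seq = []
for k in range(12):
    seq += [1] * 2 ** k + [-1] * 2 ** k

sums, total = [], 0
for x in seq:
    total += x
    sums.append(total)

n = len(seq)                 # 2 * (2**12 - 1) = 8190 signs
assert sum(seq) == 0         # property 1: +1 and -1 equally common
# at the end of the k-th +run (position 3*2**k - 2) the partial sum
# is 2**k, i.e. about a third of the position -- far above sqrt(n)
assert sums[3 * 2 ** 11 - 2 - 1] == 2 ** 11
```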

In section 5.2 of your main paper, you give two slightly more refined versions of the "twin" argument from older versions of the paper: it's not just that we can pair up numbers so that lambda(n),lambda(n') have opposite signs, but that we can divide the positive integers into subsets (the "towers") each of which has alternating signs. Or: ... into subsets which we can pair up with one another so that corresponding numbers in paired subsets have opposite signs. But these conditions, even taken together, don't have the consequence you want. Suppose we have any sequence of +,- at all, provided it has infinitely many of both signs; it can be decomposed into subsets satisfying those conditions, as follows. We start with all "towers" (of course they aren't "towers" in precisely your sense) T1, T2, ..., empty, and a "next sign needed" for each tower of +,-,+,-,+,-, etc. Now go through our sequence in order: n=1,n=2,n=3,..., assigning each number in turn to the "highest-priority" tower it's eligible for. Highest-priority means smallest value of (tower number) times (1 + number of numbers already assigned to tower). I claim we always assign infinitely many numbers to each tower. If not, consider the towers that only ever get finitely many numbers, and suppose tower k is the one for which (tower number) times (1 + max numbers ever assigned) is minimal. Then once n is large enough tower k will have highest priority, and we have both infinitely many + and infinitely many -, so tower k will get another number assigned to it, contradiction.

I profoundly disagree with what you say about trust. "Nullius in verba" (https://en.wikipedia.org/wiki/Nullius_in_verba)! Being a serious researcher doesn't mean that your work is exempt from checking or immune to serious error. I'm happy to trust that you mean what you say and are trying to get the mathematics right, but no one is entitled to be trusted to have actually got it right.

(Incidentally, "philanderer" doesn't mean what I think you think it means. A philanderer is a man who is in the habit of flirting with (or more-than-flirting with) a lot of women. Perhaps you're looking for a word like "dilettante"?)


May 29, 2019: I thank you for your detailed commentary and queries. My replies are as follows.

to your point [1]:

The point I make is that, knowing lambda(N) (N large) and possibly lambda(N-1), lambda(N-2), ..., lambda(N-K) for some fixed K, it is not possible to predict ALL the other lambdas higher than N. This is proved in the paper and can be intuitively understood because one cannot predict when the next prime occurs. Thus lambda(N) behaves like c(N), where c(N) is the Nth toss of a coin-tossing experiment.

It is only necessary to prove that lambda(N) behaves (approximately) like a coin toss for large N. As explained, this requirement comes from Littlewood's theorem (proved in the paper). It is not necessary that the lambdas be perfectly random; of course they are not. For example, lambda(2n) = -lambda(n), but if you choose n very large, say n = 10^100, then 2n will be very far away from n. Thus the 'randomness' in a very large sequence of lambdas is still practically undisturbed; the unpredictable occurrence of new primes also contributes toward 'randomness', especially for a very long sequence of length N. To prove RH we only need this behaviour for N near infinity to satisfy Littlewood's theorem. (Appendix VI actually verifies this phenomenon over very large sequences.) In a paragraph below I show that if M is large it is impossible to predict lambda(M+1) knowing only lambda(M) without actually calculating it using all the information on primes below sqrt(M).

Regarding the example in your para [1], where you construct a sequence of lambdas and flip their signs according to your specific rule: this is not permissible, because you must remember that lambda(n) takes the value +1 or -1 depending on how many prime factors the particular integer n has. So you cannot 'flip' signs arbitrarily, or as per your prescription, without actually demonstrating that your prescription does not violate the rules of arithmetic or the rules of factorization. The lambdas are arithmetical functions and obey some rules. I suggest that you study the proof of unpredictability (independence); it will convince you. NOTE: If you have a method of predicting lambda(n) from (say) its k previous values (k fixed), then this (hypothetical) method should work for ALL n; clearly it then means there exists some function f(x1, x2, ..., xK) which predicts the value according to the hypothetical method. The paper proves that such a function cannot exist (this is my second proof of independence; see page 13 para 2(a) of the Main Paper or slides 54-58 in my IITM Lecture).

In the next paragraph (where you talk about towers), you again "suppose" the lambdas in the towers to have some properties - you cannot suppose these things without actually proving that your "suppositions" do not violate the rules of factorization and arithmetic that govern the values of lambda(n).

Each tower contains integers whose lambdas alternate + - + - etc. I know that there are an infinite number of towers; this does not make any difference to the proof -- they are all countably infinite. The important thing is that each integer occurs only once in some tower and does not occur in any other tower.

To prevent ourselves being inveigled by Cantor-like arguments (which impose "pre-suppositions" on the properties of the lambdas), I suggest that we get out of this "mapping" mindset. I have given an independent proof (which should be studied) of equal probabilities which only uses induction and the assumption that each even integer is followed by an odd integer which precedes another even integer (i.e. the number of even integers is equal to the number of odd integers). Because of the latter reason, I believe this proof is water-tight. This proof is given in: https://www.researchgate.net/publication/324828748_A_Simple_...

--- Regarding your last point, I agree I was just loosely stating matters when I put things like "L(n) ~ Cn^(1/2)" (sorry!). The actual rigorous requirement for the proof of RH is that as N tends to infinity one must have |L(N)| < C*N^(0.5 + e), where e is small but positive. The inequality must hold in the limit for very large values of N, not at a particular value, so Littlewood's condition actually specifies a bound; you can read the statement of Littlewood's theorem given in the paper.

----

The better way to intuitively understand why the lambdas behave like coin tosses is to consider the following argument: (a) each integer n can be uniquely factorized into primes; (b) so the larger the value of n, the greater the number and variety of possible prime factors, because there are more primes below it. So if M is a large number (say M = 10^100) and lambda(M) = +1, meaning M has an even number of prime factors, then it is impossible to say whether the next number M+1 will have lambda(M+1) = +1 or -1 without actually calculating it. You will see that the primes which factorize M are different from the primes that factorize M+1; in fact they will be completely different. That is why the lambdas behave like "coin tosses". Another way of "seeing it" is to compare a binomial distribution (of coin tosses) with a Gaussian distribution (this comparison is valid for a large number of samples). It is well known that if a random variable X (say the height of a person) depends on very many unknown variables, like diet, height of parents, environment, etc., then X behaves like a Gaussian. The same holds for coin tosses (a coin toss depends on the very many ways it can be tossed), and a similar thing happens here because lambda(n) depends on the factorization of n, involving many primes (variables).

In Appendix VI (I strongly recommend that you read Appendix VI in its entirety), I have taken very large sequences of consecutive lambdas and actually shown, by doing a chi-squared fit and comparing each sequence with a binomial sequence of the same size, that they are indistinguishable (statistically speaking) from coin tosses. In fact the sequence of the first 176 trillion consecutive lambdas is indistinguishable from coin tosses. My paper gives an explanation (reason) for this phenomenon, and this reason leads to the proof of RH.

As far as I can tell, I think I have answered your queries; the answers are also contained in greater detail in Lectures 5 to 7 of the lecture series of Vinayak Eswaran, whose link is given in the last paragraph of this note: "SEVEN LECTURES". I very strongly suggest you study them, and then you can read the papers and notes contained in "THE SUMMARY" and my "Pathway" (see my previous write-up for the links in ResearchGate under my name). -- Yes, "dilettante" would have been the correct word. By using the word "trust" I only meant that you should trust me just for the duration of time that it would take you to read my arguments and to study carefully my papers and the notes (as suggested in the previous para) till the very end, with an open mind. Of course, after this is done (but not before), I expect that my reasoning will be subject to the closest and strictest scrutiny.

--

I thank You once again for all the interest you have taken.

SEVEN LECTURES: Since this blog will be read by others, I just wish to say (for the curious non-expert) that my brother Vinayak Eswaran (formerly a professor at IIT Kanpur and presently a professor at IIT Hyderabad) has taken it upon himself to upload a series of seven lectures entitled "Seven Lectures on the Proposed Proof by Kumar Eswaran of the Riemann's Hypothesis". See link: https://www.youtube.com/playlist?list=PLRsxymPrOKUAk3eXhK9FZ...

These lectures are set at a level which can be understood by a STEM undergraduate student.

Regards K.E ----



