
Mathematicians revive abandoned approach to the Riemann Hypothesis - new_guy
https://phys.org/news/2019-05-mathematicians-revive-abandoned-approach-riemann.html
======
archgoon
A mathematician has spent years trying to prove the Riemann hypothesis –
without success. Finally, he decides to sell his soul to the devil in exchange
for a proof. The devil promises to deliver a proof within four weeks.

Four weeks pass, but nothing happens. Half a year later, the devil shows up
again – in a rather gloomy mood.

"I’m sorry", he says. "I couldn’t prove the Riemann hypothesis either. But" –
and his face lightens up – "I think I found a really interesting lemma…"

------
twic
_The idea for the paper was sparked two years ago by a "toy problem" that Ono
presented as a "gift" to entertain Zagier during the lead-up to a math
conference celebrating his 65th birthday. A toy problem is a scaled-down
version of a bigger, more complicated problem that mathematicians are trying
to solve._

_Zagier described the one that Ono gave him as "a cute problem about the
asymptotic behavior of certain polynomials involving Euler's partition
function, which is an old love of mine and of Ken's—and of pretty much
any classical number theorist."_

_"I found the problem intractable and I didn't really expect Don to get
anywhere with it," Ono recalls. "But he thought the challenge was super fun
and soon he had crafted a solution."_

That sounds like a potentially useful problem-solving trick - pull a small but
key problem out of your bigger problem, and then nerd-snipe your friends into
solving it.

~~~
pmiller2
Definitely. Two fairly standard techniques in mathematics are solving a cut
down version of a problem and lifting that to a full solution, and solving a
more general version and specializing to a particular case. Sometimes,
amazingly, the more general problem is easier to solve, perhaps because fewer
irrelevant details stand in the way.

Both methods have yielded a lot of progress in mathematics. For example,
solving a cut down version of Fermat’s Last Theorem got us the theory of
ideals in algebra. Relatedly, Wiles' work on the modularity theorem ultimately
led to the actual proof of FLT.

~~~
throwawaymath
_> Sometimes, amazingly, the more general problem is easier to solve, perhaps
because fewer irrelevant details stand in the way._

True, and that's why we like to generalize things. It's usually easier to
prove a mathematical object belongs to another class of structure which has
certain properties than to prove the object itself has all those properties.

Of course as you've noted it's not always like that at all. It's significantly
easier to prove many things about finite-dimensional vector spaces over fields
versus infinite-dimensional modules over rings.

Solving mathematical problems is a lot like working a sponge. It's often
useful to alternate between expanding and contracting the scope of your
problem.

------
butterthebuddha
"This method doesn't work and Ken knows that, not sure wtf this article is. At
most, this idea can show that the zeroes off the critical line are density
zero, which would be a huge result, but could never prove RH. Again, Ken knows
this (he says as much when he talks about it) so wtf is this article?"

- Someone on Reddit
([https://old.reddit.com/r/math/comments/brgp5z/mathematicians...](https://old.reddit.com/r/math/comments/brgp5z/mathematicians_revive_abandoned_approach_to_the/))

------
oneepic
Neat. I mean, I don't understand the mathspeak since I stopped taking math
after calculus in college, but it's neat to see fun approaches turn into
viable tactics for finding a solution.

By coincidence I toyed with the problem a bit this past week. Pretty difficult
to tackle with a "basic" math toolbox (calc, trig, algebra, and some discrete
math/number theory that's taught in many CS undergrad curricula). Much like
Fermat's Last Theorem, it seems like this might require much higher-level math
to solve what seems like a simple problem.

~~~
Ragib_Zaman
Man, HN is harsh as hell. While yes, if a proof of RH is found it will probably
use advanced tools, finding a proof with elementary tools is not totally
out of the question. Regarding the Prime Number Theorem, which can be seen as
a baby version of RH, experts in the field such as Hardy were quoted as
saying an elementary proof could not exist. Then, 50 years after Hadamard and
de la Vallée Poussin found the original proofs, Selberg and Erdős published
proofs based on elementary methods. No one can say that it's worthless for
someone to try the same for RH.

~~~
math-fan
I've taken math up to calculus and linear algebra and enjoy watching 3B1B
proofs. Is that enough to follow the elementary Prime Number Theorem proof?
I've been interested in understanding that for a while. Any good links?

~~~
fspeech
You will need to learn some complex analysis, which is very beautiful. Ash has
a very good chapter on the Prime Number Theorem as a reward for learning
complex variables:
[https://faculty.math.illinois.edu/~r-ash/CV.html](https://faculty.math.illinois.edu/~r-ash/CV.html)
You don't need to work through the whole book; some knowledge of contour
integrals and analytic continuation is sufficient to understand the proof. So
you could just start with the last chapter and fill the gaps as you go. (If
you go linearly through the book you will encounter things like normal
families, which are not required for your stated goal. Even though the
normal-family material is even more beautifully amazing IMHO, the presentation
here may be too abstract to grok and could discourage one from soldiering on.)

~~~
User23
Visual Complex Analysis by Tristan Needham is an incredibly approachable
textbook on the subject.

------
ncmncm
I've seen a paper[1] that claims RH, as stated, is undecidable, and then
proposes to prove that an analytic continuation of it has the property
required. The idea seems to be that you can choose your continuation to have
or not have the property.

It's not clear to me what could be undecidable about the original
proposition: either the continuation matches at the points in question, or it
doesn't, and if it doesn't, how is it a continuation?

Enlightenment welcome.

[1] [http://phys.lsu.edu/~fmoxley/r1.pdf](http://phys.lsu.edu/~fmoxley/r1.pdf)

~~~
soVeryTired
I've been following the development of the Bender-Brody-Müller approach to the
Riemann hypothesis because I knew one of the authors in a previous life. I
don't think it's taken that seriously by number theorists [0][1]. The paper
was written by physicists, who sometimes play a little fast and loose with
mathematical rigour.

Maybe this line of argument fixes the flaws in the original BBM paper in some
way - I'm not qualified to comment - but I sort of doubt it.

[0] [https://arxiv.org/abs/1704.02644](https://arxiv.org/abs/1704.02644)
[1] [https://math.stackexchange.com/questions/2211278/riemann-hyp...](https://math.stackexchange.com/questions/2211278/riemann-hypothesis-is-bender-brody-m%C3%BCller-hamiltonian-a-new-line-of-attack)

------
nutanc
Nice detailed lectures on RH, along with a suggested proof, are here:
[https://www.youtube.com/playlist?list=PLRsxymPrOKUAk3eXhK9FZ...](https://www.youtube.com/playlist?list=PLRsxymPrOKUAk3eXhK9FZdGAPmgGGrhfb&disable_polymer=true)

~~~
gjm11
I have looked at Eswaran's paper that purports to prove RH and I'm pretty sure
it's all wrong. (Currently-latest version of paper is here:
[https://arxiv.org/pdf/1609.06971v10.pdf](https://arxiv.org/pdf/1609.06971v10.pdf))

The basic idea is to prove that a certain function L(N) doesn't grow too
quickly. It's _true_ that appropriate bounds on L(N) imply RH.

But Eswaran's arguments for this point are full of handwaving, and in this
field handwaving is just not good enough. L(N) is a sum of terms +1 and -1,
and he claims that it behaves like a series of coin tosses, but the specific
ways he argues it's like a series of coin tosses are _not enough_ for the
conclusion he needs.

E.g., in section 5, on page 13, he states a theorem which says: you can pair
up the +1 and -1 terms in that series, so that for each +1 there's a -1 and
for each -1 there's a +1. This is certainly true, but it doesn't imply that
the series is random-like or that the partial sums L(N) don't grow much faster
than sqrt(N) (which is the condition he needs). E.g., consider the series that
goes ++-++-++-++-++-++- etc., whose partial sums obviously grow like N/3.
This satisfies that "pairing" condition just fine. (Pair the first + with the
first -, the second + with the second -, etc. There are infinitely many of
both, which is all you need to make that work.)
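The counterexample is easy to check numerically; here is a minimal Python sketch (my own illustration, not part of the original comment):

```python
# The repeating pattern ++-++-++-... satisfies the "pairing" condition
# (infinitely many terms of each sign), yet its partial sums grow like N/3,
# far faster than the sqrt(N)-type bound the argument would need.
def pattern_term(i):
    """i-th term (1-indexed) of the sequence ++-++-++-..."""
    return -1 if i % 3 == 0 else +1

def partial_sum(n):
    """Sum of the first n terms."""
    return sum(pattern_term(i) for i in range(1, n + 1))

# Each block of three terms contributes +1 +1 -1 = +1, so the sum at N = 3k is k.
for n in (30, 300, 3000):
    assert partial_sum(n) == n // 3
```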

In appendix V, he tries a different approach. (The fact that he gives two
different "proofs" of the same key result, and both are handwavy, is a Very
Bad Sign.) There's lots here that looks wrong, but here's one thing: he claims
to prove that if |L(N)| ~ N^a for some a then we must have a=1/2, and having
given his proof he claims to have proved that in fact |L(N)| ~ N^(1/2). But
_these are very different propositions_. Another possibility is that we don't
have |L(N)| ~ N^a for any a at all, which in fact is true because it's known
that infinitely often |L(N)| <= 1 and infinitely often |L(N)| >= sqrt(N).
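For concreteness: L(N) here is the summatory Liouville function, which is cheap to compute for small N. A quick sketch (mine, not from the comment) showing that L(N) keeps returning to zero, so no single growth exponent can fit:

```python
def liouville(n):
    """Liouville lambda(n) = (-1)**Omega(n), with Omega counting prime
    factors of n with multiplicity (trial division, fine for small n)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

def L(N):
    """Partial sum L(N) = lambda(1) + ... + lambda(N)."""
    return sum(liouville(n) for n in range(1, N + 1))

# L(N) repeatedly returns to 0, so |L(N)| ~ N^a cannot hold for any fixed a > 0.
assert [L(n) for n in range(1, 11)] == [1, 0, -1, 0, -1, 0, -1, -2, -1, 0]
```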

The lectures appear to be about his alleged proof rather than about RH in
general. In any case, I would not trust someone capable of making the errors
in his RH paper to teach any research-level material in pure mathematics.

~~~
K_Eswaran
Thanks a great deal for your comments.

You have been reading almost the very first version of the paper. The correct
version of the paper is:
[https://www.researchgate.net/publication/325035649_The_Final...](https://www.researchgate.net/publication/325035649_The_Final_and_Exhaustive_Proof_of_the_Riemann_Hypothesis_from_First_Principles)

The basic idea is to prove that the Liouville (lambda) series of length N
behaves like coin tosses or as a random walk for very large values of N.

The "alternate proofs" that you talk about are just alternative proofs of
certain theorems that are necessary for the proof of RH. Their existence does
not mean I do not believe in the previous proof.

The reply to your misgivings can be found by studying Items (3) and (4) in the
Summary below. But before you peruse them, I suggest that you must Trust me,
because I am a serious researcher, and whatever I have found I have openly
written about in my papers and Project Notes and my lectures (I have been
open and free with my ideas). If you think that I am a non-serious philanderer
then you can save yourself the trouble by not reading what I have done. I am
of the firm belief that, to begin to understand someone's new idea, one must
first trust him!

Most of my work has been uploaded to ResearchGate and there have been
thousands of reads/downloads. The work is outlined in the summary below:

SUMMARY OF KUMAR ESWARAN'S WORK ON RIEMANN HYPOTHESIS

1) THE MAIN PAPER IS (Version May 2018):
[https://www.researchgate.net/publication/325035649_The_Final...](https://www.researchgate.net/publication/325035649_The_Final_and_Exhaustive_Proof_of_the_Riemann_Hypothesis_from_First_Principles)

2) HIS INVITED TALK AT IIT MADRAS (May 14, 2019)
[https://www.researchgate.net/publication/333185552_Invited_L...](https://www.researchgate.net/publication/333185552_Invited_Lecture_at_IIT_MADRAS_On_the_Pathway_to_the_Proof_of_the_Riemann_Hypothesis)

3) Along with this Invited Talk an Extended ABSTRACT outlining the Crucial
Steps of the Proofs and References to Notes are given see:
[https://www.researchgate.net/publication/331889126_The_Pathw...](https://www.researchgate.net/publication/331889126_The_Pathway_to_the_Riemann_Hypothesis)

4) He has made many additional Notes on his work under the Link above: Project
Notes and Replies to various comments on RH.

His answers to various comments can be found by clicking "Project Log" in the
above Link.

-----

5) Additionally, Prof Vinayak Eswaran took it upon himself to give a series of
Seven Lectures addressed at the level of undergraduates. His motivation for
doing this is given in his INTRODUCTION, which may be read first:
[https://www.youtube.com/playlist?list=PLRsxymPrOKUAk3eXhK9FZ...](https://www.youtube.com/playlist?list=PLRsxymPrOKUAk3eXhK9FZdGAPmgGGrhfb&disable_polymer=true)

END OF SUMMARY

I thank You Once again for your interest.

Regards KE

~~~
gjm11
The version I linked to was (so far as I could tell) the latest on the arXiv,
posted a couple of years after the first version. I've now looked at the
latest version and it seems to me it still has much the same problems as
before, just hidden a little better.

As you say, your argument depends on having the Liouville series behave like a
series of independent coin tosses. But you never really show that it does. In
your "Appendix IV" you argue, e.g., that if m,n are coprime then
lambda(m),lambda(n) are "independent". Strictly speaking, that's nonsense:
independence is a relationship between _random variables_ and
lambda(m),lambda(n) are not random variables at all. There might be some
theorem that says something like "if you consider a long enough series of
these, then their statistics are very close to those you'd get from
independent random variables", but your argument doesn't show anything at all
like that.

(You do argue -- the argument seems clearly unsound to me[1], but the
conclusion, or something like it, is probably correct -- that for fixed L and
large enough N you can't deduce lambda(N) from lambda(N-1),...,lambda(N-L).
But this has nothing at all to do with the relevant notion of statistical
independence.)

[1] Why is it unsound? Well, several things look wrong with it. Here's one of
them: you say, in effect, that you can only work out lambda(N) from earlier
values if you have lambda(some divisor of N). But clearly that isn't true;
e.g., if N is a multiple of 3 then you have lambda(N) = lambda(2N/3).
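Since lambda is completely multiplicative, lambda(3m) = -lambda(m) = lambda(2m), which is the identity used in the footnote. A quick numerical check (my sketch, not gjm11's):

```python
def liouville(n):
    """Liouville lambda: parity of the number of prime factors of n,
    counted with multiplicity (trial division, fine for small n)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

# For N a multiple of 3: lambda(N) = lambda(3)*lambda(N/3) = -lambda(N/3),
# and lambda(2N/3) = lambda(2)*lambda(N/3) = -lambda(N/3), so they agree.
for N in range(3, 3001, 3):
    assert liouville(N) == liouville(2 * N // 3)
```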

The "arithmetical proof" in Appendix V is exactly the same as before, and
remains unsound for the same reason: it doesn't distinguish between "if L(n) ~
Cn^a then a=1/2" and "L(n) ~ Cn^(1/2)". The second of these is in fact clearly
_false_, as I said before, because for some n L(n) is very close to zero.

The errors being made here are not, it seems to me, the sort of superficial
ones that just indicate glitches in the exposition of a basically-sound
underlying argument, and that need patching over. They indicate fundamental
misunderstandings of how this stuff works.

A couple of other wrong things -- but, to be clear, the point isn't that a few
specific statements are wrong, but that you don't have a firm grasp of just
what it is you need to prove:

In section 3.1 of your "pathway" document you say that it will be enough if
the function lambda satisfies three conditions: (1) +1 and -1 equally common,
(2) sequence not periodic, (3) can't predict new values of lambda from
previous values. This is _simply not true_. For instance, consider a sequence
defined as follows. Start with the sequence +-++--++++---- etc. (i.e., runs of
length 1,1,2,2,4,4,8,8, etc.) This has properties 1 and 2, and it "almost" has
property 3. Now perturb it by flipping each sign with probability 1/8 (either
actually randomly, or using a deterministic-but-random-like sequence -- e.g.,
if you consider that the actual lambda(n) form such a sequence then you can
flip the sign of the n'th term iff lambda(3n), lambda(3n+1), lambda(3n+2) are
all +1). This still satisfies properties 1 and 2, and now clearly satisfies
property 3 as well. But it doesn't at all satisfy the condition you need, of
growth no faster than about sqrt(n): if you pick n = 3*2^k - 2 (i.e., at the end
of a run of +) then the sum is approximately n/4.
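The growth claim in this counterexample can be checked by simulation; a sketch (my own, using random rather than deterministic flips):

```python
import random

def run_sequence(n_terms):
    """First n_terms of the sequence +-++--++++----...
    (runs of length 1,1,2,2,4,4,8,8,...)."""
    seq, sign, length = [], +1, 1
    while len(seq) < n_terms:
        seq.extend([sign] * length)
        if sign == -1:
            length *= 2  # double the run length after each run of -'s
        sign = -sign
    return seq[:n_terms]

random.seed(0)
k = 16
n = 3 * 2**k - 2                 # index at the end of a run of +'s
perturbed = [-s if random.random() < 1 / 8 else s for s in run_sequence(n)]
# The unperturbed partial sum at n is 2^k = (n + 2)/3; flipping each sign with
# probability 1/8 scales it by roughly 3/4, so the sum is about n/4 -- far
# above sqrt(n) growth.
assert 0.2 * n < sum(perturbed) < 0.3 * n
```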

In section 5.2 of your main paper, you give two slightly more refined versions
of the "twin" argument from older versions of the paper: it's not just that we
can pair up numbers so that lambda(n),lambda(n') have opposite signs, but that
we can divide the positive integers into subsets (the "towers") each of which
has alternating signs. Or: ... into subsets which we can pair up with one
another so that corresponding numbers in paired subsets have opposite signs.
But these conditions, even taken together, _don't have the consequence you
want_. Suppose we have _any_ sequence of +,- at all, provided it has
infinitely many of both signs; it can be decomposed into subsets satisfying
those conditions, as follows. We start with all "towers" (of course they
aren't "towers" in precisely your sense) T1, T2, ..., empty, and a "next sign
needed" for each tower of +,-,+,-,+,-, etc. Now go through our sequence in
order: n=1,n=2,n=3,..., assigning each number in turn to the
"highest-priority" tower it's eligible for. Highest-priority means smallest
value of (tower number) times (1 + number of numbers already assigned to tower). I
claim we always assign infinitely many numbers to each tower. If not, consider
the towers that only ever get finitely many numbers, and suppose tower k is
the one for which (tower number) times (1 + max numbers ever assigned) is
minimal. Then once n is large enough tower k will have highest priority, and
we have both infinitely many + and infinitely many -, so tower k will get
another number assigned to it, contradiction.

I profoundly disagree with what you say about trust. "Nullius in verba"
([https://en.wikipedia.org/wiki/Nullius_in_verba](https://en.wikipedia.org/wiki/Nullius_in_verba))!
Being a serious researcher doesn't mean that your work is exempt from checking
or immune to serious error. I'm happy to trust that you mean what you say and
are trying to get the mathematics right, but no one is entitled to be trusted
_to have actually got it right_.

(Incidentally, "philanderer" doesn't mean what I think you think it means. A
philanderer is a man who is in the habit of flirting with (or more-than-
flirting with) a lot of women. Perhaps you're looking for a word like
"dilettante"?)

~~~
K_Eswaran
May 29, 2019 I thank you for your detailed commentary and queries. My replies
are as follows.

to your point [1]:

The point I make is that, knowing lambda(N) (N large) and possibly
lambda(N-1), lambda(N-2), ..., lambda(N-K) for some fixed K, it is not
possible to predict ALL the other lambdas higher than N. This is proved in the
paper and can be intuitively understood because one cannot predict when the
next prime occurs. Thus lambda(N) behaves like c(N), where c(N) is the Nth toss
of a coin-tossing experiment.

It is only necessary to prove that lambda(N) behaves (approximately) like a
coin toss for large N. As explained, this requirement comes from Littlewood's
theorem (proved in the paper). It is not necessary that the lambdas are
perfectly random; of course they are not. For example, lambda(2n) = -lambda(n),
but if you choose n very large, say n=10^100, then 2n will be very far away
from n. Thus the 'randomness' in a very large sequence of lambdas is still
practically undisturbed; the unpredictable occurrence of new primes also
contributes toward 'randomness' - this is especially so for a very long
sequence of length N. To prove RH we only need this behaviour for N near
infinity, to satisfy Littlewood's Theorem. (Appendix VI actually verifies this
phenomenon over very large sequences.) In a paragraph below I show how, if M
is large, it is impossible to predict lambda(M+1) knowing only lambda(M)
without actually calculating it by using all the info on primes below Sqrt(M).

Regarding the example you construct in your para [1], where you take a
sequence of lambdas and flip their signs according to your specific rule: this
is not permissible. You must remember that the lambda(n) take on values +1 or
-1 depending on how many factors the particular integer n has. So you cannot
'flip' their signs arbitrarily, or as per your prescription, without actually
demonstrating that your prescription does not violate the rules of arithmetic
or the rules of factorization. The lambdas are arithmetical functions and obey
some rules. I suggest that you study the proof of unpredictability
(independence); it will convince you. NOTE: If you have a method of predicting
lambda(n) from (say) its k previous values (k fixed), then this (hypothetical)
method should work for ALL n; clearly it then means there exists some function
f(x1,x2,...,xK) which predicts the value according to the hypothetical method.
The paper proves that such a function cannot exist (this is my second proof of
independence; see page 13 para 2(a) of the Main Paper or slides 54-58 in my
IITM Lecture).

In the next paragraph (where you talk about towers), you again "suppose" the
lambdas in the towers to have some properties - you cannot suppose these
things without actually proving that your "suppositions" do not violate the
rules of factorization and arithmetic that govern the values of lambda(n).

Each tower contains integers whose lambdas alternate + - + - etc. I know that
there are infinite number of towers; this does not make any difference to the
proof - they are all countably infinite. The important thing is that each
integer occurs only once in some tower and does not occur in any other tower.

To prevent ourselves being inveigled by Cantor-like arguments (which impose
"pre-suppositions" on the properties of the lambdas), I suggest that we get
out of this "mapping" mind-set. I have given an independent proof (which
should be studied) of equal probabilities which only uses induction and the
assumption that each even integer is followed by an odd integer which precedes
another even integer (i.e. the number of even integers is equal to the number
of odd integers). Because of the latter reason, I believe this proof is
water-tight. This proof is given in:
[https://www.researchgate.net/publication/324828748_A_Simple_...](https://www.researchgate.net/publication/324828748_A_Simple_Proof_That_Even_and_Odd_Numbers_of_Prime_Factors_Occur_with_Equal_Probabilities_in_the_Factorization_of_Integers_and_its_Implications_for_the_Riemann_Hypothesis)

--- Regarding your last point, I agree I was just loosely stating matters
when I put things like "L(n) ~ Cn^(1/2)" (sorry!). The actual rigorous
requirement for the proof of RH is that, as N tends to infinity, one must have
|L(N)| < C*N^(0.5 + e), where e is small but positive. The inequality
must hold in the limit for very large values of N, not at a particular value,
so Littlewood's condition actually specifies a bound; you can read the
statement of Littlewood's Theorem given in the paper.

----

The better way to intuitively understand why the lambdas behave like coin
tosses is to consider the following argument:

(a) Each integer n can be uniquely factorized into primes. (b) So the larger
the value of n, the greater the number and variety of primes involved, because
there will be more primes available. So if M is a large number (say M=10^100)
and if lambda(M) = +1, meaning M has an even number of prime factors, then it
is impossible to say whether the next number (M+1) will have lambda(M+1) = +1
or -1 without actually calculating it. You will actually see that the primes
which factorize M are different from the primes that factorize (M+1); in fact
they will be completely different. That is why the lambdas behave like "coin
tosses". Another way of "seeing it" is to compare a binomial distribution (of
coin tosses) with a Gaussian distribution (this comparison is valid for a
large number of samples). It is well known that if a random variable X (say
the height of a person) depends on very many unknown variables like diet,
height of parents, environment, etc., then the random variable X behaves like
a Gaussian. The same holds for coin tosses (a coin toss depends on the very
many ways it can be tossed), and a similar thing happens here because
lambda(n) depends on the factorization of n, involving many primes (variables).

In Appendix VI (I strongly recommend that you read Appendix VI in its
entirety), I have taken very large sequences of consecutive lambdas and I have
actually shown, by doing a chi-squared fit and comparing each sequence with a
binomial sequence of the same size, that they are indistinguishable
(statistically speaking) from coin tosses. In fact, if one considers the
sequence of the first 176 trillion consecutive lambdas, the sequence is
indistinguishable from coin tosses. My paper gives an explanation (reason) for
such a phenomenon, and this reason leads to the proof of RH.
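At a much smaller scale, the near-equality of + and - signs is easy to illustrate (a quick sketch of my own, not from the comment; note that such equidistribution is a classical fact and is much weaker than the sqrt(N)-type bound RH requires):

```python
def liouville(n):
    """Liouville lambda: (-1)**(number of prime factors of n,
    counted with multiplicity), via trial division."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

N = 20000
plus = sum(1 for n in range(1, N + 1) if liouville(n) == 1)
# A fair coin would give about N/2 heads, with fluctuations on the order of sqrt(N).
assert abs(plus - N / 2) < 5 * N**0.5
```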

As far as I can tell, I think I have answered your Queries; the answers are
also contained in greater detail in Lectures 5 to 7 in the Lecture Series of
Vinayak Eswaran, whose Link is given in the last paragraph of this note:
"SEVEN LECTURES". I very strongly suggest you study them, and then you can read
the papers and Notes contained in "THE SUMMARY" and my "Pathway" (see my
previous write-up for the links in ResearchGate under my name).

Yes, "dilettante" would have been the correct word. By using the word "trust"
I only meant that you should trust me just for the duration of time that it
would take you to have read my arguments and to have studied carefully my
papers and the Notes (as suggested in the previous para) till the very end,
with an open mind. Of course, after this is done (but not before), I expect
that my reasoning will be subject to the closest and strictest scrutiny.

--

I thank You once again for all the interest you have taken.

SEVEN LECTURES: Since this blog will be read by others, I just wish to say
(for the curious non-expert) that my brother Vinayak Eswaran (formerly a
professor at IIT Kanpur and presently a professor at IIT Hyderabad) has taken
it upon himself to upload a series of Seven Lectures entitled "Seven Lectures
on the Proposed Proof by Kumar Eswaran of the Riemann Hypothesis". See Link:
[https://www.youtube.com/playlist?list=PLRsxymPrOKUAk3eXhK9FZ...](https://www.youtube.com/playlist?list=PLRsxymPrOKUAk3eXhK9FZdGAPmgGGrhfb&disable_polymer=true)

These lectures are set at a level which can be understood by a STEM
undergraduate student.

Regards, K.E.

