Hacker News
Computer scientists attempt to corner the Collatz conjecture (quantamagazine.org)
84 points by beefman 53 days ago | 47 comments

At some point I gave the Collatz conjecture some thought and understood the probabilistic intuition for why it should be the case. Although I obviously couldn't prove it, I decided to write up my thoughts on why it is intuitively obvious [1]. A few months later I received a comment mentioning that a proof had been submitted based on the same intuition [2].

[1] http://nocurve.com/virtual-lab/contemplating-the-collatz-con...

[2] http://nocurve.com/about/

> Every time the coin turns out heads, I take half of the money on the table (i.e. the total you have left is divided by 2). Every time it turns out to be tails, your money is multiplied by 1.5 (plus a 50 cents bonus). It's obvious that this game is totally rigged to my advantage, as the number of heads will converge to be similar to the number of tails, but the loss you incur with each head result (money divided by 2) is larger than the win gained with each tail result (money multiplied by 1.5 + change).

This game has positive expected value; the only problem is the risk of ruin if you size your bets too aggressively as a percentage of your chip stack.
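To make that concrete, here is a quick sketch (my own illustration, not from the article): the game's additive expectation per toss is positive, but the expected log-growth factor is negative, which is why a typical bankroll still shrinks toward zero.

```javascript
// Per toss: heads halves the bankroll, tails multiplies by 1.5 and adds 0.5.
// Additive expectation of one toss starting from x:
//   0.5 * (x / 2) + 0.5 * (1.5 * x + 0.5) = x + 0.25  (slightly positive)
// But multiplicative processes are governed by the expected log factor
// (ignoring the small +0.5 bonus, which vanishes for large x):
const expectedLogFactor = 0.5 * Math.log(0.5) + 0.5 * Math.log(1.5);
console.log(expectedLogFactor); // about -0.144: typical runs shrink

// Deterministic demo: a run with equal numbers of heads and tails
// still ends up well below the starting bankroll.
let x = 1000;
for (let i = 0; i < 20; i++) {
  x = i % 2 === 0 ? x / 2 : 1.5 * x + 0.5;
}
console.log(x < 1000); // true
```

This is the same asymmetry as in Collatz: halving shrinks by more (in log terms) than tripling grows, so equally likely up/down moves still trend down.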

This type of reasoning only gives you intuition about half the conjecture. The conjecture says two things:

1. No sequence grows bigger and bigger forever; and

2. No sequence gets caught in a loop (except for the trivial 4 -> 2 -> 1 -> 4 loop).

Your article addresses 1, but I don't think it says much about 2. Sure, sequences should probably trend downward over time, but why shouldn't a sequence that hops upward at the beginning get snagged on a value it's already hit as it falls back down, trapping it in a nontrivial loop?

Both parts you mentioned are derived from the Wikipedia formulation, which is that "no matter what value of n, the sequence will always reach 1", but the subject of possible nontrivial loops is an interesting one to contemplate from the coin-toss perspective.

The equivalence I made is to a truly random coin toss where heads means the number will be divided by two, and tails means the number will be multiplied by 1.5 and then 0.5 added to the result.

If the coin toss is truly random, it stands to reason there will be no loop (following the same intuition that entropy always increases). Any permanent loop would contradict the randomness of the coin toss.

Whether or not the reasoning above gives valid intuition for the conjecture being true depends on the degree to which the Collatz procedure can be treated as generating a truly random process, and this is the point where I presented the main flaw in the intuition.

> Aaronson came up with a rewriting system with seven symbols and 11 rules that was analogous to the Collatz procedure.

Has Aaronson written about this anywhere? I'm curious what system he came up with & how it maps onto Collatz.

Well, thank you. It's actually possible to follow even for someone without a CS education like me. I have skimmed it, and it seems that although they are confident the string rewriting system is equivalent to the conjecture, this has not been proven yet. Anyway, it's very interesting; I will try to read it in depth.

One interesting way to look at this is as a bit-twiddling puzzle. The "N /= 2 if even" step removes all the rightmost 0s in the binary representation. The remaining bits on the left then have "(N << 1) + N + 1" applied. I wonder if the behavior of those remaining bits can be broken down into a finite set of conditions and shown to eventually yield a power of 2.
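A minimal sketch of that bit-level view (my own code, just restating the comment above): strip the trailing zero bits, then apply (n << 1) + n + 1 to the remaining odd part.

```javascript
// One "odd step" of Collatz in bit-twiddling form: remove all rightmost
// 0 bits (repeated halving), then compute 3n + 1 as (n << 1) + n + 1.
// (>> is fine here for small demo values; it operates on 32-bit ints.)
function oddStep(n) {
  while ((n & 1) === 0) n >>= 1; // drop trailing zeros
  return (n << 1) + n + 1;       // 3n + 1, always even for odd n
}

console.log(oddStep(12)); // 12 -> 3, then 3*3 + 1 = 10
console.log(oddStep(5));  // 5*3 + 1 = 16, a power of 2: next stop is 1
```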

Eventually getting to a power of 2 means that 3n must, at some point, be a non-prime Mersenne number. If there were something special about the initial conditions, it feels like you'd be saying something about the distribution of Mersenne primes, which is unlikely?

Yes, in pseudo-C-like binary-oriented notation the step is:

if ( lowest bit of x is 0 ) x >>= 1 else x += ( x << 1 ) + 1

And it's immediately obvious that the else branch always produces an even number.
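A trivial check of that claim (my own snippet): for odd x, 3x is odd, so 3x + 1 is even.

```javascript
// For every odd x, the else branch x + (x << 1) + 1 = 3x + 1 is even.
let allEven = true;
for (let x = 1; x < 10001; x += 2) {
  if ((x + (x << 1) + 1) % 2 !== 0) allEven = false;
}
console.log(allEven); // true
```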

Yes, this is quite well known. See (e.g.) the third paragraph on Terence Tao's blog here


For what it's worth, Tao is possibly the only person in the world I wouldn't be shocked to hear had made progress on the Collatz conjecture. He actually proved a pretty nice result on it recently


but the technology employed doesn't seem strong enough to go beyond "almost all" statements.

> Yes, this is quite well known.

Sure, thanks for the Terence Tao links! I just wanted to confirm how effective C notation is for "seeing" what happens with the bits at the lowest level in this case, at least for those used to C. Now that I think about it, literal C would be even more concise:

if ( x & 1 ) x >>= 1 else x += ( x << 1 ) + 1

You have the then/else parts reversed -- and missing semicolons.

Yes, you're right, thanks.

I'm wondering if the matrix method is powerful enough for this particular problem.

What do we know about the class of terminating rewriting systems that are solvable by translation to matrices of size n? Is there a known terminating system that does not have a matrix solution? Or maybe at least a proof that such a system exists?

I am also wondering about that. I guess it is just one way to prove termination; it might be the case that this approach does not work for the Collatz conjecture.

There definitely exist rewriting systems for which no size of matrix works. String rewriting termination is in general undecidable, so matrix proofs, being enumerable, can't cover every terminating system.

A while ago I played around with the Collatz conjecture a bit myself. I guess this is all well-known but maybe some people find it interesting:

One thing I noticed was that the seemingly chaotic directed graph can be ordered if you arrange the integers in a two-dimensional grid where the x axis is the odd numbers and each row doubles the previous one. So like:

    1   3   5   7   9 ...
    2   6  10  14  18
    4  12  20  28  36 
It's easy to see that if n ≡ 0 (mod 2) we'll be on some row other than the first, so we'd just move all the way to the top. For integers on the first row, there are some interesting "orbits" forming[1].
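The grid above can be generated directly (my own sketch): column j holds the odd number 2j + 1 in the top row, and each row below doubles the one above.

```javascript
// Build the grid: row i, column j holds (2j + 1) * 2^i.
function collatzGrid(rows, cols) {
  const g = [];
  for (let i = 0; i < rows; i++) {
    const row = [];
    for (let j = 0; j < cols; j++) row.push((2 * j + 1) * 2 ** i);
    g.push(row);
  }
  return g;
}

console.log(collatzGrid(3, 5));
// [ [1, 3, 5, 7, 9], [2, 6, 10, 14, 18], [4, 12, 20, 28, 36] ]
// The "n /= 2" rule just moves a number up one row; only the top row
// (the odd numbers) ever uses the 3n + 1 rule.
```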

You can categorize them by which 2^n row the 3n+1 rule jumps to, and each of those attracts to some specific position. The 2^1 and 2^2 rows seem to be the only ones with an integer center point (at 1 and -1 respectively, so that's where you have cycles). Another interesting pattern I've noticed (not really shown) is that "combined" jumps, like a jump into the 2^1 row followed by a jump into the 2^2 row, also follow these fixed intervals.

[1] https://raw.githubusercontent.com/gist/ginkgo/7120618db058d6...

Another relatively recent article on this topic:


HN discussion at the time: https://news.ycombinator.com/item?id=21780068

Forgot that there's an xkcd, too:


I'm curious, can anyone shed light on the particular significance of this problem?

I know that many mathematical problems are more about "charting the terrain" and don't have a particular use until long after they're discovered. But this one seems especially arbitrary, almost like a brain-teaser. And given the funding involved, I have to wonder if there's some use case that's already known, even if that use case is just for other high-level math.

Edit: I found this https://math.stackexchange.com/questions/2694/what-is-the-im...

Sounds like it has something to do with the predictability of primes, which has obvious importance.

Now that Fermat's Last Theorem has been proven, the Collatz conjecture is arguably the simplest unsolved math problem. That is what makes it special. There are other unsolved problems that are at the heart of some rich field of mathematics and would be more mathematically significant to solve, like the Riemann hypothesis.

For fun I went ahead and wrote a little javascript to calculate the sequence. I realized that a lot of work would end up repeated as soon as you encountered an earlier number again (it's directed-graph traversal, really), so it records the ones that it's solved and bails out when it finds them again (denoted by the ->). You can paste the following into your browser console to get a feel for what it looks like.

    let memo = {}

    function conjecture(n) {
        let str = "";

        while (n !== 1) {
            str += n + " ";

            if (memo[n] != null) {
                str += "->";
                return str;
            } else {
                if (n % 2 === 0) n = memo[n] = n / 2;
                else n = memo[n] = n * 3 + 1;
            }
        }

        str += n + " ";

        return str;
    }

    function rangeTo(end) {
        return new Array(end).fill(null).map((_, i) => i + 1);
    }

    for (const n of rangeTo(1000)) {
        console.log(conjecture(n));
    }

A neat thing is that this terminates (at 1) for seemingly every number you try (I disabled console logs and the process terminated for every number up to a range of 10,000,000, which is as high as I tested). It's just the formal proof that's missing. There also seems to be a vague pattern to the results, but that may just be cloud-pictures.

> the process terminated for every number up to a range of 10,000,000, which is as high as I tested

Other people have taken this experiment a bit further already:


> As of 2020, the conjecture has been checked by computer for all starting values up to 2⁶⁸.

> It's just the formal proof that's missing.

That's the only thing that actually matters. You've demonstrated something analogous to the following: "Claim: there are no numbers larger than 100,000,000,000,000,000." You've then gone and checked all the numbers from 1 to 10 million, concluded that it's true, and then just say that we're missing a formal proof. I don't mean to be rude, but from an actual mathematical point of view, you've basically demonstrated nothing.

10 million is mind-bogglingly tiny when it comes to all the numbers. Consider this counterexample to Polya's conjecture: https://www.youtube.com/watch?v=eQCUPQdi6DY

Or so many other examples: https://math.stackexchange.com/questions/514/conjectures-tha...

>Claim: there are no numbers larger than 100,000,000,000,000,000.


> I don't mean to be rude

Well that's how it comes off. Obviously I didn't think I'd made some breakthrough in five minutes with some JavaScript, and obviously I know that the formal proof is the important thing. I was just poking at the phenomenon and expressing my curiosity. There's no need to be so eager to jump in and shut that down.

> the process terminated for every number up to a range of 10,000,000, which is as high as I tested. It's just the formal proof that's missing.

As well as the rest of the numbers between ten million and infinity.

And I suppose, an understanding of why.

A recursive memoized solution would be more elegant. Or even a mutually recursive solution.
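For illustration, a recursive memoized variant might look like this (my own sketch; `steps` counts iterations to reach 1 rather than building a string):

```javascript
// Recursive, memoized step count: shared tails of trajectories are
// computed once and cached.
const cache = new Map([[1, 0]]);

function steps(n) {
  if (cache.has(n)) return cache.get(n);
  const next = n % 2 === 0 ? n / 2 : 3 * n + 1;
  const result = 1 + steps(next); // work happens after the call, so not tail-recursive
  cache.set(n, result);
  return result;
}

console.log(steps(6));  // 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: 8 steps
console.log(steps(27)); // 111 steps, the famously long small trajectory
```

Note that the recursion depth equals the (uncached) trajectory length, which is what makes stack depth a real concern here.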

It would also be significantly slower and potentially limited by stack depth. Even with the memoization some of these sequences get pretty long.

I mean, a tail recursive solution wouldn't be.

Just to let you know, the memoization technique you used is called dynamic programming.

"Dynamic programming" is such a dumb name for what it is and I hope that we can all eventually stop calling it that.

Emphatically agreed. For those not aware, the person who chose the name (for what were originally called "multistage decision processes", which is far more descriptive) says he chose "dynamic" in significant part because it sounded impressive to a hard-nosed boss:

"... It also has a very interesting property as an adjective, and that is it's impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities." https://en.wikipedia.org/wiki/Dynamic_programming#History

Also, by the way, to the extent that "dynamic programming" has a definition (the words have little to do with what it means, so it's hard to fix the term to a specific definition), it generally refers to a superset of memoization. So memoization is the more specific term, and should be preferred.

I think we may be able to help kill usage of the term by observing that its initialism, DP, which some people are fond of using, has a much more graphic alternate meaning. Then modern puritanism may make people reluctant to say it anymore.

The history behind the term is pretty interesting to me: https://en.wikipedia.org/wiki/Dynamic_programming#History The original research on the subject was published in engineering / optimization as opposed to computer science, and the term "programming" was used differently there ("linear programming" and "quadratic programming" for example). Now we think of it not as an optimization problem but as an algorithm design technique, but the name still stuck around.

I'm aware, I was just commenting on it as an interesting trait of the problem. There might be an n+1 aspect, where if you can prove that each sequence eventually reaches some previous number that reduces to 1, then that higher one will too.

Yup, for the past two days, for the first time I thought about the problem!

Yup, quickly it gets to the role of prime numbers or a product of primes and some number of factors of 2.

Heck, obviously you can draw a directed graph with an arc from each odd number, that is, each product of primes other than 2, to the result of the 3X + 1 operation. Then have an arc from each even number to that number with one less factor of 2.

Then if we could argue that, given a number n, the nodes in the graph reachable from n are finite, and that we don't enter a cycle without a 1, we would be done!
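A sketch of that graph's successor function and a bounded reachability check (my own code, just formalizing the construction above):

```javascript
// Arc from each odd m to 3m + 1; arc from each even m to m / 2.
function successor(m) {
  return m % 2 === 1 ? 3 * m + 1 : m / 2;
}

// Follow arcs from n until we revisit a node or hit a safety limit;
// the conjecture amounts to saying this set is always finite and contains 1.
function reachable(n, limit = 100000) {
  const seen = new Set();
  while (!seen.has(n) && seen.size < limit) {
    seen.add(n);
    n = successor(n);
  }
  return seen;
}

console.log(reachable(7).has(1)); // true: 7's trajectory falls into 4 -> 2 -> 1
```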

Alas, as one might expect from all the effort on this problem, a gap here is the argument of finiteness, and that quickly leads to how many primes there are or are not! And as we know, where all the primes are is a tough problem.

Long ago in math, I decided not to spend effort on big questions about prime numbers!

So, I gave up, decided to use easier problems to think about when going to sleep!

I wonder what https://en.m.wikipedia.org/wiki/Spectral_graph_theory can tell us about this graph

The Collatz conjecture gets a pass from me because I think there's an even more funded problem that is even more useless: calculating pi to more and more digits. I doubt any practical application needs more than 10 digits.

It's proven that pi is transcendental, while it isn't proven that the Collatz process always terminates.

But what if I want to show off to my friends how smart I am by rattling off a gazillion digits of pi off of the top of my head? I need more than 10 digits to do that...


Would it be at all useful to be able to state the rule for calculating the coefficients (in base-3) from one odd number to the next in the Collatz conjecture's evolution? That is, given an odd number's base-3 expansion, you could calculate the next odd number in the sequence from its coefficients.

I've done some work on this and have an idea of this rule, but haven't been able to find anyone knowledgeable enough to comment on it.

To clarify my question above: there seems to be a general rule for calculating the next step in any Collatz-like iterated sequence from one odd number to the next, given the parameters (c, D), where both c and D are positive odd numbers and c > D, using the coefficients of each step's odd number expanded in base c. That is to say, 3N + 1 becomes cN + D.

The expansion's number of coefficients increases by 1 at each step until the leading coefficient becomes 0, and then it continues. Note, though, that the leading coefficient acts as a "pilot value" of sorts, in that it is greater than c for most of the steps, while each of the remaining coefficients, call them s, is typical in that 0 ≤ s < c.
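One way to experiment with this numerically (my own sketch; this is just the raw iteration plus a digit dump, not the rule the parent alludes to): iterate the generalized map and read off each odd iterate's base-c digits.

```javascript
// Generalized map: apply cN + D to an odd N, then halve until odd again.
function nextOdd(n, c, D) {
  let m = c * n + D;
  while (m % 2 === 0) m /= 2;
  return m;
}

// Base-c digit expansion, least significant digit first.
function digitsBase(n, c) {
  const d = [];
  while (n > 0) {
    d.push(n % c);
    n = Math.floor(n / c);
  }
  return d;
}

console.log(nextOdd(7, 3, 1));  // 3*7 + 1 = 22, halved once -> 11
console.log(digitsBase(11, 3)); // [2, 0, 1]: 11 = 2 + 0*3 + 1*9
```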

I recall Terence Tao made some recent progress on Collatz; I figured it might be of interest to the HN crowd:


Well, since 1 is added to a number, the number will eventually reach a power of 2, even if the number is divided by 2 in between.

Once it reaches a power of 2, it can only become odd when it reaches 1, so there you have it.

This reasoning is faulty, since the same conjecture with 3n+1 replaced by 5n+1 turns out to be false. Interestingly, the problem for a general pair of constants is undecidable.
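For a concrete illustration (my own check): under the 5n + 1 rule, the trajectory of 13 enters a cycle and never reaches 1.

```javascript
// Follow the 5n+1 map from a start value until a value repeats.
function trajectory5(n, maxSteps = 1000) {
  const seen = [];
  while (!seen.includes(n) && seen.length < maxSteps) {
    seen.push(n);
    n = n % 2 === 0 ? n / 2 : 5 * n + 1;
  }
  return { seen, repeatsAt: n };
}

const t = trajectory5(13);
console.log(t.seen);      // 13, 66, 33, 166, 83, 416, 208, 104, 52, 26
console.log(t.repeatsAt); // 13: a nontrivial cycle, so 1 is never reached
```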

> since 1 is added to a number, the number will eventually reach a power of 2

Why do you think that's the case?
