The proof in the article states that there is a 100% chance of a random elliptic curve being either rank 0 (50% chance) or rank 1 (50% chance).
However, the article also informs us that there are an infinite number of elliptic curves with rank 2 or more. For example, the elliptic curve `y^2 + y = x^3 + x^2 - 2x` is of rank 2.
So is it truly accurate to say that elliptic curves come only in two types?
I'm an undergrad taking some measure theory right now.
This is probably not exactly the mathematical formalism they are using, but it could be similar.
If I asked you to pick a random real number from 0 to 1, what do you think the probability is that the real number is rational? A natural way to answer this question is to try to generalize the way we say the interval from 0 to 1 has length 1 to more kinds of sets. Measure theory does exactly this, but we find out that the measure of the rational numbers is 0! This means that the probability of picking a rational number is 0. But that's clearly impossible, you say, because there are an infinite number of rational numbers from 0 to 1! But in a precise mathematical way, the probability is 0.
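A quick numerical sketch of this (in Python; the finite set of rationals with denominator up to 50 stands in for "the rationals" here, since the full set is countable and the idea is the same):

```python
import random
from fractions import Fraction

# The rationals with denominator <= 50 form a finite set of isolated points
# in [0, 1] -- a measure-zero target -- so a uniform sample essentially
# never lands exactly on one of them.
targets = {float(Fraction(p, q)) for q in range(1, 51) for p in range(q + 1)}

hits = sum(1 for _ in range(100_000) if random.random() in targets)
print(hits)  # 0 (barring an astronomically unlikely exact float collision)
```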
Now a funny thing I said was "pick a random real number". Since the computable numbers are also a measure 0 subset of the real numbers, it's literally impossible to randomly pick a real number with a computer...
Yes, but that doesn't mean that there are only irrational numbers. There are rational numbers, they just have zero measure. Similarly, these curves come in more than two types - those other types just have zero measure in the probability space of the curves.
Hey, non-mathematician computer science type here.
If I follow correctly, the issue with randomly picking any real number in that interval is that irrational numbers would require infinite computational steps to resolve. So the probability is really 0 that you'll get an irrational. If you have a finite number of computations, you're guaranteed to resolve to a rational, while if you have an infinite number of computations, you never resolve to anything.
Uniformly at random picking a number from the interval [0, 1] isn't possible with a turing machine (even giving it access to random coins). I.e. it's not a computable function.
It doesn't even really make sense, you can't represent uncountably many numbers on a turing machine, so it isn't even possible to return all but a tiny subset of the space.
You're imagining some turing machine that attempts to compute it anyways and thinking about the output. You seem to think that you can make a turing machine that
- In the probability 0 case that we should output a rational, will output that number
- Will otherwise infinite loop
This is randomized, so we are getting our randomness from some kind of "coin flip" like process. To know that we are in that probability 0 case of outputting a rational, we would need to have seen infinitely many coin flips. If we've seen only n coin flips, there is still a 1/2^n > 0 slice of the probability space that we haven't explored. So in fact any such turing machine has to loop infinitely in the rational case as well.
I'd say it's perfectly possible for a Turing machine + a source of randomness to generate a random real number in [0, 1). We just lazily generate more and more digits to the required precision. It's no different from, say, pi being computable.
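As a sketch of what "lazily generate more and more digits" could look like (the function name is my own):

```python
import random

def random_digits():
    """Lazily yield decimal digits of a 'random real' in [0, 1):
    each digit is drawn independently and uniformly."""
    while True:
        yield random.randint(0, 9)

# Only ever materialize the finite prefix we actually need:
gen = random_digits()
prefix = "0." + "".join(str(next(gen)) for _ in range(20))
```

The point is that no finite consumer of this stream can distinguish it from "having" the number to the precision it asked for.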
We can even prove that there's zero probability that a rational number is generated: the decimal representation of a rational always has a repeating group of digits, but since each digit is generated randomly with 100% probability there will be no periodic pattern.
Computers are not type 2 turing machines, nor is any other physically existing thing that we know of. They aren't really turing machines either because they have a finite tape, but since we are only interested in running the turing machine for a finite amount of time and thus accessing a finite amount of tape, that distinction is unimportant.
The standard definition of computable is on a turing machine, not a type 2 turing machine. Of course we can define an alternate model where more things are computable. Edit: And the standard definition of computable is relevant because it happens to be the exact set of functions we can compute on real computers.
While Weihrauch [0] does introduce a different definition of the word computable, that would in a randomized setting allow for sampling from the interval [0, 1] (and not just for rationals as I understand it either). Any algorithm on his "oracle turing machines" will still have to take an infinite amount of time, even to return the rationals. He just allows that in his definition of computable.
Type 2 Turing Machines are a conservative model of computation. They are as realistic as Type 1 Machines. Both models involve an infinite tape. Your previous comment used the Turing Machine abstraction, so my response was entirely valid, suggesting that replacing your abstraction with another equally valid, and I believe more appropriate for this purpose, abstraction eliminates the problem.
Also, by the way, the distinction between "complete" and "potential" infinity is useful here. The Type 2 Turing Machines feature only a "potential" infinity, the same type that's common throughout Theoretical Computer Science. A real number is only a "potential" infinity -- a process of sorts. You seem to be demanding that a real number be represented as a "complete" infinity, but this isn't needed for anything in physics or engineering or anything else. The demand you're making, which would imply that an infinite amount of time is needed, is unreasonable.
And by the way, the downvoter is somebody who can't argue with facts.
I suppose the argument you are trying to make here is basically "having a machine that if we run it for long enough, will tell us any particular digit of a number, is as good as having that number".
In the concrete example, you would argue that if a machine after n time steps specifies which 1/2^n sized interval the randomly generated number lies in, then having that machine is equivalent to having the randomly generated number.
I disagree. If I come up with any property of the number that requires seeing arbitrarily many digits to specify (e.g. "is rational", "contains more 1s than 0s", etc) you can never tell me whether or not the number your machine specifies has that property. That said, I can see where you are coming from. There is at least some argument that under this model it's not the number which can't be computed, but the "is rational" function.
Personally, I wouldn't worry about the downvotes. Internet points aren't important in life anyways.
A program isn't a black box. The number specified (in binary) by this program
emit "0."
loop forever:
emit "110"
is both rational and contains more 1s than 0s. You no more need "run the program forever" to determine these of the number it presents than you need to perform an "infinite amount of long division" to determine it of the number presented by 6/7.
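The 6/7 claim can be checked without any infinite run: a pure repeating binary block b of length k after the point has value b/(2^k - 1), by summing the geometric series. A one-line Python check:

```python
from fractions import Fraction

# The repeating binary expansion 0.110110110... equals 0b110 / (2**3 - 1):
# the block "110" repeats with period 3, so sum the geometric series.
x = Fraction(0b110, 2**3 - 1)
print(x)  # 6/7
```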
So? That's no different than a number specified in a "conventional" way, i.e. by some mathematical formula in propositional logic. Let x be 0 if P is true and pi otherwise.
The conventional way of knowing a number is specifying it in a way that we can quickly determine what it is and operate on it.
If I say "the next prime after 9^9^9^9^9^9^9^9^9", or indeed "the next prime after busy beaver(1000)" I have specified a precise number. But you don't think I have it in any useful sense, because I can't compute it quickly (or in my second example at all).
Edit: And it should be noted that the above is more akin to the busy beaver example, no matter how long you operate that turing machine, if M' happens to be of the sort that doesn't halt but doesn't provably not halt, then you will never be able to tell me whether the number I "have" is 0 or pi.
But you haven't cleared anything up at all! What do you mean "determine what it is"? Do you mean compute its digits? Can you have an irrational number? a rational number with a non-finite decimal expansion? And what do you mean "operate on it"? By which operation? And what do you mean "quickly"?
In any case, the relevant program (assuming a fast random oracle)
emit "0."
loop forever:
x := query random oracle for one bit
emit x
seems to fit all your criteria. You can compute as many digits as you like very quickly. If you can have pi I don't see why you can't have this number (if you can have any random number at all).
We have branched into two different discussions it seems.
Does having a turing machine that will eventually output a number x suffice for having the number x.
And does that specific turing machine suffice for having a random number in range [0, 1].
As for the second, I have misgivings about it, but there's certainly an argument that that machine does work. The argument against that I currently find most convincing (that you don't have a classical representation that you can make two isolated copies of) is outlined here: https://news.ycombinator.com/item?id=18424725
The first discussion is more nuanced. The meaning of knowing/having a number is not something I claim to have an exact answer to (and indeed since it's an english term it probably doesn't have an exact meaning). It's clear however that having a turing machine that eventually outputs the value is not the same as knowing the value.
What operations you get is really the most fundamental question. I'd argue that it's clear that if you know x you can at least tell me what the n^th digit of x is. An arbitrary turing machine fails this: it can't even tell me what the first digit is, let alone the n^th one.
I'm not going to take the stance that being able to compute its digits quickly is sufficient for having a real number, but I also won't argue against it. Personally I'd like to ask for equality testing too. I'm willing to yield that since I'm pretty sure that man-and-laptop will disagree and argue that being able to test for closeness is good enough, and he has a point even if I'm not convinced.
As for the other questions you put to me, they aren't relevant for the turing machine part but:
Quickly, means in at most polynomial time. It might actually mean something stricter, but polynomial time seems to be a pretty clear upper bound on the amount of computation time you can need to find something while still being able to claim to know it. (See the paper I linked above)
We can certainly have a rational number with a non finite decimal expansion, you just need more creative forms of representing the numbers than listing digits. For instance the common method of putting a bar on top of the repeated ones, or alternatively quote notation is rather cool: https://en.wikipedia.org/wiki/Quote_notation
> I'd argue that it's clear that if you know x you can at least tell me what the n^th digit of x is. An arbitrary turing machine fails this, an arbitrary turing machine fails being able to tell me what the first digit is, let alone the n^th one.
Sounds good (assuming we accept the 999... problem of non-uniqueness). So let's assume the machine makes progress in finite time, i.e. there is a sequence of natural numbers a(n) such that after time a(n) the machine has emitted at least n digits.
There's another possibility you mentioned above about a machine taking an input n and finding a value within 2^(-n) of the number. A machine that keeps emitting numbers on either side of an integer, e.g. 2.1, 1.99, 2.00001, etc. fails to tell the first digit. But these numbers are arguably even more physical than programs-emitting-digits. They're roughly the real numbers you'd get from doing actual (classical) physical measurements.
> Personally I'd like to ask for equality testing too.
You didn't answer the question about irrational numbers but do you think we can have pi? It seems infeasible to mechanically determine that an arbitrary program you are given is a valid pi-digit-calculator though.
If you don't think you can ever have irrational numbers, then I think I see where you're coming from now. Having a number could be having a finite representation of the number that can be mechanically tested for equality (in time polynomial in the sizes of representations): a string of digits, a ratio of strings of digits, a string of digits with decimal point and a bar over the repeated ones, etc. IOW a normal form in some finitary data type.
> Quickly, means in at most polynomial time.
In what? You want the nth digit printed by time O(P(n))? If so, that is strictly stronger than finite time progress so we could dispense with that. But polynomial time doesn't make any sense for a machine that emits a finite string of digits and halts because it doesn't have an input.
> or alternatively quote notation is rather cool
Hah, I was going to ask about p-adics (do I have -1 if I have the machine that emits an infinite string of 1s?).
I'm really not satisfied with saying that a machine that gives smaller intervals like that is fully sufficient, on the other hand that's really all we're doing when we specify digits...
> They're roughly the real numbers you'd get from doing actual (classical) physical measurements.
I'd argue that a series of physical measurements don't give you a number so much as a probability distribution, even classically.
> do you think we can have pi
I'm not sure.
The problem I have with denying pi is it doesn't make much more sense than denying 1. Base pi is a perfectly rational numbering system. It's perfectly possible to introduce a special 'pi' symbol (rather like 'i') and define rules of arithmetic so that things work with both rationals and rational multiples of pi. And everything I said just applies to infinitely many other irrationals as well (e.g. 2^(1/n))
> It seems infeasible to mechanically determine that an arbitrary program you are given is a valid pi-digit-calculator though.
Indeed, this is true even if you replace "pi" with "1" though, that's another reason why having a program that calculates the digits isn't sufficient.
> Having a number could be having a finite representation of the number that can be mechanically tested for equality (in time polynomial in the sizes of representations)
Yes.
> In what? You want the nth digit printed by time O(P(n))?
It's a vague definition anyways, polynomial in the sum of everything that is relevant... probably O(P(n + the number represented)).
It seems to me that we should have a different measure than lebesgue measure when we speak of picking numbers from an interval.
I wonder if there is a measure under which computable numbers have measure 1 while the others have measure 0. Would it have all the correct properties?
All this becomes pretty intuitive if you think about numbers in terms of their decimal expansions. A "random number" is one with a random digit in each of its (infinitely long) list of decimal digits. A rational number is one where, at some finite point, all of the digits start to repeat in some finite pattern. The odds of that happening by chance are zero.
Likewise, if you take any two randomly generated lists of decimal digits, at some point there will be a digit in the same place that is different between the two. At that point, you can construct a rational between the two by truncating just after that digit, keeping the larger of the two digits (this only fails in measure-zero coincidences like a tail of all 9s in the smaller number).
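One way to make this concrete (a sketch; truncate at the first differing digit, with the caveat that a tail of all 9s in the smaller number is a measure-zero exception):

```python
from fractions import Fraction

def rational_between(x_digits, y_digits):
    """Given the digit sequences of two reals 0.x1x2... and 0.y1y2...,
    return a terminating decimal (hence a rational) between them by
    truncating just after the first digit where they differ and taking
    the larger digit there."""
    for i, (a, b) in enumerate(zip(x_digits, y_digits)):
        if a != b:
            digits = list(x_digits[:i]) + [max(a, b)]
            return sum(Fraction(d, 10 ** (k + 1)) for k, d in enumerate(digits))
    raise ValueError("digits agree on the whole shared prefix")

r = rational_between([1, 2, 3, 9, 4], [1, 2, 5, 1, 7])
print(r)  # 1/8, i.e. 0.125, which lies between 0.12394... and 0.12517...
```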
Can you clarify that last point? I don't think it's strictly true. Aside from countable and uncountable infinities, you can have larger and smaller infinities as well. Unless every set S of all rational numbers between any irrational x and irrational y is isomorphic to the set P of all rational numbers, I don't see that this is correct. And I don't immediately see that you can put them into 1-1 correspondence.
> If I follow correctly, the issue with randomly picking any real number in that interval is that irrational numbers would require infinite computational steps to resolve.
In mathematics, doing things an infinite number of times is generally no problem, in fact it’s almost always done.
Of course, there are also many different sizes of infinity, and doing things a larger infinite size number of times requires some extra steps... but most branches of math only do things a countable infinite number of times (the smallest infinity).
"Rational" numbers are those that can be expressed as a fraction of two integers. For example, a square root of two is not rational.
"Computable" numbers are those that can be calculated by a computer (with unlimited memory, but finite speed) with arbitrary (not infinite) precision in finite time. As a rule of thumb, anything you can express using words like "plus", "minus", "square root", "logarithm" etc. is going to be computable.
> irrational numbers would require infinite computational steps to resolve
No, this is not the real reason. Both "1/3" and "square root of 2" have an infinite number of decimal places, so neither can be fully written out by a computer program. However, each of them can be approximated to e.g. one billion decimal places.
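A minimal sketch of what "computable" means in this sense, using Python's integer square root (the function name is my own):

```python
import math
from fractions import Fraction

def sqrt2_to(n):
    """Approximate sqrt(2) to within 10**-n, in finite time for any n.
    This 'arbitrary precision in finite time' is exactly what makes
    sqrt(2) a computable number."""
    scale = 10 ** n
    # math.isqrt gives floor(sqrt(2 * scale**2)), so the error is < 1/scale.
    return Fraction(math.isqrt(2 * scale * scale), scale)

approx = sqrt2_to(30)
# approx**2 <= 2 < (approx + 10**-30)**2, so approx is within 10**-30 of sqrt(2)
```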
To show you how most numbers are not rational, consider only numbers of form "A + B * square root of two", where A and B are rational. Each different pair of A and B gives you a unique number. Among them, the numbers with B = zero are rational, and numbers with B <> zero are irrational. (Furthermore, all rational numbers can be expressed like this, but many irrational numbers, such as "square root of three" are outside of this set.) This should make it intuitively obvious why a randomly picked number is infinitely unlikely to be rational.
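The uniqueness claim ("each different pair of A and B gives you a unique number") reduces to the irrationality of the square root of two; one line of algebra:

```latex
a_1 + b_1\sqrt{2} = a_2 + b_2\sqrt{2},\ b_1 \neq b_2
\quad\Longrightarrow\quad
\sqrt{2} = \frac{a_1 - a_2}{b_2 - b_1} \in \mathbb{Q},
```

which contradicts the irrationality of sqrt(2); so b_1 = b_2, and then a_1 = a_2 as well.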
But the rabbit hole goes much deeper.
Let's ignore all technical details, and take a set of all numbers you can describe (unambiguously, and without any paradox) by a sentence of a finite length (including any finite number of equations of a finite length). If you need a whole book to define a number, so be it. Take all numbers humans can describe.
The point is that all these numbers are just an infinite minority in the vast ocean of real numbers.
How can that be? What else is missing? This seems tricky, because -- by definition -- I should be unable to give you an example of a number that cannot be described. But even if I can't give you a specific number, I can point you towards a concept: the numbers whose decimal digits are all randomly generated.
Any number that can be described by a finite description contains a finite amount of information. A number with infinitely many randomly generated digits contains an infinite amount of information. To put it differently, when you generate numbers with an infinite number of random decimal digits, you would have to be infinitely lucky to generate a number which happens to contain only a finite amount of information (for example, when all randomly generated digits happen to be zeroes). But the former are the real numbers, and the latter are the computable numbers. So if you take a random real number, you have to be infinitely lucky to get a computable one.
There is actually more, because "infinitely more" does not adequately describe the difference between comparing "rational" with "irrational, but still computable" numbers (both infinities have the same cardinality), and "computable" with "real" numbers (infinities of a different cardinality)... but this part, I am afraid, is beyond lay interpretations and requires paying attention to some technical definitions. A simple version is that the rational numbers and the computable numbers can both be numbered by integers, if we choose a sufficiently smart numbering scheme; but for real numbers, even this is impossible. But it takes some technical details to explain why it is so, and why it matters so much.
Not only that: mathematically we should expect to always get an irrational number, but in practice we always get rationals from a finite subset of Q. I recently asked a closely related question on Math.SE.
If the selection process involves digital steps then for sure you can ONLY pick computable numbers. But if the selection process involves some analog movement then it’s an interesting open question — my guess is that because of Planck Length, you can only pick rational numbers. So in what sense do irrational numbers or transcendental numbers or imaginary numbers exist? They are useful abstractions only.
Who was it that said “God created the integers, man invented everything else”?
> Now a funny think I said was "pick a random real number". Since the computable numbers are also a measure 0 subset of the real numbers, it's literally impossible to randomly pick a real number with a computer...
I also wanted to say that I certainly believe that you've done real math and got interesting results from the sort of model we were discussing above. It does seem like, as you say, a rich and natural theory.
That said, I wanted to make another counter argument here, particularly based on the meaning of the phrase "pick a random real number". To pick a number at random is a necessarily randomized process, however it should give a classical output. After picking the number I should be able to make two copies of it, put them on ships flying away from each other at the speed of light, and both ships should be able to perform the same computation (without entangling particle pairs beforehand).
Moreover any reasonable representation of a real number should admit operations like "what is its n^th digit".
Under these two constraints, you would have to flip an infinite number of coins, and it would take infinite time and space to pick a real number. Or in other words, you can't really.
The way your spaceship analogy works in TTE, is that the random real number would be some kind of radio source that gets listened to by the two ships. Also, the way TTE works, there are diminishing returns from listening to the radio source for longer and longer time - this is because all functions computable in TTE are continuous, and therefore have diminishing returns from knowing their input to greater and greater accuracy (this is exactly the difference between continuous functions and discontinuous functions). Therefore you only have to listen to the radio source for some time before saying "Yep, I'm happy, I know the output of my function to within epsilon". Even the value of Pi in physics only needs to be known to a small number of decimal places - which suggests that all functions that are "physically computable" are continuous, further suggesting how natural TTE is.
Again, my comments are getting downvoted by ignoramuses. This is not the first time.
It's not accurate. Looking at the arxiv link (https://arxiv.org/abs/1702.02325), the idea is you fix some elliptic curve E, and you look at its "twists" E^d, where the parameter d is a nonzero integer.
If you only look at twists where |d| < N, you can ask "What proportion of these are rank zero, rank one, rank two, ...?" The Theorem in the paper is that as you let N go off to infinity, these proportions tend to 1/2, 1/2, 0, 0, 0, ... respectively.
Here's the thing: this does not correspond to a measure on the set of rational elliptic curves, and indeed, there is no reasonable way to define a uniform probability measure on a countable set. Consequently, statements like "half of all elliptic curves..." are kind of misleading and meaningless.
To back up my last claim, let me prove that 100% of positive integers are even in the same spirit:
Fix any odd positive integer x, and consider its "twists" x{d}, which for d >= 0 I define to be x*2^d. Every positive integer is of the form x{d} for a unique choice of x and d. Now, for fixed N, if I consider all "twists" for which d < N, the proportion which are even is (N-1)/N. Thus, as N tends to infinity, the proportion tends to 1.
I never say anything about d being even. Rather, as soon as d > 0, x{d} is even, so "almost all" x{d} are even. As you vary x and d, you cover all integers exactly once, so from this perspective, "almost all" integers are even.
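The counting in this toy "proof" is easy to replicate (a sketch; `even_proportion` is my own name):

```python
def even_proportion(N, odd_bound=999):
    """List the 'twists' x * 2**d for odd x <= odd_bound and 0 <= d < N,
    and return the fraction that are even. Only d = 0 gives an odd twist,
    so the fraction is exactly (N - 1) / N."""
    twists = [x * 2 ** d for x in range(1, odd_bound + 1, 2) for d in range(N)]
    return sum(1 for t in twists if t % 2 == 0) / len(twists)

print(even_proportion(10))    # 0.9
print(even_proportion(1000))  # 0.999 -- tends to 1 as N grows
```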
My point is that this is very similar to how elliptic curves are being counted here.
I think that the correct mathematical terminology is that almost all the elliptic curves [of this family] come in only two types. (I'm not sure that this is official.)
For example, we know that almost all numbers are normal, but we have no proof that any particular number is normal (though we have some good candidates). Moreover, we know plenty of numbers that are not normal. https://en.wikipedia.org/wiki/Normal_number
Each Ck is normal in base k, but that is a much weaker claim than the claim without limiting it to a base. That, we don’t know (”It is an open problem whether Ck is normal in bases b ≠ k”)
Each Chaitin constant (https://en.wikipedia.org/wiki/Chaitin%27s_constant) is normal in that stronger sense, but we don’t know, and can’t know, much more about them than that they are normal numbers between zero and one.
Probability theory is based on measure theory. You can think of a measure as an abstraction of the size of a set. As others point out in different ways, probability 1 and probability 0 don't mean exactly what you think they mean.
Simply put: it’s possible to have a non-empty set with zero "size".
The article also explains why it is OK to say that elliptic curves come only in two types. Read the two paragraphs starting from "You may be wondering how it’s possible...". It seems like there are infinitely many curves of rank 0 and 1, and also infinitely many curves of rank greater than 1. But infinities are strange. The infinity of curves with rank 0 and 1 is much, much greater than that of curves with rank > 1, so essentially the latter can be ignored. That's what I understood.
I agree it's confusing. If you consider a random variable X whose value is the rank of a randomly chosen elliptic curve, then I believe this article is saying the distribution of X converges [1] to 50%/50% on ranks 0 and 1.
Similarly related is the concept of "Almost Surely" [2]. Wikipedia includes a great example with throwing a dart.
Yes, it’s inaccurate. The rigorous way to say it would be “Almost all elliptic curves are one of two types”, but to a layman not familiar with the technical definition of “almost all” it may not seem particularly impressive.
It's essentially aleph 0 / aleph 1. Aleph 1 is so much larger than aleph 0 that this equation "equals" zero.
The real problem here is that this equation doesn't really make any sense: You can't divide infinity and expect answers that make sense in ordinary language.
So you can have an infinite number of curves of rank > 1, and yet there is 0% chance you'll come across one by accident (since you'll need to sample an infinite number of rank <= 1 curves in order to do that - but of course, you can't, there's an infinite number of them).
They're not really dividing aleph_0 by aleph_1. The important thing is that it has measure 0, but you can easily have a set with cardinality aleph_1 and measure 0.
There are also no ironic quotes around "equals". The measure really is 0.
Infinity has odd properties and there are multiple infinities. See: The Aleph and Beth infinity series.
For example, the natural numbers are infinite; they have cardinality aleph-null. The set of countable ordinal numbers is also infinite; it has cardinality aleph-one. Even though both are infinite, aleph-one has more elements than aleph-null.
This is the realm of set theory and started with Georg Cantor.
Even though the quantities of ranks 0/1/2+ are infinities (countable or not), there are vastly more 0/1 than any other, so that mathematically, it's accurate to say that the odds of finding 2+ are zero. But it's not really zero.
These statements are consistent modulo zero measure (probability) sets. For instance: there are infinitely many rational numbers, but the probability of picking a rational number at random from the interval [0,1] is zero.