Are real numbers really real? (arxiv.org)
147 points by d3ckard 9 months ago | 234 comments

There exists a notion of "computable analysis", based on the delightful fact that Turing machines (or other Turing-complete models of computation) can be viewed as computing real numbers. These so-called computable reals are a little limited, but still usable for most tasks [0].

[0] https://en.wikipedia.org/wiki/Computable_analysis
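As an illustrative sketch (my own toy encoding, not a standard API), one common way to represent a computable real is as a program that, for any requested precision, returns a rational approximation; `sqrt_real` below builds one for square roots using only exact integer arithmetic:

```python
from fractions import Fraction
from math import isqrt

def sqrt_real(k):
    """A computable real for sqrt(k): approx(n) returns a rational
    within 2**-n of sqrt(k), using only exact integer arithmetic."""
    def approx(n):
        scale = 2 ** n
        # isqrt(k * scale**2) == floor(sqrt(k) * scale)
        return Fraction(isqrt(k * scale * scale), scale)
    return approx

root2 = sqrt_real(2)
print(float(root2(60)))  # a 2**-60-accurate approximation of sqrt(2)
```

Arithmetic on such representations is then done by writing new approximation programs in terms of old ones, which is where the limitations start to bite.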

It's a real PITA to work with computable numbers because even their equivalence is not computable.

Also, the field is really young. It was only in 2000 that Weihrauch proved that every computable real function is continuous.

Their equivalence can be given by a Turing machine, though, so the non-computability comes directly from the Halting Problem.

This is worth mentioning because some folks get the incorrect impression that it's not possible to write a program that, say, outputs whether two computable reals are equal to each other. It is possible! But the program might not halt.

I shouldn't trivialize the difficulty. It is a PITA. I remember my first foray into this; I wrote a program which computed sqrt(8), which worked fine, and then another program which computed sqrt(8)^2, which didn't halt. The edges are tricky.
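To make the "might not halt" concrete, here's a sketch in the same toy encoding (approximation programs accurate to 2**-n; the names are my own, not a standard API). Comparing two computable reals can certify that they differ once their intervals separate, but on equal inputs the honest loop would run forever, so a fuel limit stands in for non-termination:

```python
from fractions import Fraction
from math import isqrt

def sqrt_real(k):
    """approx(n) is a rational within 2**-n of sqrt(k)."""
    def approx(n):
        s = 2 ** n
        return Fraction(isqrt(k * s * s), s)
    return approx

def apart(x, y, fuel=100):
    """Semi-decide x != y. Returns True once the approximations separate
    by more than the combined 2**-n error; returns None when fuel runs
    out (the honest version would simply never halt on equal inputs)."""
    for n in range(fuel):
        if abs(x(n) - y(n)) > 2 * Fraction(1, 2 ** n):
            return True
    return None

print(apart(sqrt_real(2), sqrt_real(3)))  # True, found after a few refinements
print(apart(sqrt_real(8), sqrt_real(8)))  # None: equality is never confirmed
```

The sqrt(8)^2 trap fits this picture: the comparison against 8 keeps refining forever because the two sides never separate.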

As opposed to classical reals which have a computable equivalence? (Hint: they don't)

But when working with reals we normally don't bother with computability. With computable numbers, on the other hand, computability is all you really have.

Originally I wanted to make a point here about Interactive Computation, the strong Church-Turing hypothesis and persistent Turing Machines, however, while checking on the latest research in that area before doing so, I found the following paper:

https://arxiv.org/abs/1702.06000 TE Raptis - 'Viral' Turing Machines, Computation from Noise and Combinatorial Hierarchies

Which gets cited by the following paper by the same author, which cites the OP paper(!), AND elaborates on it:

https://arxiv.org/abs/1805.06301 TE Raptis - Finite Information Numbers through the Inductive Combinatorial Hierarchy

Which itself gets yet another follow on by the same author:

https://arxiv.org/abs/1806.01637 - Encoding discrete quantum algebras in a hierarchy of binary words

Highly interesting.

They are usable for all tasks. What are you going to do with a number you can't compute? Nothing, because you can't do anything with it.

I'm frustrated that everyone here seems to be hung up over an argument about the definition of computable numbers and some nattering about Cantor, when the real meat of the abstract seems to be this:

> This alternative classical mechanics is non-deterministic, despite the use of deterministic equations, in a way similar to quantum theory.

Wait, what? What does "a way similar to quantum theory" mean? Is he able to derive quantum postulates from information theory here, or is this just handwaving because "uncertainty" emerges like it did for Heisenberg?

Not expert enough to read the paper. Anyone?

There are no equations and no citations to relevant formalisms, so your guess is as good as anyone's.

The author had a spat with arXiv, which, while not a peer-reviewed venue, does its best to keep out crackpottery and rejected some of the material coming from his vicinity. Gisin's complaint was loud, ended up published in Nature, and now it seems anything goes.

Then came a flood of similar gems, like an over-the-top argument for physical time (a deep problem, formal in nontrivial ways) on the grounds that thinking requires time, thus obviously: https://arxiv.org/abs/1602.01497 (doubtful much creative time went into that one). I think figure 5 therein answers your question as best as the author can.

> https://arxiv.org/abs/1602.01497

Note that this is in the "history and philosophy of physics" section of the arXiv, so it's not flooding the arXiv in general.

I've heard a small handful of practical physicists complain that Gisin is getting too much status by posting on the arXiv, but I haven't heard any philosophers of science complain that his papers are particularly bad, much less that they are so obviously worthless that they are a net negative due to taking up space on the daily list of what's new in that section.

Looking at figure 2 in the paper you were mentioning helped me a lot in understanding his thought process.

I'm very far from being an expert, but as far as I understood the reasoning, it seems that the author is arguing that as time passes, the current conditions of chaotic physical systems are going to start depending on more and more information regarding (or "binary digits" of) the variables defining the initial conditions.

Since the deterministic equations of classical physics are defined over the real numbers, but physical variables cannot contain infinite information, at some point some of the information/binary digits of the initial conditions that (partially) determine the current conditions must be random digits. This conclusion is only reached after arguing that almost all real numbers contain random digits (as random as they can be, because they have no structure and can't be named/characterized/defined).

As a layman, I'm not really convinced by this reasoning (if indeed this is what the author argues), but this is the gist of what I understood.

> > This alternative classical mechanics is non-deterministic, despite the use of deterministic equations, in a way similar to quantum theory.

> What does "a way similar to quantum theory" mean?

I'm not an expert (though I have studied undergraduate physics). I imagine what he's trying to reference is that while the equations of quantum mechanics relating to states are entirely deterministic, the measurements we make are probabilistic in nature (I am careful to not say "non-deterministic" -- because it would not be correct to phrase it that way). I've skimmed through the paper and I see no reason that this peculiar nature of quantum mechanics (which we still don't understand) applies to a mathematical reformulation of classical mechanics.

I imagine the reason the author mentions quantum mechanics is that the paper is published in the quantum mechanics section of arXiv...

But personally my biggest issue with the abstract is this:

> Since a finite volume of space can't contain more than a finite amount of information, I argue that the mathematical real numbers are not physically relevant.

This conflates several concepts from different fields in a single sentence and is utterly ludicrous. The author appears to be talking about the Bekenstein bound (which states that there is a maximum amount of entropy for a given region of space with a given amount of energy -- otherwise you could violate thermodynamics by placing such a system into a black hole). But he is conflating the concept of entropy (or "information") from thermodynamics with his own concept of "number information". If this conflation was accurate then one could assume that the amount of entropy in a system is related to the decimal expansion of variables related to that system -- which is just a ridiculous thing to argue (especially if you consider that you can change the decimal expansion of a number by changing units).

>>This alternative classical mechanics is non-deterministic, despite the use of deterministic equations, in a way similar to quantum theory.

That conclusion strongly reminds me of a claim made in the following paper:

https://www.researchgate.net/profile/Vladimir_Garcia-Morales... Garcia-Morales, Vladimir - Quantum Mechanics and the Principle of Least Radix Economy

To quote:

"… The formulation provides an explanation of how determinism and random statistical behavior coexist in spacetime and a framework is developed that allows dynamical processes to be formulated in terms of chains of digits. …"

(A note about that paper: it seems to me that it incorrectly uses a non-general formula to calculate radix economy, one which only applies to unsigned number systems. That makes the rest of the paper, which unfortunately goes rather beyond my math & physics skills, seem somewhat dubious to me, albeit not necessarily wrong.)
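For reference, the textbook unsigned radix-economy measure being discussed is just base times digit count; a minimal sketch (my own helper name), showing the well-known result that base 3 tends to win for large numbers:

```python
def radix_economy(base, n):
    """E(base, n) = base * (number of base-`base` digits of n),
    the unsigned radix-economy formula, computed with exact integers."""
    digits = 0
    while n:
        n //= base
        digits += 1
    return base * digits

for b in (2, 3, 4, 10):
    print(b, radix_economy(b, 10**6))  # base 3 gives the lowest cost here
```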

Not an answer to your question, but it reminded me of something one of my quantum mechanics professors said during undergraduate physics. Paraphrasing:

"Quantum physics may seem hard, but it's much easier than classical physics. We can describe a quantum system using a finite set of integer (or half-integer) quantum numbers. In classical physics the properties are continuous, and describing the deeper decimal places would require knowing the entire history of all the objects involved. We could also never repeat an experiment exactly, due to these infinite decimals."

Of course, we're not totally screwed in classical systems; the far decimals are only a problem in chaotic systems. Still, might be related to the author's claim.

Of course that argument is silly, because if you tried to compute the quantum physics of a simple object like a pair of spheres in a vacuum, you'd be hopelessly lost.

Classical physics is only hard (in the given sense) if you care about computing infinite decimal details, which no one does.

Not quite ... classical orbits over extended time periods (T >> 1 orbital period) tend to require significant precision, thanks to the "chaotic" aspects of the diffeqs. See [1] for some additional details.

Many orbital sims use high order methods, predictor-corrector type approaches.

As for "quantum physics of a simple object like a pair of spheres in a vacuum" ... I gotta say, as a (non-practicing) physicist, I haven't the foggiest notion about what you are thinking of. Do you mean the collective wave function of the entire ensemble of atoms? Or the valence/band electrons? Or something else?

Basically, we've been exploiting symmetries for many years to reduce insanely hard problems to less insanely hard problems. For crystalline material, we exploit periodic boundary conditions to reduce the size of the Hamiltonian to something reasonable. We still have to deal with O(N^3) and higher multicenter matrix elements like <phi_i(R1) | Operator_k(R2) | phi_j(R3)>. This is computationally hard for anything but small N (say 1000-10000).

[1] http://carma.astro.umd.edu/nemo/pitp/papers/gd2_s3.4.pdf
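The precision sensitivity can be illustrated with a toy chaotic map rather than full orbital mechanics; here two logistic-map trajectories that agree to twelve decimal places part ways after a few dozen steps (the gap roughly doubles per step for this map):

```python
def divergence_step(eps=1e-12, max_steps=100):
    """Iterate x -> 4x(1-x) from two seeds eps apart; return the step
    at which the trajectories differ by more than 0.5, or None."""
    x, y = 0.3, 0.3 + eps
    for step in range(max_steps):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        if abs(x - y) > 0.5:
            return step
    return None

print(divergence_step())  # on the order of 40 steps with these values
```

This is why long integrations need ever more digits of the initial conditions: each extra doubling time costs another bit of the starting state.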

You might find an answer via the papers I've assembled here, in another comment sub-thread:


I don't believe in the physical reality of real numbers, but that doesn't detract from their extreme usefulness as an abstraction. For that matter, negative numbers don't really exist. Nor do fractions or zero. They're all just convenient because they do a great job modeling situations we care about in a consistent way. Or at least consistently enough for everyday needs.

It was a bit of a mind bender initially coming to this conclusion, but shrug.

Roger Penrose gives a great explanation in The Road to Reality. We know some subset of the natural numbers are real because we can count things. And we therefore know some subset of the rational numbers are real because we can cut things into pieces and count the pieces. But the reals? Well they are invented purely so we can take the square root of two. And the complex numbers? Invented so we can take the square root of minus one.

Mathematics is a tool we've developed to help us understand reality. But it itself is not reality.

I'm not following the logic there:

>We know some subset of the natural numbers are real because we can count things.

So he's saying:

1) X is real = there exists a one-to-one mapping between X and a consistent physical process.

2) Natural numbers map to counting.

Okay, but then:

>Well they are invented purely so we can take the square root of two. And the complex numbers? Invented so we can take the square root of minus one.

Not true: complex numbers also map to real-world dynamics, like oscillatory behavior. From a search for the physical significance of complex numbers:


I would add that real numbers have physical significance in terms of having to arbitrarily operate on some fraction of another value. It's true that the universe might prohibit arbitrary precision, but when you don't know the depths of the permitted halvings, and you're permitted arbitrarily large units, it amounts to the same thing.

The rational numbers aren't physically real, because that would imply the ability to divide objects into exact fractions, which generally isn't possible in a verifiable way that is also consistent with the operations we want to perform. Fractions are a useful abstraction. The reality is just relations between natural numbers.

The real numbers aren't invented to capture the square root of two. You can deal with that using a simpler set. The real numbers are necessary for common approaches to calculus and other forms of math that depend on a continuum of quantity.

I would describe math as a tool for modeling reality, rather than understanding it. Because we really don't know much at all. All we've got are rules that are pretty damn good at predicting things under everyday conditions.

The rational numbers are physically real because you can compute them. They just map on to the naturals.

Depends on what you mean by physically real. I'm talking about in the sense of tangibility, not computability.

> But the reals? Well they are invented purely so we can take the square root of two.

I agree with what you say, but I also think that we overly conflate "importance" with "the order we found/defined them in". It's a big leap (conceptually, arithmetically, etc.) to go from the rationals to the reals; sure that's what the ancients actually did, but we don't need to follow the same path when e.g. generalising some result, teaching mathematics in school, defining a computer mathematics tool, etc.

There's no reason to "skip past" algebraic numbers, computable numbers, etc. especially since they're most often the sets we're dealing with; e.g. in high school most problems were something like 'find the real number x such that something-involving-x = 0'.

PS: You've inspired me to have another go at reading The Road to Reality; it's been gathering dust on my shelf for years!

"But the reals? Well they are invented purely so we can take the square root of two. "

To follow on your previous line: ... because we can cut things into pieces and count the pieces. We know real numbers are real because we can cut a square across the diagonal and see the relative length of the diagonal to the side.

We can't cut the square, because perfect square is an abstraction and what you are cutting is an approximation of the square. We can't cut across the diagonal, because all cuts are only an approximation of a straight line. We can't directly see the real number being the ratio of square diagonal to its side, because our eyes have finite number of photoreceptors, receiving information from a finite number of photons. It's natural numbers, all the way down...

My understanding is that many "intuitively obvious" statements about continuous functions, like the intermediate value theorem, are only true in the real numbers, not the rational numbers. So you might say that we defined the reals so that two intersecting lines actually have to touch somewhere. That seems like a reasonable motivation to me.
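A concrete sketch of that failure over the rationals, using exact fractions: f(x) = x^2 - 2 changes sign on [1, 2], yet bisection never lands on a root, because the root the intermediate value theorem promises is not rational:

```python
from fractions import Fraction

lo, hi = Fraction(1), Fraction(2)  # f(lo) < 0 < f(hi) for f(x) = x*x - 2
for _ in range(30):
    mid = (lo + hi) / 2            # always a rational number...
    assert mid * mid != 2          # ...and never an exact root
    if mid * mid < 2:
        lo = mid
    else:
        hi = mid
print(float(lo), float(hi))        # the interval squeezes toward sqrt(2)
```

Over the rationals the two "lines" y = x^2 and y = 2 pass through each other without ever sharing a point; the reals are exactly the completion that forbids this.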

Aren't you presupposing that lines are themselves a continuum? It's a reasonable abstraction, but as far as we know, it's not physically real.

If a line were just a collection of discrete points related by some formula, there's no reason two lines that aren't parallel should have to intersect, from the perspective of sharing a point.

You're absolutely right that it might not reflect physical reality. I don't think it's known whether space is fundamentally discrete or not. But I can definitely say that continuous space meshes a lot more with my experience as a human being, and that because of that, the real numbers are motivated by more than just taking square roots. They model the human experience of space.

By that reasoning, no numbers exist at all.

What is the usefulness of real numbers, beyond the usefulness of computable numbers? Real numbers give you silliness like "You can cut and reassemble a ball into 2 balls identical to the original, if you cut it into 5 infinitely detailed pieces." Negative numbers and zero don't lead to patently impossible predictions about the world.


You can't do it with a real sphere because it's made of atoms and bends the space around it. Forget spheres, you can't even cut a string into infinitely many pieces.

That's not inherent to the real numbers; that requires the Axiom of Choice as well. The reals are perfectly well defined without it.

I think the question is, what happens when we move away from this usefulness in physics? I didn't follow the paper, but some way of backdooring quantum uncertainty into classical physics might be useful, too

The author qualifies statements with "almost all", even though "almost all" physically allowed systems (that is, solutions to the PDEs in question) are not physically relevant or beneficial. I can only see this applying to chaotic systems like the weather, which is a valid point. The key argument that finite volume contains finite information seems potentially true but suspect. Case in point: take a unit square; it will have a diagonal whose digits form a long "infinite" string, but sqrt(2) isn't "infinite information", it's very finite (i.e., one number!)

This desperately needs to be peer reviewed by people who study the foundations of physics.

> The key argument that finite volume contains finite information seems potentially true but suspect. Case in point: take a unit square; it will have a diagonal whose digits form a long "infinite" string, but sqrt(2) isn't "infinite information", it's very finite (i.e., one number!)

I share the suspicion you have, but I think your counter example is not valid because there is no unit square in (physical) reality. If you would try to measure the diagonal of the unit square you would at some (finite) point run into trouble with the uncertainty principle.

All measurements have limited precision (a limited number of significant decimal digits gained), so the amount of data that can be gained indeed is limited. But there is no general law preventing increasing the precision with better instruments and better methods, so as measurement gets more advanced, more data (such as more decimal digits) can be obtained. The uncertainty principle postulates that the uncertainties of a coordinate and its conjugate momentum cannot be simultaneously arbitrarily low. It is not clear to me how that postulate would prevent an arbitrary increase in the precision of position measurements. In theory one could have uncertainty 10^-100 m in the position of an electron, if the uncertainty in its momentum is of the order 10^66 or higher.
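The arithmetic behind that last figure is just the uncertainty relation dx * dp >= hbar / 2, rearranged (a back-of-the-envelope sketch, not a measurement proposal):

```python
HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

def min_dp(dx):
    """Smallest momentum uncertainty (kg*m/s) compatible with a
    position uncertainty dx (m), from dx * dp >= hbar / 2."""
    return HBAR / (2 * dx)

print(f"{min_dp(1e-100):.2e}")  # ~5.3e+65, i.e. of order 10^66
```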

> But there is no general law preventing increasing the precision with better instruments and better methods, so as measurement gets more advanced, more data (such as more decimal digits) can be obtained.

That's not true, as at the Planck length both quantum and general relativistic effects are equally important, preventing arbitrary measurements over smaller scales - that's the case even for a continuously-divisible universe. In most quantum gravitational theories today the Planck length truly is a minimum possible distance in the same way the Planck time is the minimum possible duration, representing a quantized space-time with hard limits to the precision of measurements.

Quantum gravity "theories" are speculative extrapolations of current knowledge for extreme situations, not accepted general laws of physics. They have problems with consistency (discretization of lengths and other quantities breaks relativity postulates) and even bigger problem with lack of experience with phenomena that they describe (minimum possible lengths/times/energy/etc).

Yes I know, that's why I made the general point first, then talked about how quantum gravity theories impose even harder limits if true.

> In theory one could have uncertainty 10^-100 m in the position of an electron, if the uncertainty in its momentum is of the order 10^66 or higher.

I'm not an expert in quantum mechanics, but what I was thinking was that to measure the diagonal of a physical unit square you would have to physically build something like a unit frame from metal atoms, for example. Would this still be an intact metal frame if the uncertainty of the electron momentum were on the order of 10^66 or higher? Naively I would think that if the electron could potentially have a momentum of 10^66 due to uncertainty, it would very quickly be anywhere but not part of the metal frame anymore.

Edit: That part aside if you go to even smaller length scales wouldn't the Planck length be a lower bound for how precise a measurement could be?

Of course it is difficult to do, but the point was that uncertainty principle by itself does not limit precision of single measurement; there are other reasons, but those are practical (we do not have the technology and understanding to build such a setup).

> Naively I would think that if the electron could potentially have a momentum of 10^66 due to uncertainty, it would very quickly be anywhere but not part of the metal frame anymore.

The electron presumably cannot go faster than light, so if the measurement is done fast, the electron won't move much in that time, even with immense momentum. Also, big uncertainty in momentum does not mean the electron will have large momentum for a long time. If there is some containment device, the electron can have immense momentum for a short time that however changes direction so quickly that the electron does not escape the small containment. For example, atoms are such containment devices; the more the atom has protons, the smaller the region the electron is contained in.

> Case in point: take a unit square; it will have a diagonal whose digits form a long "infinite" string, but sqrt(2) isn't "infinite information", it's very finite (i.e., one number!)

It's tricky to talk about "information" when using arbitrary human conveniences, like decimal arabic numerals, etc. In particular:

- The side length of a unit square also has infinite digits: 1.00000...

- If adding trailing zeros seems superfluous, how about writing the side length as 0.9999...?

- The diagonal has length 1 on a ruler which measures sqrt(2)ths

- The diagonal length can be written very compactly as 'x | x^2 = 2', and there's no reason a numeral system couldn't represent that "natively"

A lot of definitions of "information" (e.g. Shannon information) depend on identifying an element out of a set. Such definitions break down when we're dealing with constants, since e.g. sqrt(2) couldn't be anything else.

It's like trying to apply computational complexity to a fixed input, e.g. it only takes constant time to sort a particular list, or to find the shortest path for a particular travelling salesman, etc.

To avoid degenerating into zero, there needs to be some set/space of possibilities which all need handling.

The author's definitions seem to do this: e.g. if we're considering the set of all possible real numbers, then they do indeed "contain infinite information"; or, a phrasing I would prefer, they would require an infinite amount of information to distinguish them exactly.

> This desperately needs to be peer reviewed by people who study the foundations of physics.

So someone like "As a theoretician, Gisin brought deep insights into quantum mechanics." (https://en.wikipedia.org/wiki/Nicolas_Gisin) ?

>The key argument that finite volume contains finite information seems potentially true but suspect.

Look into the Bekenstein Bound and maybe the Pauli Exclusion Principle. A finite volume of space certainly does have a finite limit to the amount of information (physical information, quantum states and the like) it can contain.

If sqrt(2) is a normal number then its decimal expansion would contain infinite information.

It depends on how you define information, but I'd argue that it doesn't.

In particular, those digits are not random variables which are free to take on any value; they're fixed by the definition of sqrt(2). In terms of Shannon information, the information is a measure of uncertainty in a message. If our protocol is that I'll send you digits of sqrt(2), then there's no information being transferred at all: since you could have worked them out for yourself. Alternatively, if our protocol is that I'll send you digits with uniform probability, then sending sqrt(2) would be "infinite information", but only because we're having to narrow it down out of infinitely many possibilities. I don't think this is the most useful definition though, since we can arrange for any amount of information we like: if the protocol is I will either send "5" or sqrt(2), then it contains 1 bit; and so on.

Alternatively we could use algorithmic information theory, where the information content of a message is the length of the smallest tape for a universal Turing machine which outputs that message. sqrt(2) can be calculated by a very small program (cranking out digits forever), so it contains very little information. Yet even here, since it's constant, we could define our Turing machine such that it emits sqrt(2) when given an empty tape, and hence it again contains no information.
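That "very small program" isn't hypothetical; exact decimal digits of sqrt(2) drop out of an integer square root in a couple of lines:

```python
from math import isqrt

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2) after the point, computed
    exactly: isqrt(2 * 10**(2n)) == floor(sqrt(2) * 10**n)."""
    return str(isqrt(2 * 10 ** (2 * n)))[1:]  # drop the leading "1"

print("1." + sqrt2_digits(30))  # 1.414213562373095048801688724209
```

The program's length stays constant no matter how many digits you ask for, which is the algorithmic-information sense in which sqrt(2) is "cheap".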

Let S be an encoding of the works of Shakespeare in binary. A normal number contains S in its binary expansion. We can do this for all information. In this sense is it correct to say that a normal number contains infinite information in it?

Again, depends on your definition of information. For shannon information, you would need to define a protocol. If the protocol is "I will send you the entire works of Shakespeare" then I would be sending you no information. Likewise, if the protocol is to send you S, then S contains no information.

I'm curious about something, figured I'd ask since there are some math people on this thread (I was an undergrad math major but wouldn't include myself in that group, for the purposes of this question).

The abstract contains the statement: "moreover, a better terminology for the so-called real numbers is "random numbers", as their series of bits are truly random."

Has this been proved? I suppose another way to ask it is, if the sequence never repeats, does that mean it is random?

For instance, if the sequence was

.1121231234123451234561234567... isn't random, but does it resolve to a rational (can this sequence be represented as the ratio of two integers)? It would be a never repeating sequence that has an upper and lower bound that approaches zero as the sequence continues, right?

If my example turns out to be a rational, is it possible to construct a non-random sequence that would still never resolve to a rational?

I might try to look this up and read about it a little, it's been a while.

If the sequence doesn't repeat, and goes on forever, it cannot be a rational number. https://math.stackexchange.com/questions/61937/how-can-i-pro...

This does not "prove" the statement given in the abstract, however, as the statement isn't a mathematical statement. It cannot be proven.
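The other direction of that fact is constructive: a purely repeating decimal is always rational, and the fraction can be written down directly (helper name here is my own):

```python
from fractions import Fraction

def repeating_to_fraction(block):
    """0.(block)(block)... as an exact ratio of integers, using
    0.blockblock... == block / (10**len(block) - 1)."""
    return Fraction(int(block), 10 ** len(block) - 1)

print(repeating_to_fraction("142857"))  # 1/7
print(repeating_to_fraction("3"))       # 1/3
```

Since the sequence in question never repeats, no such block exists, which is exactly why it cannot be rational.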

Here's the question I have:

Are the bits of a real, non-rational number random?

I suppose another way to think of it would be

Are the digits of a real, non-rational number, expressed as a decimal, random integers between 0 and 9?

I'm out of my depth here, but it seems that is a statement that could be proven.

I think generally speaking, the answer is no.

A normal number [1] is pretty close to what you're describing. Although there are a lot of normal numbers (theoretically, almost all reals are normal), they are very difficult to construct, and proving normality is equally difficult. Unfortunately, proving non-normality is also difficult, although the Wikipedia article has some examples of base-10 normal numbers and a couple of examples of irrational non-normal numbers.

[1] https://en.wikipedia.org/wiki/Normal_number

Wow, yes, that's exactly it. Normal numbers.

It's been 20 years since I majored in math, so maybe I did encounter these and just don't remember. Looks like there is a proof that almost all real numbers are normal, but where it comes to specific numbers pi, e, sqrt(2), they are thought to be normal but no proof.

Well that was informative, thanks for the link!

No, of course not. You just gave an example of a number which is real, not rational, and easily described with finite information: 0.001012012301234...

So... does this mean that this statement in the abstract:

"Moreover, a better terminology for the so-called real numbers is "random numbers", as their series of bits are truly random"

is demonstrably false? That there exist real, non-rational numbers with a series of bits that are not truly random?

Well, thinking about this after reading other posts in this thread, perhaps the statement really means that "almost every" real, non-rational number has a series of bits that are truly random, so this isn't a counterexample.

There are some non "random" reals. But almost all of them are "random" in this sense. The exceptions are an infinitesimal fraction.

"Real" numbers are mathematical abstractions. Perhaps it's unfortunate that "real" and "imaginary" were used instead of more abstract sounding terms that came with less attached connotation.

The argument of the author is not that the real numbers are poorly named, it's that the mathematical abstraction supposes physically impossible properties of the model. Therefore physics ought to be based on an abstraction that has more physically plausible properties instead of the real numbers.

Not that I necessarily agree with such an argument.

There is value in comparing mathematics to reality even though mathematics is defined entirely independently of reality. We are able to use mathematics to model certain aspects of reality with great effectiveness... why? Physical reality does not seem to resemble mathematical notions at all, yet it shows some of the same results. Understanding that gap would be of monumental significance.

Just wait until cranks discover the hairy ball theorem and try to prove how wrong hairy balls are...

    >I do not make any metaphysical claims about space, time nor numbers,
    >but notice that the mathematics used in practice is always finite
    >...I argue that the so-called real numbers are not really real.
    >More precisely, I argue that the mathematical real numbers
    >are not physically real, by which I mean that they do not represent anything physical
Well, isn't the statement 'only physical objects are real' a metaphysical claim?

I read it as "I do not make any new metaphysical claims". The statement you have put indeed is a metaphysical assertion, but you have to have a set of metaphysical "axioms" before you can proceed with physics, hence it's impossible to talk about physics without making implicit metaphysical assertions.

That's certainly true. But one doesn't require the axiom "only physical objects exist" to do physics, and that principle is not universally accepted among physicists.

This paper -- although I'm an outsider to physics, and I'm way more comfortable evaluating mathematics and computer science research -- seems like it has an interesting and novel approach to a theoretical problem in physics. I don't understand why the author felt the need to couple this with the given title and stress this point so much. It almost felt like click-bait, until I understood generally what was being done.

You are right; just saying "he does not make new assumptions" does not quite explain the author's move well enough.

On further deliberation, I have come to the following conclusion: there is a difference between assuming "only physical objects exist, and they are finite" and arguing for it. The author is trying to say he is not arguing for the validity of the axiom; rather, he implicitly assumes it, to guide the reader's focus to the main argument he is making.

I acknowledge that you definitely do not need to make that assumption, but as I said, you have to start somewhere. I'd say the metaphysical assumption captures a substantial subset of all physics, hence his ideas are worthy of consideration.

"Since a finite volume of space can't contain more than a finite amount of information"

Is that perfectly clear? Not so, to me.

In physicist's ontology, as you surely know, that's just an appeal to Bekenstein bound.

Not really being a mathematical result, it is a consequence of thermodynamics and classical (continuous) gravity. Einstein equations can be recovered from thermodynamics and Bekenstein bound as an assumption and that's all there really is to it.

It's a known result in thermodynamics. The amount of information that can be contained within a sphere of radius r is bounded by O(r^2), surprisingly. An object can attain this bound if and only if it is a Black Hole.

This result is related to the fact that black holes evaporate. If black holes can contain information, it must be possible to read that information, or else it's observationally equivalent to the information having been destroyed. The evaporation of black holes leaks the information contained inside, avoiding a paradox.

> The evaporation of black holes leaks the information contained inside, avoiding a paradox.

Has black hole evaporation been observed, or is this just a result of the equations?

This stuck out to me too. Fractals teach us that infinite length can be fitted into finite area, so I'm dubious about this claim. Though I guess if we're in the realm of physics, the Planck constant determines a kind of minimum length, so the original statement may be true?

Pendant here: the Planck constant is simply a convenient length for all things subatomic, like a hectare or a fortnight. No minimum whatsoever implied there. Many, many things are smaller than the Planck constant.

"Many, many things are smaller than the Planck constant."

Do we have a proof of that? Honest question. I know strings are often postulated to be smaller (by many orders of magnitude in some cases) but we are not yet particularly confident they exist. I'm not aware of us being reasonably confident that anything is smaller than Planck's size. ("Smaller than Planck's size" isn't even all that easy to define.)

Event horizon of a black hole is many times smaller for instance.

The mass of a black hole determines the size of the event horizon.

To have an event horizon smaller than the Planck length, the mass has to be small too. While theoretically possible, I don't believe this has ever been considered possible in reality - see https://en.wikipedia.org/wiki/Kugelblitz_(astrophysics)

"Smaller" in what sense? Surely not diameter. There's a known formula that gives you the diameter as a function of the BH's mass.

> Pendant here: the Planck constant is simply a convenient length...

You tipped your hand by misspelling "pedant."

You mean the Planck length (related to the Planck constant, the speed of light and the gravitational constant).

> the Planck constant is simply a convenient length

m^2 * kg / s

Ideal fractals also don't exist. :-)

I'm also curious. Is it true for position/momentum/energy/spin/... of any particle?

AFAIK energy levels of electrons orbiting around a proton are discrete quanta but are all other things above also discrete, or can some be continuous?

What about the hidden quantum state of things?

Which real numbers? All of them? That's a kind of mental infinitude that I don't think exists, although Kurt Gödel did.

Anyway no numbers exist unless you're some kind of mystic or dualist. This numeral "2" isn't the number two, nor are these two dots: . . Nor is the word "two".

There are many twonesses, but they're not in the world, they are in our heads.

Like "tuesday" doesn't exist. The word exists, and if you ask somebody what day it is today they'll use that word, but there is no tuesdayness in the real physical world. You can prove this by going to a pole (North or South) and wandering around. Now "tuesday" is stuck in gimbal lock.

If numbers are "real" it is in some "other" Universe, some realm of archetypes and dream-stuff.

> Which real numbers? All of them? That's a kind of mental infinitude that I don't think exists, although Kurt Gödel did.

I read the article as saying that real numbers as a mathematical concept is not a useful physical concept, and that considering classical physics with that as a first principle yields non-determinism even in classical models.

Which is to say that the number "5.78" (exactly) has no physical meaning, even though the number "5.78" (measured) is useful. If you put something 5.7800 cm from another object, and run an experiment that relies on something on the seventh decimal point of that distance, the result will be random. From a classical point of view, the argument would be that measurement error accrues, and the randomness is extrinsic. The author argues that if you consider the randomness to be intrinsic instead, then you potentially gain some insight into the bridge between classical and quantum mechanics.

I'm not as convinced as the author, because quantifying where exactly the intrinsic error is introduced is very closely related to the foundations of quantum mechanics. There's no reason classically that you can't make an arbitrarily sensitive measuring device to resolve the "randomness".

Except that's exactly what Plato said. If one were to grant the possibility of some other realm where numbers are real or archetypes, then one is not a nominalist at all, as the author of this paper seems to be (although he may actually be a conceptualist). A nominalist would never grant that such a realm exists, or he would be a realist/platonist.

“Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk.” — Leopold Kronecker[0]

[0]: https://en.wikipedia.org/wiki/Leopold_Kronecker

Loads of questions here that seem to be based on only reading the abstract. Most of these are addressed in the paper.

Unless your new classical mechanics predicts the outcome of an experiment that other existing theories cannot predict, and that prediction turns out to be true if such an experiment is conducted, your new idea is just another hypothesis.

Furthermore, all the real physical quantities come from the fact that we assume space and time to be continuous (i.e., similar to real numbers). So your hypothesis boils down to the claim that space and time are discrete. And that is not new. Plenty of people have proposed that hypothesis, and the Planck length and Planck time are conjectured to be the smallest units of space and time. But the two values are of the order of 10^-35 m and 10^-44 s, respectively. This is way below our ability to probe smaller length and time scales. And to the extent that we have been able to probe length and time scales (roughly 10^-15), space and time appear to be continuous.

Finally, when you're giving up continuity in any sense, you're essentially 'quantizing a theory'. So it's no longer a classical theory but a quantum theory. We have already quantized energy (quantum mechanics), as well as physical fields (quantum field theory), and associated phenomena. But quantum (i.e., discrete) space and time are only a hypothesis at the moment.

Norman Wildberger has some interesting/controversial views on this matter.

Here is a link to his YouTube channel, where he has posted many lectures on subjects through his viewpoint:


I will say that he is viewed as a pretty notorious crank, especially due to his views on finitism. But I feel like his lectures on rational trigonometry are really interesting nonetheless.

"crank" is the wrong word. Mathematicians don't like his attitude, because he is smug/brash and he rejects unphysical mathematics (which many mathematicians find offensive or irrelevant), but they don't dispute his math.

> almost all real numbers contain an infinite amount of information. Since a finite volume of space can't contain more than a finite amount of information [..]

The Kolmogorov complexity of the system's current state can't be higher than that of the initial state, the dynamics, and a time parameter combined. Even if the state involves real numbers, you would have a finite description of it as long as the time values, dynamics, and initial conditions are computable.

I also think it's subtle to apply the Bekenstein bound here. Even if the numbers going in have infinite precision, that doesn't mean the relevant information quantity is infinite. For example, a qubit's density matrix has three real-number parameters and yet its von Neumann entropy is never more than 1 bit.

This corresponds to type II computability of the reals whereas the paper, as I skimmed it, is about type I computability.

"Interestingly, both alternative classical mechanics and quantum theories can be supplemented by additional variables in such a way that the supplemented theory is deterministic."

This sounds like "hidden variable" and is not true for quantum physics.

It is, but you need to give up more and allow information to travel faster than light speed.

It's local hidden variables that are a problem.

Non-locality != faster-than-light communication

As I understand it, 'spooky action at a distance' does not allow communication, but for a hidden-variables theory to work a particle needs access to another particle's hidden variables. Unless you want to suggest another model?

Why is non-locality a problem for hidden variables theories and not for the "standard" interpretation of quantum mechanics?

Quantum mechanics is based on non-locality; that's one of the things people have problems with.

Quantum key distribution, for example, seems to be based on this: https://en.wikipedia.org/wiki/Quantum_key_distribution.

As to the truth well ¯\_(ツ)_/¯

I thought you said that the hidden-variables extension of quantum mechanics is "bad" (compared to the "vanilla" QM) because you need to allow information to travel faster than light speed.

But I agree if you meant that QM (both non-local hidden-variables and standard interpretation) requires non-locality, unlike local hidden-variables theories which are ruled out by the violation of Bell inequalities which has been established experimentally.

Hidden variables were specifically proposed as a solution to non-locality. So I just mean bad in that they did not solve that problem.

True. However, unlike the "standard" QM interpretation, the non-local hidden-variables theories are deterministic (which is the reason why they were mentioned in this thread a few messages back).

Local hidden variables are not true. Nothing is said about global hidden variables.

Where could I find out more about these particular kinds of models for QM?

Yes, the very next sentence of the abstract says:

> Most physicists straightforwardly supplement classical theory with real numbers to which they attribute physical existence, while most physicists reject Bohmian mechanics as supplemented quantum theory, arguing that Bohmian positions have no physical reality.

Bohmian mechanics is a hidden variable theory.

Ok, I'm not a mathematician (or a physicist), but this seems... dubious... Like... what?:

> Moreover, a better terminology for the so-called real numbers is "random numbers", as their series of bits are truly random

0.5 is a real number. Its series of bits is not random at all. sqrt(2) is also a real number, and its digits looks sort-of random, but they're not really: they're exactly the sequence of digits that forms a number that, when squared, equals 2. You can easily develop very simple formulas or programs to generate them, so in a "Kolmogorov complexity" sense, it's not random at all. Point is: the concept of "having random digits" is in NO way intrinsic to "being a real number". I guess what he meant to say was "non-computable numbers"? So he's saying "in a finite volume, there can't be non-computable numbers"... or something?
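To make the compressibility point concrete, here's a throwaway Python sketch (mine, not anything from the paper) that emits as many decimal digits of sqrt(2) as you like using exact integer arithmetic. A constant-size program plus the number n pins down the whole expansion, which is the opposite of algorithmic randomness:

```python
from math import isqrt  # exact integer square root, Python 3.8+

def sqrt2_digits(n):
    """Return the first n decimal digits of sqrt(2) after the decimal point,
    computed exactly as floor(sqrt(2 * 10^(2n)))."""
    s = isqrt(2 * 10 ** (2 * n))  # = floor(sqrt(2) * 10^n), no rounding error
    return str(s)[1:]             # drop the leading "1"

print(sqrt2_digits(20))  # → 41421356237309504880
```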

Again: not a mathematician or a physicist, but this comes off as pretty cranky to me. I would love to be corrected.

The author alludes to the fact that the overwhelming majority of reals have an infinite number of decimal places that constitute an algorithmically random sequence [1].

Sure, plenty of real numbers have finite (0.1, 0.01, ...) or non-random (0.111..., 0.0111..., ...) sequences of decimal places, but there are only a countable infinity of these, whereas there is an uncountable infinity of the other, "random," kind.

Usually we almost exclusively deal in the only-countably-infinite subset of the reals which are not random and it's counterintuitive that these comprise a tiny tiny archipelago of order in a huge engulfing ocean of randomness, but such is the case.

Gregory Chaitin's "How real are real numbers?"[2] has some relatively straightforward further reading on this if you're interested.

1: https://en.wikipedia.org/wiki/Algorithmically_random_sequenc...

2: https://arxiv.org/abs/math/0411418

The author's writing has a lot of background behind it that is true, and yet the author's argument that they should be called "random numbers" is poor, at best. It is not typically helpful to think of real numbers that way - at the very least you're "missing" a tiny but hugely important subset that aren't random in any sense. But more importantly, it doesn't match up with any of the usual definitions of real numbers, since those involve no such restrictions of randomness of digit expansions (and generally avoid digit expansions entirely). We might as well argue that the real numbers ought to be called the non-zero numbers since almost every one of them is non-zero.

I don't think the author seems particularly aware of the existence of such arguments in the past (I assume he comes from a physics background and is new to this - though so am I). Glancing through the works cited I don't see references to past treatments of this idea like Chaitin's. If the author is not a crank, he is doing a poor job of convincing us of it.

Chaitin is a serious scientist/mathematician who knows what he is talking about, and there is plenty of discussion to other works and references in his own research. And his published work on algorithmic information theory is quite extensive.

He is using the notion of randomness as "uncompressibility", which is a very mainstream view (perhaps better known as Kolmogorov complexity, who is much more well known than Chaitin).

Just because you are not convinced doesn't mean he's a crank, it just means your background in this area is limited.

I would also note that you're reading a popular text, one that is purposefully designed to reach a mass audience, so the lack of formalism is likewise intentional, and not a reflection of crankism. You're welcome to look up his research papers if you'd like to see the difference.

I think you misread my comment which was not at all commenting on Chaitin! I was commenting on the article by Gisin, the one this thread is on.

He's a very reputable theoretical quantum physicist with several respectable results to his name. I know because my PhD was in a similar field. You could perhaps accuse him of ignorance of results in other specialities, but not crankism.

Not necessarily. Linus Pauling had several respectable results to his name, too. Even the smartest people can be so wrong when working in fields even a little bit outside their specialism that they are almost indistinguishable from cranks.

This IS his field of research.

I only accused the author of doing a bad job of not looking like a crank. I realize that non-cranks do that sometimes.

I don't know if it's still the case, but arxiv papers without a subject classifier used to be more likely to be "cranky" (contrast the /math/ in the Chaitin paper url.)

So I don't know about the paper here - I only read the abstract - but I wouldn't get too hung up on terminology. "Random" is just an easier way of saying "uncomputable" I guess. But the broad topic itself is quite respectable and of interest to non-cranks, for example, see John Baez' "Struggles with the Continuum"[1]:

> Is spacetime really a continuum, with points being described — at least locally — by lists of real numbers? Or is this description, though immensely successful so far, just an approximation that breaks down at short distances? Rather than trying to answer this hard question, let us look back at the struggles with the continuum that mathematicians and physicists have had so far....

1: http://math.ucr.edu/home/baez/continuum.pdf

EDIT: The author seems like a perfectly respectable, non-crank physicist (https://en.wikipedia.org/wiki/Nicolas_Gisin) as per another comment here.

That difference is due to a systematic change in how arXiv generates URLs for submissions, starting a few years ago. No arXiv submissions today will have a URL with a category. Articles (each with a unique arXiv id YYMM.xxxxx) can have multiple topic tags; the article under discussion has two tags, one of which is "quant-ph". Cranky articles which cannot be refuted on technical grounds are often found on the "general" topic tags.

As an aside, since the above comment is essentially trying to infer the quality of the paper from metadata, it might be worth knowing that the author is a well-regarded quantum physicist with many highly cited papers.

Nitpick: non-random numbers are rare (this word is usually formalized as "the set of all non-random numbers has measure zero"), but still uncountable. The set of computable sequences is countable, but it is only a tiny subset of non-random sequences. For example, 0.0x₁0x₂… is non-random (= partially predictable) for any sequence x₁x₂…, and this set is obviously non-countable.

I think sets like 0.0x₁0x₂… ought to be considered "mixed random", not "non-random", since they are homomorphic to a superset of random numbers. (My terminology is hand-wavy.)

Is this a difference between random and algorithmically random? I'd have expected "not computable" to be synonymous with "algorithmically random."

Could you show us one of these "random" reals?

There is a simple algorithm for generating every possible decimal number:

In binary, it's even simpler; you just generate every permutation of bits.

Your proposed enumeration of decimals (counter-intuitively) would miss an uncountably infinite number of values. This enumeration is not possible in any fashion, which you can prove fairly easily with Cantor's Diagonal Argument[1].

The punch-line is that there are more real numbers between 0 and 1 than there are whole numbers greater than 0. The proof basically shows that even if you imagine that you had a countably-infinitely-long list of all the real numbers between 0 and 1, you can construct a number that must be missing from the list -- which is a contradiction, and thus no such list can exist (implying that there are more numbers between 0 and 1 than numbers greater than 0).

For instance, your enumeration is missing every irrational (or even recurring decimal) number. That set of numbers is uncountably infinite.

[1]: https://en.wikipedia.org/wiki/Cantor's_diagonal_argument
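The diagonal construction itself fits in a few lines. A toy Python sketch (the `listing` callback is a hypothetical stand-in for any claimed enumeration; whatever it returns, the constructed number differs from entry i at digit i, so it can't be on the list):

```python
def diagonal_missing(listing, n):
    """Build the first n digits of a real in (0, 1) that differs from the
    i-th entry of `listing` in its i-th digit."""
    digits = []
    for i in range(n):
        d = int(listing(i)[i])                 # i-th digit of the i-th number
        digits.append("5" if d != 5 else "4")  # pick any different digit
    return "0." + "".join(digits)

# Toy "listing": the i-th entry is just i's digits padded with zeros.
toy = lambda i: (str(i) + "0" * 100)[:100]
print(diagonal_missing(toy, 8))  # → 0.55555555
```

Sticking to the digits 4 and 5 also sidesteps the 0.4999... = 0.5000... ambiguity that a careless diagonal argument can run into.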

>The proof basically shows that even if you imagine that you had a countably-infinitely-long list of all the real numbers between 0 and 1, you can construct a number that must be missing from the list -- which is a contradiction, and thus no such list can exist (implying that there are more numbers between 0 and 1 than numbers greater than 0).

So, there is no list of all the real numbers between 0 and 1, but you are claiming there are uncountably infinite of them?

That's the definition of uncountably infinite: can not be put in 1-to-1 correspondence with the integers (can not be listed).

Yes, because if there were such a list then you could assign a 1-to-1 mapping (by list index) of each number between 0 and 1 to a whole number which would prove they have the same size. If it is impossible to have a complete list of all real numbers between 0 and 1 then there cannot be such a mapping, and thus you'll "run out" of integers before you've counted all of the reals (more correctly you're showing that the size must be larger because if it was smaller or of equal size it could fit in such a list).

Numberphile did a pretty good introductory video to this if you want to learn more: https://www.youtube.com/watch?v=elvOZm0d4H0.

That is pretty much the definition of uncountability :)

It's counter-intuitive, but your algorithm actually only generates a subset of the rational numbers. That's because all of your numbers have a finite number of decimal digits. Most reals, like 1/3 or pi, will never appear in your list. (Though numbers close to them will.)

Most reals don't have a generating algorithm, since there is an uncountable number of reals, but only a countable number of algorithms. (Assuming an algorithm has to be a finite length program.)
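The countability of algorithms is easy to make concrete: if you enumerate all finite strings over a fixed alphabet shortest-first, every finite-length program shows up at some finite index (a quick Python sketch, just for illustration):

```python
from itertools import count, product

def all_programs(alphabet="01"):
    """Yield every finite string over `alphabet`, shortest first.
    Each finite-length program appears at some finite index, so the set of
    programs -- and hence the set of computable reals -- is countable."""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

gen = all_programs()
print([next(gen) for _ in range(6)])  # → ['0', '1', '00', '01', '10', '11']
```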

I'm not sure 1/3 is exactly a number. It seems to be an expression when written out. pi, likewise, appears to be a word, not a number.

Numbers aren't lists of digits. They're abstract concepts that are referred to by lists of digits for the purpose of convenience. You can even have numbers without having invented digits. It's just annoying to write out one million, two hundred thirty-four thousand, five hundred sixty-seven and eighty-nine hundredths. So, we say 1,234,567.89 (or 1.234.567,89 for countries that use the comma for decimals).

0.5 in base 10 is the same as 0.1 in base 2, which is the same as 0.11111... in base 3. And the choice of base is completely arbitrary, with some societies using others, like base 12 or base 60.

That gives every terminating decimal or binary number. For example, you will never list 1/3, or pi or sqrt(2).

To see why you cannot generate every possible real number as a list, look up cantors diagonal argument.

Are you sure 1/3 is a real number?

Real numbers are defined as a particular extension of the rational numbers, which 1/3 is.

"Real numbers" here refers to a very specific construction of numbers. There is pretty good argument, which the actual post is about, that states this construction is actually rather unrealistic.

The issue being that there are way to many "real numbers".

It's as "real" as 1/2, which is on your list.

I guess you also need to interpret each number in the list in each possible base.

I'll be generous and assume you mean decimal as in the data-type: an interesting set of "floating point" numbers that behaves really badly (not a field, for example)

I'm a mathematician by training (though this is thoroughly outside the scope of a mathematician's job) and completely agree. That said, there's a real topic in the philosophy of mathematics called "finitism" that argues we should only consider finite mathematical objects and would exclude almost every real number. [1]

[1] https://en.wikipedia.org/wiki/Finitism

Almost every is a very understated way to put that.

“Almost every” is a term of art in mathematics. It means basically “100% wherever you look, but not really 100%”


Isn't it the opposite, more like 100% everywhere you don't look, and nowhere you do? I think the really real numbers are the countable subset of them which are in some way expressible, and the uncountable subset which are in no way expressible except in vague generalities like "they're not shaved by the barber" are in a sense less real.

I like your definition of "look".

I was going with where a dart* would land (100% of the time it will "really" hit a point that can't be expressed using computable numbers), but you're using it to mean anything we can compute or express. That is, if we're looking somewhere, it has to be somewhere we can say that we're looking, and therefore expressible. If we take your meaning, then yeah, that's what I mean as well :)

* (Yes, the dart is an abstract one. No comments about the Planck length, please)

The example with the dart plays into the «random numbers» notion, i.e. you have to use some kind of random process to get hold (non-constructively) of such a number.

Expressed as a percentage, it would be 100%, though ;)

Well, almost...

Nope, exactly. The measure of the interval [0, 1] excluding any countable subset (for example all rational numbers) is still exactly 1, that's usually what mathematicians mean by "almost every", here "almost every real number in [0, 1] is irrational" :)

I was joking, but to a point.

The problem is that "100%" =/= "All". But most of the time, it is considered to be equivalent. The "almost all" phrase is used instead of "100%" precisely because dividing by infinity doesn't yield a useful result in some contexts.

In more... "conventional" cases, if I tell you that 100% of the items are blue, you can conclude that none of them are red. With the intermingling of countable and uncountable sets, this statement fails. You have a situation where 100% of the items in the superset are blue, and there are also some red ones.

So it's "almost 100%" in the same way it's "almost all". I'm using the word "almost" to mean "infinitesimally close to"

Really, exactly 100%.

Yeah. But “100%”, and “all” or “every”, are considered synonyms most of the time. Not in this case though.

nice catch

How could you exclude "almost every real number", if such a thing should not even be considered?

You simply choose not to invent any of them.

You just leave the axiom of infinity out.

I would phrase the crux as follows:

Any physics theory is basically a map from one set of measurements (past) to another set of measurements (future). Any physical measurement can only have finite precision, due to fundamental constraints from physics and information theory. The set of finite precision numbers has vanishing measure in the set of all real numbers (since finite precision is a subset of rationals). So are real numbers really necessary?

As a physicist, while this question is definitely interesting, it’s not clear what will come out of this line of questioning (aka research).

I think you're suggesting that "real" numbers are an abstraction that is convenient to model with, but aren't actually fully representable in the universe, because of Planck and Heisenberg and such? I would agree with that view.

It's not even necessarily about quantum stuff. The point is that any practical measurement system (even classical ones) can only deal in finite precision numbers. And since that set is of vanishing measure on the real line, there isn't an obvious notion of continuity/smoothness (my knowledge here is limited). That can lead to some surprises.

For the more advanced readers: this result reminds me of how Haag's theorem shows (rigorously) that relativistic interacting QFTs can't exist (assuming operators sit at points indexed by real numbers), while in a Wilsonian sense QFTs are perfectly fine, because there's a UV cutoff, and all measurements have a finite resolution.

If we're going with the theory that things with a wavelength shorter than the Planck length don't exist then the sampling theorem should allow us to describe the universe with a discrete set of points.

General relativity complicates things (as usual) but the sampling theorem can be generalised to non-uniform samples so it should probably still work somehow.

In both cases this discrete theory is equivalent to a theory based on real numbers though, so in that sense there's not really any way to say which is better.

Well, computable numbers are countable, so almost all real numbers are non-computable (also they're dense in R).

So while your objection is valid, in practice any truly randomly selected real number is non-computable.

I guess I'm saying: such a basic mistake in terminology in the abstract doesn't bode well for the soundness of the paper in general. Being able to properly tell apart the terms "real number" and "non-computable number" is a fairly basic qualification.

A quick check on the author leads to https://en.wikipedia.org/wiki/Nicolas_Gisin. I think it's going to be rather safe to conclude that this author is well-aware of the difference between the computable and non-computable parts of R.

It's not a "qualification", it's just someone using a slightly different set of definitions for convenience. Anyone "qualified" to read the paper understands the meaning.

To nitpick: to say that "in practice" you won't select a rational is understating the case. Being uncountable, the set of irrationals is infinitely larger. The chance of selecting a rational "randomly" is zero as far (heh) as zero has any practical meaning.

A computable number can be a normal number.


How does one go about truly randomly selecting a real number?

This sort of depends on whether you're talking about "randomly selecting a real number" in the real word or in the mathematically abstract world. In the real world, you can't do it, because almost all the real numbers are non-terminating decimals. Or to put it another way, any selection process is a computation, and almost all the real numbers are uncomputable and so can't be found by any computation.

To randomly generate a real number in the abstract, I suppose you could do a random walk of infinite steps on the real number line, with each successive step randomly going left or right and shrinking the distance walked. Off the top of my head I'm not sure if that's rigorous, but it sounds like it would work.

Simply write its binary expansion, by flipping coins, (heads=1, tails=0), forever.
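In code you only ever realize a finite prefix of that expansion, of course; the infinite sequence of flips is the ideal object. A quick Python sketch:

```python
import random

def random_real_prefix(n, rng=None):
    """First n bits of the binary expansion of a 'randomly chosen' real
    in [0, 1), one fair coin flip per bit (heads=1, tails=0)."""
    rng = rng or random.Random()
    bits = [rng.randint(0, 1) for _ in range(n)]
    return "0." + "".join(map(str, bits))

print(random_real_prefix(16))  # e.g. 0.1101001011000110
```

With probability 1 the full infinite expansion is uncomputable, but any prefix you actually print is just a dyadic rational approximation.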

Very carefully.

The question is whether the real numbers (as a set) are real, not whether each specific real number is a thing.

Transcendentals — the numbers that make the real numbers distinct from the algebraic numbers — are not necessary for anything we do, and all the weirdest corner cases in math seem to come from treating them as a thing. sqrt(2) and 0.5 aren't transcendental. 0.5 is a rational. Sqrt(2) is an algebraic. You can express them in physical terms (as roots of finite polynomials).

Most reals don’t need to exist for any purpose. All we need is some kind of vector space of constants (e.g. Planck’s constant, electro-permeability, the speed of light, the mass of an electron) over the unbounded algebraics. It’s entirely possible that some of these constants would in fact be transcendental, but such a space is much, much smaller than the reals. These are the numbers we can construct.

pi and e are very important numbers!

They also belong to the much smaller set of computable numbers. The set of all reals is almost entirely full of absolutely useless numbers- ones that can't be compressed or expressed by any algorithm or notation system with a finite amount of space.

I've never seen one of these supposedly useless numbers. Every one I've seen can be expressed. ;-)

Right, they might well be basis vectors.

> 0.5 is a real number. Its series of bits is not random at all

Would you say that an (arbitrarily long) sequence of coin tosses that all come up heads "is not random at all"? I'd say it's vanishingly unlikely to occur (if the coin is actually 50/50), but just as unlikely as any other particular sequence of the same length. It's like if a lottery gave out [1, 2, 3, 4, 5, 6]: it's a remarkable pattern, but no more or less likely than any other particular result. The same goes for the never-ending decimals of 0.5 and sqrt(2), and it's misleading to draw general conclusions from such specifically-chosen examples.

As of the end of April 2018, we've had 400 consecutive months where the world's temperature was hotter than average for that month, yet a whole lot of politician say that's just due to random variation.

I bet they could take a long run of heads in stride without thinking that maybe they have a biased coin.

> It's like if a lottery gave out [1, 2, 3, 4, 5, 6]: it's a remarkable pattern, but no more or less likely than any other particular result.

Do you know if there's a name for the fallacy you've alluded to? (Assuming that an ordered sequence is less likely than any other.)

Interestingly, the fallacy is actually good reasoning. Even though every permutation is equally likely, the permutations fall into equivalence classes of "notoriety", and the class of ordered sequences is far smaller (== less likely) than the class of disordered sequences (which are all equivalently noninteresting).

(This is entropy, as it applies to both information and the physical world.)
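The class-size argument can be checked by brute force. A small sketch (variable names are mine): every individual sequence of 10 flips has probability 2^-10, but the class "all heads" contains a single sequence while "exactly half heads" contains hundreds:

```python
from itertools import product
from collections import Counter

n = 10
# All 2^n sequences of n flips are equally likely (probability 2^-n),
# but the "notoriety classes" (grouped here by total heads) have
# enormously different sizes.
counts = Counter(seq.count("H") for seq in product("HT", repeat=n))

print("all heads:", counts[n], "of", 2 ** n, "sequences")
print("exactly half heads:", counts[n // 2], "sequences")
```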

> 0.5 is a real number

Yes, and it is one of the few we have no problem with (7 is another example of this class): the rationals (or the algebraics). Unfortunately, the vast majority of the reals are not that well behaved. E.g. we still do not know how many reals there are. The reals are a beast when viewed from a foundational perspective.

> E.g. we still do not know how many reals there are.

Umm... yeah we do, the cardinality of the continuum is the same as the cardinality of the power set of natural numbers. We've known this for 140 years.

Reading between the lines, I think the author is really talking about the "non-computable numbers" (i.e. those real numbers that can't be calculated to an arbitrary precision by any Turing machine), but if that's what the author is referring to, he should just say "non-computable numbers", not "real numbers", which is a much broader class.

You overstate "knowledge".

We project back currently accepted axioms, and find that Cantor's proof works. And therefore it was known, because the proof was known.

However Cantor's proof is not so cut and dried, nor was it so unarguable at the time.

It was based on set theory, and it was not clear to people at the time that set theory actually worked. Indeed, in 1901 Bertrand Russell came up with "the set of all sets that do not contain themselves" and came to a contradiction.

One of the proposed resolutions was to find a better set of axioms, which led us to ZF and later to ZFC. This is the path that mathematics took.

Another was to question what words like "exists" and "truth" mean. In particular, does it make sense to talk about the existence of something we cannot construct? To talk about the truth of a statement that we have neither proof nor disproof of? This path leads to constructivism, and in constructivism Cantor's "proof" isn't a proof at all!

As it turns out, there are philosophical reasons to prefer constructivism, but mathematics is easier to do within formalism. After mathematicians gained enough experience with and trust for ZF, they went with convenience. But there are plenty of mathematicians historically, and even a few remaining today, who think that the entire tower of cardinalities from classical set theory is formal nonsense meaning nothing. And there is no logical flaw in their views.

Cantor's proofs (he gave multiple) are un-controversial. There's absolutely zero question about them from any reputable mathematician.

One COULD take issue with the wording: what Cantor demonstrated is that there is no injection from the reals to the naturals (i.e., no way to assign a natural number to every real with no repeats). Anyone who disputes this is a quack. The layman controversy comes from our choice to describe this situation in English as, "There are more reals than naturals". That's merely a shorthand for the more precise statement. People get bent out of shape because they mistake the shorthand itself as some deep philosophical claim, rather than looking at what it actually means.

You overstate your case.

The proof that there is a bijection between the reals and the power set of the natural numbers depends on Cantor's bijection theorem. As you can verify from https://en.wikipedia.org/wiki/Constructivism_(mathematics) or many other sources, that proof and theorem have been rejected by many reputable mathematicians over time. Most notably including Brouwer.

Constructivism is subtler than that (the wikipedia intro is misleading).

When an intuitionist says that we can't use the principle of excluded middle, they mean it more like in a functional programming sense: if we have two recipes for a cake, one of which requires a proof of X, and one of which requires a disproof of X, we cannot combine those recipes with a proof of "X or not X" and bake a cake.

Intuitionists noticed that (in a sense that can be formalized), if you do mathematics while "pretending" that the law of excluded middle is doubtful, then all your proofs become constructive. There is a misconception among laymen, who see mathematicians pretending this way for a pragmatic purpose and mistakenly think they are doing so out of philosophical principle. That's never or almost never the case.

I can't speak for Brouwer's "religious" beliefs but what I can say is: if he attempted to teach students "It isn't always true that (P or not P)", without appropriate disclaimers that by saying that he's actually saying something very subtle and precise--then his math department would be obligated to stop him from misleading those students.

I have no idea what your background is, but my understanding is exactly opposite of yours. You are shoehorning people who think something very different from what you think into the framework of how you think people should think about it.

The first "pure existence proof" by contradiction was due to David Hilbert in 1888. The now-named Hilbert Basis Theorem resolved a famous problem introduced by Paul Gordan. Paul Gordan's famous response was, "Das ist nicht Mathematik. Das ist Theologie." (That is not Mathematics. That is Theology.)

In the debates that followed in subsequent decades, the question was whether or not we could sensibly talk about the existence of something we have no way to find or verify. Brouwer wasn't just making a subtle point in the 1920s that is palatable to modern mathematicians. He was rejecting the symbol game that was Hilbert's Formalism as meaningless nonsense.

Brouwer's school lost. So it is true that any Constructivist today does have to present a weakened form in public that fits your description. But historically they meant exactly what they said. And what they said is that proving that there is an unfindable contradiction in an infinite set is not an acceptable proof of existence. And what they believed was that allowing such unsound reasoning methods would lead to contradictions. (This belief has since been disproven, but in the 1920s nobody can be faulted for having been genuinely concerned about it.)

To gain a better understanding of the times, I would recommend two books. The first is Hilbert! by Constance Reid. It is a biography of David Hilbert, and does a surprisingly good job of describing the fundamental epistemological issues that led him to Formalism. The other is The Mathematical Experience by Davis and Hersch.

I'm not a math historian, so I'll take your word for it about the math history. I should have qualified that my statements apply to contemporary mathematics.

Paradox: Cantor's theorem is uncontroversial in mathematics. But the controversialness of Cantor's theorem is uncontroversial in history of mathematics. :)

I, for one, am glad Brouwer's school was defeated: I wouldn't want to choose a mathematical denomination like people choose their church denomination.

EDIT: Thinking about it deeper, it does make me wonder if we haven't all already chosen a mathematics denomination, and just not realized it. It's fun to imagine an alternate reality where Brouwer won and Hilbertists are forced to couch their theorems with elaborate contortions about "when I say X exists I really mean that a Hilbert-style proof that X exists exists"...

We have indeed all already chosen a mathematics denomination. And people like me who don't think that it makes actual sense to talk about the "existence" of things that cannot be in any useful way described have lost.

That said it is worth understanding very clearly, no matter what your preferences, that the deciding factors in any debate between the two sides did NOT center on logic. Logically both positions are internally consistent. In the end it comes down to asking whether or not you wish mathematics to be convenient, or about something real. Convenience won.

Which is the same reason that ZFC beat out ZF. (Though choice is more commonly used in an alternate form such as Zorn's lemma.)

>Though choice is more commonly used in an alternate form such as Zorn's lemma

Speaking of choice & constructivism, did you know that there exists a constructive version of the axiom of choice? See here:


[Historical side note of somewhat fleeting importance on something which, besides me, nobody so far seems to have really noticed:

It seems that van Heijenoort 'overtranslated' Hilbert's infamous boxing gloves paragraph. Here's the original German, from Die Grundlagen der Mathematik, Abhandlungen aus dem mathematischen Seminar der Hamburgischen Universität 6 (1928), 65-85:

"Dieses Tertium non datur dem Mathematiker zu nehmen, wäre etwa, wie wenn man dem Astronomen das Fernrohr oder dem Boxer den Gebrauch der Fäuste untersagen wollte."

Here, van Heijenoort's translation:

"Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists."

Here, a more literal translation:

"To take this Tertium non datur from the mathematician, would be about how when would want to proscribe the Astronomer the telescope or the boxer the usage of the fists."

Now, the point of this doesn't consist of the sentence now sounding rather different (and somewhat badly phrased) - in fact, that seems rather typical of intentionally more literal translations, as usually, translation also involves fitting the translated version of the text into the common use of the target language. No, the point consists of the fact that van Heijenoort should have left the latin expression "Tertium non datur" in place, see the following reddit comments thread about TND 🆚 LEM for why:

https://www.reddit.com/r/dependent_types/comments/33tc28/jon... ]

I noticed you consistently misspell Reuben Hersh's surname. Sorry I didn't correct you when I first saw you doing that on HN close to a decade ago, assuming you might soon self-correct if only by glancing over the book you hold so dear.

As for the book by Constance Reid, Gian-Carlo Rota's review entitled Misreading the History of Mathematics is all that should be said of it.

I second the recommendation of Reid's biography (although the title does not have an exclamation point): great book about a truly great man. Eventually Gordan came around, to some extent, and said that maybe "even theology has its uses."

> in constructivism Cantor's "proof" isn't a proof at all!

Are you sure about that, and could you point me to a reference if so? It was my understanding that Cantor's proof works perfectly well in a constructive setting.

For example, we can represent real numbers in a constructive way as functions from a natural number to a digit, where we interpret f(N) as giving us the Nth decimal place. We can represent an infinite list in the same way: taking a natural number and giving the value at that list position (a real number is hence an infinite list of digits).

Hence we can construct a function F which takes in a list of real numbers (a function mapping natural numbers to functions-mapping-natural-numbers-to-digits) and gives out a real number (a function mapping natural numbers to digits). Given a number N, this function looks up the Nth digit of the Nth number in the list, and returns a different digit (e.g. one more, modulo 10). Formally, in some Agda-like notation, it would be something like:

    Digit = Member of {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

    inc : Digit -> Digit
    inc(9) = 0
    inc(d) = d + 1    -- every other digit maps to its successor

    List(t) = Natural -> t

    Real = List(Digit)
    Real = Natural -> Digit

    F : List(Real) -> Real
    F : (Natural -> Real) -> Real
    F : (Natural -> (Natural -> Digit)) -> (Natural -> Digit)

    F(list)(n) = inc(list(n)(n))
Cantor's proof is then a function which takes a list L and returns a proof that F(L) doesn't occur in L. We can represent non-occurrence using a function taking a natural number N and returning a proof that the Nth element of L differs from F(L) (this is a list of proofs!). We can represent proofs that two numbers differ by using a natural number, which gives (one of) the decimal places at which those numbers have different digits. We know from the definition of F that F(L) will differ from the Nth value in the list, and more specifically that it will differ at the Nth decimal place. It's easy to prove that distinct digits are different, since the set of digits is finite (although the exact encoding depends on the logical system being used). Hence we just need to show that `inc(N) != N`, and use that as proof that `F(L)(N) != L(N)(N)`, and hence `F(L)` doesn't occur in `L`:

    -- Proof that x != y, however you want to represent that (e.g. 'Equal x y -> Empty')
    Differ(t) : (x : t) -> (y : t) -> Set of proofs that x != y

    incDiffers : (d : Digit) -> Differ(Digit)(d, inc(d))
    incDiffers(9) = 9 != 0
    incDiffers(d) = d != d + 1    -- one clause per remaining digit

    -- Proof that two reals differ, which is a natural and a proof that they differ
    -- at that digit
    RealsDiffer(x, y) = {n : Natural | Differ(Digit)(x(n), y(n))}

    -- Proof that F(L) differs from everything in L. This is because the Nth element of L
    -- differs from F(L) at (at least) the Nth decimal place.
    cantor : (L : List(Real)) -> (n : Natural) -> RealsDiffer(L(n), F(L))
    cantor(L)(n) = (n, incDiffers(L(n)(n)))
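For readers who prefer something executable, the same diagonal construction can be sketched in Python (a real modeled as a function from digit position to digit, a listing as a function from index to real; all names here are illustrative):

```python
def inc(d):
    # Successor mod 10: always differs from d itself.
    return (d + 1) % 10

def diagonal(listing):
    """Given listing : index -> (position -> digit), return a
    position -> digit function differing from listing(n) at position n."""
    return lambda n: inc(listing(n)(n))

# Example listing: the n-th "real" has every digit equal to n mod 10.
listing = lambda n: (lambda pos: n % 10)
d = diagonal(listing)

# The diagonal number escapes the listing: it disagrees with the
# n-th listed number at the n-th decimal place, for every n.
assert all(d(k) != listing(k)(k) for k in range(100))
```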

The diagonalization argument is constructive, but with an asterisk. However the classic understanding of cardinality also requires Cantor's bijection theorem (better known as the Cantor–Schröder–Bernstein theorem): if there is a one-to-one function from A into B and from B into A, then there is a bijection between them. This proof is non-constructive. And without it, the entire tower of cardinalities falls apart.

The asterisk with the diagonalization argument can be seen best with an example. Suppose that we define "reals" as programs which have been proven to construct Cauchy sequences that converge at a given rate from some set of axioms. Suppose that we construct a program that will enumerate all possible proofs from that set of axioms. This is doable. We search those proofs for proofs of statements of the form "this computer program computes a Cauchy sequence". That gives us a purported listing of all possible real numbers according to this definition. (Any given real will show up many times.)

Now apply Cantor's proof. It can construct a program which we believe produces a Cauchy sequence which converges at the given rate which is not in the list of all possible real numbers. The construction is concrete and we can write down the resulting program with only modest difficulty. BUT is the program that we found a real number under the definition that we chose?

It cannot be if the set of axioms is consistent. Because if there was a proof that it was, then it would be on the list and we'd be looking for a number that we don't want. Indeed, a proof that it actually produces a Cauchy sequence would follow immediately from the assertion that the axiom system we were using is consistent. But Gödel's theorem says that the axiom system, if it is consistent, can't ever prove that. The program that we have produced is some sort of "meta-real" - but it is not a "real" by the definition chosen since there is no proof from the chosen axiom system that it actually produces a Cauchy sequence.

And now Cantor's argument is not showing up as "there are more reals than integers". It is showing up as some sort of, "Our definition of reals is complex and recurses in on itself in a way that tangles up the division between true, false, and unknown."

See https://mathoverflow.net/questions/30643/are-real-numbers-co... for more variations on the complexity of constructivism and Cantor's diagonalization argument.

I don't think this construction works, for the same reason that the rationals have lower cardinality than the reals. I.e., I think your program captures the rationals, not the reals.

They would be rationals if the "lists" were finite (i.e. if their input were finite, like "natural numbers less than 100"). Infinite lists of digits (i.e. where the input can be any natural number) are reals.

One subtlety is that we're talking about arbitrary functions, e.g. they aren't necessarily computable, or even representable (e.g. as programs, or otherwise).

Presumably drvd is referring to the independence of the Continuum Hypothesis: we don't know |R| in the sense that we don't know for which ordinal alpha is |R| the alpha'th infinite cardinal. The Continuum Hypothesis is the claim that |R| is the first cardinal after |N|. It's well known the Continuum Hypothesis is independent of ZFC. That means for the question of "how many reals there are" (at least if that question is read in the formal sense), the answer is "it depends: for different models of ZFC, there are different answers".

What is the cardinality of the powerset of the naturals?

It's usually referred to as "c". I'm not sure what you're asking? It's like saying "3 is the number that follows 2", and then asking "yeah, but what really IS the number 3, man?". The answer is: it's the cardinality of the sets which can be in a bijection with the power set of the natural numbers.

If you're referring to the continuum hypothesis, that is a distinct question from "what is the cardinality of the real numbers". Also: a solved one, the continuum hypothesis is independent of ZFC, so saying "we don't even know what the cardinality is" is still wrong.

This is an extremely unusual conception of epistemology, even for mathematics. This knowledge that is contingent on axioms isn't at all convincing. At the beginning of the 20th century, this was a raging debate. It wasn't quite resolved, it just didn't quite matter to practicing mathematicians so it kind of faded into the background.

There is a massive gap between 3 and the cardinality of the continuum. 3 is directly examinable. If I count some collection of objects, my fingers say, I will immediately grasp 3.

On the other hand, the set of real numbers is a highly pathological, abstract concept. Everything we know about the reals suffers from two severe deficiencies: One, it depends on infinitary axioms which presuppose facts about the phenomenon we would like to investigate. Two, even what we can infer from such axioms is always indirect evidence. That's not surprising: All reasoning about infinity is indirect.

In certain mathematical settings it is even true that the Cauchy sequence definition of real numbers and the Dedekind cut definition do not agree. At the very least, the reals isn't even a thing: there are many reals. That you say that "we know the cardinality of the reals" because we know CH is independent of ZFC is... preposterous to say the least. "If you assume you know the cardinality of the reals, then you know the cardinality of the reals; therefore we know the cardinality of the reals" is basically what such axiomatic acrobatics boils down to. Axioms are not knowledge.

Here, another interesting construction of the real numbers:

https://arxiv.org/abs/math/0405454 - The Eudoxus Real Numbers



(Surprisingly enough, ncat & Wikipedia complement each other here in their explanations of it.)

Getting back to the OP: in a very real sense, the set of Real numbers does not exist. Therefore, it is impossible to know its size via experience. The only possible way is to derive it from chosen axioms.

Note that even your conception of 3 is reliant on axioms. How do you "know" that 3 ducks is the same 3 as 3 fingers? Only via axioms.

That sounds a little absurd. A five year old child knows that 3 fingers is the same 3 as 3 ducks. Certainly 5000 years ago people understood 3 without ever knowing what an axiom is. The Peano axioms didn't create arithmetic; arithmetic created the Peano axioms. S(S(S(0))) follows | | |, not the other way around.

The cardinality of the continuum is a cardinal number. It’s one of the alephs. It is not known which aleph it is. So it’s not known what the cardinality of the powerset of the naturals is. It’s just known that it is the same cardinal number of the reals. Basically, we have two jars of marbles that contain the same number of marbles but it’s not known how many marbles that is.

The continuum hypothesis being independent just means that it's an additional rule you can add or remove from the game you are trying to play. It doesn't mean we are lacking in knowledge and that if we were to work harder we would solve this problem. We do know which aleph c is: it's aleph_1 with CH and some other aleph without CH. Just take your pick which version you like better.

It's not like we don't know which one is the true model of military combat: chess or checkers. They're just two different games with two different rule sets, and you get to pick which one you like to play more.

The set theory that most working mathematicians deal with is ZFC. In ZFC it is not known what cardinal the continuum is. Hence the statement that I was responding to is incorrect. The person I responded to said that they do know how many reals there are.

The cardinality of the reals is called c. It is known to the be the same as the cardinality of the power set of the naturals. It is not known, in ZFC, which aleph this is. We just know that it is the same as the size of another set.

If you want to add an axiom and say that c is aleph1 then you are free to do so. But if you don't have this axiom then you don't know which aleph it is. So in what sense can you say that you know how many reals there are? You only know it if you add an axiom that says, "It is aleph1."

If I have a jar of pennies and I know it has the same number of pennies as the number of quarters in another jar that I have, does this mean I know how many pennies are in the jar?

Exactly right.

I think the issue is with the implicit claim that we don't "know" a cardinal until we know which aleph it corresponds to.

> What is the cardinality of the powerset of the naturals?

That question is better than almost every other question!

The complement of the set of normal numbers has measure zero. A normal number is what the author of the paper is talking about when referring to random numbers. I think most mathematicians, if they had to bet, would bet that sqrt(2) is normal. It is not known though. If it is normal then its digits are random.

I haven’t read the paper but I think the author is arguing that most real numbers don’t make sense physically. If sqrt(2) were shown to be normal and since it’s the hypotenuse of a right triangle of legs with length 1, I wonder what the author's response to this would be. Perhaps he’d say that such a triangle doesn’t exist physically.

Perhaps he’d say that such a triangle doesn’t exist physically.

I am certain that this is the case given that he's referring to the maximum information density of space as one of the reasons why exact numbers make no physical sense. See https://en.wikipedia.org/wiki/Bekenstein_bound if you don't know about that limit.

More directly, the inability to represent exact lengths falls out of the Heisenberg Uncertainty principle. As an abstract mathematical concept, numbers could be anything. As a physically relevant concept, there are limits to how much precision can matter.

What does it mean to produce a physically realized triangle with length exactly 1? Some specific number of atoms in a specific place? How does that square with the uncertainty principle?

Pretty much everything in mathematics was named on discovery, which was usually before they understood what the concept they were discovering actually was. And sometimes even the name that was given is meaningless in English, because it was given in a foreign language. Real numbers aren't somehow special here.

There are a lot of elements in mathematics that arguably could be named better. Examples include (and obviously this is opinion, not a mathematician, in all humility, etc):

+ Negative numbers, real numbers, imaginary numbers, quaternions, etc. (which should really all be named in a more systematic way)

+ sin, cos, tan (I doubt any link remains between the function and the concept in modern teaching)

+ Anything named after a person instead of named for what it is (e.g., Euler's constant e; should have a more informative name)

+ plus, multiply, exponentiation, Knuth's up-arrow (a progression of the same concept, should be named systematically)

+ The assuredly long list of examples I don't know about

The gains don't justify the costs, but one suspects that if you could somehow give an alien modern math theory without any of the name or symbol conventions, they would invent a much more systematic system than what we have grown.

You are talking about rationals and algebraic numbers, which are just a tiny subset of the real numbers. Your point about computable vs non-computable numbers is valid, but I think the author meant that you will never encounter a real number that is non-computable, since those are impossible to express in the first place. Real numbers can contain an infinite amount of information.

Actually there are non-computable numbers known that you can encounter. See https://en.wikipedia.org/wiki/Chaitin%27s_constant.

Yes, there's actually another distinction we can make: between incomputable numbers and indescribable numbers. The latter is very slippery, since it can lead to things like the Berry paradox https://en.wikipedia.org/wiki/Berry_paradox

For example, we would need some way to avoid expressions like "the smallest indescribable number", since that itself is a description ;)

A good book to read to get aquainted with these notions is "Meta Math" by Gregory Chaitin. The impatient can jump to chapter 5.


So, I think the author meant that a real number is actually a sequence of approximations towards an ideal. So 0.5 is the limit of infinitely many sequences, like 5, 3, 2, 1, 3/4, 1/4, 3/5, 2/5, 1/2, ...
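That is roughly how computable analysis represents reals. A hedged sketch (all names are mine): a real is a function that, given n, returns a rational within 1/2^n of the ideal value; 0.5 gets a constant sequence, sqrt(2) a Newton iteration:

```python
from fractions import Fraction

# A real as a function n -> rational within 1/2^n of the ideal value.
def half(n):
    # 0.5 is represented exactly by a constant sequence.
    return Fraction(1, 2)

def sqrt2(n):
    # Newton's iteration on x^2 = 2, refined until within 1/2^n.
    x = Fraction(3, 2)
    target = Fraction(1, 2) ** n
    while abs(x * x - 2) > target * target:
        x = (x + 2 / x) / 2
    return x

print(float(sqrt2(30)))
```

Arithmetic on such representations is what makes the edge cases mentioned upthread (like sqrt(8)^2) so tricky.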

Although technically you're right, one can reasonably assume that when talking about random numbers, people mean non-rational random numbers. Like sqrt(2), e, pi, ...

But none of those are really random either. The only sensible definition of randomness here is Kolmogorov complexity, and all of those numbers you just mentioned have incredibly low Kolmogorov complexity. I can write a tiny computer program that prints them out to billions of digits.
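For instance, here is such a tiny program (an illustrative sketch, not the commenter's actual code; `e_digits` is my name) that prints e to as many digits as you like, which is what low Kolmogorov complexity means in practice:

```python
def e_digits(n):
    """First n decimal digits of e after the point, from e = sum 1/k!,
    computed in fixed-point integer arithmetic with guard digits."""
    guard = 10
    scale = 10 ** (n + guard)
    total, term, k = 0, scale, 0
    while term:
        total += term          # add scale / k!
        k += 1
        term //= k
    return str(total // 10 ** guard)[1:]  # drop the leading "2"

print("2." + e_digits(1000))
```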

What the author is probably referring to is "non-computable numbers", which are numbers that cannot be computed to arbitrary precision by ANY Turing machine. But it's incredibly confusing and wrong to use the term "real numbers" to mean "non-computable numbers".

Okay, but 2 points:

1) Kolmogorov complexity is a measure of the computational resources needed to specify the object

Depending on what you mean by "specify the object" this may or may not be infinite resources. My vote goes to infinite. Because you either need infinite resources even to just express this number, or you need infinite resources to actually run it. I get that you can create a finite program that gives an approximation of the number, but that's not the same, and only approximations can be finite. The number itself, and its useful specifications, are all infinite. For the real number you need infinite resources to express it.

So by what measure exactly does sqrt(2) have finite complexity ?

In fact, you could use the old "every number occurs in Pi" trick and say that every other object is contained within it (and within every other real number), and since Kolmogorov complexity is a metric I have just proven that the Kolmogorov complexity of sqrt(2) is larger than any other Kolmogorov complexity. The only number that satisfies that condition is infinite.

2) Even if this applies to the 4 numbers I specified, or the few thousand "well known" real numbers, it doesn't apply to any real proportion of real numbers. I mean, I can compute one or two 3rd degree boundary differential equations straight in my head, but that is very different from saying I can do that in the general case. Hell, quantum computers can't do it in the general case. The same is true for specifying real numbers.

(and, more or less)

3) in reality, only approximations of sqrt(2) are computable. The real number itself isn't.

A normal number can be computable. It is not known if sqrt(2) is normal.

At no point was anybody talking about normal numbers. "Normal numbers" doesn't mean "numbers with random digits"; it has a much narrower meaning than that. There are non-computable numbers which are normal, and there are non-computable numbers which aren't. There are computable numbers that are normal that don't look random in the least: https://en.wikipedia.org/wiki/Champernowne_constant

The author's paper is about normal numbers (numbers that contain an infinite amount of information). If a normal number isn't considered to have random digits, then what notion would you use?

You've misunderstood the definition of normal numbers. A "normal number" (loosely defined) is a number where the digit expansion of the number has all possible substrings of digits uniformly distributed in the limit as it goes to infinity.

Several things to note:

1. Whether or not a number is normal has nothing to do with the "randomness" of its digits or the "amount of information" in the digits. Champernowne's constant, 0.123456789101112... isn't "random" at all and contains very little information, yet it is known to be normal in base 10.

2. A number can have totally super-random digits (as random as you want! it can even be non-computable! infinite information!) and not be normal at all. For instance, imagine you have a normal number that's as random as you want, and it starts like:


And then goes on forever, no discernible pattern. Say you construct a new number from this number, with the only difference being that you remove the digit 7. Literally, every place that a 7 appears in your original number, you just remove it. This number would no longer be a normal number, because all the sequences with the digit 7 in them would appear nowhere.

The number would still have "infinite information" in the sense of the author of this paper. It would still be "just as random", it would still have "no pattern". But it would not be a normal number anymore.

Whether or not a number is "normal" has nothing to do with the issues raised in this paper. "Normality" is a different criterion entirely. When the author talks about numbers with "infinite information", he's not talking about normal numbers, he's talking about computable numbers, which is an entirely different concept: https://en.wikipedia.org/wiki/Computable_number
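Point 1 can even be checked empirically. A sketch (the helper name is mine): a trivial loop generates Champernowne's digits, and their frequencies still drift toward uniform:

```python
from collections import Counter

def champernowne_prefix(k):
    """Digits of Champernowne's constant: concatenate 1, 2, ..., k."""
    return "".join(str(i) for i in range(1, k + 1))

digits = champernowne_prefix(10000)
counts = Counter(digits)
freqs = {d: counts[d] / len(digits) for d in "0123456789"}
# Every digit's frequency drifts toward 1/10 (slowly; '0' lags
# because it never appears as a leading digit).
print(sorted(freqs.items()))
```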

> Champernowne's constant, 0.123456789101112... isn't "random" at all and contains very little information, yet it is known to be normal in base 10.

Why is Champernowne's constant not random?

> The number would still have "infinite information" in the sense of the author of this paper. It would still be "just as random", it would still have "no pattern". But it would not be a normal number anymore.

That's not true. It would not be just as random. If the set you're sampling from includes a 7 (i.e. the set of digits which can be represented in any single place in the sequence), and you never see a 7 for an extremely long time, this is exceptionally good heuristic evidence that the number is not random. And if we know 7 never shows up, we also know that the number is not random, because we know it's not uniformly sampling from the set of base 10 digits.

I read through most of the paper we are all nominally talking about. He doesn't use normal numbers as you say. He is talking about computable numbers as you say. My apologies.

But a normal number has "random" digits in the sense that all finite sequences of digits (in a given base) occur uniformly in the expansion. What other notion of random can one meaningfully give for an expansion of a number's digits? Without getting into too much philosophy.

From [1]: "We call a real number b-normal if, qualitatively speaking, its base-b digits are 'truly random'."

My expertise is in commutative algebra so I'm outside of my comfort zone.

[1]: https://www.davidhbailey.com/dhbpapers/bcnormal.pdf
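As a quick empirical check of that definition (a sketch only; digit frequencies in a finite prefix merely approximate the limiting frequencies):

```python
from collections import Counter

# Concatenate 1, 2, 3, ..., 99999 to get a long prefix of Champernowne's constant.
prefix = ''.join(str(n) for n in range(1, 100_000))
freq = Counter(prefix)

# For a 10-normal number, each digit's limiting frequency is 1/10; the
# prefix already comes close (0 lags slightly, since it never leads a numeral).
for d in "0123456789":
    assert abs(freq[d] / len(prefix) - 0.1) < 0.03
```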

Something can be random and not have a uniform distribution. If I throw two dice, the sum will be random, but it will not be uniformly distributed. Uniform is just one distribution among many.
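A quick simulation of that point (random, yet clearly non-uniform):

```python
from collections import Counter
import random

rng = random.Random(42)
# Sum of two fair dice: perfectly random, but far from uniformly distributed.
counts = Counter(rng.randrange(1, 7) + rng.randrange(1, 7) for _ in range(100_000))

# 7 covers 6 of the 36 equally likely outcomes; 2 and 12 cover one each.
assert counts[7] > counts[2] and counts[7] > counts[12]
```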

The only definition of "randomness" in a sequence that holds any kind of water, philosophically or mathematically, is Kolmogorov complexity [0] (see specifically the section on "Kolmogorov randomness"). I don't know where you got the idea that "normalness" is the ultimate version of randomness, but it's not.

Forget the formal definition for a second, and just think of the intuitive notion of randomness: does the sequence 12345678.... look random to you? In random sequences there should be no patterns: do you see a pattern in this sequence? If you had a computer program with a random number generator that produced that sequence of digits, would you be happy with it? No, you wouldn't.

In the sense of Kolmogorov randomness, the sequence 1234567.... is obviously not random at all, since it's trivial to find a Turing machine that generates it. It matches up perfectly with our intuitive notion of what randomness is, and it quite correctly points out that the amount of information in the string is very low, even though it's infinitely long. That is my definition of randomness, and it's (more or less) the author of this paper's definition.
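To make that concrete: the entire infinite expansion of Champernowne's constant is produced by a constant-size program, which is exactly what low Kolmogorov complexity means. A sketch:

```python
from itertools import count, islice

def champernowne_digits():
    # 0.123456789101112...: concatenate the decimal numerals of 1, 2, 3, ...
    # A few fixed lines generating an unbounded expansion means the sequence's
    # Kolmogorov complexity is tiny, so it is not Kolmogorov-random.
    for n in count(1):
        yield from (int(c) for c in str(n))

first20 = ''.join(str(d) for d in islice(champernowne_digits(), 20))
print(first20)  # 12345678910111213141
```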

[0]: https://en.wikipedia.org/wiki/Kolmogorov_complexity

Mathematicians like to skip steps, generalize, and the like. "Randomness" for numbers is not really a well-defined mathematical property. What I like about the summary is that you can understand the gist of what he is trying to do and why.

> "Randomness" for numbers is not really a well-defined mathematical property.

Randomness is a well-defined property. We say that numbers are random, or "normal", if every finite string of digits appears in their expansion with the limiting frequency you'd expect from uniform sampling. The reason we call them "normal" and not "random" is because they are the norm: almost all real numbers are normal.

If we could read the coordinates of any physical object to infinite precision, we could only influence the higher digits by moving the object very carefully; beyond the digits which we have some influence over, the infinite tail of the real number's digits must be completely random.

I don’t know, 0.5 could still be random. Relevant XKCD: https://www.xkcd.com/221/ </sarcasm>

Since a finite amount of space - my stomach and associated crevices - can't contain an infinite amount of coffee, nor could the keyboard, I argue that the 200ml on the cup isn't relevant, and I can order more without deterministic effect, perhaps.

Clever. Nice. Very unexpected.

Terminology is rather artificial. I.e. mathematicians just had to come up with some names for various abstractions. So "real", "irrational", "imaginary", etc. aren't supposed to be taken literally.

Keep in mind, this is using terminology that comes from courses appropriate for advanced undergrad/beginning graduate level math courses.

"God (sic) created the integers, the rest, was left to people."

Can't remember.

"If it cannot be done with integers, it cannot be done"

Von Neumann

Letters in Sefer Yetzira.

What book is that?


Is anything real?

Wow. The author of this paper should be awarded the Ig Nobel Prize for both math and physics :-)

Not serious, just blah blah: no equations are given, and the author does not suggest any finite representation to replace the real numbers.

This kind of article follows the old philosophers' way: just words and arguments, nothing exact or observable that can be verified or falsified. Normally this was over when physics began: we need hard facts, experiments, and equations.
