Cantor function, a.k.a. devil's staircase (wikipedia.org)
96 points by memexy on June 11, 2020 | 52 comments



To be more precise, it has derivative zero almost everywhere, so the title is slightly misleading. To fully grasp this, the article links to topics from measure theory.


Ok, we'll move "increasing function with 0 derivative" to almost nowhere in the title above.


To be clear, this is the measure/probability theory definition of almost everywhere, which means that while there are exceptions, it's true 100% of the time.


> it's true 100% of the time.

I don't think this is a great way of expressing it really since it's not very clear what "100% of the time" means formally; to most people (myself included), saying "100% of the time" in the context of whether a property holds over a set is to say that the property holds for all elements of the set.

This is completely different to saying it holds almost everywhere, which is to say it holds on the complement of a measure zero set.


This is one of those things where a surprising fact isn't astounding, but just makes me think the definitions are badly chosen. After all -- of course if you pick definitions in weird ways, you get weird results.

In this case, yes, it is true almost-everywhere, meaning that the set of points where it is untrue has measure 0. And yet intuitively the function is increasing and so we ought to define "having a derivative" in such a way that shows that it is increasing. All this definition has done is crammed all of the increasing-ness into points that are vanishingly rare, so you can't find them. But they still exist, per _my_ definition of increasing!

I would instead choose to define "having a derivative != 0" in a different way, that captures the idea that, over any interval I choose, the function changes. Nevermind that the function becomes constant as I zoom in to a point. All that shows is that evaluating at points must not be a great way of modeling this.

Of course I'm not sure if that works exactly. But this is the reason I can't get enthusiastic about analysis: it seems like a bunch of silly definitions with silly consequences, with little relation to the actual mathematics of our universe that I care about.


The problem with this way of thinking is that "the actual mathematics of our universe that I care about" doesn't (yet?) exist. The definitions you find badly chosen have been thought about, hard, by many generations of very smart people. Some with the same basic reaction to all this you have. And yet, after all that, the definitions remain. And the alternatives proposed, at best are not compelling.

So why? Real & Complex analysis are by far the best way humans have yet come up with to model certain things, particularly modeling a large subset of physical interactions in the universe.

So to some degree we are stuck with this. Mostly it's the fact that fields like R and C are actually pretty strange objects. Strange in ways that are hard to get your head around, and that leak out into any mathematics that you build on top of it.

So what to do? There is beauty there as well. And although we have tricks to hide most of this, most of the time, it's not a bad thing really that when you look closely the weirdness is apparent.


You are mixing up definitions and names.

Do complex numbers have anything to do with complexity?


I'm not quite sure where this comment comes from - what I am saying has nothing to do with the names; I use the conventional names for the natural reasons. Perhaps you meant the potential ambiguity of terms?

I agree there are sometimes unfortunate consequences of transliterating mathematical terms (e.g. "field", "smooth", "increasing"), namely the potential for confusion with a lay audience (cf. the famous (true?) story of a senator objecting to the funding of "complex" analysis - was this your reference?). However, the alternative is worse, and peppers any conversation with intractable jargon.


But can you build a fruitful system of analysis from your different definitions that doesn't have any equally surprising facts once you dig any deeper? The proof would be in the pudding. It's not like the choice of definitions in mathematics is arbitrary, they stick because they turned out to be useful.


> And yet intuitively the function is increasing and so we ought to define "having a derivative" in such a way that shows that it is increasing.

The Cantor function is not differentiable at the points of "increase", so there is no problem with it growing at those points. I'm not sure what your complaint is.

> it seems like a bunch of silly definitions with silly consequences, with little relation to the actual mathematics of our universe that I care about.

That's honestly really silly. A lot of analysis is extremely practical, and is fundamental to the study of this universe. Yes, the language allows you to express non-physical situations, but you know you can just ignore those situations, right?


For most of my life I would have unthinkingly attacked any attacker of the traditional foundations of mathematics (including the standard definitions in Analysis, Measure Theory, etc). But after reading Michael Huemer's "Approaching Infinity"[1], I am now sympathetic with your skepticism.

A shockingly large amount of pure mathematics is asserted (with either no or little argument) as the best foundational groundwork for numbers, infinities, measures, etc. Digging a little deeper shows much of the dogma to be a dubious foundation for mathematics, leading to absurdities like Cantor's function (which 99.9% of math students marvel at, instead of more appropriately wondering "wait a minute -- maybe this means our foundations are messed up?").

[1] https://www.amazon.com/Approach...


Just because it is unintuitive doesn't make it absurd. It is still well defined and its behaviour is understood and can be described. It's one thing to think 'this is so messed up that maybe the foundations are messed up' and it's another to actually find a new set of foundations which don't produce any 'absurd' results at all but are still useful.

I haven't read your book specifically, but many people have thought about the problematic nature of infinity in analysis. There is a whole reformulation of analysis designed to fix that called nonstandard analysis [1]. I don't know much about it myself so can't comment on the content of these links, but it seems that Cantor's set still exists there [2] [3]

[1] https://en.wikipedia.org/wiki/Nonstandard_analysis [2] https://books.google.co.uk/books?id=hBHP5foeXCsC&pg=PA142&lp... [3] https://math.stackexchange.com/questions/2906390/what-does-a...


Robinson's non-standard analysis is also discussed (and rejected) in the book I linked to. My book argues (among other things) that while all of the constructions of traditional mathematics are logically possible (not very interesting), many of them are metaphysically impossible for any object to instantiate.

For example infinitesimal numbers make no sense, in the sense that there is no such thing (in any possible world) as a positive number that is smaller than every positive real number. Of course, you can definitionally augment `(R+, <)` with formal objects `u` s.t. `u < x` for any `x ∈ R+` (where `R+` designates the positive reals), and then close these under the field operations `(R, +, *)` (I assume this is how infinitesimals are constructed). But that doesn't mean you have written down a model of actual numbers (which include the so-called "infinitesimal" numbers), since there's no such things as infinitesimal numbers.

You could argue "well, maybe infinitesimals aren't actually numbers (and perhaps more strongly: the theory of infinitesimals fails to have any metaphysically possible model), but they are still useful to us". For example: calculus is surely useful to us, and calculus can be built over the "infinitesimals". I don't have anything against this. But many people studying non-standard analysis incorrectly think that because infinitesimal numbers are being formally studied, that must mean there are good arguments out there for their actual existence.

Another incorrect dogma from traditional mathematics has to do with infinities being treated as numbers. Infinities are not numbers, for there is no such thing as a number which is greater than every natural number. There are countable (and even uncountable) collections (that is perfectly coherent). But that doesn't mean infinity is literally a number. Yet in traditional mathematics, it is merely asserted that infinities are numbers (and that indeed there are an infinity of these infinities, that can be ordered by size).

Even the ordering relation is asserted without argument. For example, let `N` be the naturals and `P(N)` be its power set. In traditional mathematics, we say that `|N| < |P(N)|` since any injective function from `N` to `P(N)` fails to be surjective. Sure, you can define `<` to satisfy this. But that doesn't mean, in reality, the "less than" relation does satisfy this! It is merely asserted, without argument, that `|N| < |P(N)|`, when it would seem to most smart, reflective people (before they are corrupted by years of traditional mathematics training) that `|N|` and `|P(N)|` aren't orderable in the first place, since they are both infinite collections (the former being countable and the latter being uncountable), and infinite collections can't be ordered by their size (in the "less than", "greater than" sense of ordering we use with actual numbers). Of course you can define an ordering on infinite collections using the surjective relation, but that doesn't mean it's mapping onto the actual "numbers ordered by size" relation that we had in mind before we started studying infinite collections. Perhaps this is why beginning students studying infinities in analysis find it so mind-boggling: they're being taught a bunch of forced definitions/constructions which don't actually map onto any real (or even metaphysically possible) models.

When you take the traditional mathematics route literally, you run into all sorts of absurdities. For example, if you treat ∞ as a literal number, you have to patch up the field operations on it. You have to start saying: "∞ + 1 = ∞", which implies it's not the case that "∞ < ∞ + 1". You can find several other examples in that book I linked to.

The original Cantor's function (a function that is continuous everywhere, but has zero derivative almost everywhere) that OP linked to strikes me as something that fails to have a real model. I don't doubt that such a function is logically possible, as constructed. But it likely means our definitions of "continuous", "derivative", "function" (or other such mathematical constructions) are failing to map onto the abstract objects we set out to when we started studying them in the first place.

Sort of like how, in probability theory, you can have events which are possible but nevertheless have probability zero (classic example: throwing a dart at the real number line means that the probability it will land on any particular real number is zero). It's totally logically coherent to set things up this way, but that it is logically possible for this to happen means our foundations/definitions/models are all screwed up (though, I admit, still pretty useful nevertheless).


Thanks for the detailed response. This critique of maths just doesn't resonate with me at all, though.

>that it is logically possible for this to happen means our foundations/definitions/models are all screwed up

Why? Simply because it feels unintuitive to you? Where is the useful alternative proposed by this theory? Is there any sound definition of what is 'metaphysically real' and why there is any reason to dismiss things which are not?

For example consider FLT (Fermat's Last Theorem) - the proof surely uses fields and concepts with countless 'absurdities' inside them. And yet it proved something quite real and tangible. Consider that there may be no alternative proof of FLT. In that case, doesn't that validate those absurd concepts and give them a reality of their own that doesn't require any 'metaphysical' form? They have revealed a truth about the physical natural numbers we can all understand. To me that gives them a very compelling 'realness'.

Of course it could be that you can throw out all the absurd definitions and still build a useful enough system to prove such things - I am sure people have tried. Maybe the point is that not enough people have tried?


Not being facetious: I suggest you read the book. It discusses precisely these objections.

I should say also the only reason I read this book was because I liked the author's other books which have nothing to do with mathematics. Before reading it I thought "man who gives a shit about philosophy of mathematics? can we just do some real math please?", but this book changed my mind about that.

Also: I agree that traditional mathematics is useful. I trust that FLT is true (not to mention it's an awesome achievement). And I even trust that proofs which make heavy use of concepts I find philosophically dubious will almost always turn out to still hold when placed under the "right" sort of foundations.

There are some exceptions to this though. For example, I absolutely don't trust stuff like this: https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox

By "don't trust", I mean I seriously doubt there is anything at all out there in the universe (or in the metaphysically possible universe) that behaves like this. Instead, it's just a logical result which leans heavily on some problematic mathematical assumptions which were never actually justified. It should be taken as a reductio ad absurdem that something went wrong with the inputs (or at least evidence as such). 100+ years ago, people spent time wrestling with the foundations of math, but now it's looked down upon as a waste of time/not "serious" math, etc.


> My book argues (among other things) that while all of the constructions of traditional mathematics are logically possible (not very interesting), many of them are metaphysically impossible for any object to instantiate.

How is that even remotely relevant?


Relevant to what? Relevance to reality must be based on whether properties could ever exist in reality. Logically following from other ideas doesn't matter.


Are they messed up though? There are many variants of integration.

There's as many variants of a derivative.

There are also many kinds of measure.

They all have limitations. Usually, the major limitation is handling sparse fields, because most integrals and derivatives do not hail from discrete mathematics.

Cantor's devil's staircase is exactly one of those things that bridge discrete mathematics, (mostly discrete) number theory and (mostly continuous) analysis.


There are a couple constructions like this in mathematics that... really make you think. Like Banach Tarski doesn’t make any sense.

Or the one that still bothers me, ten years after I learned it: take an enumeration of the rationals and take the union of balls with radius epsilon/2^n over this enumeration. You can create dense open subsets of the real numbers that miss basically every real number, despite containing an open interval around every rational. The conclusion I’ve come to is the real numbers don’t make any sense.
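
For what it's worth, the "misses basically every real number" part is just the usual covering bound (a quick sketch, writing λ for Lebesgue measure and q_1, q_2, ... for the enumeration of the rationals):

    \lambda\Bigl(\bigcup_{n \ge 1} B(q_n, \varepsilon/2^n)\Bigr)
      \;\le\; \sum_{n \ge 1} \frac{2\varepsilon}{2^n} \;=\; 2\varepsilon

So the union is open and dense but has measure at most 2ε, and its complement carries almost all of any interval (once ε is small), even though that complement contains no interval at all.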


> The conclusion I’ve come to is the real numbers don’t make any sense.

Oh they definitely don't.

Almost all real numbers are uncomputable, which means they cannot be written down or communicated in any way.


Real numbers aren't real.

Imaginary numbers aren't imaginary.

Infinities are unphysical, so you can't understand them by analogy to anything that exists in the real Universe.


I think you can derive enough real analysis to define the Cantor function using only ZFC. You can read the axioms in [1], but they are very simple and “obvious” so to speak. What’s more interesting is where ZFC is known to be incomplete. Cantor’s continuum hypothesis is known to be unprovable in ZFC, which is fun to think about.

[1] https://en.m.wikipedia.org/wiki/Zermelo–Fraenkel_set_theory


> This is one of those things where a surprising fact isn't astounding, but just makes me think the definitions are badly chosen. After all -- of course if you pick definitions in weird ways, you get weird results.

I agree that it's not astounding. I think it's surprising because the definitions are well chosen, and intuitive in most cases, but this mathematical object just falls through the cracks.


I can sympathise with you on that. The whole of pure mathematics seems to be about making definitions and seeing what weird places and surprising connections you get. This isn't as exciting as seeing how mathematics can describe and predict phenomena in the real world.


Analysis is all about explaining calculus, one of the most useful and practical real-world mathematical fields. So you should find it pretty exciting.


Almost none of pure mathematics is like that.

But the normies obsess over the pathological cases, often because they are described in misleading mathematically incorrect terms (such as Banach Tarski).

It's kind of similar to law, where people love misreading judicial opinions to "prove" that law is crazy.


That sounds like something taken from the book: Burn Math Class.

There is always f(b) > f(a) <=> f increases in [a, b] or f(b) /= f(a) <=> f changes in [a,b].

But how useful is that?


>it's true 100% of the time.

lol that's not what it means at all. almost sure/almost everywhere/complement has measure 0 means exactly what it says: the exceptional set can be covered, for any ε > 0, by countably many intervals of total length less than ε. that's it and no more. no one ever says something like "a normally distributed variable is nonzero 100% of the time" even though { x = 0 } has measure zero.


I'd say the title is correct almost surely.


Reminds me of something my complex analysis prof said with respect to some constant that it was unknown whether it was rational or irrational: “There’s no reason to think it’s rational, and statistically, the probability of any real number being rational is 0.”


He could have even said that that constant was almost certainly transcendental. It always surprises me that almost all reals are transcendental, but that there are only 2 transcendental real numbers I am familiar with: pi and e.


That's because almost all transcendentals are not computable or nameable, whereas all algebraics are. So when you limit to the space of nameable computable numbers you get a different statistical intuition.


I forget the constant, but as I recall, that was likely true as well.


Another fun one is the slippery devil's staircase, Minkowski's question-mark function. It's strictly increasing but the derivative is 0 at rational numbers.

https://en.wikipedia.org/wiki/Minkowski%27s_question-mark_fu...
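
For anyone who wants to poke at it, here's a minimal Python sketch (the helper name `question_mark` and the use of exact `Fraction` arithmetic are my own choices, not anything from the linked page) that evaluates ? at rationals via the standard continued-fraction formula ?([0; a1, a2, ...]) = 2 * sum_k (-1)^(k+1) / 2^(a1+...+ak):

    from fractions import Fraction

    def question_mark(x: Fraction) -> Fraction:
        """Minkowski's ?-function at a rational x in [0, 1], computed exactly."""
        # continued-fraction digits [a0; a1, a2, ...] of x (a0 = 0 on [0, 1))
        digits, p, q = [], x.numerator, x.denominator
        while q:
            digits.append(p // q)
            p, q = q, p % q
        # alternating series over the running sums of the digits after a0
        total, expo, sign = Fraction(digits[0]), 0, 1
        for a in digits[1:]:
            expo += a
            total += sign * Fraction(2, 2 ** expo)
            sign = -sign
        return total

    print(question_mark(Fraction(1, 3)))  # 1/4
    print(question_mark(Fraction(2, 5)))  # 3/8

Rationals have terminating continued fractions, so the sum is finite and exact; those are exactly the points where the derivative (where it exists) vanishes.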


It's also fun to define the Cantor function as the CDF of a distribution. The distribution has neither a pdf nor a pmf (it is singular).

https://en.wikipedia.org/wiki/Cantor_distribution
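
A quick way to get a feel for that: sample from it by choosing every ternary digit to be 0 or 2 with a fair coin. A small Python sketch (my own illustration, not from the linked page; the truncation depth is arbitrary):

    import random

    def sample_cantor(depth=40):
        """One draw from the Cantor distribution, truncated to `depth` ternary digits."""
        return sum(random.choice((0, 2)) / 3 ** n for n in range(1, depth + 1))

    samples = [sample_cantor() for _ in range(100_000)]
    # no sample ever lands in the removed middle third (1/3, 2/3)
    print(sum(1 / 3 < s < 2 / 3 for s in samples))  # 0

No mass ever lands in a removed interval, so there is no density; yet every single point also has probability 0, so there is no pmf either. The CDF of this distribution is exactly the Cantor function.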


My favorite function is the Devil's Staircase found by plotting the map-winding number of a circle map, which has steps at every rational number: https://en.wikipedia.org/wiki/Arnold_tongue .


Thanks. I hadn't heard of Arnold tongue. That page has really nice illustrations.


Is there any practical application or real world implication of this? I’m having trouble understanding why this is interesting. I don’t have much formal math background, so tying this to something non-abstract would be helpful to appreciate it.


Not sure about that specific function. But the circle of ideas to which this function belongs can be loosely described as "how do we deal with non-smooth objects?" How do we construct them, measure them and use them? Some such objects are sometimes called fractals; this function is a simple example.

There are concrete application examples, but in general one guy that was particularly interested in fractals in the real world was Mandelbrot. He published several influential books on fractals in nature (and in finance). Another famous name is Taleb, although imho he's much more a populariser than a researcher.


I posted it because someone mentioned measure theory and I remembered this example from a graduate course in real analysis. Cantor's function shows there are functions that are not the integrals of their derivatives. The Cantor function goes from 0 to 1, it's increasing and continuous, but its derivative is 0 almost everywhere. Integrating the 0 function gives us back the 0 function so it's a kind of counter-example to the fundamental theorem of calculus.

> Georg Cantor (1884) introduced the Cantor function and mentioned that Scheeffer pointed out that it was a counterexample to an extension of the fundamental theorem of calculus claimed by Harnack.

I don't know if it has any real world applications other than as a reminder to be careful about assumptions and definitions. Most people think that increasing/monotone functions must have positive derivatives but the Cantor function shows this is not the case.
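
If it helps to see it concretely, here is a small Python sketch (my own illustration of the usual ternary-digit construction, not code from the article) of the Cantor function:

    def cantor(x, depth=60):
        """Approximate Cantor function on [0, 1]: read x's ternary digits,
        stop at the first 1 (x sits in a removed middle third, where the
        function is flat), and reinterpret digits 0/2 as binary 0/1."""
        if x >= 1:
            return 1.0
        value, scale = 0.0, 0.5
        for _ in range(depth):
            x *= 3
            digit = int(x)
            x -= digit
            if digit == 1:
                return value + scale   # constant across this middle third
            value += scale * (digit // 2)
            scale /= 2
        return value

    print(cantor(0.4), cantor(0.5), cantor(0.6))  # all 0.5: flat on (1/3, 2/3)
    print(cantor(0.25))                           # ~0.3333...
    print(cantor(0.0), cantor(1.0))               # 0.0 and 1.0

All of the climb from 0 to 1 happens on the Cantor set itself, a set of total length zero, which is why integrating the almost-everywhere derivative 0 gives back nothing.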


> Integrating the 0 function gives us back the 0 function so it's a kind of counter-example to the fundamental theorem of calculus.

I guess that is one way to think about it, but if that is the case you can just differentiate a constant function and the fundamental theorem of calculus won't return it to you (if you don't like that example, just take the Heaviside function). In finite-dimensional linear spaces, the derivative operator has a one-dimensional kernel (the constants), but the integral does not. From that perspective, the derivative destroys information (specifically it destroys constants), the integral operator cannot restore it automatically, and the whole "+ C" is meant to identify an equivalence class for the indefinite anti-derivative.
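
To spell out the Heaviside example in the same terms (just the standard computation, writing H for the unit step at 0): H is differentiable except at a single point, with derivative 0 wherever the derivative exists, so

    \int_{-1}^{1} H'(t)\,dt \;=\; \int_{-1}^{1} 0 \,dt \;=\; 0 \;\ne\; 1 \;=\; H(1) - H(-1)

The jump carries all of the change and is invisible to the pointwise derivative; the Cantor function does the same thing, except the "jump" is smeared continuously across a measure-zero set.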


> Cantor's function shows there are functions that are not the integrals of their derivatives.

That's because Cantor's function does not have a derivative that's defined everywhere. Of course a function without derivative is not the integral of that non-existent derivative.

If most people think that increasing/monotone functions must have positive derivatives, they're apparently forgetting about non-differentiable functions.


The function is differentiable almost everywhere (this is a technical term from measure theory). Here's a pretty good explanation of its properties: https://epicmath.org/2012/11/27/5-the-cantor-set-and-the-can....

> The Cantor function, also known as the Cantor Staircase, is a bizarre function that is continuous and has a derivative of 0 at every point where it is differentiable. In fact, it is differentiable at every point other than on the Cantor set, which is a set of measure zero.

So from a measure theoretic perspective the derivative of the Cantor function is well defined and it is equal to 0 almost everywhere. Almost everywhere equality is an equivalence relation and the derivative of the Cantor function is in the same equivalence class as the 0 function.


> Almost everywhere equality is an equivalence relation and the derivative of the Cantor function is in the same equivalence class as the 0 function.

And a function in that equivalence class will have an integral that is equal to a constant function on all intervals where that function is defined. If you also integrate over intervals where the function is only defined almost everywhere, all bets are off.


It has no real world implication because it relies on infinitely detailed objects. No real world approximation will share its properties.


Some reference is Exercise 84 (Cantor function) in Tao's lecture notes at https://terrytao.wordpress.com/2010/10/16/245a-notes-5-diffe...

The results before/after that result have other niceties: a one-sided fundamental theorem of calculus for monotone functions (even though the Cantor function has 0 derivative!) or the fact that monotone functions can also be decomposed as the sum of a jump function and a continuous monotone function such as Cantor's function.


Nice. I'm reminded of the Weierstrass function, which is continuous everywhere but differentiable nowhere.

https://en.wikipedia.org/wiki/Weierstrass_function


Yup. It's one of the first counter-examples I learned about in undergrad analysis class. What's amazing about these things is that many extremely smart mathematicians at the time thought this was impossible, i.e. that continuous functions had to be at least differentiable somewhere.


See also: Weierstrass function [0], continuous everywhere but differentiable nowhere.

[0] https://en.wikipedia.org/wiki/Weierstrass_function


People should read Cantor's bio page too.

He was treated horrifically by the mathematics establishment. There is some justice in learning his history all these years later.


This function is closely related to the Cantor ternary set, which is uncountable but has measure zero

https://en.wikipedia.org/wiki/Cantor_set

This and other wacky analysis things is what inspired me to study pure math in undergrad.


Not just related—the staircase is built by, in essence, putting stairs on the ternary set. There's enough Cantor set to get from 0 to 1, but not enough to disturb the almost-everywhere vanishing of the derivative.



