Hacker News
What the Tortoise Said to Achilles (1895) (ditext.com)
49 points by brett on Sept 27, 2014 | 35 comments



The Wikipedia page is worth reading: https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achi...

Particularly this quote:

The Wittgensteinian philosopher Peter Winch discussed the paradox in The Idea of a Social Science and its Relation to Philosophy (1958), where he argued that the paradox showed that "the actual process of drawing an inference, which is after all at the heart of logic, is something which cannot be represented as a logical formula ... Learning to infer is not just a matter of being taught about explicit logical relations between propositions; it is learning to do something" (p. 57). Winch goes on to suggest that the moral of the dialogue is a particular case of a general lesson, to the effect that the proper application of rules governing a form of human activity cannot itself be summed up with a set of further rules, and so that "a form of human activity can never be summed up in a set of explicit precepts" (p. 53).


Being led down the rabbit hole is one of the reasons why I love Wikipedia.

I had searched 'What the Tortoise Said to Achilles' on Google, and ended up reading about the arrow paradox's rebuttals, which were really interesting.

But more to the point of the original article, it shows that there are definitely gray areas within morality, and it's impossible to use boolean logic to try to categorize humans.


Is the point here that the very structure of syllogism itself can be denied? That however inexorable a "If A and B, then C" argument is, someone else could always argue that it's not quite valid yet? It kind of reminds me of the point made (turgidly, but still) by Yudkowsky in The Simple Truth[1] - sometimes you just have to throw up your hands and declare the counter-arguments specious.

[1] http://yudkowsky.net/rational/the-simple-truth


Reading this story in Hofstadter's GEB destroyed my ability to accept mathematical proofs as "proven". I just don't find them convincing; they feel more like the use of authorised forms of argument within an artificially stylised tradition (like English Literature). And I wonder if alien mathematics will reveal our mathematics as embarrassingly parochial - and not the universal common ground usually assumed.

So, instead of proof, I have to fall back on intuition and working code, with their severe limitations.

However... studying mathematical proof has at times informed and grown my intuition, by revealing new ways to see a problem and new (bizarre and unintuitive) ways to decompose it.

I might have been better off never having seen this story.


Why would you be better off? I don't see any advantage in having the overly optimistic belief that there's some universal, correct set of axioms.

You can still do all the math you could do before -- and if Carroll or GEB gets you more interested in the fundamentals of math, you can do even more.

Yes, you have to accept some basis of mathematics, and you now understand that some true things will be unprovable in the basis you just accepted. But that doesn't stop you from proving things.

I think you might have just transferred your optimism about math to code instead. How do you know your programming language is doing what you asked it to? That you asked it to do the right thing at all? That the compiled code has the correct behavior? That your hardware works as advertised and is not failing at the moment? In both code and math, you have to accept some abstractions that you're not going to worry about, but the things you do with math are certainly more verifiable.


Better off, because I could have learnt the conventional axioms as a skill, like the perceptual and motor skills comprising reading, writing, arithmetic - and only later questioned them.

My "optimism" is more for my intuition; working code is experimental confirmation.

It's easier to be optimistic about code than proofs. Firstly, working code only needs to work in the specific cases you're using (that you test for); but a proof must work in every possible case. Thus, working code is simpler and easier to check, because it's aiming at less. My code almost always confirms my intuition.

Yes, it's also helpful to have the automatic, mechanical check of executable code; and as you say, this relies on compilers, OSes, silicon, hardware. (Though, anecdotally, I have noticed subtle problems that I eventually diagnosed and confirmed to be compiler and hardware bugs.) BTW, yes I have tried Coq (a proof assistant; somewhat mechanical proof checking), but simple ideas become very complex to prove, and the problem of bugs in Coq itself etc. is of greater concern, for the next reason:

Secondly, and relatedly, is that the standard is much lower for code. It just needs to work. Whereas a mathematical proof is supposed to be absolutely true. In other words, I don't ask as much from code. If there turns out to be a bug, it's just learning more about the problem; about the world. It's an engineering flaw. But if my proof is wrong, the game is lost.

An argument against my intuition is probably more telling. Though my faith in it has turned out to be justified many many times, I certainly can be wrong. My only real excuse is that, as a human being, I have nothing else to fall back on but my sense of reality and reason. That's my hardware; if it's wrong, I really am lost. So I might as well trust it. Fortunately, it's almost always right; probably because I try to see things from many angles and check them in many ways before my intuitive sense is fully formed.


> I think you might have just transferred your optimism about math to code instead.

To add to your point, if someone begins to doubt the utility of mathematical logic and responds by refocusing his attention from logic to code, he's somehow overlooking the fact that code is built on a foundation of mathematical logic.


Wait wait wait, proofs are still possible! The lesson of Gödel's theorem is that you can have consistency or completeness, but not both within the same formal system (once the system is strong enough to express arithmetic). So you can have a system that yields only true statements, it just won't be able to encompass all true statements; or you can have a system that does encompass all true statements, but from which it is impossible to exclude some falsehoods.


> Reading this story in Hofstadter's GEB destroyed my ability to accept mathematical proofs as "proven".

That's too bad, because the anecdote doesn't challenge the basis for mathematical proofs or logical reasoning; in fact it requires it as a precondition for the anecdote to move forward. Remember that Gödel's incompleteness theorems don't argue that there are no true statements, only that some true statements cannot be proven within a given formal system.

> So, instead of proof, I have to fall back on intuition ...

You might be better off reviewing the structure of logic and mathematical proof. Start here:

http://en.wikipedia.org/wiki/Euclid's_theorem

My reasoning is that, if there's one proof sufficiently transparent to win acceptance from a skeptic of logic, then there might be two ... ad infinitum.


In my humble opinion, proofs by contradiction are not the best examples of mathematical reasoning to be presented to the uninitiated. It has been my experience that people untrained in mathematics find it difficult to (intuitively) accept them as valid.


Euclid's proof isn't a proof by contradiction. As the wikipedia page says:

>"Euclid is often erroneously reported to have proved this result by contradiction"

It simply says that if you are constructing a list of primes, you can always add one more to the list, therefore there are infinitely many.


The overall structure of the proof is not by contradiction, but one of the steps is. The Wikipedia article calls this out, right after the sentence you quoted.


Also, even though it's correct that Euclid didn't pose it as a proof by contradiction, it can certainly be posed that way, and I often use that form when presenting it to nonmathematicians.


Not everyone thinks the same way. There are certainly lay persons out there who do not find proofs by contradiction jarring when they come across them for the first time. I remember I was one of them.

But the impression I get from my (possibly biased) sample is that most non-trained people intuitively see proof by contradiction (or any form of nonconstructive proof, really) as a way of "cheating", because it asserts something does or does not exist without actually producing a (counter)example. YMMV, of course.


Thanks for the link; I now feel a little more convinced by Euclid's Theorem than last time I looked at it.

Though I still don't feel fully convinced by it; I don't fully see it. It's entirely possible my obstacle is not so much my skepticism as my stupidity :-)


I'd be interested in hearing why you're not fully convinced.

There seem to be 2 parts to the proof. If you have a list of primes:

    You can generate another number from that list
    You can always get a prime from that number to add to the list
I'm guessing it's the second part that isn't clicking with you, but perhaps I'm wrong.
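Those two steps can be sketched in a few lines of Python (my own illustration, not from the article; `smallest_prime_factor` is a trial-division helper I'm assuming for simplicity):

```python
# Step 1: multiply the listed primes and add 1 to get a new number.
# Step 2: any prime factor of that number is a prime not on the list
# (dividing by a listed prime always leaves remainder 1).
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def next_prime(primes):
    """Given any finite list of primes, produce a prime not in it."""
    candidate = 1
    for p in primes:
        candidate *= p
    candidate += 1
    q = smallest_prime_factor(candidate)
    assert q not in primes
    return q

# next_prime([2, 5]) gives 11; next_prime([5, 7]) gives 2.
```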

As for 'stupidity', I wouldn't worry about it. The only people I've ever had call me a moron or question my intelligence in any way have always been people who were less intelligent than I am. And that's not because I'm a genius ;-)


I can follow the steps, but not see it. Like turn-by-turn directions, but no map. Perhaps also because I couldn't come up with it on my own - I don't see the family of which it is an instance (partly, this is the magic open-endedness of mathematics, it's not predictable).

But I'm seeing more: start with some primes. They needn't be consecutive or ordered, just some primes. Any old primes will do. eg 2 and 5 are OK (skipping 3).

Now multiply them all to get p. Obviously, p is divisible by all the primes we started with, because we just created it by multiplying them. eg 2 * 5 = 10

Note that p will generally be quite a bit bigger than the primes. Typically, you'll have the primes bunched up near the left of the number line, perhaps with some primes skipped between them, then a big gap to p, and continuing to infinity on the right.

Now we add one to p. This is just to the right of p on the number line. This p+1 is either prime or it isn't.

1. If it's prime, then there is a prime other than the ones we started with. eg 10 + 1 = 11

2. If it's not prime, it has divisors. This proof claims it must include divisors that are prime but are not among those we started with. <-- THIS IS THE BIT I DON'T GET

You can keep doing this, including that new prime (ie either p+1 itself or a prime divisor of it), showing there are infinitely many primes.

So, yes, it's the second part.


> 2. If it's not prime, it has divisors. This proof claims it must include divisors that are prime but are not among those we started with. <-- THIS IS THE BIT I DON'T GET

This step of the proof invokes the Fundamental Theorem of Arithmetic [0]: the statement that every integer greater than 1 can be expressed as a unique product of primes (uniqueness isn't the important bit here, just the fact that such a factorization exists).

So you need to accept the Fundamental Theorem of Arithmetic as true before you can fully understand this proof.

[0] http://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmet...
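The existence half of the theorem is the constructive bit: repeatedly dividing out the smallest remaining divisor (which must be prime) always terminates with an all-prime list. A minimal Python sketch of my own, for illustration:

```python
# Existence half of the Fundamental Theorem of Arithmetic: the smallest
# divisor >= 2 of any n > 1 is necessarily prime, so dividing it out
# repeatedly yields a full prime factorization.
def prime_factorization(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

# prime_factorization(36) gives [2, 2, 3, 3].
```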


To expand a bit further: this is an example of the "rabbit-hole" nature of mathematics. All but the most trivial theorems depend on previous results, and in most cases you cannot realistically follow all the dependencies until you get to the first principles, also known as axioms (and even then, there's the question of which axioms you are willing to accept!)

In order to be able to understand and appreciate mathematical proofs, you have to develop the ability to accept the truth of a result - and to realize the consequences of it being true - even though you do not yet understand why it is true. You have to learn to accept the ensuing confusion as a natural state of mind (read [0] if this idea intrigues you).

[0] http://j2kun.svbtle.com/mathematicians-are-chronically-lost-...


> In order to be able to understand and appreciate mathematical proofs, you have to develop the ability to accept the truth of a result - and to realize the consequences of it being true - even though you do not yet understand why it is true.

I have to disagree. A trained mathematician understands why a proof is or is not valid, based on a combination of axioms and a logical sequence predicated on axioms, but with no gaps or overlooked assumptions.

Without knowing and explaining why, it would not be possible to write a proof that would pass muster with other mathematicians, people who by instinct and training refuse to accept fuzzy explanations.

This is why Gödel's Incompleteness Theorems came as such a shock, at a time when many people expected to be able to systematize all of mathematics and predicate it on a handful of unassailable logical principles (as Russell and Whitehead attempted to do in the early 20th century).

The Incompleteness Theorems show the degree to which mathematicians expect to know why something is true, and if they cannot, why not.


> I have to disagree. A trained mathematician understands why a proof is or is not valid, based on a combination of axioms and a logical sequence predicated on axioms, but with no gaps or overlooked assumptions.

First off, to dispel any misunderstanding, I was talking about the process of becoming a mathematician, which you necessarily go through before you can call yourself one.

But even for full-fledged mathematicians, what I'm saying is true to an extent. For instance, although ZFC set theory is usually accepted as the basis for all mathematics, most mathematicians (precisely: those who do not study formal logics or other areas of metamathematics) do not state their results in terms of set theory, but instead write informal proofs that appeal to other established results in their field of study.

That is absolutely not the same as saying that their thinking is fuzzy or sloppy. The fact that I personally do not state (or, even, understand) all the details behind an established theorem X has no bearing on the validity of my proof for a theorem Y that hinges on X being true.

> The Incompleteness Theorems show the degree to which mathematicians expect to know why something is true, and if they cannot, why not.

This is a separate matter. Whether in theory it is possible for you to ascertain whether X is true or not, and whether you fully understand (down to first principles) why X is true before you use it as a stepping stone for other results are different issues.


EDIT I can see that the divisor that must exist cannot be one of the given primes: take any one of them. Since it divides p, the next multiple of it after p is p plus that prime itself, which is greater than p+1. Therefore it doesn't divide p+1. The same argument excludes all the other initial primes.

So this means: it has a divisor not in the initial primes (actually, I think it must have two). But why should it be prime?

I think a given divisor does not need to be prime; but it must not be divisible by an initial prime. I guess this means that either it itself is prime, or it has divisors which in turn are either prime or have divisors etc. None of these divisors are an initial prime, because then they would also be divisors of p+1, which we have established they are not.

So I guess that's the proof... but I don't feel sure of it. There are too many steps, and I'm not 100% sure of them, and can't see the whole. Perhaps I've not covered some possibility in some step - how could I be sure I've covered them all? Maybe as it becomes more familiar, I will come to see it.


Remember the role of axioms, which another poster has explained in a different way. The issue in question (not itself an axiom but one that requires acceptance of axioms) is whether each composite (i.e. non-prime) is uniquely composed of primes.

To prove this for yourself, try assembling a composite number out of non-prime factors. Then, to make sure of your result, decompose your factors into the primes from which they were composed. Finally, restate your factorization by replacing your factors with the primes that compose them.

Example: the composite number 32 is normally factored as 2^5, i.e. a product of five copies of the prime number 2. Let's say I want to falsify the idea that all positive integers are either prime or uniquely composed of primes, so I instead compose 32 using the nonprime factors 8 and 4.

Then I factor 8 and 4, and discover that their prime factors are also factors of 32 --

8 = 2^3

4 = 2^2

32 = 2^5

-- so the example bears out the original thesis: all positive integers greater than 1 are either themselves prime or are uniquely composed of primes.

The idea I am trying to convey is that the original claim doesn't mean one cannot assemble a composite out of non-primes, only that the composite number is also representable by a unique prime factorization.

More depth here: http://en.wikipedia.org/wiki/Prime_factor
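The example can also be checked mechanically. A small Python sketch of my own (the helper name is invented): compose 32 from the non-prime factors 8 and 4, factor everything down to primes, and compare the multisets.

```python
from collections import Counter

def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Composing 32 from the non-prime factors 8 and 4...
assert 8 * 4 == 32
# ...still decomposes to the same multiset of primes as 32 itself:
assert Counter(prime_factors(8) + prime_factors(4)) == Counter(prime_factors(32))
```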


> "So this means: it has a divisor not in the initial primes (actually, I think it must have two). But why should it be prime?"

Ok, this is the heart of the issue. If the new number is a prime, all is well and good, but if the number isn't prime, why should its divisors be?

The simple answer is: they don't have to be. If you have divisors that aren't prime, then keep dividing till you hit some that are. The definition of a prime number is one that can only be divided by 1 and itself, so for any non-prime number, you must be able to keep finding factors until they're all prime!

Let's take an example. Our list of primes is {5,7} which are nice small numbers to use. By following the rule of "multiply and add 1" we get:

    5 * 7 + 1 = 36.
Ok, so let's break 36 down. We get:

    36 = 2 * 18
Right, well, 2 isn't on our list, but let's face it: 2 isn't a real prime. None of the other primes like it. It's even. Nor is 18 on our list, but that's not prime (and that was your objection before), so let's break 18 down.

    36 = 2 * 2 * 9
Well, that's a bit better. We have another unpopular 2, but we also got a 9, and even though 9 isn't prime, it's probably primier than 2 is. Continue on:

    36 = 2 * 2 * 3 * 3
There we go. Now we actually have a proper prime number, "3", that we can add to our list.

And you see (I hope) that none of these numbers could possibly be on our original list, because all the numbers already on that list give a remainder of "1" when we divide "36". Yet we must, inevitably, hit a prime number because we just keep dividing till we do!
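The walkthrough can be replayed in a few lines of Python (my own sketch; the factoring helper is for illustration):

```python
# Replaying the example: 5 * 7 + 1 = 36, whose prime factors all divide 36
# and therefore can't be 5 or 7 (those leave remainder 1).
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

primes = [5, 7]
candidate = 5 * 7 + 1               # 36
for q in prime_factors(candidate):  # [2, 2, 3, 3]
    assert candidate % q == 0       # each really divides 36
    assert q not in primes          # and none is on the original list
```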


> Right, well, 2 isn't on our list, but let's face it: 2 isn't a real prime.

I can't tell whether you're taking this position or ridiculing it, but if 2 isn't accepted as a prime number, this would falsify the Fundamental Theorem of Arithmetic for all even numbers.

http://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmet...

Quote: "In number theory, the fundamental theorem of arithmetic, also called the unique factorization theorem or the unique-prime-factorization theorem, states that every integer greater than 1 either is prime itself or is the product of prime numbers ..."

If your purpose was satire, then perhaps this post will inform other readers who may not detect your satirical intent.


As you suspected, it was intended as a joke.

The other primes do, in fact, quite like 2, and 9 is not in any way primier than 2 despite my previous assertion.

EDIT: Actually, there was a real point to ignoring 2, and it was just so I could carry on breaking down the factors as that seemed to be the bit that hyp0 was having trouble with. Perhaps it did confuse more than enlighten. I'm not sure.


Yes, that's what I meant by my third paragraph (the one following the question you quoted, answering it).


Ok, so do you see now why the prime divisors of 18 (in that particular case) can't be on the original list? Otherwise they would also divide 36 (and we know they leave remainder 1). And therefore we must hit some primes that aren't on the original list (in fact, none of the primes we hit can be on the original list).

In short, do you feel comfortable with the proof now?


Don't you think that's covered by the following?

> [...] divisors which in turn are either prime or have divisors etc. None of these divisors are an initial prime, because then they would also be divisors of p+1

Let me put it this way: I think I'd score 100% on an exam on this, but I don't "get" it.

The divisors of 18 are also divisors of 36, because 36 = 18 * 2. Could use a theorem here, that a divisor of a divisor (of x) is also a divisor (of x).

A way to put it: p+1 can't be divided by any initial primes; and, if we keep dividing divisors, we eventually can't divide any more, and that's the definition of prime (indivisible "atoms"!).

I like seeing the overall process as: each time you find a new prime, it could fill in a gap between primes in the list, or be greater than any of them. Or, be smaller than any of them:

  3*5 +1 = 16    -> 2
[BTW I can't tell if this is guaranteed to find all primes (though I conjecture it does); but that's OK, as the proof doesn't claim that, only that it always finds one more prime. Maybe there are some sets of initial primes that will always miss some prime/s - a proof of that might go deep.]
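This question can at least be explored empirically. Always taking the smallest prime factor of product+1, starting from {2}, generates what I believe is known as the Euclid-Mullin sequence; whether every prime eventually appears in it is, as far as I know, an open question. A sketch:

```python
# Repeatedly apply the Euclid step, always taking the smallest prime
# factor of (product of all primes so far) + 1.
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

def euclid_mullin(k):
    primes = [2]
    while len(primes) < k:
        product = 1
        for p in primes:
            product *= p
        primes.append(smallest_prime_factor(product + 1))
    return primes

# euclid_mullin(5) gives [2, 3, 7, 43, 13]: note that 5 hasn't appeared
# yet, so the process certainly doesn't find primes in order.
```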

  \tangent
Regarding code vs proof, I like what rspeer/tjgq said on abstractions/theorems - accepting a lower level, and working above that. But code is different, in that its purpose is to do something. If it does that, then it "works" in a practical sense (even if buggy). Thus, I can tell that the lower levels are working well enough, for my purposes.

In contrast, the purpose of a proof is to convince me that it's true. Sure, if I accept the theorems, and I accept the proof given those theorems, then I accept the proof. But this means that in order to be convinced that the proof is true, I have to be convinced of all the levels.

The difference isn't code vs proof in themselves, but in their purpose. I demand a lot more of a proof!

  \idea
(Not Curry-Howard, but...) Maybe I should make a programming language for making convincing proofs that does work like code (unlike Coq, which aims more at professional mathematicians). It can be lacking features, like handling infinity; it can be slow.

You "run" the proof to check it; you can "step into" theorems (like a debugger or some theorem websites which hyperlink them), or collapse/hide them.

It would be set-based, and so fundamental in that sense. I'm not sure if it would actually create sets and operate like an algorithm on them; or if it would use theorems about sets. I think the latter is necessary for "proof", but it wouldn't be as convincing... Generating examples would be helpful, to show it "working". hmmm... some proofs work kind of "by example", but with variables. What makes it a "proof" is that it works for the full range of values needed. So emphasising that approach might help.

The language would enable you to divide up an issue into possibilities in various ways, and automatically check that each limb is addressed (this is the basis of proof to my mind: you divide up the issue, and show that this covers all possibilities; then handle each limb. e.g. the predicate "is prime" or not). You can nest divisions. Not far from pattern matching, as in ML/haskell.

The actual proof part would be chains of implies: given x, y is true. These can be theorems/libraries: in the stdlib, or a third-party library, or your own "app" (or, the language's built-in axioms and theorems). So you can't just magically go from one step to the next; you have to specify a theorem for that step (call it). Perhaps the most basic ones would be implicit (like implicit casts), to avoid too much tedium. You can then "step into" these calls, or collapse them. Actually, just defining a new theorem is a way to collapse it (debugger "step into", collapsible tree and hyperlinks are just UI choices - vim ^] is another way to navigate calls, conceptually expanding/collapsing).
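As a very rough illustration of this design (entirely hypothetical names, not an existing system): a "proof" could be a list of claims, each justified by a named rule that the checker verifies mechanically against the facts accepted so far.

```python
# A "proof" is a list of (claim, rule_name) steps; the checker accepts a
# step only if the named rule can justify it from the facts so far.
def modus_ponens(facts, claim):
    # Justified if some atomic fact a and the implication (a, claim)
    # are both already known.
    return any((a, claim) in facts
               for a in facts if not isinstance(a, tuple))

RULES = {"modus_ponens": modus_ponens}

def check_proof(facts, steps):
    facts = set(facts)
    for claim, rule_name in steps:
        if not RULES[rule_name](facts, claim):
            return False      # step doesn't follow: proof rejected
        facts.add(claim)      # accepted claims become usable facts
    return True

# With fact "A" and implication ("A", "B"), the step proving "B" checks out:
# check_proof({"A", ("A", "B")}, [("B", "modus_ponens")]) is True.
```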

A key convention here would be that it's not enough to be correct if it's a complex, confusing, tedious "spaghetti proof". You want clear proofs, that ideally formalize intuitive notions. You build libraries that might not be the most concise, but that give you theorems that "make sense" and are useful for building convincing proofs. Libraries can compete etc for the "best" formalisms.

That is: unlike mathematicians, we don't aim at brevity or elegance, but at being convincing, intuitive, easy to understand. Accessible to ordinary people - pop-proofs, like pop-science, pop-psychology, pop-art etc. And I wouldn't be doing this magnanimously, for the poor ignorant masses, but because I need it myself!

It wouldn't be doing anything very clever, just easing the cognitive burden of complex proofs:

record the detail (just document management alone is helpful); hide/reveal one part at a time; automatic checks; affordances whose existence guides you to sensible operations (e.g. divide it up; implies; implicit theorems). There's also the fundamental aid of helping you formally define the problem in the first place... which can be the hardest part of proving it.

  \implementation
Thinking how this would actually work. Each line would be an expression following from the previous expression + a theorem. Conceptually, we are transforming the expression in each step.

In a way this is bad, because you're having to explicitly type so much... in a regular program, you just call a function, and you don't need to show the returned result - it's hidden in the execution. We could do that here, but it would be less clear:

This unclear version would look like: a starting expression, then a sequence of transformation theorems/calls (specifying exactly how they're applied). You'd run it to see the sequence of expressions as it was transformed.

A problem is we'd have to specify tediously what we want. For example, given expression a and b and operator ->, it couldn't know whether we want a (or b) - or something else, because many things are implied. So the operations would need to be very precise. Like, extract the first operand; extract the second operand. It's similar for trivial operations like commut/associate a and b and c - we must specify precisely which ones to permute. I guess we could have a "permute" rule that operates at a higher level (i.e. actually a theorem...); or even have an "operator" that specifies the operation via the result... so it's no longer really an operator, but a kind of declarative system that I first mentioned.

One option is some "live" editing environment, so we can get the result of the call automatically. A couple of problems: we might not want the literal result, but an implicitly transformed version (e.g. trivially, like re-ordered terms). We'd also have to specify exactly how the theorem was applied - this might be a problem anyway (if it's to be automated).

I'm thinking that the engine works out how to apply the theorem to transform the starting expression into the resulting expression... but that might be too hard; we may need to tell it exactly how to apply it (like positional arguments to a function).

But... I guess it might even be a good rule of thumb, that if we can't mechanically work out what the step is, it's too complex! And if the user really wants it as a single step, they can write a theorem, that breaks it down enough into smaller steps simple enough for the compiler to understand. That seems to serve our goal, of convincing... at least, in principle. It might be tedious in practice.

For implicit theorems, it seems commutativity/associativity would be OK; maybe distributivity - but if it's a big change, it might be hard to follow.

Inference: We might allow any theorem to be implicit, provided the engine can infer not just how to apply the theorem, but what the theorem actually is (i.e. find some way to justify the step). This is quickly heading towards an automatic theorem prover, at least for very simple proofs. A problem with this is that the step might be less clear - that is, although the engine can work it out, the user might not be able to. This could easily happen in practice, even if the engine isn't very clever, by it having a library of very advanced theorems that the user doesn't even know about (this might be partially mitigated in an active environment, where the engine suggests the theorem - like a tooltip, google suggest, autocompletion).

Two kinds of transform: equational (an equal expression) and implies (not equal, but some aspect of it, e.g. a and b -> a).


EDIT one danger with a library is it might have a theorem stronger than what you're trying to prove (and that you aren't convinced yet by). If it's applied automatically, it's not a convincing proof. (And the purpose here is to be convincing, not just to be correct).

The easy answer is for the user to only allow libraries that they've accepted.

I was thinking we might have an automatic check, that we never use a theorem which is stronger than the whole thing we're trying to prove (that is, which implies our whole proof). But that seems hard to check automatically (maybe undecidable in general).


> I think a given divisor does not need to be prime; but it must not be divisible by an initial prime.

But that's how prime is defined -- indivisible by any numbers other than 1 and itself. If your statement is true -- that a given number "must not be divisible by an initial prime" -- that means the number is itself prime.

Positive integers greater than 1 fall into precisely two categories:

1. Not divisible by any smaller numbers except 1.

2. Divisible by one or more smaller numbers.

Those in category (1) are prime. Those in category (2) are composite. There is no third possibility.


I think you're interpreting "initial primes" as all the primes up to p+1; but here, I defined it to be just the particular primes we happened to start with. There can be gaps.

> I think a given divisor does not need to be prime; but it must not be divisible by an initial prime.

I will disprove by counter-example that not being divisible by an initial prime implies it is prime:

  initial primes: 5 and 7
  p = 5 * 7 = 35
  p+1 = 36
36 has a few divisors. Let's take 18 as a "given divisor". 18 is not divisible by any of the "initial primes", 5 and 7. Yet 18 itself is not prime. (It is divisible by the primes 2 and 3, but they aren't "initial primes".)
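The counterexample is easy to verify mechanically; a quick Python check:

```python
# 18 divides 36 = 5*7 + 1, is not divisible by either initial prime,
# yet is not itself prime.
initial_primes = [5, 7]
candidate = 5 * 7 + 1                             # 36
assert candidate % 18 == 0                        # 18 is a divisor of 36
assert all(18 % q != 0 for q in initial_primes)   # not divisible by 5 or 7
assert any(18 % d == 0 for d in range(2, 18))     # yet 18 is not prime
```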


To see the power of a given proof, try to imagine what would be required to refute it, falsify it. This is by no means the only avenue of attack, but it's instructive. Also, it resembles the approach used by scientists with respect to falsifiable scientific theories (which aren't the same thing as mathematical proofs).


> Reading this story in Hofstadter's GEB destroyed my ability to accept mathematical proofs as "proven".

I'm not sure I understand why. If the tortoise had insisted that one plus one is three, would it have destroyed your ability to accept arithmetic?

I'm not sure what Carroll intended by this piece, but what I take from it is that there's no sense arguing with irrational people. One can certainly claim to accept A, and accept if-A-then-B, but deny B; one can also claim that up is down and the sky is candy-striped.


That was/is such a dilemma.




