
What the Tortoise Said to Achilles (1895) - brett
http://www.ditext.com/carroll/tortoise.html
======
oskarth
The Wikipedia page is worth reading:
[https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achi...](https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles)

Particularly this quote:

 _The Wittgensteinian philosopher Peter Winch discussed the paradox in The
Idea of a Social Science and its Relation to Philosophy (1958), where he
argued that the paradox showed that "the actual process of drawing an
inference, which is after all at the heart of logic, is something which cannot
be represented as a logical formula ... Learning to infer is not just a matter
of being taught about explicit logical relations between propositions; it is
learning to do something" (p. 57). Winch goes on to suggest that the moral of
the dialogue is a particular case of a general lesson, to the effect that the
proper application of rules governing a form of human activity cannot itself
be summed up with a set of further rules, and so that "a form of human
activity can never be summed up in a set of explicit precepts" (p. 53)._

~~~
barsonme
Being led down the rabbit hole is one of the reasons why I love Wikipedia.

I had searched 'What the Tortoise Said to Achilles' on Google, and ended up
reading about the arrow paradox's rebuttals, which were really interesting.

But more to the point of the original article: it shows that there are
definitely gray areas within morality, and that it's impossible to use Boolean
logic to try to categorize humans.

------
tunesmith
Is the point here that the very structure of syllogism itself can be denied?
That however inexorable an "If A and B, then C" argument is, someone else
could always argue that it's not quite valid yet? It kind of reminds me of the
point made (turgidly, but still) by Yudkowsky in The Simple Truth[1] -
sometimes you just have to throw up your hands and declare the
counter-arguments specious.

[1] [http://yudkowsky.net/rational/the-simple-truth](http://yudkowsky.net/rational/the-simple-truth)
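The regress in the dialogue has a mechanical shape: each time Achilles writes
the rule of inference down as one more premise, the Tortoise can demand that
the new, larger conditional be granted too. A minimal sketch (mine, not from
the thread) of how the premise list grows without ever reaching the
conclusion:

```python
def regress(premises, conclusion, steps):
    """Yield the premise list at each round of the Tortoise's dialogue."""
    current = list(premises)
    for _ in range(steps):
        yield list(current)
        # The Tortoise refuses to infer, so the rule licensing the inference
        # is itself added as a premise -- and the demand repeats next round.
        current.append(f"If ({' and '.join(current)}) then {conclusion}")

for round_no, ps in enumerate(regress(["A", "B"], "Z", 3), 1):
    print(f"round {round_no}: {len(ps)} premises")
```

Each round adds exactly one premise and never discharges any, which is the
point: no finite list of premises ever forces the step to Z.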

------
hyp0
Reading this story in Hofstadter's GEB destroyed my ability to accept
mathematical proofs as "proven". I just don't find them convincing; they feel
more like using authorised forms of argument within an artificially stylised
tradition (like English Literature). And I wonder if alien mathematics will
reveal our mathematics as embarrassingly parochial - and not the universal
common ground usually assumed.

So, instead of proof, I have to fall back on intuition and working code, with
their severe limitations.

However... studying mathematical proof has at times informed and grown my
intuition, by revealing new ways to see a problem and new (bizarre and
unintuitive) ways to decompose it.

I might have been better off never having seen this story.

~~~
rspeer
Why would you be better off? I don't see any _advantage_ in having the overly
optimistic belief that there's some universal, correct set of axioms.

You can still do all the math you could do before -- and if Carroll or GEB
gets you more interested in the fundamentals of math, you can do even more.

Yes, you have to accept some basis of mathematics, and you now understand that
some true things will be unprovable in the basis you just accepted. But that
doesn't stop you from proving things.

I think you might have just transferred your optimism about math to code
instead. How do you know your programming language is doing what you asked it
to? That you asked it to do the right thing at all? That the compiled code has
the correct behavior? That your hardware works as advertised and is not
failing at the moment? In both code and math, you have to accept some
abstractions that you're not going to worry about, but the things you do with
math are certainly more verifiable.

~~~
hyp0
Better off, because I could have learnt the conventional axioms as a skill,
like the perceptual and motor skills comprising reading, writing, and
arithmetic - and only questioned them later.

My "optimism" is more for my intuition; working code is experimental
confirmation.

It's easier to be optimistic about code than proofs. Firstly, working code
only needs to work in the specific cases you're using (that you test for); but
a proof must work in every possible case. Thus, working code is simpler and
easier to check, because it's aiming at less. My code almost always confirms
my intuition.
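The gap between "passes the cases I test" and "proven for every case" can be
made concrete. A hypothetical example (mine, not the commenter's): a naive
integer square root via floating-point `sqrt` satisfies its property on every
case we happened to test, yet violates it on an untested input where
IEEE-754 rounding bites.

```python
import math

def isqrt_float(n):
    """Naive integer square root: truncate the float sqrt."""
    return int(math.sqrt(n))

# The specific cases we thought to test all pass...
for n in [0, 1, 4, 10, 12345, 10**6]:
    r = isqrt_float(n)
    assert r * r <= n < (r + 1) * (r + 1)

# ...but the property does not hold for *every* input. Near 2**52,
# correctly-rounded double sqrt lands above the true integer root:
n = 2**52 + 2**27
r = isqrt_float(n)
print(r * r <= n)  # False on IEEE-754 doubles: r overshoots by one
```

A proof would have had to cover the large-`n` case; the test suite did not,
which is exactly the lower standard being described. (Python's own
`math.isqrt` handles this correctly with exact integer arithmetic.)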

Yes, it's also helpful to have the automatic, mechanical check of executable
code; and as you say, this relies on compilers, OSes, silicon, hardware.
(Though, anecdotally, I have noticed subtle problems that I eventually
diagnosed and confirmed to be compiler and hardware bugs.) BTW, yes, I have
tried Coq (a proof assistant, with somewhat mechanical proof checking), but
simple ideas become very complex to prove, and bugs in Coq itself, etc., are
of greater concern, for the next reason:

Secondly, and relatedly, the _standard_ is much lower for code. It just needs
to work. Whereas a mathematical proof is supposed to be absolutely true. In
other words, I don't ask as much from code. If there turns out to be
a bug, it's just learning more about the problem; about the world. It's an
engineering flaw. But if my proof is wrong, the game is lost.

An argument against my intuition is probably more telling. Though my faith in
it has turned out to be justified many, many times, I certainly can be wrong.
My only real excuse is that, as a human being, I have nothing else to fall
back on but my _sense_ of reality and reason. That's my hardware; if it's
wrong, I really am lost. So I might as well trust it. Fortunately, it's almost
always right; probably because I try to see things from many angles and check
them in many ways before my intuitive sense is fully formed.

