
The Worst Argument in the World - benhoyt
http://web.maths.unsw.edu.au/~jim/worst.html
======
barrkel
The whole strand of modern philosophy that denies the possibility of empirical
knowledge has long rubbed me up the wrong way, but possibly in a way that is
only clear to programmers.

The denial usually derives from a distinction between the mind and the world
outside it: things-in-themselves in the outside world can never be perceived
by the mind, because all perceptions are mediated by sensory organs; they are
all filtered one way or another.

But this seems suspect when one considers a simple physical computer as an
example of a simple mind. We model the "knowledge" of the machine as the state
of its "memory", however we choose to represent that memory - flip-flop
circuits or magnetized rust.

That "knowledge" changes as the machine's I/O manipulates the state through
long chains of physical, mechanical operations, and looking in from the
outside with our more sophisticated eyes we may see that the knowledge
imparted by the "sensory I/O" may be more or less true. If it's less true (as
a digitization, it'll almost always be an approximation), then the I/O or
programming may have bugs; but if the I/O and programming are functioning
well, is it true to say that the machine has not acquired true knowledge from
its "sensory organs"? That it cannot acquire such knowledge?

Empirical (or a posteriori) knowledge is usually contrasted with a priori
knowledge, stuff whose truth is independent of the outside world, but is
usually a function of the meaning of words (such as "All bachelors are
unmarried" - these are analytic truths). Things that are supposed to be true
independent of the outside world but not embedded in the meaning of the words
are supposed to be "synthetic a priori" truths. But it seems to me that a
priori truths come from the brain examining itself, that the only way such
"knowledge" can be obtained, i.e. a state change occur, is by examining the
physical process of reasoning itself, whether directly, or indirectly as a
result of the "programming", i.e. the construction of the machine / brain's
mechanism for reasoning.

These "a priori truths" are mediated by the "I/O of self-reflection", and are
not actually a priori at all, in practice. The _knowledge_ of the truths, i.e.
the experiential sense of "dawning on oneself", i.e. what it feels like to
experience a state change in one's knowledge representation, came about
because of a physical process which may or may not have bugs; i.e. it is
mediated.
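To push the analogy (purely illustrative):

    # a machine "discovering" a supposedly a priori truth - commutativity
    # of addition - by probing its own reasoning mechanism
    import random

    def add(a, b):          # the machine's built-in reasoning step
        return a + b        # the "programming" - it may have bugs

    samples = [(random.randrange(100), random.randrange(100))
               for _ in range(1000)]

    memory = {}             # knowledge as state, as before
    memory["addition is commutative"] = all(
        add(a, b) == add(b, a) for a, b in samples)

    print(memory)

The "a priori" truth arrives via the I/O of self-reflection: a physical,
fallible process of the machine examining its own mechanism.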

So, a counterpart to "we have eyes, therefore we cannot see" - a lovely
caricature - is "we have brains, therefore we cannot think". It seems to me
no consistent Idealist can deny that, by his own argument, he cannot have
ideas.

~~~
Locke1689
I actually think that you're confusing a little bit of modern philosophy with
a lot of "postmodernism." You have to be very careful there. Stove and Russell
and Wittgenstein are "modern" philosophers. They also share an allegiance to
the school of analytic philosophy (which Russell and Wittgenstein helped
found) - the school most philosophy departments around the world are now
centered on.

Postmodernism, on the other hand, has a comparatively small following. I'd
make an exception for some of the colleagues Stove criticized most, including
Feyerabend. There's no denying that he and his followers were influential, but
I would not argue that they hold the "dominant" position in modern philosophy.
Their view is complex, and they don't exactly deny empirical knowledge;
instead they aim to criticize certain tenets of empiricism which scientists
generally regard as "solid." However, as you just read, Stove was one of many
who sharply criticized him for "abusing" logical expressions.

I don't want this to turn into a really long argument, but I think you did
make an interesting point. Many who believe that _a posteriori_ knowledge is
impossible are in opposition to many of the philosopher/mathematicians of the
early-to-mid 20th century. Some of the most notable are Russell, Whitehead,
and Wittgenstein (and I also want to briefly mention Gödel, who was not very
active in philosophy, but whose mathematics helped the field immeasurably).

As opposed to Feyerabend and especially the skeptics, I would say they have it
all wrong. It is not _a posteriori_ knowledge which is impossible; it is _a
priori_ knowledge which is impossible (or tautological, to be more accurate).

Many of the early modern philosophers (especially the empiricists, like my
heroes John Locke and David Hume) supposed that much of our knowledge comes
from the outside - that is, from our experience. As we know today, they were
largely correct. We gather a huge amount of our personality and worldview from
our experiences, with very little "innate" knowledge.

One of the main hold-outs from that time was mathematics. Many believed that
mathematics was something a priori solid. If you read Descartes's Meditations
or Hume's Enquiry, you will see a lot of mention of Euclid, specifically his
geometry. Time and time again the philosophers used the example of basic
addition or the laws of geometry as arguments for _a priori_ knowledge. That
is, even if I don't know that that tree exists, I can at least know that
2+2=4.

This would seem to be an argument for the skeptics and idealists (like
Berkeley), but instead it is an indictment, for even mathematics is not safe
from _a posteriori_ reasoning. Why does 2+2=4? Because we have defined it to
be so. We have come up with an algebra, the countable numbers, and defined
addition on it. Does this hold some innate truth, an _a priori_ truth about
the Universe? I would argue that it doesn't. This _a priori_ knowledge is
tautological -- math comes up with the "right" answer _because we defined what
the right answer in our consistent system is._

Except, we really didn't. Gödel helped here by proving that no moderately
complex system can be both complete and consistent. This is one nail in the
coffin of _a priori_ math, but it continues. Eventually we reach some of our
most basic axioms -- Peano arithmetic. It would seem that these are truly
untouchable. a != !a. Who can disagree with this? Well, if you look closely,
you'll see an assumption here. Or, more importantly, a definition. We define
!a. We define these expressions. These are _a priori,_ and many believe that
you can build _a priori_ systems out of them. The problem is -- you _can't._
Russell and Whitehead soon saw this after Gödel's insight, but it's still a
contentious issue.
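To make the "defined to be so" point concrete: with the standard Peano-style
definitions 2 := S(S(0)), 4 := S(S(S(S(0)))), a + 0 := a, and
a + S(b) := S(a + b), the whole fact unfolds mechanically:

    \begin{align*}
    2 + 2 &= 2 + S(S(0))    && \text{definition of } 2 \\
          &= S(2 + S(0))    && a + S(b) := S(a + b) \\
          &= S(S(2 + 0))    && a + S(b) := S(a + b) \\
          &= S(S(2))        && a + 0 := a \\
          &= S(S(S(S(0))))  && \text{definition of } 2 \\
          &= 4              && \text{definition of } 4
    \end{align*}

Every step is an unfolding of a definition; no appeal to the outside world
occurs anywhere.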

Well, I tried to keep that as brief as possible, but as you can see,
philosophy tends to drone on and on. This isn't really a detailed analysis,
but think of it as a footnote of my views on the issue.

~~~
yequalsx
Gödel did not show that "no moderately complex system can be both complete and
consistent". What Gödel showed was that a recursively enumerable set of axioms
that is rich enough to express the arithmetic of the natural numbers cannot be
both complete and consistent.

The second order Peano axioms for the natural numbers have only one model up
to isomorphism. But the second order theory is not computable: its set of
consequences is not recursively enumerable. The first order Peano axioms are
recursively enumerable but have infinitely many non-isomorphic models.
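To make the contrast concrete: second order PA has a single induction axiom
quantifying over all properties P,

    \forall P \, \big[ P(0) \land \forall n \, (P(n) \to P(S(n)))
        \;\to\; \forall n \, P(n) \big]

while first order PA replaces it with a schema, one instance per formula
\varphi you can actually write down:

    \varphi(0) \land \forall n \, (\varphi(n) \to \varphi(S(n)))
        \;\to\; \forall n \, \varphi(n)

There are only countably many formulas, so the schema misses almost all
subsets of the naturals; that gap is exactly where the non-standard models
live.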

What Russell and Whitehead tried to do was to remove humans from mathematical
knowledge by finding a computable system - that is, a mechanical process for
determining whether or not a proof was correct. Gödel showed that this is not
possible. This is the reason why Penrose and some others think that AI will
never reach the level of human intelligence.

Not sure if this impacts your points. But it is definitely not the case that
no moderately complex system can be both complete and consistent. In fact his
completeness result demonstrates the incorrectness of your statement: just
take as your axiomatic system the collection of all true statements in
whatever system you are working with. That's a complete axiomatic system. It's
not helpful, because there is no easy-to-use (think computable) criterion for
determining which statements are axioms and which aren't.

~~~
Locke1689
I am appropriately corrected. This actually doesn't change my view at all,
because when I said "moderately complex," I assumed that the natural numbers
were included in that. If you go back and read Descartes and Kant you'll see
much of the same treatment - the addition operation, as defined by our algebra
on the set of natural numbers, is used many times as an example of a priori
knowledge.

You are completely correct, though - it was very late when I first commented
on this, and I was tired and simply wrong. I'll try to be much more specific
in my treatment of mathematics in the future, although I am not a
mathematician. I just want to note that I never claimed that Gödel proved that
Peano arithmetic was incomplete or inconsistent (although if I remember
correctly, he showed that PA cannot prove its own consistency), but simply
that the nature of Peano arithmetic does not confer any a priori knowledge of
the universe or our existence. This is supported by Gödel in the broad sense
that we cannot generate a "universal theory" of mathematics. However, my main
point is that mathematics is not truth; it is only a model of our definitions
and observations -- a tool, if you will -- and an incomplete model at that.

~~~
yequalsx
I didn't think that your view would be changed. Quite honestly, I'm not sure I
understand the philosophy behind this. But the fact that there is only one
model of the second order axioms of arithmetic (Peano's axioms with induction
included) is a bit surprising to me.

We can't come up with a computable system for finding all mathematical truth,
but there appears to be a hardwired number system in the universe. It's not
computable, but it is unique. The natural numbers lead naturally (no pun
intended) to the integers in a unique way. The integers lead uniquely to the
rationals, and the completion of the rationals is a unique object called the
real numbers. The unique algebraic closure of the reals is the complex
numbers. There is uniqueness at each step. This, coupled with the utility of
using mathematics to describe natural processes, is...strange to me and some
others.
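Schematically, each extension is forced by a universal property:

    \mathbb{N} \xrightarrow{\text{Grothendieck group}} \mathbb{Z}
               \xrightarrow{\text{field of fractions}} \mathbb{Q}
               \xrightarrow{\text{Cauchy completion}}  \mathbb{R}
               \xrightarrow{\text{algebraic closure}}  \mathbb{C}

Each arrow lands on an object that is unique up to isomorphism, which is what
makes the tower feel hardwired rather than chosen.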

I don't know what this has to do with your points because I didn't understand
them. Not because you didn't write clearly but because I don't know enough
philosophy. I'm a mathematician and know very little about philosophy.

Thanks for your input.

------
Tichy
"He awarded the prize to himself"

So he also earns the award for the worst competition ever.

------
nazgulnarsil
if induction doesn't actually work we'll never know that for certain. what is
the alternative to induction? stop trying? epistemic hand wringing over the
fact that induction might fail us isn't useful.

~~~
ubernostrum
Well, the "epistemic hand wringing" has a very serious point, which is that it
spells big trouble for philosophy of science, which is (among other things)
concerned with the "problem of demarcation". Put simply: how do you tell what
is and isn't "science"?

Hume's formulation of the problem of induction actually pointed to two things:
one, the "logical" problem of induction, was simply the standard critique of
inductive generalization as an unsupportable method of inference. The other,
the "psychological" problem of induction, claimed that inductive
generalization was nonetheless how human beings actually think, and so we're
screwed. But in the late nineteenth century, and then again in the mid-
twentieth century, you get two thinkers who challenge this.

Charles Sanders Peirce took a view of science and of human thought which was
not based on induction: in Peirce's view, the "average" person simply believes
something until it causes some sort of conflict (at which point, Peirce
claimed, other methods of justifying belief would be developed in response,
leading to a chain which eventually ends up at the scientific method). Peirce
also didn't view science as being able to give ultimately true answers to
questions (thus sidestepping the need to justify inductive generalization,
even if it does end up as part of scientific method); rather, science can get
better and better approximations to the truth over time (as more observational
data becomes available and new theories are proposed to explain the data), but
will never actually arrive at "the truth" (and we wouldn't be able to tell
even if it did). In other words, Peirce's view of human knowledge and of
science is based around fallibility.

Karl Popper, immersed in the world of German-speaking philosophy, came to very
similar conclusions much later on, and proposed a solution to the problem of
induction in the following form. First, he accepted in its entirety the
logical problem of induction, but declared that it need not cause problems for
science, because science need not be inductive in nature. Second, he proposed
that the psychological problem of induction was a fiction: he asserted that
the way people actually reason is far closer to fallibilism (just like
Peirce), and framed it in common-sense terms as a process of trial and error.

Popper built a theory of demarcation around flipping the problem of induction
on its head: it is true, he happily conceded, that no number of observed
instances is sufficient to establish a generalization to all instances
(including those as-yet-unobserved, or unobservable). But this turns out not
to be such a big deal, because all it takes is _one observed counterexample_
to demonstrate that a theory is false. Thus we can still proceed
scientifically, but instead of speaking of theories which are "verified" by
observation, we speak of theories which survive attempts at falsification.

Popper came to the same sort of conclusion as Peirce regarding the "truth" of
scientific theories: he felt that there was no useful distinction between,
say, a "hypothesis" or "conjecture", and a "theory", because none of them can
be said to be true -- the best that can be said is that they have not yet been
proven false. And so he developed a system in which "science" consists of
those theories which can be subjected to falsification: a theory is scientific
only if there is some test which, if it gives a negative result, will be taken
as showing that the theory is false.

He talked occasionally of this system as applying a form of Darwinian
selection to theories: there is never a final "best" or "true" theory, but
there is a selection process at work which eliminates false theories through
observation of counterexamples. The theories which stay with us and form the
basis of everyday working science, then, are not those which are "true" but
are merely those which, so far, have survived that selection process. And in
judging between competing theories, Popper preferred the theory which was
boldest in terms of possible falsification: theories which make assertions
that are easy to test for falsity, he claimed, tend also to be those which --
if they survive such tests -- provide the broadest and most useful basis for
further scientific work.

Of course, both Peirce and Popper are terribly unfashionable in philosophy of
science these days. Peirce is reviled for having the gall to claim that
science advances _toward_ truth over time even if it never _arrives_ at truth
(a position which every good postmodern Kuhnian disciple will dogmatically
reject). And Popper is often viewed as a sort of semantic charlatan whose
attempt to shift from verification to falsification was merely a critique
(albeit a devastating one) of communism, Freudian psychology and logical
positivism.

~~~
nazgulnarsil
I've never been able to grasp how falsificationism is incompatible with or
different from induction.

~~~
ubernostrum
They're sort of inverses of each other; a better way to put it is
"verificationism" vs. "falsificationism".

The key difference is that a verification model seeks to establish that a
theory is true, while a falsification model seeks to establish that it is not.
Verification models cannot achieve their goal. Falsification models can.

This means throwing out the idea that you will ever have a theory "proven" to
be "true", but thanks to the problem of induction you weren't (in the general
scientific-method sense) ever going to get that anyway. Instead, you have
theories which have been proven false (since falsification gives you
counterexamples to universally-quantified conjectures, which allow the valid
deductive conclusion of falsity of those conjectures), and theories which have
not yet been proven false.
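The logical asymmetry, spelled out: no finite set of confirming instances
entails the universal claim,

    P(a_1) \land P(a_2) \land \cdots \land P(a_n)
        \;\not\Rightarrow\; \forall x \, P(x)

but a single counterexample refutes it, by modus tollens on
\forall x \, P(x) \to P(a):

    \neg P(a) \;\Rightarrow\; \neg \, \forall x \, P(x)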

Importantly, you _never_ say that the latter group of theories are "true",
"likely to be true", etc.; you only and always say either that they've not yet
been shown false or, more commonly, that they have thus far survived attempts
at falsification.

To a lot of people it does seem like meaningless semantics, but for people
interested in the demarcation problem (which is anything but unimportant these
days) it's quite significant because it offers a viable framework for a
solution.

~~~
nazgulnarsil
thanks for the clarification. I guess the problem arose because I never
thought of induction as a method for finding the "truth" per se, but rather as
a method of finding consistent correlations (with direct cause and effect
being a special case of correlation where the correlation coefficient is 1).

------
321abc
The argument for why the article's author thinks "the worst argument in the
world" is invalid comes from a book by Alan Sokal, a physicist with a real
chip on his shoulder about Postmodern philosophy.

Sokal is perhaps best known for what has come to be known as the "Sokal Hoax":
<http://en.wikipedia.org/wiki/Sokal_hoax>

I strongly recommend anyone interested in this article and in learning about
the more recent incarnation of the analytic/continental feud in philosophy
read the articles Sokal has collected on his hoax:
<http://www.physics.nyu.edu/faculty/sokal/index.html>

Outside these articles, Sokal and his sympathizers rarely acknowledge that
there could even be any reasonable response to their allegations of buffoonery
and charlatanism. Unfortunately for Sokal and his sympathizers, this pose
leads to the conclusion that they are either inadvertently or deliberately
ignorant of much of philosophy.

