However, I think he has misclassified the error of the English Lit correspondent in this essay. Since I have witnessed this misunderstanding many times (and I want to postpone writing my C++ thread pool code), let me elaborate. There are basically three areas of intellectual endeavor, classified by how they go about investigating the truth of statements. The first is the scientific, where questions are handled using the scientific method, so well described by Asimov. The second is axiomatic systems, like mathematics (and heraldry), where statements are mostly proven rather than checked through experiments. Given a statement, e.g. "there are infinitely many primes", one does not go about making measurements in the real world to check its truth (although sometimes real-world models can help, as pioneered by Archimedes in the "Sand Reckoner").
Then there's a third category of questions, where both of these approaches fail. In fact, when asked a question of this type, one does not immediately know how to go about investigating it, e.g. "What is truth?", "Is there a God?", etc. This, of course, is the realm of philosophy, with somewhat different techniques for tackling problems than the previous two.
With this (rather simplistic) classification, we can see the impedance mismatch between Asimov and the Lit guy. Asimov is the scientist par excellence and is talking about the scientific method. However, the Lit guy's letter (truth is relative, etc.) is the culmination of a certain philosophy that is still the accepted standard in liberal arts schools and may be summed up as moral relativism. The guy's obvious error is to apply this approach to the scientific realm. But mind you, this is very common with people of this background. If you don't believe me, just find a liberal arts major and talk for 10 minutes. He/she will be dumbstruck that you think otherwise. A common example with such people is how Newton's theory was "wrong" and Einstein's theory "corrected" it, clearly showing there's no absolute truth. Years ago, my professor in a graduate-level syntax course gave this very example with the same conclusion that the Lit guy arrived at.
But apart from its misapplication to scientific matters (such as the shape of the earth), is the moral relativistic approach a useful one, despite its acceptance as an axiom in a lot of universities? Now, that is a long debate.
What I've often seen is a sort of "convenient relativism" about truth.
I've known people who, for some things, claim there's no real truth, it's all relative, you cannot prove or know anything, etc. (Which makes me wonder how they know you can't know anything, or why, if it's all relative, my belief in an objective universe is not just as valid as any other belief.)
And they'll occasionally toss this charge at matters of science. Yet in their day-to-day life, they happily take aspirin, ride elevators, use electricity, and put gas in their cars, all in the belief that these things will behave in a known and predictably consistent way.
For example, uncertainty does not mean all options are equally likely. I'm not sure if this has a name. Agnostic fallacy?
Which is, of course, the point. As a rule, new scientific theories must match the data better than old ones. Over long periods of time, then, science becomes less wrong in its explanation of those things which can be scientifically investigated. The English Lit major's approach may work well with religion or philosophy, but it doesn't translate well to science.
There is a minor problem with the "less wrong" approach, though: it depends upon your metric for measuring wrongness, and that choice of metric may have a philosophical basis. Consider a small set of points that nearly follow a quadratic. One might say "this appears to be quadratic with small errors"; another might say "this fits a higher-order polynomial with even smaller errors", yet the first might be conceptually/philosophically a better description of data that was quadratic but measured from a sinusoidally-vibrating platform. A poorly chosen method for measuring wrongness could actually make it more difficult to discover underlying explanations of phenomena.
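The quadratic example can be sketched in code. The following is an illustrative toy (all data and numbers invented): a true quadratic measured from a sinusoidally vibrating platform, fit once with a quadratic and once with a higher-order polynomial. By the residual-sum-of-squares metric the higher-order fit always scores at least as well, yet the quadratic is the better description of the underlying process.

```python
import numpy as np

# Hypothetical data (all numbers invented): a true quadratic,
# y = 3x^2 - 2x + 1, measured from a sinusoidally vibrating platform.
x = np.linspace(0.0, 10.0, 15)
y = 3.0 * x**2 - 2.0 * x + 1.0 + 0.5 * np.sin(20.0 * x)

# Fit a quadratic and a much higher-order polynomial.
quad = np.polyfit(x, y, 2)
high = np.polyfit(x, y, 6)

# Residual sum of squares on the observed points -- one possible
# metric of "wrongness".
rss_quad = np.sum((np.polyval(quad, x) - y) ** 2)
rss_high = np.sum((np.polyval(high, x) - y) ** 2)

# The degree-6 fit can never do worse on this metric, since the
# quadratics are a subset of the degree-6 polynomials; yet only the
# quadratic recovers the underlying coefficients.
print(rss_quad, rss_high)
print(quad)  # roughly [3, -2, 1], the true coefficients
```

The point being: the metric alone rewards the higher-order fit, and it takes a judgment outside the metric to prefer the quadratic.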
IMHO, the moral relativistic approach as you describe it constitutes an interesting mental exercise, but not a useful one per se.
If there is absolutely no hint or proof that something exists, how can it be useful to debate it (apart from intellectual masturbation)?
Let's say we live in "the matrix" and that there is _absolutely_ no way to get out of it, or to find anything that could hint at its existence. Given that assumption, what is the point of debating whether it exists or not? Even if there really were a matrix, how could it be useful for us to know this piece of information? That knowledge wouldn't serve any purpose, since the matrix wouldn't interfere in any way with our universe, much less with our mere human lives. Of course, if there were any way to escape the matrix, or if it interacted even subtly with our universe, that would be another question.
In the same vein, I dislike the concept of "agnosticism". If we had to be agnostic about everything that cannot be proven (even partially, through subtle hints), then we'd have to be agnostic about a number of things limited only by our collective imagination. For instance, we'd have to be agnostic about "the matrix", the possibility that our universe is an "atom" inside a much larger universe, that we're all imagining this universe through an induced dream, etc.
On a side note, this is one of the reasons I find it more interesting to debate with theists who believe they have evidence for God than with agnostics, or with theists who base their belief on faith.
PS: I'm not trying to discard philosophy and I actually admire this discipline. All I'm saying is that it should not attempt to answer the wrong questions.
The standard answer is that all of our naive beliefs about the world would be false. You are not actually sitting on a couch, your simulated body-projection is simulated to be sitting on a simulated couch. The very fact we can't tell whether or not we're in the matrix undermines all our knowledge.
The more insightful answer is that even if we're in the matrix, everything about the physical world is still true, there is just a metaphysical fact we are unaware of--namely, that the universe happens to be a simulation. You're still sitting on a couch, and the couch is still made of atoms, and the atoms are still made of subatomic particles and so forth, but it turns out all the subatomic particles are just data structures in the matrix and we didn't know that before. Nothing is undermined.
You could even argue that metaphysical statements are meaningless, though you run into problems going too far that way as well.
An example that's so wrong that it's not even wrong.
Newton's theory wasn't wrong, because it's a theory that describes observed reality pretty well, which is the main criterion for a proper scientific theory.
Einstein's theory isn't an approximation to, or correction of, Newton's theory. Rather, it is a completely new theory, because it completely overthrows fundamental concepts. 'Mass' and 'simultaneity' are not nearly the same thing in Einstein's and Newton's theories. That they give nearly identical results in a certain domain doesn't mean the theories are nearly identical. They just give the same results because they are both good scientific theories of that domain.
Completely apart from that, there is no absolute truth. Nietzsche and Wittgenstein destroyed what remained of that possibility.
Oh, well, that's that then.
If you're going to argue P == NP, you have to know damned well what you are talking about, because all the evidence suggests P != NP. You also have to accept some pretty strange consequences of your argument.
If you are going to argue the universe can be described by local theories, you have to know damned well what you are talking about, because all the evidence suggests the universe is non-local. You also have to accept some pretty strange consequences of your argument.
If you are going to argue there is an absolute truth, you have to know damned well what you are talking about, because all the knowledge we have suggests that the concept in itself doesn't even make sense. You also have to accept some pretty strange consequences of your argument.
For most people, arguing there is an absolute truth is like arguing you can make Houdini-like escapes. You can't, and you'd drown. You know that, so you accept the consequence: you don't try a Houdini-like escape. In exactly the same way, people shouldn't try to argue there is absolute truth: it only leads to more confusion about truth and reality.
How is the statement "there is no absolute truth", not itself an absolute truth?
If I proclaim any certainty regarding anything, the most certain I can possibly be is when I know that a relevant number of people have tried in a relevant number of ways to discover whether we should be uncertain. Scientific certainty does not exist, but a scientist can still say we are 'certain' about something. It just doesn't mean it's beyond any doubt. In fact, even the phrase 'beyond any doubt' cannot possibly mean something that in fact does not leave any doubt. The situation in philosophy is exactly the same. There is not a doubt in the minds of the vast majority of great philosophers that there could not possibly be any absolute truth. That the phrase itself makes as little sense as speaking of the vat in which you are supposedly a brain (see the essay by Hilary Putnam). That it cannot possibly mean what you think you are intending it to mean. And despite that, it may still be false. It's just that nobody can argue even remotely satisfactorily that it is.
Mild epiphany on reading these (and surrounding) words: In the scientific method, there are truths, and truer truths.
The argument is pretty straightforward: if only physical entities exist then there cannot be any truth that is true in all times and places, since such an overarching truth is universal and not particular, whereas all physical entities are only particular.
Hence the irony of modern scientists arguing for absolute truth with those who merely follow the scientists' assumptions to the logical end.
For details that fit the 10% rule:
1) Only 10% of people will notice.
2) Only 10% of those who notice will care.
Don't spend your time on these details until everything that does not fit the rule is fixed.
Science consists of both theory and experiment. The old theory does a reasonable job of explaining the old data that came from the old experiments. Along come new experiments producing new data that the old theory cannot explain.
To understand science you must contrast it with folk wisdom. Folk wisdom comes up with a three-part explanation of the new data. It is conservative in that it keeps the old theory as its explanation of the old data. It comes up with a new theory that explains the new data, but not the old data. Thirdly, it offers ad hoc guidance as to which theory applies.
Science is also conservative, but with a different vision of what this means. Scientific conservatism lies in the high esteem granted to the old experiments. The new theory cannot merely explain the new data; it must do pretty well on the old data. Experimental error is important, and we must at first cut the new theory some slack, as experimenters are prone to optimism about the accuracy of their experiments, especially when the results agree with established theory. But not for very long. Old experiments can be redone with greater accuracy, and the new theory is expected to win both on the new experiments and on the most accurate versions of the old experiments.
Notice my gradual build up to the central dilemma. A good theory has a degree of inevitability about it. Newton's Law of Gravity is an inverse square law. Distance raised to the power of 2. Perhaps we can get a better fit to the orbit of Mercury with 2.00036, but that would be silly. Inevitability implies brittleness. If new data contradicts the theory there may be no way to adjust the theory.
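The 2.00036 remark can be made concrete. Fit the exponent of a power law to noisy measurements and you will recover something near, but never exactly, 2; the inevitability of the integer exponent is a theoretical judgment, not something the fit hands you. A minimal sketch (all data invented):

```python
import numpy as np

# Invented "measurements" of gravitational attraction at various
# distances, generated from an exact inverse-square law plus 1% noise.
rng = np.random.default_rng(42)
r = np.linspace(1.0, 20.0, 30)
F = r**-2 * (1.0 + 0.01 * rng.standard_normal(r.size))

# In log space, F = C * r^p becomes log F = log C + p * log r,
# so the slope of a straight-line fit estimates the exponent p.
p, logC = np.polyfit(np.log(r), np.log(F), 1)

print(p)  # near -2, but not exactly -2
```

Declaring the exponent to be exactly 2 (rather than the 2.00036 the data might suggest) is precisely the kind of commitment that makes the theory inevitable and therefore brittle.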
If the old theory is a good one, but new experiments contradict it, folk wisdom can accommodate this straightforwardly by keeping the good old theory as the folk account of the old experiments. Scientists, however, have a serious problem. The good old theory breaks when they try to bend it to fit the new data, and the new theory has to agree with the old data, even though the old data appear to force one to believe in the old theory as the unique good fit.
Scientists are caught in a trap. Their conservatism with respect to old experiments forces them to be radical with respect to new theory. The new theory cannot just beat the old theory on the new data, it has to beat the old theory on the old data, the very data that gave rise to the old theory! This requires radical new ideas. As [Lampedusa says](http://en.wikipedia.org/wiki/The_Leopard) "If we want things to stay as they are, things will have to change."
Hence the 20th century phenomenon of lay people reading popular accounts of Quantum Mechanics and of Relativity and coming away with the impression that scientific truth is prone to dramatic changes. Is there any antidote? I recommend A Quantum Mechanics Primer, by D.T. Gillespie. This slim, clear book reaches Ehrenfest's equations on page 111, allowing the interested layman to see how Quantum Mechanics leads to F=ma and beats Newtonian dynamics on its home ground. But the book requires single variable calculus and a good enough grasp of three dimensional vector spaces to realise that nothing goes too horribly wrong when we add more dimensions. Is it really accessible to a layman?
Ehrenfest's equations symbolize a new paradigm of total backwards compatibility through radical change. Asimov ably covers the data side of this, but wimps out of trying to explain the implications for theory.
Erm, yes - if you come up with a new theory of gravity, it had better explain the current orbits of the planets.
Suppose I have a wonderful new theory of gravity that doesn't work for anything on Earth, or anything you can measure in space - but that's OK, because that's old data and doesn't count.
Of the two, I think Asimov's is much better (if only because the Fallacy of Grey relies too much on analogy).