I argue that we live in an anti-philosophy era. I'm not sure this will change in our lifetimes, but I think it will necessarily change if we survive for long enough.
To illustrate: there is a common belief that metaphysics became irrelevant -- that it was replaced by science. This is, in itself, a metaphysical position, and a misunderstanding of what science is and does. Of course, reasoning about what science is and does is part of philosophy. So is the discussion of knowledge itself: what is knowable, and how we can trust the various methods of seeking more knowledge. A lot happened around these topics in the 20th century.
The uncritical use of the yardstick of "progress" as the ultimate value for everything (along with its little cousin, "productivity") is, in itself, a philosophical position. One that is currently maintained by social norms and authority. Which does not mean it is "wrong". I am only claiming that it is accepted by most people without any reflection.
Some boundaries of scientific knowledge are quite visible. For example, consciousness. You might argue that consciousness emerges from matter interacting in a complex way (emergentism), and you might be right, but this is accepted as a serious scientific theory although it has zero content -- no way to falsify empirically, no explanatory mechanism proposed.
Real philosophy is a very subversive endeavor at the moment. Perhaps it always was. It's something for those who love knowledge, but not for those who expect any public recognition.
> To illustrate: there is a common belief that metaphysics became irrelevant -- that it was replaced by science. This is, in itself, a metaphysical position
That argument doesn't seem valid. The notion that fairies don't exist is, in itself, a fairyological position; still, few would accept that fairyology is a legitimate field of endeavour.
> I am only claiming that it is accepted by most people without any reflection.
Many things are accepted by most people without much reflection. Often that's because they really are as simple as they look.
> Some boundaries of scientific knowledge are quite visible. For example, consciousness. You might argue that consciousness emerges from matter interacting in a complex way (emergentism), and you might be right, but this is accepted as a serious scientific theory although it has zero content -- no way to falsify empirically, no explanatory mechanism proposed.
What are the outstanding empirical questions about consciousness? As soon as you go down to an empirical level, you find that no mysticism is necessary: there are valid, interesting questions to be asked and answered, but they are no more outside science than questions about how, say, protein folding works.
> That argument doesn't seem valid. The notion that fairies don't exist is, in itself, a fairyological position; still, few would accept that fairyology is a legitimate field of endeavour.
You are using Popper's criterion for what constitutes a valid scientific theory, but I think you misunderstand me.
My claim is that saying that theoretical physics is the ultimate explanation of reality is a metaphysical position. Cosmology describes the big bang but it does not answer the question "why is it so?". Maybe there is no answer. Maybe it is not knowable. Maybe theoretical physics cannot have the total picture. These are all metaphysical questions that seem perfectly legitimate to me. If you are not allowed to ask or talk about these questions, I would say that you just accepted a religious faith. I am not trying to sell you anything except doubt and curiosity :)
> Many things are accepted by most people without much reflection. Often that's because they really are as simple as they look
Perhaps. Perhaps not.
> What are the outstanding empirical questions about consciousness? As soon as you go down to an empirical level, you find that no mysticism is necessary: there are valid, interesting questions to be asked and answered, but they are no more outside science than questions about how, say, protein folding works.
You are implicitly making the strong claim that knowledge can only be attained through empiricism. This is clearly not the case. I know that I am conscious and I assume that you are, but I cannot use empiricism to test this hypothesis, nor can you use it to verify that I am, indeed, conscious. So there is something very fundamental -- in fact the only thing I know with 100% certitude -- that cannot be empirically tested.
Your use of the word "mysticism" betrays the current bias against such questions. I am not proposing any woo. I'm just curious. Even being curious about certain topics nowadays gets you labelled a "mystic". I think this illustrates the point that I wanted to make in the beginning.
But you were talking about the mechanism of consciousness, not verifying whether someone is conscious or not.
It may be impossible to verify that someone is conscious in the same way that I know I am, but we can associate a set of behaviors with an intelligent creature. If we empirically prove that those behaviors emerge from a collection of neural impulses, then it takes only a simple assumption (almost a cognitive axiom) -- that the creature exhibiting those behaviors is conscious -- to conclude that emergentism has been empirically verified.
I feel like your position that emergentism is empirically impossible to verify is itself a strong position, and attempts to define a boundary on science when your stated goal is to place doubts on where the boundaries are. Yes, it requires a few assumptions, and yes, those assumptions are impossible to verify empirically. But then, so are mathematical axioms. We don't go around saying "physics is impossible to empirically verify because mathematical axioms are impossible to prove and are incomplete".
"...we can associate a set of behaviors with an intelligent creature. If we empirically prove that those behaviors emerge from a collection of neural impulses,..."
Beware! Here be the dragon of pure behaviorism, which potentially denies that "consciousness" is a valid noun.
On the other hand, mathematics is the last refuge of the hardcore platonist (and worse, allows them to run around loose); the rest of us are true formalists who realize it's all a game whose basic rules happen to coincide with reality suspiciously well.
Try imagining yourself as a system of molecules all responding to their environment. It is reasonable that your complexity might be a good indicator of your potential for intelligent responses. I don't think that holds for your consciousness though. If it did, that would imply either consciousness is a spectrum or the universe has a hard coded on/off switch for consciousness. The on/off hypothesis has problems with conservation of information, and if consciousness is defined as "having an internal experience" it isn't clear how that could be non-binary.
I think it's likely an emergent phenomenon. Still, being able to turn it on and off with specific components hints that they're either the source of consciousness or a big part of it.
> Cosmology describes the big bang but it does not answer the question "why is it so?". Maybe there is no answer. Maybe it is not knowable. Maybe theoretical physics cannot have the total picture. These are all metaphysical questions that seem perfectly legitimate to me. If you are not allowed to ask or talk about these questions, I would say that you just accepted a religious faith. I am not trying to sell you anything except doubt and curiosity :)
I see no reason to assume that question is beyond the reach of empiricism, which has been (both by its own standard and by naive common sense) effective in a way that alternatives have not. Non-empirical approaches may be able to construct self-consistent theories, but there is an infinitude of self-consistent theories, the overwhelming majority of which are useless under naive common sense. I trust empiricism because I was able to get there in Neurath's boat fashion from the everyday naive common sense that we all live by in practice.
> You are implicitly making the strong claim that knowledge can only be attained through empiricism. This is clearly not the case. I know that I am conscious and I assume that you are, but I cannot use empiricism to test this hypothesis, nor can you use it to verify that I am, indeed, conscious.
I don't accept your claim. To the extent that "consciousness" refers to something meaningful, it refers to something empirical. (At least, my experience - under formal empiricism, naive common sense, and along the path between them - is that theories that involved entities detached from empirical consequences were misleading, and less effective than theories that did without such entities).
I'm aware. I've not found the counterarguments listed there convincing. Quine's argument seems to be that since he can't see how to make the analytic/synthetic distinction rigorous it must be impossible to do so, which simply doesn't follow. And since I take the position not as an a priori principle but based on my empirical experiences of what kind of theorising has been effective or ineffective, it isn't self-defeating at all.
>And since I take the position not as an a priori principle but based on my empirical experiences of what kind of theorising has been effective or ineffective, it isn't self-defeating at all.
That is the same defense Ayer used. It doesn't amount to much, since if you don't mean to say that non-empirical statements are meaningless in some kind of objective sense, then saying that they're meaningless is just a highfalutin way of saying that you personally disapprove of them.
Non-empirical statements are meaningless in the same sense that fairies don't exist. It's not a priori impossible that a non-empirical but constructive/valuable/useful statement could exist - a black swan - just as it's not impossible that a fairy could exist - but it seems very unlikely and I'd put a very low weight on someone's claim to have seen one, given how often such claims have turned out to be false. When I dismiss something as non-empirical, that's the same kind of dismissal as saying something's a conspiracy theory - formally I'm not claiming that it's outright impossible, just unlikely (though in everyday language I might say "impossible", just as we do for conspiracy theories or fairies).
The strained analogy with fairies is obscuring your point. You seem to be using "meaningless" in a very unusual sense.
Whether or not non-empirical statements can be constructive or valuable is a separate question. Mathematics is non-empirical and strikes me as pretty constructive and valuable. But those terms are, ironically, so vague as to be almost meaningless in any case.
Do you think you contributed to the conversation, or that anyone was missing part of the picture before you pointed out that black swans are a literal, not-uncommon thing, unlike fairies (which don't exist) or metaphorical "black swans" (like "bull runs"), which are by definition extremely rare?
I once said “there’s no such thing as a bull run” in some context with a group of friends. Do you think you would have chimed in to say that there’s a big one in Madrid every year?
> But how do you know this? Did you reach this conclusion empirically?
Yes I did, that was my point. I haven't solved and am not claiming to have solved the problem of induction - the generalisation from "a bunch of empirical knowledge turns out to be valuable/effective/legitimate and all the supposed non-empirical knowledge I've seen turns out not to be valuable/effective/legitimate" to "all valuable/effective/legitimate knowledge is empirical" rests on potentially shaky ground. But that's a problem that already exists when making ordinary, object-level generalisations about the universe; it doesn't render the conclusion any weaker than ordinary scientific conclusions.
That sounds like you're saying something like this:
"I believe empiricism is true because empiricism seems to be true."
We strive to live our lives based on reason, so we should look for ways to understand the world that go beyond a circular argument.
Such lines of thinking exist. They have been well argued and debated and have much going for them. Plenty of places to start learning about them, but maybe start with Aristotle.
I don't think we do. Reason is a means to an end, not a goal in itself.
> so we should look for ways to understand the world that go beyond a circular argument.
I don't see it as circular, but even if it were, my point is it's impossible to do better: all of us accept everyday common sense before we can even begin to argue technical philosophy, and if we're willing to set it aside then there are infinitely many self-consistent things we could think and no reason to prefer one over another. So no amount of sophistry will ever get you away from having to believe in everyday common sense.
> Such lines of thinking exist. They have been well argued and debated and have much going for them. Plenty of places to start learning about them, but maybe start with Aristotle.
Please. You're dismissing rather than engaging. If you're not willing to actually contribute to the discussion then don't post at all.
I'm sorry you thought I was being dismissive. I felt I had reached the limit of my own persuasiveness on the question and wanted to point you to somewhere better than me.
One final point I will try to make is that in thinking about how we know things, there's no suggestion that we need to set aside common sense. It's about starting with common sense and then seeing what we can add to it.
That's only an empirical generalization if you can cash out "valuable/effective/legitimate" in genuinely empirical terms (at minimum, in terms of observer-independent observations free from value judgments).
> That's only an empirical generalization if you can cash out "valuable/effective/legitimate" in genuinely empirical terms (at minimum, in terms of observer-independent observations free from value judgments).
I can cash it out empirically as "generates accurate empirical predictions and suggests fruitful avenues for future investigation" (fruitful in the sense of ultimately leading to more detailed and accurate empirical predictions). That the measure of a theory is the accuracy of its predictions is of course a subjective human position (there are an infinity of possible measures on which to evaluate theories, and a priori no reason to prefer one over another), but again that's (a cautious Neurath's boat extension of) the common-sense way that we all evaluate theories in practice in everyday settings.
No, that's not even close to cashing out the generalization in empirical terms. To do this you'd need to specify exactly which observations would confirm or disconfirm it. Without the parenthesized parts, your gloss of the generalization remains vague and value-laden. With the parenthesized parts it is virtually tautological, since it's in the nature of empirical knowledge to generate accurate empirical predictions. It's surely not news to anyone that if forms of knowledge which lead to detailed empirical predictions are superior to other forms of knowledge, then empirical knowledge is superior to other forms of knowledge.
What you really seem to want to do, then, is argue from the nature of empirical knowledge itself to the conclusion that it is better than other methods of acquiring knowledge. But that requires rational argument to back up the italicized statement above, not (just) an inductive generalization. And then we come back to the problem that it is impossible to find suitable premises for such an argument which can themselves be known empirically.
(For reference, the generalization we're talking about here is that "a bunch of empirical knowledge turns out to be valuable/effective/legitimate and all the supposed non-empirical knowledge I've seen turns out not to be valuable/effective/legitimate".)
> But that requires rational argument to back up the italicized statement above, not (just) an inductive generalization.
Why? Everyone evaluates ordinary, everyday knowledge in terms of its empirical predictions, so everyone seems to accept the italicised statement in practice, even if they'd argue for some sophisticated alternative in the abstract.
Things like Newton's Laws are, in a strict technical sense, not true. We still use them because they are very nearly true. But, strictly, they are approximations of something more fundamental. The same goes for a great many very intelligent figures of the past, who described reality as they saw it but, in the end, lacked the measuring technology to be correct by today's standards.
It is an extremely reasonable position that large swathes of what we believe today are also going to be wrong in this sense. For example, the great debate about whether reality is discrete or continuous (currently I think the evidence points to discrete, but philosophically maybe we are a simulation run in a continuous universe).
We should try to work from the current state of the researchers' knowledge, but we all know that there is a high risk of something big and impressive being discovered that changes the game, like when they split the atom. We just don't know what happens next.
Our understanding of consciousness also depends on who controls the dictionary and defines consciousness and surrounding terms.
> Our understanding of consciousness also depends on who controls the dictionary and defines consciousness and surrounding terms.
I think that arguing over dictionary definitions is something of an anti-pattern in philosophical (-like) discussions. If we have a common agreement that there is something that we call consciousness, but it is a thing that we do not really understand, then pedantic arguments about exactly what the word does and does not denote are putting the cart before the horse when it comes to understanding the thing itself. It substitutes a lexicographical dispute for an examination of the thing itself, and it is not uncommon for such arguments to be used to talk around the issue by attempting to take certain positions off the table.
As for scientific theories not being strictly true, that hardly distinguishes science from philosophy in general (logic excepted), where there are at least two opinions on every issue of note.
>there are valid, interesting questions to be asked and answered, but they are no more outside science than questions about how, say, protein folding works.
Questions about consciousness are very different from protein folding, because protein folding can be objectively observed and tested.
Consciousness cannot be objectively tested, because in order to test consciousness, you must use your consciousness to do so -- there's no way to control for that variable.
Fire doesn't burn itself and water doesn't wet itself; maybe consciousness cannot understand itself -- just like our eyes cannot directly see themselves.
If hard science is based on objective observations, perhaps the limits of science are at that which we use to make those observations -- our consciousness.
Even if we accept the questionable assertion that no study of any aspect of the mind so far has been objective, you have provided no argument that it cannot be so in principle.
I notice that the link you provide on 'hard' vs. 'soft' sciences contains passages that cast doubt on whether the distinction is significant. Biology is classified as less objective than physics, yet biology explains many things very well, and, in fact, actually better than a reduction to the underlying physics would (evolution, for example, or genetics.) By your original argument, however, human biology could not be an objective science.
>Even if we accept the questionable assertion that no study of any aspect of the mind so far has been objective, you have provided no argument that it cannot be so in principle.
Perhaps if we had aliens or god study our minds, then it could be objective and not influenced by the biases of our own human minds. There's no argument that they can't exist in principle.
Seems like a false dichotomy. Quantum physics has shown that the mere act of observing fundamental reality changes its state, so perhaps our consciousness of reality is much more subjective than we thought.
Yes, it is a false dichotomy, but the point is that it is one that follows directly from your argument: all your 'just like' comparisons apply as well to human biology as they do to the study of the mind, and if a thing studying itself is hopelessly compromised by subjectivity, then, by your argument, this must be so for human biology.
History is another example of human self-study that is not rendered impossible by subjectivity. In this case, there is a term, 'whig history', for one form in which the investigator's bias is applied to his analysis of past events. The fact that this term exists, and that the phenomenon it labels can be identified and corrected for, shows that it is possible to work around the subjectivity of self-study.
As for quantum mechanics, it seems there is indeed a good deal of subjectivity, at least in the Copenhagen interpretation. By your argument, that should have destroyed physics as an objective science, yet physics has been extraordinarily fruitful since the discovery of QM. Here is an example of Luboš Motl dealing, in his characteristic style, with subjectivity, and, in fact, discussing the topic "Why subjective quantum mechanics allows objective science."
It is somewhat ironic that all your arguments for the impossibility of understanding the mind, on account of a lack of objectivity, are themselves subjective.
You are, of course, free to hold the opinion that there is something about consciousness that will put it forever beyond our understanding, and I cannot prove that there is not, but until we run into that barrier, I prefer to apply Occam's razor to the proposition.
The more sensible interpretation of the available facts is not that making an observation changes reality, but merely that we are ourselves part of reality. In any case, the precise, constrained observations of QM are by no means a license to make arbitrary assumptions about "consciousness".
> As soon as you go down to an empirical level, you find that no mysticism is necessary
There are plenty of things that cannot be observed. There are plenty of things that should not be observed. There are plenty of things that can be observed but cannot be understood well enough to achieve the goals we want.
I don't find it controversial to claim there are limits to empiricism. The fact that we expect juries and judges to rule on incomplete information is a concession to that reality.
And because there are limits on empiricism, there are limits on science.
> The fact that we expect juries and judges to rule on incomplete information is a concession to that reality.
I think that arises from practical difficulties in finding all the relevant information, not from a certainty that it is, in principle, undiscoverable.
I notice how you slip easily from a personal opinion in your penultimate paragraph to the pronouncement of a universal truth in the final one. This seems to be characteristic in philosophy, as displayed, for example, in Searle's 'Chinese Room' paper.
Nevertheless, I agree with the general position that there are unobservable things (that's just my opinion, of course.) I also believe that there are limits to reason alone as a means for discerning truth, and consequently I believe there are limits to what philosophy can achieve (though I see no limits on how much discussion it can generate.)
> ...not from a certainty that it is, in principle, undiscoverable.
In principle, if we have to violate rights to obtain all of the relevant information, then some information must be undiscoverable.
In a more scientific bent, you have the Heisenberg uncertainty principle and the existence of intractable computer science problems to support the idea that there are opinion-free realms where science cannot provide solutions.
And I find labeling claims as opinion to dismiss them a bit facile. The underlying point is that we have conceded that there are practical considerations that make us act on incomplete information. Therefore we concede, on a practical level, that not all things are knowable. Conjecture and guesswork pervade everyday life.
It is interesting that you present the rights issue, as, while it is a valid response to the claim I made, it works just as well as the claim I made as a counter to your original claim about what the instructions to jurors tell us about the limits of knowledge.
Similarly, your use of quantum uncertainty is valid, but its discovery did not put a stop to physics - on the contrary, the discovery of QM has led to vast new areas of knowledge, and has even had metaphysical and epistemological implications (such as the constraints on realism that follow from Bell's inequality) that were never imagined in millennia of navel-gazing. Therefore, if anything, quantum uncertainty tends to stand as a counter-example to the apparent premise behind your claim that incomplete knowledge sets limits on science (I weasel-worded that sentence because I do believe that it could ultimately become a problem, if humanity survives long enough.)
I am not labeling a claim as opinion to dismiss it - in fact, if I cannot offer a definitive counter-argument, then my own position on the matter is an opinion - it is the transition from opinion to certainty that I find interesting. Searle's Chinese Room paper, for example, is an elaborate argument ultimately hung on the unexamined premise that syntax cannot give rise to semantics, and the non-sequitur that a model of a thing is not the thing itself, yet he declares that he has proven that a digital computer can never have a mind. This passing off of opinion as certainty may just be an issue of style within philosophy, but if so, it is an unfortunate one, as it makes it difficult to keep track of what has definitively been established and what remains as conjecture or premise. Certainly, there are many people who think Searle proved something in that paper, yet there is disagreement within that group as to what exactly that is.
Conjecture and guesswork pervade everyday life, science, and even philosophy.
An example of the limit of empiricism: how do you measure the moral of a story, when you read a good novel? Because you cannot measure it, does the moral not exist?
You devise a test for obeying the moral, and administer the test to a bunch of subjects before and after reading the story. For example, if the story is Hamlet, you can give them an opportunity to back-stab their boss in order to get yet another promotion. Are they less likely to do it after seeing how badly it went for the young prince? Not an easy test to do, but possible in principle.
Joshua Greene's lab at Harvard (http://www.joshua-greene.net/) measured subjects making major moral decisions and found, for example, that people who'd received training in medical ethics made more utilitarian decisions about life and death. You could apply the same methods to morals of stories in a randomized trial by having some subjects read them and some not.
That's not measuring the existence of morality though. It's measuring whether certain things can influence behavior. And because it's a lab study, the behavior tends to be fairly trivial in practice.
Besides, studying whether people behave morally is begging the question if the question is "do morals exist?".
I'm confused, what about things that can be deduced based on other things that can be observed? If I see someone's shadow, and deduce that there's a person there, whom I can't see directly, casting that shadow, will my claim "there's a person, casting a shadow" be outside of the "limits of empiricism"?
If all you've seen is a shadow, all you've seen is a shadow. How do you know it's not someone's twin? Or some sort of puppet facsimile?
Now, that's a contrived example. More realistically, I doubt we'll ever be able to get enough detailed measurements to really understand how genetics, nutrition, exercise, and environment all play together to affect human health. Each is clearly complex enough on its own, and combining them quickly magnifies the observation problem into something entirely intractable. We can imperfectly generalize and work with populations to come up with various guidelines that seem to serve them well. We can break down the problems, measuring particular cells or organs. But recomposing all the experimentation and observation into a full-system view is much more complex than solving chess or go. It's much more complex than predicting the earnings forecast of a single company. The complexity of the problem space is just ridiculous.
So we try to push the boundaries of the complexity we can deal with, but there are diminishing returns over time. And clearly there are outer bounds we will never be able to approach.
True enough, but I don't see how any of that is specific to empiricism? This doesn't sound like a case where empiricism overcomplicates and a non-empirical approach could do better; rather it seems like reality is just that complex, and our only options are to deal with it or to simplify and accept a coarser picture (which is perfectly compatible with empiricism).
Metaphysics is basically a question of axioms. Reasoning can't get off the ground without some unproven assumptions. We can certainly talk about our assumptions without proving them!
I would argue that most axioms used by thinking brains ultimately derive from evolutionary processes. But again, this is different from proving them, and brain development is far from simple.
I'd argue that in practice everyone believes in everyday common sense - even philosophers who claim to follow some philosophy that's radically contradictory to it (in a similar sense to the economist's notion of "revealed preferences"). So someone who relies on a novel set of axioms is relying on a bigger and more dubious foundation than someone who's able to "Neurathian bootstrap" starting with just the everyday common sense that we are already all relying on.
I have a suspicion, which I don't really have the training or references to back up, that what happened to philosophy is related to the self-destruction of Art as a direct result of the World Wars.
Before 1914, it was still possible to be naive in the West and believe that everything happens for some sort of supreme purpose. Then we saw the mass destruction of humans on a society-wrecking scale, senseless deaths among the shells and gas; an entire generation returning with unacknowledged PTSD. Then this power got turned on civilians. The only possible response artistically to Guernica was a sprawling, fractured, ugly painting. Inevitably the catastrophe scaled up: everything "traditional" was appropriated and turned to Fascism, and Fascism destroyed everything that it could reach that didn't support it. Finally we built the Bomb, and realised that there wasn't really anything standing between us and the power to destroy entire civilisations in an afternoon.
We were standing in the wreckage of our own ethical systems, and have had to gradually rebuild them. This gave us the principles of human rights, and the realisation that equality of respect was the only real moral imperative. If we allow humans to deem other humans to be inherently inferior, this will be used to justify mass atrocities. Nobody is coming to redeem us and there is no final judgement to look forward to; we have to liberate ourselves in this lifetime.
In the face of this challenge, philosophy could either collapse into post-modernism, an endless hall of mirrors of signifier and signified; or it could mobilise and be used to deconstruct power relations and correctly label injustices. That gave us post-colonialism, queer theory, multiple generations of feminism, and so on. We have to live with the Other, and the question is how.
(The question of consciousness will be solved by the first AI to win a Nobel Prize for writing in defence of its own consciousness. I have no idea whether that will be next century or next year.)
(Please insert references to Derrida, Foucault, Butler, Lacan, etc as applicable; I don't see too many other people referencing modern philosophy in this discussion? Or you could downvote as well, I guess that's a philosophical argument?)
It's simpler than that. Progress in critique continued, we got effective anti-capitalist critique and marginalization-based feminist epistemology, and everything else. We had tons of progress, and the only thing we didn't do was update our ontological models of the knowledge we generated and roll them into our public school curriculums. The only reason every single person on this planet (barring WHO millennium development goal resistance areas) doesn't know the difference between a hack and academic philosophy is that we didn't formalize it in a way they'd understand and teach it to them.
Society doesn't have to build or rebuild ethical systems, compete with critique, or compete with noise, fascism, mental illness, policy, or a saturated marketplace. You can teach meta-ethics to people and watch them use informed consent in everything (since it's often identified as the best way to be an agent with a self-interested understanding of pursuing and developing what you want, while participating in society effectively to help others).
No, "live and let live," would be applied ethics. Meta-ethics asks what the source of meaning in an ethical claim is. It is easier to adopt meta-ethical relativism if you know about meta-ethics, but I don't have a persuasive goal in presenting it; I don't think people "should live and let live." I just think it's clear progress in ethics will come best from people being able to assess their own and others sources of meaning in their ethical claims. I think we'd get more effective activism, relationships, etc. than without it.
A philosopher must have a reasonably sufficient grasp of most extant practical working knowledge of the world. If a philosopher makes claims which are inconsistent with the working knowledge of some people, those people will quickly demonstrate the philosopher's claims to be false. Some particular contemporary examples include Noam Chomsky's[1] misapprehension of the Khmer Rouge, Eliezer Yudkowsky's[1] squabbles with quantum physicists, or the widespread abuses of mathematical concepts by "postmodern" continental philosophers as described by Sokal and Bricmont.
And these are the better examples of would-be contemporary polymaths who make stabs at using philosophy to achieve broader social aims (as opposed to continuing long-running arguments about the ontological status of possible worlds and what Wittgenstein really said about Godel's theorem). There's so much to know it's hard not to be wrong about some of it. That situation is unique to the modern era.
So I don't think it's just about an anti-philosophy mindset being popular. Philosophy has, for practical purposes, become harder to do.
1: please don't reply to tell me that Chomsky and Yudkowsky aren't real philosophers, I know that, but I wanted examples people would recognize, and anyway their beliefs are roughly consistent with the popular positions in philosophy, and they failed in the general way I am describing, so the shoe fits
> I am only claiming that it is accepted by most people without any reflection.
If you discuss philosophy with a scientist/tech person, this is what happens on most occasions, at least in my experience. Questions about usefulness, measurability, etc. are the ones discussed first. And that's fine on an individual level. But for a generation as a whole, it's a turn in a somewhat morally ambiguous direction.
Algebra is based on five axioms [1], and an axiom by definition is a condition that is assumed to be true. How can you prove that 1+1 is really 2? There have been attempts to prove this, as in Principia Mathematica [2].
So basically what I am trying to say is that there is no absolute and not everything must have a value.
Worse, we're often tripped up figuring out if 1 + 1 is the correct expression in the first place. A lot of science and technology is bottlenecked on getting accurate and useful inputs.
I did not know about that aspect of the problem. What do you mean by "we're often tripped up figuring out if 1 + 1 is the correct expression in the first place"? That we might be using the wrong paradigm for mathematical formulations?
In real life, there are problems like classification and precision that cause problems in making broader conclusions. If you are measuring poverty rates, for instance, you could be basing your decisions on several different benchmarks. The statistics, math, machine learning, etc., are all downstream from what we're even measuring and counting in the first place.
In a more elementary formulation, we could say one bag of rice plus one bag of rice is two bags of rice. But one bag of rice is underfilled and the other is contaminated with pests. So you don't really have two bags of rice after all. So maybe 8/10 + 0 is the correct expression in that case.
Isn't 1+1=2 true in the same way that 0=0 is true, namely by definition? It's proven analytically: 2 is defined as 1 greater than 1, 1 less than 3, 1/2 of 4, etc., all of which derive from 0=0, which is a tautology. I really don't understand the difficulty in proving something that is true by definition; it's like saying "prove all white swans are white." To me, it seems a harder thing to prove would be something empirical, such as "prove the moon and stars exist during the daytime".
> Isn't 1+1=2 true in the same way that 0=0 is true, namely by definition?
In the "traditional" formalism, it matters exactly what you mean by "+1" :-).
People normally define a binary addition relation (two operands -> sum) using a "more fundamental" unary successor relation (one operand -> successor).
Zero is just a constant, and numbers are defined in terms of it and the successor relation: one is "S(0)", two is "S(S(0))" and so on. So if the "+1" in your post is an invocation of the successor function, it certainly is axiomatic (from the axioms of equality):
S(S(0)) = S(S(0))
On the other hand, if we're using actual addition, it might take another step or two, using axioms of addition, like:
X + 0 = X, and
S(X + Y) = X + S(Y)
So the proof might go something like
S(S(0)) = S(S(0)), by axioms of equality,
S(S(0)) = S(S(0) + 0), by the first axiom above, then
S(S(0)) = S(0) + S(0), by the second axiom above.
And I guess I got lucky that the second axiom was not written
S(X + Y) = S(X) + Y
or I'd have been stuck for a day trying to prove that addition is commutative :-).
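To make the derivation concrete, here's a toy sketch in Python (my own illustrative encoding, not anything standard: 0 as the empty tuple, S(x) as a nested tuple) mirroring the two addition axioms:

    # Toy Peano numerals: 0 is (), S(x) is ("S", x).
    ZERO = ()

    def S(x):
        # Successor: x -> S(x)
        return ("S", x)

    def add(x, y):
        # Mirrors the two axioms above, read as rewrite rules:
        #   X + 0 = X
        #   X + S(Y) = S(X + Y)
        if y == ZERO:
            return x
        _, y_pred = y
        return S(add(x, y_pred))

    one = S(ZERO)       # S(0)
    two = S(S(ZERO))    # S(S(0))
    assert add(one, one) == two  # 1 + 1 reduces to the numeral for 2

Running the assertion just performs mechanically the same rewriting the proof above does by hand.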
That makes sense, thank you. However, at what point is the symbol "2" introduced in such a scheme? Is it before addition, or after? At some point it is given a definition, like S(S(0)), where S(0)=1 and S(X)=X+1, at which point isn't it easy to substitute
2 = S(S(0)) = S(0)+1 = 1+1
Is this simple equivalence really not provable with sufficient axioms?
Obviously from the downvotes I am missing something fundamental, I would very much like to know what that is but am at a loss. Perhaps I am just missing mrleiter's original point.
Normally it's defined exactly as S(S(0)) like you said, so the above should constitute a proof that 1+1=2.
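(For what it's worth, a modern proof assistant will check this mechanically. In Lean 4, for example, the statement holds by definitional reduction, since both sides normalize to the same numeral:

    -- Both sides reduce to Nat.succ (Nat.succ Nat.zero), so reflexivity closes it.
    example : 1 + 1 = 2 := rfl

That doesn't escape the trilemma point below, of course; the axioms and inference rules are baked into the system.)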
Anyone disputing that sort of proof normally needs to take issue with one or more of,
- The axioms,
- The rules of inference.
So someone might say, "In step one, you used an axiom of the form X=X. What basis do you have for assuming it's true?"
It's worth looking up the Munchausen trilemma for a little more on that sort of thing, but I'm not sure how many people would seriously argue against the validity of that proof (or one just like it if I've made a mistake :-)
As for why you've been downvoted, I have no idea. I think your question was perfectly fine, asked politely enough, and there are no doubt many perfectly reasonable logical systems in which 1+1=2 is trivially true (and not just "a short proof away.")
This idea "... all of which derive from 0=0" is kind of interesting. I've got a fuzzy recollection of a terrific book about the number zero ("History of a Dangerous Idea", IIRC), and thought that historically, a surprising amount of math was developed prior to 0 being recognized as a number per se.
Bertrand Russell proved it via set theory [1]. Which is again based on the axioms of set theory, so as repsilat said above, you end up in one horn of the Münchhausen trilemma, in this case some mix of regressive and axiomatic argument: there's an axiom, you cannot prove it within itself, but only with another axiom, and so forth.
It's a difficult subject and I am by no means an expert in it.
PS: You are correct, 1+1=2 is not a tautology. A tautology is true under every interpretation, purely by its logical form, whereas 1+1=2 depends on the definitions and axioms in play. And a statement and its negation can never both be true: either 1+1=2 or 1+1≠2 - but never both, if you get what I mean?
PPS: And sorry for all those downvotes - don't understand why. Your questions are perfectly fine.
> reasoning about what science is and does is part of philosophy
Only if you approach Philosophy with a lot more rigor and math than what people have historically done. The useful parts of discussing what you call the 'philosophy of science' involve quite a bit of statistics, for example.
What people call useless is the cruft dealing with Known Unknowables, which sucks up time without getting anywhere.
IMO, it's not that Philosophy is dead, it's just that when you keep calling the useful bits something else, eventually people don't care about the leftovers.
> Only if you approach Philosophy with a lot more rigor and math than what people have historically done. The useful parts of discussing what you call the 'philosophy of science' involve quite a bit of statistics, for example.
I disagree. For example, Karl Popper's positions on the philosophy of science became the current mainstream view. His positions are not based on statistics, but on good-old-fashioned qualitative reasoning. I know many scientists who mention Popper by name when discussing such matters. Carl Sagan also helped popularize it in his book "Demon-Haunted World", with his "invisible dragon in the garage" story.
The ability to do statistics at incredible scales (currently known as Machine Learning) raises further questions about what science should be. Questions about such quantitative methods themselves. Rigor does not have to be quantitative in nature -- and let's not forget that the very idea of quantitative science was introduced by Descartes, another famous philosopher.
> What people call useless is the cruft dealing with Known Unknowables, which sucks up time without getting anywhere.
Depends on where you want to get. Noncommunicable knowledge is a very interesting topic, for those with the inclination. It is ok if you are not curious about such things, but I would argue that it is also ok to be. It deeply intersects mathematical logic (e.g. Gödel) and theoretical computer science (e.g. the halting problem). Will it help us create better gadgets? Probably not? Will it help us better understand the human condition? Probably yes.
Gödel is vast overkill. The list of things you can't compute with small finite amounts of processing power is vastly longer than the list of things you can't compute with infinite processing power.
AKA, Gödel is true but irrelevant, because it's not on the border of anything decidable. If you say the bounds are between 9^9 and 9↑↑↑↑9, that's just not useful.
I don't know where the idea that philosophy doesn't involve rigor and maths comes from. Leibniz anticipated the Entscheidungsproblem, for god's sake! If you take the most basic interest in logic, maths, or even computers, you can't swing a cat without hitting a philosopher.
I am referring to computation, not abstractions. What's the trade-off between X and Y is the kind of place where having some specific numbers is very helpful.
EX: At what point if any can you say testing theories on public data produces more value than noise.
I'm having trouble seeing how any of that relates to your original comment or the responses to it. You said "[reasoning about what science is and does is part of philosophy] only if you approach Philosophy with a lot more rigor and math than what people have historically done." That comes across as a broad comment about levels of rigor in philosophy, not a comment on the usefulness of quantitative data in making trade-offs.
"reasoning about what science is and does is part of philosophy."
I am saying some of that analysis benefits from real hard data. So, if you're saying all of that analysis falls under philosophy, then I don't think that's what most people mean when they use the term.
I actually think 'hard data' is typically about as relevant to philosophy as it is to maths. Not everything is reducible to stats.
I just find it weird to be using a machine that's a recognizable descendant of the work of philosophers, as much as engineers or mathematicians, then to be saying casually that philosophy isn't rigorous.
You could argue using hard data is less rigorous. But in practice people make mistakes, so without verification philosophy and math are again arguably less rigorous, depending on what you mean by that word.
Science didn't replace philosophy. Science is philosophy. That's why scientists used to be called natural philosophers. Yes, we have gotten much more specialized in the modern era, and that arguably is a loss. But it is kinda necessary given the depth that many fields have reached.
Francis Schaeffer said that most (formal) philosophy today is really anti-philosophy - either it doesn't tackle the big questions, or it gives non-rational answers. He said that the real philosophy today is being done by people like film-makers and musicians.
>To illustrate: there is a common belief that metaphysics became irrelevant -- that it was replaced by science. This is, in itself, a metaphysical position
I think that is wrong. What happened is that we realized we knew so little about physics that most of our meta-physical ideas were basically a case of "garbage in, garbage out." We had bad physics, and so we also had bad meta-physics. So we put meta-physics on hold until we had a body of physical knowledge that wasn't garbage to get meta about.
That's why all these "advances in metaphysics" are all physics discoveries.
I think the anti-philosophy era you are referring to is actually more like an anti-intellectual, anti-science, and generally populist movement that seems to be picking up steam. The most irritating manifestation of which says that you can't learn anything from books, something I increasingly hear people saying. I'd say the progress in more recent philosophy has been the efforts of multidisciplinary types working across neuroscience and philosophy (e.g. Sam Harris) to enable more useful and practical 'popularizations', like the realization that meditation has benefits. Unfortunately, debates about what 'grue' and 'bleen' are, while actually fascinating when you dig into them, and potentially even of practical consequence, generated an anti-philosophy populist movement that resembles general anti-intellectualism, and that isn't helpful.
I can't help but cringe every time I see this, which is a lot. It is not at all substantive to use a nominalist position in metaphysics as somehow vindicating the field. Nominalism about metaphysical issues is the opposite of vindicating or substantiating the field.
You're missing the point. The purpose of the statement is to show that the speaker who says "science has superseded philosophy" can't possibly really believe in what he's saying, because he's using philosophy to justify the superiority of science. Which makes no sense. If the physical sciences were really superior to philosophy, then they wouldn't need to rely on philosophy for their justification.
The statement is meant to expose a common hypocrisy. It is not meant to be an actual argument to support philosophical study, which could still be totally useless. It would be more convincing if the science-lover could use empirical methods to disprove the need for philosophy.
Isn't the real issue with meta-physics that it always follows actual physics? That is, if the metaphysician says something like "guns don't really exist" and the physicist says "but I made a real gun, can I shoot you with it to prove it?", the metaphysicians always back down. Or, put less aggressively: it is metaphysics that has to conform to discoveries in physics, not the other way around.
So the point the pro-science people are making is that any metaphysicians who deny the efficacy of science can't possibly believe what they are saying, because they are not willing or able to actually live their lives according to their beliefs.
>because he's using philosophy to justify the superiority of science.
But saying something is a philosophical position doesn't mean he is using philosophy to justify that position! Philosophy purports to be a general analysis of how things are, so in some sense everything is a philosophical position. But this doesn't bootstrap the value of philosophy, not in any substantive sense.
There's an unfortunate perception that philosophy is utterly divorced from reality or pragmatic concern. To be honest they might have a point, but I still find philosophy to be of great value to me personally.
I mean, they don't have a point. Even if you accept obscurantism as real and harmful, and academic philosophical consideration as a violation of exploit in exploit/explore, and normative ethics as confusing regarding discussions of social maturity, and descriptive ethics as confusing regarding discussions of philosophy of law or politics or government, the unfortunate perception that philosophy is divorced from reality comes from a lack of education in academic philosophy as a field and practice, and potentially from a socially/politically condoned trend of publishing any "10 step" pop-philosophy book that keeps the advertising/economy/adult-education/media engine humming.
> To illustrate: there is a common belief that metaphysics became irrelevant -- that it was replaced by science. This is, in itself, a metaphysical position, and a misunderstanding of what science is and does.
This reminds me of religious people who say "Ha! Atheism is just another religion!" In both cases, there's an inability to conceive of what a truly apathetic position looks like.
> The uncritical use of the yardstick of "progress" as the ultimate value for everything (along with its little cousin, "productivity") is, in itself, a philosophical position.
There it goes again! It would seem philosophy is unavoidable.
Could be just the U.S. When Harry Potter and the Philosopher's Stone was published there, they changed its title to HP and the Sorcerer's Stone. Perhaps they knew anything mentioning philosophy wouldn't be bought by Americans.
> For example, consciousness. You might argue that consciousness emerges from matter interacting in a complex way (emergentism)
Define consciousness. It's hard, because the word is a suitcase of meanings.
On the other hand, define an agent. It's much easier. An agent has to have three things: sensing, the ability to act, and the ability to learn from sparse rewards. It is embodied in an environment, learns from the environment, and depends on the environment for its "complexity".
Many problems go away when you learn how to represent the question in a better way.
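For concreteness, here's a minimal sketch of that agent framing (the names are illustrative, my own, not any standard RL library's API):

    from abc import ABC, abstractmethod

    class Agent(ABC):
        # The three capabilities named above; everything else is environment.

        @abstractmethod
        def sense(self, observation):
            """Take in the current state of the environment."""

        @abstractmethod
        def act(self):
            """Choose an action to perform in the environment."""

        @abstractmethod
        def learn(self, reward):
            """Update internal state from a (possibly sparse) reward signal."""

Nothing in the interface mentions consciousness, which is the point: the question becomes tractable once posed in terms of sensing, acting, and learning.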
> Real philosophy is a very subversive endeavor at the moment. Perhaps it always was. It's something for those who love knowledge, but not for those who expect any public recognition.
If you take the entire history of philosophy, it can be summed up as a discipline which inspires thoughts and questions. In a way, it's like a form of entertainment. I'm not sure it necessarily involves knowledge.
>Some boundaries of scientific knowledge are quite visible. For example, consciousness. You might argue that consciousness emerges from matter interacting in a complex way (emergentism), and you might be right, but this is accepted as a serious scientific theory although it has zero content -- no way to falsify empirically, no explanatory mechanism proposed.
There is nothing wrong with it as a hypothesis. At least science is investigating the issue, while philosophy (with some exceptions) seems to be preoccupied with merely pumping its intuitions that this can never be - intuitions that, by the argument you presented in the above quote, have 'zero content.'