I don't see this guy giving a damn about that problem. Instead he tosses off bon mots about Popper and Gödel, and intones comfortably that in his view his peers have the right idea because they converge on atheism/agnosticism. Wow. The world is saved.
There is a class of people who value a "small life," the kind we all need to adopt: zen monks. Unfortunately, such people are afflicted by the disease of irrationalism in their ideas about God, or some similar concept that offends the sensibility of the excessively rational so much that they can only paint Belief in the most lurid colors.
Highly unfashionable to study such people. Instead, a professional philosopher keeps giving interviews, attending conferences and publishing papers. I'll be glad when we can no longer afford them. I also apologize for my lack of charity but there it is.
There's been a total over-correction away from religion, and we've lost sight of why it evolved in the first place. Social scientists have long known that there is nothing rational about the decisions we make, and certainly not in groups. Religion may have led us to quite a bit of violence in the past and might not have given us self-driving cars, but simply understanding the universe is not going to be enough to save the world when we are all still driven moment to moment by our emotional predispositions. We need a way to engineer and shape our emotions and our collective psyche - something religion evolved to do. Science is a valuable tool for good, but it is limited by our own lack of virtue in how we use it.
I think a lot of people dislike that idea because religion, unlike science, doesn't offer us any certainty. We're addicted to an illusion of certainty in a world where our actual lived reality is always uncertain. We need to get over that. Certainty is a lie.
Religion doesn't seem to be the answer; it always creates more problems than it solves. It thrives on suffering: because suffering creates a need for religion, it sustains itself this way.
Can we find meaning and purpose in our existence in a way that isn't so detrimental to quality of life, now and in our future?
Religion also provides inclusion for those that would otherwise feel outcast. It provides social structure and identity. It provides a framework for you to build culture upon. It acts as a moral guide for behavior. It unifies communities around central goals. It uses myth to convey inter-generational wisdom. It teaches rituals that improve mental and physical health. It provides dozens of important things.
The idea that religion is just about an individual doing X or Y to feel like they have "meaning" or "purpose" is nonsense. Religion evolved as the foundation of human social order. It's about communion and community, not self-indulgence. We know from our earliest architectural sites, like Gobekli Tepe, that religion preceded even the agrarian forms of human society. It's part of who and what we are.
It's extremely arrogant to call it detrimental, especially when our "post-religious" society - where everyone claims a god of their own but no one practices anything - has a workforce composed almost entirely of alienated, corporate slaves that lack more than half the things I mentioned in my second paragraph and a non-working population that suffers in silence because they offer no "economic benefit" to the rest. We are hardly enlightened in any meaningful sense of the word.
In these discussions, trying to talk about religion as something other than "grotesque superstition" has a hackle-raising effect on people invested in our current cultural totems. The only thing for them to do (the heroic thing) is to fight tooth and claw against those irrational forces trying to coagulate into a new Dark Age of superstition.
Because of this, any real attempts to understand religion are few and far between. It's certainly true that "religion" as experienced by most is a social institution constructed by people of mixed understanding and motivations, and therefore, depressingly, is just as broken as every other organization people create.
These organizations are only the structure erected around a spark that remains almost as mysterious as consciousness itself: the mystical experience of the Divine. I do think philosophy, if it's meaningful at all, should investigate that initial spark. Because it does lead to those good things camelNotation is talking about: inclusion for outsiders; Myth as a durable storage medium for the mores of a culture; Ritual that supports embodied existence, with all its imperfections and requirements to deal with real suffering.
Personally, I think there is plenty of complexity and intricate thought in that space to keep a generation of technically minded people interested. But alas, they mostly make jokes about a "Sky God" and deny the complexity which they have even inside themselves.
Here is a starting thought: paradoxical language is used in religion precisely because it "stuns" the intellect, forcing it to switch context in ways it doesn't yet understand, but in time, will. It's a kind of interrupt.
A deep inner experience of religion is something that happens on earth because people must endure the unendurable. To pontificate that "oh, they evolved denial, problem solved" is to throw white paint over a breathtaking chiaroscuro. It may be merely an "evolved response," but it deserves more respect.
Except for those who don't fit the religion's narrow view of what is acceptable.
> It provides a framework for you to build culture upon.[cut]It provides dozens of important things.
Proof needed: many countries have a high share of atheists/agnostics (the northern European countries, for instance), yet these countries have their own culture, and people there seem to be quite happy without needing a religion.
Again, I do see religion helping lots of people. But we could do better, and the only reason we don't is political. Try setting up a pastor-like support system for everyone. You will see capitalists complain, since they either don't want to pay for it or want to sell that service themselves. And you will see religious people complain, since it will remove a justification for the existence of the organization they believe in.
Why does the church have to teach mental and physical health and wisdom? Isn't teaching something we have schools for? Unless, of course, you want this teaching to be done exclusively by or according to your church.
You complain about alienated corporate slaves. I see little difference from the peasants and serfs of the Middle Ages. And they certainly didn't lack religion.
You mention inclusion. You forget the cleansing of those with no belief, or the wrong one, which has happened throughout history and still happens today. In the name of God, of course.
Again, I understand religion can offer many benefits, and does so in an easy-to-consume package. But why does it have to be something based on a two-thousand-year-old book that pretends, by definition, to be perfect in every way?
It doesn't have to be anything. You don't have to have five fingers on your right hand. You could have six, or four. You could have four arms instead of two. Lots of things don't have to be the case, but we take things as they are offered, because supposing you can do better than a complex, evolved system is not a reasonable claim unless you actually, deeply understand the system you are criticizing.
Perhaps you can come up with a better alternative to the great religions of the world. Perhaps you can engineer a land animal faster, stronger, and better adapted to life on a savanna than the cheetah. Perhaps you can do lots of things.
Doesn't mean you actually can or will, especially not in this lifetime, especially not on your own.
Look for the meaning inherent in our existence as natural creatures, duh. Once you stop assuming, a priori, as an implicit part of the definition, that meaning must be supernatural, the lack of supernatural revelations stops posing any problem.
Life is suffering. There is always pain and sometimes joy, due to our biology. Which is why religion will always exist, even if it's just a lucky rabbit's foot.
Not sure how to put this politely, but it's the other way around. Religion offers certainty (though a false one). It tells you exactly why we're here, why we suffer, and exactly what we have to do to alleviate the suffering.
It is science that destroys that false certainty and replaces it with tentative truths and the obligation to find meaning.
Nope. The real problem is that it is always misused by evil people, in every culture, and moreover no one can question or correct it, because it rests on belief with no evidence.
They try to share experiences, about certain possibilities of the human mind, and how they feel.
But those are weird, complex, possibly frightening, and they are also sometimes mis-communicated. Also - they challenge deeply held personal beliefs.
So it's natural that people will resist.
The nature of enlightenment was always hard to explain. Until you listen to Gary Weber (http://happiness-beyond-thought.com/):
He explains it simply. You know that internal voice, the one that's always focused on "I", always remembering the past and making plans or dreaming about the future?
Think about a time when you had less of that. It's possible to make that voice disappear.
And then you think when you want to think; when you need ideas, they sometimes pop into your head, like regular creativity. But most of the time: peace.
I admit that we may just be at a dead end with our current consciousnesses in the mass. Some individuals find a way "through," but on the whole most of us will be happy denying most of reality and we'll burn out because of that (resource depletion).
I guess it just chaps my hide to see somebody who is a "professional knower" about this stuff being smug and on top of the world when he obfuscates far more than he explains. I'll hit the hay :).
But the research about it seems to be advancing nicely (Gary Weber is a good starting source for that too), and the problem of the extreme cost of the relevant tool for training/research, the fMRI, may soon be solved by a startup, openwatr.
So maybe this could still be a part of the solution to the resource depletion problem.
I mean, yeah, "at all costs" is a little extreme, but the alternative isn't to only value the simple, close by, and cheap. I kind of like the fact that humanity strives for better and better things - it's what took us from the age of 50% of children dying by the age of 5, to barely any dying. It's what took us from the age of the majority living in poverty and hunger, to the majority living far better lives.
It's what will hopefully take us from all the problems we have now, to all the solutions to those problems that we can't even conceive of yet.
I may be misunderstanding you completely - but if I'm not, what's wrong with that?
My point is that a real look into "better" will often (probably! Not necessarily!) lead a person to minimize his or her ill effects. Naturally, being decent, they'll try to maximize their good effects. I pointed to a tradition like zen to evoke minimalism, to evoke noticing the smallest details. There are certainly many ways to improve.
I guess I spoke in a strong way so that my words wouldn't be immediately lost. If it sounds like I'm devaluing too many just plain good aspects of humanity then I can say I didn't mean to say that.
Now...I do quibble with the set of facts you offer that indicate the now is the best. Yes, life is so good for so many. Though we are clearly not in balance. We are heating up the atmosphere and we don't have a solution. That is an actual problem, right? Yes, it's easy to ignore. Easy to hope that "something" comes along. But I don't call it responsible.
Well I don't really agree with that; what makes you think so? The economy is not zero-sum. Progress is not zero-sum. When polio was eradicated, it didn't mean somebody else paid the cost of anything. When doctors learned to wash their hands before surgeries so that mothers wouldn't get infected in childbirth, who did that hurt? When Borlaug invented strains of wheat that fed twice as many people for the same land input, ushering in the Green Revolution, it was all upside.
Similarly, when Wikipedia came along to make a great database of most of the knowledge of the world, that was a great thing. When Apple created the iPhone and ushered in an age when almost everyone has access to all that info in their pockets, it was a great benefit.
Who are these people who are paying the cost? Worldwide poverty and health have also improved tremendously over the last 20 years.
> We are heating up the atmosphere and we don't have a solution
Well yep that's a major problem. Can't say we disagree there, but I don't think the solution is to strive for minimalism (this seems to be neither practical nor desirable).
Well, populism and xenophobia are simple, as simple as it gets; they appeal to base urges that demand no thinking (indeed, they require not thinking at all). The whole world is nowadays "close by", thanks to cheap flights everywhere. And the labour that produces all the consumerist crap we, well, consume, is dirt cheap.
You should rejoice.
And now, if we can't afford people with different preferences or intellectual interests (or, I guess, those who for some reason disagree with the above), what do we do with them?
I would never ask someone to do a thing or live a way that they don't agree with. I'd only ask them to fully understand what their way is...how it affects the fabric of the world. If they pursue their interest with full depth, they'll eventually understand. And then they'll do the right thing.
I agree that the tone of the Scientific American article was annoying and somewhat condescending. But overall I don't have any issue with the ideas presented. I don't see the conflict.
One thing that's happened regarding Buddhism is that western people have picked up the things they like and ignored the rest. In reality, Buddhists think that the experience of Oneness that they reach is real. They may give it different names. But I haven't heard it said by them that the experience is limited to the mind (by which you must mean an individual brain in a head). I think western "apologists" for Buddhism have done that, in order to curry favor...to win over a handful of materialists.
They are wasting their time. A materialist is already someone who staked out the least courageous position of engagement in life. What good is their approval?
Technology is a natural part of evolution and if you do not adapt to that you will not survive.
What you can have is growth with a less negative impact which will be incentivized by growth. But you can't suddenly stop growth and expect the results to be positive and the only reason you can even think about it being possible today is that you are piggybacking on what growth made possible.
I'm definitely piggybacking on what growth made possible. I wouldn't have been born otherwise. But I can still see a shitty situation and call it what it is.
Finally, technology is tool-making. We make tools to improve our lives. However, some tools grow too large and come to dominate us. At some point continually "adapting" to outsized tools becomes a fools game.
And yes it might be that we grow ourselves into extinction as a species if we can't optimize enough but there is a reason why people flee places that don't have growth and look for places that do.
Although I'd point out that: Zen has no God or gods or beliefs.
Zen knows that ideas in the mind prevent you from doing this, that's why they are so militant about banishing concepts.
For example, when we westerners say "God," we attach all kinds of stuff to it. Near-eastern King, sitting frowning on a throne.
Buddhism would agree with none of that. And so, to them, that kind of God doesn't exist.
A field which surrounds us though...this is the Buddhist idea of God and it's precisely what our culture is at war with. As materialists, we continually deny that such a thing could be. My argument is that we should seriously think about this idea.
I'm no religious scholar, though I practice zen. I certainly have some things wrong and am happy to be corrected.
“Knowledge is as wings to man's life, and a ladder for his ascent. Its acquisition is incumbent upon everyone. The knowledge of such sciences, however, should be acquired as can profit the peoples of the earth, and not those which begin with words and end with words.”
- The Baha'i Writings
* subscribing to an aesthetic hunch about fundamental physics (which has no consequences on how to lead our lives at all!), with
* subscribing to an epistemology of revelation, and the ancient, unethical, incoherent scribblings of the Abrahamic religions (with big, bad consequences for our lives),
as both being instances of some harmless, justified (or justifiable) "faith"? That's very wrong.
His hunch about the underlying structure of physical law is not comparable to people clinging to religion in today's world, neither in justification nor in consequence.
Note BTW that he cites prevalence of atheism among philosophers as an instance of convergence towards truth (not as a reason for accepting atheism). Please don't foist simple logical fallacies on him.
> Of course, I've found that most atheists tend to have faith in science without actually understanding science
Funny, I've found most atheists to be better versed in both science and religion than most theists.
> I would always encourage atheists to investigating with an open mind, and to learn more about their system of beliefs, and more about logic.
Thanks. I'd expand your encouragement to study one's system of belief and logic to include not only atheists but everyone; surely the ranks of atheists will then continue to swell.
As an aside, it would help if you could break down your writing in paragraphs.
Science is merely a set of tools and processes, nothing more. Seriously, NOTHING more. If, as GP says, you are taking it on faith that some "scientific conclusion" is true, you're merely deferring expertise on the matter. Much like when your tech support guy says your hard drive is shot and you need to replace it, you just trust that he's more likely right than not, and take his advice. If you didn't want to believe him, you could go back to school to learn everything there is to know in the field and find out for yourself whether or not your hard drive is salvageable. You may even come to a different conclusion!
With Religion, there is nothing you can do to understand it. You can only "have faith". If your spiritual leader or holy book tells you the world is a certain way, you can only believe it, or disbelieve it, you cannot do anything to investigate the veracity of those claims. Truly, this is the most authoritarian of world views: Believe this because I tell you to!
There's a few thousand years of Talmudic scholarship you might want to read up on, just to pick one of the more obvious from among the many counterexamples.
Almost every religion is a much broader and more intellectually diverse project than their respective fundamentalists would like you to believe.
Just because people spent a long time debating the interpretation of their holy book (and accompanying oral tradition) does not make it any less grounded in the supernatural, at the end of the day. Followers still have to either believe it, or not, where "it" is that some divine revelation inspired those original sources.
Religion, broadly speaking, aims to answer a variety of questions; most foundationally but least importantly "how does the world work?". And science indisputably provides better answers, which is why many theological schools of thought are more than happy to cede their authority on the topic. It frees them up to focus on the more important question: "given the world, how best should we live in it?".
It turns out that narrativizing and anthropomorphizing the world around us provides an immensely powerful framing device for thinking critically about community values and for approaching consensus on what those values should be, both across cultural lines and between generations.
In other words it doesn't really matter whether the Torah was divinely inspired or not. It's the grain of sand that the Talmudic pearl accreted around, no more, no less.
I agree. I take a fundamentalist interpretation approach to the Bible. I firmly believe that science must jibe with my belief-set. If it doesn't, there are serious questions that I need to answer.
Personally, this calls into question things like evolutionary theory, or creation theories that don't match what I can extract from the Bible.
I don't purport to know everything, nor do I think I'll ever manage to answer all the outstanding questions. I also don't claim correctness on either side of the question. What is important for me is cognitive resonance.
> Religion, broadly speaking, aims to answer a variety of questions; most foundationally [...]
No. I would counter that religion (in general) seeks to answer the "why" of it all.
> It's the grain of sand that the Talmudic pearl accreted around, no more, no less.
"Seek and you will find..." in this case, we go looking for answers and eventually, (sometimes) in matters that can't be handled scientifically, we settle on suitable answers.
It's not clear the Bible has much to say about these at all.
Fundamentalists insist that it does, but there are many other strands of Christian thought that disagree.
If we assume there is a ground truth (as an axiom, inductively suggested by persistent results from physical experimentation), then we still can say nothing of the God that created it (this, all of this expansive and perplexing existence). In fact, science as it stands cannot answer questions that we cannot test.
How do you test for the existence of a being that is not a part of what we live in? Even if the God overlaps with our existence (a core tradition of Christianity being omnipresence) what test can we leverage to show He is here?
All of this leads to the dispersed opinions of many, many people. We have supposed two things so far: there is a God, and there is some absolute representation of our universe (above I called it existence).
Given ground truth, how might we approach the texts that are said to explain God? As a society (my guess is many people do this) we tend to overlay what we "know" onto things we don't understand.
For example, in Genesis we have words like erev and boker, which are translated consistently as "morning" and "evening". But it's ancient Hebrew; we don't actually have the meaning as it was intended. We have lost the societal context that informed the deeper meanings of those words. So we guess. We think they mean morning and evening. But that's at best ambiguous.
They could mean "order" and "disorder", as they are used in other contexts to mean that.
I'm sure I could meander about and come up with a logically sound chain of interpretations, but I only left this here to illustrate just how difficult the interpretation of simple things can be.
If you're Orthodox, both the written Torah and the Oral Torah (Talmud) were capital-R Revealed by a supernatural authority at Mount Sinai.
Yes, the study notes for the book were revealed with the book. If you find this confusing, well, go pick up Rudin's Principles of Mathematical Analysis.
I don’t think that’s true - I think an atheist can gain a lot of understanding about human nature by reading religious works.
> you cannot do anything to investigate the veracity of those claims. Truly, this is the most authoritarian of world views
In practice that is true for science too. I take it on faith that scientists are trustworthy and doing their best to uncover truth.
As an example, Leviticus is fascinating if read as "here's the political structure of a herding culture". It's visibly more of a constitution than a religious text, including the allocation of taxes, a permanent administrator class, a penal system, and a statute of limitations on debt.
Sheer nonsense. The Enlightenment, the greatest period of scientific advancement in human history, was driven by people who saw it as their sacred duty to understand God's Creation. Atheism in the form of the Soviet Union dragged us backwards.
Nonsense - that ignores that fundamentally science is a human activity. Because it is a human activity it is open to the same biases, self-deceptions, fads, groupthink and political pressures as any other human activity.
Apart from that, implicit in your defense of science and attack on religion is a belief that true knowledge can only be attained through the scientific method, and nothing which is true can be acquired outside of experiment and observation, which itself is a faith based (though pragmatic) argument.
That being said, I make no claims for the existence of a God, Abrahamic or otherwise. At the same time you could be a little more circumspect and humble about what you know to be true.
The scientific method IS the best tool our species has when it comes to determining truths about the laws of nature. In fact, it's practically the only tool we have. Almost every major advancement our species has made has been because someone had a theory, figured out a way to test it, and came to a conclusion with some degree of confidence that they were then able to act upon and share with others.
Since this is a thread about philosophy, "truth" (and how one determines it) is extremely relevant. Epistemology is a whole branch of philosophy. If you're truly skeptical, one could even argue that true knowledge cannot be attained at all, and all we can hope to do is arrive at conclusions with as high a degree of confidence as possible.
My point is that science offers a way to increase confidence in a conclusion, religion does not.
It's as simple as that.
But I also believe freedom is better than slavery. I value wisdom over ignorance. And I believe those things without any proof. The scientific method cannot prove to me that murder is wrong, and yet I hold it to be true.
So despite the fact that I agree with the power of the scientific method, I'll cop to truths that I believe without proof.
The study of evolution gives a mechanism through which a sense of morality is 'hardwired' into us for its adaptive benefits. Or, less teleologically, the study of our own brains can provide evidence for us as humans just not liking certain things.
Additionally, a technique like group simulation could show that allowing murder is against the average individual's interest.
I'm not claiming I come to my beliefs through deliberate scientific reasoning, but I do think that science has important things to say about most things.
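To make the "group simulation" idea above concrete, here is a minimal, hedged sketch of what such a model might look like. Everything in it is invented for illustration (the agent count, the rounds, the killer's payoff) and not taken from any actual study; it is a toy agent-based model comparing average individual welfare in a population that permits killing against one that forbids it.

```python
import random

def simulate(allow_murder, n_agents=100, rounds=50, seed=0):
    """Toy agent-based model: each surviving agent accrues 1 unit of
    welfare per round. If murder is allowed, one random agent per round
    kills another for a small one-off gain, ending the victim's accrual."""
    rng = random.Random(seed)
    alive = list(range(n_agents))
    welfare = [0.0] * n_agents
    for _ in range(rounds):
        for agent in alive:
            welfare[agent] += 1.0       # welfare from surviving this round
        if allow_murder and len(alive) > 1:
            killer, victim = rng.sample(alive, 2)
            welfare[killer] += 2.0      # the killer's one-off gain
            alive.remove(victim)        # the victim accrues nothing further
    return sum(welfare) / n_agents      # average welfare per individual
```

Under these (admittedly loaded) assumptions, `simulate(allow_murder=False)` yields higher average welfare than `simulate(allow_murder=True)`: each individual killer profits from each individual murder, yet the average individual is worse off in the society that permits it.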
>I think that the idea of a singular thing called "truth" is actually fairly harmful.
I disagree with that.
You believe these things on the same kinds of evidence you use for believing the sun will rise tomorrow.
You employ a rhetorical move that some theists have used in the past -- namely, casting doubt on the veracity of scientific beliefs by painting them in the guise of religion.
This is also a convenient move that lets you characterize huge swaths of academia that disagree with you by questioning their psychological motives for holding the beliefs they do.
And now that philosophy and science are reduced to priesthoods, what makes them better than religion? After all, most atheists "have some personal, emotional reason for not believing in God rather than a well-thought out chain of reasoning; but most people believe things for personal and emotional reasons, so it isn't surprising." As if the motivations behind a given belief have any ramifications on its truth value.
This effectively allows you to justify belief in God without actually making any arguments for it -- as if in the hypothetical vacancy of science and philosophy we're left with no choice but to default to theism.
I would encourage you to investigate the philosophy of belief and God in more detail, and to be "open to discovery of truth even when [you] may not like it." There is a great deal more literature on the subject than the pithy writings of Dawkins et al (of whom I am no fan) that deal with the subject in much greater detail, and with much more nuanced arguments than you're giving the entire discipline of philosophy credit for.
Also, for the record, I am not an atheist or an agnostic. I am not a philosopher, and I didn't even major in it in college. There's a lot of great philosophy out there I haven't even tried to tackle. But I appreciate its value.
I think it's clear you're arguing in bad faith. And I'm disappointed that such an argument is the first thing that comes up on an article like this on HN.
You think he is commenting on the scientific method but he is really commenting on human nature.
People really are treating Science as a kind of religion.
Understanding this motivation is important because it shows the precarious position we are in: any event which causes a loss of faith in “Science” the religion could quickly turn our culture away from its relatively pro-science stance.
Maybe I totally misunderstood OP, but it seemed like they were conflating the state of popular discourse with the state of academic philosophy.
As an aside, I don’t think that people are likely to stop trusting science or the scientific community because of the state of discourse surrounding science. I think that people are much more likely to distrust science simply because they don’t like certain findings or it’s not politically aligned with them, imho. At least in America.
Are you not doing the same thing back to him?
If you disagree, then please tell me the definition of science.
2) here are several definitions: https://www.merriam-webster.com/dictionary/science
2. The question is whether you can show that what the person you're replying to is arguing in favor of is not science. Pasting some definitions proves neither that you understand them nor that they apply.
I think we found a point we can agree upon!
"Scientists" use those processes to attempt to arrive at true conclusions. This can be very difficult.
Over time, "science" (the nebulous institution) can build up enough accrued "truths" that it can make interesting conclusions with a decent degree of confidence. (Note: scientific studies that are reported by clickbait headlines are not what I'm talking about.)
The point is, that you can personally go learn how it all works, follow the studies, and attempt to replicate them, even, to arrive at the same conclusions. Or, if you find issues along the way, you can point them out and contribute to the corpus of "scientific knowledge" by removing an invalid "truth".
If you want to define science as that which uncovers truth, then, for that to be true, you must be able to define truth without resorting to tautology (that is, there must be a way to verify 'truth').
Excluding a subset of what's false doesn't necessarily lead you to discovering or being able to accept (integrate) the truth.
From there the practice of science is repeating and testing claims about nature, and communicating these tests through various social processes such as peer reviewed journals or running through the streets naked shouting "Eureka!"
No, science is the practice of using the assumption of consistent laws of nature to build predictive models. In a way, it is more an elaborate extended test of the utility of the assumption than a belief in it (though obviously belief may support the practice and observed utility of the resulting models may create or reinforce belief.) But belief in the assumption is neither necessary to nor sufficient for science.
They didn't say that. They said science was a collection of tools and processes. Obviously tools and processes exist that aren't a part of that collection.
That doesn't mean that red isn't a colour because there are colours that are not red.
Seriously, people question if time exists or how many dimensions there are. Consensus is what allows medicine to operate and people to build computers etc, but it’s not accepted Truth.
People like to simplify Science down to the hypothesis > experiment loop. But in practice, simple observation is often all that's used in fields like Botany.
Ex: Philosophy is not science because it does not work. You can’t argue from first principles and end up with a modern CPU.
I'm slightly amused by this statement, as modern science is descended directly from philosophy. Indeed without philosophy, I could easily argue there is no science as we know it.
Nowadays we call the useless bits philosophy; 200 years ago, back when people took phrenology as a real thing, that was not true.
That's just it though, science is natural philosophy. That doesn't get changed by modern terminology at all.
Your personal definitions of science and philosophy don't override the overall usage of the terms. Feel free to encourage their use, but know those definitions aren't the overall accepted definitions in use.
Anyway, it's really not my definition of Science. If you want to quote Wikipedia: "Science (from Latin scientia, meaning "knowledge") is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe." (https://en.wikipedia.org/wiki/Science) Notice the lack of specific implementation details; that's what I am referring to.
"The earliest roots of science can be traced to Ancient Egypt and Mesopotamia in around 3500 to 3000 BCE" well before natural philosophy. The Greeks are given a lot of credit, but they the ideas are much older than them.
It seems incredibly unlikely someone would cite the popularity of agnosticism to establish the certainty of atheism, so I am fairly confident that was not the move he was making.
Looking back at the prompt:
> "Your colleague David Chalmers has fretted that “there has not been large collective convergence to the truth..."
Pretty sure he was just giving a counterexample: a convergence which he happens to believe to be true, along with compatibilism for free will. He's not making some appeal to popularity to resolve the issue.
After reading it I'm not even sure whether he considers himself an atheist or agnostic.
You probably mean the colloquial use of agnostic for negative atheist (the position of someone who does not believe that a god/supernatural being exists), in contrast to positive atheism, the position of someone who believes that there is no god/supernatural being.
There is no problem with this colloquial use, I just wanted to clarify that the "professional" terminology does not match it.
I was just quoting someone else's use, I didn't really mean anything by it.
I've read about Antony Flew's framework, but it still seems like unnecessary architecture to me, since I still haven't encountered situations where lay terms fail to describe someone's position in a reasonable way.
But you know, come to think, if there's one thing religion has taught me-- people can self describe however they like, and there's no sense fighting them on it. If people want to be called just plain atheists, or negative atheists, or double plus agnostics, or pastafarians, that's fine, I can go with whatever everybody's preference is.
Presumably he wouldn't cite the convergence towards atheism as an example of convergence to the truth if he'd consider it mistaken. So that assumption is safe, I'd think.
Same could be said of agnosticism, which he also specifically mentions.
Probably an atheist, if you're just asking which position things were more likely to "converge" towards. I was really just surprised at brobdingnagians' confidence in Maudlin's religious views, based on a passing reference to the entire body of philosophers. From that inference brobdingnagians later levels a charge of hypocrisy, because elsewhere Maudlin discloses an unevidenced belief when pressed on another (nonreligious) matter, without even indicating how strongly he holds that possible hunch.
Thought it was worth an aside that brob might not have perfectly pinned Maudlin down.
Faith is when it's literally unprovable by anyone, but you still believe it. It's completely different.
Your terminology of trust vs. faith is great. I will incorporate it into my arguments in the future.
1. A belief that the scientific method will definitely approach reality (a realist/pragmatist claim)
2. A belief that the current consensus is right - (but it could be disclaimed)
3. A belief that science will explain everything, so one could discard other studies, such as philosophy or theology.
Each of these positions can be disputed and may be wrong (especially since they were not acquired by the scientific method, but arrived at by philosophical thought), so to some extent you need some "faith" to accept these statements to be true.
No scientist believes position 2 in general. Regarding particular subject matters and well-confirmed theories, on the other hand, it's pretty much uncontroversial. No cell phone would work and no plane would fly if the underlying physical theories were flat-out false.
Position 3 is probably analytically true, too. The idea that there is something that can in principle not be explained by taking a close scientific (=skeptic, methodical, experimental, precise) look at it hardly makes any sense.
My point 2 was not supposed to mean that current theory is wrong, so much as that there (most likely) will be a more accurate theory that replaces current ones. No-one disputes that (non-Microsoft) cell phones work. The problem lies on the philosophical side: Newtonian mechanics caused a big problem in that it was quite strongly suggestive of determinism. But newer underlying theories, such as QM, make the situation more difficult and complex. Assuming the current or fashionable theory of the time is correct is very likely to lead to a temporary or fashionable philosophy.
Position 3 - Science is an empirical study of reality - it cannot in any way explain why it is not some other reality, for example. If it is governed by logic/maths, unless every mathematical construct has physical existence (which is a rather weird and unrealistic idea), you cannot explain why this mathematical construct has existence and another does not. You may argue that other studies may be pointless, but you definitely cannot claim analytically and for certain that science can ever explain everything - so this is the point at which you are bringing in an unreasonable faith in this case. And science without study of philosophical implications is rather shallow and cannot give a reasonable mind satisfaction (not that philosophy is satisfying).
FWIW, and I'm not an expert, but I'd say that the Popperian view you summarise reflects the philosophy of science of circa the 1960s, and things have moved on since then.
As an introduction, Alan F. Chalmers' What Is This Thing Called Science? is good. It covers Popper in chapters 5 to 7 (Introduction & Naive Falsificationism, Sophisticated Falsificationism, Limitations). Then it moves on (as it were) to Thomas Kuhn's The Structure of Scientific Revolutions, Imre Lakatos' Falsification and the Methodology of Scientific Research Programmes (great short read, btw, applying those ideas to maths: Lakatos' Proofs and Refutations), and Feyerabend's Against Method. Chalmers then covers Bayesianism, new experimentalism, the nature of scientific laws, and scientific realism.
They've all pointed out problems with Popper's demarcation criterion. Not sure there's really a good catchy replacement, though. It seems to amount to "it's complicated".
One of the problems with Popper's account is the following: an observation is never about only one hypothesis, but about it and a whole set of auxiliary hypotheses. So, if your observation does not line up with the prediction, do you throw out the theory, or some auxiliary hypothesis? (Eg, Uranus didn't move as predicted, so Newton's physics was wrong. But wait, it was not a problem with Newton's physics, but there was another planet, Neptune.) Quine's notion of a "web of belief" captures that quite nicely - you don't just take one hypothesis, test it, and drop it if falsified, but you adjust the whole edifice such that it becomes coherent again.
Not to contradict anything you've said, but my original message above was in response to something very different than I think you're talking about. I see a lot of comments where people seem to treat Science as some unquestionable list of facts and treat Scientists reverently. It's like they've made a religion out of it and don't dare question the priests - which is ironic/silly when they're refuting conventional religions.
Anyways, maybe my current view is a little too hard line on good science relying on falsification, but I still reject the notion that any scientific belief is verifiable / provable. A given theory may be the best we have, but a new piece of information could change it radically.
All I wanted to add is that things are more complicated than just that one demarcation line, and philosophy of science has not stood still over the last half century.
What I mean is, people don't find an experiment about nanostructures that doesn't work and start going "hey I think I have disproved quantum mechanics!". Even when the OPERA faster-than-light neutrino debacle was going on, physicists were largely saying "We are pretty sure that there is some mistake in either the model or the experiment such that these neutrinos are not moving faster than the speed of light." In fact the objection goes a little further than that: according to Newtonian mechanics, geocentrism is perfectly admissible. There is absolutely nothing wrong with constructing a geocentric frame of reference and doing Fourier expansions of the motions of planetary bodies. No experiment has disproven it because it's just a mathematical choice of accelerated reference frame to analyze the motion of the planets. According to unmodified Popper, each of the first two should have led to a rejection of the scientific principles which led to them, while the latter would mean that geocentrism and heliocentrism are pseudoscience and there was never any scientific switch between them -- all of that sounds wrong.
So Kuhn introduced the idea that there is a separation of science into two parts, "theory" and "model". A scientific theory like quantum mechanics or heliocentrism is a platform for building models and deciding what questions are worth asking and how one goes about asking them. They are a "platform for computation" in a sense, and most of them are "Turing complete," there is nothing that they can't model somehow. So classical mechanics turns out to be able to do something with quantum mechanics if we use something like Bohm's pilot wave theory. And Couder and Fort's droplets on a vibrating oil bath show an experimental realization of particles which nevertheless diffract in a classically explicable way, underscoring this point. Kuhn said that theories need to be abandoned during some sort of "scientific revolution" but was very hazy on how exactly that happened. But he was a huge fan of Popper and wanted to say that Popper was fundamentally right about the way that we model systems, discarding models immediately when they do not fit experiment and coming up with better models.
Kuhn picked up a lot of flak because one of the things Popper's works were trying to do was to discredit things like psychoanalysis and astrology as "pseudoscience" rather than real science, because they could explain everything and thus never stuck their neck out. Thus Kuhn's work seemed to need some extra structure about the manner in which we actually conduct such a revolution; otherwise astrology might not be a pseudoscience but an "eventual science" or so: if theories are just some sort of aesthetic agreement on behalf of the existing scientists, then what stops us all from deciding that we rather like reading our weekly horoscopes?
This challenge was to my mind best resolved by the "research programmes" idea of Imre Lakatos. He philosophizes that theory choice -- fundamental progress in science -- is best seen as motivated by lazy grad students. Like, laziness is a virtue on this account: grad students have to make a contribution to the published literature that excites their peers and makes a name for themselves, and they do not have much time to do it.
So, why do people use the Copenhagen interpretation for everything if very few people philosophically accept its ontology? Because it is mathematically equivalent to all of the other interpretations but is astonishingly easy to use, just "yeah the wavefunction collapsed so now this is reality, I don't strictly have to care about that collapse happening across spacetime instantaneously because that's not observable anyways, so here are my experimental results." Lazy grad students will choose that ten times out of ten over coming up with the correct pilot-wave mechanics and simulating it. Why did heliocentrism win if Newtonian mechanics says that geocentrism is 100% experimentally valid? Because the heliocentric models are easier to build and reason about with straight mechanics, and lazy grad students will take Newton's law of gravity any day over those epicycles.
You can in some respects view this as Occam's razor but Occam's razor is painfully ill-specified. A better view of it is that it's a survival-of-the-fittest, a theory of scientific evolution. So, theories are "genes" which make it easier or harder to publish interesting discoveries that are modeled with those theories in scientific journals. Based on others reading those papers and extending those results in various ways, theories "reproduce" and the ones that reproduce most effectively are the ones that best adapt to their (ever-changing) environment.
I don't agree with their arguments. Just pointing out that not everyone thinks faith is this Kierkegaardean leap.
Jewish and Christian scriptures also don't define faith that way, for the most part. The Gnostics would have been more on board.
Faith in science is not an unreasonable assumption—says me who holds that assumption—but it's faith nonetheless.
One would have to constrain the definition of "the universe" to a tautological one in order for science, the careful use of rigorous empirical analysis, to be the best at investigating it.
We use other forms of analysis to study different things, forms that may share some aspects with science, but aren't quite the same. Some things just aren't amenable to scientific study. The humanities, for instance.
Faith is when one acts on a belief without having any rational reason for it. My rational reason for believing in science is simply: it works!
Argumentum ad consequentiam or appeal to consequences.
Once you let go of thinking that truth is knowable you'll realize that there is some faith that you have.
It's not in science per-se; but it might be faith that you exist. That you can reasonably conceive of thoughts. That those thoughts order the world and that the world that you perceive is a representation not unduly clouded. That the actions of your being are guided by your own will.
Truth might not be knowable in an absolute sense, but you can use these little starting points of faith then come to appreciate that there is a truth continuum and science is a good approach to discovering knowledge.
Philosophy seems so stupid before you start learning it, but it really is beautiful and I regret not being able to pore over it more.
What characterizes an AaC is the following structure: if X were true then the consequences would be good/bad; therefore X is (probably) true/false.
So if eli_gottlieb were saying "I believe in science because it would be lovely if science were right", that would be an AaC. But that's not at all what he's saying.
The form of his argument is more like this: "by assuming X we can arrive at a bunch of conclusions, and they consistently seem to be correct ones; therefore X is probably true". And that is (a rough sketch of) a pretty good argument.
(There are nits to pick -- more with my paraphrase than with Eli's original. For instance, science isn't a bunch of propositions so much as it's a way of trying to find truths, so "believing in science" isn't so much "believing that X" as "endorsing X as a method". And "... therefore X is probably true" isn't quite the right thing; "... and this constitutes evidence for X" is better. Etc.)
In the seems.
But: If you define "faith" broadly enough then sure, everything with any element of belief to it involves "faith". Derive everything rigorously from axioms? That means having "faith" in your axioms! Do literally anything else at all? That means taking a step at some point that doesn't have a watertight logical justification, so it's "faith"! -- So, sure, Eli's argument (or at least my sketchy reconstruction of it) involves an element of "faith", just like literally every other argument anyone could ever possibly make.
I guess this is what you mean by denying "that truth is knowable". If by "knowable" you mean "... with literally no room for any sort of uncertainty" then I vigorously agree with you -- but I have the impression (perhaps wrongly) that you were offering this as some sort of correction or education for eli_gottlieb, and I see no reason to think he needs it. All he's said is that he believes in science because it works, and that is perfectly consistent with taking "believe" to mean "accept as our current best approximation, to the best of our knowledge, to the truth, with the understanding that later discoveries might change our opinions greatly" or something of the sort.
But my core argument here is that we have no starting point. If you want to call them axioms that's fine, but I'm not here to quibble about language. Even if we had no experience of senses and minds leading to erroneous perceptions and thoughts, we would still have no way to prove that our minds are reliable. At best we would have smaller priors. But quite the opposite, we routinely encounter evidence of erroneous perceptions and thoughts. In dreams, lunatics shouting on the street, or our seemingly invisible optic nerve.
The problem with saying:
> Faith is when one acts on a belief without having any rational reason for it. My rational reason for believing in science is simply: it works!
is that it first defines faith incorrectly, and then appeals to its outcome as a measure of its justification. Theologians manipulate their mental state with their own methods. When they say they encounter manifestations of the divine, and claim a rational reason for their religious beliefs by saying "it works!", they're doing the same thing.
Only by humbly admitting that our starting point requires some first acceptance can we then make progress. If you want to take it as an axiom that your mind is reliable, that's fine for you, but it isn't something that's necessarily true. I love Russell, but when he tries to break out of this by calling solipsism a self-refuting idea he's being intellectually dishonest. He was more honest when he was younger and he said solipsism is boring. And there we fully agree.
 Faith is about mental state, not action. From Google dictionary: "Complete trust or confidence in someone or something."
 With the axiomatic qualifications / faith.
And the rest of us are saying: you don't need a foundationalist epistemology to have an epistemology.
I'm arguing that mere truth is unknowable without faith. We have evidence of derangement all around us and it takes faith to proceed as though our actions are our own and are unclouded. From a bayesian perspective, one of our priors must be that there is some probability that the very mind that is doing the thinking may be of a nature that is unable to effectively reason. But there is no escape because the mind is the centre of all investigation.
Does the infantryman's faith in ballistic charts affect where a mortar will land? Yes, it turned out that Newtonian physics doesn't describe absolute truth, but it sure is useful if you need to blow up that enemy over on that hill.
> one of our priors must be that there is some probability that the very mind that is doing the thinking may be of a nature that is unable to effectively reason.
An interesting thought experiment, but what practical use can I apply it to?
btw I'm not implying that only lines of inquiry that result in practical uses are worthwhile, but, science for the most part is related to this.
I guess there is a distinction to be drawn between faith and blind faith. "Faith" in the atomic model allowed prediction of elements that had not yet been discovered. That faith was rewarded when the elements turned up exactly where expected (even though the orbital model of electrons turned out not to be the absolute truth). "Faith" in Einstein's theories required a belief in time dilation. Turns out we can't have GPS without accounting for it.
This is quite different to, for example, religious faith which so far has not had any of these predictive powers.
Could you please define "mere" truth?
>From a bayesian perspective, one of our priors must be that there is some probability that the very mind that is doing the thinking may be of a nature that is unable to effectively reason.
Nonetheless, that probability can be very low, putting us firmly in normal epistemological territory and not requiring any "faith". In fact, millions of people regularly make the rational, but clouded, assessment that they are too drunk to drive, and take a taxi home.
(Also, from a Bayesian perspective, the form of the generative model and priors is up to the experimenter. From a formal epistemology perspective, you can either try to designate some optimal universal prior, or you can take a variational approach and simply say you want to minimize KL divergence of your predictive posterior from your available data.)
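The Bayesian point can be made concrete with a toy update. All the numbers below are hypothetical, chosen only for illustration: even after catching yourself in one clear perceptual error, a low prior on "my mind is globally unreliable" stays low.

```python
# Hypothetical numbers: a Bayesian update on H = "my reasoning is globally
# unreliable", given evidence E = "I just caught myself in a clear error".
prior_h = 0.01          # P(H): assumed low prior of global unreliability
p_e_given_h = 0.9       # P(E | H): an unreliable mind errs often
p_e_given_not_h = 0.1   # P(E | not H): reliable minds still err occasionally

# Total probability of the evidence, then Bayes' theorem.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

# One observed error nudges P(H) up (from 0.01 to roughly 0.08),
# but it stays far below the point where "faith" would be required.
```

The design point is just that evidence of occasional derangement updates the probability of global unreliability without ever forcing it near certainty, which is the "normal epistemological territory" claimed above.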
My great grandmother's nickname was, apparently, Jennie. No official records note this fact. But it's something told to me by my mother, who knew her. And because of everything I know about my mother, I have faith that she is telling me the truth. Without any evidence except my mother's word, I can't prove it, but I don't think it's irrational for me to believe it.
It's not irrational because it was never particularly implausible in the first place. If there was an apparent law of the universe saying your great grandmother's nickname cannot possibly be Jennie, and you insisted that your mother told you and that's that, you'd be talking about "faith" on the level usually addressed in the disputes between religious and nonreligious epistemologies.
Knowing your great grandmother's nickname doesn't significantly affect your life, and assuming your mother isn't a compulsive liar, there's little reason to doubt her. Furthermore, it also strikes me as the kind of information that would probably be passed down intact because it's easy to remember.
complete trust or confidence in someone or something.
"this restores one's faith in politicians"
So you may want to reconsider whether you are using a different definition of "faith" from the one you were reading.
> cites a consensus in personal belief in atheism and agnostics [sic] among philosophers [...] as a good reason for philosophy not accepting the existence of God
because that isn't what he's doing at all.
The interviewer asks him what he thinks about David Chalmers's lament that philosophy isn't converging to agreement on the truth on big questions like the existence of God. (Note: Chalmers's actual paper on the subject, which you can find by chasing a couple of links from the OP here, puts the existence of God in its list of big questions but concedes later on that on that particular question there is convergence. So the interviewer has misrepresented him a bit.)
He responds by saying he doesn't agree: that philosophy is converging to agreement on the truth on the big questions the interviewer asks him about.
Of course he doesn't go into why he thinks the positions he says philosophers are converging to are correct. The arguments on any of them could fill many books, and have in fact done so. The interviewer asked him a short question about philosophy converging to the truth; he gave a short answer to that question.
[EDITED to add:] After writing that, I notice that FabHK's reply points out the same thing. I'm leaving this here in case the extra detail I give is helpful.
I found this statement really fascinating: as you said "Of course" I take it that you presume that the belief in some number of gods to be innate? Do you think it an evolutionary advantage and manifestation?
The part I elided (many atheists exhibit an essentially credulous belief in "sience" without knowing what it is) I have observed too (though how many, I don't know), but humans are, you know, humans.
And this is only about people who argue the issue -- what about those who simply don't care and never talk about it?
Atheists/agnostics who do not understand science, yet believe it anyways likely do so because they are confident that they, or anyone else, could understand it if they put in the necessary effort to do so.
The same can not really be said of theists/deists. I'd be happy to be proven wrong here, of course.
I think "Of course" is used because his beliefs, which follow, may be expected by the reader based on his previous statements.
They view the creation story in contrast with other creation stories of the religions of the time. Notably, that there is a single, eternal God directing all things for good rather than a pantheon of gods in conflict.
quoting OP because you kind of just confirmed his point. no, we do not know that the Universe is 13.7 billion years old, that we’re all made of the stuff either made in the Big Bang or inside of the stars. these are theories, the most fundamental axioms of which still escape us, if they exist at all. their success should not be turned into the deceptive illusion of having reached true knowledge.
I was last in an academic context with philosophers of mind in the mid-90s, at which point Searle's Chinese room was already considered a tired argument (it was seen as an argument to incredulity). The hard problem of consciousness and qualia seem odd to present as worth 'leaving behind': they are surely even more pressing as synthetic thought becomes more capable. More than one of the other areas you suggest were certainly active research in the 90s. (Though I was on the periphery, looking at evolutionary engineering, the research group that I was part of contained both scientists and philosophers of mind and was focused on cognition and affect, most definitely including rewards, agents, learning and representation). So I wonder whether the issue is what you hear about rather than what philosophy is being done.
It's been an incredible, fate-changing 50 years. The last 5 years? Really good targeted advertising, computer vision successes passing for "machine learning" and some tentative new medical treatments.
Also: the replication crisis throwing much of science in doubt. If anything we're in a time of stagnation punctuated by vaporware commercial hype.
Embodiment, to pick one of your examples is a huge topic in philosophy: https://philpapers.org/browse/embodiment-and-situated-cognit...
Also, there is me. I have a PhD in philosophy and work towards an MPhil in CS to cover the areas you mention. Some philosophers are on it, trying to keep up.
In my view even using the word 'consciousness' is a bad thing because it is not well defined and has too much baggage (some people relate consciousness to spirit or soul, for example).
Instead we should use different concepts, the ones I enumerate above. They cover sensing, prediction of future rewards (emotion), acting and learning. All are concrete and well defined, with possible implementations in AI.
"Too much baggage" isn't particularly well defined, though, nor is it a strong argument to say that some people relate consciousness to spirit or soul (some people also use terms like "momentum" or "norm" in imprecise ways but it doesn't keep practitioners for whom those terms have precise well-defined meanings from deploying them effectively).
"we should use different concepts... All are concrete and well defined"
Or perhaps we should use concepts from clockwork engineering or mill machinery. Very concrete and well defined.
But what I took objection to was the suggestion that philosophy is or should be primarily interested in clearing up some of the conceptual problems attendant to contemporary computer science and engineering.
I'm sure that's one interesting area of philosophy, but it is only one, modest area.
There is the philosophy of aesthetics and literature, of ethics and politics, of religion and science, of epistemology and knowledge, of the philosophy of history and the history of philosophy, and so on.
From what you said about existing philosophy ('Chinese room', qualia and 'hard problem' of consciousness) and the philosophy you would like to see (as doing the same thing as AI but worse), you completely discarded what the vast majority of what philosophers - now, and most certainly in the past - have actually done and been interested in.
That is why I said you had a parochial view of philosophy.
All that aside: I would be interested to hear how you think AI should change the philosophy of language and meaning.
The course can properly set the perspective of RL and shed new light on the philosophy of mind, if you take what it says and then extrapolate to humans and other agents.
What I find fascinating about RL is that it can be defined concretely. Consciousness can only be defined by reference to other words, in a less exact and concrete way. RL can also explain how meaning appears, based on future reward prediction. The rich sensations we have can be explained by an encoding-decoding architecture based on reconstruction error. Many difficulties in RL map back to difficulties humans have in choosing how to act - the exploration-exploitation tradeoff, instinct vs reason (two different ways to perform RL, one based purely on rewards and the other based on an internal model of the world and rewards). Some of the problems related to multi-agent RL are also covered in Game Theory, such as the prisoner's dilemma.
Regarding philosophy of language: we have today numerical representations of the meaning of words and phrases. They are usually represented as vectors in high dimensional space, or sets of vectors. For example, it is customary to use 300-1000 dimensions for representing words. These vectors have a nice property: the closer two words are in meaning, the smaller the angle between the two vectors. They are derived by trying to predict a word from its context, or vice versa, on a corpus of text several GB in size.
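A minimal sketch of the "small angle = similar meaning" property. The vectors here are made up, 4-dimensional toys rather than real trained 300-1000 dimensional embeddings, and the words are arbitrary examples:

```python
import math

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up toy "embeddings"; a real model would learn these from a corpus.
king = [0.9, 0.1, 0.8, 0.2]
queen = [0.8, 0.2, 0.9, 0.3]
banana = [0.1, 0.9, 0.1, 0.7]

# Words close in meaning should have a smaller angle (cosine near 1).
assert cosine(king, queen) > cosine(king, banana)
```

In a trained model this comparison is what lets you ask "which words are most similar to X?" by ranking the whole vocabulary by cosine similarity.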
Many ideas from the philosophy of language, such as the meaning of words being related to the 'game' being played (activity with a purpose), emerge naturally from successful AI models. I'd say that where philosophy had a glimpse, AI has a testable implementation that can solve real world problems. Where philosophy uses mere words, AI uses probability distributions and datasets to define such models. The brain is probably doing something similar.
Other things that AI has managed to do so far: to encode images into latent representations and back, to synthesise images. Same for speech - we have speech recognition and text to speech. Some modules used in neural networks are analogous to imagination, attention, memory, emotion, intuition and many other aspects of the mind. On narrow domains computers can already best humans at perception.
The piece that is missing in AI to match human level is the prior knowledge encoded in our genes. We have been optimised to learn and function well in our environment by evolution. That means our verbal areas in the brain have a notion of invariance to time translation, visual areas have invariance to space translation, and conceptual areas have an invariance to permutation. There might be more invariances but we just don't know yet and that's why AI models are not yet up to par with humans. But we can still learn a lot about ourselves by analogy with AI agents. And that's where I think philosophy should listen.
From the article: Yes (with qualification) and yes. Already in Republic (Plato again!) we have an argument—a clear and compelling rational argument—that even the highest political office should be open to women. The argument? List what it takes to be a good leader of the state, then note the conditions that distinguish the sexes. There just is zero overlap between the two lists.
IMO there is not much of interest in philosophy regarding AI (except for the thinking and imagination of AI engineers). AI is still just computations as far as we know.
From the article: Almost all believe in consciousness and most don’t have a clue how to explain it, which is wisdom.
Already in Republic
- First, thinking about what could lead to progress.
- Then, progress in the form of two lists that answer the question of whether women should be eligible for the highest political office.
Dreyfus was also pretty much right about everything AI-related from the 1960s on. It will take nothing short of a true self-driving car to refute something he said.
This is my opinion:
The essential blocks of Dreyfus are in Martin Heidegger (whom I also haven't studied in depth). Helping you obtain a partially understood, utilitarian version of my partially understood, utilitarian version of Dreyfus's partially understood, utilitarian version of Heidegger would be... enabling.
You really need to "spend time with the philosophy of other people" if you want to move ahead with the notions explored in those two links.
I have spent some time answering my own questions. But I have neither the motivation nor the time, and maybe not even the intelligence, to learn and understand the ideas of most philosophers.
First you think and imagine
The article defends philosophy (whether successfully or not I'll leave open) against the common charge of not having progressed much recently; Whitehead's famous "The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato" is perhaps the most popular form this criticism takes (whether that does Whitehead justice is another question I'd like to leave open here).
Here is a definition of philosophy that is much closer to the concept of philosophy the article defends: philosophy is what academics who work in philosophy departments do.
philosophy leads to science
> Philosophy is what academics who work in philosophy departments do.
IMO this is too restrictive. You reduce philosophy to a job title or diploma title. I guess many philosophers would not agree with this definition.
> I make a strong counterclaim: all progress in philosophy comes from progress in the hard sciences.
I agree that new knowledge and new abilities lead to new ideas and questions and concerns.
But philosophy is not limited to physics or chemistry, which are IMO the only hard sciences.
Physics and chemistry are hard sections within an open-ended spectrum:
- The beginning (1D view) or lower-level layer (3D view) is unknown. Is string theory correct? What are strings made of? Is mental consciousness the lowest layer? Is it all a simulation?
- The end (1D view) or upper layer (3D view) is soft science, like biology, sociology, psychology, economics.
I'd say that it is usually what we treat philosophy as when these discussions come up. As you said, it's probably too restrictive, and the truth is we're all philosophers to some degree, and all working off of certain philosophical underpinnings. But the article (and many philosophers) end up treating philosophy as the above poster describes ("what academics who work in philosophy departments do").
If we use a broad definition of philosophy then of course it is useful, and of course it's changed over time (most would consider that progress, some wouldn't). But this doesn't seem to say anything about whether or not the small subsection of philosophers who work as academics in philosophy departments have made useful contributions recently. The fact that they struggle so much with this question suggests that they might not have.
The difference between philosophers 'doing' AI and what e.g. DeepMind do is that the latter are precise enough (indeed as precise as possible -- pace the Church-Turing thesis) about their research hypotheses that they can measure and confirm/refute their hypotheses, unlike the former.
Whence all progress in AI since Turing, Shannon, Zuse et al has come from programmers and not philosophers.
You mean, all of the progress since the philosophers who laid down the foundations of the field?
I have proposed the following two definitions:
1. Philosophers in the original article: best understood as academics.
2. Progress in AI/maths/hard science: comes from those who actually "do the maths/implementation/repeatable measurement", as opposed to using natural language only for discussing their ideas.
In my opinion the purpose of all science is truth, and truth (pace Socrates and the slave boy) must -- among other things -- be reproducible by others, ideally by every human. Technology for truth has improved over time, with mathematisation (and edge-case programming and execution on a computer) as the current state of the art in reproducibility. When Frege succeeded in formalising first-order logic, the sacred heart of rationality, informal methods became second-class. All substantial progress in subjects formerly restricted to informal methods has since come from formalisation.
If you don't agree with my (1, 2) above, then that's fine; we are talking about (slightly) different things.
* the formalization and regimentation of natural language has always been a fairly central concern in philosophy (that's where formal logic comes from);
* mathematics can be, and used to be, done in largely natural language.
I invite you to think historically, and in terms of ongoing differentiation of science: the drive towards formalising/axiomatising mathematics which was started in earnest at the end of the 19th, beginning of the 20th century, has been accelerating. These days mathematics is partly verified in interactive theorem provers like Isabelle/HOL, Coq, Agda and Lean. A Fields medallist (Voevodsky) dedicated his post-Fields career towards more mechanisation of Mathematics. I predict that in 100 years from now, mathematics that is not formalised in a mechanical tool will not be publishable in reputable venues.
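For a taste of what mechanically checked mathematics looks like, here is a trivially small Lean 4 example (my own illustration, not drawn from the thread): a statement about natural numbers whose proof the tool verifies by appeal to a library lemma.

```lean
-- Commutativity of addition on the natural numbers,
-- discharged by the core library lemma Nat.add_comm.
-- Lean checks the proof mechanically; an incorrect proof is rejected.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Serious formalisations (the kind Voevodsky advocated) are of course far larger, but they bottom out in exactly this kind of machine-checked step.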
They still remain in a framework of axioms we made. This gains nothing, and what's more, many scientists used to know this. Everything you measure, you measure according to a ruler you or someone else ultimately made. Yes, numbers are more precise, but more importantly, they're just numbers. And like what Douglas Adams said about money: it's very odd how much revolves around numbers, seeing how it's not the numbers that are unhappy, guilty, and so on. Never bought into that, and always preferred the company that puts me in.
> And so in its actual procedure physics studies not these inscrutable qualities, but pointer-readings which we can observe. The readings, it is true, reflect the fluctuations of the world-qualities; but our exact knowledge is of the readings, not of the qualities. The former have as much resemblance to the latter as a telephone number has to a subscriber.
— Arthur Stanley Eddington, The Domain of Physical Science (1925)
> The danger of computers becoming like humans is not as great as the danger of humans becoming like computers.
-- Konrad Zuse
> But the moral good of a moral act inheres in the act itself. That is why an act can itself ennoble or corrupt the person who performs it. The victory of instrumental reason in our time has brought about the virtual disappearance of this insight and thus perforce the delegitimation of the very idea of nobility.
-- Joseph Weizenbaum
How would you measure something like nobility? Do things you cannot measure exist? Can things you cannot prove mathematically be true? Can they be right? Should a person who doesn't love wisdom, or people for that matter, even be allowed to program machines that decide over the lives of others?
In game theory (such as the prisoner's dilemma) there is a concept of cooperation and betrayal. When an agent interacts with another agent, she has to decide whether it is in her best interest to cooperate or to exploit the other. Depending on the social environment and the existence of future interactions with the same agent, the choice can change. A noble human would be one who does not betray the larger good for their own limited gain. Thus nobility emerges from the cooperation/betrayal strategy in a multi-agent game.
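The cooperate/betray tradeoff is easy to make concrete. Below is a minimal sketch of an iterated prisoner's dilemma in Python; the payoff values, strategy names and `play` helper are my own illustrative choices (the payoffs follow the conventional T=5, R=3, P=1, S=0 ordering), not anything specific from the thread:

```python
# Payoffs: (row player's score, column player's score)
# C = cooperate, D = defect
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # exploited
    ("D", "C"): (5, 0),  # exploiter
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(own_history, opp_history):
    """Cooperate first, then copy the opponent's last move."""
    return opp_history[-1] if opp_history else "C"

def always_defect(own_history, opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return each player's total score."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

coop = play(tit_for_tat, tit_for_tat)        # (30, 30): sustained cooperation
exploit = play(always_defect, tit_for_tat)   # (14, 9): one betrayal, then mutual defection
```

The defector wins one round and then both players grind along at the mutual-defection payoff, so over repeated interactions the cooperators come out jointly ahead, which is the mechanism behind the "nobility as strategy" point above.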
As far I can see von Neumann's theory of economic games has been the single biggest step forward towards a better understanding of ethics.
I am in awe of the empirical work in Neuroscience. The last few years have seen a "Cambrian Explosion" of new measurements. We can now measure live neurons at scale! I do think this work is also much more interesting than armchair thinking about the brain, consciousness, embodied cognition etc. However, as a working programmer/logician/foundations-of-maths person I'm in a much better position to compare and contrast formal work in my field with philosophers' contributions than I am in neuroscience.
Here's from the introduction to his paper Why Heideggerian AI failed and how fixing it would require making it more Heideggerian:
> When I was teaching at MIT in the early sixties, students from the Artificial Intelligence Laboratory would come to my Heidegger course and say in effect: “You philosophers have been reflecting in your armchairs for over 2000 years and you still don’t understand how the mind works. We in the AI Lab have taken over and are succeeding where you philosophers have failed. We are now programming computers to exhibit human intelligence: to solve problems, to understand natural language, to perceive, and to learn.” In 1968 Marvin Minsky, head of the AI lab, proclaimed: “Within a generation we will have intelligent computers like HAL in the film, 2001.”
> [...] As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in CS had learned a lot, directly and indirectly from the philosophers. They had taken over Hobbes’ claim that reasoning was calculating, Descartes’ mental representations, Leibniz’s idea of a “universal characteristic” – a set of primitives in which all knowledge could be expressed, — Kant’s claim that concepts were rules, Frege’s formalization of such rules, and Russell’s postulation of logical atoms as the building blocks of reality. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.
Dreyfus agrees with you, in a way, although where you criticize philosophers doing AI, he criticizes the philosophical prejudices of AI practitioners, who often hold beliefs derived from Cartesian views on the mind. He especially criticized the grand claims of early AI researchers, but I think the criticism is still easily applicable.
Here, for example, from his book Being-in-the-world:
> Having to program computers keeps one honest. There is no room for the armchair rationalist's speculations. Thus AI research has called the Cartesian cognitivist's bluff. It is easy to say that to account for the equipmental nexus one need simply add more and more function predicates and rules describing what is to be done in typical situations, but actual difficulties in AI—its inability to make progress with what is called the commonsense knowledge problem, on the one hand, and its inability to define the current situation, sometimes called the frame problem, on the other—suggest that Heidegger is right. It looks like one cannot build up the phenomenon of world out of meaningless elements.
For example, there's the discipline of epistemology, the study of knowledge. Then there are the actual working versions of it in science, which deal with all the hairy compromises and complications of reality. A car vs. a picture of a car.
Still, the contribution is that philosophers had a major role in creating the scientific method (e.g. Karl Popper).
"Modern meanings of the terms science and scientists date only to the 19th century. Before that, science was a synonym for knowledge or study, in keeping with its Latin origin. The term gained its modern meaning when experimental science and the scientific method became a specialized branch of study apart from natural philosophy.
From the mid-19th century, when it became increasingly unusual for scientists to contribute to both physics and chemistry, "natural philosophy" came to mean just physics... chairs of Natural Philosophy established long ago at the oldest universities are nowadays occupied mainly by physics professors. Isaac Newton's book Philosophiae Naturalis Principia Mathematica (1687), whose title translates to "Mathematical Principles of Natural Philosophy", reflects the then-current use of the words "natural philosophy", akin to "systematic study of nature". Even in the 19th century, a treatise by Lord Kelvin and Peter Guthrie Tait, which helped define much of modern physics, was titled Treatise on Natural Philosophy (1867)."
I guess I bother making the point, because that history was surprising to me when I first learnt about it.
Karl Popper obviously didn't create the method out of whole cloth, but the ideas he formalized and championed became a real platform. Nearly any scientist or science advocate (deGrasse Tyson, Dawkins...) dealing with an "is this science?" question paraphrases Popper.
But you have that problem in science too, e.g. quantum mechanics, string theory...
If anything, it is a large portion of scientists and engineers who are engaged in denialism about consciousness.
> Not in the least. As Bell said, study Bohm’s pilot wave theory and you see that everything can be explained perfectly well, with no funny business at all logically or conceptually. We are stuck with non-locality, as Bell proved, but maybe in the end you need non-locality for the deep simplicity of law that I anticipate.
I'm not a physicist, but isn't dumping locality a bit of a bigger deal than this quote suggests?
I think that Bohm's pilot wave theory is appealing to non-technical people because it replaces all the scary equations of quantum mechanics with a wave and a particle. The trick is that the wave has a scary equation, and the interaction of the wave and the particle has a scary equation, but you can hide the scary equations behind a nice animated graph of a wave and a ball.
For a nonrelativistic particle, the pilot wave theory is equivalent to the other quantum mechanics interpretations. The other interpretations can be extended to the special-relativity case; moreover, all the work of the last 50 (or more) years in particle physics uses the extended versions. Nobody knows how to extend the pilot wave theory, and perhaps it's impossible.
The appeal of Bohm is that it results in QM that is deterministic, without needing to resort to ill-defined notions of collapse.
I'm quite sure "hiding the scary equations" has absolutely zero to do with it, and it honestly comes across as offensive to suggest that it is.
Actually, the wavefunction collapse is a very well defined mathematical construct; so, if you are ignoring or trying to avoid it, you are literally "hiding the scary equations."
> The Copenhagen interpretation is the oldest and probably still the most widely held interpretation of quantum mechanics. Most generally it posits something in the act of observation which results in the collapse of the wave function. According to the von Neumann–Wigner interpretation the causative agent in this collapse is consciousness. How this could happen is widely disputed.
> Erich Joos and Heinz-Dieter Zeh claim that the phenomenon of quantum decoherence, which was put on firm ground in the 1980s, resolves the problem. The idea is that the environment causes the classical appearance of macroscopic objects.
Even John Bell of Bell's theorem suggested that the greatest problem facing physics was integrating the non-locality of QM with relativity, and the fact that pilot waves place this non-locality front and centre is a feature not a bug, because other interpretations can more easily sweep this under the rug only for difficulties to show up later.
Also, there are many perfectly fine relativistic extensions of pilot wave theories, so that claim is factually incorrect. They haven't settled on a canonical one, but pilot wave physicists get probably 0.01% of the funding of other physics work, so that's hardly surprising.
So, it is possible to define "free will" in such a way that it is compatible with determinism (e.g., "my will is free if its decisions are the product of my ordinary processes of thinking–even if those ordinary processes are determined by natural laws–as opposed to the product of any extraordinary external interference such as drugs, brainwashing, mind control implants, etc").
Part of the dispute is over whether such a definition of "free will" matches people's everyday pre-philosophical usage, and also whether it works for the purposes of other disciplines in which the concept of "free will" might be invoked, such as ethics or law. Compatibilists would say it is close enough, and can be made to work for those purposes. Incompatibilists say it is a radical redefinition, and undermines those purposes.
> Part of the dispute is over whether such a definition of "free will" matches people's everyday pre-philosophical usage, and also whether it works for the purposes of other disciplines in which the concept of "free will" might be invoked, such as ethics or law.
Fortunately, this is an empirical question that has been explored by experimental philosophy. It turns out that most lay people employ Compatibilist reasoning, so I think this question has also been answered.
 The latest that I've seen on the topic: https://www.researchgate.net/publication/274892120_Why_Compa...
I suggest you read the study and others like it. If you flat out ask people what they believe, with no further context, they tend to support incompatibilist intuitions. But when their responses to moral questions are tested, they employ Compatibilist moral reasoning, and agree with claims that are Compatibilist.
A compatibilist would say that for the purposes of discussion we describe this event as the exercising of will, even though from a physics perspective the outcome could be predetermined.
Is that correct?
If so that seems similar to a recent discussion on Bohr's view of the irreducibility of biology to physics.
A point worth adding – there are different forms of determinism. Physical determinism says people's decisions are predetermined by physical laws and initial conditions. Theological determinism says people's decisions are predetermined by God's will. A compatibilist could be either. (Although most compatibilists you'll encounter these days are in the physical not theological camp.)
> According to [David] Hume, free will is not the ability to choose otherwise given the exact same inner and outer condition. He considers it the hypothetical ability to choose otherwise, had the actor been in a psychologically different state because of different wishes or beliefs.
(Sorry for the rough translation, but I think it brings the point across.)
In general, the debate seems to be on what you take "free will" to mean. Philosopher/neuroscientist Joscha Bach  defines it as "the ability to do what one has recognized as right", contrasting it e.g. with compulsory behavior.
 https://bach.ai, although the source of the quote is the German podcast https://alternativlos.org/42/
“Non-determinism” implies randomness, which is NOT what we are talking about when we say something like “Susie made a decision.” She makes a decision for reasons, and those reasons are the determinant (cause) of her choice. When you look into what decision making actually is, i.e. the procedures that underlie it, you find that it MUST be the case that our thinking is mostly if not all deterministic for free will to be of any meaning at all.
So really, the question is “why wouldn’t they be compatible?” And it says a lot about our introspective blind spots that we have for so long thought that free will and determinism were at odds with each other.
A bit sloppily put, a compatibilist believes a choice is free if the agent making it is not coerced or constrained, even if the choice that the agent will make is determined.
Different compatibilists approach this as either being descriptive (this is how we tend to actually make moral judgements), or normative (free will is defined in these terms and moral responsibility for our actions follows from that).
Most incompatibilists agree that "freedom to act" is a necessary criterion for free will, but not that it is sufficient.
Because the future has not yet happened, all possible future choices are indeed free in a sense, even if one of them is predetermined.
When we look back, I think we tend to think of free in a different way, that is, given that a choice has already been made, we can now say that it was the only choice that could have happened and did (determinism or not, I think we often tend to think this way in ordinary life).
When we judge moral responsibility, we tend to think of the actor in that moment of making the choice. At that moment the choice was still "free", because the future had not yet happened, and so we ascribe moral responsibility to these actions.
1) Where the decision is coerced.
2) Where the decision is not coerced and so ostensibly 'free' but determined by external factors (ie. the laws of Physics).
3) Where the decision is genuinely 'free', thus violating the laws of Physics as we know them.
Compatibilists would reject 3) but would describe 2) as the exercising of 'free will'.
Compatibilism is "compatible with" determinism, it doesn't depend on determinism. So even if we turn out to be non-deterministic beings, that fact is irrelevant to Compatibilist free will.
The most common Compatibilist view is probably one that focuses on an agent's reasons for acting. If an agent acts for internal reasons, they are acting of their own free will (reasons are beliefs, judgments, inclinations, etc.). If an agent's reasons are subjugated to another agent's reasons (coercion), then they are not acting of their own free will.
Basically, Compatibilism is very similar to the way the law works. We judge whether a person's cognition is compromised in some way, and so whether they are "fit" and so can make choices of their own free will, and then we examine whether they actually did make a choice of their own free will in order to determine whether they are responsible.
(Not sure what you mean with "reject 3)" - they basically say it doesn't happen, and it need not happen for free will.)
The thought experiment is always "suppose we could turn back time, and arrive in exactly the same situation as we were before - could I have decided differently?". The traditional notion of free will sort of requires that you could have decided differently. The compatibilist notion of free will implies that, no, you could not have decided differently - but that doesn't mean that you have no free will.
EDIT to add: Sam Harris has a book about his version of compatibilism:
Interestingly enough, maybe it's not the meaning of the word "free" that is under debate here, but instead that of the word coerced?
For more detail, https://plato.stanford.edu/entries/compatibilism/#FreWil.
It's actually the other way around: if you acted according to your own free will, then you are morally responsible for your choice.
If it's still not clear why I put the implication that way around, note that not everyone thinks that you have a moral responsibility in the way Peter Singer argues, but basically every philosopher thinks it's wrong to kick a dog.
In other words, making a choice of your own free will entails moral responsibility, not the other way around like you said. I don't know what Singer has to do with any of this.
Exactly... "necessary", not "sufficient".
Classical free will, going back to Hobbes, is really just that sometimes we act unencumbered by external forces. Determinism posits that no one can act other than how one ultimately does.
Those could be compatible.
There is a difference between me deciding to stay in a room after careful deliberation, and me being chained to a wall in that room in spite of my protest.
Even if the future is fixed, there's still a difference between a person being forced to do something by an external actor and a person doing something they would ordinarily do.
All Hobbesian free will is committed to is that being chained to a wall changes your circumstances. Being kidnapped deprives you of meaningful options.
There is a separate version of free will that insists that agents are the ultimate cause of their actions, rather than mediated causes, which would not be consistent with determinism. But that is not the only definition of free will (and I don't think a very common one, except by incompatabilists).
A lot of this hinges on what people actually mean by the individual phrases.
If you sometimes engage in deliberation, and sometimes deliberation changes your mind, then even if that deliberation would have always inevitably come out the same way, you still know that in those cases deliberation affected your subsequent actions. Your deliberation (informed though it may be by external factors) is sufficient for most sensible versions of agency.
(EDITED to be more comprehensive.)
The single person case has two descriptions of the person. The chatty, social person, who tells us of his thoughts and feelings and why he chose tea instead of coffee. Contrast that with the account in terms of atoms and molecules. Neurons fire, muscles contract, etc.
Where have we seen this before? In the kinetic theory of gases. The pressure of the hot gas in a heat engine pushes the piston down. Alternatively, the molecules bounce off the top of the piston, hammering it down with countless tiny hammer blows. Which account is true? Why not both?
In the single person case the differences between the two accounts are so large that we are sucked into seeing them as rival accounts. But philosophers have never managed to clarify why the accounts are incompatible. Both accounts run out of steam. We don't really know the secrets of our own hearts and often resort to rationalizing the choices that we freely make, after the event. The atoms and molecules account asserts that those choices are in principle determined, and offers a wonderful excuse, in the exponential sensitivity to initial conditions, for why determinists cannot be expected to make predictions.
Sure, we can say that our free choices are determined by atoms and molecules following the laws of physics. But take the prediction problem more seriously. Imagine that I have a gadget that predicts whether you will choose tea or coffee. If I tell you in advance, you can choose the other way, defeating the prediction. The philosophical problem arises from viewing the atoms and molecules that make up your body as the gadget that predicts your choices. You cannot escape their power over your actions. On the other hand the point seems fatuous. Those atoms and molecules are you. "You cannot do other than what you do" is just a triviality, because the two accounts are two accounts of a single underlying reality. They are bound to be compatible, no matter how much the customary language jars and creates the appearance of conflict.
It's akin to p-values in statistics (the probability of the evidence given the hypothesis) being used in place of what we really want to know (the probability of the hypothesis given the evidence).
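The gap between the two quantities is just Bayes' rule: P(H|E) = P(E|H)·P(H)/P(E). A minimal sketch with made-up screening numbers (the 0.99, 0.001 and 0.05 figures below are purely illustrative):

```python
def posterior(p_e_given_h, p_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
    with P(E) expanded by the law of total probability."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# A disease-screening toy example: the test fires in 99% of sick
# patients (P(E|H) = 0.99), but the disease is rare (P(H) = 0.001)
# and false positives occur (P(E|~H) = 0.05).
p = posterior(0.99, 0.001, 0.05)
# P(H|E) comes out at roughly 2%, despite P(E|H) being 99%
```

That is, the quantity analogous to a p-value can be near 1 while the quantity we actually care about is near 0; which is why conflating the two directions of conditioning is dangerous.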
> Q: Your colleague David Chalmers has fretted that “there has not been large collective convergence to the truth on the big questions of philosophy,” such as God, free will and consciousness. Does this lack of convergence bother you?
> A: I disagree with Dave here. Overwhelmingly most philosophers are atheists or agnostics, which I take to be convergence to the truth. Most are compatibilist about free will and believe in it, which I also take to be convergence to the truth. Almost all believe in consciousness and most don’t have a clue how to explain it, which is wisdom. It is not that there isn’t convergence, it is that the outliers who do not converge get much more attention than the great mass of convergers, who don’t particularly stand out.
This isn't evidence of philosophy making progress, because atheism among the educated is a scientific and cultural trend, the debate about consciousness is primarily an ancient fight about definitions, and believing in consciousness without being able to explain it is certainly not progress in any productive sense.
That's some hard-hitting philosophy right there.
I don't have any issue with the current consensus, it just seems incorrect reasoning to say consensus is convergence to truth. Why isn't consensus just groupthink, a notoriously bad way to arrive at the truth?