> We collectively changed our mind because younger geologists adopted the continental drift theory.
I like this quote by Max Planck, which applies not only to science but also to politics and technology:
> A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents die, and a new generation grows up that is familiar with it.
As for the original argument in the article, the problem is where to draw the boundary. If you want to, e.g., get to the bleeding edge of physics research, you can't question everything people have discovered along the way or you wouldn't get anywhere in your lifetime. Believing in experts is not necessarily a bug; it's also a feature, because that's how we transmit knowledge across generations in an efficient way. The hard part is not the questioning, it's figuring out what's worth questioning and what's not.
> We collectively changed our mind because younger geologists adopted the continental drift theory.
That's one way to look at it, I suppose, but it's not the whole story. Alex du Toit was a big proponent of continental drift; he published "Our Wandering Continents" in 1937 at the age of 59. Otto Ampferer was 66 when he published "Thoughts on the motion picture of the Atlantic region" in 1941. It just took a lot of time for the theoretical concepts and hard data to come together, and by the time that happened in the late 50s and early 60s, many of the originators and early exponents of the idea had long since died of old age.
Yes, it can take time to change people's minds, but many of the people doing the important work of mind-changing were no spring chickens themselves. The idea that it was all young hippies in the 60s fighting against old fuddy-duddies is simplistic; there were plenty of fuddy-duddies fighting the good fight, and they had been for many decades.
This has also been studied with evolution in the 19th century, and it's exactly the opposite: the older generation believed it first, before the younger.
This quote gets cited without evidence, and it never made much sense. People who know more, and are incentivised to act on their knowledge, often change their minds.
Expertise is a contrary force to partisanship. (And as for people citing Kuhn: he was wrong. He did not set out to disprove his own views; if he had, he'd have found plenty of examples to the contrary.)
It's also easy to see that the "older generation dying out" is completely obvious, and happens anyway, no matter what amount of mind-changing is going on.
As minds change, some young and some not, the older minds go away, so that by the time the new idea is accepted, yes, most of those accepting it were young at the time the idea was first presented. Naturally.
The hard job of the experts is determining the minimal set of experiments that will convince people of far-reaching theories. For me, that is the true measure of expertise in an educator.
Your good physics instructors will have you go through the photoelectric effect, the Millikan oil drop experiment, and electron/beta ray diffraction experiments (possibly a few others). Maybe you build a transistor. You will see the necessity of quantum mechanics to explain your experience. Generally, at that point, you might question other quantum effects, but you are confident that if you do the more complex experiments, you will see the necessity. You will probably have learned what those experiments are as well.
Black holes are another story. I recall still encountering astrophysicists who wouldn't be convinced of the necessity of their existence. Special relativity you can get to accept via the Michelson-Morley experiment or similar experiments proving the constancy of the speed of light. From there you can sort of accept a need for a relativistic theory of gravity. But the evidence for GR starts to come from complex astrophysical measurements that you cannot really replicate and need to trust the (processed) data. The gravitational wave signatures can be convincing if you already know the math... but too many people feel that a simpler theory is just around the corner, a simpler theory without the singularities. Dark matter and dark energy face similar (but with their own variations of) skepticism.
> minimal experiments to show people that will convince them of far reaching theories. For me that is the true measure of expertise in an educator.
For me, some of the best lectures I had in grad school were the ones that basically said “we know X because of this experiment”. And then went into the details of the experiment and the results. It’s simultaneously exciting and scary to learn how easy and simple many experiments are.
To this day though, the thing that really surprises[1] me is how timing affects a field.
For example, in my field (molecular biology), the technique of geometric amplification of DNA by PCR was widely available before I was in undergrad. This was a tool we took for granted. However, for my undergrad advisor, it was still a new-ish technique that was exciting.
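The "geometric" in geometric amplification is just repeated doubling: in the ideal case, each PCR cycle copies every template present, so n cycles yield roughly 2^n copies. A minimal sketch (the function name and the perfect-efficiency default are illustrative assumptions; real reactions have sub-100% efficiency and eventually plateau):

```python
def pcr_copies(initial_templates: int, cycles: int, efficiency: float = 1.0) -> float:
    """Estimate template copies after a number of PCR cycles.

    Each cycle multiplies the template count by (1 + efficiency);
    efficiency = 1.0 models the ideal case of perfect doubling.
    """
    return initial_templates * (1 + efficiency) ** cycles

# One template molecule, 30 ideal cycles: 1 * 2.0**30 = 1073741824.0 copies.
print(pcr_copies(1, 30))
# Sub-ideal efficiency compounds, giving far fewer copies.
print(pcr_copies(1, 30, efficiency=0.9))
```

The exponential growth is why a single molecule of DNA can become a detectable quantity in under an hour of cycling.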
In grad school, Sanger sequencing machines were widely available, so we could directly sequence DNA quickly and easily. But the lab I was in still had the equipment and ability to run large sequencing gels to sequence DNA manually.
By the time I was done with grad school, high-throughput sequencing (NGS) had arrived and the human genome was completed. New students now take both of these things for granted. Knowing the full(ish) sequence of an organism's genome makes many experiments significantly easier (or even possible!). CRISPR/Cas9 technology is the same - a tool that has come of age within the past decade or so. But these are resources and techniques that new students can just accept as available. They aren't magic and novel to them… they are just another tool to be used to answer their questions.
[1] surprises isn’t quite the right word… impresses? Actually… encourages or amazes might be the better word to use. I find this flow of progress amazing to watch.
Only if you are happy to spend your life never doing anything but questioning what's already there.
The corpus of knowledge of any one field takes more than a lifetime to properly validate by yourself, and much of it cannot be validated at all because you don't have infinite funds.
Alas mankind dropped the ball in regards to dealing with anti-senescence, so picking your battles should be self-evident
What you focus your research on is always a big personal gamble, be it questioning something that seems evident or walking along beaten paths that might still turn out to be wrong. But as long as you stick to the scientific method you will at least generate a result, so that future researchers won't waste their time on the same approach.
In the end evidence hopefully prevails on the societal level. It might just take some time.
I'm not sure if LGBT rights is an example or a counterexample of this phenomenon. A lot of countries made huge strides in that area in the last 0-3 decades, which wouldn't have happened if nothing had changed about all the old people (who are overwhelmingly in power), especially because over the same period politicians became older (so the age cohort in power stayed roughly the same in many places). However, when you look at polls by age group, it seems to me the changes among older adults are rather modest, while young people overwhelmingly support these rights.
I think this anomaly might be an indication that the LGBT rights issue actually has great support from the people in power because they don't really care; it's a convenient topic to draw minds away from income inequality.
It's not like society made great strides in how we imagine Santa Claus when Coca-Cola pushed the current vision of him. Nor was it a great stride when people started talking about the personal carbon footprint promoted by BP to draw blame away from corporations.
It might be just a coincidence that the LGBT rights issue rose to prominence right after Occupy Wall Street was squashed with nearly military force. Or it might not be.
Are those two events connected? They both happened in the same decade and received some press. That's all I can come up with. And both things you mention -- LGBT rights and class consciousness -- have been around for a while.
These are for racism, but similar hockey sticks with the exact same timing exist for other social justice topics. The media, for one, started harping on identity politics and never took their foot off the gas.
If you're just saying that rainbow capitalism is in alignment with capitalists' interests, and anti-capitalism movements are not, then that makes sense.
Yes. Social justice movements are perfect channels for champagne socialism. The activists get to fight and fight and fight and the problem never goes away. They get to feel like they're such good people and have no need to concern themselves with whether their methods work. (Their allegiance is to their methods first, ostensible causes decidedly second)
I don’t know if it’s fair to say that revolutionary wealth redistribution movements, like Occupy, are failing because everybody is distracted with social justice movements. The Left is a coalition for social justice and also for wealth redistribution, to varying degrees. That’s why I’m saying that I’m not seeing a direct connection where a success in one area subtracts from progress in another.
It might be even easier to see and understand this phenomenon by looking at the history of rights for women and black people. These also took generational turnover, attitudes among the older generations changed more slowly, and we're still not even done yet. I can see this right now pretty clearly between my adult family/friends and our children, where the adults are still getting used to and learning how to talk about gay rights, are squeamish, and share cringy talking points about transgender rights, meanwhile the kids are all fully on-board with equal rights for all and just have no problems with non-binary gender or sexuality. It's funny, after multiple rounds of this happening over multiple centuries, that we still have trouble actually standing by our motto of liberty and justice for all.
These are, yes, examples of cultural belief changing slowly and happening less because people changed their minds and more because younger people came to a new conclusion. I’m curious what you meant about it maybe being a counter-example, what did you mean there?
> I'm not sure if LGBT rights is an example or a counterexample for this phenomenon. A lot of countries made huge strides in that area in the last 0-3 decades, which wouldn't have happened if nothing about all the old people (which are overwhelmingly in power) changed
Older people may be in government/business power, but one could argue that LGBT rights rose because those people aged out of the majority/worker culture group. Even if the age group in power is old and possibly even disagrees with the view, placating the masses must be done one way or the other, and codifying LGBT rights is an absurdly cheap way to do so, considering the other political topics.
This book is so important to the conversation. Kuhn goes into detail on how everyone believed in phlogiston until the adherents died, and only then did the science progress into modern chemistry. He gives example after example of this pattern. He coined the phrase "paradigm shift" in the book to describe the pattern.
I think the great mistake every current generation of scientists makes is that they believe that this generation is fundamentally "right" and cannot undergo any further revolutions - at least on domains they "know" about. All other domains are up for grabs of course, and silly dogmatic them for believing otherwise!
Only when someone goes against the "scientific consensus" does any progress get made in any field. It's too bad we are humans with egos and cannot let go of ideas that we hold too close to our hearts. Perhaps it's an evolutionary trait that we are so short lived? Without such short lives dogma might win the day!
> Only when someone goes against the "scientific consensus" does any progress get made in any field.
That's a fundamental misunderstanding of Kuhn's model.
Progress gets made when scientists do normal science within the prevailing paradigm. That's the activity that turns fancy theories into something useful. Over time, scientists encounter issues that cannot be resolved within the paradigm. When a paradigm shift occurs, science takes many steps backwards and several steps sideways to deal with such issues.
It is very important and extremely apropos to the world today, in which urgent structural progress is held back by fleeting and trivial minority interests.
At the heart of the matter is power.
We have to wait for the incumbent generation to die off not because they are right, but because they are powerful.
They select what's on the syllabus
They get to grade your college work and fail you
They get to reject or admit you to the research group
They hold the grant money
They review your papers
The great myth of science is that it's a rarefied battle for truth. Sure, but it's also a battle between people - and real people can be small-minded, petty, vain, prejudiced, greedy, insecure, and spiteful, and they will put their personal needs ahead of truth.
Science does itself a disservice because it considers itself separate from and above human psychology and politics. Some of the best reforms we could make to scientific research are not around money, publishing or peer review, but around appointments. Character matters. In the longer, mature game, one good and moral scientist who is definitely half-right is worth three assholes who seem perfectly correct.
I think there is also often an incentive problem. Could someone have monetized their disbelief in phlogiston? A lot of modern science is often largely inconsequential outside the business of science, so you need to monetize within the community, which makes "attacks" on what is common knowledge probably not very worthwhile.
This is probably an evergreen observation, but it is worth repeating.
Because when you are yourself becoming an enthusiastic scientist (that is, by doing science), you would dismiss entirely any parallel between religion and science. However, this is fundamentally how the current system is designed, no matter what field you look at. From the most basic inquiry such as diet to the most abstract mathematics, it is almost impossible to find a correct scientific approach, especially in popular culture.
The first thing an inquiring scientific mind would have to do would be, each time the word science is uttered, to ask "what is 'science' supposed to designate here? The current published consensus? The scientific community? Leading experts on the state of the art, or experts in public recommendations? The scientific method?". These are just a subset of what 'science' is used to designate, but lead to very different conclusions.
Not to distract from your point, but I have found there was nothing more destructive to my trust in science than becoming a scientist. I think once you see how the sausage is made you start to see science for what it is: just another complicated, fallible process built by many fallible people with a wide set of perverse incentives that produce a lot of good things and a lot of garbage.
> I think once you see how the sausage is made you start to see science for what it is: just another complicated, fallible process built by many fallible people with a wide set of perverse incentives that produce a lot of good things and a lot of garbage.
To me this sounds like a major misconception of what science is.
Scientists aren't expected to be infallible, let alone right on the first try. Science is an iterative process of building knowledge and understanding of how things work, which by definition means there's always stuff that is not known and misconceptions about how things work. The output of science is progress, but the bleeding edge is often riddled with swings and misses.
As a clear example, see how the plate tectonics theory was addressed initially by the scientific community.
Science is iterative, yes, but when we say that we mean we cannot know everything all at once, so of course some theories will be wrong. But what the commenter is saying is that it is not just that we don't know everything and our theories are incomplete; it is that the "scientifically derived knowledge" we do have is mostly false.
There are entire disciplines with an 80%+ false-result rate. This is not an iterative process; this is "we did a scientific study and found with 99.999% certainty that X is true" - but it's not. This is not iterating towards truth; this is just claiming to have authority and being absolutely wrong. And given how science tries to be iterative, these wrong results can then be used to derive more wrong results. Iterative processes cannot function with incorrect axioms.
You cannot just hide behind science as an authority and assume it will fix everything and you can just unthinkingly trust any results it generates.
> But, what the commented is saying is that it is not just that we do not know everything so our theories are incomplete, it is that the "scientifically derived knowledge" that we do know, is mostly false.
This is your personal assertion, not a fact. And a baseless one, at best.
Science is based on seeing stuff for yourself. If you ever come across something that fits your personal definition of "scientifically derived knowledge that is mostly false", then you have on your hands something clearly noteworthy for the scientific community to see. If, however, your personal finding is something that no one but you is able to verify, then that's something else, and it is not supportive of your thesis that everyone is wrong.
> that produce a lot of good things and a lot of garbage.
Speaking for myself:
You seem to be looking at the tiny fraction at the bleeding edge, where getting any results, good or bad, is really a miracle. Who made the universe so understandable that we can be hit & miss, and allowed to miss & miss & miss….
And away from the bleeding edge, there are far more papers, but the more important an area, the faster the junk is identified and dropped, and the faster the gold is identified and spun into more gold.
—-
You really don’t want to see me code on my ikigai project.
I am constantly tossing both bad & good code out for better. Making the gold code an asymptotically small percentage of my work, but producing something I am extremely proud of.
That the sausaging works is magical! Not depressing!
The point is that you shouldn’t have been turned off by how the science sausage is made. Yes, science is a messy complicated process driven by imperfect people with human flaws. But it also works in spite of all that. If science only worked when done by the best people, it wouldn’t have changed society as much as it has. Science allows for all people to participate, and it acts as a filter for better results in the limit.
Yes at times it produces garbage, yes some scientists are holding back progress. But the iterative nature of the process is designed to continually correct for that. Science is and has always been two steps forward, one step back.
>But it also works in spite of all that. If science only worked when done by the best people, it wouldn’t have changed society as much as it has
The vast majority of things in the sciences that have really changed the world are those that could be used to make products that just work. We need reproducibility by entrepreneurs and engineers, not academics if we want to change the world.
Maybe academia would be better off as an idea-generating group rather than as a guardian of correctness, as the latter just doesn't seem to be working very well. Evolutionary strength of ideas is what drives survival of ideas, and correctness just isn't that big a part of an idea's survival in academia outside of math.
The other poster wasn't arguing that scientists are always right, they were pointing out that science is full of politics and that gets in the way of your ideological view of science.
I don't find it a distraction. I had to think, "this is such an old idea, when did this realization strike me?".
It was more than a decade ago, on my third research project, when I went abroad to visit another research institute.
Before that, my experience had only been with individuals of extremely high rigor and standards, with whom we would discuss Poincaré, Feyerabend, etc. This was not typical of everyday life in research centers.
Well, nobody said exactly what they meant by 'science', so there are indeed many ways to interpret that statement. I was referring to education, news reporting on papers, etc. This is the main subject of the article, for which there is little difference between mathematics and other sciences.
Even then, I don't think there are many mathematicians in the world who can claim they are paid to do pure mathematics without any connection to other sciences. Probability, modeling of some phenomena, programming are always supposed to be the final purposes. These different approaches are in competition with regards to grants, and thus need to argue how one or the other is better suited to tackle the real-world problems.
For example in logic research, there are arguments between the two big camps of program certification and program verification; those ultimately rely on feasibility in terms of programmer training, financial cost, computational complexity.
The camps have very different views and can use fundamentally different mathematical foundations.
A great illustration is the HoTT project, often discussed on here.
I mean, yes, you're right that many people just parrot whatever they learned in high school even in the case of maths (somehow it's surprisingly easy to get people on Facebook to argue about whether square roots of negative numbers exist).
And if I interpret your second paragraph correctly that's more about the fact that deciding which research programs are getting grants is not a very "scientific" (or logical) matter but has a lot to do with how humans behave on a societal level. That's certainly true, although I'm not sure how to solve that. Nobody thought that we'd have any use for number theory for millenia and now cryptography is everywhere.
Mathematics still has the unique property that it is, in principle, independently verifiable by anyone, even by a (relatively dumb) computer. The validity of results is rarely in question (Mochizuki notwithstanding), and if there are disagreements, they are mostly philosophical or concern the question of whether or not we should accept certain axioms. But I don't consider the latter a problem - working with different sets of axioms makes mathematics richer, not poorer.
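The machine-checkability point can be made concrete with a proof assistant: the checker accepts only proofs whose every step reduces to the axioms, with no room for a referee's "this step looks fine". A toy Lean 4 example (the theorem is deliberately trivial; `Nat.add_comm` is a lemma from the Lean core library):

```
-- Commutativity of addition on the naturals: the kernel verifies
-- this proof term mechanically rather than taking it on trust.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the proof term were wrong in any detail, the kernel would reject the file outright, which is exactly the property no other empirical science can offer.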
Where science becomes useful is in applied sciences. Applying science is ruthless: it either works or it doesn't. When it works, it's typically very valuable to people. Most of our recent history is a boom in applied sciences that started accelerating about 400 years ago when Galileo started publishing some notions about the earth revolving around the sun while also providing and documenting the tools that he used to come to that conclusion. Galileo was censored by the church but the cat was out of the bag, as plenty of contemporaries verified his results and started building on his theories.
Most of the progress in applied sciences is directly connected to progress in theoretical science. And a lot of that is of course enabled through experimentation. Which is a form of progressive insight that accumulates knowledge. But only if we publish it and build on it. And of course a lot of mistakes get published as well and there are always loose ends, alternative theories, wrong theories, etc. So, there are no absolute truths. Science is never finished, complete, or even internally consistent. Newton was on to something and then Einstein challenged some of that and after him quantum theory challenged some of his notions. Progressive insight. None of it is necessarily wrong and lots of things happened in between. Good science starts with challenging everything and then trying hard to falsify any theories that emerge from that. You look for negative evidence, not for positive evidence.
Bad science simply doesn't work when applied to reality. Flat earthers earning a Darwin award (whose namesake's ideas they'd typically also reject) by dying in apparent attempts to be scientific about their delusions is a thing. There are all sorts of crazy individuals who launched themselves using some death trap to "prove" that the earth was flat. It's not very valuable science, of course, because we already knew the earth wasn't flat. And there are no known flat earth theory applications with any economic value.
There's a whole branch of pseudoscience around intelligent design, flat earth, etc. that basically tries to "fix" the inconsistencies between religion and mainstream scientific knowledge by using scientific methodology. Which is of course mildly ironic and tragic at the same time.
However, applied pseudoscience usually ends in tears, because it just won't work. Applied pseudoscience is of course a thing: homeopathy and other snake oil medicine is ever popular. It doesn't work, of course. Or at least not in any way that holds up to scientific verification. But that doesn't seem to stop people from believing in it anyway.
Yes, one should be skeptical of "the science says so" or "one specific expert says so". It's very much a mindset to embrace.
But it doesn't really work that way, the cost of good faith skepticism is so high as to not be in reach of most people.
For example, a lot of people are skeptical of climate change and reject the scientific consensus. This starting point of skepticism would be worth encouraging, if only it were done in the way the article describes.
You can't challenge the consensus without becoming an expert on the matter yourself. At the absolute very least it would require some hundreds of hours of reading and analysis.
Most people do not have the time, will or expertise/education level required to seriously and in good faith provide any type of challenge at all, other than outright denial. And it obviously doesn't help that there's many rabbit holes encouraging lazy skepticism: denial, conspiracy theorists, it's a business model to be anti-science.
In the end though, the general population doesn't care about science or the pursuit of "real truth". Only about its outcomes. When science invents a better battery, that's great. Yay science. When science tells me to eat less meat, that's bad science, has to be false.
The problem is when legitimate challengers DO arise, the "scientific consensus" is very quick to gate keep their little subfield. I know how "the scientific consensus" is used to suppress new scientific research. So when I hear it used in popular science conversation, often used to push a political narrative, I just cringe.
As an example - with zero socio-political implications, by the way - consider the case of Dan Shechtman. He's an Israeli physical chemist who discovered crystal-like structures with five-fold symmetry patterns. Problem was, the common wisdom in crystallography was that this was impossible.
> “People just laughed at me,” Shechtman recalled in an interview this year with Israeli newspaper Haaretz, noting how Linus Pauling, a colossus of science and double Nobel laureate, mounted a frightening “crusade” against him, saying: “There is no such thing as quasicrystals, only quasi-scientists.”
He was ridiculed and apparently even lost his job, or lost potential employment, for insisting on his finding. 30 years later he got his own Nobel Prize in Chemistry.
This is also a problem when experts from different fields disagree.
For example, I have no epidemiological credentials, but I am an expert in information/media literacy as well as digital information dissemination and retrieval. So watching many aspects of the US's COVID response was like watching a train wreck. But in a lot of places, disagreeing with the/a government response meant you were a scientifically illiterate rightoid(TM) anti-vaxxer.
"I respect epidemiologists but most of them are terrible science communicators and our institutions are making tactical and strategic errors because our current information system is not suited to what they're doing," wasn't really seen as a viable opinion. The Overton Window did not allow it. That's less than ideal.
I was thinking about this with the news about how poor our Omicron booster response has been where only a few percent of people have taken them. I think a significant part of this reaction can be traced back to a year ago when two top scientists resigned over boosters and the scientific community as a whole seemed negative on them. However, the two scientists were authors of a Lancet article that provided insight into their way of thinking, as it largely focused on vaccine equity and was also concerned about scaring people away who hadn't gotten the initial vaccines. These arguments are temporary and do not focus on the merits of the boosters alone. As I predicted at the time, now that the boosters have gotten better and these arguments are moot, the public still remembers their first impression of boosters as negative and unneeded.
Also, around the same time I remember seeing several people comment about how people like me that used masks were "as bad as antivaxxers" for "not trusting the science" by not removing all precautions other than the vaccine. As someone well versed in reading actual scientific papers, it was painful to read these comments.
> a lot of people are skeptical of climate change and reject the scientific consensus
That's because science doesn't work by "consensus". It works by building models that make accurate predictions. The models climate scientists currently have don't make accurate predictions. So people who are skeptical of the "consensus" are right to be skeptical (though that in itself doesn't necessarily mean they are skeptical for the right reason--see below).
> You can't challenge the consensus without becoming an expert on the matter yourself.
That's way too strong. You don't need to become an expert in the sense of being able to build your own models. You just need to have enough knowledge to be able to judge whether the models scientists have make accurate predictions. That's a much easier job.
I think you are right, though, that many people who are skeptical of various scientific claims aren't skeptical on the basis of what I said above, but for other, invalid reasons.
"There is very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period, including the more rapid warming in the second half of the 20th century, and the cooling immediately following large volcanic eruptions."
I agree, becoming an expert is too strong. Here's an example.
Physicians in the early 1800s were taught many medical practices and concepts that had no basis in fact (the dynamics of Hippocrates' 4 body humors, the value of bleeding, germs were not the basis of disease, miracle elixirs abound, etc). You'd think all medicine practiced in that time would be worthless. But in fact, a small number of heterodox doctors recognized the contradictions and limits inherent in the dogma of the day and chose to think and practice largely independently of it, testing methods themselves to retain what worked and discard what didn't, and sharing lessons learned with peers they respected. They personally employed the scientific method and learned from it, despite the established orthodoxy.
Each of those experimental revelations did not depend on expertise, only on a rigorous determination to be methodical and fair-minded and refrain from employing or spreading unfounded claims. That's more a matter of discipline and personal integrity than expertise.
I think the biggest problem is that category I mentioned: skepticism on the basis of simply not liking the scientific conclusion, because it negatively affects me.
This lazy skepticism has been suppressed for good reasons, but the suppression has probably gone too far. An extremely authoritarian approach that is dogmatic and doesn't tolerate the slightest deviation has only emboldened conspiracy theorists.
All of this in a backdrop of extreme political polarization. A perfect storm.
> It works by building models that make accurate predictions.
Have climate change models been shown to be making poor predictions? From what I've seen, it appears they've predicted temperatures should warm and we've seen average surface temperatures generally increasing over time. It seems they've predicted stronger hurricanes and those have also been happening more frequently over time. They've predicted increasing amounts of droughts and those have also been happening more frequently over time.
Weather is a complex system and there's always going to be some amount of variance. Are you claiming that people have trouble understanding how statistics work? If so, that seems to be the same argument that the OP is making (that you need to understand how things work before you have informed skepticism). If you don't understand how statistics work, you might have trouble following statistical predictions that have generally been shown to be correct for several decades. That doesn't appear to be a problem with the predictions of climate change models, but with the interpretation of said models by people who haven't bothered to understand them.
> Have climate change models been shown to be making poor predictions?
Yes. Even the IPCC admitted that in the AR5, though you had to look carefully at footnotes to see it.
> it appears they've predicted temperatures should warm
No, they predicted that temperatures should warm at a particular rate based on a particular rate of CO2 increase. More precisely, there are three groups of climate models, each based on a different rate of CO2 increase: basically, a "business as usual" rate of CO2 increase, a "some reduction" rate of CO2 increase, and a "drastic reduction" rate of CO2 increase. The output of each group of models is averaged to come up with a prediction of warming based on the rate of CO2 increase that the model assumes. The "business as usual" models predict the most warming, the "some reduction" models predict somewhat less warming, and the "drastic reduction" models predict less warming still.
Actual CO2 increase has been about the same as the "business as usual" scenario, but actual warming has been less than the amount predicted by the "drastic reduction" set of models. If the models made accurate predictions, actual warming should have been about what was predicted by the "business as usual" set of models. But it wasn't; it was well below the 95% confidence interval for that set of models. (Note that, whenever actual warming is compared to models, it is compared to the average of all the models--all three sets, with different assumptions for CO2 increase. That actually makes no sense, but climate scientists do it anyway to obfuscate how much the models have overpredicted warming.)
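The arithmetic behind the averaging objection can be made concrete with a toy sketch. All numbers below are invented purely for illustration; they are not actual model output or observations:

```python
# Toy illustration (invented numbers): three groups of climate models,
# each assuming a different CO2-emissions scenario.
predicted_warming = {
    "business_as_usual": 3.0,   # degrees C predicted by this group's average
    "some_reduction":    2.0,
    "drastic_reduction": 1.0,
}

# If actual emissions tracked "business as usual", the fair benchmark
# is the prediction from the matching scenario group:
fair_benchmark = predicted_warming["business_as_usual"]

# Averaging over all three groups mixes in scenarios that never happened:
all_model_mean = sum(predicted_warming.values()) / len(predicted_warming)

observed = 1.8  # hypothetical observed warming

# The shortfall looks much smaller against the mixed mean than against
# the scenario that actually matches reality:
shortfall_fair  = round(observed - fair_benchmark, 2)   # -1.2
shortfall_mixed = round(observed - all_model_mean, 2)   # -0.2
```

Under these made-up numbers, comparing observations to the all-model mean shrinks the apparent over-prediction by a factor of six, which is the obfuscation the comment above alleges.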
> It seems they've predicted stronger hurricanes and those have also been happening more frequently over time.
No, they haven't. Studies of frequency of storms show no change in frequency over time. The amount of damage done by storms has increased, but that's because of the huge increase in the number of people and the value of housing and other property that are in the paths of major storms.
> Studies of frequency of storms show no change in frequency over time.
Right. Which is why I said the models predicted stronger hurricanes. Which is what you're saying. Looks like we'll have to agree to agree here. The models are making accurate predictions and you'd need to know a lot about the domain to suggest there are flaws. No one would argue a lay audience would be making those arguments. Temperatures are getting warmer, storms are getting stronger, droughts are happening more frequently. The models aren't perfect but that's why they're called models and they're generally predicting what's happening.
> Which is why I said the models predicted stronger hurricanes. Which is what you're saying.
No, it's not. I said that the damage due to storms has increased, but I gave the reason why, and it has nothing to do with hurricanes getting stronger, any more than it has to do with hurricanes getting more frequent.
Studies have not shown any increase in the average strength of storms either.
> Studies have not shown any increase in the average strength of storms either.
“On average, there have been more storms, stronger hurricanes and increase in hurricanes that rapidly intensify,” NASA reports. In 2020, the world saw a record-breaking hurricane season. The facts show what the models predicted. You can continue to argue that there's no evidence, but you'll continue to be wrong.
The OP's argument seems not to be refuted here. You need to be an expert and know a lot more than a lay audience would to find any criticisms of existing models. The vast majority of climate change deniers are science skeptics who are not going deep enough to have a valid basis for their skepticism (and in some cases, it's becoming clear it's not just skepticism but willfully ignoring the actual data for whatever reason).
It's a *very major problem* on HN and every internet forum:
*skepticism is confused for intelligence or insight.*
It's rampant in application towards ML. People don't want to seem like they're 'hyping AI' because that is associated with immaturity, so they run the other direction and claim "very little has been done, nothing to see here" - which is actually far further from the truth. These people often don't have a great deal of knowledge or experience within the field they're bashing; however they're well received, simply because *skepticism is incorrectly received as insight*.
There may or may not be a fix that can be engineered for this, but at least reminding our community that pointing out the things a post does correctly (perhaps using theory that isn't explicitly stated there) is a more academically challenging task than pointing out flaws can be helpful.
Fully true. Also people considered well educated gravitate towards convenience, not truth. It's a fundamental characteristic of our species: to seek immediate short term benefit and to avoid discomforts.
You can escape it by training yourself in rigorous critical thinking. But why would you do that? It's a lot of work and only leads to painful conclusions. Nobody else is doing it so your superior and truthful insights will be rejected.
Knowledge needs to be cheaper to verify than to produce. Otherwise it's simply not worthwhile to produce at all. Choosing to trust "authority" doesn't fix anything. One still has to be the smartest person in the world at determining who's right.
If you show me a battery you say lasts longer, I can verify your claim quite easily. If it doesn't last longer, without knowing a single thing about batteries, I have every reason to believe I'm right and you're wrong. There's a chance I am in fact wrong, but I'll be expecting you to put in the effort to convince me otherwise.
If someone tells me to significantly change my life based off their climate models, and they laughably fail [0], I have every reason to believe they don't have a good climate model. If they don't have a good explanation, and even worse, get angsty when asked for one, I'm out.
It's perfectly possible some small group of people or even an individual possesses knowledge that could save the world from imminent destruction or produce some great benefit. But if there's no way to verify such knowledge, there's no reason to care about it.
> If someone tells me to significantly change my life based off their climate models, and they laughably fail [0], I have every reason to believe they don't have a good climate model
What changes do you consider significant? What's your threshold for a climate model good enough to justify lifestyle changes?
It seems obvious to me that we live on a finite planet with little hope of escaping limits on its resources. So looking only at potential arable land and current oil usage I'd say we're obviously not operating with sustainable lifestyles in most of the west. After all, that oil represents hundreds of millions of years of solar energy, converted to oil. We are depleting it much faster than it's being generated.
If I was a sincere believer in the threat of CO2 driven global warming I'd want:
- better temperature measurement tech (even the data can be controversial, big problem)
- better measurement tech for all potentially relevant factors
- a well-reasoned model that can better predict what's "variance" today, the more precise the model the quicker you can prove it right
Without any data or models we already knew CO2, all else being equal, should have a net warming effect of some kind. What the failed models suggest is we don't know much more than that. Without any advances, even if the current trend continues indefinitely, I don't see this being settled in my lifetime. You'll have a much easier time convincing me I should care about CO2's effect on cognition.
I don't think all Malthusian concerns are of the same kind in terms of epistemic controversy. I could see humanity dying because of something that could have been prevented with collective action. What I don't see happening is humanity saving itself by collectively choosing to defer to the right people. It almost seems like a logical contradiction. If we do save ourselves from catastrophe by collective action I expect there to be collective conviction. If we die because we were too stupid to listen to somebody then we were bound to die either way.
No. The business model is to feed people information, that if believed, will allow for wealth and power extraction from the believers.
Eg, if climate change is accepted by the masses, all sorts of corporate friendly legislation will be/is being enacted, with Al Gore et al ready to benefit, and with loss of freedom and money for the individual. And at the cost of the environment they are purporting to save. (Electric cars anyone?)
Control of the narrative is powerful stuff, that plays to people's embarrassment at their apparent ignorance in the face of 'expert' opinion. Where we should just remember experts are invested via education, work, status in their expertise and that what they say may have little to do with the truth. Especially when it conflicts with the golden source, personal or anecdotal experience.
And it has always been that way, even for long running narratives.
There was another fun example of how you can’t rely on supposed experts when University of Nebraska professor Ana-Rhodes Short embarrassed herself by weighing in on the poker cheating controversy. She created a whole game tree assuming that the player who claims he got cheated could only have one possible hand instead of a range of hands, and also somehow thought that running it twice would change the odds. She also claimed that she used it as a lesson for her unfortunate students. Then she got extremely defensive, blaming toxic men, etc., when poker players pointed out her errors. (For the record, I’m leaning towards no cheating and extremely poor handling of the case by the player who claims he got cheated.)
Could I trouble you to a link to Rhodes's statement if it's still available/if you have one? I wanted to see her claims myself, but I couldn't turn up anything after a few minutes of googling or looking at her twitter.
And science, as a whole, only gets better at explaining phenomena. I'm not aware of anything that's had to be walked back, unlike claims made by actual religions like "Native Americans are a tribe of Israelites who were punished for their sins by God, by giving them a different skin colour", which didn't survive the advent of genetics.
There's skepticism, and then there's contrarianism, and a lot of people mired in the latter think they're practicing the former.
On an unrelated note I hardly ever hear "scientism" outside of Young Earth Creation apologetics...
Have you heard of the replication crisis[1]? There are many examples of claims that were once considered scientific, such as the idea that thinking of florida makes people walk slower due to an association with retirees, that are now rejected.
I'm sure there's a kind of lindy effect here, where the longer a scientific result has been accepted, the less likely it is to be rejected, but the claim that science never walks back is simply false.
I hear about scientism frequently from scientists and philosophy majors, your lack of hearing it might come from a perceived hostility due to the fact that you imply people who speak of "scientism" are akin to creationists.
It's quite probable that I've heard it most from those who reject geological evidence or evidence for evolution for belief based reasons, because of my personal history, and, well, apparently my lack of engaging with philosophy majors[0] and scientists who apparently discuss it frequently (although it strikes me that it'd be people specialising in the philosophy of science who'd use it, rather than people doing actual scientific research). Also, I'm not entirely sure why I should care about what philosophy majors think about the scientific method.
That said, if you have blogs etc. from scientists or scholars of the philosophy of science I can read discussing "scientism" (FYI, my spellcheck is insistent that's not a word), then I'll happily read them.
[0] Can I make the usual philosophy major joke about how I don't interact with them "other than when I'm ordering my burger at the drivethrough"? Heyoo, please, take my wife, I can't get no respect.
> although it strikes me that it'd be people specialising in the philosophy of science who'd use it, rather than people doing actual scientific research
I think this might to some degree demonstrate the point (lack of self-awareness/criticality among practicing scientists).
> Also, I'm not entirely sure why I should care about what philosophy majors think about the scientific method.
Cross-domain perspectives can offer valuable insight into complex problems is one reason, but there are surely many others.
> That said, if you have blogs etc. from scientists or scholars of the philosophy of science I can read discussing "scientism" (FYI, my spellcheck is insistent that's not a word) , then I'll happily read them.
Is constraining one's information sources to only the ideas of those who are being critiqued a logically, epistemically, and scientifically sound approach?
> Can I make the usual philosophy major joke about how I don't interact with them "other than when I'm ordering my burger at the drivethrough"? Heyoo, please, take my wife, I can't get no respect.
You can, and I encourage it: the more information you reveal about the manner in which you think, the more data it gives others to model the style and quality of your cognitive abilities (and luckily, this opportunity exists at both the individual and the collective level).

Observing people describing how "science" appears to them (aka: "is", circa 2022) is one of my hobbies, and there seems to be substantial clustering of very common logical, categorical & epistemic errors in people's thinking across different dimensions. Ironically, this seems like the type of thing that science would/"should" be very interested in, considering the causal importance of human cognition in the end state of the world we live in.

But alas, hyper-complex non-deterministic environments seem to have a repulsive effect on the curiosity of Scientific Materialists. Since this seems to be the predominant (and increasingly so) metaphysical framework in advanced western nations in this era of human history, I do not expect things to get better anytime soon, assuming something unusual does not occur to change the path we are currently on.
>I'm not aware of anything that's had to be walked back, unlike claims made by actual religions like "Native Americans are a tribe of Israelites who were punished for their sins by God, by giving them a different skin colour",
Phrenology? How about all those lobotomies? I guess you can't call it walking back when they're incapacitated.
About what? Yes, they were opposed in their time, but their findings eventually became the accepted facts. When I said "walked back", I meant exactly that: we haven't been washing our hands for 170 years only to then say "actually, scientific evidence has shown that we were completely wrong; the people who considered Semmelweis a loony were actually right."
Science has a definite tendency to go forwards, not backwards. Because scientists like to replicate results, and tend to reject things that don't properly explain the observed facts. Unlike statements of faith from, well, actual faiths, as opposed to "scientism".
Your model needs a definition of what is a "finding", destined to become an "accepted fact", versus a "has-been" idea, for which "scientific evidence [will] show that we were completely wrong". Because the two are not easily distinguishable.
Others have recommended Kuhn; I am partial to Imre Lakatos, and recommend reading at least the transcripts of his lectures at the London School of Economics.
--
Edit: I do not know what meaning you intend to be giving to 'scientism', but partisan cheering is not sport, nor science, nor religion.
Just for the sake of argument, I can name an entire field of science that was invalidated in light of genetic & neuroscience evidence: phrenology. At the time, it was the newest advancement in the gleaming era of scientific Enlightenment. It just happened to justify colonial policies of that time. Now, a couple hundred years later, we're walking back on a widely supported but misguided "scientific" field.
Terrain theory, which was originally dismissed entirely when germ theory won, has been making a major comeback in the past decade or so under new names, such as the "gut microbiome".
Miasma theory as well: its original refutation has held back understanding of viral spread, since experts shied away from aerosol transmission and preferred droplet spread as much as possible, partly because aerosols seemed too similar to miasma.
Science is the rejection of argument from authority, which I think is a better way of saying it because it also includes political and religious authority.
That being said there's a nuance here: if I am not well versed in an area of science, I am probably going to defer to people who are... not because they have some special claim to truth, but because unless I have some evidence that they're deceptive or incompetent they are probably more likely to be right in their given field of expertise.
This is no different from, say, having a certified heart surgeon do your heart surgery instead of your buddy who's a vet but has never done an operation on a human being.
> Science is the rejection of argument from authority, which I think is a better way of saying it because it also includes political and religious authority.
I agree with the rest of your comment, but this part doesn't really fit with the rest of your thinking.
"Science is superficially the rejection of argument from authority" or "Science is the narrow rejection of argument from authority" are probably closer to the truth.
If they didn't attribute that, they totally ripped off Arthur C. Clarke's first law:
> When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
Thank you for the citation. I don’t remember if the source was cited or if it was shared as received wisdom.
PS: "ripped off" is quite harsh. Much of what I learned as a doctoral student was the biases absorbed and transmitted by my advisor; it was only specific academic contributions that he claimed as original.
Maybe that was the lesson to be learned all along? When a distinguished but elderly scientist is almost certainly right, he probably heard it somewhere else first.
On a pedantic note, you are believing in the expertise of those who told you to be skeptical.
Socrates strikes again!
Edit: Not sure why all the downvotes. The point I’m making is that even “don’t just trust people’s ideas” is itself an idea, and that almost all ideas are inherited from prior generations in some form or another. Socrates is the person most famous for popularizing skepticism - we didn’t invent this concept recently, and we believe in it mostly because those we respect told us to (somewhat ironically).
I don't think you have to be an expert on anything to value skepticism (e.g. an encyclopedia editor), and vice versa (e.g. an astrologist). They are independent. However, most scientists are both.
> The experts who are leading you may be wrong. (…)
It's not that the experts are wrong per se. It's understanding and embracing that they and their employers have plenty of conflicting interests, money and ego being at the top of the list.
These people - even Bill Nye and NdT - are not above the rest of us. Full stop. They shit too. The fact that they are collectively so fond of promoting otherwise is a key tell / signal. They'd do a lot better in the trust dept if they found some humility.
The Scientific Method can be trusted. However, the humans that implement and execute it cannot. To believe otherwise is naive, and anti-history.
Unfortunately, it is practiced by human beings, who are susceptible to all the flaws - psychological, social, political, cultural etc - we are all constrained by.
One thing that grates on me is when people say science "proves" X. Or and A/B test "proves" Y. Science never proves anything, it provides evidence for the existence of some phenomenon or relationship. People need to repeat this to themselves whenever they are tempted to say the word "prove": Some arbitrarily complex hypothesis can also explain all of my evidence. Said in plain language, there is always another possibility.
You're being prescriptive that we always treat "prove" as if it has the meaning as we use it in math and logic. That's not how language works. Words have different meanings in different contexts.
Top definition of "prove" according to Google: "demonstrate the truth or existence of (something) by evidence or argument." According to that definition, it seems completely reasonable to me to say that science can prove things.
I think colloquially, lay people may not know the ontological origins of the word but when they use the word "proof" they mean it as some indisputable fact. At least most people I know use it in that manner.
Evidence? The dictionary disagrees with you. Dictionaries are compiled from the ways that people actually use language in the real world. Also, the etymology of "prove" has nothing to do with indisputable facts:
Middle English: from Old French prover, from Latin probare ‘test, approve, demonstrate’, from probus ‘good’.
Edit: I feel I was being a bit forceful with my replies here. You airing your pet peeve triggered my pet peeve with linguistic prescriptivism. Sorry about that.
The institution of Science is not the same as the process of science. Their functions are not the same. Science as an institution is a mechanism of social control with all the features the author describes in the article such as the memorizable facts. Grant writing itself is no longer about science but is entirely political. Only rarely can institutional Scientists thread the needle of researching real scientific interests while also identifying ways to meet political goals set in grant requirements.
The scientific process is decoupled from Scientific institutions.
I think science is primarily attempting to externalize the expertise (the way the experts do inference), to formalize it, so it can be independently scrutinized, and reproduced. Of course, this is based on belief that experts can be fallible (as stated in the article), but also on belief that opaque expert's intuition is not as valuable as clear understanding of the inference process.
Of course, this doesn't make the experts useless. The expert intuition is still mostly right, even if it lacks explainability. It is also a useful shortcut.
It would be much better to say that the belief in the ignorance of experts is a prerequisite to science. Science itself is the process of proving a hypothesis to some degree of certainty. It is not mere skepticism. If you have no plan to put in the work necessary to prove your hypothesis, then statistically speaking, you generally are better off listening to the experts and textbooks. Science is the difference between healthy skepticism and mindless armchair contrarianism.
The author makes some good points, although it seems ironic to me that he invokes the authority of The Great Feynman to reinforce them. A couple of thoughts:
> Science is the belief in the ignorance of experts
It seems a stretch to use this to justify "nobody knows anything". It seems to me that Feynman wasn't saying that he believed that experts are ignorant, but that part of his belief system included the possibility that experts are fallible.
> Receive science as a set of facts. We are sometimes given advice and told that it comes from “the science”, as if it settled anything.
In principle yes, but in practice you often just have to learn some stuff and accept it. Almost no-one can verify a scientific domain from first principles - there just isn't time in a person's career or life. Instead you understand things to a certain level of abstraction and trust that the stuff below that is well-founded. But yes, good scientists stay curious and sceptical about the stuff they don't know.
There was a lot of "follow the science" during the pandemic w/r/t vaccination and masks and, while in principle I appreciate the author's point, attempting to create a more facts-based public policy will necessitate trusting that some facts are true without demonstrating the validity of the stuff upon which they rest.
> Paul Graham puts it well: To be a successful scientist it’s not enough just to be right. You have to be right when everyone else is wrong. Conventional-minded people can’t do that.
I'd argue that pg's definition of "successful" here is something like "famous" or "paradigm breaking". But there are very, very many scientists who are being successful by grinding away at expanding the frontiers of knowledge, piece by piece. They're not being "right when everyone else is wrong", but they're doing successful science.
> There was a lot "follow the science" during the pandemic w/r/t vaccination and masks and, while in principle I appreciate the author's point, attempting to create a more facts-based public policy will necessitate trusting that some facts are true without demonstrating the validity of the stuff upon which they rest.
I'm curious how you came to believe that this is necessarily true.
> Paul Graham puts it well: To be a successful scientist it’s not enough just to be right. You have to be right when everyone else is wrong. Conventional-minded people can’t do that.
I suspect the definition of "conventional-minded" in this phrase is a tautology, since Paul does not have knowledge of all people's minds, despite how "clearly" it may seem to him that he does.
It's not about belief in the ignorance of experts, but rather the principle that truth does not derive from authority. I don't believe experts are ignorant; I'll even go so far as to sometimes accept their claims as authoritative, but on demand they must be able to demonstrate where that authority derives from, and it damn well better not be the word of yet another authority.
I used to be an avid advocate of this notion, but I've become somewhat disillusioned about the idea of encouraging people to think for themselves. The whole Covid vaccine debacle was the latest example of the frustrating truth that "thinking for yourself" just isn't trivial.
People that advocate for it (like me) just wishfully assume that their listeners, if they just try, will reach the same conclusions they did. But that just isn't true. I've met many people that held (what I consider to be) debunked beliefs which they proclaimed to have reached through "doing their own research". In a way these people are harder to sway than those that "just feel it's true". They are armed with research papers, lecturing experts (self proclaimed perhaps), and logical constructs. They hold their ground in an argument.
And perhaps they are not at fault. The truth can be non-intuitive on several levels, and distilling it from truckloads of data, papers, and experts really isn't easy. Take for example the seemingly simple question of whether or not climate change is man-made. It is not at all an easy matter for someone to reach their own conclusion about this. Any argument, however sound it may seem, may be wrong in some subtle non-intuitive way that would require more digging and learning to figure out.
I've come to realize that in this question (and many others) I can't justify trusting my own judgement, and the best heuristic I could come up with is "trusting the expert consensus". Sure it can be wrong, but I hold the belief that it tends to self correct eventually, and in many cases (especially when many people need to form an opinion) it's the best we can do.
> I've become somewhat disillusioned about the idea of encouraging people to think for themselves.
If the average person shouldn't think for themselves, who do you think should think for themselves? Only someone with a degree? Only someone who can do the math?
> the best heuristic I could come up with is "trusting the expert consensus".
How do you decide which experts to trust? Do you leave that up to the government to choose for you?
You rarely hear the average person questioning consensus beliefs from experts in the 'hard' science / engineering fields (with notable exceptions, eg. climate science/evolution, but there are obvious incentives involved). Social sciences and economics seem to be the exact opposite, where there are rarely notable consensus opinions, and 'experts' are routinely questioned.
It makes sense, though, since expert opinion from the social sciences often relates directly to how people implement their lives. If the average person relied on the expert advice from physicists, they'd probably question that as well. So, the more directly relevant an expert's advice is, the more it is questioned, as a matter of course.
>the problem is most people do not have the expertise and/or time it takes; some trust in so called experts is ok to a degree.
I think the issue is that far too many people have blind trust in "experts". It is okay to admit to yourself that you just don't know enough about something to make an informed decision. You can hazard a guess on how likely something is to be true based on the opinions of "experts", but your confidence in that guess should never be very high for something you don't understand. Unfortunately people like to be confident, and are very susceptible to suggestion and social pressure, and therefore routinely place blind trust in a variety of "experts" in every field from medicine to government to finance. Very often these experts are either partly wrong, completely wrong, and/or spreading misinformation/disinformation due to some hidden agenda (political, economic, social). The end result is large masses of people who are very confident about things which are wrong and/or distorted (which ends up being directly reflected in the quality of our elected leaders and society as a whole).
I think there is a significant fraction of people for whom their system of reasoning about the world fundamentally follows the same design patterns as religion and the best we can hope for is that they "convert to the church of trusting the experts". Covid really exposed the thought process.
Basically, as I surmise was happening in their mind, the distancing / mask guidelines became the sins, anyone who forgot or followed it imperfectly even once was a bad person, anyone who did get it must have sinned (or someone near to them did), and so long as you never sin it won't happen to you. Implicitly anyone who got it did something to deserve it, as there's no such thing as bad things happening to good people randomly.
Whilst the underlying case is true, I feel what is missing is where scientism has been beneficial on net overall. Arguably, vaccination is a good case: the statistics of benefit are huge, while the personal experiential outcome tends to produce negative feedback only. So, an underlying belief in the statistics of decent testing is important and has a community upside.
Or consider the truly remarkable advances in materials physics, especially in VLSI: actual applications of quantum effects that were initially misunderstood but are now actually designed in (sometimes?).
The physics behind modern VLSI was known to something like 8 sigmas of certainty before it was designed into devices. A lot of medical science, on the other hand, comes from one study with a p-value of 0.05 that didn't replicate when another lab tried it.
Science is having confidence in the former. Scientism is having confidence in the latter.
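To make the gap between those two evidence standards concrete, here is a small sketch (my own illustration, not from the thread) converting sigma levels to two-sided p-values for a normally distributed test statistic:

```python
from math import erfc, sqrt

def two_sided_p(n_sigma: float) -> float:
    """Two-sided p-value for an n-sigma deviation of a normal variable."""
    return erfc(n_sigma / sqrt(2))

# p = 0.05 sits just below 2 sigma; 8 sigma is around 1 in 10^15
for n in (2, 3, 5, 8):
    print(f"{n} sigma ~ p = {two_sided_p(n):.2e}")
```

A p = 0.05 result is a roughly 1-in-20 fluke under the null, while an 8-sigma result is around a 1-in-10^15 fluke, which is why one of them survives replication and the other often does not.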
It's worth pointing out that the standards for evidence in particle physics are so high because they have to be: particle physics experiments create so much data that you can find a huge number of correlations with high confidence without much difficulty. Biology and medical science are completely unlike that: there is comparatively very little data to draw any conclusions from, so standards of evidence are necessarily lower, and yet still useful (that said, the lower standards also allow a lot of junk to appear which is difficult to filter from the rest if you are not very familiar with the subject).
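The "spurious correlations in abundance" point can be illustrated with a toy simulation (hypothetical numbers of my own choosing, not from the thread): correlate many pairs of pure noise and count how many look "significant" at p < 0.05.

```python
import random

random.seed(0)

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 1000 pairs of independent noise series of length 30;
# |r| > 0.361 is roughly the two-sided p < 0.05 threshold at n = 30
hits = sum(
    1
    for _ in range(1000)
    if abs(pearson_r([random.gauss(0, 1) for _ in range(30)],
                     [random.gauss(0, 1) for _ in range(30)])) > 0.361
)
print(hits)  # typically around 50, i.e. ~5% clear the bar by chance alone
```

With enough variables to compare, "significant" correlations are guaranteed to appear by chance, which is exactly why particle physics, with its oceans of data, demands multi-sigma thresholds.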
I think we should point out that part of that 8-sigma confidence I am quoting isn't just the initial experiment (which may have had as little as 3-4 sigma confidence intervals for quantum tunneling), but also that there are further experiments that re-confirm quantum tunneling by relying on it to observe some other effect.
That is something that is also lacking in the social sciences, medical science, and even some parts of modern physics: experiments that confirm other results by taking those results as fact and finding new effects.
Experience isn't only first-hand though; those studies count. The correct comparison is what experts say vs what actually happens. For example, various proponents of local realism before experimental confirmation of Bell's Theorem.
I do think it's stated a bit strongly though; I'd prefer s/ignorance/fallibility/ for similar reasons to what you're articulating: I grant a higher prior probability to a statement given by a (relevant!) expert. Of course that's relative to the statement itself: an expert might move nonsense from a log-probability of -10 to -9, but that's still not holding any water.
The problem would be that it's the 'argument from authority' fallacy. True science must always be open to test. The thing is, we're downstream of derivative consequences of the science, and they come with some caveats to "question everything".
But "we don't know how anaesthesia works really" is a big problem. All we've got is observational science (which Feynman hated and analogised to stamp collecting) that "it works"
Your key point is that scientism, the reification of science, is borderline belief-system behaviour, at the very least. Your pejorative language says you think it's pushed well into that space. Fair enough.
What I am doing is saying "baby/bathwater" time. Public communications depends on not legitimating anti-vaxx (for example), but if it depends on belief-system behaviour rather than being able to apply objective science outcomes to demonstrate the "why", it's got problems.
What you are arguing for is a pro-establishment stance, it is not a pro-science stance. By conflating the two you devalue science, you are muddying the bath water and poisoning the baby, making science politics and destroying its credibility.
If you remove nuance from public communications to avoid legitimizing "anti-vaxx" rhetoric (a proposition I find anti-democratic and self-defeating, given that it assumes the public is too dumb to hear nuance), then you cannot use science as a shield, because science is nuanced.
I would absolutely prefer what you want. I am unsure how to make it work in a post-truth world. Perhaps my view is the other side of the blatantly anti-scientific posture of the Koch-funded right: no amount of reasoned nuance about AGW will stop a coal mine owner claiming it's NASA-funded lies, and no amount of reasoned, nuanced argument will make the coal mine worker want to believe science over his paymaster's lies.
Not that leftists haven't promulgated antiscientific crap before now; Lysenkoism comes to mind. It's just not the current problem.
While composing this comment, did you consider that the dimension of time is implicit (and hidden from perception) in the version of "is" that you are using here?
Also hidden: sub-perceptual heuristics. And culture. And surely many other things beyond my knowledge.
The hard problem of consciousness is so poorly understood by science, I wonder if that is why it is simply ~ignored in conversations like this, where it is so fundamentally important.
Utilitarian approaches are really bad... Scientism falls flat on its face, since it's a self-refuting philosophy(!).
Its truth cannot be demonstrated according to its own principles (i.e. that the hard sciences are the best, superior, or even only source of genuine knowledge of the world). The truth of scientism is philosophical in nature and not the result of a scientific experiment.
We shouldn't adhere to untruth just because it "serves" our ends in some ways or another.*
By the way, how do you arrive at your value judgements of "good" and "bad", "serving" something vs. the opposite?
How do you determine "the benefits" in a relativistic worldview, which most modern people, as well as you, seem to adhere to (see "post truth world")?
For whom was it "net beneficial"?
Doing science and producing results is not necessarily a result or proof of the philosophy the scientist adheres to. One could believe all sorts of things and get scientific results, so I don't see scientism as beneficial in any sense.
It is wrong as a theory and it undermines science itself in a major way.
Scientism undermines the presuppositions science philosophically rests on* and which cannot be proven by science itself.
Therefore, if scientism holds that you can only believe "truths" that are the result of scientific experiments, and yet you have to rely on presuppositions that cannot categorically be proven by science itself, you cannot do science anymore.
Scientism is the enemy of science, logically speaking.
*
I still believe that "the Good" and "Truth" are directly entangled and necessary.
*
a) belief in an external world, independent of mind, language or theory,
b) the nature of the world is orderly, especially its "deep structure" that lies under and beyond the manifest world of ordinary perception
c) objective truth exists
d) our sensory and cognitive faculties are reliable for gaining truth and knowledge of the world, and they are able to grasp the world's deep structure that lies beyond the sense-perceptible world,
e) various types of values and "oughts" exist
f) the laws of logic and mathematics exist
Most people are still not aware of the extent of the corruption of clinical research, and the manipulation of statistical data by “health authorities.” This article clearly lays it all out and is probably the most important article written this year:
Science is not concerned with truth, which is out of its domain.
Truth is a matter of philosophy, which provides a theoretical framework, which comes before any empirical work can be done.
(Read this article: https://plato.stanford.edu/entries/scientific-underdetermina...)
Therefore we have to assume that the current scientific "trend" or scientific "vogue" is only the "best error" we currently believe in. Or do we?
Who determines, if they really are "the best error" / "the best models we have"?
Doesn't that involve an unjustified belief in something called "progress"?
Why don't we ever assume we might have regressed?
Of course, if you believe in one of the pillars of the society-permeating quasi religious belief in science, then "science is always self-correcting".
But I would implore you to read Thomas Kuhn and/or Ludwik Fleck on how "scientific knowledge" is not something handed down from the heavens or the Olympus of science, where the "wise men" sit and determine what's "true" or not, solely motivated by their unquestionable and incorruptible love for the truth, but mostly a social process, as with all other human endeavours.
It is still a human activity and scientists are no saints or even gods, even if many of them wish to be.
They are still prone to all the human errors we have, like pride, greed, pettiness, fear, lust for power etc., all of which can motivate them to struggle with one another, spin intrigues and block competitors from getting their theories accepted, because those theories could undermine their own research funds, positions or legacy.
We CAN regress!
Science is not concerned with truth... again.
Believing otherwise is scientism, a bastard philosophy derived from empiricism/positivism, which actually undermines science at its base, since its main epistemological assumption ("truth comes from science") falls flat based on its own criteria, meaning it's self-refuting. Does this assumption come from science as well? Show me the experiment where this assumption was produced and determined to be true.
It is a philosophical assumption, a universal truth claim, and it wasn't derived from a scientific experiment, so by its own standard it can't be true.
Science is NOT the end all be all determinator of what is true or right.
How can it be, since scientific experiments are an extension of sensory data (empiricism) and we cannot have sense data of the entire universe, therefore universal truth claims cannot be possible?
These are outside of science's domain.
Science needs philosophy; otherwise scientists cannot justifiably use certain categories, such as universals, which assume regularity, for which we don't have a justification, since we still have "the problem of induction", which Hume correctly pointed out.
https://stanford.library.sydney.edu.au/archives/sum2016/entr...
And "the scientific method" is not some magical formula which produces truth, as well.
As nothing is really set in stone in science, that doesn't derive directly from philosophy (i.e. math), how come nobody ever conducted scientific experiments into the validity of the scientific method itself?
People should not make "The Science"(TM) into a cult and it shouldn't inform all aspects of our lives, especially since science cannot make value judgements, since as Hume said one cannot derive an OUGHT from an IS. But to know that, a lot more scientists should be trained in philosophy, especially metaphysics and logic, which they are usually not (I speak from experience in the field of science).
Even if one isn't one of the "experts", one can still have their reservations regarding their propounded "truths". Just because he is an "expert", I don't have to believe in his conclusions, solely based on his credentials.
This would be an authority fallacy.
Always remember:
"Dr. Horton recently published a statement declaring that a lot of published research is in fact unreliable at best, if not completely false.
‘The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.’ (https://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736...)"
-> taken from https://www.globalresearch.ca/editor-in-chief-of-worlds-best...
All of it and all of them can and should be questioned, even if you're not from another field of inquiry.
Otherwise this is just another religion, where the "experts" are the new priest-class.
We should all be aware of the limitations of science.
> Science is NOT the end all be all determinator of what is true or right.
> for which we don't have a justification,
Science can be used to justify itself. Try it! You can split into groups, try to settle questions with science and without it, and see which group succeeds best. You can even use more than a binary split!
Why is this invalid for science, but valid for philosophy (or even "reason")?
There's a certain type of person on HN that, for whatever reason, believes nothing is truly knowable.
They won't say it like that, but you can see it in the way they interact and argue. In many ways it's the fallacy of the middle ground rearing its head: they have an underlying belief that the truth is probably somewhere in the middle.
Of course, if you ask them if sexual relations with a 5 year old is acceptable they'll suddenly find something that is truly knowable, but it typically takes such extremes to get them to admit that yes, some things in this world are actually knowable.
Especially when the manufacturers are proven to be criminal enterprises with a mind boggling list of crimes, as well as other blatant competing interests let alone in govt institutions...
Quote from the article below:
'Dr. Marcia Angell, a physician and longtime Editor in Chief of the New England Medical Journal (NEMJ), which is considered to be another one of the most prestigious peer-reviewed medical journals in the world, makes her view of the subject quite plain:
“It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of the New England Journal of Medicine.” (source)'