> As the saying goes: all models are wrong; some are useful.
Models being wrong/right/useful/etc. is itself just a model of a relation between a model and [an aspect] of reality [or whatever else] the model is supposedly modeling.
> As the saying goes: all models are wrong; some are useful.
It's important to also recognize that everything we believe about the world except raw sensory perception is a model and, ipso facto, wrong, though perhaps useful.
The fact of the occurrence of the raw sensory perception is not necessarily a model; the interpretation of external causes is.
(Though it's possible that the actual mechanism of sensory experience involves modelling, the only way we could conclude that this is the case is through a model of the mechanism, which has the problem of all models. All other beliefs about the external world are necessarily models.)
The fact that sensory information is limited does not make it wrong. It's a certain perspective on reality, and we can easily accommodate that.
For instance, we know that a stick isn't actually bent when it goes into water, even if it looks like it is.
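The accommodation here is concrete physics: Snell's law tells you exactly how much a ray from the submerged part of the stick bends at the surface, so the "bend" is fully predictable. A minimal sketch (the refractive indices are standard approximate values; the helper function is just for illustration):

```python
# Why a straight stick looks bent at the waterline: a ray leaving the
# water refracts at the surface, so the eye back-projects the submerged
# part along the wrong line. Refractive indices are standard values.
import math

n_air = 1.000
n_water = 1.333

def refracted_angle(theta_deg):
    """Angle from the normal (degrees) in water, for a ray arriving from air."""
    s = (n_air / n_water) * math.sin(math.radians(theta_deg))
    return math.degrees(math.asin(s))

# A ray hitting the surface at 45 degrees bends toward the normal:
print(round(refracted_angle(45.0), 1))  # 32.0 -- not 45, hence the apparent kink
```

Once you know the rule, the "bent" appearance stops being misleading and becomes extra information.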
More generally, we have no practical problem dealing with reality using the information given by our senses. It's not a problem in our daily lives and it's not a problem for going to the moon, Mars, creating microprocessors, etc.
Sorry for busting up the "deny knowledge to make room for faith" parade.
> The fact that sensory information is limited, does not make it wrong.
If you reread the comment you are responding to, it says raw sensory input is the only thing that is not wrong, not that it is wrong.
> For instance, we know that a stick isn't actually bent when it goes into water, even if it looks like it is.
We don't even know that there is a stick, even if it looks like there is, but we do know the sensory data itself. In fact, the very idea of “a stick” (or discrete objects more generally) is a useful, but still wrong, model.
> More generally, we have no practical problem dealing with reality using the information given by our senses.
We have lots of practical problems dealing with that, and lots of practices adopted specifically to mitigate those problems. The historically recent invention of the modern scientific method is itself a (far from entirely successful) method of mitigating a very broad and impactful class of pervasive practical problems with that.
> Sorry for busting up the "deny knowledge to make room for faith" parade.
Faith, where it concerns the material universe at all, is still a method of selecting models subject to the “all models are wrong, some are useful” rule. It's just a method of model selection that isn't focussed on predictive utility, so, insofar as prediction is your key metric for utility, it's inferior to the scientific method, which is narrowly focussed on predictive utility.
Pointing out that all knowledge of the material universe beyond the facts of raw sensory data is models that are at best useful but always in some respects wrong is certainly denying lots that is commonly claimed as knowledge, but it absolutely isn't clearing the field for faith by so doing.
One of the ways philosophy goes wrong is by using trivial problems as examples. "Where is the stick", or "is this really a table in front of me" are the Hello World of epistemology. Just like you can't figure out which programming language is better than another by looking at Hello World, you can't evaluate a philosophy by its answer to where the stick is. You need to look at answers to hard problems, like "what should I do with my life?"
> If you reread the comment you are responding to, it says raw sensory input is the only thing that is not wrong, not that it is wrong.
My view is that raw sensory input is just information, so it's not wrong, and so we are in agreement there. Further, I am saying that knowledge can be derived from that sensory input, which is where we disagree.
I see what you are saying, I failed to clearly distinguish these two cases when I used the term "sensory information."
> We don't even know that there is a stick
Yes we do. But to "know that there is a stick" is subject to certain caveats. It's not an absolute, like being omniscient. For example, if something is a stick, it's still a stick, even if we actually live in the Matrix, or in the dream of an alien being. Because to be a stick is just a mental classification (i.e., concept) that we have created for a certain kind of thing evidenced to us by our sense data and by the use of reason.
> is a useful, but still wrong, model
A model can be correct as long as it doesn't overstate its own power of generalization. There is a sibling discussion going that covers the fact that Newton's Laws are correct as long as they are understood to describe phenomena (i.e. evidence) actually observed by Newton, and then only to a certain level of fidelity; but not "correct" if you hold them to the standard of explaining everything.
> Faith, where it concerns the material universe at all, is still a method of selecting models subject to the “all models are wrong, some are useful” rule,
Faith doesn't select a model. A model is based on evidence of the senses. Faith dispenses with models entirely and just makes stuff up out of whole cloth. I see your point there, I just think you're using the word "model" too loosely and in a way that gives faith too much credit.
No, a model is an abstraction by which one conceptualizes phenomena, regardless of the basis. It seems true that a model not based on structured application of sensory observation is likely to be a poor model if you judge quality by how well it lines up with future sensory observations, but a model is not necessarily a good model.
Faith, also, does not dispense with evidence of the senses; it just applies it differently.
“I perceived X describing Y as true”, where X is a (natural or supernatural) authority figure, is sensory data, as are positive sensations associated with a particular belief. Now, these aren't sense data that empiricism would treat as relevant to the truth of the belief at issue, but that's a different issue.
You're just arguing that a "model" based on make believe is still a type of model. I don't find that compelling.
It's like saying a scientific theory not based on empirical information is still a scientific theory.
I don't see why that point is worth defending. Except perhaps as a way to give bad ideas higher stature by lumping them in with good ideas.
> Faith, also, does not dispense with evidence of the senses, it just applied it differently.
No idea what you are talking about here. For instance, the only sense-based evidence I know of for Christianity or Islam is the historical record, and that isn't reliable enough to establish these beliefs as anything more than make believe. In other words, we can't see Jesus work miracles, which would constitute partial evidence for Christianity; the only evidence we have is that someone said they did a long, long time ago. I'm also not aware of evidence for Zeus or Thor or Brahma.
> as are positive sensations associated with a particular belief
Your last line doesn't follow, as far as I can tell, from the rest.
The first part of your argument reminds me of Searle's: that a belief in a shared, external reality is implicit in every statement of fact, e.g., it makes no sense to say "there is snow and ice at the top of Mt. Everest, AND there is no shared, external reality."
The last line is a reference to Kant, who made an argument roughly along the same lines as dragonwriter, and said he wanted to deny knowledge to make room for faith. He wanted to preserve religion in the face of the Enlightenment. So far, he has succeeded.
The anti-intellectuals will spin this to prove their own points without any understanding of math or physics.
I've seen this happen, and they are very confident; their 'friends' don't know any better either. When it comes time, will anti-intellectuals trust scientists or their neighbor?
I disagree with the need for a warning. I think it would be better if this were _more_ commonly used.
So often debates arrive at a stasis like:
> "You're wrong"
> "No YOU'RE wrong"
And there they sit, each side certain that the other is an idiot.
The alternative is to admit that both parties are right according to their model, and that both models are wrong (because being right is not what models are for). I think this is better because the "which model is more useful" question sets up a lot more potentially fruitful interaction between opposite sides.
The danger you're referring to only occurs in a setting where science is implicitly authoritative in the first place. If we drop that assumption, science still produces the most useful models, but finding the most useful one for your project becomes less adversarial.
I'm going to remember this. It's a good way to have discussion if you're lucky enough to have someone who can abstract their personal views from the model that produced them.
I want to believe that, but how do you evaluate models’ usefulness if you refuse to acknowledge facts and data, and just yell “fake news” when they point to a conclusion you don’t like?
It doesn't help that model outputs are often put forward as the main reason for doing something. For example with climate change. But often the problem being discussed is more complex than the single dimension that the science based argument relies on. You have to consider ethics, economics, social effects, and a whole host of other disciplines. People intuitively get this.
You have to have an argument that will persuade the uneducated, poorly informed neighbor first. And often model outputs just don't cut it.
> You have to have an argument that will persuade the uneducated, poorly informed neighbor first.
An emotional/moral argument will be a coinflip with these people.
Will they side with the sad story of X, or the anti-intellectual sad story of Y?
I do not know the solution. I've seen others propose everything from hiding difficult-to-understand topics to calling them 'stupid' in front of their peers, etc.
Engaging them with logic and argument makes the problem worse.
Would be willing to hear ideas if people have them.
Convincing people is hard, and starting from the position "I know better than them" only makes it harder. I understand that sometimes people are just wrong or are using facts incorrectly, but often this gets inflated to an extreme degree by the "smarter" side.
For me an important distinction is whether people are using unsound arguments to support a position or are holding an unreasonable position. For climate change the problem (from our "let's save the planet" side) is that we want them to change their opinion, not that they are using unsound arguments.
The only way to solve this is to start a two-way conversation where you can communicate what you believe and why it is important, and they can do the same. It is hard to convince people who do not want to be convinced, and even harder to lecture people who do not want to be lectured.
A good starting point is to show that you yourself are willing to reconsider their point of view, as stuff like calling them stupid just brews social resentment.
Surely the mechanism for change is the political process. And at a smaller scale, society is full of people trying to influence, change opinions, and manipulate. Surely you should just adopt those same methods?
It would be interesting to try and model people more directly. Like what political campaigns do, but with a more noble intent. Use the data to discover how to serve people's needs.
This is a similar sentiment, but perhaps more useful initially in conversation with people who may get hung up on what “model” means. It can also lead to a fruitful analogy as people easily grasp why different kinds of maps are important.
Newton’s Laws are absolute in a typical human context. A better model must explain everything covered by Newton and more. Discovering that better model doesn’t make Newton wrong.
No, the saying is not wrong here. Newton's Laws are wrong, period. They explain some observational evidence to some degree of accuracy. That they're "absolute in a typical human context" is literally the point of the saying I quoted.
Newton's laws captured part of what he observed at the time. And that's all a model is supposed to do.
That doesn't make it wrong. It makes it limited. You could say it's a "low resolution" view of reality.
If we limited our knowledge only to perfect models, we would be holding ourselves to a standard of omniscience, and we would never know anything.
We clearly have good enough models to design hypersonic aircraft, to pick a semi-random example. Our models are not complete; we don't know everything. But they are not wrong.
Other commenters who warn that saying "all models are wrong" encourages anti-intellectualism are exactly right. That leads to a bad place.
The opposite of wrong is correct. Newton's Laws are not correct, therefore they are wrong. Technically.
I agree though that saying they're "wrong" has limited utility... Perhaps the more useful way of thinking about models is "give sensible predictions up to certain amount of accuracy in certain contexts".
> Newton's Laws are not correct, therefore they are wrong.
They are correct as long as they are stated with the caveat that they are only known to apply to Newton's observations, and then only with a limited amount of fidelity.
I haven't read Newton's original source material, so I don't know if he overstated the universality of his laws.
I feel we're arguing a very minor point here. You seem to be hung up on this "models can be correct when stated with appropriate caveats". Sure. The "all models are wrong" saying has an implicit "at representing the observable reality in full fidelity".
"All models are wrong" is the same as saying "All knowledge is wrong." That's technically incorrect and philosophically disastrous. Ideas like this have a huge impact on people in myriad ways.
> The "all models are wrong" saying has an implicit "at representing the observable reality in full fidelity".
No, it doesn't. This is exactly like saying "All knowledge is wrong" has an implicit "as knowing everything in full fidelity, i.e., being omniscient." You are holding models (or knowledge) to a completely ridiculous standard. They are not supposed to work the way you are implicitly asking them to work.
> That's technically incorrect and philosophically disastrous
How can taking a particular epistemic stance be wrong? What non-epistemic basis would you use to judge it? It might be inconsistent with the stance you have chosen, but that doesn't make it wrong.
And as far as it being disastrous, I know a number of very competent people that take that view--what disasters should I look for in their lives to see how disastrous their stance is?
It's wrong if it contradicts the evidence of raw sense data or knowledge derived logically from the evidence of raw sense data.
You can choose to reject even that, but there is no reason to make such a choice, and it would be supremely impractical to do so.
> And as far as it being disastrous, I know a number of very competent people that take that view
The view that knowledge based on sense data isn't really knowledge was advanced by Kant, and certainly helped make room for the Nazis. Marx also exploited the anti-knowledge ethos of the time with his dialectical materialism. Finally, religion thrives when the ability to know is denigrated; that was Kant's stated goal [1]. Religion is fundamentally dishonest and leads to an infinite plethora of evils.
[1] "I have therefore found it necessary to deny knowledge, in order to make room for faith" -Kant, Critique of Pure Reason
Sense data in itself can't be contradicted because it contains no propositions.
As for knowledge logically derived from the sense data--it is exactly the choice: "which logic should I choose to interpret this sense data?" that is in question.
If you choose a logic where all models are wrong and some are useful, then it won't contradict knowledge derived according to itself. If instead you choose some other way, you're again not going to run into any contradiction.
The choice of an epistemic theory can only be about utility. Truth and falsity can only come later because they must be expressed in terms of the chosen theory.
No, they're also wrong in a typical human context.
Most people have a phone with GPS these days. GPS would flat-out fail to work without correcting for time dilation effects on the satellites. Newton had no concept of time dilation (nor should he have, given his context).
I'm not sure I follow your argument; perhaps you can help me understand. If you're saying that Newton's laws break down in a specific case but are fairly accurate in other cases, does that mean they're still universally correct?
I'm saying that for most everyday situations, Newton's equations provide sufficient accuracy for people to achieve their goals. However, in the case of GPS, the system wouldn't work reliably if it was only using Newton's equations.
Einstein's theory of gravity is much more robust and is highly accurate across a far wider domain. It can be used to implement a reliable GPS system.
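To put rough numbers on the GPS point (a back-of-the-envelope sketch; the constants are standard values and the ~26,571 km orbit radius is approximate): the satellite clocks net roughly +38 microseconds per day relative to ground clocks, and since positioning is just light travel time, that drift would accumulate to kilometers of error per day if uncorrected.

```python
# Back-of-the-envelope relativistic clock drift for a GPS satellite,
# and the ranging error if left uncorrected.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg
c = 2.998e8            # speed of light, m/s
r_earth = 6.371e6      # Earth radius, m
r_sat = 2.6571e7       # approximate GPS orbit radius, m

v_sat = math.sqrt(G * M / r_sat)  # orbital speed, roughly 3.9 km/s

# General relativity: the satellite clock runs FASTER (weaker gravity up there)
gr_shift = (G * M / c**2) * (1 / r_earth - 1 / r_sat)
# Special relativity: the satellite clock runs SLOWER (orbital speed)
sr_shift = -v_sat**2 / (2 * c**2)

seconds_per_day = 86400
net_us_per_day = (gr_shift + sr_shift) * seconds_per_day * 1e6
ranging_error_km = (gr_shift + sr_shift) * seconds_per_day * c / 1000

print(f"net clock drift: {net_us_per_day:+.1f} microseconds/day")   # about +38
print(f"uncorrected ranging error: ~{ranging_error_km:.0f} km/day")
```

Newton's equations have no term for either effect, which is exactly the sense in which they're "wrong" even in this everyday context.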