I felt the answer should involve degrees of truth, which can roughly be calculated as "percent of people who agree with the classification" for some reference class, and this degree of truth gradually decreases. But you can't assign any arbitrary cutoff number where above it is true and below false.
The way to combine two statements would be to likewise ask what portion of people agree with both, which seems to avoid some of the critiques of degrees of truth.
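The proposal can be sketched as a toy poll. The respondents' votes and the claim names here are invented for illustration; the point is that the conjunction is the fraction who agree with *both* claims (a joint frequency), not some function of the two separate fractions:

```python
# Toy sketch of "degree of truth = fraction of agreement".
# Each row is one hypothetical respondent's yes/no judgments.
polls = [
    {"is_heap": True,  "is_pile": True},
    {"is_heap": True,  "is_pile": False},
    {"is_heap": False, "is_pile": True},
    {"is_heap": True,  "is_pile": True},
]

def degree(claim):
    """Fraction of respondents who agree with a single claim."""
    return sum(p[claim] for p in polls) / len(polls)

def degree_both(claim_a, claim_b):
    """Fraction who agree with BOTH claims -- the proposed conjunction.
    This is a joint frequency, not min(degree(a), degree(b))."""
    return sum(p[claim_a] and p[claim_b] for p in polls) / len(polls)

print(degree("is_heap"))                  # 0.75
print(degree("is_pile"))                  # 0.75
print(degree_both("is_heap", "is_pile"))  # 0.5, not min(0.75, 0.75)
```

Because the conjunction is computed over the same respondents, it automatically respects correlations between the two judgments, which is what lets it dodge some standard critiques of degree-of-truth conjunction rules.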
This is only for problems of classification over human undefined terms, where the argument from common usage is valid.
Does this have any major problems?
The allegedly strong argument is basically begging the question. It depends crucially on the assumption that, for any particular 'situation', we can unambiguously say whether or not it can be called a pumpkin, but that is the very premise it is attempting to prove. Another way of looking at it: while we clearly have a pumpkin at one end of the list and clearly do not at the other, the formalism offers no way to decide where the boundary lies. This representation has its own sorites issue; it models the alleged problem without offering anything that leads to a solution. Substituting 'true' or 'false' for the presence or absence of grains of sand does not make the problem any more tractable.
A secondary problem is that it depends on there being a total ordering of situations, but this fails when we realize that pumpkin-ness is context-dependent: in some contexts, a set of chromosomes is a pumpkin, while in others, nothing but a whole plant will do. Different contexts will be satisfied with different subsets of the pumpkin's atoms, and, in general, any one context will be satisfied with many different subsets.
No. As long as you concede that some pumpkins can be unambiguously called a pumpkin, then you just ask which is the first object on the list that can't be.
My other claims stand, I think. The point is that his linear model is not an adequate fit to the problem (for example, the ambiguity you introduced really is there, but the model excludes it). Maybe there is something like question-begging in assuming that this is an adequate model.
Part of the difficulty here is that you are making a somewhat different argument than the author's, but I don't think you realize it. Your argument differs crucially when you write (and I quote) "go down the list and ask which is the first not to be unambiguously a pumpkin". The author is very careful not to say this, no doubt because he realizes that would be begging the question. Also, someone like me could come along and say "if you can do that, then you can answer the question 'what is the least number of grains of sand in a heap of sand?' and go down in the history of philosophy as the person who solved the sorites paradox."
As I don’t think you intended to make this argument, let’s get back to the author’s. If you go back to my first post in this thread, you will see that I raised several objections to the author’s argument, which haven't been refuted in this thread.
Here’s another one, specifically addressing the way the author avoids requiring that you actually find the boundary. He says that if you look at one end of the list, you see true, and at the other you see false, so you can deduce that there is at least one boundary where it is true on one side and false on the other (the argument actually assumes that there is only one such switch, but we can put that issue aside, as it will turn out to be moot.) As far as the model goes, this is correct, but the model does not fit the problem. The model is a Boolean one, and as such, excludes vagueness by construction, but pumpkin-ness is not a well-formed Boolean predicate.
This reminds me of a passage early in 'Gödel, Escher, Bach', where Hofstadter points out that Euclid and his successors tacitly assumed that his formal system of geometry was the one and only possible model of space, but Einstein said 'not exactly.' It took me a while to see what Hofstadter was saying here.
It also reminds me of Zeno's paradox of Achilles and the tortoise. Zeno presents a model for the race, but that model avoids, by construction, actually considering the time when Achilles catches up to and passes the tortoise. Zeno's analysis of his model is correct, as far as it goes, but it does not go far enough: it is not a valid model for the problem.
I am intrigued by the way some people will accept the most unlikely proposition, if they think it is a logical deduction. Did Zeno really believe that motion was an illusion? Do you really believe that you can see a qualitative difference between a given pumpkin and that pumpkin with a single molecule removed?
I also still don't see the difference between my argument and the paper's. You can say that "is a pumpkin" is not Boolean, but as long as you concede that the first object has a property the last one lacks, some object along the way must change.
You can say "unambiguously a pumpkin" is not Boolean, but then you just go one level up, to "unambiguously unambiguously a pumpkin". You don't really need Boolean; you just need the property to be determined for some objects.
For any object, either it's determined or not, right? So one level up should be enough.
Being unambiguously determined is naturally a Boolean property. You can't have uncertainty over whether something is unambiguously determined.
If we could simulate humans, we could in theory spin up a trillion versions, show each a different "heap", and find out exactly what size makes people call it a heap.
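The thought experiment can be sketched with simulated judges instead of simulated humans. All the numbers here (a noisy personal cutoff with mean 100 grains, sd 15, and the population size) are invented purely for illustration:

```python
import random

random.seed(0)

# Each simulated judge has a noisy personal cutoff for "heap";
# the degree of heap-ness at size n is the fraction who would say yes.
judges = [random.gauss(100, 15) for _ in range(100_000)]

def degree_of_heap(n):
    return sum(cutoff <= n for cutoff in judges) / len(judges)

for n in (50, 100, 150):
    print(n, round(degree_of_heap(n), 2))
```

On this sketch there is no single size where heap-ness switches from false to true; instead the fraction climbs smoothly from roughly 0 at 50 grains, through roughly 0.5 at 100, to roughly 1 at 150, which is the degrees-of-truth picture rather than a sharp cutoff.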
I think you are aware, to some degree, that you have created this problem, and your second sentence is an attempt to get around it - but what you are doing in that sentence is to deny the fundamental premise of the sorites paradox! That is certainly one way to address it, and a very bold one, if I may say so. I am delighted to see that your understanding of the issues has evolved so radically in the course of this discussion.
What? No, my proposed solution (involving degrees of truth) applies equally in both examples. The first pumpkin is not 100% a pumpkin.
But if you think the first one is unambiguously a pumpkin, then you do have the paradox.
(Btw, I'd appreciate if you only replied to one of these chains. I've seen how multiple chains can confuse things and want to cut it off now. I probably should have edited the above reply into my other one, but let's fix it now, ok?)
I don't think you've successfully pointed out a flaw.
>N0us has pointed out in several places that formal theories with vagueness have issues
I asked for an issue with my proposal and haven't gotten one yet.
1. It is 50% true that Sally has red hair.
2. It is 50% true that Sally has brown hair.
3. It is 50% true that Sally does not have red hair. (see 2.)
You end up running into contradictions all over the place, and a non-binary system of logic makes it nearly impossible to develop any formal system of reasoning.
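The contradiction worry can be made concrete with the standard fuzzy connectives (min for "and", 1 - x for "not"), applied to the Sally example above:

```python
def f_not(x):
    """Standard fuzzy negation."""
    return 1.0 - x

def f_and(x, y):
    """Standard (min-based) fuzzy conjunction."""
    return min(x, y)

red = 0.5  # "Sally has red hair" is 50% true

# Classically, "P and not P" is always false (degree 0).
# With fuzzy connectives it comes out half true:
print(f_and(red, f_not(red)))  # 0.5
```

So the classical law of non-contradiction fails for any proposition whose degree sits strictly between 0 and 1, which is exactly the sort of behavior the objection points at.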
I'm not completely clear on the second half of your comment, but when you say "portion of people agree" I assume you are trying to use an empirical theory of truth. There are again many problems with this theory, but I can think of an example that might illustrate. It's a toy example that I won't fully expand upon, but it illustrates the point at hand.
There is a box in a room that is painted yellow. A group of people is asked to state the color of the box, but they are all colorblind and state that it is grey. The consensus about the color of the box is clearly wrong in this case. Some people will object to this example because the meaning of words is also clearly derived from their use.
The epistemic theory of vagueness is not far off from what you are proposing, and it is the theory that Williamson posits in his book. Essentially, there is a hard cutoff for all vague statements: it is either true that the collection of grains is a heap or it is not. An omniscient speaker could evaluate, with definite certainty, whether the collection of grains is a heap, because only the omniscient speaker possesses all of the relevant information about the property of being a heap. In other words, vagueness is a problem of inexact knowledge.
Another example of inexact knowledge: suppose you were to look out over a stadium and state "there are 23,422 people here." Barring the possibility that you are a savant and actually counted all of the people, even if it happened to be true that 23,422 people were in the stadium, it wouldn't also be true that you knew there were that many. You lack the relevant information about how many people are in the stadium to safely claim that there are n people in it, just as you lack the relevant information about the property of being a heap to safely claim that you know a collection of n grains is a heap.
This is a powerful theory which is not as easily defeated as others but keep in mind that it is not without weakness. Hopefully this at least exposes some of the concepts to you even if it's at a superficial level.
> and having a non-binary system of logic makes it nearly impossible to develop any formal system of reasoning
Surely there have been efforts (probabilistic inference; I wonder if there's a rigorous calculus, maybe something like a Bayesian network)? I mean, you won't have nice rules of deduction, sure, but I wonder... Is one of the worries for a theory of truth based on some form of fuzzy logic ("truth" not being Boolean) that it wouldn't be properly epistemically closed? I.e., even with some rules for inference, one would no longer be able to satisfy classical inference scenarios of the kind "S knows P; S knows P entails Q; hence S knows Q".
Sorry if I'm just throwing pretty words around, wondered maybe you'd know something more here / pointers.
>There is a box in a room that is painted yellow. A group of people is asked to state the color of the box but they are all colorblind and state that it is grey. The consensus about the color of the box is clearly wrong in this case.
My proposal doesn't depend on people actually being asked. It's more of "what would people say, if they were asked, and given all relevant objective information". Your scenario is ruled out.
It isn't a scenario, it's a thought experiment. "given all relevant objective information" sounds much like the omniscient speaker which is central to the epistemic theory.
A single omniscient speaker can't work, because people will disagree even with all relevant objective facts agreed upon. When two people dispute whether something is a heap, they aren't arguing over objective facts.
This clarification rules out your proposal as any kind of actually implementable system then, because it collapses to having a single authority who "knows" (how?) what the people would say.
If it doesn't involve actually asking people, then it's not really a polling of perceptions, but some person's idea of what those perceptions would be.
An approximation is carried out every time someone pulls out a dictionary to resolve a dispute over common usage.
That example mostly explains why fuzzy membership quantities aren't the same thing as probabilities.
> You end up running into contradictions all over the place and having a non-binary system of logic makes it nearly impossible to develop any formal system of reasoning.
Such systems (e.g., fuzzy logic) have been developed, so it's clearly not impossible to develop them, and they tend to include classical binary logic as a special case.
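The "special case" claim is easy to check: restricted to the values 0 and 1, the usual fuzzy connectives (1 - x, min, max) reproduce the classical Boolean truth tables.

```python
def f_not(x): return 1 - x
def f_and(x, y): return min(x, y)
def f_or(x, y): return max(x, y)

# Restricted to {0, 1}, the fuzzy connectives agree with
# Boolean not/and/or on every combination of inputs.
for x in (0, 1):
    assert f_not(x) == (not x)
    for y in (0, 1):
        assert f_and(x, y) == (x and y)
        assert f_or(x, y) == (x or y)
print("fuzzy connectives agree with Boolean logic on {0, 1}")
```

The contested behavior only appears for intermediate degrees; on crisp truth values nothing changes.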
First you have to fuzz the basic conjunctions, not the propositions. If you have incomplete information about whether Sally has brown hair or red, whether Sally is short or tall, whether Sally is thin or fat, everybody starts off trying to use three numbers: one representing the brown/red uncertainty, one representing short/tall uncertainty, one representing thin/fat uncertainty. That never works out. You need seven numbers (eight that sum to one): one for Sally is a tall, thin red-head, another for Sally is a tall, thin brunette, another for Sally is a tall, fat red-head, and so on.
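The point can be illustrated directly: the eight joint numbers determine the three marginal numbers, but not the other way around. The probabilities below are invented purely for illustration:

```python
# Joint distribution over (hair, height, build): eight numbers
# summing to 1 (seven free parameters). Values are made up.
joint = {
    ("red",   "tall",  "thin"): 0.20,
    ("red",   "tall",  "fat"):  0.05,
    ("red",   "short", "thin"): 0.10,
    ("red",   "short", "fat"):  0.15,
    ("brown", "tall",  "thin"): 0.05,
    ("brown", "tall",  "fat"):  0.20,
    ("brown", "short", "thin"): 0.15,
    ("brown", "short", "fat"):  0.10,
}
assert abs(sum(joint.values()) - 1.0) < 1e-9

def marginal(position, value):
    """Sum out the other attributes to get one marginal number."""
    return sum(p for k, p in joint.items() if k[position] == value)

# The three marginals can be recovered from the joint...
print(round(marginal(0, "red"), 2))   # 0.5
print(round(marginal(1, "tall"), 2))  # 0.5
print(round(marginal(2, "thin"), 2))  # 0.5
# ...but not vice versa: many different joints share these same
# marginals, so three numbers cannot replace the eight.
```

This is exactly why the three-numbers approach "never works out": it silently assumes the attributes are independent, and throws away every correlation between them.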
Second, you have to use all the information you have all the time. The great thing about logic is that it is monotonic. Suppose that you have a pile of accurate facts about the world (so we know that they are consistent). You can work through them, one by one. If, halfway through, you come up with a useful deduction, you know that you get to keep it. It will still be true at the end.
Reasoning with incomplete information is non-monotonic. Halfway through you are confident that it has been raining earlier in the day because the lawn is wet. Later you trip over the lawn sprinkler and see that it has been used. Later facts can explain away earlier facts, and the strengths of conclusions can go down as well as up.
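The wet-lawn example can be worked as a small Bayesian calculation by brute-force enumeration. All the prior and conditional probabilities here are invented for illustration:

```python
from itertools import product

# Invented numbers for the lawn example.
p_rain = 0.3       # prior probability of rain
p_sprinkler = 0.2  # prior probability the sprinkler ran (independent)

def p_wet(rain, sprinkler):
    """Lawn is almost surely wet if either cause occurred."""
    return 0.95 if (rain or sprinkler) else 0.05

def posterior_rain(observed_sprinkler=None):
    """P(rain | lawn is wet [, sprinkler state]) by enumeration."""
    num = den = 0.0
    for rain, sprinkler in product([True, False], repeat=2):
        if observed_sprinkler is not None and sprinkler != observed_sprinkler:
            continue
        p = ((p_rain if rain else 1 - p_rain)
             * (p_sprinkler if sprinkler else 1 - p_sprinkler)
             * p_wet(rain, sprinkler))
        den += p
        if rain:
            num += p
    return num / den

# Belief in rain after seeing the wet lawn, and again after also
# finding the sprinkler has been used: the second is lower.
print(round(posterior_rain(), 2))
print(round(posterior_rain(observed_sprinkler=True), 2))
```

Seeing the wet lawn raises the belief in rain well above its prior, and then discovering the sprinkler drops it back down: a later fact explaining away an earlier one, which is the non-monotonicity in question.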
Notice the horrible impracticality of doing exact reasoning with incomplete information. There are too many basic conjunctions and you have to pay attention to all the information all of the time :-(
Real world reasoning is going to have to be approximate, so you know from the start that there is going to be trouble if you push practical reasoning too hard. The article talks about mathematical induction. That provides a good example of pushing approximate reasoning too hard.
We've agreed that if a bald man (meaning one with only a few hairs) had one more hair, he would still be bald. But there is a lurking ambiguity. Are we asserting that as a mathematical truth, hard-edged, absolutely true, usable as many times as we like with nothing going wrong? We had better be, if we are going to use it as a premise for mathematical induction. Or are we assenting to the claim for social and practical reasons? We don't want to be awkward by refusing assent to a proposition that is kind-of true, and insisting on actually counting the hairs is unreasonable. Real-world reasoning nearly always involves a degree of fudge and muddle, and we don't want to insist on a degree of exactitude that we will not adhere to in our own reasoning, especially as we are damn sure that a quest for exactitude will slow us down and make us miss important deadlines.