> I don't see what's so difficult to accept in this, frankly.
Because you're stopping once you reach the conclusion you like and not seeing where it leads.
Debating about morality is a prerequisite for, for example, coming to agreement on a community standard on a moral question of behaviour. And if it's futile to debate about morality, then of course it's futile to try to define a community standard on a moral question of behaviour.
Without consensus on standards of behaviour, communities and societies fall apart. People value communities and societies. Therefore people value being able to debate about morality.
You can argue about and agree upon a standard of behavior without it being a moral standard. In fact, I'd argue that's exactly what most laws are. When the CA road laws say that drivers must keep a 3-foot buffer from cyclists, is that a moral rule?
Now, it may be that you still need some moral axioms to build those standards upon, but you don't need to argue about them, merely to have enough people with a roughly similar pattern. The dissenters will just be made to comply by force.
Logic doesn't care if you agree or not, it just takes you to the conclusion anyway. And if that reveals you to have an incoherent position, so be it.
Also, you're not going to get far trying to define away moral aspects. _Why_ was the cycle law enacted? Presumably to reduce cyclist deaths. Why reduce those? You get to a moral question incredibly quickly from the most arcane law.
And your thinking that you can find a majority with "roughly similar pattern" without any need for debate is effectively saying "a majority will agree on fundamental moral axioms" and now you're nowhere near relativism. You might as well have posited the article's "moral facts", at this point.
If I roll five dice, and three of them hit on 3, does that mean that 3 is some special number?
Moral relativism doesn't imply that everyone's subjective moralities are all and always incompatible with each other. As people's subjective moralities shift this way and that, clusters are bound to happen - those are the "majorities".
What I deny is that any of those are the one objective morality. It's just the latest sample from the RNG.
Tomorrow will bring new ones.
Again, it's not binary: you don't have to accept a moral objectivism just because you deny relativism.
I too deny that any of the axioms are the one objective morality (although the clusters of moral belief over time are nothing like you'd get from an RNG, and tomorrow broadly speaking does not bring new ones, but that's a digression).
Your position is relativist. And you asked why relativism was so hard to accept. Well, like I said way upthread, it's because if you take a relativist position to its conclusion, you generally end up somewhere people find hard to accept. (Or less politely, you end up somewhere dumb).
And this case is a perfect example: you've pretty quickly ended up having to argue that "laws can be/are built on moral axioms that come about by chance, because random moral preference generation will result in coincidental clusters of agreement".
Which contradicts the observed facts of moral development, aside from all the other problems with it.
You could spend a lot of time trying to shore this up, or you could just start again with a better foundation than moral relativism.
Interesting comment. I don't care where it leads, it's a personal thing.
I don't feel the need to hold a position that applies to everyone. To take it to something I understand better, like coding standards: I couldn't care less what people decide the rules are; they seem completely arbitrary to me. I'm happy to go with what's gone before, and if people want to fight about new ones I'll leave them to it.
But you accept that people can fight and decide what the rules are? That you can have a meaningful discussion about which coding standard is the best, and why, and reach a conclusion you all agree on?
Because metaethical moral relativism wouldn't accept that. It says you might as well have a discussion about which colour is the best and try to reach a conclusion everyone agrees on.
Well, you changed a word there. The first question is whether you accept that people can fight. The second is whether you can have a meaningful discussion. I accept that people can fight, but I don't accept that their discussion is meaningful, and I don't accept that they can reach a conclusion I agree on.
I don't see how accepting that people can fight and decide on rules means that I think there's any meaning in it, or why that means I agree, disagree, or care about the conclusion.
> It says you might as well have a discussion about which colour is the best and try to reach a conclusion everyone agrees on.
> I don't see how accepting that people can fight and decide on rules means that I think there's any meaning in it, or why that means I agree, disagree, or care about the conclusion.
"Fight" is probably the misleading point. People can and will fight about meaningless things, yes. Perhaps "debate" would be better, because that implies meaning in the discussion.
I think you can reasonably debate, compare and judge coding styles -- "which is more readable?", "which aids understanding better?", etc -- in a way you can't reasonably debate "which is the best colour?".
There might be some confusion between a prescriptive and a descriptive stance on moral relativism.
You can be a moral relativist and take a pragmatic position that traditions or societal consensus are worth having without disappearing in a puff of logic.
> You can be a moral relativist and take a pragmatic position that traditions or societal consensus are worth having without disappearing in a puff of logic.
Not really, because if you believe the relativism you must believe that moral consensus can't be reached.
What you are saying is very close to "You can be a climate change denier and take the pragmatic position that we need to change our behaviour to stop the earth heating up"
I think you're talking about a very naive normative interpretation of moral relativism. The fundamental argument, as I understand it, is actually that societies do reach some form of moral consensus, but that this consensus has cultural and historical roots (and possibly other contingent factors); it will vary and change across nations and over time.
I won't throw a segfault if I take a descriptive moral relativistic position and simultaneously think that honour killings are wrong.
I am logical and well-read enough to realize that most of our Western courts have allowed honour killings but called them "crimes of passion"[1], and that until not that long ago this could be a complete defence against a charge of murder. It is still an acceptable partial defence in some courts, and some judges apparently advocate its return in others [2].
So by recognizing the moral relativism inherent in that situation, am I normatively obligated to think honour killings are A-OK? I don't think so, I am however apparently outside the moral consensus on that subject, so what do I know.
I think you're talking about the anthropological meaning of moral relativism (and we probably agree there). But the rest of us are talking about the philosophical sense. There's a very good overview of that here: http://www.iep.utm.edu/moral-re/
Among other things, one of the potential consequences of metaethical moral relativism is that you and I can't actually say anything meaningful to one another about honour killings beyond "I feel they are bad" or "I feel they are good".
Or to put this a slightly different way: your culture (apparently) thinks honour killings are OK. You disagree. On what grounds? If a moral statement is true relative only to the consensus belief of a culture, and your culture says honour killings are right, then you must necessarily be wrong to disagree. (See 4f in the link for more on that).
If you reject that, as it sounds like you possibly do, then you reject philosophical moral relativism. You can keep the anthropological one, though.
Obviously we disagree about what exactly that entails (at a glance, the article you cited seems to describe multiple modes of moral relativism, not all as logically muddled as the one you describe).
I'd be curious to know on what grounds you would argue against something you find morally objectionable. What would you say are the proper foundations for moral axioms?
Update: and to answer your question, I'm not sure I have good reasons beyond "I feel they are bad". I do aim for coherence and consistency even when I'm not confident that I have a solid logical foundation, but it's hard to feel committed to any particular consensus. For background, I was raised in two different countries with two different cultures and languages; my politics were diametrically opposite those of one of my grandfathers, whom I still loved; I never met the other because he was an abusive alcoholic who thankfully abandoned the family. I am a bit of a Camusian outsider, I guess.