Hacker News
Appeals to Consequences (jefftk.com)
55 points by luu 5 months ago | 41 comments

A culture that is consistently truthful will develop a substantial advantage over one that isn't; over time it will have better information and make better decisions. The example in the article illustrates the problem with the fallacy well: they couldn't come up with a scenario that makes sense. The ends are unjustifiable; money is being routed to a charity that is ineffective at saving drowning puppies. At $1.2 million for 200 puppies, that's $6,000 per puppy, and there are presumably charities that could save far more puppies with that money. Meanwhile, people who could save puppies for $1,000/puppy will not bother trying, because they will think they are inefficient.
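The arithmetic behind the comparison, using the figures in the comment (the cost-per-puppy framing is mine, not the charity's actual accounting):

```python
# Figures from the comment: a charity raising $1.2M to save 200 puppies,
# versus rescuers who can save one puppy for $1,000.
inefficient_cost = 1_200_000 / 200   # dollars per puppy at the charity
efficient_cost = 1_000               # dollars per puppy for the rescuer

print(inefficient_cost)                    # 6000.0
# The same $1.2M routed to the efficient rescuer would save:
print(1_200_000 / efficient_cost)          # 1200.0 puppies, six times as many
```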

The original (Jessica Taylor) argument rests heavily on the idea that there is nobody out there who could do better. In a world of 7 billion people, that is a bad assumption. And that is the problem with lies: we have the technical capability to solve a dizzying array of problems, to the point where the major challenge is working out what the problems are and then prioritising our handling of them.

The argument needs to start from an appeal-to-consequences example that makes more sense. The consequences of lying about capability, in a world with highly refined knowledge, research, and development systems, are substantially greater than the short-term consequences of this particular example. Suppressing facts will not help save puppies; it will numb the response.

The fundamental problem with the appeal to consequences argument is that it's a willingness to hide or ignore the truth, or outright lie.

I've said before that I think the ends do justify the means (consequentialism), but you should be extremely skeptical of "ends justify the means" arguments because people rarely look at the complete consequences. If you're willing to hide or ignore the truth, or lie, then I'm extremely skeptical that your motivations are actually prosocial, and I'm skeptical that the results will actually be good.

The general problem with "ends justify the means" arguments, as I see it, is not as much completeness of the consequences, but rather the risk analysis. A lot of "ends justify the means" points of view involve doing certain harm now for uncertain good in the future. Like "lets execute a few thousand class traitors today, so we can build a socialist paradise tomorrow" or "lets lower the taxes for the rich and in a few years the wealth will trickle down". At this point it is kinda hard to be a consistent consequentialist. How certain do you have to be about the paradise of tomorrow to justify an execution today?
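The risk-analysis point amounts to an expected-value comparison; a minimal sketch with hypothetical welfare numbers (the function and all figures are illustrative assumptions, not anything from the article):

```python
def expected_value(p_success: float, good_if_success: float, certain_harm: float) -> float:
    """Net expected value of a 'certain harm now, uncertain good later' policy."""
    return p_success * good_if_success - certain_harm

# Hypothetical welfare units: a policy that certainly harms 5,000 people now,
# for a 50% chance of benefiting 10,000 people later, exactly breaks even:
print(expected_value(0.5, 10_000, 5_000))   # 0.0
# A true believer who assigns 100% probability to success approves it easily:
print(expected_value(1.0, 10_000, 5_000))   # 5000.0
```

The whole dispute then collapses into how honestly `p_success` is estimated, which is exactly where the executions-today-for-paradise-tomorrow arguments go wrong.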

As a good Bayesian, I was considering a risk analysis as part of the complete consequences, but didn't make that clear, so thank you for clarifying that.

To carry this through using the argument of the study that finds a small correlation between vaccines and autism: if doctors lie about this, there's no guarantee the information about the study won't come out anyway, and then it will be known that they lied about them. That, more than the results themselves, would add real credibility to the conspiracy theory that doctors are covering up a connection between vaccines and autism. And worse, it would undermine the credibility of doctors with regards to other things, unrelated to vaccines.

The OP isn't considering the full consequences of undermining the credibility of the medical field to promote one political agenda, no matter how correct that political agenda is. In part, this is because the OP is assuming covering up this information would even work, and it very well might not.

At a more fundamental level, it would be important to release this information because science needs a mechanism for discovering when we're wrong or when things change. Right now, there's strong evidence for the null hypothesis: we can confidently say that vaccines do not cause autism. But if studies started finding a correlation between vaccines and autism, we'd want to know that. Maybe the diagnostic criteria for autism improves, maybe a new chemical is introduced to vaccines by a new manufacturing process, maybe a gene becomes more prevalent that interacts with existing vaccines; the possibilities are endless. Right now we should operate on the belief that vaccines don't cause autism, because that's what the information we have indicates, but if new information indicates something else, we should by all means update our beliefs and our actions.
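The belief-updating step can be sketched with Bayes' rule (the numbers are made up for illustration; neither the prior nor the likelihoods come from any real study):

```python
def posterior(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1 - prior))

# Hypothetical: strong existing evidence of no vaccine-autism link, so
# P(link) = 0.01. A small study then finds a correlation; suppose such a
# result is 5x more likely if a link exists than as a statistical fluke.
print(posterior(0.01, 0.5, 0.1))   # ~0.048 -- belief shifts, but only slightly
```

One surprising study barely moves a well-supported prior, which is precisely why releasing it is cheap and suppressing it is expensive.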

But our theory says that the good outcome is certain!


The problem is that true believers in the correctness of their theory are horrible at risk analysis, because they assign 100% probability to the success of their program.

Consider a variation on the trolley problem. Just like always, there are two track branches and a switch. On the main track there are five people; on the side track there is one person. But this time things are a tiny bit different. The trolley driver is braking. And the five people on the main track are a few hundred yards down the track, so maybe the trolley will be able to stop before it runs them over. But maybe not. But there is a chance. The one person on the side track, on the other hand, is right here; there is no chance in hell they live if you pull the switch. So what do you do? How do you even reason about something like this?
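One way to reason about it is to compare expected deaths under each action (a sketch; `p_stop`, the chance the braking trolley halts in time, is exactly the unknown the scenario leaves vague):

```python
def expected_deaths(pull_switch: bool, p_stop: float) -> float:
    """Expected fatalities in the probabilistic trolley variant."""
    if pull_switch:
        return 1.0                 # the nearby person certainly dies
    return 5.0 * (1 - p_stop)      # the five die only if braking fails

# Pure expected-value reasoning: don't pull iff 5*(1 - p_stop) < 1,
# i.e. iff p_stop > 0.8.
print(expected_deaths(False, 0.75))  # 1.25 -- slightly worse than pulling
print(expected_deaths(False, 0.5))   # 2.5
print(expected_deaths(True, 0.5))    # 1.0
```

The sketch shows why the problem is hard: the answer flips on a probability nobody at the switch can actually measure.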

To me, it's really simple. I don't throw the switch. I trust the five people to get off the %^$#@ tracks before the trolley gets there. And I trust the trolley driver to get on the horn to alert the five people that they'd better start doing so without delay.

In the problem, the people are tied down to the tracks. Or possibly glued. Or to make it even more horrifying, riveted :)

The ends don't justify the means. Instead, the means used define the ends achieved.

The reasonably expected ends justify the well-considered means, if and only if they are net good.

In which the author presents an appeal-to-consequences argument against taking a hard line against accepting appeal-to-consequences arguments.

At least that is self-consistent, while an appeal-to-consequences argument against accepting appeal-to-consequences arguments would not be. But what sort of argument against accepting appeal-to-consequences arguments would be self-consistent?

After rolling that one around for a while, I would tentatively suggest that it is self-serving appeal-to-consequences arguments that we should be most suspicious of, while genuinely greater-good ones deserve consideration.

That did not take long to 'go meta.' Maybe it is a sign that this, like many other ethics issues, is resistant to doctrinaire solutions, which is pretty much where the author ends up.

The fallacy of appealing to consequences is not "you should do this thing because it has better consequences" (that's Consequentialism) but instead the clearly false "you should think this thing because if it were true that would be good". Here I'm responding to Taylor's broader use of it, where she's also including the idea that if saying X would have bad consequences you should avoid saying X.


* Consequentialism: we should evaluate vaccines based on the effect they have on people

* Appeal to Consequences: it would be horrible if vaccines were actually net-negative, so they can't be

* Taylor's extended Appeal to Consequences: see the post

It seems that the “extended” appeal to consequences, per the post, -is- consequentialism.

Consequentialism: evaluate X based on its consequences

Extended: evaluate X, where X is the verb “to speak”, based on its consequences

Appeal to Consequences: evaluate X based on desired consequences rather than predicted consequences

Am I misunderstanding?

You're not misunderstanding. Taylor is arguing, from a consequentialist perspective, that trying to evaluate the consequences of speaking, counterintuitively, leads to bad outcomes and so we shouldn't do it. This is a consequentialist argument, but a bit of an ironic one!

My post is responding by saying that it's important to have some situations where you can speak fully freely, but there are also times when you really do need to consider what might happen in response to your words.

Consequences are at the foundation of why we have ethics at all. All normative arguments are appeals to consequences when you look closely enough. The practical differences emerge from where and how far into the future you look for the consequences.

(The discussion seems ill-founded to me, without a lot more qualification on exactly which consequences one is looking to.)

You're describing Consequentialism (https://en.wikipedia.org/wiki/Consequentialism), which is a popular view, especially among technically minded people, but there are also other widely held views that work differently:

* https://en.wikipedia.org/wiki/Deontological_ethics

* https://en.wikipedia.org/wiki/Virtue_ethics

I think the parent poster's claim (if I may put words in their mouth) is that, despite the protestations of these other schools of thought, the only reason their rulesets/virtues exist is the general-case consequences of those rulesets and virtues. They may not explicitly denote their appeal to consequences, but those ethical systems would not exist in their given forms if the actions they proscribe did not have consequences.

For example, a virtue ethicist might claim that honesty is a virtue, and thus we should try to be honest. But why is honesty a virtue? I'm not familiar with how virtue ethics deals with infinite regress. However, whatever the philosopher might claim, the real answer is because we evolved in a context where dishonesty was punished. This punishment shaped our sociological and neural landscapes such that we have a dim view of dishonesty in others. The virtue ethicist dresses that up one way, the deontologist might dress it up another, but the actual fact is that the human view of honesty arose from the consequences.

Put another way: I find virtue ethics and other systems are often fine practical approaches to consequentialism.

At least one virtue ethics system can answer the "why" without infinite regress - the Judeo-Christian one. Honesty is a virtue because God is not a liar. That is, the virtues are virtues because they correspond to the character of God.

Now, you could try to turn that into an infinite regress by asking where God's character comes from. But within that system, God is self-existent - he doesn't come from anywhere, he has no cause. (You can disagree with the system, but within the logic of the system, the problem doesn't exist.)

And one nit: Honesty is a virtue because dishonesty so often rewards the dishonest at the expense of everyone else. I don't think it's because dishonesty was punished, it's that others suffered when someone was dishonest. Those others got the idea that dishonesty was a bad thing, because they were on the receiving end of the consequences.

Right, making something up is a way to solve the infinite regress problem. Other religions have similar moral systems with similar problems in terms of evidentiary basis. Another example of this is natural law, which is not actually a thing, however much we might wish it were.

Because of damnation, eh? Appeals to a higher power usually come with an afterlife, for consequences.

Yes, you can make a consequentialist argument within that system. But you can also make a recursion-free virtue-based argument.

"the real answer is because we evolved in a context where dishonesty was punished" is actually a very different theory, and not even the same sort of theory!

Consequentialism, Deontology, and Virtue Ethics all try to answer the question "what is moral" or "what should we do". You're instead answering something like "how do we decide what moral theories to follow", "where do moral theories come from", or even, "where do our moral intuitions come from", which is more a question of Metaethics: https://plato.stanford.edu/entries/metaethics/

> For example, a virtue ethicist might claim that honesty is a virtue, and thus we should try to be honest. But why is honesty a virtue?

This one can be argued with Kant's categorical imperative, which, roughly phrased, says an act is unethical if willing it as a universal law involves a logical contradiction.

If we made it a general law that everyone must lie, then the distinction between truth and falsehood would be meaningless, and the meaning of the law would also evaporate. Therefore, lying is unethical.

> If we made it a general law that everyone must lie

This is too black and white. There can also simply be no law either way. And even if there is a law that you must lie, that law may be restricted in scope. For instance there may be a rule of politeness saying that you should answer "How are you?" with "fine thanks", even though it's not the truth.

> then the distinction between truth and falsehood would be meaningless

No this doesn't follow. Even if people lie a lot, there is still a difference between truth and falsehood.

That's not exactly what the categorical imperative says, if I understand it correctly. It basically says you should not do things such that, if everyone did them, it would lead to an absurd conclusion. The terms here are pretty nebulous, and I would say that a skeptic could easily ask, "So? Why not?" "Well, because you would not like the world that results." Ah, consequentialism again.

I think those boil down to consequences; if it's not the bad things that happen when we don't have rules / a categorical imperative / golden rule, or not achieving eudaimonia, it's the bad things that happen in the afterlife.

Why regulate human behaviour if not for a reason? Dare I say it, a consequence?

I've had a theory that:

In the long run, consequentialism and virtue ethics converge.

Calibrate your chosen virtues to produce good outcomes. The more experience you have, the more closely you can understand which virtues result in which ends.

It's like playing poker, choose your bets.

>Calibrate your chosen virtues to produce good outcomes. The more experience you have, the more closely you can understand which virtues result in which ends.

Utilitarians do like rule and virtue versions of the philosophy, but it also kind of begs the question by accepting utilitarian assumptions. The key thing with consequentialism is that you are positing a world where what is "good" is known, measurable, and directly comparable and rankable. This kind of misses the virtue or deontological contention though. Those philosophies aren't saying to ignore the consequences, they're saying that you can't resolve ethical dilemmas by scoring the "outcomes" and picking what has a higher score. They will address the gaps through horror-story hypotheticals, like the "infinite pleasure machine[1]" problem, the "utility monster[2]" problem, or the "this expects too much of us[3]" problem.

You can get around it by arguing that the idea of "good" doesn't have to resolve based on hedonic calculus, but then you go back around to deciding that virtue is the chief marker of good and not utility, and then you're not really consequentialist anymore, you're just using consequentialism as a heuristic to approximate whatever other framework you have.

[1] What if we invented a machine or drug that can directly stimulate the pleasure/utility sense in people and rig everyone up to it such that they are always ecstatically happy? Would the ethical thing be to plug everyone into this machine even if they're, functionally, living in The Matrix?

[2] What if someone comes along who derives SO MUCH JOY out of consumption that the rational utility calculation decides we should vastly over-prioritize their happiness? In some formulations it could just mean that they get a bigger share than anyone else. In other formulations it posits that the utility monster's utility is so great that it's worth making other individual people miserable, since their misery does not outweigh the utility monster's happiness. There is also an inter-temporal angle, where you might say it's worth being miserable for big stretches of your life to maximize the resources available in your most utility-appreciating years. The problem, though, is that the version of yourself in the less-than-max-appreciation years is still gonna be unhappy and resent their younger/older self's happiness. There is also the religious version, where you argue that the utility in heaven is so great that you should just ignore suffering here on this mortal plane altogether. Presumably a loving God wouldn't have created a world where that even makes sense as a moral system; why create the world then? So God can't be a utilitarian.

[3] You can also call this the Chidi Anagonye problem. Basically human cognition can't anticipate all the consequences of an action, so utility-wise where do you draw the line on how far or how many degrees of separation a consequence has from an action? If we were all to be true utilitarians it would cripple us with indecision. This also suggests that smarter people have the inherent capacity to be ethically superior to others, and it can be read to downgrade the moral status of animals or the mentally disabled. One of the strongest things about utilitarianism, though, is that it is one of the few moral frameworks with very strong arguments to account for the moral status of animals and the disabled, so losing that advantage rubs a lot of people the wrong way. That's an argument from consequences though, and some here might say it's invalid. ;-)

Deontology is consequentialism. I think Schopenhauer is a bit of a quack, but his analysis of why Kant is actually the biggest egoist (and consequentialist of them all) is super on point.

Kant's categorical imperative (the foundation of deontology) redundantly repeats the ancient command: "don't do to another what you don't want done to you." It is egoistic because its universality includes the person who both gives and obeys the command. And it's cold and dead because it is to be followed without love, feeling, or inclination, but merely out of a sense of duty.

Deontological ethics or deontology is the normative ethical theory that the morality of an action should be based on whether that action itself is right or wrong under a series of rules, rather than based on the consequences of the action. (from wikipedia).

You may be confused because you've circled the drain a bit considering a few particular philosophers and their stances. The categorical imperative is a particular deontological rule that closely resembles consequentialism.

The thing is, you choose the rules because of the outcome of a system of rules, over and above the outcomes you get from judging on a case by case basis.

It's still consequences, just further out.

Lmfao this is so wrong it's absurd.

The categorical imperative is absolutely supposed to be the antithesis of consequentialism. It is an implementation of deontology. Deontology came about as an attempt to escape consequentialism.

The fact that it breaks back down into consequentialism is a result of deontology not actually being different from consequentialism.

I think you're wrong in ways that are difficult to untangle.

To give an example of an obviously flawed deontology, "Never swim within 30 minutes after eating" is a deontological rule. Many experienced swimmers (or eaters) know that this is a rule that can be broken safely, and choose not to follow it.

Not to say that better or more complicated rules don't exist. The 10 Commandments are a deontological system, and are not identical to consequentialism.

The contrast is- following pre-established rules, which were (hopefully) written with good outcomes in mind, vs. judging the likelihood of good outcomes on-the-fly.

The 10 Commandments are a Covenant between God and his chosen people; half the Old Testament is about the consequences of breaking the covenant.

Consequences are core to ethics. The problem arises when people are scared to say "X is true" or even ask whether X is true because they are scared of bad consequences. They forget that neglecting inquiry can have bad consequences as well.

'Appeals to consequences' as used by Quinn are a fallacy because the decision rests on the desirability and emotional judgement of the consequence. That's an obvious fallacy, we can agree, in the context of business.

There are more ways to evaluate consequences than their emotional viability. Taking a teleological stance, or a protection-from-death stance, can lead to nailing the puppy charity to the wall on exactly how much of a lie is necessary to prevent the death of the puppies, the truth, people's bank accounts, and the charity itself.

Using the appeal to consequences fallacy to disregard an entire category of philosophical thought is a bit silly.

> How do we leave open the possibility of the first two outcomes while avoiding the third?

It seems to me that if the first outcome is true, then the third outcome is unavoidable.

You seem to be demonstrating how miscommunication is very easy in the example shown!

In the example, the idea is that there is one new drug that was shown to be unsafe, but that there is a risk that the results will be misinterpreted to suggest that lots of other drugs are unsafe too. If the communication is handled well, then use of the new drug will never start, and people can continue to use other drugs which are safe.

> then the third outcome is unavoidable

Alternatively, if you mean that incorrect and misleading headlines are unavoidable... then yes, that is a persistent problem, agreed.

If you develop a new vaccine, and if you confirm through repeated tests that it causes elevated levels of autism, then yes, you can now say "vaccines can cause autism."

Or rather, in order to now claim that vaccines don't cause autism you will need to say "rigorously tested vaccines don't cause autism"

An alternative explanation (especially common in a small study) is that the control group and test group were actually biased in a way that the experimenters didn't detect, and had the experiment not been run, these two groups would still have the same rates of autism that the experimenters observed.

This is why audience matters in communication; the previous is obvious to people familiar with medical trials (and population sample trials in general) and counter-intuitive to the general public unfamiliar with the mathematics of sampling.
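The sampling point can be made concrete with a quick simulation (a sketch with made-up numbers, not a model of any real trial): draw two groups from the *same* population, so no true effect exists, and small samples will still show apparent differences.

```python
import random

random.seed(42)  # deterministic sketch

def apparent_difference(n: int, base_rate: float = 0.02) -> float:
    """Draw a control and a test group of size n from the SAME population
    (no true effect exists) and return the observed incidence gap --
    any nonzero result is pure sampling noise."""
    control = sum(random.random() < base_rate for _ in range(n))
    test = sum(random.random() < base_rate for _ in range(n))
    return (test - control) / n

# Small groups frequently show sizeable spurious gaps; large groups don't.
small = [apparent_difference(50) for _ in range(500)]
large = [apparent_difference(5_000) for _ in range(500)]
print(max(abs(d) for d in small))   # typically several percentage points
print(max(abs(d) for d in large))   # typically under one percent
```

This is the undetected-bias failure mode in miniature: with 50 subjects per arm and a 2% base rate, chance alone routinely manufactures a "correlation" that a larger study would wash out.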

One thing that the modern communications era has changed is that it's much easier to find oneself talking to the general public without intending to. The common tools for mass-communication default open.
