The original argument (Jessica Taylor's) rests heavily on the idea that there is nobody out there who could do better. In a world of 7 billion people, that is a bad assumption. And that is the problem with lies: we have the technical capability to solve a dizzying array of problems, to the point where the major challenge is working out what the problems are and then prioritising improving our handling of them.
For a start, the argument needs an appeal-to-consequences example that makes more sense. The consequences of lying about capability, in a world with highly refined knowledge, research, and development systems, are substantially greater than the short-term consequences of this particular example. Suppressing facts will not help save puppies; it will numb the response.
I've said before that I think the ends do justify the means (consequentialism), but you should be extremely skeptical of "ends justify the means" arguments because people rarely look at the complete consequences. If you're willing to hide or ignore the truth, or lie, then I'm extremely skeptical that your motivations are actually prosocial, and I'm skeptical that the results will actually be good.
To carry this through with the example of a study that finds a small correlation between vaccines and autism: if doctors lie about this, there's no guarantee the information about the study won't come out anyway, and then it will be known that they lied about it. That, more than the results themselves, would add real credibility to the conspiracy theory that doctors are covering up a connection between vaccines and autism. And worse, it would undermine the credibility of doctors on other things, unrelated to vaccines.
The OP isn't considering the full consequences of undermining the credibility of the medical field to promote one political agenda, no matter how correct that political agenda is. In part, this is because the OP is assuming covering up this information would even work, and it very well might not.
At a more fundamental level, it would be important to release this information because science needs a mechanism for discovering when we're wrong or when things change. Right now, the evidence strongly supports the null hypothesis: we can confidently say that vaccines do not cause autism. But if studies started finding a correlation between vaccines and autism, we'd want to know that. Maybe the diagnostic criteria for autism improve, maybe a new chemical is introduced to vaccines by a new manufacturing process, maybe a gene becomes more prevalent that interacts with existing vaccines; the possibilities are endless. Right now we should operate on the belief that vaccines don't cause autism, because that's what the information we have indicates, but if new information indicates something else, we should by all means update our beliefs and our actions.
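The updating described here can be sketched with Bayes' rule. The numbers below are purely illustrative assumptions, not real estimates from any study:

```python
# Hypothetical numbers for illustration only.
prior_causes = 0.01            # assumed P(vaccines cause autism)
p_corr_given_causes = 0.9      # assumed P(study finds correlation | causal link)
p_corr_given_not = 0.05        # assumed P(study finds correlation | no link)

# Bayes' rule: P(causal link | a study found a correlation)
evidence = (p_corr_given_causes * prior_causes
            + p_corr_given_not * (1 - prior_causes))
posterior = p_corr_given_causes * prior_causes / evidence
print(round(posterior, 3))  # -> 0.154
```

Even a single positive study moves the (assumed) belief from 1% to about 15%, which is exactly why suppressing such a result would block the updating the comment calls for; more replications would move it further, and failed replications would move it back down.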
The problem is that true believers in the correctness of their theory are horrible at risk analysis, because they assign 100% probability to the success of their program.
At least that is self-consistent, while an appeal-to-consequences argument against accepting appeal-to-consequences arguments would not be. But what sort of argument against accepting appeal-to-consequences arguments would be self-consistent?
After rolling that one around for a while, I would tentatively suggest that it is self-serving appeal-to-consequences arguments that we should be most suspicious of, while genuinely greater-good ones deserve consideration.
That did not take long to 'go meta.' Maybe it is a sign that this, like many other ethics issues, is resistant to doctrinaire solutions, which is pretty much where the author ends up.
* Consequentialism: we should evaluate vaccines based on the effect they have on people
* Appeal to Consequences: it would be horrible if vaccines were actually net-negative, so they can't be
* Taylor's extended Appeal to Consequences: see the post
Consequentialism: evaluate X based on its consequences
Extended: evaluate X, where X is the verb “to speak”, based on its consequences
Appeal to Consequences: evaluate X based on desired consequences rather than predicted consequences
Am I misunderstanding?
My post is responding by saying that it's important to have some situations where you can speak fully freely, but there are also times when you really do need to consider what might happen in response to your words.
(The discussion seems ill-founded to me, without a lot more qualification on exactly which consequences one is looking to.)
For example, a virtue ethicist might claim that honesty is a virtue, and thus we should try to be honest. But why is honesty a virtue? I'm not familiar with how virtue ethics deals with infinite regress. However, whatever the philosopher might claim, the real answer is because we evolved in a context where dishonesty was punished. This punishment shaped our sociological and neural landscapes such that we have a dim view of dishonesty in others. The virtue ethicist dresses that up one way, the deontologist might dress it up another, but the actual fact is that the human view of honesty arose from the consequences.
Now, you could try to turn that into an infinite regress by asking where God's character comes from. But within that system, God is self-existent - he doesn't come from anywhere, he has no cause. (You can disagree with the system, but within the logic of the system, the problem doesn't exist.)
And one nit: Honesty is a virtue because dishonesty so often rewards the dishonest at the expense of everyone else. I don't think it's because dishonesty was punished, it's that others suffered when someone was dishonest. Those others got the idea that dishonesty was a bad thing, because they were on the receiving end of the consequences.
Consequentialism, Deontology, and Virtue Ethics all try to answer the question "what is moral" or "what should we do". You're instead answering something like "how do we decide what moral theories to follow", "where do moral theories come from", or even, "where do our moral intuitions come from", which is more a question of Metaethics: https://plato.stanford.edu/entries/metaethics/
This one can be argued with Kant's categorical imperative, which says "it is unethical if making it required involves a logical contradiction", roughly phrased.
If we made it a general law that everyone must lie, then the distinction between truth and falsehood would be meaningless, and the meaning of the law would also evaporate. Therefore, lying is unethical.
This is too black and white. There can also simply be no law either way. And even if there is a law that you must lie, that law may be restricted in scope. For instance there may be a rule of politeness saying that you should answer "How are you?" with "fine thanks", even though it's not the truth.
> then the distinction between truth and falsehood would be meaningless
No this doesn't follow. Even if people lie a lot, there is still a difference between truth and falsehood.
Why regulate human behaviour if not for a reason? Dare I say it, a consequence?
In the long run, consequentialism and virtue ethics converge.
Calibrate your chosen virtues to produce good outcomes. The more experience you have, the more closely you can understand which virtues result in which ends.
It's like playing poker, choose your bets.
Utilitarians do like rule and virtue versions of the philosophy, but it also kind of begs the question by accepting utilitarian assumptions. The key thing with consequentialism is that you are positing a world where what is "good" is known, measurable, and directly comparable and rankable. This kind of misses the virtue or deontological contention though. Those philosophies aren't saying to ignore the consequences, they're saying that you can't resolve ethical dilemmas by scoring the "outcomes" and picking what has a higher score. They will address the gaps through horror-story hypotheticals, like the "infinite pleasure machine" problem, the "utility monster" problem, or the "this expects too much of us" problem.
You can get around it by arguing that the idea of "good" doesn't have to resolve based on hedonic calculus, but then you go back around to deciding that virtue is the chief marker of good and not utility, and then you're not really consequentialist anymore, you're just using consequentialism as a heuristic to approximate whatever other framework you have.
 What if we invented a machine or drug that can directly stimulate the pleasure/utility sense in people and rig everyone up to it such that they are always ecstatically happy? Would the ethical thing be to plug everyone into this machine even if they're, functionally, living in The Matrix?
What if someone comes along who derives SO MUCH JOY out of consumption that the rational utility calculation decides that we should vastly over-prioritize their happiness? In some formulations it could just mean that they get a bigger share than anyone else. In other formulations it posits that the utility monster's utility is so great that it's worth making other individual people miserable, since their misery does not outweigh the utility monster's happiness. There is also an inter-temporal angle, where you might say it's worth being miserable for big stretches of your life to maximize the resources available in your most utility-appreciating years. The problem, though, is that the version of yourself in the less-than-max-appreciation years is still gonna be unhappy and resent their younger/older self's happiness. There is also the religious version, where you argue that the utility in heaven is so great that you should just ignore suffering here on this mortal plane altogether. Presumably a loving God wouldn't have created a world where that even makes sense as a moral system; why create the world then? So God can't be a utilitarian.
 You can also call this the Chidi Anagonye problem. Basically human cognition can't anticipate all the consequences of an action, so utility-wise where do you draw the line on how far or how many degrees of separation a consequence has from an action? If we were all to be true utilitarians it would cripple us with indecision. This also suggests that smarter people have the inherent capacity to be ethically superior to others, and it can be read to downgrade the moral status of animals or the mentally disabled. One of the strongest things about utilitarianism, though, is that it is one of the few moral frameworks with very strong arguments to account for the moral status of animals and the disabled, so losing that advantage rubs a lot of people the wrong way. That's an argument from consequences though, and some here might say it's invalid. ;-)
Kant's categorical imperative (the foundation of deontology) redundantly repeats the ancient command: "don't do to another what you don't want done to you."
It is egoistic because its universality includes the person who both gives and obeys the command.
And it's cold and dead because it is to be followed without love, feeling, or inclination, but merely out of a sense of duty.
You may be confused because you've circled the drain a bit considering a few particular philosophers and their stances. The categorical imperative is a particular deontological rule that closely resembles consequentialism.
It's still consequences, just further out.
The categorical imperative is absolutely supposed to be the antithesis of consequentialism. It is an implementation of deontology, and deontology comes about as an attempt to escape consequentialism.
The fact that it breaks down back into consequentialism results from deontology not actually being different from consequentialism.
To give an example of an obviously flawed deontology, "Never swim within 30 minutes after eating" is a deontological rule. Many experienced swimmers (or eaters) know that this is a rule that can be broken safely, and choose not to follow it.
Not to say that better or more complicated rules don't exist. The 10 Commandments are a deontological system, and are not identical to consequentialism.
The contrast is- following pre-established rules, which were (hopefully) written with good outcomes in mind, vs. judging the likelihood of good outcomes on-the-fly.
There are more ways to evaluate consequences than their emotional viability. Taking a teleological stance, or a protection-from-death stance, can lead to nailing the puppy charity to the wall on exactly how much of a lie is necessary to prevent the death of puppies, and what that lie costs the truth, people's bank accounts, and the charity itself.
Using the appeal to consequences fallacy to disregard an entire category of philosophical thought is a bit silly.
It seems to me that if the first outcome is true, then the third outcome is unavoidable.
In the example, the idea is that there is one new drug that was shown to be unsafe, but that there is a risk that the results will be misinterpreted to suggest that lots of other drugs are unsafe too. If the communication is handled well, then use of the new drug will never start, and people can continue to use other drugs which are safe.
> then the third outcome is unavoidable
Alternatively, if you mean that incorrect and misleading headlines are unavoidable... then yes, that is a persistent problem, agreed.
Or rather, in order to now claim that vaccines don't cause autism you will need to say "rigorously tested vaccines don't cause autism"
This is why audience matters in communication: the previous point is obvious to people familiar with medical trials (and population-sample trials in general), and counter-intuitive to the general public, who are unfamiliar with the mathematics of sampling.
One thing that the modern communications era has changed is that it's much easier to find oneself talking to the general public without intending to. The common tools for mass-communication default open.