
It's odd to me how many people in this thread are either for or against this decision, not based on a problem-fixing rationale but on moral grounds: that this person deserved or didn't deserve the punishment.

Does your computer program deserve to be debugged, fixed, and recompiled when it has thrown an error? What would that even mean? Was it evil or bad because it didn't perform how we wanted it to?




I do not understand how the second paragraph of your reply is related to the first. Computer programs (currently) do not respond to incentives, nor do they react to available resources in any way other than the way in which the resource provider explicitly changes them. So the idea of "deserving" does not apply, but it does apply to entities that respond to incentives or consume and make use of resources in complex/emergent ways.

If we had strong AI, then yes, certain strong AIs would be more deserving of resources from the commons than others - that's exactly what MIRI is working on.

And if you step back from "program" to "project" (meaning the whole system of code + the autonomous decisions of humans, including groups of humans working as companies, to continue developing the code), then yes, certain projects do deserve to be fixed more than others do. A particularly difficult crash in Python 3.6 is much more worth debugging than one in Python 2.4, which in turn is much more worth debugging than one in GW-BASIC, because the net benefit for humanity in exchange for the opportunity cost varies widely.


My point has to do with the notion of retributive justice, i.e. that punishment should be inflicted because the criminal deserves it, in a quasi-religious sense: he is evil, or deserving of hell. This usually carries along notions of moral good and evil that are almost never grounded in measurements that would give them real meaning.

Conversely, I think people should approach crime more like a reinforcement-learning problem, i.e. rehabilitatively.

I view humans essentially as machines. If a machine breaks, you fix it; there is no emotional garbage attached.

That's the concept I was trying to get across.


I agree that punishment as retribution is a poor way to run a civilized society. But punishment as incentive structure fits the framework of "deserving" just fine - full participation in human society is a privilege, not a right, and extending that privilege to people who will hurt society is a poor use of resources. And credibly threatening not to extend that privilege will cause people to try to be the sort of person who "deserves" not to be punished.

Viewing humans as machines doesn't work precisely because humans have agency. A machine generally does not decide to break, and a machine generally can be properly fixed (or if it can't, it's obvious). Attempting to convince a machine that it will suffer some negative consequence for misbehavior is unlikely to have any results at all.

Meanwhile, especially for humans making decisions about corporate strategy (which means that they're in a culture that's pretty firmly incentivized towards "is this profitable"), credibly convincing them that they will suffer punishment is a great way to change their behavior and prevent them from deciding to do something unwanted. And one way to credibly convince people that they will suffer punishment for an action is to actually punish people who do the same thing.

It's very unlikely this executive will make the same mistake again, punishment or no punishment. Rehabilitation isn't the point. But we need to make an example of them, and I say that without the slightest shred of emotion.


Basically, I don't accept what you and most people accept - that humans have free will, we could have made a choice X instead of Y, etc, etc.

I know it's not a popular opinion, and I don't expect to convince you of my views in a couple of sentences, but generally speaking I think the notion of punishment should always be viewed in the context of societal functioning.

If we do as you suggest and punish an executive, does that actually lead to decreased occurrences? If so, by how much? What about the next time it occurs? Can we completely prevent another occurrence? Can we decrease its probability? At what cost to the person and to society? What caused it in the first place?

These are all much more important questions for shaping a system of punishment than our current focus, which almost completely lacks measurements and analyses and is still steeped in vague moralities derived from historical accident and from our own intuitions, which are perforated with biases and flaws.


> Basically, I don't accept what you and most people accept - that humans have free will, we could have made a choice X instead of Y, etc, etc.

My argument does not rely on free will at all. (If you notice, I have made the argument that the same thing would apply to AIs, and I am certainly not arguing that AIs have free will.)

My argument simply relies on two assumptions: first, that the agent in question sorta-rationally responds to incentives, and second, that the agent in question believes that punishment applied to other, similar agents after certain actions also could apply to them, if they take the same action.

If a human believes that they can make a better outcome themselves by doing action A instead of not doing action A, they will likely do so. Concretely, if an employee believes that they will cause a better outcome for the company and get rewarded for it personally by taking an action, with little risk, they will likely do so. This is borne out by evidence almost as plenteous as evidence that humans require food to live; it needs no controlled study. This happens all the time and essentially all productivity under capitalism requires this to hold up.

If a human believes that action A brings significant risk to themselves, they will probably refrain from taking it. And the convincing threat of prison time does actually cause humans to refrain from actions. This is also borne out by everyday life; there are plenty of outright illegal things you can do to make your company significantly more profitable, like poisoning your competitors, that happen extremely rarely.

So I don't understand why you are questioning "Does punishment as deterrent work?" from first principles, as if it is not settled. Do you disagree with the arguments above? I have to say that if you do, I feel like I am arguing with someone who disagrees that food is required for humans to live. I certainly can defend that position logically, I would just feel ridiculous doing so.

Nothing about changing incentives requires free will - it is a simple if statement: if risk outweighs reward, refrain; else proceed. You do not need any free will to potentially act contrary to the if statement. I would think that if you believe people don't have free will, it makes this argument easier.
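The if statement described above can be sketched in a few lines of Python (the function name and the numbers here are purely illustrative, not anyone's actual model):

```python
def choose(expected_reward: float, expected_risk: float) -> str:
    # Toy model of the incentive argument above: no free will
    # needed, just a comparison of perceived risk and reward.
    if expected_risk > expected_reward:
        return "refrain"
    return "proceed"

# A hypothetical actor weighing fraud: the personal reward is fixed,
# but a credible threat of punishment raises the perceived risk.
print(choose(expected_reward=10.0, expected_risk=2.0))   # proceed
print(choose(expected_reward=10.0, expected_risk=50.0))  # refrain
```

The point of the sketch is that deterrence only has to change one input (perceived risk) to flip the output, which is why punishing similar agents for the same action can change behavior without any appeal to free will.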

(In terms of morality, I am the sort of Christian who believes that every prisoner should be set free and that Christ has redeemed every last human, regardless of what they did, and that if every last murderer and rapist went to heaven, I would join them in singing praises because I wouldn't deserve to be there one bit more than they would. If you go just a week or two back in my comments on this site, you'll find me asking if I am morally/religiously compelled to work for the end of prisons. But I am not making a moral argument in this thread, and I hope it's clear that the argument I'm making is wildly divorced from my morality - I am trying to acknowledge that it is rational to imprison people as punishment, even as I believe that it is immoral to do so.)


So I think we're just talking past one another, about different concepts entirely. I'm not questioning whether punishment as a deterrent works; obviously it does. I guess my emphasis on the moral imperative implicitly led people to believe I was indeed questioning the notion of punishment as a deterrent. Given a better medium and more time, I think we could have cleared this up. At any rate, now I'm actually interested in how you arrived at your prison views, since religious rationalizations puzzle me endlessly, and I'm always out to see whether a religious person who arrives at (in my view) a good conclusion has any insight to offer me in convincing other religious people to change their ideas as well.


> At any rate now I'm actually interested in how you arrived at your prison views

Three things (and, note, I currently hold this position weakly and welcome further thoughts that either oppose or support it):

1. There is no story in the Bible, to my knowledge, of prison being used by people the Bible calls good. There are a few places where someone is imprisoned and God uses the situation for good - Daniel in the lions' den, Paul and Silas in the prison of Philippi, etc. - but you never see e.g. Moses or David or anyone say to build a prison or to put someone in confinement, as far as I recall.

2. Nothing in Mosaic law (again, that I know of) talks about prison as a means for punishment / correction. You put people to death for offenses that we wouldn't consider capital today, sure. You exile them. You certainly have them pay restitution. But there's no Biblical command that a moral society should even have a prison.

3. There are plenty of passages like Isaiah 61, "to proclaim liberty to the captives and the opening of the prison to those who are bound," Matthew 25, "I was in prison and you came to visit me," etc. that seem to have an underlying assumption that everyone who is in prison is there unjustly. Even if they're meant to be read metaphorically, the metaphor only works right if you think of prison as unequivocally bad. There's no sense that some people deserve the experience of being in prison, or that justice requires leaving some people in prison.

Therefore, I have trouble seeing a society that puts people in prison and calls it moral as in accordance with a Biblical view of morality. Of course the whole idea that Western civilization is built on "Judeo-Christian" morals is flimsy in many ways, but for prison in particular, there seems to be a particular lack of support.


Prison—as distinct from captivity that is primarily slavery for the benefit (economic or, for some high-status captives, prestige) of the captor—is a fairly modern thing, and economically unviable in ancient societies independent of its moral dimension.


Humans are not machines, so trying to look at them that way is going to be flawed.

Furthermore, there is no excuse for what they did. They knew it was wrong. They knew it was against the law. It was not an accident; it was a deliberate act. There's not really any rehabilitation that can take place here.


I honestly don't know what you're on about. This person willfully engaged in fraud; they instructed their engineers to engage in a project to deliberately mislead regulators, and throw much, much more pollution in the air than they told people they did. This is not a simple mistake.


The person you're responding to literally does not see any difference between a person and a program, from any philosophical, moral, or ethical viewpoint.

I don't know how you even respond to such an absurdity beyond calling it out for what it is: absurd.


The mods tend to ding me when I do that.


To my recollection there was a meaningful amount of evidence that this wasn’t a bug. Rather it was optimizing for the test in such a way that it essentially faked the results, and the company was aware it had been done.

So, yeah, from where I sit this is a good thing. If it were simply a bug that the company was open and honest about, then let's fix it and move on. If they cover it up, then there's a solid ethical case for severe consequences in my mind.


The sentence was severe for a non-violent crime in part because restitution is not practically possible.

The damage has been done to the environment. Lives were shortened in amounts that are fairly precisely quantifiable. Competitors were economically disadvantaged. The market for fuels was distorted by a criminal act. This was all done intentionally, to cheat a system that was supposed to improve air quality, throwing such efforts into doubt.



