I believe this is a trivial thing with a misleading title.

Of course the optimal amount of fraud for a business is zero. A world with zero fraud would be optimal for them.

And (also of course) given that combating fraud has costs, there is a level at which investing more money and effort into anti-fraud has diminishing returns.

Therefore it is not worth going all in on the fight against fraud; better to accept that some amount of it happens.

Surely none of this is surprising.




It's very unintuitive. It's come up on threads here several times, with people incredulous about the statement.

It has ramifications for the kinds of discussions we have on HN, because the primary stimuli we get are news anecdotes, and anecdotes about fraud can be galling but still fall under some sane noise floor that organizations don't bother to stop.


Is it unintuitive?

The concept of diminishing returns applies to everything any living thing does. I think it's very natural, even in the 'our brains are wired for this' sense.


Yes, it's counterintuitive to many people that "the optimal amount of $BAD_THING is non-zero" for a rather large set of possible values for BAD_THING.

"You can't put a price on human life" is one common occurrence of this. I thought it was just a throwaway phrase, or a wish of how things could be, but I've regularly run into people who don't understand why we would ever fail to spend absurdly large amounts of money to save a single life.

I have had people say "Is the only reason we aren't doing something about X money?" for various forms of X. I personally spent over an hour walking one such person through the concept of opportunity costs, the fact that money is representative of value, &c., with the conversation ending with them still certain that the US Government could just print a trillion dollars and solve the problem.

I have talked to people who honestly think that every person with depression should be forcibly committed to prevent them from committing suicide and respond to the "but only a fraction of those people have suicidal ideation, and only a fraction of those commit suicide" with a "if it saves just one life, it's worth it!" So forcibly committing 8% of the adult population is not too high a cost in their minds.


Well, the optimal quantity for many things is actually zero.

It's the edge cases-- the things where it's really hard to get rid of and we get some useful benefit from the related activity-- where it's nonzero.

Of course, those edge cases are our biggest world problems-- for example, pollution, because pollution abatement is hard and the industry and commerce that produces pollution is beneficial... or fraud, because fraud protection is hard and the industry and commerce that provides opportunity for fraud is beneficial.


because they somehow have this idea that they own the rights to control others. Once you consider that you can force others to do anything, regardless of reason, you can begin to rationalize anything, and it will be "for the greater good" or "for their own good". They probably even sincerely mean it too.

some kind of bald man once quoted someone: "With the first link, a chain is forged..."


> because they somehow have this idea that they own the rights to control others

The flip side, of course, being the folks who believe they somehow have no responsibility for how they use their rights to impact others.

Society can only function with at least some balance between these two extremes. Some of us need a little bit of chain.


You cannot commit crimes against others; that's about it. If someone comes to my house dying of cold, they have no right to demand I help them. I would be an asshole if I didn't, and I think people SHOULD help, but you have no right to demand I do (and again, not saying I wouldn't, just that nobody gets to be entitled to it).


Unfortunately, we encounter far more complex scenarios, from "can the car dealership dump engine oil into the creek behind their maintenance bay?" to "we sold a product that provably killed thousands of people, but none in a way that can cause us direct individual liability".

Societally, we've largely decided we're all better off without a pile of frozen bodies at our door.


> Once you consider that you can force others to do anything, regardless of reason, you can begin to rationalize anything, and it will be "for the greater good" or "for their own good". They probably even sincerely mean it too.

Eh, I know lots of people who are neither anarchists nor totalitarians, so I'm not sure this is true.


I said you CAN begin to rationalize... I didn't say most would do it about everything, but as is clearly evidenced here, many would about a great many things.

"Red cars are more involved in traffic accidents, I therefore think you must be a murderous lunatic if you get a red car, and we cant have that, so lets forbid it"


> Yes, it's counterintuitive to many people that "the optimal amount of $BAD_THING is non-zero" for a rather large set of possible values for BAD_THING.

How much "earth is sterilized" risk is reasonable per year?


It is unintuitive. It hits some primal part of our brain, the same one isolated in that capuchin monkey experiment where two monkeys get unequal rewards for the same task. People get either really excited or really pissed off when they see people getting away with something; at the same time, when we read stories about bureaucracy --- a perennial bête noire on HN --- we rarely think in terms of the fraud we should be accepting. We just think fraud is bad, and anti-fraud is bad.


I think it is in part rooted in the difference between committing fraud and detecting that there is fraud. They're entirely different things, and the misunderstanding results from conflating the two. It is indeed very interesting how deeply hardwired this sort of thing is.


It is indeed unintuitive in practice.

One of the key factors is that a large part of the cost is opportunity cost. People tend to get all tangled up when thinking about such counterfactuals. We want to have our cake and eat it too. And if we can't, we persist in trying to find ways to believe that we can, and are surprised that the world continues not working that way.


The optimal amount of airline catastrophes is non-zero.

The optimal amount of pollution is non-zero.

The optimal amount of [insert almost anything bad] is non-zero.


The optimal amount of X under the assumption that it costs more and more resources to reduce X is non-zero. It's this middle part that you omitted that turns an unintuitive statement into an obvious one.
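
To make that middle part concrete, here's a toy Python sketch (every number invented): prevention spending has diminishing returns, so the spend that minimizes total cost still leaves some fraud standing.

    # Toy model: total cost = money lost to fraud + money spent preventing it.
    # Prevention has diminishing returns (modeled here as exponential decay),
    # so the cost-minimizing spend leaves a non-zero amount of fraud.

    def fraud_losses(spend):
        # Assumption: every extra $100k of prevention halves remaining fraud.
        return 1_000_000 * 0.5 ** (spend / 100_000)

    spends = range(0, 2_000_001, 100_000)
    best = min(spends, key=lambda s: s + fraud_losses(s))
    print(f"optimal spend: ${best:,}")                          # $300,000
    print(f"fraud still accepted: ${fraud_losses(best):,.0f}")  # $125,000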


Those are all true statements, right?

The ideal amount of anything bad is zero. But the optimal amount is going to be higher than that, given we don't have unlimited resources to spend.


Again this is totally misleading.

The optimal amount of airline catastrophes is zero. It is also impossible.

Should all humans spend all of their effort and all of their resources to lower the amount of airline catastrophes? I believe that pretty much everyone finds it reasonable to say no.


Why is that zero? I don't want to die in a plane any more than the next person, but is zero really optimal? It's the least life-losing, the least catastrophic, but is it optimal? The question becomes: what are we optimizing for? If the optimization equation is for lives lost, I can make that zero real easily by just stopping air travel entirely. If no one travels by air, then no one can die by air. But of course that's not a useful solution at all. So we're not optimizing for lives not lost. We're optimizing for people being able to travel. Loss of life is tragic, but not the be-all and end-all. The gross reality that we don't want to face is that there's a price for a human life; the only question is how much?

Would you take a $10 10,000 mile flight across the world with a 1% chance of dying? How about a $100 flight with a .1% chance of dying?
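
Working through those two hypothetical fares (a back-of-the-envelope Python sketch, using only the numbers in the comment above): choosing the $100 flight over the $10 one means paying $90 to shave 0.9 percentage points off the death risk, which implicitly puts a price on a life.

    # The two hypothetical flights above: $10 at 1% risk vs $100 at 0.1% risk.
    extra_cost = 100 - 10          # dollars paid for the safer flight
    risk_reduction = 0.01 - 0.001  # 1% -> 0.1% chance of dying
    implied_price_of_life = extra_cost / risk_reduction
    print(f"${implied_price_of_life:,.0f}")  # $10,000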


Optimum is zero, because if you could wave a magic wand to magically make the amount of accidents zero, it would be a good thing to wave the wand.

The optimum is non-zero only if there are enough costs associated with making it zero.

That's why asking for the optimum amount of fraud is misleading. It omits the costs. Once the costs are taken into account (i.e. once it is clarified what the question means), the answer is obviously above zero.


"Optimum is zero, because if you could wave a magic wand to magically make the amount of accidents zero, it would be a good thing to wave the wand."

You are conflating optimum (highest value outcome for all variables) and ideal (highest value outcome for one variable). The ideal number of any bad thing is zero. I can't, off the top of my head, think of any bad thing for which the optimum number is zero. Extinction level events, perhaps.


> Is it unintuitive?

Yes, when we look at it from both an individual level and a societal level. There's a very, very strong aversion to loss, and many decisions (including my own) are based on the concept of not losing something. Many times this leads to decisions that are sub-optimal (for the thing being optimized for).

By extension, many people apply this thinking to businesses and higher levels as well.


I wonder if it's the phrasing rather than the actual idea that sets people off. Many readers here understand the increasing costs of pursuing higher SLO targets, but I doubt many of them would express it as "the optimal level of availability is not 100%".
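
For anyone who wants the arithmetic behind that, a small Python sketch (standard availability math, nothing assumed beyond the example targets) shows how each extra nine shrinks the allowed downtime budget by 10x, while the engineering cost of meeting it typically grows much faster:

    # Allowed downtime per year for successively higher availability targets.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for target in (0.99, 0.999, 0.9999, 0.99999):
        downtime = (1 - target) * MINUTES_PER_YEAR
        print(f"{target:.3%} -> {downtime:,.1f} minutes/year allowed downtime")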


It's definitely the phrasing.


I think it's just phrased unintuitively. "Balance usability against security hoops to jump through" sounds better to me. You have to get to the last paragraph of the fifth section of this article before it makes anything resembling that statement. This could have been a paragraph or three.


The optimal amount of fraud attempts is zero. Given that's impossible, one should not attempt to eradicate fraud entirely, or else one will do nothing but seek it out.


For you and me it's not surprising; for others, it seems, it is.

Maybe people have gotten worse at intuitively understanding that tradeoffs need to be made in anything, and that the questions that define systems are never choices as much as what the tradeoff should be.


It only makes sense when the cost of combating the fraud is greater than the money you are losing to fraud.


Yes, but isn't this obvious when you internalize the concept of "diminishing returns"? The domain might matter, as developers with some experience will understand for instance that the optimal amount of bugs is non-zero, but sometimes fail to generalize.


I agree. It's a self-serving failure of the imagination to believe that fraud and usability are intertwined. My issue with this kind of big corporate thinking is that it's often rooted in neglect and it harms real people by the millions.

Also, it fails to consider that corporations (or any entity which operates at the kind of scale suggested by the author) may not be optimal economic constructs for a society to begin with. Society worked fine before corporations and arguably was far more efficient given technological shortcomings. Think of how much human time was wasted on data entry, physically searching for documents in large archives, travelling between home and work on foot or by horse, doing accounting with pen and paper, manual farming, primitive irrigation systems, no synthetic fertilizer, etc, etc... Now that we've freed up people from all that work, how come everyone is busier than ever and yet there are so many poor people?

I think it doesn't make sense to look at our modern economic system as a model of efficiency, instead, I think it should be studied as a model of economic parasitism.

We shouldn't conflate efficiency driven by technology with other aspects of the socio-economic system which are canceling out much of that efficiency.


100%! There seems to be a proliferation of these long form word soups offering "deep insights" that are anything but.

All you can do is try to realize it early and hit the back button before you waste too much time on it.


> Of course the optimal amount of fraud for a business is zero.

Only if the business doesn’t have to pay to avert fraud.

> A world with zero fraud would be optimal for them.

You are misunderstanding the statement. We are not imagining which hypothetical world we wish to live in. We are making a claim about fraud levels in this world and how they feed into profits.

In this world, every reduction in fraud costs money or time or opportunity. Therefore, optimizing your profit means choosing a non-zero (often higher than you’d think) level of fraud.
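
A minimal sketch of that claim (a toy Python model, all numbers invented, not anything from the article): stricter fraud screening blocks more fraud but also rejects more legitimate customers, so the profit-maximizing strictness lets some fraud through.

    # Toy model: screening strictness in [0, 1]. False positives cost sales;
    # laxness costs fraud losses. Profit peaks at an interior strictness,
    # i.e. at a non-zero level of accepted fraud.

    def profit(strictness):
        legit_revenue = 1_000_000 * (1 - 0.3 * strictness)  # false positives lose sales
        fraud_loss = 500_000 * (1 - strictness) ** 2        # screening stops fraud
        return legit_revenue - fraud_loss

    levels = [i / 100 for i in range(101)]
    best = max(levels, key=profit)
    print(f"best strictness: {best:.2f}")                            # 0.70
    print(f"fraud still accepted: ${500_000 * (1 - best)**2:,.0f}")  # $45,000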


Yea; I think this is one of the times where terms of art matter. "Optimal" here doesn't mean like "It would be optimal for me if I had infinite money", it's a term of art in fields like economics that describes the best case of the available variables.

This is how we end up going back and forth between "the optimal amount of fraud is non-zero" and "it would be optimal to have no fraud". Economics has to live with the world that exists, and there is no plausible combo of variables where people who are about to attempt fraud are instantly evaporated into dust.


>You are misunderstanding the statement. We are not imagining which hypothetical world we wish to live in. We are making a claim about fraud levels in this world and how they feed into profits.

Then the question should be stated clearly:

"How much effort and resources should be spent to combat fraud?"

The answer is very obviously not "all of the effort and all of the money". Therefore the matter is trivial.



