An Anatomy of Algorithm Aversion (ssrn.com)
36 points by bookofjoe 8 days ago | 26 comments





Or: whenever you automate a decision process, you take all the resilience out of it. Human social institutions are built to survive all kinds of dramatic environmental change; the kinds of machine decision-making available are not.

In particular, algorithms do not offer advice. Advice is a case where your own goals, ambitions, preferences, and desires have been understood -- and more so, which ones you aren't aware of, what needs you might have that aren't met... and these are lined up with plausible things you can do that are in your interest.

There is no algorithmic 'advice'.


I mostly agree with the first bit.

Re: advice, well, there could be, but the people who put these things in place aren't necessarily thinking in those terms. They're thinking about a statistical edge and acceptable negative outcomes on their end and no one else's. They're not maximizing what's good and helpful for you unless it helps them too; they're probably maximizing short- to medium-term profit. Computers are an amplifier of human bad behavior.

See also, "computer says no": https://en.wikipedia.org/wiki/Computer_says_no


Yes, and I don't think this paper or those who talk about "Algorithmic Aversion" are at all engaged with this.

Perhaps unsurprisingly, the authors I've read on this topic are all from the humanities or adjacent fields (here, law) and don't seem to really have a grasp of technology as a social and political phenomenon the way modern (aware) tech workers do.

Here I think software engineers feel intuitively and immediately that replacing any of their decision making with "algorithms" is a high risk. I suspect papers like this would benefit a lot from assessing the "algorithm aversion" amongst "algorithm experts".


Reading just the syllabus, I was surprised to see no mention of accountability. Quick Ctrl+F searches for "accountability", "appeal", and "review" gave no results. "Reputation" appears, but in a section rather harshly titled "Herding and Conformity", about the reputations of the people not trusting algorithms, not the people making or deploying them.

In my own experience, human forecasters and decision-makers tend to be much easier to hold accountable for bad forecasts and decisions. At a minimum, they stake their reputations, just by putting their names to their actions. With algorithms, by contrast, there's often no visible sign of who created them or decided to use them. There's often no effective process for review, correction, or redress at all.

The fact that high-volume, low-risk decisions tend to get automated more often may partly explain this; it may also, in turn, partly explain general attitudes toward algorithms.


"A computer can never be held accountable, therefore a computer must never make a Management Decision." (1979)

My only problem with your comment is that human forecasters and decision makers are also often not held accountable for their work.

They are either tarnishing their reputation or gaining reputation points.

Under that interpretation of accountability, you could easily hold a machine accountable for decisions; if it loses enough "reputation points" such that people no longer trust it to make the right decision, the machine could be replaced.
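
A rough sketch of that interpretation (purely hypothetical names and numbers, in Python), where any source of decisions, human or machine, is scored and retired once it has blown enough calls:

    # Score every decision-maker (human or model) and stop trusting any
    # source whose running record falls below a threshold.
    class ReputationTracker:
        def __init__(self, threshold=0):
            self.scores = {}
            self.threshold = threshold

        def record(self, source, correct):
            # +1 reputation point for a good call, -1 for a bad one.
            self.scores[source] = self.scores.get(source, 0) + (1 if correct else -1)

        def trusted(self, source):
            return self.scores.get(source, 0) >= self.threshold

    tracker = ReputationTracker()
    for outcome in [True, False, False, False]:
        tracker.record("forecast-model-v2", outcome)

    if not tracker.trusted("forecast-model-v2"):
        print("retire or retrain the model")  # the 'replace the machine' step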

> (4) ignorance about why algorithms perform well;

Au contraire. It is the correct understanding, born out of deep expertise, that algorithms, outside very structured artificial environments, often do not work well at all.

Even provably correct algorithms fail if there is even the slightest mismatch between the assumptions and reality: imperfect data, noisy sensors, or a myriad of other problems. Not to mention that the implementations of these provably correct algorithms are often buggy.

When algorithms are based on user input, users learn very quickly how to manipulate the algorithm to produce the results they actually want.
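
To make the assumption-mismatch point concrete with a toy (hypothetical) example in Python: binary search is provably correct only on sorted input, and a single out-of-place value, say from a noisy sensor, silently voids the guarantee.

    # Provably correct under the assumption that xs is sorted.
    def binary_search(xs, target):
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] == target:
                return mid
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    clean = [1, 3, 5, 7, 9, 11]
    noisy = [1, 3, 9, 5, 7, 11]  # one swapped pair, e.g. a bad reading

    print(binary_search(clean, 7))  # 3: found, the assumption holds
    print(binary_search(noisy, 7))  # -1: 7 is present but never found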


Weird, I have never encountered a single case of aversion to Booth's multiplication algorithm, quicksort, binary search, DFS, BFS, the Miller–Rabin primality test, or Tarjan's strongly connected components algorithm.

Is there something special about the algorithms people are averse to? Maybe not actually working?


I'd be pretty goddamn averse if someone asked me to implement the Miller-Rabin test on the spot, from memory, in an interview.

You can see algorithmic aversion every day on HN, in the form of those who complain about the algorithmic portion of the technical interview.

They act as if the only way to get algorithmic knowledge is to grind specific examples, when really if they would read through Knuth they'd be fine.

These are people who see algorithmic knowledge as the bane of their careers rather than seeing algorithms as:

1) a great way to add value to resource constrained projects

2) a trivially simple and easy way to signal programming abilities, letting you easily breeze through the interview

I would seriously hate having to work with someone who takes pride in how little computer science they know, and because I run a math- and computational-geometry-heavy organization, I will never hire them. But I would estimate that algorithmically and mathematically averse coders form the majority of coders.


cough cough I'm choking on the smug here!

First of all, nobody "read[s] through Knuth." [0] (I couldn't find the reference, but I recall a story about Bill Gates telling Knuth he had "read his books," to which Knuth replied that he believed Gates was lying.)

Second, the way the "algorithmic portion of the technical interview" is currently constituted is beyond flawed. Depending on your perspective, you can either pass it by memorizing 5 or 6 algorithms and re-using them over and over; or it's a completely unrealistic test of anyone's ability to think about and work with algorithms and data, because there is no such thing in the real world as a 45-minute deadline. Of course, you can certainly argue that it's not intended to be a test of one's ability to work with algorithms and data, but rather an IQ test of sorts. But then, we have companies that are literally giving candidates IQ tests now, so why not just drop the pretense?

---

[0]: https://www.businessinsider.com/bill-gates-loves-donald-knut...


You and GP sound like you haven't read the article and are using heuristics to comment on what you can infer from the title. Maybe you should have used a better algorithm there bud

OP > Algorithm aversion is a product of diverse mechanisms, including ... (5) asymmetrical forgiveness, or a larger negative reaction to algorithmic error than to human error.

Related:

The legal rule that computers are presumed to be operating correctly https://news.ycombinator.com/item?id=40052611

> In England and Wales, courts consider computers, as a matter of law, to have been working correctly unless there is evidence to the contrary. Therefore, evidence produced by computers is treated as reliable unless other evidence suggests otherwise.


Your belief that this rule is wrong is an example of algorithm aversion. You feel that computer systems should be judged harshly despite their making mistakes far more rarely than other things assessed in courts, like police witness accounts or possibly even DNA evidence.


Of course there are mistakes. But still fewer than with other sources of evidence.

DNA evidence is also trusted by default and yet:

https://www.science.org/content/article/forensics-gone-wrong...

https://www.nbcnews.com/news/us-news/investigation-finds-col...


You’re missing the point. Of course there are (enormous, life-ruining) mistakes, but that’s not it.

With software, generally the only people with the means to demonstrate the software is flawed are the people in control of the software and associated data.


Don't forget the hardware. The hardware is also inscrutable to anyone who is not a specialist in that kind of hardware.

I don't think demonstrating that the DNA evidence against you is wrong is any easier.

Wow, this paper is ... mystifyingly awful. It reads like some crank's blog, but it's actually written by two Harvard lawyers, including a pretty famous one [1].

[1] https://en.wikipedia.org/wiki/Cass_Sunstein


The note on the primary author's name says 'We intend this essay as a preliminary “discussion draft” and expect to revise it significantly over time' so if you have cogent revisions to suggest, you should strongly consider sending them.

"Humans approximating human taste preferences perform worse on the validation set".

It's a sort of lazy argument: imagine a homo economicus who would make better decisions on a proxy variable, then, less lazily, bemoan that real people don't optimize the authors' preferred measurables.

It shows self-awareness at times:

> It is worth noting, however, that the algorithm in the study was designed to optimize system-wide utilization rather than individual driver income. The algorithm’s design weakens any conclusion about algorithm aversion, for individual drivers may have been better off optimizing for themselves rather than the system.

It has the air of a future cudgel. The title works as a punchline, and as for the strength of the argument, well, it's published (posted at all) online, isn't it?


I think part of the reason is that people understand that while in games such as chess, etc., the entire state of the “universe” of the problem is provided to the algorithm, in the real world they don’t have that confidence.

There are all sorts of confounders to algorithms in the real world, and an expert human is better at dealing with unexpected confounders than an algorithm is. Given the number of confounders possible, in real-world use it is likely that there will be at least one.


As someone who spends almost all of my productive time on earth trying to solve problems via algorithms, this paper is the kind of take that should get someone fired. God I forget how much stupid shit academics can get away with writing. Right from the abstract this is hot garbage

> algorithms even though (2) algorithms generally outperform people (in forecasting accuracy and/or optimal decision-making in furtherance of a specified goal).

Bullshit. Algorithm means any mechanical method, and while there are some of those that outperform humans, we are nowhere near the point where this is true generally, even if we steelman the claim by restricting it to the class of algorithms that institutions have deployed to replace human decision-makers.

If you want an explanation for "algorithm aversion", I have a really simple one: Most proposed and implemented algorithms are bad. I get it. The few good ones are basically the fucking holy grail of statistics and computer science, and have changed the world. Institutions are really eager to deploy algorithms because they make decisions easier even if they are being made poorly. Also, as other commentators point out, the act of putting some decision in the hands of an algorithm is usually making it so no one can question, change, be held accountable for, or sometimes even understand the decision. Most forms of algorithmic decision-making that have been deployed in places that are visible to the average person have been designed explicitly to do bigoted shit.

> Algorithm aversion also has "softer" forms, as when people prefer human forecasters or decision-makers to algorithms in the abstract, without having clear evidence about comparative performance.

Every performance metric is an oversimplification made for the convenience of researchers. Worse, it's not a matter of law or policy that's publicly accountable, even when the algorithm it results in is deployed in that context (and certainly not when deployed by a corporate institution). At best, to the person downstream of the decision, it's an esoteric detail in a whitepaper written by someone who is thinking of them as a spherical cow in their fancy equations. Performance metrics are even more gameable and unaccountable than the algorithms they produce.
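
A toy (hypothetical) illustration in Python of how gameable such a metric can be: with a 99:1 class imbalance, a model that never flags the rare case scores 99% accuracy while being useless for the decision it supposedly supports.

    # 99 routine cases, 1 rare case that actually matters.
    y_true = [0] * 99 + [1]
    y_pred = [0] * 100  # a "model" that always predicts the routine outcome

    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    rare_caught = sum(1 for t, p in zip(y_true, y_pred) if t == p == 1)

    print(f"accuracy: {accuracy:.2f}")          # 0.99, looks great in the whitepaper
    print(f"rare cases caught: {rare_caught}")  # 0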

> Algorithm aversion is a product of diverse mechanisms, including (1) a desire for agency; (2) a negative moral or emotional reaction to judgment by algorithms;

In other words, because they are rational adults

>(3) a belief that certain human experts have unique knowledge, unlikely to be held or used by algorithms;

You have to believe this to believe the algorithms should work in the first place. Algorithms are tools built and used by human experts. Automation is just hiding that expert behind at least two layers of abstraction (usually a machine and an institution)

> (4) ignorance about why algorithms perform well; and

Again, this ignorance is a feature, not a bug, of automated decision-making in practice, with essentially no exceptions.

> (5) asymmetrical forgiveness, or a larger negative reaction to algorithmic error than to human error.

You should never "forgive" an algorithm for making an error. Forgiveness is a mechanism that is part of negotiation, which only works on things you can negotiate with. If a human makes a mistake and I can talk to them about it, I can at least try to fix the problem. If you want me to forgive an algorithm, give me the ability to reprogram it, or fuck off with this anthropomorphizing nonsense

> An understanding of the various mechanisms provides some clues about how to overcome algorithm aversion, and also of its boundary conditions.

I don't want to solve this problem. Laypeople should be, on balance, more skeptical of the outputs of computer algorithms than they currently are. "Algorithm aversion" is a sane behavior in any context where you can't audit the algorithm. Like, the institutions deploying these tools are the ones we should hold accountable for their results, and zero institutions doing so have earned the trust in their methodology that this paper seems to want.



