
I think the notion that machine learning algorithms are "unreasonable and impenetrable" is seen as a huge PR boon by these companies as it shifts the responsibility away from actual humans. So they try hard to promote it.

The fact is that there is always a human in the loop. Without human supervision these algorithms deliver a small but significant portion of incredibly stupid results. So an actual human has to sit down, analyze these results one by one and decide what to do (in some cases just hardcoding the "correct" answer). The general public must be educated about this stuff so that responsibility is not muddled.

Yep. The usual incarnation of the scapegoat is "policy." It sounds much better to blame a byzantine rulebook (the perfect tool for diffusion of responsibility) than to reveal that the strategists have decided to throw a subset of customers under the bus. In the case of a monopoly, sometimes it's not even a subset.

Incidentally, this also explains why there is zero interest in making rulebooks available, concise, searchable, etc. All of these would improve fairness, but rulebooks are actually an instrument of power, not of fairness, so existing power structures will typically oppose any such changes.
