Reading just the syllabus, I was surprised to see no mention of accountability. Quick Ctrl+F searches for "accountability", "appeal", and "review" gave no results. "Reputation" appears, but in a section rather harshly titled "Herding and Conformity", about the reputations of the people not trusting algorithms, not the people making or deploying them.
In my own experience, human forecasters and decision-makers tend to be much easier to hold accountable for bad forecasts and decisions. At a minimum, they stake their reputations, just by putting their names to their actions. With algorithms, by contrast, there's often no visible sign of who created them or decided to use them. There's often no effective process for review, correction, or redress at all.
The fact that high-volume, low-risk decisions tend to get automated more often may partly explain this lack of accountability. And that lack, in turn, may partly explain general attitudes toward algorithms.
Under that interpretation of accountability, you could easily hold a machine accountable for its decisions: if it loses enough "reputation points" that people no longer trust it to make the right call, the machine could simply be replaced.