I've been very struck during most of the AI discussions recently by how little weight commenters seem to give to the subtlety and rich contextual knowledge that humans bring to even quite simple activities.
I know we often over-estimate the value of our contributions. I know we often find that our functions can ultimately be automated in some respect. But I find in aggregate that the leading comments reflect a very arid conception of being a human connected to other humans.
For example, in the discussion about AI lawyers there was very little sense of the moral aspect of one human acting on behalf of another as a client. In the discussions about the replacement of programming jobs by this kind of technology, there was not a great deal of confidence in the importance of human judgement in building human-focused systems.
Is this just reflective of our context as people who streamline and automate, or do HN readers genuinely think a human isn't such a complex entity?
For me this is somewhat like the T-shirt that says "I went outside once, but the graphics were crap" — except nobody's joking.
I completely agree with you that hackerland is depressingly myopic. And the new power elite of Silicon Valley are dangerously contemptuous of human institutions.
But aside from that, I think it's just people who get used to one paradigm getting confused by another.
To the automation-centric thinker, human institutions seem ill-specified and appear to allow many absurdities. What such thinkers miss is that human institutions are simple frameworks designed to enable agents who have judgment. Automation, by contrast, is about complicated frameworks designed to constrain agents that have no judgment.
People who understand human systems (the vast majority of the world) are similarly confused by automation, because their assumptions are flipped.