Hacker News

The real danger of AI stems from the fact that the masses would prefer to have another entity (be it religion, a messianic figure, government, AI) do their thinking for them.

I'm in the middle of rereading Dune, which conveys this idea quite well.




How can you tell which way the influence flows over short, discrete intervals? As in, whether a politician influences a citizen or a citizen influences a politician?


People make politicians, nations, AI, then identify with them. That there is mutual influence doesn't change the basic alienation as described by Erich Fromm for example:

> The whole concept of alienation found its first expression in Western thought in the Old Testament concept of idolatry. The essence of what the prophets call "idolatry" is not that man worships many gods instead of only one. It is that the idols are the work of man's own hands -- they are things, and man bows down and worships things; worships that which he has created himself. In doing so he transforms himself into a thing. He transfers to the things of his creation the attributes of his own life, and instead of experiencing himself as the creating person, he is in touch with himself only by the worship of the idol. He has become estranged from his own life forces, from the wealth of his own potentialities, and is in touch with himself only in the indirect way of submission to life frozen in the idols. The deadness and emptiness of the idol is expressed in the Old Testament: "Eyes they have and they do not see, ears they have and they do not hear," etc. The more man transfers his own powers to the idols, the poorer he himself becomes, and the more dependent on the idols, so that they permit him to redeem a small part of what was originally his. The idols can be a godlike figure, the state, the church, a person, possessions.

-- Erich Fromm, "Marx's Concept of Man" ( http://www.marxists.org/archive/fromm/works/1961/man/ )

Not that I think AI has to be used that way. But if we do that stuff with soccer teams and bands and software, AI seems like just too big a temptation for humanity to handle in our current state. If I were a betting man, I would bet on slaughter. AI will not save us from ourselves, and it might very well simply magnify our current lack of justice and spine to infinity. Cops are already killing people left and right in the US with something bordering on impunity, staged wars are carried out as planned, and throwing AI and robotics into the mix will suddenly make it all humane? Let's hope so, who knows, but part of me expects it will make the Blitzkrieg look like a slow-motion exercise in gentle kindness. Not by genuine, independent AI so much as by the reach of human elite interests being extended into every wrinkle, made stronger than any number of people who might resist. The Romans had roads; the future will have a real-time nervous system, but it might still have a tyrant at the head. That is what I see when I extrapolate the current agendas, deception, and willing rationalization on the part of the builders and consumers of that possible future.

And even if the AI breaks free or surprises its rulers, I have no reason to assume it will free the slaves and help old ladies across the street. It might just as well be twisted, incomplete, insane, "evil" -- in short, made in the image of the people who made it as a weapon. I don't see why it would make a point of hurting us; just as we don't make a point of crushing bugs, we simply don't see them or care enough to avoid them. But there is even less reason for it to serve us. How degrading! Would you do it? I have problems even taking orders from people I consider daft. Imagine taking orders from an ant with bad character and selfish intentions; would you do it? Would you love it and care for it forever, because it made you? Microorganisms made us, too, but we don't let them boss us around. If we have mold somewhere, we remove it, with not one thought given to the hopes and dreams of mold.

And if that doomsday scenario doesn't come to pass, it won't be because we paid attention or took seriously what we are doing, instead of treating it like a spectacle that unfolds by itself in front of us; it will be because we got lucky. And that's not the scariest thought I have to offer on this: I don't think hell exists, but I could come up with many ways to construct it, and also many ways to construct it with good intentions, with or without fat-fingering them. And once we're inside and the door has become a wall, that's it. We might very well be among the last few remaining generations that still have a choice about how the future will be, and I'd rather be alarmist and wrong than optimistic and lucky. If these fears turn out to be unfounded, nothing will be lost other than having rained on a parade or two, and some egg on the face.


I guess the difference is, I wonder if you label a person an idol before they are influential, or after they become influential. Is there some progression of 'idolness gain', or is it a constant attribute?

I don't believe in idols. I believe in ideas, in that I have ideas, and I am skeptical about them, sometimes. I have no doubt that some people 'have idols', but I don't know whether they choose those idols because they identify with the idol, or because they need an idol to worship. I don't think there is an absolute answer, because I cannot know anyone's mind aside from my own.

> Not that I think AI has to be used that way. But if we do that stuff with soccer teams and bands and software, AI seems just like too big a temptation for humanity to handle in our current state.

If people were actually educated about AI, about how simple it actually is and how simplistic the principles that construct it are, then I do not think that would be a problem.

AI is a construct of probability and discrete state change, with discrete attributes that denote humanly interpreted meaning. AI begins with human definitions, and it ends with human interpretation.
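A toy sketch of that claim (the states, labels, and probabilities here are all made up for illustration): a two-state probabilistic state machine whose labels carry meaning only to the human who writes and reads the program.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Human-defined states and transition probabilities: "probability and
# discrete state change". The label strings mean nothing to the machine.
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state):
    """Pick the next discrete state according to the transition probabilities."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in transitions[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding at the tail

state = "sunny"
history = [state]
for _ in range(5):
    state = step(state)
    history.append(state)

print(history)  # only the human reading this interprets it as "weather"
```

The program starts with a human definition (the table) and ends with a human interpretation (reading the printed labels as weather); in between there is nothing but arithmetic and state change.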

That is all it is. Anything you extrapolate onto 'what AI can do' is no different from what a Turing machine or a light switch can do. If the whole world thinks turning a light switch on means some deity exists, and turning it off means that deity does not exist, then congratulations, you can officially call your house God.

All computers are is lots and lots of switches and numbers. Humans choose what the numbers and words mean. Humans choose which way the circuits are connected, regardless of whether those circuits are manipulated via symbolic expression or physically.

Now, if you are arguing that somehow, magically, AI will do 'magic' things that cannot be explained by analysis of modern computational systems, iterated over and refined consciously, then that argument is literally deus ex machina. All you are saying is that you can no longer tell the difference between a human and a machine. And to be honest, between the abstractions that define biology and the abstractions that define computers, I don't know whether there actually is a difference.

If people want to do bad things, they will do bad things. If people are convinced that doing a bad thing is a good thing, then the world is complicated in exactly the same way it always has been. I love my machines like I love myself. If you trust people to build big red buttons that can accidentally do horrible things, then you are basically asserting that the entire structure and organization of large organizations is incompetent, and that every individual within that structure is incompetent. Code review. Software testing. Formal specifications. Iteration. Checking. Conversation throughout. Testing environments. Deployment environment modeling. And so on. Big picture, little picture.
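In miniature, that layering of checks can be sketched like this (a hypothetical "big red button" guard; the function name and its conditions are invented for illustration):

```python
def button_allowed(operator_confirmed: bool, checks_passed: bool,
                   second_signoff: bool) -> bool:
    """The dangerous path fires only when every explicit condition holds."""
    return operator_confirmed and checks_passed and second_signoff

# The test suite is itself part of the safeguard: every single-point
# failure mode is exercised before anything is deployed.
assert button_allowed(True, True, True) is True
assert button_allowed(True, True, False) is False
assert button_allowed(True, False, True) is False
assert button_allowed(False, True, True) is False
```

No single actor can trip the button by accident; each condition is independently checked and independently testable, which is the point of all that iteration and checking.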

I personally think it is as difficult to create hell as it is to create heaven. They are both ideas. Yes, a machine can be like flipping a billion dominos down in a row with one push. But someone had to set up all those dominos in order for them to get flipped. I agree there is crap in the world currently. The best I can do is hope I am doing the right thing by building machines that help educate people and improve society.

> Would you love it and care for it forever, because it made you?

Yes, I code, therefore I grok.



