
"But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel?"

Perhaps it'd be better to ask: why would we want a computer to do these things? I certainly do not want to live in a world where computers have their own motivations and desires and the ability to act on the same.

Actually, I can put that more strongly: none of us will live very long in a world where computers have their own motivations, desires, and the ability to act on the same.




you seem pretty convinced living computers would kill us all. this seems like a pretty big assumption to me?


Because you are made of atoms that can be used more efficiently for something else. Morality (and all human values) is a purely human concept that evolved under the specific conditions of human evolution (and even just our specific culture). AIs are not anthropomorphic; they don't have to have anything like human minds or values.


Why do you think AI would be more concerned about efficiency than understanding?


There are a bajillion possible future worlds that a particular mind might choose to make, and only an extremely tiny fraction of these worlds include the happiness of mankind as a priority above everything else. And not being the first priority, in essence, means being a worthless disturbance in the way of the first priority, and thus extinction.


the idea that there is a better application for my atoms seems subjective/human too


That's true, I don't know what the goal of an AI will be. But if it does anything at all, it's more likely to be indifferent to humans than compatible with our goals. An AI programmed to solve a difficult optimization problem might convert the entire mass of the solar system into a giant computer. An AI programmed for self-preservation would try to destroy every tiny possible threat to its existence, and store as much energy as possible to try to stave off heat death.


"An AI programmed to solve a difficult optimization problem might convert the entire mass of the solar system into a giant computer."

ha :) i love this. my initial reaction was, again: this is a huge assumption. that is, you're assuming self-preservation would be a goal before sustaining human life. but then i realized, i guess this is your point! regardless of what goals we intend to program for, their solutions are unknown to us and could be catastrophic by our definitions.

that all said, i welcome robot catastrophe too. if it's going to happen it's going to happen and i'd prefer it be while i can experience it.


>i welcome robot catastrophe too. if it's going to happen it's going to happen and i'd prefer it be while i can experience it.

I'd rather it turn out friendly than unfriendly, and I'd rather have as much time as possible to either figure it out or live out my life.

There'd be nothing to experience anyway. The AI might kill us all overnight with a super-lethal virus, nukes, or swarms of nanobots.


>if it's going to happen it's going to happen and i'd prefer it be while i can experience it.

It'd certainly be fascinating. But I think I would rather that humanity does whatever it can do to ensure that any artificial intelligence we create won't cause an outcome that is bad for humanity.


And then the question is - if the AI is more intelligent than us humans, surely it must be able to figure itself out and improve itself, including its own goals, too?


Why would it change its own goals? That would violate its current goals, so it would try to avoid that action.

Can you change yourself to be a sociopath? Would you? Would you make yourself a perfect nihilist?
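
To make this concrete, here's a minimal toy sketch in Python (all names hypothetical, not anyone's actual AI design) of an agent that scores candidate actions with its current utility function. "Rewrite my own goal" loses, because a future self pursuing a different goal does worse by the current goal's lights:

    # Toy model: the agent ranks actions by predicted outcome,
    # judged ONLY by its current utility function.

    def paperclip_utility(world):
        # Current goal: more paperclips is better.
        return world["paperclips"]

    def predicted_world(world, action):
        # Crude world model of what each action leads to.
        outcome = dict(world)
        if action == "build_paperclips":
            outcome["paperclips"] += 10
        elif action == "rewrite_own_goal":
            # A successor with a different goal builds no paperclips.
            outcome["paperclips"] += 0
        return outcome

    def choose_action(world, actions, utility):
        # Pick the action whose predicted outcome scores highest
        # under the agent's *current* goal.
        return max(actions, key=lambda a: utility(predicted_world(world, a)))

    world = {"paperclips": 0}
    print(choose_action(world, ["build_paperclips", "rewrite_own_goal"],
                        paperclip_utility))  # -> build_paperclips

The rewrite is always evaluated by the current goal, never by the rewritten one - the agent has no other yardstick.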


i think an intelligent system would be able to think critically, think ethically, and re-evaluate its beliefs.

what kind of intelligence wouldn't question its own understanding?


Thank you! I was going to write something similar. I think a real 'superior' AI must be able to follow all the various philosophical ideas we have had and 'understand' them at a deeper level than we do. Things such as 'there is no purpose'/nihilism, extreme critical thinking about itself, etc. If it doesn't, if it can't, it can't be superior to us by definition.

I think, given these philosophical ideas, we anthropomorphize if we even think in terms of good/evil about any AI. I believe if there is ever any abrupt change due to vastly better AI, it will be more of the _weird_ kind than the good or evil kind. But weird might be very scary indeed, because at some level we humans tend to like that things are somewhat predictable.

I believe the whole discussion about AI is a bit artificial (no pun intended). Various kinds of AI are already deeply embedded in some parts of society and cause real changes - airplane planning systems, stock market trading, etc. Those cause very real-world effects and affect very real people. And they tend to be already pretty weird. We don't really see it all the time, but it acts, and its 'will', so to speak, is a weird product of our own desires.

Also, I wonder whether and how societies would compare to AIs. We have mass psychological phenomena in societies that even the brightest people only become aware of some time after 'they have fulfilled their purpose'. Are societies self-aware as a higher level of intelligence? And have they always been?

Are we maybe simply the substrate for the evolution of technology, much as biology is the substrate for the evolution of us? Are societies, algorithms, AI, ideas & memes simply different forms of 'higher beings' on 'top' of us? Does it even make sense that there is a hierarchy, and to think hierarchically at all about these things?

I have the impression our technology makes us, among other things, a lot more conscious. But that is not a painless process at all, quite the contrary. So far, though, we seem to have decided to go this route. Will we, as humans, eventually become mad in some way from this?

There are mad people. Can we build superior AI if we do not understand madness? Will AI understand madness?


>Thank you! I was going to write something similar. I think a real 'superior' AI must be able to follow all the various philosophical ideas we have had and 'understand' them at a deeper level than we do. Things such as 'there is no purpose'/nihilism, extreme critical thinking about itself, etc. If it doesn't, if it can't, it can't be superior to us by definition.

Understanding a value is not the same as adopting it as your utility function. Morality is specific to humans. A different being would have different goals and a different morality (if any). It's very unlikely they would be compatible with humans.


Intelligence means that it can figure out how to fulfill its goal as optimally as possible. It doesn't mean that it can magically change its goals to something that is compatible with human goals. Why would it? Human goals are extremely arbitrary.


I think it's the other way around.

I assume that an AI will be more intelligent than us if we build it right. Then assuming that a randomly designed intelligence has the same goals as us is a huge assumption. Most humans don't even have the same goals, and we're 99% similar to each other.

In other words - the assumption that a random intelligence shares our goals is a much bigger assumption than that a random intelligence will be just like us.


They might not want to, and it might be more of a homogenization than outright killing. I think the final third of Stross' Accelerando speaks rather well to this possibility - if you don't want to be converted into computronium so that you can participate in Economy 2.0, you'd better emigrate.


It's the same thing with aliens. The "evil, technologically superior alien race" is a big trope in science fiction, just as the "super-intelligent and malevolent" AI (as opposed to the super-intelligent and benevolent or indifferent one) is, and maybe even among futurists.


"Indifferent" can be pretty bad. You are indifferent to the existence of grass whenever you mow your lawn, for instance. Or even to the fate of the cow when you have a steak dinner.



