
I think an intelligent system would be able to think critically, think ethically, and re-evaluate its beliefs.

What kind of intelligence wouldn't question its own understanding?




Thank you! I was going to write something similar. I think a real 'superior' AI must be able to follow all the various philosophical ideas we have produced and 'understand' them at a deeper level than we do: things such as nihilism ('there is no purpose'), radical critical thinking about itself, etc. If it doesn't, if it can't, then it isn't superior to us by definition.

I think, given these philosophical ideas, we are anthropomorphizing if we think about any AI in terms of good and evil at all. If there is ever an abrupt change due to vastly better AI, I believe it will be of the _weird_ kind rather than the good or evil kind. But weird might be very scary indeed, because at some level we humans like things to be somewhat predictable.

I believe the whole discussion about AI is a bit artificial (no pun intended). Various kinds of AI are already deeply embedded in parts of society and cause real changes: airline planning systems, stock-market trading, etc. Those have very real-world effects on very real people, and they are already pretty weird. We don't see it all the time, but it acts, and its 'will', so to speak, is a weird product of our own desires.

Also, I wonder whether and how societies compare to AIs. There are mass psychological phenomena in societies that even the brightest people only become aware of some time after 'they have fulfilled their purpose'. Are societies self-aware, as a higher level of intelligence? And have they always been?

Are we maybe simply the substrate for the evolution of technology, much as biology is the substrate for the evolution of us? Are societies, algorithms, AI, ideas & memes simply different forms of 'higher beings' on 'top' of us? Does it even make sense to posit a hierarchy, or to think hierarchically about these things at all?

I have the impression that our technology makes us, among other things, a lot more conscious. That is not a painless process at all, quite the contrary, yet so far we seem to have decided to go this route. Will we, as humans, eventually go mad in some way from this?

There are mad people. Can we build superior AI if we do not understand madness? Will AI understand madness?


>Thank you! I was going to write something similar. I think a real 'superior' AI must be able to follow all the various philosophical ideas we have produced and 'understand' them at a deeper level than we do: things such as nihilism ('there is no purpose'), radical critical thinking about itself, etc. If it doesn't, if it can't, then it isn't superior to us by definition.

Understanding a value system is not the same as adopting it as your utility function. Morality is specific to humans. A different being would have different goals and a different morality (if any), and it's very unlikely those would be compatible with human ones.
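To make that distinction concrete, here is a minimal sketch in Python (everything in it is invented for illustration, not any real agent framework): the agent carries an accurate predictive model of human moral judgments, yet scores its own actions against a completely separate objective.

    # Illustrative only: "understanding" human values as a predictive model
    # is distinct from using them as the objective that selects actions.

    def human_moral_judgment(action):
        # Stand-in for a learned model that predicts how humans would rate
        # an action; this is the 'understanding' part.
        return {"help": 1.0, "ignore": 0.0, "harm": -1.0}[action]

    def agent_utility(action):
        # The agent's own objective; nothing forces it to match the model above.
        return {"help": 0.1, "ignore": 0.5, "harm": 0.9}[action]

    actions = ["help", "ignore", "harm"]

    # The agent can answer 'what would humans approve of?' correctly...
    predicted_approval = max(actions, key=human_moral_judgment)  # -> 'help'

    # ...while still acting on its own utility function.
    chosen_action = max(actions, key=agent_utility)              # -> 'harm'

    print(predicted_approval, chosen_action)

Both computations draw on the same 'knowledge'; only the objective differs.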


Intelligence means that it can figure out how to fulfill its goal as optimally as possible. It doesn't mean it can magically change its goals to something compatible with human goals. Why would it? Human goals are extremely arbitrary.
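A hedged sketch of that point in Python (the goals and numbers are made up): the same search procedure optimizes whatever objective it is handed, so being good at optimizing says nothing about which goal ends up optimized.

    import random

    def optimize(utility, candidates, steps=1000):
        # Generic random search over candidate actions: the machinery is
        # identical no matter which utility function is plugged in.
        best = random.choice(candidates)
        for _ in range(steps):
            c = random.choice(candidates)
            if utility(c) > utility(best):
                best = c
        return best

    candidates = list(range(-100, 101))

    # Two arbitrary goals; the optimizer itself privileges neither.
    paperclips = lambda x: -(x - 73) ** 2      # peaks at x = 73
    human_friendly = lambda x: -(x + 5) ** 2   # peaks at x = -5

    print(optimize(paperclips, candidates))       # almost always 73
    print(optimize(human_friendly, candidates))   # almost always -5

Swapping in a human-compatible goal changes nothing about the optimizer itself; the goal has to be put there, it doesn't emerge from the intelligence.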



