Hacker News

What you call "will" is, in my mind, no different from anything else we encode into a NN - it's just at a different level and depth.

Creating motivation in AI is an open research area, and is arguably the big hairy beast at the heart of the "Friendly AI" question - or really of the whole "General" part of AGI.

You're doing the same thing everyone else in this debate does, which is moving the goalposts: we don't know how to build "emotions", we don't know how to build motivation - until we do, or until it turns out to be an emergent property of a sufficiently deep net.

There are too many other strawmen in there to argue, e.g. the idea that we will always need to tell them what to do.

The point I am making is that because the reinforcement nature of biological learning is mimicked in the basic ANN structure, it's the strongest candidate (at scale) for the building blocks of an AGI.
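To make the analogy concrete, here is a toy sketch (my illustration, not something from the thread) of a reward-modulated weight update on a single stochastic artificial neuron - a REINFORCE-style rule, which is one common way a scalar "reinforcement" signal can shape connection strengths, loosely analogous to reward-driven synaptic change:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def act(w, x):
    # Stochastic unit: fire with probability given by the weighted sum.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return (1 if random.random() < p else 0), p

# REINFORCE update: nudge each weight along the score function
# (a - p) * x, scaled by the scalar reward the action earned.
random.seed(0)
w = [0.0, 0.0]
for _ in range(500):
    x = [1.0, 1.0]
    a, p = act(w, x)
    reward = 1.0 if a == 1 else -1.0  # hypothetical task: firing is rewarded
    for i in range(len(w)):
        w[i] += 0.5 * reward * (a - p) * x[i]

_, p_final = act(w, [1.0, 1.0])
# After training, the unit fires on this input with high probability.
```

This is of course a cartoon of both the biology and the engineering; the claim above is only that the same reinforcement-shaped-weights pattern recurs at every scale of ANN design.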



