Of course if your definition of AGI involves the ability to mimic a human, or maybe display empathy for a human, etc., then yeah, you probably do need the ability to experience lust, fear, suspicion, etc. And IMO, in order to do that, the AI would need to be embodied in much the same way a human is, since so much of our learning is experiential and is based on the way we physically experience the world.
It's pretty simple to model and predict, either for a human or a deep net given some training data.
"Emulating these primitive parts" isn't some impossibility.
We get told to use the term .agi, despite the public calling it .ai, since .ai is just automation. But this feels like we're now allowed to call it .ai again? It was presented as: given these advances in automation, we can't rule out arriving at apparent consciousness. But with no line drawn between the two.
We do have a definition of intelligence: applied knowledge.
However, here's another thought. Several times in my life I have knowingly pressed self-destruct. I quit a job without another to go to, despite having a mortgage and kids. I sold all my possessions to travel. I've dumped girls I liked in order to be free. I've faced off against bigger adversaries. I've played devil's advocate with my boss. I've taken drugs despite knowing the risks, etc. And I benefitted somehow (maybe not in real terms) from all of them. None of these seem like intelligent things to do. They were not about helping the world but about self-discovery and freedom. We cannot program this lack of logic, this perforating of the paper tape (Electric Ant). It's emergent behaviour based on the state of the world and my subjective interpretation of my place in it. Call it existential, call it experiential, call it a bucket list. Whatever.
.agi would need to fail like us to be like us: feel an emotional response to that failure, and learn. Those feelings could be wrong, misguided. We knowingly embrace failure because anything is better than a static state, e.g. people voting for Trump because Hillary offered less change.
We also have multiple brains: body and brain, adrenaline and serotonin. When music plays, my body seems to respond before my brain intellectually engages. So we need to consider the physiological as well as the psychological. We have more than 2,000 emotions and feelings (based on a list of adjectives), but that probably only scratches the surface. What about 'hangry'? Then learning to recognise and regulate it.
diff(current perception of world state, perception of success at creating a new desired world state (Maslow)) = stress || pleasure
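As a toy illustration only, the heuristic above could be sketched in code. Every name and number here is invented; it's just one way to read "the gap between the perceived world and the desired world registers as stress or pleasure", not a real model of affect.

```python
# Hypothetical sketch of the diff(perception, desire) = stress || pleasure idea.
# States are toy scalars in [0, 1]; all names and thresholds are invented.

def affect_signal(perceived_state: float, desired_state: float) -> str:
    """Compare the perceived current world state with the desired one
    (a la Maslow). A shortfall reads as stress; meeting or exceeding
    the desired state reads as pleasure."""
    gap = desired_state - perceived_state
    return "stress" if gap > 0 else "pleasure"

print(affect_signal(perceived_state=0.4, desired_state=0.9))  # stress
print(affect_signal(perceived_state=0.9, desired_state=0.7))  # pleasure
```

Even this crude version exposes the problem raised next: someone has to pick the desired state and decide what counts as success.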
Even then, how do you measure 'success'? E.g. I have friends with depression, and they don't measure their lives by happiness alone. I feel depression is actually a normal response to a sick world, and that people who aren't a bit put out are more messed up. If we created intelligence that wasn't happy, would we be satisfied? Or would we call it 'broken' and medicate it, like we do with people?
Finally, I don't think they can all learn off each other. They need to be individuals. Language would seem an inefficient data transfer method to a machine, but we individuate ourselves against society. Machines assimilating knowledge won't be individuals; they'd be more swarm-like. We would need to impose constraints, which may seem counterproductive and so harder to realise.
Wow, I wrote more than I intended there. But yes, emotions are required IMO, even the bad ones. Sublimation is an important factor in intelligence.