Hacker News new | past | comments | ask | show | jobs | submit login

Yes. We learn through our emotions and use them as heuristics. They are measures of pleasure/stress against access to Maslow's needs. This drives instincts and behaviours, and also gives us values. When I 'think' or act I use schemas, but I don't knowingly use a GAN or a leaky ReLU. I personally learn in terms of semantic logic, emotions and metaphors. My GAN is the physical world, society, the dialogical self and a theory of mind. He never mentioned the amygdala or angular gyrus, biomimicking the brain, or creating a society of independent machines. Which we could do, but aren't even trying to, to my knowledge? I mean, there's Sophia (a fancy puppet), but not much else.

We get told to use the term AGI, despite the public calling it AI, as that's just automation. But this feels like we're now allowed to call it AI again? It was presented as: given these advances in automation, we can't rule out arriving at apparent consciousness. But with no line between them.

We do have a definition for intelligence: applied knowledge.

However, here's another thought. Several times in my life I knowingly pressed self-destruct. I quit a job without one to go to despite having a mortgage and kids. I sold all my possessions to travel. I've dumped girls I liked to be free. I've faced off against bigger adversaries. I've played devil's advocate with my boss. I've taken drugs despite knowing the risks, etc. And I benefitted somehow (maybe not in real terms) from all of them. None of these seem like intelligent things to do. They were not about helping the world but about self-discovery and freedom. We cannot program this lack of logic. This perforating of the paper tape (electric ant). It's emergent behaviour based on the state of the world and my subjective interpretation of my place in it. Call it existential, call it experiential, call it a bucket list. Whatever.

AGI would need to fail like us, to be like us. Feel an emotional response from that failure. And learn. Those feelings could be wrong, misguided. We knowingly embrace failure as anything is better than a static state, e.g. people voting Trump as Hillary offered less change.

We also have multiple brains: body/brain, adrenaline, serotonin. When music plays, my body seems to respond before my brain intellectually engages. So we need to consider the physiological as well as the psychological. We have more than 2,000 emotions and feelings (based on a list of adjectives), but that probably only scratches the surface. What about 'hangry'? Then learning to recognise and regulate it.

diff(current perception of world state, perceived success at creating a new desired world state (Maslow)) = stress || pleasure
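The heuristic above could be sketched as code. This is a toy illustration only, not anything the commenter specifies: the Maslow level names, the 0..1 satisfaction scores, and the averaging are all assumptions made up for the example.

```python
# Hypothetical sketch: score perceived current and desired world states
# against Maslow's hierarchy, and treat the gap as a stress/pleasure signal.
# Level names, score ranges, and equal weighting are all assumptions.

MASLOW_LEVELS = ["physiological", "safety", "belonging",
                 "esteem", "self_actualisation"]

def valence(current: dict, desired: dict) -> tuple[str, float]:
    """Return ('pleasure' | 'stress', magnitude) from two perceived states.

    current / desired map each Maslow level to a 0..1 satisfaction score.
    """
    # Positive gap: we perceive ourselves at or beyond the desired state
    # (pleasure); negative gap: we perceive ourselves falling short (stress).
    gap = sum(current.get(level, 0.0) - desired.get(level, 0.0)
              for level in MASLOW_LEVELS) / len(MASLOW_LEVELS)
    return ("pleasure" if gap >= 0 else "stress", abs(gap))

now = {"physiological": 0.9, "safety": 0.8, "belonging": 0.4,
       "esteem": 0.5, "self_actualisation": 0.2}
goal = {level: 0.8 for level in MASLOW_LEVELS}
print(valence(now, goal))  # falls short of the goal overall -> stress
```

The point of the sketch is only that the signal is a *difference* between two subjective perceptions, not an objective measure of the world.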

Even then, how do you measure the 'success'? E.g. I have friends with depression, and they don't measure their lives by happiness alone. I feel depression is actually a normal response to a sick world, and that people who aren't a bit put out are more messed up. If we created intelligence that wasn't happy, would we be satisfied? Or would we call it 'broken' and medicate it, like we do with people?

Finally, I don't think they can all learn off each other. They need to be individual. Language would seem an inefficient data transfer method to a machine, but we individuate ourselves against society. Machines assimilating knowledge won't be individuals; they'd be more swarm-like. We would need to use constraints, which may seem counterproductive, so it's harder to realise.

Wow. I wrote more than I intended there. But yes, emotions are required IMO. Even the bad ones. Sublimation is an important factor in intelligence.

I really enjoyed reading this. Thank you. It relates to some thoughts that have been percolating. I’m actually giving a small internal talk on a few of these ideas.


I really enjoyed reading this too!

I was expecting no response and found this. Thanks!
