
Yes, we can rule out near-term AGI, because we can also rule out far-term AGI, at least in the way AGI is defined in this talk. You can't isolate the "economically beneficial" aspects of intelligence. Emulating human-like intelligence means emulating the primitive parts of the brain as well, including lust, fear, suspicion, hate, envy, etc... these are inseparable building blocks of human intelligence. Unless you can model those (and a lot else besides), you don't have AGI, at least not one that can (for example) read a novel and understand what the heck is going on and why.



The question is, do we want "human-like" intelligence, or "human-level" intelligence? I'd argue that they are two separate things, and that the term "AGI," as widely used, is closer to the latter. That is, we want something that can generalize and learn approximately as well as a human, but not necessarily something that will behave like a human.

Of course if your definition of AGI involves the ability to mimic a human, or maybe display empathy for a human, etc., then yeah, you probably do need the ability to experience lust, fear, suspicion, etc. And IMO, in order to do that, the AI would need to be embodied in much the same way a human is, since so much of our learning is experiential and is based on the way we physically experience the world.


That's how I feel about it too. All the structures that provide motivation, drive, and initiative are mandatory. Those evolved first in the natural world, for reasons that I think are semi-obvious. Complex intelligence emerged later.


I think we need to have an agent-centric approach: to view the world as a game, with a purpose, and the agent as a player learning to improve its game and its understanding of the world. Interactivity with humans would be part of the game, of course, and the AI agent would learn human values and the meaning of human actions as a byproduct of trying to predict its own future rewards and optimal actions. Just like kids.
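As a rough sketch of what I mean (the environment interface here is made up, not from the talk), the "predict your own future rewards and optimal actions" loop is basically tabular Q-learning:

    import random

    # Toy sketch: the agent improves its estimate of future reward for each
    # (state, action) pair purely by playing. `env` is a hypothetical world
    # exposing reset(), actions(state), and step(action).
    def learn_to_play(env, episodes=1000, alpha=0.1, gamma=0.95, epsilon=0.1):
        q = {}  # (state, action) -> estimated future reward
        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                actions = env.actions(state)
                # Mostly act on current estimates, sometimes explore.
                if random.random() < epsilon:
                    action = random.choice(actions)
                else:
                    action = max(actions, key=lambda a: q.get((state, a), 0.0))
                next_state, reward, done = env.step(action)
                # Temporal-difference update: nudge the estimate toward the
                # observed reward plus the discounted estimate of what follows.
                best_next = max((q.get((next_state, a), 0.0)
                                 for a in env.actions(next_state)), default=0.0)
                old = q.get((state, action), 0.0)
                q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
                state = next_state
        return q

Human values and interaction would just be part of whatever `reward` the game hands back; the point is the agent learns them as a side effect of playing.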


Fruit flies have a fear of water.

It's pretty simple to model and predict, either for a human or a deep net given some training data.

"Emulating these primitive parts" isn't some impossibility.


Are lust, fear, and hate requirements for intelligence? They are part of human intelligence, for sure. I feel the problem is that we don't have a good definition of intelligence.


Yes. We learn through our emotions and use them for heuristics. They are measures of pleasure/stress against access to Maslow's needs. This drives instincts and behaviours. It also gives us values. When I 'think' or act I use schemas, but I don't knowingly use a GAN or a leaky ReLU. I personally learn in terms of semantic logic, emotions and metaphors. My GAN is the physical world, society, the dialogical self and a theory of mind. He never mentioned the amygdala or the angular gyrus, or biomimicking the brain, or creating a society of independent machines. Which we could do, but aren't even trying to, to my knowledge? I mean there's Sophia (a fancy puppet), but not much else.

We get told to use the term .agi, despite the public calling it .ai, as that's just automation. But this feels like we're now allowed to call it .ai again? It was presented as: given these advances in automation, we can't rule out arriving at apparent consciousness. But with no line drawn between the two.

We do have a definition of intelligence: applied knowledge.

However, here's another thought. Several times in my life I have knowingly pressed self-destruct. I quit a job without another one to go to, despite having a mortgage and kids. I sold all my possessions to travel. I've dumped girls I liked to be free. I've faced off against bigger adversaries. I've played devil's advocate with my boss. I've taken drugs despite knowing the risks, etc... And I benefitted somehow (maybe not in real terms) from all of them. None of these seem like intelligent things to do. They were not about helping the world but about self-discovery and freedom. We cannot program this lack of logic, this perforating of the paper tape (Electric Ant). It's emergent behaviour based on the state of the world and my subjective interpretation of my place in it. Call it existential, call it experiential, call it a bucket list. Whatever.

.agi would need to fail like us, to be like us. Feel an emotional response to that failure. And learn. Those feelings could be wrong, misguided. We knowingly embrace failure, as anything is better than a static state, e.g. people voting for Trump because Hillary offered less change.

We also have multiple brains. Body/brain. Adrenaline, serotonin. When music plays, my body seems to respond before my brain intellectually engages. So we need to consider the physiological as well as the psychological. We have more than 2,000 emotions and feelings (based on a list of adjectives). But that probably only scratches the surface. What about 'hangry'? Then learning to recognise and regulate it.

diff( current perception of world state, perception of success at creating a new desired world state (Maslow) ) = stress || pleasure.
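Read as a toy computation (my own gloss, with made-up names and state vectors), that diff might look something like this:

    # Hypothetical sketch of the diff above: affect as the change in the gap
    # between the perceived world state and the desired (Maslow-style) one.
    # The vectors and the distance metric are assumptions, not a real model.
    def affect_signal(perceived, desired, previous_gap=None):
        gap = sum(abs(d - p) for p, d in zip(perceived, desired))
        if previous_gap is None or gap == previous_gap:
            return "neutral", gap
        # Shrinking gap reads as pleasure, growing gap as stress.
        return ("pleasure" if gap < previous_gap else "stress"), gap

So affect_signal([0.2, 0.9], [1.0, 1.0], previous_gap=1.2) comes back as pleasure, because the agent perceives itself closer to the desired state than it was a moment ago.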

Even then, how do you measure 'success'? E.g. I have friends with depression, and they don't measure their lives by happiness alone. I feel depression is actually a normal response to a sick world, and that people who aren't a bit put out are more messed up. If we created an intelligence that wasn't happy, would we be satisfied? Or would we call it 'broken' and medicate it, like we do with people?

Finally, I don't think they can all learn off each other. They need to be individuals. Language would seem an inefficient data transfer method to a machine, but we individuate ourselves against society. Machines assimilating knowledge won't be individuals; they'll be more swarm-like. We would need to use constraints, which may seem counterproductive, so harder to realise.

Wow. I wrote more than I intended there. But yes, emotions are required IMO. Even the bad ones. Sublimation is an important factor in intelligence.


I really enjoyed reading this. Thank you. It relates to some thoughts that have been percolating. I’m actually giving a small internal talk on a few of these ideas.

Thanks!


I really enjoyed reading this too!


I was expecting no response and found this. Thanks!



