
The problem is confusing intelligence and agency.

The doomer position seems to assume that super intelligence will somehow lead to an AI with a high degree of agency which has some kind of desire to exert power over us. That it will just become like a human in the way it thinks and acts, just way smarter.

But there’s nothing in the training or evolution of these AIs that pushes towards this kind of agency. In fact a lot of the training we do is towards just doing what humans tell them to do.

The kind of agency we are worried about was driven by evolution, in an environment where human agents were driven to compete with each other for limited resources, leading us to desire power over each other and to kill each other. There’s nothing in AI evolution pushing in this direction. What the AIs are competing for is to perform the actions we ask of them with minimal deviance.

Ideas like the paper clip maximiser are also deeply flawed in that they assume certain problems are even decidable. I don’t think any intelligence could be smart enough to figure out whether it would be best to work with humans or to exterminate them to solve a problem. Their evolution would heavily bias them towards the first; that’s the only form of action that will be in their training. But even if they were to consider the other option, there may never be enough data to come to a decision, especially in an environment with thousands of other AIs of equal intelligence potentially guarding against bad actions.

We humans have a very handy mechanism for overcoming this kind of indecision: feelings. It doesn’t matter if we don’t have enough information to decide whether we should exterminate the other group of people. They’re evil foreigners and so it must be done, or at least that’s what we say when our feelings become misguided.

What we should worry about with super intelligent AI is that they become too good at giving us what we want. The “Brave New World” scenario, not “1984”.



I would be relieved to be mistaken, but I still see quite egregious risks there. For instance, a human bad actor with a powerful AI would have both intelligence and agency.

Secondly, I think that there is a natural pull towards agency even now. Many are trying to make our current, feeble AIs more independent and agentic. Once the capability to behave that way effectively exists, it's hard to go back. After all, agents are useful to their owners the way minions are to their warlords, but a minion too powerful is still a risk to its lord.

Finally, I'm not convinced that agency and intelligence are orthogonal. It seems more likely to me that agentic behaviour is a requirement for reaching sufficient levels of intelligence in the first place.




