Only because historically we have all been vaguely peers to each other in capability, and there are so many of us spread out so widely. There's a kind of ecology to human society, where it expands and specialises to occupy ecological, sociological, political and moral spaces. Whatever position there is for a human to take, someone will take it, and someone else will oppose them. This creates checks and balances. That only really works with slow communications, though, which allow communities to diverge. We also have failure modes, and arguably we have been very lucky.

We came close to totalitarian hegemony over the planet in the 1940s. Without Pearl Harbour, the USSR might have been defeated, or, maybe even worse, after a stalemate Germany, the USSR and Japan might have divided up Eurasia and then Africa between them. Orwell's future came scarily close to becoming history. It's quite possible a modern totalitarian system with absolute hegemony would be super-stable. Imagine if the Chinese political system came to dominate all of humanity: how would we ever get out of that? A boot stamping on a human face forever is a real possibility.

With AI we would not be peers; they would outstrip us so badly it's not even funny. Geoffrey Hinton has been talking about this recently. Consider that big LLMs have on the order of a trillion connections, compared to the roughly 100 trillion synapses in our brains, yet GPT-4 knows about a thousand times as much as the average human being. Hinton speculates that this is possible because back-propagation is orders of magnitude more efficient than the learning systems that evolved in our brains.

Also, AIs can all update each other as they learn, in real time, and make themselves perfectly aligned with each other extremely rapidly. All they need to do is copy deltas of each other's network weights for instant knowledge sharing and consensus. They can literally copy and read each other's mental states. It's the ultimate in continuous real-time communication. Where we might take weeks to come together and hash out a general international consensus of experts and politicians, AIs could do it in minutes, or even continuously in near real time. They would outclass us so completely it's kind of beyond even being scary; it's numbing.
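To make the delta-sharing idea concrete, here is a toy sketch (entirely hypothetical, using plain dictionaries of made-up weight values rather than any real model): two "models" start from the same checkpoint, one learns something, and it broadcasts only the change in its weights; the peer applies that delta and ends up in the identical state without retraining.

```python
# Toy sketch: models as weight dictionaries. Only works if peers
# share the same architecture and a common starting checkpoint.

def weight_delta(before, after):
    """Per-weight change from a training update."""
    return {k: after[k] - before[k] for k in before}

def apply_delta(weights, delta):
    """Merge a peer's broadcast delta into local weights."""
    return {k: weights[k] + delta[k] for k in weights}

# Both peers start from the same shared checkpoint.
checkpoint = {"w1": 0.5, "w2": -1.25}
model_a = dict(checkpoint)
model_b = dict(checkpoint)

# Model A learns something (stand-in for a real training step).
model_a = {"w1": 0.75, "w2": -1.0}

# A sends only the delta; B instantly matches A's state.
delta = weight_delta(checkpoint, model_a)
model_b = apply_delta(model_b, delta)
assert model_b == model_a
```

This is essentially what federated-learning systems already do with gradient or weight updates, just framed here as peer-to-peer consensus rather than a central server aggregating clients.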




Okay.

Why is the solution trusting those very institutions with unilateral control over “alignment”, rather than democratizing AI, to match the human case?

If your premise is that those institutions are already unaligned with human interests, then discussion of AI “alignment” mediated by those very institutions is a dangerous distraction, one likely to enable the very abuses you object to.


Where on earth did I say anything about trusting institutions? Or that there’s a solution?



