
To the "People said X about Y and were wrong so people saying X about Z must also be wrong" form of argument, I always think of "They laughed at the Wright Brothers, but they also laughed at Bozo the Clown." Basically, the fact that people were concerned about technology we now consider normal and more or less harmless doesn't prove that people presently concerned about some other new technology are also wrong.


Right. There is no proof on either side—either that AI is capable of organically morphing into a super-intelligence or that it inherently isn’t. It’s not formal. It’s people burning cycles telling each other about their intrinsic risk tolerance.


Aha! This post finally made it click for me that that is exactly what’s going on, thanks for putting it so succinctly.

I think one of the big differences between AI and most other previous technologies is that the potential impact different people envision has very high variance, anywhere from extremely negative to extremely positive, with almost every point in between as well.

So it’s not just different risk tolerances; it’s also that different people see the risks and rewards very differently.


> Right. There is no proof on either side—either that AI is capable of organically morphing into a super-intelligence or that it inherently isn’t. It’s not formal.

You're right that there is no formal proof, but the conclusion that humans have not reached peak intelligence is pretty solid. Based on evolutionary arguments, we're broadly only as intelligent as we needed to be to reproduce.

If humans are not at peak intelligence, it then follows that something more intelligent is possible. It's similar to how birds are necessarily not the fastest objects that can fly, and jet planes and rockets show that artificial systems can easily surpass the inherent limitations of evolved biological systems.

The only wishy-washy part of the AI deduction is whether "intelligence" is a similar kind of property that evolved. If it is then the evolutionary argument applies, but even if it's not, and it's just a broad, mixed set of capabilities, AI is advancing on all of those fronts too, so I'm not sure you can escape the conclusion so easily.


So we agree higher intelligence is possible. But the next question is, “Is that a doomsday scenario?” And then, “Is it so bad that society should fear these intelligence tools?” To argue that, you have to get into a really abstract realm where there is nothing conclusive (outside of philosophical thought experiments) indicating whether we should welcome or fear our new super-intelligent systems.


You don't have to get that abstract. Absent some guarantee that intelligence implies inherent moral superiority, the precautionary principle would apply, and so we should plan for there being no association between the two. Some reasonable level of caution is thus warranted. People can disagree on what's "reasonable", but we should at least all agree that a danger could exist and that some caution should apply.

We can't generalize from any association between intelligence and morality in humans, because human morality and intelligence evolved, possibly even co-evolved, and those constraints simply don't apply to artificial systems any more than a bird's feathers apply to artificial flight.


We all agree that there may be dragons. The “doomsday” question is whether we’re even remotely close to a dragon’s lair such that we should imminently worry about waking the dragon—risking it burning down the village. To each their own until we find dragon tracks and charred forests.


The doomsday question has two components: the mean and the error bars. Most people probably agree that a reasonable date for likely achieving AGI is maybe 30-50 years from now. But because we don't know what we don't know, the error bars are so wide that they encompass tomorrow (we could be one small generalization trick away!), and that's the danger.


As long as you can only train an AI with another AI in an adversarial context, and can't do something like training an LLM on another LLM's output without degradation, I think we can still sleep fine. Once that isn't true anymore, we'll find ourselves in interesting times.


> "They laughed at the Wright Brothers, but they also laughed at Bozo the Clown."

This is a very good compression of the article, losing very little information.



