Hacker News

Both your arguments so far are standard ones addressed in "Superintelligence: Paths, Dangers, Strategies" [1].

Sometimes AI progress comes in rather shocking jumps. Stockfish was the strongest chess engine in the world; then AlphaZero started training, and by the end of that same day it was several hundred Elo points stronger than Stockfish [2].
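To put "several hundred Elo" in perspective, the standard Elo expected-score formula converts a rating gap into the stronger player's expected score (win probability plus half the draw probability). A quick sketch, using the conventional 400-point scale constant:

```python
def elo_expected_score(rating_diff: float) -> float:
    """Expected score for the higher-rated player,
    given the rating gap in Elo points."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

# Illustrative gaps: a +100 gap already means ~64% expected score;
# +300 pushes it to roughly 85%.
for gap in (100, 200, 300):
    print(f"+{gap} Elo -> expected score {elo_expected_score(gap):.3f}")
```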

An entity capable of discovering and exploiting computer vulnerabilities 100x faster than a human could create some serious leverage very quickly, even on infrastructure that's air-gapped [3].

1: https://www.amazon.ca/Superintelligence-Dangers-Strategies-N...

2: https://en.wikipedia.org/wiki/AlphaZero

3: https://en.wikipedia.org/wiki/Stuxnet




> Sometimes AI progress comes in rather shocking jumps.

Shocking in an academic sense, sure, but not in a "revolutionize the world in a single day" sort of way, which is what would be required for the paperclip maximizer scenario to pose a serious threat. AlphaZero was impressive, but not _that_ impressive.

> An entity capable of discovering and exploiting computer vulnerabilities 100x faster than a human could create some serious leverage very quickly

100x faster than any human _and_ any previously developed AI. It would also have to be sufficiently sapient to contemplate the possibility of world domination, with all the prerequisite technological advancements that implies (likely including numerous advances in cybersecurity driven by previous generations of AI).



