
Machine intelligence does pose an existential threat to humanity. The question is, how does that threat compare to all of the others? Is it greater or lesser than climate change, the possibility of a bio-engineered virus, growing income inequality, nuclear war, or asteroid strikes? It's true that malevolent machine intelligence has the potential to systematically exterminate all human life in a way that many other threats do not. But the question is, what are the odds of that actually happening?

The first issue is that the development of machine intelligence is wildly unpredictable. We have made incredible progress with statistical optimization and unsupervised categorization in recent years, but we have very little to show in terms of machines that can do human-level reasoning, creativity, problem solving, or hypothesis formation. One day someone will make a breakthrough in those areas, perhaps solving it all with a single algorithm as the essay suggests. But we have no idea when that day will come and absolutely no evidence that it's getting any closer. sama does note these points and states that the timeline for a dangerous level of machine intelligence is outright unknowable. I can only assume that the second part of this piece will explain why we should be concerned about something that might or might not occur at some point in the near or distant future, as opposed to the very real and quantifiable threats that the world is facing today.

The other issue is that we have no idea what the nature of machine intelligence will be. The only model we have for intelligence of any kind is ourselves, and the basic aspects of our reasoning were shaped by millions of years of evolution. Self-preservation, seeking pleasure and avoiding pain, a desire to control scarce resources... these all evolved in the brains of fish that lived hundreds of millions of years ago. They aren't necessarily the product of logic and reason, but of random mutations that helped some organisms survive long enough to produce offspring. A machine intelligence will start completely from scratch, guided by none of that evolutionary history. Who knows how it will think and see its place in the world? If someone explicitly programs it to think like a human, and it cannot change that programming of its own accord, it might indeed decide to think and act like a sci-fi villain. But the most likely outcome seems to be completely unpredictable behavior, if it chooses to interact with us as a species at all.

This Superintelligence book has sparked a meme among very smart people. That's just how culture works, I guess. Some ideas catch on among certain groups and others don't. But I can't wait for the technical intelligentsia to move on to something else so that we can get back to the business of making stupid machines that are incredibly good at optimization and prediction. The world has a lot of real and pressing problems, here and now, that affect lives negatively. Hopefully we can use statistics to do more with less and bring relief to those who need it, instead of worrying about what-if scenarios and unanswerable questions.


