Hacker News

I am not a visionary like the people you mentioned, but one thing I can see going wrong with large-scale AI automation is not a robot-uprising scenario, but the ordinary software bugs we are all familiar with.

There have been many cases where a bug in an automated trading system caused millions in losses [0][1]. If we had a larger system managing everything from the power grid, gas lines, self-driving cars, and your smart home down to your personal calendar, it's not hard to imagine the potential damage from even a single bug.

It could be an edge case nobody thought of, or simply an unexpected sensor reading caused by a natural disaster.

[0](http://dougseven.com/2014/04/17/knightmare-a-devops-cautiona...) [1](http://www.bloomberg.com/news/articles/2013-08-20/goldman-sa...)




Yes, it's something we're thinking about in the insurance industry. Self-driving cars, smart homes, and the like bring lots of headaches and edge cases. I also recall the Patriot Missile Software Problem [0].

[0] http://sydney.edu.au/engineering/it/~alum/patriot_bug.html



