Hacker News

The biggest problem won’t be AGI. It will be the thousands of shitty AI and ML models that predict things with 99.9% accuracy, meaning people (read: judges, etc.) assume they’re 100% accurate regardless of how often they get used.

Look at the Post Office (Horizon) scandal in the UK. Now imagine that in all systems, because AI is inherently statistical in nature.




You don't need AGI to implement abusive policies. But it definitely helps if you want to do it at a scale humanity is not in a position to cope with. Stable dictatorships are a very likely outcome of such technology, including in places that currently do not have dictatorships.


Which human is 99.9% accurate most of the time? What's the difference here?


None. So we inherently distrust their testimony.

But now take a system that works in 99% of cases, and that system says you are guilty.

I guarantee most judges will find this enough evidence to convict. Look at the Post Office scandal.

They won’t even audit the systems.
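The worry above is essentially the base-rate fallacy: even a highly accurate system, run against a large and mostly innocent population, produces many false accusations. A minimal sketch, with purely hypothetical numbers:

```python
# Base-rate sketch: why "99.9% accurate" can still wrongly flag many people.
# All numbers below are hypothetical, chosen only for illustration.

population = 1_000_000   # people the system is run against
guilt_rate = 0.001       # assume only 0.1% are actually guilty
accuracy = 0.999         # system is right 99.9% of the time, on guilty and innocent alike

guilty = population * guilt_rate
innocent = population - guilty

true_positives = guilty * accuracy            # guilty people correctly flagged
false_positives = innocent * (1 - accuracy)   # innocent people wrongly flagged

# Probability that a flagged person is actually guilty:
p_guilty_given_flag = true_positives / (true_positives + false_positives)

print(f"Total flagged: {true_positives + false_positives:.0f}")
print(f"Innocent people wrongly flagged: {false_positives:.0f}")
print(f"P(guilty | flagged) = {p_guilty_given_flag:.1%}")
```

With these numbers, 999 of the 1,998 flagged people are innocent: a coin flip, not proof. A judge who treats the flag as near-certain evidence is off by a factor of two, and the error gets worse the rarer the underlying offence is.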



