
It's fair to expect that too - these days AI can't exist without human beings, so I guess when someone extrapolates AI into the future, it's instinctive to use the present as the baseline.



The likelihood that we will develop a machine that we couldn't stop, that also has the ability to destroy us, and that could survive without us is pretty slim. (Consider the amount of infrastructure that needs to be maintained and controlled.) And that's without considering that we would have to create such a thing in the first place, either intentionally or accidentally.

Unless we purposefully made these machines self-repairing. But then, why would we bother with that when we can simply replicate them?


I think that we will develop machines that can destroy humans, but they will require continuous maintenance.

In other words, I think war automation will be a thing.

Self-repair is a nice idea in theory, but it isn't real. In principle we could make programs that fix their own bugs (it's physically possible), but in practice no such capability exists, and it won't for the foreseeable future. Unless some kind of Deep Developer comes along and blows everyone out of the water by writing code that looks good enough to beat what an average dev would write.


The machine could manipulate humans to help it become self-repairing.

Otherwise I agree with you: the odds are very slim in the next few decades, though notably less slim over the next thousand years.





