
“Yes, X would be catastrophic. But have you considered Y, which is also catastrophic?”

We need to avoid both; otherwise it's a disaster either way.




I agree, but that removes the nuance that, in this specific case, Y is a prerequisite of X, so focusing solely on X is a mistake.

And for sake of clarity:

X = sentient AI can do something dangerous

Y = humans can use non-sentient AI to do something dangerous


"sentient" (meaning "able to perceive or feel things") isn't a useful term here, it's impossible to measure objectively, it's an interesting philosophical question but we don't know if AI needs to be sentient to be powerful or what sentient even really means

Humans will not be able to use AI to do something selfish if we can't get it to do what we want at all, so we need to solve that (larger) problem before we come to this one.


OK: self-flying drones the size of a deck of cards, each carrying a single bullet and enough processing power to fly around looking for faces, navigate to a face, and fire when in range. Produce them by the thousands and release them on the battlefield. Existing AI is more than capable.


You can do that without AI. We've been able to do it for probably 7-10 years.


You can do that now, for sure, but I think it qualifies as AI.

If you don't want to call it AI, that's fine too. It is indeed dangerous and already here. Making the autonomous, programmed behavior of such tech more powerful (and more complex), as well as more ubiquitous, just makes it even more dangerous.


You don't need landmines to fly for them to be dangerous.


I'm not talking about this philosophically, so you can call it whatever you want: sentience, consciousness, self-determination, or anything else. From a purely practical perspective, either the AI is giving itself its instructions or it's taking instructions from a person. And there are already plenty of ways a person can cause damage with AI today, without the AI needing to go rogue and make its own decisions.


This is a false dichotomy that ignores many options other than "giving itself its instructions or taking instructions from a person".

Examples include "instructions unclear, turned the continent to gray goo to accomplish the goal"; "lost track mid-completion, spun out of control"; "generated random output with catastrophic results"; "operator fell asleep on the keyboard, accidentally hit the wrong key or combination"; etc.

If a system with write permissions is powerful enough, things can go wrong in many ways other than "evil person used it for evil" or "system became self-aware".


Meanwhile, back in reality, most haywire AI is the result of C programmers writing code with UB or memory-safety problems.
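
To make that concrete, here's a toy example (mine, not from any real codebase) of the classic off-by-one bug that produces this kind of undefined behavior:

    #include <stdio.h>

    int main(void) {
        int weights[3] = {1, 2, 3};
        int sum = 0;
        /* Bug: <= should be < ; when i == 3 this reads past the end
           of weights[], which is undefined behavior in C. The program
           may print garbage, crash, or appear to work fine depending
           on the compiler, optimization flags, and platform. */
        for (int i = 0; i <= 3; i++)
            sum += weights[i];
        printf("sum = %d\n", sum);
        return 0;
    }

Compile the same source with different optimization levels and you can get different outputs, which is exactly the "haywire" behavior being described.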


Whenever you think the timeline couldn't be any worse, just imagine a world where our AIs were built in JavaScript.



