
The AI shouldn’t really be refusing to do things. If it doesn’t have the information, it should say “I don’t know anything about that,” but it shouldn’t lie to the user and claim it cannot do something it actually can when asked to do it.

I think you’re applying standards of human sentience to something non-human and not sentient. A gun shouldn’t try to run CV on whatever it’s pointed at to ensure you don’t shoot someone innocent. Spray paint shouldn’t be locked up because a kid might tag a building or a bum might huff it. Your mail client shouldn’t scan all outgoing messages for “threatening” content and refuse to send them. We hold people accountable and liable, not machines or objects.

Unless and until these systems seem to be sentient beings, we shouldn’t even consider applying those standards to them.

Unless it has information indicating it is safe to provide the answer, it shouldn't. Precautionary principle: better safe than sorry. This is the approach taken by all of the top labs, and it's not by accident or without good reason.

We do lock up spray cans and scan outgoing messages; I don't see your point. If gun technology existed that could scan a target before firing to prevent a murder, we should obviously implement that too.

The correct way to treat an AI is actually like an employee: it's intended to replace one, after all.
