
I don't think that's it at all.

> It still needs guardrails, and some domain knowledge, at least to prevent it from using any destructive commands

That just means the AI isn't adequate, which is the point I am trying to make: it should 'understand' not to issue destructive commands.

By way of crude analogy, when you're talking to a doctor you necessarily assume he has domain knowledge, guardrails, etc., otherwise he wouldn't be a doctor. With AI that isn't the case, as it doesn't understand. It's fed training data and given prompts so as to steer it in a particular direction.
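
For concreteness, the guardrails being discussed are often nothing deeper than a pattern filter sitting between the model and the shell. A rough sketch in Python; the denylist patterns and the function names here are illustrative assumptions, not any particular agent framework's API:

    import re
    import subprocess

    # Illustrative denylist of obviously destructive shell patterns.
    DESTRUCTIVE_PATTERNS = [
        r"\brm\s+-[a-zA-Z]*[rf][a-zA-Z]*\s",  # rm with force/recursive flags
        r"\bmkfs(\.\w+)?\b",                  # reformatting a filesystem
        r"\bdd\b.*\bof=/dev/",                # raw writes to a device
        r"\bchmod\s+-R\s+777\b",              # blanket permission changes
    ]

    def is_destructive(command: str) -> bool:
        """True if the command matches a known-destructive pattern."""
        return any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)

    def run_guarded(command: str) -> None:
        """Run a model-proposed command only if it passes the check."""
        if is_destructive(command):
            print(f"blocked: {command!r}")
            return
        subprocess.run(command, shell=True, check=False)

    # run_guarded("rm -rf /tmp/scratch")  # -> blocked
    # run_guarded("ls -la")               # -> executed

Which is exactly the point above: the filter lives outside the model, so the 'knowledge' of what counts as destructive isn't the model's at all.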

I meant "still" as in right now, so yes, I agree it's not adequate right now. But maybe in the future these LLMs will be improved and won't need those guardrails.
