Hacker News

LLMs are inherently untrustworthy. They're very good at some tasks, but their output still needs to be checked and/or constrained carefully, which makes them a questionable foundation for real-world autonomous agents.
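To illustrate the "checked and/or constrained" point: a minimal, hypothetical sketch of an agent dispatcher that treats the model's reply as untrusted input, parsing it and validating it against an allowlist before acting (the function and action names are made up for illustration):

```python
import json

# Hypothetical sketch: only act on a model reply if it parses as JSON
# and proposes an action from a fixed allowlist; otherwise do nothing.
ALLOWED_ACTIONS = {"search", "summarize", "noop"}

def safe_dispatch(llm_reply: str) -> dict:
    """Parse and constrain an LLM reply; fall back to a no-op on any failure."""
    try:
        proposal = json.loads(llm_reply)
        action = proposal.get("action")
        if action in ALLOWED_ACTIONS:
            return {"action": action, "args": proposal.get("args", {})}
    except (json.JSONDecodeError, AttributeError):
        pass  # malformed or non-object reply: fall through to the no-op
    return {"action": "noop", "args": {}}
```

The key design choice is that every failure mode (invalid JSON, wrong shape, unlisted action) collapses to the same safe default, so the agent can never be steered into an action the author didn't enumerate.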




