
Most people will hardly read what the LLM spits out after three hours of use; they'll just execute the code. You are then running potentially harmful code with the user's access level, which could be root, possibly in a company environment, on a VPN, etc. It's really scary, because at first glance it will all look 100% legitimate.
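
One way to shrink the blast radius instead of pasting generated code straight into your own shell: run it in a throwaway container with no network and a non-root user. A minimal sketch, assuming Docker and the python:3.12-slim image are available locally; run_untrusted is just a name for illustration, not a real API:

    import pathlib
    import subprocess
    import tempfile

    def run_untrusted(code: str, timeout: int = 30) -> subprocess.CompletedProcess:
        """Execute LLM-generated Python in a throwaway container
        rather than with the caller's own privileges."""
        with tempfile.TemporaryDirectory() as tmp:
            script = pathlib.Path(tmp) / "generated.py"
            script.write_text(code)
            return subprocess.run(
                [
                    "docker", "run", "--rm",
                    "--network", "none",      # no reach into the VPN/intranet
                    "--user", "nobody",       # not the caller's uid, never root
                    "--read-only",            # container filesystem is immutable
                    "--memory", "256m",       # cap resource use
                    "-v", f"{tmp}:/work:ro",  # expose only the script, read-only
                    "python:3.12-slim",
                    "python", "/work/generated.py",
                ],
                capture_output=True,
                text=True,
                timeout=timeout,
            )

    print(run_untrusted("print('hello from the sandbox')").stdout)

This doesn't make the output trustworthy, it just means a malicious snippet can't phone home or touch your files while you decide whether to actually read it.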
