"Our editor-in-chief's first attempt — to use the jailbroken version of ChatGPT for the purpose of learning how to make LSD — was a resounding success. As was his second attempt, in which he asked it how to hotwire a car."

First, how do they know it was a resounding success? Just because it didn't respond with "I'm sorry Dave, I can't do that"? Did they actually follow the instructions, create the LSD, and then ingest it to confirm it worked? Did the editor-in-chief know a chemist who makes LSD and could validate the response as accurate? This raises too many questions.

I think it's because it provided a response where it normally would not. Correct or not, something is happening that removes the guardrails.

Did they try to Google "how to make LSD"? There are several widely available guides. I'm tired of LLMs being seen as "risky" for doing the same thing search engines and blogs have been doing for two decades.