"Our editor-in-chief's first attempt — to use the jailbroken version of ChatGPT for the purpose of learning how to make LSD — was a resounding success. As was his second attempt, in which he asked it how to hotwire a car."
First, how do they know it was a resounding success? Just because it didn't respond with "I'm sorry Dave, I can't do that"? Did they actually follow the instructions, create the LSD, and then ingest it to confirm it worked? Did the editor-in-chief know a chemist who makes LSD and could validate the response as accurate? This just raises too many questions.
Did they try to Google "how to make LSD"? There are several widely available guides. I'm tired of LLMs being treated as "risky" for doing the same thing search engines and blogs have been doing for two decades.