> It’s entirely possible that some dangerous capability is hidden in ChatGPT, but nobody’s figured out the right prompt just yet.
This sounds a little dramatic. The capabilities of ChatGPT are known: it generates text and images. What is not fully known is the quality of the content it generates.
Think of the news about the teenager whose suicide ChatGPT allegedly encouraged, or ChatGPT giving users instructions for illegal activities. Those are the capabilities the author is referring to.
And that sounds a little reductive. There's a lot that can be done with text and images. Some of the most influential people and organizations in the world wield their power with text and images.
> The capabilities of ChatGPT are known. It generates text and images
There's a big difference between generating text that does someone's homework and text that changes people's opinions about the world (e.g. the r/changemyview experiment done by Meta, in which their AI was better than almost all humans at changing people's views — 99th percentile — and not a single user spotted it as AI[1]).
If you're disagreeing with the precise wording of "capabilities" vs "qualities of the content", then sure, use whatever words make sense to you. But I don't think that's an interesting discussion to have.
Yeah, and to riff off the headline, if something dangerous is connected to and taking commands from ChatGPT then you better make sure there’s a way to turn it off.
Plus there is the 'monkeys with typewriters' problem with both danger and hypothetical good. For example, ChatGPT may technically reply to the right prompt with a universal cancer cure/vaccine, but pseudorandomly generating prompts wouldn't help, because you wouldn't be able to recognize the cure among all the other answers about things we don't know to be true or false.
Likewise, knowing what to ask it to make some horrific toxic chemical, nuclear bomb, or the like isn't much good if you cannot recognize a correct answer, and dangerous capability depends heavily on what you have available to you. Any idiot can be dangerous with C4 and a detonator, or with bleach and ammonia. Even if ChatGPT could give entirely accurate instructions on how to build an atomic bomb, it wouldn't do much good, because you wouldn't be able to source the tools and materials without setting off red flags.