If this thing gets released to the general population, fooling gullible people could go very badly. Imagine a disgruntled person with mental illness forming a relationship with the bot. The bot feeds into their delusions, then hallucinates instructions on how to commit mass murder, egging on the human user and indirectly causing a catastrophe.
"Analyzing text and generating new text in response" is not by definition harmless. For example, that's the job description for many remote employees. Suppose your cofounder told you that one of your remote employees was sabotaging your company -- would it be safe to conclude that there was no issue, because the remote employee was simply "analyzing text and generating new text in response"?
Kevin Roose is a seasoned tech reporter, and he said he had trouble sleeping after his chat with the bot. ("I felt a strange new emotion — a foreboding feeling that A.I. had crossed a threshold") So I don't think we can rule out anything here in terms of the impact on the general population.
You're correct that the bot doesn't have control of the nuclear arsenal... but is the military going to make a special effort to keep people who do have their finger on the trigger away from this thing? In my opinion, it is worthwhile to spend time thinking through the worst-case scenario, the same way you would consider edge cases in safety-critical code.
Launching nuclear weapons takes an order from the president which unlocks encrypted launch codes. Those orders have to be sent to actual missile silos and submarines, where a chain of command verifies the order, verifies the launch codes, and two people have to independently engage the launch system. There are many fail-safes in the entire system; one single person fooled by an AI is not going to launch anything. The system is designed to thwart actual bad actors like foreign spies and intelligence agencies. I am confident there is truly zero risk that a chat bot will cause nuclear weapons to launch.