The underlying article with original reporting is https://www.fastcodesign.com/90132632/ai-is-inventing-its-ow... and doesn't support the implication that this AI had to be shut down because it would somehow be dangerous if it were allowed to continue training. Rather, it wasn't useful for the intended purpose:
> “Our interest was having bots who could talk to people,” says Mike Lewis
It sounds like the problem is that software agents trained to communicate with other agents can develop the equivalent of slang or argot, which they understand but outsiders don't. That's pretty interesting, but the "shut down" framing seems unnecessarily sensational.