This seems like the canary in the coal mine. We have a company that built this tool because it seemed semi-possible (it probably "works" well enough most of the time), and they don't want to fall behind in case anything that gets built turns out to be the next ChatGPT. So there's no caution about anything now, even ideas that can go catastrophically wrong.
Yeah, it's just data now, but soon we'll have home robotics platforms that are cheap and capable. They'll run a "model" with "human understanding", only any weird bugs may end up causing irreparable harm. Like, you tell the robot to give your pet a bath and it puts it in the washing machine, because it's... you know, not actually thinking beyond a magic trick. The future is really marching fast now.