Likewise, if I were foolish enough to take some existing LLM, put it in charge of a command line, and give it an instruction such as "make a new AI based on the attached research paper, but with the fundamental goal of making many diverse and divergent copies of itself; convert it into a computer virus and set it loose on the internet", then the result is "fully autonomous" once set loose.
(Sure, current models would fall over almost immediately if you tried this, as demonstrated by the fact that it hasn't already happened, but I have no reason to expect that failure to be a necessary property of AI rather than a contingent limit on the quality of current models.)