If they are complex enough, could they simulate data handling and manipulation? For example, amplifying an audio track, where the conventional approach would be to run it through dedicated amplification hardware.
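For contrast, the conventional software (non-LLM) version of amplification is just arithmetic on samples. A minimal sketch, with illustrative names and float samples assumed to lie in [-1.0, 1.0]:

```python
def amplify(samples, gain):
    """Scale audio samples by `gain`, clipping to [-1.0, 1.0]."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

# Hypothetical four-sample track, doubled in volume:
track = [0.1, -0.25, 0.5, -0.6]
louder = amplify(track, 2.0)
print(louder)  # [0.2, -0.5, 1.0, -1.0] (last sample clipped)
```

The question is whether an LLM could reproduce this transformation purely by "reasoning" over the data, rather than executing it.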
I have also read of instances where LLMs act as eyes, rendering the environment on a virtual canvas as they "see" it; in effect, they are simulating the operation of a complex biological organ and sense.
I'm somewhat reminded of the paper "Could a Neuroscientist Understand a Microprocessor?", which calls out the weaknesses of black-box analysis and simulation techniques. https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...