This is not at all obvious, and I'd argue it's untrue. LLMs are tools that generate plausible-sounding text. If that text is correct, they might save you time, but if it's wrong, it leads you down the wrong path and wastes time.
The less experience you have, the harder it is to tell how bad the answer is, which wastes even more time.