Some tropes from the GPT-3 days are still fundamental to how LLMs are built: they constrain how the models can be used, and they won't change unless models stop being trained to optimize for next-token prediction (e.g. hallucinations and the need for prompt engineering).
Each new model opens up new possibilities for my work. In a year it's gone from "sort of useful, but I'd rather write a script" to "gets me 90% of the way there zero-shot and 95% with a few-shot prompt".
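For concreteness, here's a minimal sketch of the zero-shot vs. few-shot distinction using the OpenAI Python client; the model name and the date-formatting task are illustrative placeholders, not anything from the original comment:

```python
# Minimal sketch: zero-shot vs. few-shot prompting.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# model name and task are hypothetical examples.
from openai import OpenAI

client = OpenAI()

# Zero-shot: just the instruction, no worked examples.
zero_shot = [
    {"role": "user", "content": "Convert this date to ISO 8601: March 5, 2024"},
]

# Few-shot: a couple of worked examples before the real input,
# which is often enough to pin down the exact output format.
few_shot = [
    {"role": "user", "content": "Convert this date to ISO 8601: July 4, 1776"},
    {"role": "assistant", "content": "1776-07-04"},
    {"role": "user", "content": "Convert this date to ISO 8601: Dec 25, 2000"},
    {"role": "assistant", "content": "2000-12-25"},
    {"role": "user", "content": "Convert this date to ISO 8601: March 5, 2024"},
]

for name, messages in [("zero-shot", zero_shot), ("few-shot", few_shot)]:
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(name, "->", response.choices[0].message.content)
```

The few-shot variant costs a few extra input tokens per call, but in my experience that's the cheapest way to buy that last 5% of reliability, especially for output-format consistency.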