One of the great things about LLMs: since they aren't deterministic, prompt engineering will be even more susceptible to magical thinking than it ever was with Google.
The big difference is the feedback loop. With prompt engineering you can change your prompt slightly and immediately see whether your benchmarks improve. With SEO you had to wait days at least, and even then it was impossible to rule out external effects like other sites adding or removing links to you.
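To make that concrete, the loop is basically the sketch below; the eval cases, model name, and OpenAI client usage are just stand-ins for whatever model and benchmark you actually use.

    # Rough sketch of the prompt-iteration loop: edit the template, re-score, compare.
    from openai import OpenAI

    client = OpenAI()

    EVAL_CASES = [  # (input text, substring the answer should contain) -- illustrative only
        ("The cat sat on the mat.", "cat"),
        ("GDP rose 3% last year.", "3%"),
    ]

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""

    def score(template: str) -> float:
        # Fraction of eval cases where the model's answer contains the expected substring.
        hits = sum(expected in ask(template.format(text=text))
                   for text, expected in EVAL_CASES)
        return hits / len(EVAL_CASES)

    print(score("Summarize in one sentence: {text}"))
    print(score("You are a terse editor. Summarize in one sentence: {text}"))

Two prompt variants, scores in minutes instead of days, and nothing external can move the numbers on you in the meantime.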
Anyone who has had to retrain models understands the pain. What prompts offer may look superficial on the surface, but it's groundbreaking: they remove retraining, and with it the entire MLOps deployment lifecycle, from the loop.
Yes, I've seen people at work call it directly to generate code that talks to APIs, with the temperature set to zero. They still frequently get different or broken code back.
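You can see this for yourself: even with temperature pinned to 0, repeated calls often disagree. A quick sketch using the OpenAI Python SDK (the model name and prompt are just examples):

    # Ask the same question several times at temperature 0 and count distinct answers.
    from openai import OpenAI

    client = OpenAI()
    prompt = "Write a Python function that GETs https://example.com/api/users and returns the JSON."

    outputs = set()
    for _ in range(5):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,
            messages=[{"role": "user", "content": prompt}],
        )
        outputs.add(resp.choices[0].message.content)

    print(len(outputs))  # often more than 1 in practice, despite temperature=0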