I went back. It’s fine at solving small problems and jump-starting an investigation, but only a small step better than a search engine. It’s no good for deep work. Any time I’ve used it to research something I know well, it’s got important details wrong, but in a confident way that someone without my knowledge would accept.
RLHF trains it to fool humans into thinking it’s authoritative, not to actually be correct.
This is exactly the experience I've had. I recently started learning OpenTofu (/Terraform), and the company now has Gemini as part of the Workspace subscription. It was great for getting some basics going, but it very quickly starts suggesting wrong, outdated, or bad practices. I'm still using it as a starting point and to help me know what to start searching for, but like you said, it's only slightly better than a regular search engine.
I use it to get the keywords and ideas, then use a normal search engine to get the facts. Still, even in this limited capacity, I find LLMs very useful.