Data point of one: ChatGPT 3.5, even the free product, is so much better at answering technical questions than Google.
Some questions I had successfully answered recently:
> "I would like to animate changing a snippet of code. I'll probably be using Remotion. Is there a JavaScript library that can animate changing one block of text into another?"
> "In Golang, how can I unit test a http mux? How can I test the routes I've registered without making real http calls?"
> "Hello ChatGPT. I have an application using OpenTelemetry. I want to test locally whether it can collect logs and metrics. The application is running in Tilt. How can my local integration test read the logs and metrics from the OTel collector?"
ChatGPT is, on average, better than Google at arriving at a correct answer, but the two fail in different ways. When Google fails, it's usually in the form of "I can't find an answer; better ask someone smart for help," whereas when ChatGPT fails, it often gives an incorrect answer.
Depending on your fault tolerance and timeline, one will be better than the other. If you have low tolerance for faults, ChatGPT is bad; but if you are on a crunch and decide it's OK to be confidently incorrect some small percentage of the time, then ChatGPT is a great tool.
Most industry software jobs, at least the high-paying ones, have low fault tolerance, and that's why ChatGPT is not entirely replacing anyone yet.
So, even in your example, and even if you write all the code yourself, there is still a risk that you are operating above your own competence level: you do exactly as ChatGPT instructs, and it fails miserably down the line because ChatGPT provided a set of steps whose flaws an expert would have spotted.
I would also add the axis of how easy it is to tell when it's wrong. If you ask an LLM for code and quickly get a syntax error or the wrong result, it's not going to waste much of your time or, usually, make you look bad. If you ask it to do some analysis on a topic where you don't have enough knowledge to tell whether it's right, however, that's a lot riskier, because the reputational damage falls on you.
This is the big problem with Google's AI results: before, the wrong answer came from seoscum.com and people learned to ignore that site. Now the wrong answer carries Google's corporate reputation, and there's no way to conditionally distrust it, so you learn not to trust the results for anything.
Google doesn't really say it can't find an answer; instead it returns less relevant (or outright irrelevant) search results. LLMs hallucinate, while search engines surface irrelevance.
> Data point of one: ChatGPT 3.5, even the free product, is so much better at answering technical questions than Google.
That's not the point of Google. It gives you a starting point for researching the answer you need. ChatGPT just gives you an answer that might not be correct. So how do you define "successfully answered"?
In programming there are always tradeoffs. It's not about picking the first answer that looks like it "runs."