> ChatGPT is not going to tell you how to design a system, architect properly, automate, package, test, deploy, etc.
If you ask the right questions it absolutely can.
I’ve found that most people who think ChatGPT is a rube are expecting too much extrapolation from vague prompts. “Make me a RESTful service that provides music data.” ChatGPT will give you something that does that. And then you’ll proceed to come to Hacker News and talk about all the dumb things it did.
But if you have a conversation with it and tell it more of the things you’re considering, some of the trade-offs you’re making, how the schema might grow over time, it’s kind of remarkable.
You need to treat it like a real whiteboarding session.
I also find it incredibly useful for getting my code into more mainstream shape. I have my own quirks that I’ve developed over time learning a million different things in a dozen different programming languages. It’s nice to be able to hand your code to ChatGPT and simply ask “is this idiomatic for this language?”
I think the people most disappointed with ChatGPT are trying to treat it like a Unix CLI instead of another developer to whiteboard with.
Every person I've noticed who says that ChatGPT isn't good at what it does has the same thing in common - they're not great at talking to people, either.
Turns out when you train an AI on the corpus of human knowledge, you have to actually talk to it like a human. Which entirely too many people visiting this website don't do effectively.
ChatGPT has allowed me to develop comprehensive training programs for our internal personnel. I already have some knowledge of training and standardization from my time in the military, and I have in-depth domain knowledge, so I can double-check what it's recommending and course-correct it if necessary.
> Every person I've noticed who says that ChatGPT isn't good at what it does has the same thing in common - they're not great at talking to people, either.
I think that the people who nowadays shit on ChatGPT's code generating abilities are the same blend of people who, a couple decades ago, wasted their time complaining that hand-rolled assembly would beat any compiled code in any way, shape, or form, provided that people knew what they were doing.
> But if you have a conversation with it and tell it more of the things you’re considering, some of the trade-offs you’re making, how the schema might grow over time, it’s kind of remarkable.
You're not wrong, but I would caution that it can get really confused when the code it produces exceeds the context length. This is less of a problem than it used to be, as the maximum context length is increasing quite quickly, but by way of example: I'm occasionally using it for side projects to see how best to use it, one of which is a game engine. It (with a shorter context length than we have now) started by creating a perfectly adequate Vector2D class with `subtract(…)` and `multiply(…)` functions, but when it came to using that class it was calling `sub(…)` and `mul(…)`. Not absolutely stupid, and a totally understandable failure mode given how it works, but still objectively incorrect.
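Roughly, the mismatch looked like the sketch below. This is a reconstruction from memory, not the code it actually produced; everything beyond the Vector2D class and the subtract/multiply vs. sub/mul names is filler I've made up to illustrate.

```typescript
// Rough reconstruction of the failure mode, not the actual generated code.
class Vector2D {
  constructor(public x: number, public y: number) {}

  subtract(other: Vector2D): Vector2D {
    return new Vector2D(this.x - other.x, this.y - other.y);
  }

  multiply(scalar: number): Vector2D {
    return new Vector2D(this.x * scalar, this.y * scalar);
  }
}

const position = new Vector2D(10, 5);
const target = new Vector2D(2, 3);

// What it generated later, once the class definition had fallen out of context:
//   const step = position.sub(target).mul(0.016);   // error: 'sub' does not exist on Vector2D
// What its own earlier class actually supports:
const step = position.subtract(target).multiply(0.016);
console.log(step.x, step.y);
```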
I frequently run into this, and it’s quite maddening. When you’re working on a toy problem where generating functioning code is giving you a headache - either because it’s complex or because the programming language is foreign or crass - no problem. When you’re trying to extend an assemblage of 10 mixins in a highly declarative framework that many large-scale API contracts rely on to be correct, the problem is always going to boil down to how well the programmer understands the existing tools/context that they’re working with.
To me, a lot of this boils down to the old truism that “code is easier to write than maintain or extend”. Companies who dole out shiny star stickers for producing masses of untested, unmaintainable code will always reap their rewards, whether they’re relying on middling engineers and contractors alone, or with novices supercharged with ChatGPT.
It can't give you a straight answer, or it hallucinates APIs.
It can't tell you "no, this cannot be done"; it tries to "help" you.
For me, it's great for writing simple isolated functions, generating regexes, command-line solutions, and exploring new technologies.
But after making it write a few methods or classes, it just gets extremely tedious to make it add or change code, to the point that I just write it myself.
Further, when operating at the edge of your knowledge, it also leads you on, whereas a human expert would just tell you "aaah, but that's just not possible/not a good idea".
I think that's a fair description. While I have not yet found ChatGPT useful in my "real" day job (its understanding of aerospace systems is better than I would have guessed, yet not enough to be super helpful to me), I have found it generally useful in more commonplace scripting tasks and what-not.
With the caveat of, I still need to understand what it's talking about. Copy-pasting whatever it says may or may not work.
Which is why I remain dubious that we're on the road to LLMs replacing software engineers. Assisting? Sure, absolutely.
Will we get there? I don't know. I mean, like, fundamentally, I do not trust LLMs. I am not going to say "hey ChatGPT, write me a flight management system suitable for a Citation X" and then just go install that on the plane and fly off into the sunset. I'm sure things will improve, and maybe improve enough to replace human programmers in some contexts, but I don't think we're going to see LLMs replacing all software engineers across the board.
In a similar vein, ChatGPT can be an amazing rubber duck. If I have a strange and obscure problem that stumps me, I kinda treat ChatGPT like I would treat a forum or an IRC channel 15-20 years back. I don't have "prompting experience or skills", but I can write up the situation, what we've tried, what's going on, and throw that at the thing.
And... it can dredge up really weird possible reasons for system behaviors fairly reliably. Usually, for a question of "Why doesn't this work after all of that?", it drags up like 5-10 reasons for something misbehaving, and we've usually already checked 8 of those. But the last few can be really useful for starting to think outside the normal box about why things are borked.
And often enough, it can find at least the right idea to identify the root causes of these weird behaviors. The actual "do this" tends to be some degree of bollocks, but it's enough of an idea to follow up on.