ChatGPT is entertaining because it can pump out low impact content at essentially zero cost. A poem about global warming in the style of Shakespeare? No problem. The world wasn’t hanging on its every word, but that’s ok. I think we are all richer for having this capability. I see it as analogous to having a vase of flowers. It’s a nice thing to have even if the flowers are imperfect.
ChatGPT doesn’t worry me though, at least not yet. It’s not reliable enough to do most things people will pay for. Can it write some python code? Sure, and sometimes quite impressively. But too much of the time it’s wrong. And as long as that continues to be true, software engineers will keep their jobs. Ditto with essay writing and even holding a coherent conversation, frankly.
I happen to think we’re at a local maximum in terms of what purely statistical language models can achieve. I think we’re going to need some planning, some working memory, and a knowledge graph to get things working, but I could be wrong.
Higher abstractions in software have allowed for increased productivity and new applications. Making it easier to write code will only drive more adoption and customization of technology. We are very far from software eating the world: there have only been a few bites, and much of the meal is left.
Is there some evidence that fewer software developers are employed now than in the past? The latest layoffs in Silicon Valley notwithstanding, I can't see how software development will become a less important trend than it already is.
ChatGPT targets low-end, offshore customer service and phone call answering applications for technical support. Neither it nor Copilot is any replacement for a software developer.
It can write high-impact content if you give it a high-impact prompt. I asked it to write a short story by giving it a bit more to chew on, and it gave me an excellent result [0] (IMHO). Then I can talk to it and give high-level comments to tweak or rewrite the output. It's a completely different way to be creative, but it is very powerful!
This. I think comments about the quality of an unrefined output from these kinds of generative models completely miss the point. ChatGPT and Stable Diffusion are more impactful when you think about them as a paradigm change in creative workflows rather than something that will blow your mind with its output the first time you prompt it.
These models ain't gonna make any jobs obsolete (for now), just redistribute productivity between those who will eventually learn how to harness them and those who won't.
Right. The fact that I can copy and paste a leetcode question into ChatGPT and get a well-formed answer is amazing, but let's not forget the non-trivial amount of work that goes into writing a question, with an answer and test cases.
Exactly. This is always the problem when adding a layer of tooling: you now have to fix things at the meta-level, plus understand everything at the level the tooling was supposed to cover for you. That trade-off is what decides whether such a tool ends up useful, because it comes down to the time invested in dealing with both levels.
The impact on education is huge. The current style of take-home assignments will lose any educational effect. Negative in the short run; one can only hope we can switch to new styles of education and turn it into something positive, but it's not a given that will happen.
ChatGPT will be sold (and quite profitably) as a chat support agent for customer service and technical support questions. And it will do a great job at that.
ChatGPT may be entertaining, but it still represents a potential threat to the job market. For example, many journalists and content creators are already being replaced by AI-powered tools. Automation is not just limited to physical labor; it's starting to encroach on creative work as well. ChatGPT is just one more tool in the arsenal of AI-driven automation, and the trend of replacing people with machines will only continue.
Journalism is unfortunately an area where people are too often not interested in paying for the truth. And I don’t mean that in a partisan way, I mean people aren’t willing to pay for it at all. It’s quite sad.
I agree with the general sentiment. I have heard more doom and gloom about the end of the software industry in the last week than I think I ever have in my career. But the question that keeps popping into my head is, "What would the process of laying off all the devs and replacing them with ChatGPT even look like?".
And the truth is, I don't see any way it could work. Too many questions. Would product owners interface with the language model directly? Would they trust it to do devops? To debug? What would happen if the behavior of the system wasn't quite what they wanted? What if they ran into situations that required very in-depth analysis of performance via logging? Would/could a large language model orchestrate all of this?
At the end of the day, what devs are paid for is their knowledge of a complex system. That is always going to be needed.
> "What would the process of laying off all the devs and replacing them with ChatGPT even look like?".
Why does it have to be all or none? Why can't it be incredibly incremental?
> At the end of the day, what devs are paid for is their knowledge of a complex system.
I've turned to ChatGPT 2-3 times this week to answer questions about things I don't know. I can only imagine where this technology is headed in the next 5 years.
However, you can make the argument that I knew "exactly" how to game it/what to ask because of my experience. I used what I know to ask specifically for what I don't know. :)
There are a lot of outsourced dev shops that, honestly, you wouldn't trust to do devops or to debug things anyway. It would make a lot more sense to start automation in that market and work your way up.
People have been saying most software devs will soon have no jobs since at least the 70s, at least as far as I have seen personally: AI, CASE, RAD, no-code, etc. This is the first time I feel it might actually happen.
On HN the level of engineering is very high, and many people here note that the code ChatGPT provides is often subtly wrong, and, as this article states, that it cannot generate code for many problems, or that it is hard to describe the issue in natural language at all. The thing is, most programmers I see have all the same issues; they just cost money and cannot work as fast or as tirelessly.
As an experiment, if you need one: pick a random cheap coder from Upwork/Fiverr (I have seen more expensive ones do exactly the same; the problem is that a pile of 5-star reviews pushes the rate up but doesn't necessarily mean good or competent code) and give them a simple project, something CRUD in Laravel or Node, something ChatGPT will produce with no problem. Then check the screenshots (Upwork 'spies' on devs to see if they are working) of them trying to 'solve the issues': most of it will be Google/SO searches and attempts to fix pasted code until it works.
Of course I cannot prove that this is most programmers; however, in my experience of 20+ years hiring devs from Upwork (which was Elance, and something else, before it was called Upwork), it is. If they are smart, they will now use ChatGPT; but so can you, saving time and money.
Some of the big money-making entrepreneurial devs (usually marketers) I know don't understand code at all (their code is always very weird to read because they don't understand basic concepts), but they are still coding and making good money providing solutions for their clients. ChatGPT is a dream come true for them: it speeds up their work 1000x while everything else stays the same.
> As an experiment, if you need one: pick a random cheap coder from Upwork/Fiverr (I have seen more expensive ones do exactly the same; the problem is that a pile of 5-star reviews pushes the rate up but doesn't necessarily mean good or competent code) and give them a simple project, something CRUD in Laravel or Node, something ChatGPT will produce with no problem. Then check the screenshots (Upwork 'spies' on devs to see if they are working) of them trying to 'solve the issues': most of it will be Google/SO searches and attempts to fix pasted code until it works.
This probably would only work if you had a sample of work from said devs from before the availability of GPT.
If you hire someone to do a job ChatGPT can do now... they're going to use it.
I am not so sure; I just hit up a bunch of my ex-colleagues one by one, asking about ChatGPT, and they have zero idea what I'm talking about. They (programmers) mostly said AI is for content. So no, they are not going to use it, because they have no clue it exists. I've told them now, but this lack of awareness will be the norm outside the Twitter/HN bubble, really.
Also, when it starts costing money (which @sama said it will), they won't use it, just as they are not using Copilot because it's $10/mo.
This is spot on. I'm very glad to see this post mention the "No Silver Bullet" essay. Despite being written in 1986, it continues to provide a relevant, skeptical view on the impact of these LLMs on software. It even has a section addressing whether AI can provide the "silver bullet", which is drawn upon by the OP.
My immediate response to all the rightly deserved excitement for ChatGPT is to reread that essay and reflect on whether this new tool will provide me more than marginal gains in my productivity. Experimenting with it a bit over the last day, my initial answer is no: this doesn't provide more than marginal gains outside toy examples.
These tools may very well factor into my workflow in the future, but I don't see them fundamentally changing the way I construct, support, debug, and maintain software.
I would find this a lot more convincing if they had put a customer with an underspecified problem in front of ChatGPT and seen how well or poorly it could nail down the true requirements. I would not be shocked at all to learn that ChatGPT is not good at that task, but I have been surprised by its other capabilities, so I'm not willing to write it off without giving it a shot.
Hey, there! OP here. This would be a neat experiment to try.
One related experiment, however, already suggests the result. Someone tried to get ChatGPT to solve Advent of Code challenges: https://github.com/golergka/advent-of-code-2022-with-chat-gp.... These challenges are very clearly specified, and it already struggles to get the answers. On the second day, it took 12 tries to get it right. If it struggles this much with clear requirements, I don't think it'll do well with vague ones.
It's not perfect; it doesn't keep track of what the user actually saw, for example. It just stores a timestamp for each opened thread and highlights the comments that are newer than that timestamp.
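In case anyone wants to build something similar, here's a minimal sketch of that approach in Python (the names and in-memory dict are my own hypotheticals; a real site would presumably persist this server-side):

    import time

    # Maps thread id -> timestamp of the user's previous visit.
    # Hypothetical in-memory stand-in for real persistent storage.
    last_visit = {}

    def new_comments(thread_id, comments):
        """Return comments posted since the last visit to this thread.

        `comments` is a list of (comment_id, posted_at) pairs, where
        `posted_at` is a Unix timestamp.
        """
        cutoff = last_visit.get(thread_id, 0)
        fresh = [c for c in comments if c[1] > cutoff]
        # Record this visit so the next view only highlights newer comments.
        last_visit[thread_id] = time.time()
        return fresh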
I wouldn't have thought to solve it that way, but I definitely think its successor will be capable of coming up with "good-enough" solutions for a lot of problems humans currently work on.
Actually, humans are often just coming up with "good-enough" solutions in the first place.
holy shit. ChatGPT blows my mind. Thanks for coming up with this! It shows what is possible (and can be a great additional learning tool for novice programmers as well!).
Regarding new comments: you can actually ask dang by mail for this exact feature. It already exists but is still in beta or something. I have it and adore it. It works like this: when you go to a comment thread you've visited before, it shows all new comments with a vertical red line to the left side of the comment. Great feature; I've been using it for years already. It could be rolled out, actually... :D
I have briefly played with ChatGPT. I don't understand how you refine a program specification. I tell ChatGPT what I want in a few words. It prints out a couple hundred lines of code that kind of work, but not quite how I want. So I delete the code and rephrase with more detail. This time I exceed the 4k-character input limit. I repeat the above, trying to cram more detail into 4k characters. I often have better luck being more general and hoping for the best.
My example case has been managing a list of items where each item has some properties. I'm after a simple CRUD backed by XML.
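For what it's worth, the hand-written version of that kind of XML-backed CRUD is fairly small, which is a useful baseline when judging ChatGPT's output. A rough Python sketch using the standard library (file name and item schema are made up for illustration):

    import os
    import xml.etree.ElementTree as ET

    DB_FILE = "items.xml"  # hypothetical storage file

    def _load():
        # Start with an empty <items> root if the file doesn't exist yet.
        if not os.path.exists(DB_FILE):
            return ET.ElementTree(ET.Element("items"))
        return ET.parse(DB_FILE)

    def _save(tree):
        tree.write(DB_FILE, encoding="utf-8")

    def create_item(item_id, **properties):
        tree = _load()
        item = ET.SubElement(tree.getroot(), "item", id=item_id)
        for name, value in properties.items():
            ET.SubElement(item, name).text = str(value)
        _save(tree)

    def read_item(item_id):
        item = _load().getroot().find(f"item[@id='{item_id}']")
        if item is None:
            return None
        return {child.tag: child.text for child in item}

    def update_item(item_id, **properties):
        tree = _load()
        item = tree.getroot().find(f"item[@id='{item_id}']")
        if item is None:
            return
        for name, value in properties.items():
            child = item.find(name)
            if child is None:
                child = ET.SubElement(item, name)
            child.text = str(value)
        _save(tree)

    def delete_item(item_id):
        tree = _load()
        root = tree.getroot()
        item = root.find(f"item[@id='{item_id}']")
        if item is not None:
            root.remove(item)
            _save(tree)

So create_item("42", name="widget", qty="3") followed by read_item("42") round-trips the properties through the XML file.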
It took me a few attempts, but I have gotten this to work fairly well. I started with explaining my ideal app like this:
We're going to design a (language) application that performs the following tasks, delineated by semi-colons:
Then I listed about a dozen high-level user stories, separated by semi-colons, and ended the list with a period. Then I clarified my instructions like this:
We will design each step together, where you ask me for any specifications needed to complete the step and I respond with those specifications. When you feel we have completed designing a step, ask me if I have any questions before proceeding to the next step.
It started with a high-level recap of my user stories, asked if I was ready to proceed, and then began to describe the first step. Over about two hours of back-and-forth, I ended up with a 20-page document of high-level architecture interspersed with code samples, db schema samples, and answers to questions that popped up during the "discussion." And I got about halfway to a working prototype during those two hours.
Considering you can write a whole implementation plan, produce mockups, code the thing, and go through user acceptance, only to have the client say at the end, "but this really isn't how things work in reality, we need to change fundamental assumptions," I don't think ChatGPT has a chance.
Perhaps this writer has confused ChatGPT with GitHub Copilot.
The purpose of ChatGPT (as stated by the devs) is to help improve the ability of AI agents to participate in conversational dialog. In other words, they're trying to make a software agent that can take chat calls for support and stuff like that.
Copilot, on the other hand, is (sometimes) tasked with writing code from a concept or comment and -- even more often -- used to "autocomplete" some line of code you are already typing, or to suggest the next line of code that's most likely to appear in a sequence.
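To illustrate the kind of completion meant here (a made-up example, not actual Copilot output): type the comment on the first line, and the model will typically suggest the rest.

    # return the n-th Fibonacci number iteratively
    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a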
NEITHER program, however, is capable of solving any new problems or creating any information -- such as information about how to write software that does something which has not been done before.
Both ChatGPT and Copilot (and all other LLM software) are only capable of spitting out the next most likely word based on what you have typed and what they have analyzed in the past. Therefore, neither program will be able to offer us any new ideas or solve any new problems. Neither program can create software architecture or perform debugging.
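That "next most likely word" loop is simple enough to sketch with a toy hard-coded probability table (purely to illustrate the mechanism; in a real LLM a neural network scoring billions of contexts stands in for the table):

    # Toy next-word predictor: a hard-coded table stands in for the
    # neural network a real LLM uses to score candidate continuations.
    NEXT_WORD_PROBS = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "sat": {"down": 1.0},
    }

    def generate(prompt, max_words=10):
        words = prompt.split()
        for _ in range(max_words):
            candidates = NEXT_WORD_PROBS.get(words[-1])
            if not candidates:
                break
            # Greedy decoding: always pick the most likely next word.
            words.append(max(candidates, key=candidates.get))
        return " ".join(words)

    print(generate("the"))  # -> "the cat sat down"

Everything interesting lives in the table; the loop itself adds nothing new, which is exactly the point above.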
Having said that, many companies will pay for ChatGPT because it's a nicer and faster chat agent than the ones we have now -- even better than many of the human ones for support-type tasks. And Copilot is a fantastic timesaver that I gladly pay for because it often does suggest the next correct line of code based on my own typing.
> But neither ChatGPT nor some larger descendent model will ever be able to write the most difficult pieces of our software given natural language descriptions of desired functionality.
A strong assertion that I think will be proven wrong in the next decade, barring any catastrophes. Maybe not a direct descendant model, but likely one that has quite a bit in common.
Would you want to drive a car where some ChatGPT put together the software for your engine, for your brakes, for the steering wheel, etc.? There are chips out there printed on flexible plastics, and they are getting smaller fast, which makes it easy to put them almost everywhere. What kind of software would you like to see running on these? What software do you expect to run in all kinds of infrastructure around us?
I've found that ChatGPT works best if you break the problem up into as many sub-problems as you can find. I've seen many colleagues try it out, and they give it way too large a piece to chew on.
I understand and agree with the point, but being able to identify the ambiguity of a natural language specification and ask clarifying questions seems well-suited to a chat-style model of AI interaction.
Kind of the Devil's question, but what would happen if one were to use a model of the size of ChatGPT, keep it on all the time, and have it connect to the Internet and try to make money online?
Hand-crafting new code could become a hobby activity. Most code will simply be hand-finished: take the crudely made starter code and smooth it out. Because why wait for an imperfect human to do it when a GPT might provide equally imperfect output?
A never-ending code review where the revisions become the responsibility of the reviewer. That won't burn anyone out.