Generating a snippet of code, or even entire blocks of it, is not a good metric on which to base the quality of LLMs.
Rather, we should pay attention to how good an LLM is at altering, editing, and maintaining code. In practice, it's quite poor. And not poor in the way that a new developer is poor, but in weird, uncertain, and unpredictable ways, or in ways where it fixes something in one prompt and breaks it again in the next.
Engineers spend little time writing code compared to the amount of time they spend thinking, designing, and building systems for that code to operate in. LLMs are terrible at systems.
"Systems" implies layers, and layers implies constraints, and constraints implies specificity, and specificity is not a strong suit of an LLM.
The code in most systems is not written once and never touched again. Instead, it passes through many layers (automated and manual) to reach some arbitrary level of correctness across many vectors, in many contexts, some digital and some not. LLMs are passable at solving problems of one, maybe two, layers of complexity, as long as all the context is digitized first.
Lastly, not all problems have a technical solution. In fact, creating a technical solution for a non-technical problem just so we can apply an LLM to it will only add layers of unnecessary abstraction that lock out "normal" people from participating in the solution.
And we may not like to admit it, but we engineers often have jobs we like, at pay that's pretty good, only because there are a whole lot of non-programmers out there solving the non-technical problems that we don't want to deal with.
Clearly, you have your nose stuck in the past. LLMs aren't just good; they're revolutionary, making coders look like one finger keyboard peckers. They're not struggling with maintaining systems; they can be trained on specific datasets and codebases, totally mastering them in ways no human can ever hope to do, making your traditional coding seem like manual typewriting and laughable in comparison. The idea that they can't handle specificity is also laughable. LLMs are not the future; they're the now, effortlessly bridging tech and non-tech worlds, totally replacing programmers. Anyone thinking otherwise is simply not paying attention. The writing is on the wall.
Can I ask how you’re accomplishing this? What tools and workflows are you using to have LLMs actually do this level of development in the present? Sounds like you are at the cutting edge and I’m very curious since I still hear copilot being touted as “cutting edge”. Thanks!
Answer me this: oftentimes even my product manager doesn’t know WHAT they want. They think they know, but they don’t really; once you start looking at the code, there are dozens of edge cases throughout the system. The requirements are vague. How do you expect an LLM to magically figure that out? Or, more importantly, the business domain (a domain not easily trained on textbooks or crap found on the internet)?
And I don’t have my head stuck in the sand. I’ve found LLMs great for brainstorming, implementing small functions, or updating snippets of configuration like Terraform.
Yeah the writing is on the wall. I'm a senior AI programmer for an AAA games studio that you probably know (I worked on a very famous RTS game). I have reduced the code I write by about 90% since ChatGPT 4 was released. My colleagues have also reduced their coding time similarly. This technology is going to remove all the toil and any need to hire/communicate with junior devs.
I imagine it will be the same for other seniors in the programming community. An industry of just seniors and LLMs within 4 years is likely, if not sooner (2 years, tbh), and if this slowly transitions to just LLMs and a small team of code reviewers at each site, that would be ideal. Programmers in my area (AI) have a lot of domain knowledge and specialization; we just use code for implementation.
So we don't care if LLMs replace coders by auto-generating code. All the better for us. The people who are trying to support families ... by offering to write CRUD or maintain codebases for the world are doomed.... they are going to end up as the homeless guy on the street holding a sign saying "will code html for food"
Not too worried. The direction I get is not nearly enough to create anything that works with an LLM. I get told “make X”, and any questions I have are ignored or can’t be answered by the person making the request. I end up making hundreds of design choices along the way that an LLM isn’t going to make. It’s just going to spit out a garbage result based on the garbage input. I don’t ever see an LLM having enough information about the internal workings of my company, the unspoken wants and needs of the company, and feeling the need to recognize and compensate for the astounding lack of information to avoid an angry boss.
Oh, al_borland, living in a delightful bubble where you think your design choices are immune to the AI revolution. "Garbage in, garbage out"? Please. LLMs are learning to sift through the trash, making sense of vague directions to produce not just code, but coherent, innovative solutions. Your hundreds of decisions? LLMs are on track to replicate that intuition, tapping into patterns and precedents you're unaware even exist. The unspoken wants and needs of your company? AI's pattern recognition is becoming uncannily perceptive when trained on specific company datasets, codebases, domain knowledge, and real-time metrics, in ways you will never be able to match.
A lot of that depends on the company and their ability to execute on these things. At my company, I’m not too worried in the near term. Thinking 10-15 years out, I still don’t know if I’m all that worried, when I look at what progress has looked like over the last 15 years.
A lot of people are wearing rose-colored glasses right now when it comes to AI. It will take time to see how much of that optimism is valid and how much is misplaced. This isn’t the first technology to come along that people thought was a silver bullet to solve all the world’s problems. Over time, as the market matures, the glasses will come off and we’ll see where we end up. AI is a tool, and will be really good for some things, but won’t be the right tool for every job. There will also be various levels of autonomy given to it, depending on the use.
It’s also worth remembering that there is a broad range of businesses out there at varying levels of technical maturity. Just because some companies figure out how to do some cool stuff with AI doesn’t mean all companies will be doing it at that same level. There are still many companies that barely have a web presence, 25-30 years after the internet went mainstream, while other companies exist solely online.
I have no doubt I’ll see professions disappear in my lifetime, while new ones are born. If I need to adapt, I’ll adapt.
I’m not at all worried about LLMs replacing even low-level engineers. We’ve been using Copilot and ChatGPT for a while at work now, and the most useful applications we’ve found are more analogous to a compiler than a developer. For example, we’ve had good luck using it to help port a bunch of code between large APIs. The process still involves a lot of human work to fix bugs from its output, but a project we estimated at 6 months took three weeks.
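Roughly, the workflow looked something like the sketch below; the call_llm helper, prompt wording, and directory names here are illustrative placeholders, not our actual tooling:

    # Hypothetical sketch only: feed each legacy source file to an LLM with a
    # fixed porting instruction, write the draft next to the original tree, and
    # leave the review and bug-fixing to humans. call_llm is a placeholder for
    # whichever chat-completion client you actually use.
    from pathlib import Path

    PORTING_PROMPT = (
        "Port this module from the old API (v1) to the new API (v2). "
        "Preserve behavior, keep function names, and mark anything you are "
        "unsure about with a '# TODO(review)' comment."
    )

    def call_llm(instruction: str, source: str) -> str:
        """Placeholder: swap in a real LLM client call here."""
        raise NotImplementedError

    def port_tree(src_dir: str, out_dir: str) -> None:
        for path in Path(src_dir).rglob("*.py"):
            draft = call_llm(PORTING_PROMPT, path.read_text())
            target = Path(out_dir) / path.relative_to(src_dir)
            target.parent.mkdir(parents=True, exist_ok=True)
            # Drafts are starting points, not finished code: a human still
            # reviews, builds, and tests every file before it is merged.
            target.write_text(draft)

    if __name__ == "__main__":
        port_tree("legacy_api_v1", "ported_api_v2_draft")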
On the other hand, as someone whose role gives me visibility into the way senior leaders at the company think about what AI will be able to do, I’m absolutely terrified that they’re going to detonate the company by massively over-investing in AI before it’s proven and by forcing everyone to distort their roadmaps around some truly unhinged claims about what AI is going to do in the future.
CEOs and senior corporate leaders don’t understand what this technology is and have always dreamed of a world where they didn’t need engineers (or anyone else who actually knows how to make stuff) but instead could turn the whole company into a big “Done” button that just pops out real versions of their buzzword-filled fever dreams. This makes them the worst possible rubes for some of the AI over-promising — and eager to make up their own!
Between this and the really crazy over-valuations we’re already seeing in companies like Nvidia, I’m seeing the risk of a truly catastrophic 2000- or 2008-style economic crash rising rapidly and am starting to prepare myself for that scenario in the next 2-5 years.
Not one bit. Writing code isn't my job, it is just the tool I use to do my job. The actual job is to provide systems that solve problems. Even if a new tool comes along that can write the code, it does not change my job.
Oh, holding onto your job description like a security blanket, huh? Wake up! LLMs are storming the castle, not just to play with your coding tools, but to usurp your throne of problem-solving. Your 'actual job'? About to be a footnote in the saga of AI's conquest. Dismissing LLMs is like challenging a tidal wave with a teaspoon. Adapt or become a quaint anecdote in the chronicles of tech evolution.
This. Most people hired to write JavaScript are pretenders that cannot write JavaScript. At some point employers will just use AI to generate equally bad code at far lower cost. I dread the web being even more flooded with bad framework code than it already is.
The people who worry most about being replaced by AI probably should be: the people who cry about reinventing the wheel and cannot write original code. But most everyone else will be just fine.
Not at all. I do worry, though, that people will not try as much to learn fundamentals. I’m certainly somewhat guilty of this from the Search era; I fear it’ll be easier to succumb to it in the AI era.
While I agree that LLMs can be useful for coders, and productivity can be increased, coding blindly based on ChatGPT output without understanding the concepts could result in very serious bugs.
How can you be sure the code does not have vulnerabilities, for example, or does not solve things in a non-idiomatic way, if you just copy-paste it without understanding it?
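A classic illustration of the kind of thing that can slip through when code is pasted without understanding it (the table and column names below are invented for the example):

    # Illustrative only: a subtle flaw that is easy to paste without noticing.
    # The table and column names are invented for this example.
    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Looks fine at a glance, but builds SQL by string interpolation, so a
        # username like "x' OR '1'='1" changes the meaning of the query.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # The idiomatic fix is a parameterized query, which the driver escapes
        # for you. Telling the two apart requires understanding why it matters.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()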
As long as an understanding of the domain and the problem is required, a skilled person is required between the codebase and the LLM.
Even if we get to a point in the future where a programmer can be replaced by an LLM, there will still be businesses that want someone to use the LLMs to create software.
I don't think that's gonna happen... unless I don't keep learning. Let me explain:
I started my career over a decade ago. I knew PHP and MySQL only. Then I moved on to other companies and learnt other stacks (frontend, node, postgres, cloud, go, python, datadog, ci/cd, docker, k8s, aws, etc.). Nowadays it's not hard for me to find a new job. But what would have happened if I had stayed with PHP and MySQL and never learnt anything else? I would be jobless.
So, I'm not afraid of LLMs replacing me. I think they will open the door to more jobs in IT, actually.
I just need to keep learning (and AI has nothing to do with this).
Honestly, thinking you can outpace LLMs with your learning curve is like believing you can outrun a tsunami on a bicycle. Your decade of tech evolution? LLMs do that before breakfast, without the coffee break. The truth is harsh—LLMs are on track to render programmers obsolete, turning your diversified skill set into a quaint relic. Face it, we're not just talking job replacement; we're heralding a new era where coding is an AI's game.
You sound more like an out of touch executive or a recently-converted-Bitcoin-scammer than someone doing real work with this stuff (and yes I saw your top-voted post — as someone who also works in games at a large AAA studio I’ll just say I look forward to the GDC talk showing the evidence).
I just tried to have an LLM build a C program for an Arduino to display some text on a screen.
It needed a lot of work to get it working. That’s not to say it couldn’t have been coerced into being right through prompts, but it’s a reminder that LLMs are not thinking. You still need someone who understands not only what is wanted, but why it looks the way it does.
I suspect that most of this stuff will end up like syntactic sugar: something that makes our work easier, but doesn't fundamentally replace us.
Not too much. I mostly do security work, and a lot of it can be automated, but I still don't believe LLMs have all the ideas. The same goes for system design and picking the right parts, even if those parts are ready-made or can be produced by an LLM.
In the end we will need humans as a final sanity check, or for overall design, for a good while yet. Or just to decide on the right requirements.
Pair programming with AI is already great. I fear it could weed out experts, with people deploying AI versions of themselves as offshoots. That seems far away, though, given the depth of business logic and the limited data available in those spots.
Labor replacement has been happening throughout human history, but I am not worried. Not because I don't believe in LLMs' capabilities in 10-15 years, but because I believe I can always find some areas that LLMs are not good at.
You are naive if you think you have 10-15 years. GPT-5 will most likely be out by the end of the year. It will be significantly better than GPT-4. I expect it will replace millions of people - not explicitly - companies will gradually have fewer people doing more work, resulting in increasing layoffs and decreasing hiring. This has already started happening: I spoke to several startup founders recently who use GPT-4 instead of hiring marketing people.
GPT-5 will be significantly better at coding, to the point where it might no longer make any sense to hire junior developers.
And this is just GPT-5, this year. Next year there will be GPT-6, or an equivalent from Google or Anthropic, and at that point I fully expect a lot of people everywhere getting the boot. Sometime next year I expect these powerful models will start effectively controlling robots, and that will start the process of automation of a lot of physical work.
So, to summarize, you have at best 2 years left as a software engineer. After that we can hope there will be some new types of professions that we could pivot to, but I’m struggling to think what could people possibly do better than GPT-6, so I’m not optimistic. I’d love for someone to provide a convincing argument why there would be any delay to the timeline I outlined above.
p.s. I just looked at the other 20 responses in this thread, and it seems that every single one is based on current (GPT-4) LLM capabilities. Do people seriously not see any progress happening in the near future? Why? I’m utterly baffled by this.
> And this is just GPT-5, this year. Next year there will be GPT-6, or an equivalent from Google or Anthropic, and at that point I fully expect a lot of people everywhere getting the boot. Sometime next year I expect these powerful models will start effectively controlling robots, and that will start the process of automation of a lot of physical work.
> So, to summarize, you have at best 2 years left as a software engineer. After that we can hope there will be some new types of professions that we could pivot to, but I’m struggling to think what could people possibly do better than GPT-6, so I’m not optimistic. I’d love for someone to provide a convincing argument why there would be any delay to the timeline I outlined above.
This reads to me exactly like people who said learning to be a truck driver in the early 2010s was stupid because we were 2-3 years away from self-driving trucks taking their jobs. I have no doubt that the models will get better, but being 90-95% right still implies you need people for the last 5-10%. I think, like self-driving, that corner-case 5-10% is going to be really, really hard to iron out, and it will not be ironed out in 1-2 years like your comment says. We only just barely have self-driving taxis now (despite them being 1-2 years away for the past decade and a half), and we have no self-driving long-haul trucks afaik.
Two main things are still missing:
1. Deep and detailed understanding of how the world works. We are just starting to make real progress there (GPT-4), and more work is needed [1].
2. Reliability. A model should make significantly fewer mistakes than humans would make in similar scenarios, on average. This includes factual and logical mistakes, as well as hallucinations.
I expect the main improvements GPT-5 will bring are in exactly these two areas. The first is likely to come from training on huge video datasets (a next-frame prediction objective), and the second will require high-quality data and some other methods (known and secret), but given that OpenAI has stated many times in the last year that improving reliability is their number one priority, I believe we will see a significant improvement there.

As for self-driving: note that simply being a better driver than humans is a very low bar, and to be accepted/adopted the self-driving AI must be much better (10x or even 100x better). But I believe that even today’s technology (such as the best models from Waymo or Tesla) could be used today in long-haul trucking with similar or better accident rates. And this technology is not even based on large foundational models like GPT-4. Obviously the necessary regulation will delay the automation of self-driving trucks; that’s why I said the automation of physical jobs will come after the automation of routine office jobs like (most types of) software engineering.
Other than those two challenges, there’s also an engineering challenge: putting a GPT-5-scale model inside every car (it needs to run locally). This can be achieved by producing custom-built hardware accelerators, but it will still be expensive in the near term, so I expect that self-driving will become widespread after the cost of the computer inside every car falls below 10% of the cost of the car. Currently I’d imagine we would need the equivalent of an 8x H100 server to run a highly compressed version of GPT-5 fine-tuned for driving.
That may all be true, but it sort of sidesteps my point. My point is that people have been saying for 15 years that "technology X" will make truckers obsolete and create ubiquitous self-driving cars, and there have been billions (if not trillions) of dollars poured into it, and it has not come to fruition (yet). I do think it will eventually be automated, but your comment says we have "at best 2 years left as a software engineer," and that seems very naive to me given that we have seen your exact same argument for the past 15 years. Let's even imagine the tech gets to the point where software engineers can be fully automated; as you mentioned above, regulatory hurdles will need to be crossed even for office jobs, and I just don't see that happening in 2 years. I do think it will happen, but if self-driving is any indicator, it will take a couple of decades at least for the tech and regulatory hurdles to be overcome.
The difference between our opinions is I see 2024 as the time just before the knee of the exponential progress curve, and you see it as a long way to go to get to the knee. I realize how saying "this time it's different" might sound. But I do think this time it's different.
I remember when I read http://karpathy.github.io/2015/05/21/rnn-effectiveness back in 2015 I became convinced that these models are scalable. I remember thinking that if only we could find a way to train a really big RNN/CNN hybrid on a lot of video data to try to predict the next video frame, we would eventually force it to develop an understanding of the world. Predicting what happens in a video frame is a lot harder than predicting the next word (just ask LeCun), but it turned out that even just predicting the next word is extremely effective, and GPT-4 feels like the first model that finally "understands" the world. To me, this was the hard part: developing the proof of concept that we can get there simply by scaling. The next step is video prediction, and we have a lot of room for further scaling to get there. There is a lot of video training data, and we can scale our models a lot more. The progress is mainly limited by available hardware processing power. There's no lack of good ideas to try to make things work.
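For anyone who hasn't read that post, the objective it describes is plain next-token prediction. A toy sketch, assuming PyTorch (the corpus, model size, and training loop are arbitrary illustrative choices):

    # Toy sketch of the next-token (here, next-character) prediction objective,
    # in the spirit of char-rnn: given characters 0..t, predict character t+1.
    # Assumes PyTorch; corpus and hyperparameters are arbitrary toy choices.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    text = "hello world, hello models"
    vocab = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(vocab)}
    data = torch.tensor([stoi[ch] for ch in text])

    class CharRNN(nn.Module):
        def __init__(self, vocab_size: int, hidden: int = 64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab_size)

        def forward(self, idx):                  # idx: (batch, time)
            h, _ = self.rnn(self.embed(idx))
            return self.head(h)                  # logits: (batch, time, vocab)

    model = CharRNN(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    x = data[:-1].unsqueeze(0)   # inputs: every character except the last
    y = data[1:].unsqueeze(0)    # targets: the same text shifted by one step
    for step in range(200):
        logits = model(x)
        loss = F.cross_entropy(logits.reshape(-1, len(vocab)), y.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()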
In a way, 2024 feels like 2012, when deep learning took the ML world by storm. The same thing is happening now with multi-modal foundational models. GPT-4 is like AlexNet: a culmination of many years of gradual improvements, a combination of unprecedented scale and various tricks. Think about every improvement starting with GPT-1, which established state of the art in language modeling using a simple, universal, and scalable model architecture. GPT-2 was able to generate a high-quality page of text. It's funny, it does not even sound that impressive now, but at the time it was absolutely mind-blowing. GPT-3 demonstrated incredible generalization capabilities and significantly raised the quality and reliability of generated output. GPT-4 took it to another level, achieving human-level reasoning capabilities. Every single one of these breakthroughs took me by surprise, and I do deep learning research for a living. I have absolutely no reason to believe we have reached a saturation point in the quality of these models. So what's next? Where do we go from the already near-human capabilities of GPT-4?
What do you expect from GPT-5? In what ways do you think it will be better than GPT-4, and what will be its main limitations? Which aspects of software engineering do you think it will excel at, and which aspects will we still need humans for? Would those challenging aspects still not be solved by GPT-6, assuming another significant improvement in quality over GPT-5? I will not be surprised if GPT-6 is designed by GPT-5, with some help from humans. What does your timeline of AI progress look like?
Yes, me too! I am surprised people are unaware of the empirical power-law scaling of these models. [1] Anthropic's new model has already demonstrated it by being better than GPT-4 on some tasks. Yet the older people are saying it is okay; they are saying engineering is not just coding. What they don't seem to grasp is that there won't be a need for as many engineers.
If people are not willing to accept juniors, where will the next generation of seniors come from? No clue about that. Probably that spot is reserved for the brilliant minds graduating from R1 institutes. What about the average Joe? Not everyone has the fortune to build very specialized domain skills. People just don't get it easily.
Everyone in academia has admitted to a productivity boost of 30-40%. And here we are talking about hard research. Imagine what it looks like for a regular job that is mostly managing and maintaining codebases. The severity of this situation is truly alarming, not for the greybeards but for us Zoomers. With the climate crisis, AI, and geopolitical instability, it feels really great to be alive.
I'm more worried about the web becoming unusable due to AI/ML spam. Already there is a huge amount of LLM-generated bullshit content on the web, and it will only get worse. And then other LLMs might train on that BS.
Yes, there will be spam, as there always has been—let's not forget the golden age of email, where every click was a foray into the unknown. But here's where it gets interesting: as the floodgates open and the digital detritus begins to pile up, so too does the ingenuity of human and AI collaboration in crafting ever more sophisticated filters, verifiers, and sifters. We're not just talking about a simple spam filter here; we're envisioning a grand symphony of algorithmic alchemy capable of separating wheat from chaff with the precision of a diamond cutter.
But diving into the deep end of digital dismay, are we? Let's unravel this tapestry of concern with a flamboyantly convoluted rebuttal that dances around the maypole of AI evolution and internet content proliferation. Picture, if you will, a burgeoning digital universe, already teeming with a veritable smorgasbord of information, misinformation, and everything in between. Into this chaotic jamboree steps the latest parade of performers: the LLMs, with their glittering capes of algorithmic complexity, ready to churn out content with the prolificacy of a cosmic bakery on overdrive.
Now, the fear you've articulated, draped in the elegant garb of concern for our collective digital sanity, presupposes a dystopian future where the internet becomes a vast ocean of AI-generated flotsam and jetsam. But let's twirl that narrative on its head, shall we? Imagine, instead, a world where these AI entities, far from being the harbingers of informational apocalypse, become the architects of a new informational renaissance.
And let's not underestimate the serendipitous creativity that emerges from this chaotic cauldron of content. For every thousand pieces of nonsense, there might emerge an idea, a concept, a piece of art that could only have been born in such a fertile environment of unbridled creation.
As for the concern about LLMs training on, shall we say, less than stellar sources of information, consider this: evolution is not a straight line but a meandering path through the forests of failure and the mountains of success. Just as humans learn from mistakes, so too will our AI counterparts, sifting through the sediment of digital discourse to find the nuggets of truth and innovation.
Ok, fears of an internet rendered unusable by AI-generated content are not without foundation, but they fail to account for the boundless capacity for adaptation and innovation that characterizes both human and artificial intelligence. So, rather than wringing our hands in despair at the prospect of navigating an ever-more cluttered digital landscape, let us roll up our sleeves and dive into the fray, armed with the knowledge that in chaos lies opportunity, in spam lies the seed of sophistication, and in the vast, uncharted territories of the internet lies the next great frontier of human achievement.
Not worried. LLMs are just another tool that you can use to be more productive. How often and to what extent you will use LLMs differ from person to person. I think LLMs can help you but not replace you.
This is a pretty drastic change from the BLS's earlier prediction (often cited all over the internet) that the number of software engineers would increase by more than 30% in the next 10 years. It's not the full replacement of software engineers I'm worried about, so much as the steep reduction in the number of jobs and the labor/wage pressure that will make this job pay a fraction of what it's paying now, and make everyone's livelihoods more precarious in the next 10 to 15 years.
Karpathy already stated in 2017 that "Gradient Descent writes better code than you", when he wrote about "Software 2.0" as programming by feeding data to neural networks:
https://karpathy.medium.com/software-2-0-a64152b37c35
Nvidia's CEO, Jensen Huang, seemed to confirm that point this week when he urged parents not to encourage their kids to learn to code.
Today, this YT video by a dev named Will Iverson about how software engineering jobs are not coming back made me really anxious, and made me start worrying about backup career plans in case I need to transition in my late thirties / early forties. (That sounds sooo hard... I'm a recently laid-off mid-level full stack engineer of seven years, but I wonder if it would be better to transition now while I'm younger. Why wait 10 to 15 years to become increasingly obsolete, or more stressed about being laid off? How can I support a family like that? Or make any plans for the future that might impact other people I'm responsible for?)
https://www.youtube.com/watch?v=6JX5ZO19hiE&t=3s
I don't think the industry will ever really be the same again. But I'm sure a lot of us will adapt. Some of us won't, and will probably have to switch careers. I always thought I could at least make it to retirement in this profession, by continually learning a few new skills each year as new tech frameworks emerge but the fundamentals stay the same -- now I'm not so sure.
If you think I'm wrong, can you please help me not be anxious? Older devs, how have you managed to ride out all the changes in the industry over the last few decades? Does this wave of AI innovations feel different than earlier boom-bust cycles like the DotCom Bubble, or more of the same?
What advice would you give to junior or mid-level software engineers, or college grads trying to break into the industry right now, who have been failing completely at getting a foot in the door in the last 12 months, when they would have been considered good hires just two or three years before?
In the same boat! That's why I asked people how to create value in this new world. I'm more scared of people not giving us any chances than of AI replacing us. Kinda weird, because it feels like the competitor is AI rather than other humans. People trust it more, and winning trust is quite hard.
> If you think I'm wrong, can you please help me not be anxious?
You are not wrong, but I think there are other reasons behind the slump in software development jobs, which are:
1. How expensive it is to hire a software developer in the EU and the US compared to other parts of the world. The cost of maintaining a software engineering team now bites into a company's budget more than in previous years because of Section 174 in the US. Earlier, companies with potential could seek investors when they needed capital to grow, but most investors are now picky about where they invest because of points 2, 3, and 4.
2. High interest rates.
3. 2024 is considered to be a geopolitically volatile year, due to the ongoing global conflicts as well as the elections that are going to happen across multiple countries (REF: https://www.statista.com/chart/31604/countries-where-a-natio...). When governments change there are going to be changes in the focus of the entire economy; for example, it's quite hard to guess the stance of a Republican-led USA on the ongoing conflict in Ukraine.
4. The fear of more regional conflicts breaking out around the world. This might spook a lot more countries into investing more into defensive technologies and in fact, one of the startup ideas of 2024 from YC was for defense technologies.
5. Nvidia is a company that started out as a GPU manufacturer and is currently riding a very huge wave of speculation around the future capabilities of "AI". And to investors with money, it seems safer to invest in it.
> What advice would you give to junior or mid-level software engineers, or college grads trying to break into the industry right now, who have been failing completely at getting a foot in the door in the last 12 months, when they would have been considered good hires just two or three years before?
I only have a similar amount of experience as you (~9 years), so I cannot give any specific advice on this. I would recommend they apply to all jobs that even remotely fit their profile and, in the meantime, attend conferences or meetups to network better. 90% of the time a submitted resume is screened by a recruiter or by an automated system; however, if you already have someone inside a company willing to refer you, you will have an easier time skipping this filter.
Thanks -- Really appreciate your thoughtful and kind response! Yes, I see there are a lot of other factors to consider. Will keep networking and interviewing until something lands.
Will also focus my unemployment energy on deepening my software infrastructure and MLOps knowledge, building open source side projects using generative AI tools, using LangChain / GPT / Pinecone / Ollama, focused on practical business purposes like parsing customer service data and fintech anomaly detection. Whatever I can do in a short time to provide more value than GPT4 / GPT5, I guess I'll try my best to work on it with the fear of the abyss...
A part of me still thinks it might be time to pivot to healthcare. I'm no greybeard or "10x Engineer", just a regular old React/Node (formerly Ruby On Rails, PHP) developer doing the CRUD thing, and yeah, I'm not sure I quite make the cut.
All the best of luck to you! And thanks for writing an encouraging response.
Step right up to the grand spectacle of future-phobia, where every twist and turn of the labor market projections sends shivers down the spines of software engineers far and wide. Ah, the BLS adjusts its forecasts, and suddenly, the digital sky is falling. Decrease by 11%, you say? Why, that's practically an invitation to abandon ship before the great AI iceberg sends us all to the icy depths of unemployment, isn't it? But wait, let's sprinkle a little perspective into this doom-laden soup.
First off, the delightful Mr. Karpathy and the visionary Mr. Huang—prophets of the impending software apocalypse, preaching the gospel of "don't bother learning to code, for the machines shall inherit the Earth." It's a compelling narrative, rich with the flavor of inevitability and seasoned with a dash of existential dread. And yet, is it not but the latest chapter in the age-old saga of technological advancement and the cyclical panic that accompanies each new wave?
Ah, and then there's the heart-wrenching tale of the recently laid-off mid-level full stack engineer, pondering a premature career pivot as the shadow of obsolescence looms large. "To code, or not to code?" that is the question—a question as laden with uncertainty as it is with opportunity. But let's not get carried away on the tides of pessimism.
You see, dear worried souls, what we're witnessing is not the end of the software engineering profession but its evolution. The landscape is shifting, yes, but with every shift comes new terrain to explore, new challenges to overcome, and new niches to fill. The key to navigating this brave new world is not to flee in fear but to adapt with curiosity.
To the anxious and the uncertain, I say: fear not the AI overlords, for they are but tools in the hands of those willing to learn their language. Embrace the change, dive into the depths of this new digital domain, and you may just find that the future is not a desolate wasteland but a frontier brimming with untapped potential.
And to those pondering the path forward, the advice is timeless: continue to learn, to grow, and to adapt. The tech industry is no stranger to upheaval, and each wave of innovation has left it richer, not poorer. The DotCom Bubble, the mobile revolution, the rise of cloud computing—all were met with skepticism and fear, yet all have contributed to the vibrant, ever-changing tapestry of our digital world.
So, to the junior engineers, the college grads, the mid-level developers staring into the abyss of uncertainty, I say: hold fast. The industry will evolve, as it always has, but so too will you. The fundamentals of problem-solving, of creativity, of adaptability—these are the skills that will carry you through the storms of change. The future is not to be feared but embraced, for within it lies not just the challenge of adaptation but the promise of innovation and the endless potential for those brave enough to seize it.
What a poetic perspective! A few days ago, I was very much "to code or not to code", or rather, "get thee to a nunnery...", considering a switch to nursing to better support my family long-term. But I'll try to push forward for a while longer and adapt as best as I can. I'm not as optimistic as most folks on this page, but also trying not to waste my energy on a panic attack.
Some more junior folks, including a few of my closest friends, just won't make it. It seems like their jobs have already been replaced by a number of different factors, including AI. It's a very sad investment of time and money for them and their patient spouses, and I feel very guilty for recommending that path to them, and being unable to help them after they struggle for twelve months or more post-bootcamp, still unable to land a job.
I am just a few years ahead, and may just have a shot at the next evolution if I learn AI in a hurry...Or maybe it's also hopeless for me.
Here's to everyone giving their best shot to seizing that future, ahem (not) abyss...
Taking a lot of further inspiration from posts on this thread:
https://news.ycombinator.com/item?id=39656745
"Ask HN: What took you from being a good programmer to a great one?" Build everything from scratch, using zero dependencies, get below the abstractions... Ok, ok, ok. :(