GitHub Copilot: The Agent Awakens (github.blog)
206 points by meetpateltech 5 days ago | 251 comments





> When we introduced GitHub Copilot back in 2021, we had a clear goal: to make developers’ lives easier with an AI pair programmer that helps them write better code. The name reflects our belief that artificial intelligence (AI) isn’t replacing the developer.

Later:

> GitHub Copilot’s new agent mode is capable of iterating on its own code, recognizing errors, and fixing them automatically.

Is this Microsoft/GitHub acknowledging they initially missed the mark, except they aren't really clear in the post that they're abandoning the approach of "AI pair programmer / not replacing the developer"? It seems really strange to reiterate their "clear goal" and then in the next paragraph go directly against it. Are they maybe afraid of a potential backlash if it became clearer that they're now looking to replace (some) developers / the boring parts of development?

I have no opinions either way, but the messaging of the post seems to go into all different directions, which seems strange.


The irony doesn't end there, later in the post they say

"We’re excited to share a first look at our autonomous SWE agent and how we envision these types of agents will fit into the GitHub user experience. When the product we are building under the codename Project Padawan ships later this year, it will allow you to directly assign issues to GitHub Copilot, using any of the GitHub clients, and have it produce fully tested pull requests."

- Effectively, they have completely automated the SWE job; "pair programmer" was just marketing speak, and the real intention is clear.


> it will allow you to directly assign issues to GitHub Copilot, using any of the GitHub clients, and have it produce fully tested pull requests.

I hope I can turn this feature off, i.e. that it's not a feature other users can use on my repositories. I'm already getting AI slop comments suggesting unhelpful fixes on my open source projects, I don't need "the anointed one" sending over slop as well – while replacing the work of real humans, to boot.


I thought there were restrictions on who can assign issues and who they can be assigned to? So I would not expect other users to be able to use it on your repositories (unless an exception is made for Copilot, though I think it would be better if no such exception were made, so that other users can't use this feature on your repositories).

Ah, you're right, that makes sense. When I read "assign issues" I didn't connect it with GitHub's "assign issue/PR to user" feature – I don't use it very much since I'm the only one with write access on my projects. Thanks for pointing that out.

> Effectively, they have completely automated the SWE job; "pair programmer" was just marketing speak, and the real intention is clear.

Frankly, this was obvious to me since the Copilot Workspace announcement.

It's so hard for me not to slide completely into nihilistic despair as a student right now. I chose wrong, and now there is nothing I can do. Day in and day out I'm talking about my projects and internships as if my entire field, the one I've dreamed about for the past decade, isn't about to get torched. With the pace at which this field is getting solved, I probably won't even have enough time to "pivot" to anything before it also gets solved or upturned as well.

Call me a doomsday prepper, but frankly I haven't heard a compelling argument against this line of thinking that is actually in line with the absurd development curve I've seen. 4 years ago these models weren't capable of stringing together a TODO app.

I really, really want to be wrong. I really do.


I've been in the industry long enough to have been around for a few crashes. My outlook is: this industry has always faced threats that looked like it was going to spell the end of our careers, but we always come out the other side better than ever.

I don't think LLMs are fundamentally more threatening than off shore developers were. Sure, we lost jobs during that time, but businesses realized eventually that the productivity was low and they wanted low level people who were responsible.

I think that will continue. We'll all learn to control these agents and review their code, but ultimately, someone needs to be responsible for these agents, reviewing what they produce and fixing any shitshows they produce.

I won't rule out the possibility of LLMs that are so good that they can replicate just about any app in existence in minutes. But there's still value in having workers manage infrastructure, data, etc.


I’ve been developing professionally since 1996. It’s different this time.

The first crash happened in 2000 not because most of the ideas were bad, but because not enough potential customers were on the internet.

Things didn't recover until 2009-2010, when high-speed internet was ubiquitous and more people started carrying computers in their pockets with high-speed connections and the mobile app stores.

In between came the housing crash, when the entire economy was in free fall.

But the Big Tech companies were hiring like crazy post-COVID, and it's clear they don't need that many workers. They definitely aren't going to be doubling their head count over the next 10 years.

On the startup VC funding side, VCs only fund startups as a Ponzi scheme, hoping they can pawn their money-losing startups off on the public market (which has gotten wise to it) or via acquisitions, and regulators are now much more leery of big acquisitions.

There are too many developers chasing too few jobs and with remote work, competition has gotten stiffer.

Just today someone on LinkedIn said they posted a job opening, didn't use Easy Apply, forced people to answer a questionnaire to slow down submissions, and still got over 1,000 applications in 3 hours.

AI is already removing the need to hire junior developers, and slowly it will become good enough to lower the number of "senior" [sic] framework developers doing boilerplate.

Did I mention that, by hook or by crook, the federal government will be hiring fewer people and getting rid of employees, who will be flooding the market? All of those "dark matter developers" who were content with their government jobs are now going to be competing for private-sector jobs.


So what do you even do then? I'm completely at a loss now.

I submitted this article to HN earlier. I’m not the author.

https://news.ycombinator.com/item?id=42818169

Short version is don’t be a “ticket taker”. Move closer to the customer/stakeholder and further away from the IDE. Think in terms of adding business value and focus more on strategy than tactics (pulling well defined stories off the board).

https://www.levels.fyi/blog/swe-level-framework.html

I didn’t just pull “scope”, “impact” and “ambiguity” out of thin air. The leveling guidelines of all the tech companies boil down to this in one way or the other.

This is Dropbox’s for instance.

https://dropbox.github.io/dbx-career-framework/

I’ve been moving closer to the “business” for a decade now after spending almost two decades as your bog standard enterprise dev. I haven’t done much active coding except some simple Python scripts in almost 3 years.

My focus is now strategic cloud consulting focusing on application development. I’m not saying necessarily “learn cloud”. Learning how to deal with “the business” and leading implementations that deliver business value is the objective. The “cloud” just happens to be my tool. I’m slowly adding Big Data/ML/“AI” to my tool belt.


Hmm. I maintain a pretty big open-source project, so I guess I'm already kinda that? I honestly love computing moreso than I love coding. I'm not very familiar with business concepts though.

I really hate to say this, but open source contributions don't matter either. It's only what you do for a company. No one has time to look at an open source repository. Every open job these days has thousands of applications. They aren't going to look at your GitHub repo.

I see now. You mean real business stuff. At that point I may as well do a startup.

So you're taking the wrong lessons from what I'm saying :).

It’s even harder doing a startup straight out of college with no business skills, no network, and no real world experience.


I guess I'm just confused now. I can't do technical since that's too commodified, but I can't do business since I'm a youngster with no real world experience.

My bad. I’m carrying on two threads within the same post. This was my suggestion to another question

https://news.ycombinator.com/item?id=42968258


> My outlook is: this industry has always faced threats that looked like it was going to spell the end of our careers, but we always come out the other side better than ever.

I don't think there has ever been as big a threat to intellectual jobs. If LLMs ever get really good at programming (at the level of a senior), there is zero reason to keep the majority of programmers employed. In addition, it's not likely that it would be like other historical cases of replacing workers with technology, because it most likely won't create new jobs (well, at least not for humans). So if LLMs don't run out of fuel before reaching that level, I'm afraid we are fucked.

> I won't rule out the possibility of LLMs that are so good that they can replicate just about any app in existence in minutes. But there's still value in having workers manage infrastructure, data, etc.

Why would an AI advanced enough to spin up an entire app from scratch have problems managing infrastructure and data?


What do you define as a “senior developer”? Someone who “codez real gud” and can pass “leetCode hard” interviews or the tech industries definition of a senior developer who operates at a certain level of scope, impact and “dealing with ambiguity” and can deliver business value?

The former type of senior developer will become a commodity and see their pay stagnate or even go down as companies find cheaper labor and AI, and as more software development gets replaced with SaaS offerings, especially for enterprise devs.


> a senior developer who operates at a certain level of scope, impact and “dealing with ambiguity” and can deliver business value?

Is there any chance for me (a student) to become like this? I'm fine with coding changing (I just love computing) but I'm scared of the entirety of the field being completely torched.


Please take my advice with a huge grain of salt. It’s been literally decades since I was an entry level developer. I try my best to keep my ear to the ground and look through the eyes of people at all levels of the industry. Part of my job is mentorship as a “staff software architect” at a consulting company.

What would I do these days? I would stay in computer science and if possible get an MBA. I dropped out of graduate school in 2001. But what I learned helped me a lot.

If you can’t go to graduate school, at least take a few business classes. I think the only way to survive will be focusing more on the business than the technology and work for a consulting company.

I don’t mean being a “consultant” who is really just a hands on keyboard coder doing staff augmentation. I mean working for one of the Big 5 consulting firms or one of the smaller equivalents.

The US is definitely moving toward privatization and the first thing they do is bring in more consultants.

I don’t work for any of them. I specialize in strategic cloud consulting. But that market seems congested at the low end.


As far as I've heard, MBAs have also become completely saturated as well. Out of the frying pan into the fire.

I get you're trying to be "consoling", but frankly the bajillion pivot ideas, hopium arguments, endless counterarguments, and other indirection is why I think there's nothing optimal that can be done. All I can do is go through the motions with my current internship and major and rely on Christ rather than this fickle world. I made the wrong choice. Nothing that can be done.


I got nothing then

I agree with you. I just don't know what to do anymore.

Your logic works for seniors, but honestly I'm unsure how it works for anybody that just wants to break in.

Isn’t that what everyone said about outsourcing too?

My view is LLMs will compete with outsourced developers (and contractors/consultants for one-off jobs), where job context and project scope is already subject to a communication gap.

A big role of full-time employees is not just to code, but to interact to varying degrees with PMs/Sales/Customers/the rest of the company stakeholders.

Ultimately someone has to know enough of the technical side of both the product and company to actually _know_ what to prompt/review for.

Sure, if the entire industry becomes agents selling agent-developed products to other agents and providing agent-on-agent support, then… yeah. But that is a shell game.


> A big role of full-time employees is not just to code, but to interact to varying degrees with PMs/Sales/Customers/the rest of the company stakeholders.

That’s true the further you get up in your career. But most of the time, it is:

- junior developers get tasks where both the business problem and technical solution is well defined and they need a lot of handholding and supervision

- mid level developers get tasks where the business problem is mostly well defined and they use their own judgement to create the technical solution. They should be able to lead work streams or epics

- Senior developers are responsible for leading major deliverables with multiple work streams and epics, and for overseeing mid-level developers and juniors. This is usually the first level where you spend the majority of your time dealing with strategy, "ambiguity", and the "business".

- staff - strategy involving many large implementations.

AI can already do a creditable job as a junior level developer and is rapidly moving up to being able to do the work of a mid level developer.

No matter what your title is, if you are just pulling tickets off the board with well defined business cases, you are doing mid level developer work. My definition is what I’ve seen, heard and read about from every major tech company.


Ok, except I guess I would say your definitions of mid-level and junior both fall under what I would consider "junior"; maybe I would call what you define as "junior" an "intern"?

I don’t see how LLMs completely eliminate anyone who is doing anything more than simply pulling well-defined tickets off a board


While I’ve spoken to people at other BigTech companies about their leveling guidelines, the only one that is still in my personal possession after I left is Amazon’s :).

But this is a high level industry summary.

https://www.levels.fyi/blog/swe-level-framework.html


> LLMs will compete with outsourced developers

I guess the question is whether the person you are replying to is potentially living in a country where most of the work is currently being outsourced, as this could significantly impact their career path.

It is interesting that you bring up outsourced work, as I strongly believe that a lot of the bad code generated by AI is the result of not feeding it enough information.

When you outsource work, you are usually forced to document things more thoroughly to work around language barriers and gaps in domain knowledge. Basically, clarity is a requirement, and maybe outsourcing companies will experience the most impact.


I was a junior after companies had already decided to outsource low-level development roles. I also faced the roadblock of lacking a degree, or any college at the time, so internships were not an option.

What I did was learn the skills that companies were hiring for and kept applying until I finally found some tiny company willing to pay me peanuts ($8.50/hr, I'm not joking; I continued to work two jobs that entire year). They got a cheap worker, and I got experience that I leveraged into a better job.

How does that translate to your situation? If you're in college, find internships, it's the easiest way to get your foot in the door. Are you out of college or never went? Time to look at job postings, evaluate the skills they are looking for, and learn those skills.

Yeah, it sucks but that's also a fact of life in this industry. I have to "reskill" every few years too, because not every job I've had segues into another job. In reality, every senior developer decays over time into a junior because the tech landscape changes pretty quickly, and your choices are to mitigate that decay through learning the tech that's being hired for, or become a people manager.

I'd suggest working on your defeatist attitude though. As someone with pretty low self-esteem myself, I get it. Just four hours ago I was calling myself an idiot for making a mistake, but instead of wallowing, I took the time to "prove it" and verify whether I was the root cause of the issue. If I was, I would take those findings and learn from them, but it turns out all I proved was that I was not responsible, and I got to pat myself on the back for building out a system that allowed me to figure this out.

You're going to have to find a way to tell yourself that you're proud of what you've done. Nobody else is going to say it. And rejection sucks. You have to learn to graciously accept rejection, and objectively look at what you've done and compliment yourself. I take the "shit sandwich approach" of finding two good things about what I've built, and one point of improvement. YMMV there, but it definitely helps with the mental health to compliment yourself when you deserve it.


> How does that translate to your situation? If you're in college, find internships, it's the easiest way to get your foot in the door. Are you out of college or never went? Time to look at job postings, evaluate the skills they are looking for, and learn those skills

I’m saying this ironically as a 50 year old (check my username) - “okay boomer”.

That doesn't work anymore. Internships are much harder to get than they used to be.

“Learning in-demand skills” doesn’t work either. Everyone is doing it. Every job opening literally gets thousands of applicants within the first day from people who have the same generic skillset.

When I was looking for your standard C# CRUD enterprise job where they wanted AWS experience, last year and the year before, as a “Plan B”, I applied for literally hundreds of jobs and heard crickets. Not only had I coded and led major projects dealing with AWS; before that, I worked at AWS in the consulting department (full time).

Plan A offers came quickly (within two or three weeks) both times. Full time positions doing strategic consulting (personal outreach) and one or two offers from product companies based on my network. But that doesn’t help when someone is just starting out.

By the way, I also started out in 1996 by getting a return offer to be a computer operator based on an internship. But it ain’t 1996 anymore. It’s a shit show out here right now for people with experience.


Sucks to hear about the internships. I figured they'd still be relevant, as I was mentoring people in an internship pipeline just 5 years ago, but a lot has changed since then. I do wonder how this affects graduation rates, as one of the reasons we had so many interns at my previous job was that the local engineering school required an internship to graduate.

You're right though, shit is fucked. I didn't want to say that and have the person in our conversation thread get even more disheartened - that isn't helpful to them. But I agree with you and my experience job hunting just last year mirrors what you are saying. I've been thinking of what I'd do if I got laid off and well, sounds like it won't be a good time.


I mean, the foundations didn't go away; they just got more profound (advances in programming language design, distributed algorithms, formal methods, etc.). Previously closed-down layers just got open sourced (RISC-V, FPGAs). I estimate that 98% of all engineering effort is always hidden beneath some facade that takes away its spotlight (through politics, committees, marketing, etc.). I'm close to 15 years in, and there are still programming languages, data structures, and protocols I have never heard of.

The world was never as complex as it is today, advancements were never that accelerated, and expectations on scalable software were never this high. Do you really buy the marketing fuzz that the work is "done" just because your software runs on hyperscaler #3 or in a k8s cluster? The amount of available open source projects steadily increases, those can (and should) be used to learn from and contribute something back. Free and open source software is used everywhere and whole businesses are built on some, yet Linux and all those other projects are just increasing in complexity. Sure, everybody wants to be the expert and yet nobody really is. Fact is, unfinished projects are everywhere and there's a lot of work to be done.

LLMs have the chance to make personal computing even more personal and should be treated as valuable assistants to learn with. LLMs won't ever be the desired oracles of some kind (yes, I don't buy that "AGI is near" crap); they'll rather be good personal sparring partners. APIs still break constantly and there are transient errors everywhere. I can imagine some small shops and personalized apps, yet people who aren't into tech won't magically get into it because of some progress in machine learning. If you're in it just for the money, times might get challenging here and there (where aren't they?), but if you're in it for the engineering, times can look pretty bright, as long as we make good use of our ambitions. There are still some engineering efforts to make before a smartwatch can also act smart in isolation. Our tooling just took a leap ahead. Go make use of it; that's it.


This must be your first hype cycle then. Most of us who are senior+ have been through these cycles before. There's always a last 10% that makes it impossible to fully close the gap between needing a programmer and a machine doing the work. Nothing about the current evolution of LLMs suggests that they are close to solving this. The current messaging is basically: look how far we got this time, we will for sure reach AGI or full replaceability by throwing X more dollars at the problem.

So Work = 0.1^(Ct), where C is the development pace. Everything points to the C of AI being large. How quickly does Work become a rounding error?

Sure, maybe C = log(t), but it could also be C = ke^t. Everything to me feels like it's the latter; I really want to be wrong.
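For what it's worth, the exponential-vs-logistic question above can be made concrete with a quick numerical sketch (purely illustrative; the growth rate and carrying-capacity constants are arbitrary assumptions, not measurements of AI progress):

```python
import math

def exponential(t, k=0.5):
    # Unbounded growth: what the doom scenario assumes continues forever.
    return math.exp(k * t)

def logistic(t, k=0.5, cap=100.0):
    # Same early-phase shape, but saturates at `cap` as limits bite.
    return cap / (1 + (cap - 1) * math.exp(-k * t))

# Both curves start identical and track each other early on,
# then diverge wildly once the logistic curve hits its ceiling.
for t in (0, 5, 10, 20, 40):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

The point being debated in this subthread: on the left-hand side the two curves are nearly indistinguishable, so extrapolating from a few years of rapid progress alone cannot tell you which one you are on.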


> So Work = 0.1^(Ct), where C is the development pace.

Did you see the bit where he said "Most of us who are senior+ have been through these cycles before"? They rolled out similar equations in previous hype cycles.

LLMs were released about 3 years ago now. Over the weekend I made the mistake of taking their word on "does GitHub allow administrators to delete/hide comments on PRs". They convincingly said "no". Others pointed out the answer is "yes". That's pretty typical. As far as I can tell, while their answers are getting better and more detailed, what happens when they reach the limits of their knowledge hasn't changed. They hallucinate. Convincingly.

That interacts with writing software in an unfortunate way. You start off by asking questions, getting good answers, and writing lots of code. But then you reach their limits, and they hallucinate. A new engineer has no way to know that's what happened, and so goes round and round in circles, asking more and more questions, getting complete (but convincing) crap in response, and getting nowhere. An experienced engineer has enough background knowledge to be able to detect the hallucinations.

So far, this hasn't changed much in 3 years. Given the LLM architecture, I can't see how it could change without some other breakthrough. Then they won't be called LLMs any more, as it will be a different design. I have no doubt it will happen, but until it does, LLMs are not a major threat to software engineers.


The thing is the total amount of work to do keeps increasing. We're putting firmware in lightbulbs now.

When everything includes software, someone needs to write and maintain that software.

If software becomes cheaper, we'll just use even more of it.


C'mon man, look at nature: exponential curves almost never are actually exponential. Likely it's the first part of a logistic curve. Of course you can sit here all day and cry about the worst outcome for an event in the long list of things no one can predict. It sounds like you've made your mind up anyway and refuse to listen to reason, so why keep replying to literally everyone here telling you that you're buying into the hype too much?

You're young, and so we'll give you a pass. But as stated, _the entire point of tech is evolving methods_. Are you crying because you can't be one in a room of hundreds feeding punchcards to a massive mainframe? Why not? It's _exactly_ the same thing! Technology evolved, standards changed, the bar raised a bit, everyone still went to work just fine. Are you upset you won't have a job in a warehouse? Are you upset you aren't required to be a farmer to survive? Just chill out man, it's not as terrifying as you think it is. Take a page out of everyone who ever made it and try to actually listen to the advice of people who've been here a while and stop just reflex-denying any advice that anyone gives you. Expand your mind a bit and just consider the idea that you're actually wrong in some way. Life will be much easier, less frantic, and more productive


People keep telling students, basically, to "think happy thoughts" and are not being honest with them. The field is contracting today, more people with experience are chasing fewer jobs, and AI is hollowing out the low end.

Every single opening gets 1000s of applicants within the first day. It’s almost impossible to stand out from the crowd if you are either new to the industry or have a generic skillset.


Honestly I think moving up in a layer of abstraction is not the same as something resembling an intelligent agent.

If "resembling" intelligence was enough, all programmers would've been replaced long ago.

I've said this on here before, but replacing programmers means replacing our entire economy. Programming is, by and large, information processing. Guess how many businesses' services can be summed up as "information processing"? I'd wager most of them.

So maybe you're fucked, yes, but if so, we all are. Maybe we'll just have to find something to do other than exchange labor for sustenance...


The reason all these approaches have not succeeded is that, to close the gap, you have to backtrack on all the effort made so far. It's like choosing a shortcut and stumbling on an impassable ravine. The only way forward is to go back.

It will be fine. It may not be what you expected, and it may be harder than you expected, but programming and software engineering won't go away. The job is changing and we all have to either change with it or find something else.

Typist used to be a career. People's entire jobs revolved around being able to operate a typewriter quickly. That skill became obsolete as computers were introduced, but the role didn't go away (for a long time, anyway). Plenty of typists learned to use computers and kept doing transcription or secretarial work like they always had done. Some refused to learn and took other career paths while a new generation of computer users came in.

This has happened quite frequently in this industry. The skills we use now are about to be made obsolete, but our roles will still largely exist.

The scary part is that we know right now that our skills are about to be obsolescent, but we don't yet know what the next thing is actually going to be. It's hard to prepare.

I'm still fairly early in my career. I plan to cope by learning how to use these new AI tools. My core engineering skills will always be useful to some degree, but I have to stay with the times to stay competitive. That's it, that's the plan. Learn about the new thing as it's being built and try to stay ready.


In addition to the good responses you've gotten about not overreacting to hype cycles I'll add that you should also try to spend less time worrying about the unknown. I understand the appeal of a straightforward career path of college major -> internship -> junior role -> mid-level -> senior all in the same field. That works out great for many people, but you should also be aware that there are a lot, and I mean a lot of people whose path ended up looking nothing like that and are leading happy comfortable lives.

Even if the worst case happens and the field gets wrecked by AI it won't be the end of the world. There will always be work for smart and reliable people. You might end up having to learn some new skills you weren't expecting to, but hey that's life. I have quite a bit of sympathy for someone with 30-40 years of experience who sees their career swiped away; retooling and getting hired in a new area can be quite hard at that stage. But for someone in their early 20s? There's absolutely nothing that can prevent you from adapting to whatever the new economy looks like in a few years.


> Call me a doomsday prepper

> Seems obvious that I missed the boat on LLMs.

Don't worry. As the other commenter said it: we've seen it all a few times already.

I clearly remember how some people reacted to Ruby on Rails as if it's going to replace them just because it provides... well a framework.

LLMs won't replace even a junior dev anytime soon. Not to mention senior dev etc.

People whose main job was creating landing pages and simple shops might be in trouble.


Oh man the hype around RoR was nuts. You’d get these enterprise Java developers getting their first taste of a “python/php/ruby” language and they’d all make gushing blog posts about how quickly they put together some (rather simple) app. They’d all say how many orders of magnitude more productive they were thanks to RoR.

And to be fair they weren’t wrong. RoR was one of the first “opinionated” platforms of its kind, really. (Well, that isn’t really true but we’ll just pretend it is… it sure was hyped that way). It did make a lot of the pain points of web apps easier… it handled database migrations, testing, ORM stuff, template stuff, etc. It was opinionated. It had some very vocal people pushing it (DHH of 37signals). It was the zeitgeist of the time.

Hell, Twitter was started on Rails, from what I remember. Eventually they turned into a Java shop, but yeah, that was the natural progression. People found all the edges of RoR, and we learned what worked and what didn't, where to apply it and where it doesn't belong.

But things like RoR didn’t make developers less valuable it made them more valuable.

And to somehow tie it back to LLMs… it will be the same thing. Software will eat the world, and all LLMs will do is accelerate it. But we don't know all the edges yet because the story is still unfolding. I promise it won't be the end of developers… anybody who uses LLMs daily can already tell you they'll never be able to replace a dev. They are developer assistants, not replacements, no matter what the hype says.

…end ramble (which is clearly not the output of an LLM)


> LLMs won't replace even a junior dev anytime soon. Not to mention senior dev etc.

except of course they have already … in many places … and counting :)


Actually curious: where have they? Zuck's claim didn't seem to be true. I'd imagine a few places with low technical acumen have tried, but I'd want to see how much they pay the SWEs willing to clean up that codebase after they let LLMs run amok on it.

Any examples maybe?

If you want to thrive in this world you need to change your attitude ASAP. New tech waves happen all of the time. Embracing them is the path.

How? Seems obvious that I missed the boat on LLMs. I don't have any ideas anyway (I have only one "idea", and it's someone else's, and I have no faith that it'll get me a foothold). Robotics will be solved pretty soon by 10x reasoners if this development curve continues. Everything I can "change to" has a 2-year delta, and that is a 100x capability change within the AI space right now.

> Robotics will be solved pretty soon by 10x reasoners if this development curve continues

Don't get high on your own (industry's) supply. This forward-looking BS is targeted at clueless investors. Level 5 self-driving cars have been a "few years away" for almost 2 decades now, and here we are, still having to deal with ADAS like cavemen, looking back at the trail of dead companies that believed they could solve self-driving.


IMHO what you should be doing is building stuff to show off.

More great ideas will flow out of that activity.

As for missed the boat, how do you mean? We remain in early days!

Finally, try to avoid making your own reality. You write as if you know the future, and none of us do!

And even when we are right, how we respond to that future really matters!

You could sulk in gloom

, or!

You could be building things and or showing them off.

It is that showing off that nets good new opportunities.


I want to build but I don't know what. The ideas don't appear. This is what I've been trying to express.

what are you attempting to achieve with this idea? what kind of foothold? ideas are everywhere, they are cheap. the idea plus the execution, timing, marketing, and approach are all factors in something being successful. maybe you are thinking you need to make a startup or something to be successful.

i understand the feeling you have a little bit, but agree with the others that you don't need to despair too much about the industry, there is still a great need (and will be) for humans to understand the systems we are using and be able to get in the weeds to solve problems.

totally agree we might need less people writing/wrangling code, and it might put downward pressure on salaries... on the other hand, there might be upward pressure on salaries as developers will have a higher output and the ROI for hiring an effective developer in this environment will go up. especially when production is on fire, the AI that wrote the code that is on fire might not be the best source of how to solve it.

to me this is all basically a big unknown, without substantial reason to panic though, even if it feels overwhelming and hopeless from a certain perspective at the start of a career. currently a lot of development feels pretty sluggish to me, we fight with build tools and multiple languages and eke out these incremental improvements - if developers can work much much faster, that's great, but then we hit a limit to like... OK we need to let the product changes "settle" for a while and get user feedback about the changes, we can't actually ship 14 major product updates in a week because users will have no idea what the fuck is happening. but maybe we can do more advanced things with rapid split testing and automated success metrics to deploy pages that "self-optimize" or something, and there might be new second and third order ideas that come from this where it takes a human to manage and direct and build and solve things at that level.


I dunno, a job? I don't think one other person's idea is enough.

> ideas are everywhere, they are cheap. the idea plus the execution, timing, marketing, and approach are all factors in something being successful

And water is everywhere but you need a boat to get across it. I don't think I have a boat. I don't know if I can build one. I don't know if anyone will let me on their boat. s/boat/idea generation/g.


can you describe how any idea of this form connects to a job? I think what I'm saying is you don't need any "idea" - it's enough to have skills. You might be overestimating the bar you need to hit, or what it takes to get jobs in general.

The only thing I think that makes my resume as a student appealing is having real-world projects that show I can meaningfully develop. I'd need similar for GenAI, I'd imagine.

Just use AI to learn what you want to pivot to, and don't be a crybaby, please

It's not knowledge but applications of knowledge to actually work in the field. I don't know how to meaningfully make the applications to succeed.

> Call me a doomsday prepper, but frankly I haven't heard a compelling argument against this line of thinking that is actually in line with the absurd development curve.

Are the current economics viable indefinitely? I think not. This AI investment exuberance will be curbed as soon as investors start demanding returns, and we've already seen harbingers of that (the Deep Seek market scare). What appears to be a quadratic growth curve inevitably turns out to be sigmoid.

Right now, the Hype train is at maximum speed and seems unstoppable. Despite the early hype, the Internet didn't replace colleges or brick and mortar stores[1], iPads didn't kill computers[2], and AI won't replace software engineers. This is not to say there will be no impact, but it's being oversold.

1. Khan academy and Amazon notwithstanding. But physical retail stores are still here and doing okay, and have adjusted to leveraging the new paradigm.

2. Leading up to peak iPad, it was believable that the iPad would kill PCs - it was an unstoppable juggernaut lifting Apple profits to record heights https://www.bloomberg.com/news/articles/2012-03-21/ipad-the-...


I could tell you not to worry, but I don't think I will.

How about embracing it? Think it all the way through: If software development completely disappears as a profession, what impact would it have on other possible jobs, on society? What are the potential bad outcomes from that? What would that mean for your ability to survive and enjoy yourself? What would you do?

You'd find a way to make the best of it, I suppose. And the best is as good as it gets. Maybe this imaginary new world sucks, maybe not. You're young, from the sound of it. You get to be around for the future, when I'm long gone, that's kinda cool. I'm sure you'll find a way, you seem like a clever person.

I find there's something powerful about thinking the worst case scenario through, and feeling somewhat prepared for it. Makes it easier to focus on the present.


Pivot into cybersecurity? As a pentester, a mountain of security bugs in a mountain of AI produced slop that no one understands is the ideal provider of job security, I guess

Maybe pentesting can be partly automated, but "the devil is in the details" and a pentester's primary quality is to look where the automated software won't.

I don't know, truly. The future is somewhat foggy.


> Any "normal" engineering field will be solved with the right domain knowledge.

Oh no, don't worry. Nobody will be trusting GenAI only for Real Work like aerodynamics, structural mechanics, electromagnetics, plasma physics, you name it. For sure there will be (already is) AI-based surrogate models, fast preconditioners, superresolution methods etc. But you will for the duration of our lifetime need humans who understand both physics and programming to actually use the damn tools and ensure they are used correctly.


The required time to move into a regulated engineering field is 4 years. 4 years in the AI world is currently a 10^4 = 10000x capability delta. It's not that the engineering and academia curmudgeons will replace their employees so much as be utterly destroyed by the AI labs hiring maybe one or two tenured professors per field to form an internal startup, then letting them oversee what the megascaled reasoners spit out at breakneck pace with a sufficiently low hallucination rate.

I really wish to be wrong here.


You'd better get out of tech then. Change is inherent in the field. It's the whole point.

Yeah kinda seems like this guy would be better off pivoting to landscaping or baking or something. Maybe construction, there's always work in construction. Pays well too!

Never lose hope. I'm assuming you're young if you're a student right now. Time is on your side; no matter where the industry goes, just flow with it and try to learn the new technologies, so you're never behind others who are competing for the same positions.

I want to learn, but I can't display that I learned them. You know the whole thing where real projects are more valuable than simply "I did a tutorial"? I struggle to find ways to apply LLMs practically beyond like one project (which is basically someone else's anyway). That's my struggle.

Do you not see the contradiction in what you're saying? "LLMs will replace us all... I can't find a use for LLMs".

Get to replacing us already. It's a gold rush, someone needs to get rich selling shovels.


I'm not trying to express that, it's that I can't find many meaningful projects with them.

If you can't figure out something innovative and groundbreaking to do as a project, try to copy something which already exists and you find useful.

It can be an app or website, or something else. By trying to replicate something you will begin to learn and understand what the limitations and opportunities in these tools are.

Developers hanging out on e.g. Hacker News are the very tip of the wave. 99% of all developers don't visit sites like this. It will take a long time (10+ years) before AI moves through all fields. Companies which are inherently software focused will be first. But that's not where the long tail of software is.

The long tail is in companies which do no software currently. Places where software is something you buy to do inventory or keep track of invoices or time sheets.

I use these tools everyday now and they are both magically awesome and stupendously stupid, sometimes in the same reply. Things always change less in the short term (say less than 5 years), and more in long term (10+ years). Just as it was with smartphones, arguably the most recent big revolution which is now integrated everywhere.

To reiterate: to find a project, choose something which already exists and that you use and find useful. Then copy it. Then improve it (to make it better for your use).

Personally I think this new revolution will make software exponentially more abundant and infinitely customizable and adaptable to individuals. My guess is also that in 10-20 years we will have more people than today who do "programming".

A lot of tasks which are now hacked together in Excel sheets will be a lot easier to make into "proper" programs.

In this world the people who know enough about many things to efficiently use agents to accelerate them will be the most valuable.


"Project Padawan" looks fairly similar to Devin, at least from a user experience perspective. From personal experience, Devin was pretty terrible so we'll see if Microsoft does any better...

Cue a load of buggy code with tests modified to pass instead of fixing why tests fail.

Maybe they've just got an automatic prompt of "no dickhead fix the fucking code don't frig the tests"


IMO: Copilot (and Devin) has always missed the mark. It's always been a lazy, bad product that feels like it was made by a team that doesn't even want to use it themselves. Going more agentic is only going to make it worse; Copilot's product leadership appears obsessed with more comprehensive replacement of their customers' workflows, but ideal customers want deeper and more fluid integration into those workflows.

It's one of those facts that seems so obvious once you realize it, but clearly no one at GitHub does. Who is buying and using these things? Seriously, anyone at GitHub, answer that - not who you think is buying them, but who is actually buying them. The answer isn't CEOs or CTOs; it's Software Engineers (or CTOs, for their Software Engineers). GitHub's leadership needs the answer to be CEOs or CTOs, because the scale of investment (to produce such a shit product) is so large that only per-customer revenue commensurate to the replacement of SE salaries justifies it.

I know of four companies (including my own) that had a corporate Copilot subscription for their devs, and over the past quarter/this quarter are replacing it with a Cursor subscription, at the request of their devs. I'm super bullish on Cursor and Supermaven.

- I think they understand their ICP way better.

- I think their ICP is actually excited to spend money with them.

- I think these new companies have demonstrated that they are more willing to build more than just a panel in VSCode; whereas Github is bogged down by legacy interests.

- I think this deeper level of integration into existing workflows is what pushes AI past the hump of "oh i want to use that". Speeding up existing workflows by 30% feels insanely good. It grows the pie. Smaller & smarter, not larger & derivative.

- I think, from a business perspective, MS/GitHub has royally screwed up, and continues to, by literally subsidizing the cost basis of their competitors: building VSCode and hosting billions of lines of open source code that competitor models train on. I love it as a user. But it costs them millions of dollars, and every dollar of that spend makes their competitors stronger.


GitHub is way too established to course correct. Their entire sales channel is enterprise. They structurally can't see what makes Cursor and its ilk so much better.

Plus I think they have too much money sloshing around to care. Unlike the scrappy startups, they have beefy enterprise accounts as cash cows.

It's a tale as old as time, really.


I don't think making Copilot better by handling its bad output means replacing developers. And GitHub certainly isn't saying the goal is to replace developers.

I accept that is how you are interpreting it, and I can see the argument. But GitHub isn't trying to get one over in their messaging.

And besides, I just don't agree with the idea that it takes the developer out of the loop. Who's controlling this better version of Copilot? Whose goals is it advancing? The developer's.


The goal has always been to eliminate programmers.

Nobody wants to pay a bunch of desk workers six figures to make their business go brr, but currently they have no choice. Trust me, every executive resents this to their core, and they want all the programmers to go away - including GitHub executives.

20 years ago you would hire a few expensive architects who would try to design the product in so much detail that cheap jr programmers could build it. It didn't go well.

4GL languages tried to abstract away all the hard stuff - again it didn't go well.

"Low code" was big just before the AI thing. It didn't go well.

Attempts at outsourcing are constant.

Now we have LLMs. So far this has come the closest to the dream of eliminating expensive programmers. We'll see how it goes.


Yup. Even in the 90s it was Microsoft’s plan to turn software development into just clicking buttons, e.g. Visual Studio. Just think of all of the business value the middle managers at a Fortune 1000 could produce with a bunch of cheap labor in some underdeveloped country with only three months of training (paid by them) to learn which buttons do which

But who is copilot being marketed to?

> I don't think making Copilot better by handling its bad output means replacing developers

The blog post goes through more than what I mentioned in my comment. For example:

> When the product we are building under the codename Project Padawan ships later this year, it will allow you to directly assign issues to GitHub Copilot, using any of the GitHub clients, and have it produce fully tested pull requests. Once a task is finished, Copilot will assign human reviewers to the PR, and work to resolve feedback they add.

How is that not trying to replace at least a small section of junior/boilerplate developers?

The developer might be the one who listens to the product team, maybe even creates the issue and finally reviews the code before it gets merged. But I'm having a hard time imagining the flow above as "Pair programming" or a developer working with a "co-pilot", as they're trying to say it's all about.


> How is that not trying to replace at least a small section of junior/boilerplate developers?

This was a danger before AI ever became a thing. If that’s all someone is doing, there was always a danger of being outsourced to someone who would work for less than you would.

And today in 2025, ignoring AI, there are thousands of generic framework developers struggling to get a job because every job they apply to has thousands of applications and companies can choose any good enough developer.

It was always hard to break the can’t get a job <-> don’t have experience cycle. Now it’s going to be harder.

The solution at least for awhile is to run closer to the customer/stakeholder.


This sort of obfuscation of Microsoft/GitHub's real intentions *IS* deliberate. Unfortunately it's not just them, but pervasive across nearly all AI companies.

> Are they maybe afraid of a potential backslash if it became more clear that they're now looking to replace (some) developers / the boring parts of development?

Did they care about the writers, musicians, artists, journalists that had their jobs displaced or currently reduced? I don't think so and they got away with it.

They don't care and in 2025, programmers of all ranks are next.

Look at their actions and don't fall for the blog posts or statements.


I don't think those are contradictory. The agent is iterating on the code it writes in order to be more useful.

> When the product we are building under the codename Project Padawan ships later this year, it will allow you to directly assign issues to GitHub Copilot, using any of the GitHub clients, and have it produce fully tested pull requests. Once a task is finished, Copilot will assign human reviewers to the PR, and work to resolve feedback they add.

This makes it seem like a very basic/rudimentary developer could be replaced by the new autonomous agent. Or am I misunderstanding what they're announcing here?


Ah sorry I missed that part. You are right that this sounds like more replacement for people

They are not going to say "we are making an agent to replace programmers. We will use their code and their guidance to train it"

What else would the goal of agentic generative AI be other than to replace humans doing the same work?

For me that's a no-brainer also. I guess my question is more about why their messaging is so messy and direction-less. Why not just say outright what their plan is?

Backlash in the form of moving off of GitHub? I mean, GitHub has to be one of the biggest assets when it comes to training data for source code, no? Projects should already have moved off of it when Microsoft got in the driver's seat.

I don't see a contradiction

> post seems to go into all different directions, which seems strange.

I asked AI to give feedback on the negative points in the article, with a focus on the human impact. I asked Gemini Flash 2, GPT-4o, Llama 3.3, and DeepSeek v3 and they had diverging thoughts in some areas, so your take seems valid.

Here is the overall summary

- *Consensus*: All LLMs agree on core risks (deskilling, code quality, workflow disruption).

- *Divergence*: Job displacement and bias concerns split opinions, with some LLMs deeming them speculative.

- *Scope*: Gemini and DeepSeek v3 extend discussion to broader AI ethics, while Groq/Llama 3.3 and GPT-4o prioritize strict article alignment.

I was actually surprised by the Gemini 2 response

---

While the article focuses on the benefits of GitHub Copilot and its new features, it's important to critically evaluate the potential downsides and unintended consequences. My initial response aimed to provide a balanced perspective by highlighting these concerns, which are relevant to the broader discussion about AI in software development.

Here is the full analysis https://beta.gitsense.com/?chat=ec90dd73-0873-43ab-9da0-c613...


Telling people about things AI said to you is a little like telling people about a dream you had. It probably won’t be as interesting to them as it was to you.

> While the […] focuses on […], it's important to critically evaluate the potential downsides and unintended consequences.

This is just boilerplate that RLHF-aligned models put into their responses when you ask for an opinion on almost anything. It’s AI corporate-speak. It is not meaningful.


> It’s AI corporate-speak. It is not meaningful

It is if you want to better understand how models are trained and in what directions they are leaning.


They'd better get on the IntelliJ integration fast— if I'm going to switch editors in order to use an LLM coding assistant, I may as well just switch to Cursor, which has a strong head start on them on the assistant and also has a much better autocomplete.

I'm honestly surprised to see no mention here of them moving to replicate Cursor's autocomplete—IMO that is where Cursor's real edge lies. Anyone can wrap a chatbot in some UI that allows you to avoid pasting by hand, but Cursor's ability to guess my next move—even several lines away—makes it far and away more efficient than alternatives.


The feature you are referring to was also announced in VSCode and is called Next edit suggestions for Copilot. Currently in preview: https://github.blog/changelog/2025-02-06-next-edit-suggestio...

GitHub has completely abandoned the IntelliJ Copilot plugin, it seems. Even model selection is not supported. This is good for JetBrains though, because they have their own competing AI service. JetBrains AI doesn't support multiline edits in tab completion or chat, but it does in the inline prompt mode (although it's limited to the same file only).

IntelliJ with Cursor-like autocomplete or Cursor with IntelliJ-quality general IDE tooling (lookup/rename symbol, diagnostics, and general UI) would be the ultimate editor.

IntelliJ’s autocomplete was really bad last time I tried it, and if it’s still only single line it’s still bad. Fortunately GitHub copilot in IntelliJ is good, maybe as good as Cursor except that it can’t delete/rewrite code or jump to different locations.

IMO agents aren’t nearly as important for either team to focus on, because they can be used outside of the IDE or in a separate IDE. I think the teams who develop the best agents will be big model-trainers and/or teams dedicated to agents, not teams writing IDEs.


Try Augment https://www.augmentcode.com/

The IntelliJ integration works really well. Not sure why they aren't more widely known.


Yeah. I just wish that VSCode didn't feel so crude coming from 10+ years using JetBrains IDEs. Things I feel are table stakes like nice test run/debug functionality seem like big hurdles. Perhaps it's just a learning curve & I need to get used to it, but whenever I dive into how to replicate functionality I feel is important it seems the answer is at best "it's complicated".

It's a shame as this is by far not the only thing in which I have interest that seems to have fully shifted over to VSCode


JetBrains has got its own version in the pipeline as well: https://blog.jetbrains.com/junie/2025/01/meet-junie-your-cod...

I wish cursor was an extension of VSCode and not a fork.

Does it matter in practice? Is there stuff you can do in VSCode that isn't possible in Cursor? I'm not a user of either, so honest question.

For one, you can't debug C# code in Cursor without using a hacky third-party extension, because the C# debugger is only licensed to run in official VS Code instances. And the only way you find out is when you try to run C# and get a runtime error saying it can't run for that reason; you google/ChatGPT the issue and find your way to some old GitHub Issues threads where someone mentioned that's a possible solution.

I don't know Cursor, but VS Code is a very full-featured editor with many years behind it; I rather doubt an upstart editor could achieve full feature parity with it so quickly.

But that's almost beside the point: even if it had perfectly identical functionality, people would still want to use VS Code, if only for its well-established ecosystem of extensions.


Cursor is a fork of VS Code so most of the UI is identical and it can use the majority of extensions. Some extensions are MS only though and they may start using this as a moat, who knows!

> Some extensions are MS only though

Is that really true? According to https://news.ycombinator.com/item?id=42931088 ("VSCode Marketplace Web Pages No Longer Allow Direct VSIX Downloads"), you can still manually download extensions via cURL et al, if you really want to. Probably will disappear in the future though.


One example: Microsoft's closed-source Pylance extension (their replacement to the previous open source Python language server) has DRM that will refuse to run on non-Microsoft builds of VS Code.

Wow, I had no idea. Been test driving VS Code for a little while, and while the constant popups/notifications are distracting and annoying, extensions with DRM in them kind of makes it a lot less interesting.

This was the example I was gonna give, thanks.

Also, the extension "store" in Cursor is a clone or something, and I'm pretty sure it might not have the same data (installs, current version, etc.)

If I was Cursor I’m pretty sure I’d rather not be maintaining a fork of VSCode…


Since it is a fork of VS Code you can install any VS Code extension in Cursor (although manually): https://www.cursor.com/how-to-install-extension

AFAIK, Cursor is a fork of VS Code, so everything you wrote also applies to Cursor. Hence my question.

I read somewhere that they had to make a fork because it wasn't possible to implement some features as an extension alone. Can't find where I read it though.

Yup. As an example, extensions cannot read the content of the terminal. The API is there but not allowed to be used in published extensions.

I tried Cursor a couple of years ago and wasn't impressed - has it improved a lot? I only use autocomplete, not the chat function and at the time found CoPilot superior.

It has improved, but you're missing out if you aren't using the big-ticket features. I tab myself to solutions, too, but if there's a React view to do, I bring out the composer and am literally 10x faster - what would previously take a day now takes an hour. If there's an interface to create out of a JSON blob, I paste the blob and just tell it to make an interface, then clean up the types a bit, etc.
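Concretely, the JSON-blob-to-interface step looks something like this (the blob and the `Widget` name are invented for illustration; an assistant-generated interface still deserves the manual cleanup pass described above):

```typescript
// A hypothetical JSON blob you might paste into the composer:
const blob = {
  id: 42,
  name: "widget",
  tags: ["alpha", "beta"],
  meta: { createdAt: "2025-01-01", active: true },
};

// The kind of interface an assistant typically produces, after
// tidying the types a bit by hand:
interface Widget {
  id: number;
  name: string;
  tags: string[];
  meta: {
    createdAt: string; // worth narrowing to Date after parsing
    active: boolean;
  };
}

// Structural typing: the blob satisfies the interface as-is.
const w: Widget = blob;
console.log(w.tags.length);
```

The win is less the typing itself than not having to eyeball every field of a large blob by hand.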

I'd written off AI autocomplete as pointless after trying GitHub Copilot's a year ago.

But Cursor's tab-autocomplete is actually really useful. It feels like it very much knows what I'm up to.


Cursor is ten times better than VSCode and Copilot. It's extraordinarily good at reducing two-minute tasks to 10 seconds, and the more you use it, the better you get at identifying these two-minute tasks.

Example (web dev): hit cmd+k --> "this is a two column layout. make sure the columns are the same size". It just does it. To do that myself I would have had to switch to a browser, google flex box, go to that classic flexbox cheat sheet that we all know and love, tweak around with the different values of justify-content and justify-self, realize that was the wrong approach, then arrive at the correct answer of making sure each column flex-grows identically. two minute task, now 10 seconds.

hit cmd+k -> "flow these columns one-after-another on smaller screens" done. thirty second task, now 10 seconds.

hit cmd+k -> "enable or disable the rendering of this component via props" done. new prop added, prop is flowed through to a `display` css property, easy.

The autocomplete is pretty good, but can get annoying. You definitely have to get used to it. However, the cmd+k quick fix thing is insane. It's literally made me at least 200% more productive, and I think that might grow to 300% as I learn to use it and it gets smarter (they just added Gemini 2.0 Flash; can't wait to try that out).


Years?

Tried cursor on my amiga II and wasn't that impressed tbh

Cursor was an AmigaBASIC compiler for AmigaOS.

I tried it last month on a medium size personal project and was blown away by the autocomplete. I'd previously staunchly refused to try it on the grounds that I'm too productive in IntelliJ, but at this point I'm most likely going to start paying for both.

I don't know if I'm ready to use it as a daily driver, but there are certain kinds of tasks—especially large refactors—where its ability to rapidly suggest and accurately make the changes across a file is incredibly valuable. It somehow manages to do all of that without ever breaking my sense of flow, which is more than I can say for Copilot's suggestions.

And yeah, I'm with you that autocomplete is the way to go. I think chat is a red herring that will have long-term negative effects if it's used extensively in a codebase. Autocomplete keeps you in touch with the code while still benefiting from the co-pilot, and Cursor's UX for that is far and away the best I've seen.


did you use intellij with copilot auto complete before that?

i've started using aider with https://aider.chat/docs/usage/watch.html works great and you can keep using jetbrains IDEs

People are sleeping on Codeium. I've found their AI assistant to be much better than Cursor's.

Cursor has obviously figured out marketing better.

I switched to Windsurf 2-3 months ago, feels a lot better for me.


I was looking at Windsurf and Cursor as well, what are the differences?

It's hard to rate the quality; I just feel like it does a better job of knowing the codebase and what I am working on, via whatever mechanisms they have implemented.

I also find the DX better, I only really use the right click to mark code to talk about, and then the chat. The accept/reject changes UI works better imo.

In short, I barely have to do anything to use the AI features, just feels right.

---

Just try both, I didn't feel like Cursor suited my style much, Windsurf had me hooked instantly.


Agreed. Windsurf is a lot better

What do you find better about it?

Assuming windsurf.org is the correct website, I don't get a sense that it is legit or ready for prime time.

The FAQ link goes nowhere (afaik there is no FAQ), the page language selector is buggy - it randomly shows me other languages and is stubborn to accept when I switch back to English. Also, my first attempt to reach the main page was a 502 error.

Also, I don't see anywhere that tells me who makes this editor.

I'm supposed to trust some unknown group of people and install their software?


I don't even know what windsurf.org site you're referring to—for me windsurf.org redirects to goaccess.org, which is a sports organization.

The link you're looking for is https://codeium.com/windsurf


How's their autocomplete? I'm honestly not interested in tighter integration of chatbots. What blew me away about Cursor was how much better it was at autocomplete. I honestly probably would have tried it sooner if people emphasized that strongly enough in online dialogs, but it weirdly always seems to get relegated to an afterthought compared to the flashy chatbot, which was... fine, I guess?

The free Codeium autocomplete was what I was using for the past year, and it was really good. And Windsurf added Supercomplete (basically Cursor's tab-tab complete), but only in the paid version.

Because the composer is actually the bee's knees, especially on larger projects where you need to reference, say, 5 different files with interface definitions and 3 other libraries using them.

That doesn't really answer my question about autocomplete. I don't actually find these editors useful, they try to do too much too fast. Cursor wasn't much better than Aider, which wasn't great.

Where I do find value is in the autocomplete/next edit functionality.


Yeah the tab completion is so much better than copilot 3 months ago (which is when I switched to cursor 100%) it isn't even funny. Copilot was getting less useful as the time went by - I guess they wanted to make it cheaper and dropped the ball on quality. Cursor OTOH sometimes reads minds.

“Copilot puts the human at the center of the creative work that is software development. AI helps with the things you don’t want to do, so you have more time for the things you do.”

… until we train our model on your usage data and totally replace you


If we actually could get to a world where programmers can be replaced, we'll also likely find that vast swathes of the population will be replaced. Then we'll need a totally new conversation on how society should look. That conversation is coming no matter what, because there will not be a global consensus to stop ML development, esp. on the war front.

In a world where programmers can be replaced, it's less clear to me that plumbers, janitors, electricians, construction laborers, etc, are also all out of a job.

Here is a saying that I think really summarises it: "AI will replace white-collar jobs and robotics will replace blue-collar jobs".

Yeah, but it doesn't look like there's been much progress on the robotics front.

Until reasoner AI 10x's research and solves it far ahead of schedule.

You don't even need robotics, just a good multi-modal reasoner model with domain knowledge that you can attach to a headset manna-style [0]. The only thing that makes blue collar work different from any minimum wage labor is the System 2 reasoning required, and that'll get solved.

[0] https://marshallbrain.com/manna1


It sounds like you've never used a welding torch, installed a kitchen sink, or done similar blue collar work. These jobs will never be replaced by robots, or by a non-trained person wearing a headset.

> It sounds like you've never used a welding torch, installed a kitchen sink, or done similar blue collar work. These jobs will never be replaced by robots, or by a non-trained person wearing a headset.

Why do you think they will never be replaced by robots?


Not the person who said it and I wouldn't say "never"...

But I will say that until we have a robot that can fold laundry, we won't have a robot that can go into your crawlspace and replace a chunk of crusty old galvanized pipe with copper or pex.

Robots have excelled, so far, in controlled environments. Dealing with the chaos of plumbing in a building that has been "improved" by different people over the course of a century is the opposite of that.


We do have robots that can fold laundry (in a regular laundry room, and supposedly trained with a generalist policy that can learn other tasks).

https://www.youtube.com/watch?v=YyXCMhnb_lU


One thing is as sibling post commented, the complexity of such jobs are staggering from a robotics point of view.

The other thing is that the salary of the plumber or welder is in the range $20/hr to $40/hr. Can you make a general purpose, agile robot function at a total cost of ownership that's substantially lower than this?


Also, you know, muscle memory. The idea that you could slap a headset on a rando and just walk them through doing a trade is ludicrous. It's a great teaching tool, but you can't use it to replace a trade worker real-time.

Think secondary effects. What does a world in which almost every programmer can be automated look like? It looks like massive extremely fast technological development, how to build and program robotics will be solved almost instantly. With solved robotics goes every other labor.

We don't get the same current X productivity with 1/100th the people. We get 100x productivity, controlled by a few people / megacorps, until they lose control of it too.


I still don't see it. I think for quite a while replacing a single engineer will be costly and resource-intensive. I think many companies would be happy to replace all programmers with AI that are just as slow as programmers, but marginally cheaper. That doesn't mean that we'll suddenly have massive extremely fast technological development.

As a programmer, I think it's easier to replace programmers than many other occupations. We work with generic DSLs called programming languages, which can be easily handled by AI. Most of what we produce can be easily parsed by an AI. We work in small, incremental tasks, which is perfect for AI. Most of us use the same set of tools. What else? UI is similar everywhere, especially in LoB apps. What we produce is repetitive, most of us practically doing clever mashups of code from the Internet.

Thank Crom I'm old. But I'm worried for my kids.


"The report of my death was an exaggeration." - Mark Twain.

I don't see it as "replacing programmers" but rather, trying to solve the types of problems that programmers solve, but more cheaply.

I'm not sure of your age relative to mine, but I got my start a hair's breadth after 9/11 in the defense industry. I've experienced the "tectonic" shifts in programming since that time. In each case, programmers shifted away from the work that could be done more cheaply toward work that required more specialization and was harder to do cheaply.

I see the promise of AI as a threat to my job, but the reality is likely more like what happened with automobiles vs horses. Fewer horseshoe makers, more car mechanics.

Some day, I may be too tired and too dumb to become a mechanic. But today is not that day.


COBOL was also supposed to let business people "program" the computer in plain English. It never happened, but advances in programming languages certainly changed how programmers interact with computers. We went from machine code to Python and React, which is a huge technological leap.

The same goes for all the AI agents, it will change how we work but will not make programmers obsolete. I'm more worried about 3 million engineering grads per year from India replacing me than CoPilot/CursorAI


Doing the small incremental tasks is the easiest part of programming. All the complexity lies in taking giant specs and breaking them into manageable goals and then small tasks. AI is still pretty far from doing that, but I agree, I would not want to be a teenager right now and face the prospect of entering the workforce after 10 more years of progress.

Unless work is a thing of the past by then, and we were the last generation of suckers who lived under the reality where 40hrs/week was just the way life is.


In the long run, yes, but if programming were so rote and simple, we all wouldn't have jobs in the first place.

Look, technology has always replaced jobs, throughout history (how many of you know professional lamp lighters, elevator doormen, or switchboard operators?) But the thing is, tech has always replaced the jobs that are easiest to automate ... not the "intellectual work".

I would be far more worried about being a real estate agent, food preparer, or heck, even a lawyer (for some specialties at least). I think they're all at more risk than programmers.


real estate agents, lawyers are all licensed professionals... they will not be replaced but possibly these professions will be overcrowded.

Sure, programmers work in formal languages, but they can't afford to make mistakes. If your marketing copy contains a few lies or your hero image has a guy with six fingers, the recipients will still infer the intended meaning (or ignore it). If your program has a subtle bug, the computer will faithfully do the wrong thing.

LLMs are great for tasks where small mistakes don't matter, and useless for ones where they do. Generating a 10,000-line Rails app where 10 of those lines are wrong is not very useful.


If you think most programmers won't mess up 10 lines in every 10,000 then I have a bridge to sell you.

I mean this is literally happening as we speak - the process has started and it's accelerating.

Governments need to be talking about this like yesterday, and very few are from what I'm seeing.

AI companies and others using AI are going to profit from this massively, at the expense of many more. We need a better tax system to redistribute these profits across society as a whole.

The issue is capitalism but AI exploding is going to exacerbate wealth inequality like nothing else.


They are not hiding it anymore and are testing as to how much they can get away with total job displacement. First artists, then musicians, then writers and now programmers.

Better prepare for 2030 then as I am already doing so [0].

But from this year and in the next 5 years I'd also read this very carefully [1].

[0] https://news.ycombinator.com/item?id=42563239

[1] https://www.weforum.org/publications/the-future-of-jobs-repo...


Care to share how you are preparing?

Save as much money as possible and start tinkering with hardware is my approach. Not sure if the right move isn’t tinkering with water and sewage pipes at this point…

Every second programmer is considering moving to trades.

More time for the things I do?

All AI tools I've used turned my daily into fighting the agent.

While LLMs can save time on tedious repetitive tasks, they are atrocious at producing bug-free code. Even if they reached 99.999% accuracy, they wouldn't be worth it: if I wrote a function and it turns out buggy, I can, even months later, investigate and find the culprit.

If an LLM introduces a bug, I would rather scrap everything. That is, every piece of code the thing produced for a given project.

If my boss tells me my job is to provide a patch, I quit.

And, LLMs have proven so far that they can't produce innovative solutions.


If LLMs could produce innovative solutions, they wouldn't be large language models. They'd be valuable and indispensable software engineers to covet instead.

Don't you agree that having a free artificial junior developer at your beck and call is better than not having freely and quickly produced code that can help point you in the direction your engineering needs to go?

As a senior developer, don't you also fight with managing your subordinates? Don't you have to solve the management problem and do leadership tasks?

As a senior developer, don't you also have to deal with code that is not bug free, as you yourself don't always produce bug free code? Especially on the first try.

Maybe your approach to LLMs is wrong? Maybe you expect one shot solutions when that is not how LLMs are supposed to be used? Instead, you could invest in working with this new tool and then see phenomenal productivity gains. And also maybe grow a new capability in software development with the use of LLMs in your engineering.


I wouldn't agree to call any of that a developer, adding junior to the term doesn't move the needle.

I wouldn't call a calculator a mini accountant. These things can operate much faster than an army of mathematicians but they remain tools. Of course, tools humans can leverage. Productivity gains are phenomenal.

Perhaps my input to the topic wasn't clear. I use LLMs. I use LLMs in the context of software engineering. I don't treat them as my peers. I don't dream of a future where this tech can solve more than its often misunderstood scope.

We are letting ourselves be confused by those who do have an interest in, or can't do better than, up selling.

Engineers are already having to deal with very difficult to reconcile side effects. Maybe you aren't seeing it yet, or your comment would have at least recognized and touched on those.


If they don't, it will be Cursor, Trae, Cline, Roo Code or Goose? Seems this is coming this year in a big way regardless if Copilot does it or not. I'm guessing we all have to pivot how we work or get left behind?

If they really put the human at the center then they'd contribute back and publish the weights so that we can run this locally and build our own AIs on top, etc.

Not my quote but: "The trillion dollar problem AI is trying to solve is wages. Your wages."

Not wanting to come across all Luddite about it, but we really ought not to stumble blindly into all this.

That said, I remain sceptical. The 10% of my job which is coding just isn't the difficult part.


> Not my quote but: "The trillion dollar problem AI is trying to solve is wages. Your wages."

You know that your job as software engineer is automating tasks that other people could be doing manually, right?


True, but if the coding becomes trivial then you'll be replaced by a good PM who can work with an agent team.

Coding was always trivial (see code generators, snippets, templates). The issue is making all the little pieces work and adjust them as needed.

No-code database work was solved with Filemaker Pro decades ago. It turned out that you also need an attention span and an interest in the subject.

Most of the flowchart automation software I've been playing around with is already good enough. The Python ecosystem is good enough. Ollama is good enough. The small, purpose built models people seem compelled to make are good enough.

If I can replicate your SaaS business model by crawling your site for a description of services, what does that do to the landscape?


As always, the edge cases, which you only become aware of by talking to domain experts or through the experience of managing the production services for years. There's a reason we're still running decades-old software written in no-longer-maintained programming languages. The only valuable spec was always the code.

Totally agree with your point, and in fact my first programming gig was encapsulating COBOL next to these domain experts. Very generally speaking the edge cases exist because you are trying to be all things to all people.

Rebuilding an existing, predefined service for exactly one use case is straightforward. The tools available are "good enough" to do the job for one person. That person hasn't been building another SaaS, they've been posting well designed documentation on github.


And that's why software engineering =/= coding

I'm currently reading Modern Software Engineering by David Farley, and I can say that we won't see programmers replaced unless all the concerns pointed out in this book and others have been resolved.

> The 10% of my job which is coding just isn't the difficult part.

Even so, is there any reason to believe that the other 90% won't also be automated?

As you said, the goal of these systems is to replace wages.


I'd describe a lot of my job and my team's job as "figuring out what to do". Lots of talking to people, debating options, weighing up trade-offs.

Customer support, "pre sales" stuff too.

Distilling complex situations involving tech but also people into bullet-point reports for management.

To reference another one of these neat little phrases: building the right system != building the system right. The former is hard to automate, the latter is indeed more open to AI involvement.


I use Copilot and it’s a game-changer for my productivity, but I really wish it was capable of natural language searching. So for example I could ask it “show me all places in the code where x is assigned a value but a flush() command is not immediately issued afterwards”.

I answer those kinds of questions by piping my entire codebase into a large context model (like Claude or o3-mini or Gemini) and prompting it directly.

Here's a recent example:

    files-to-prompt datasette tests -e py -c | \
      llm -m gemini-2.0-flash-exp -u \
      'which of these files contain tests that indirectly exercise the label_column_for_table() function'
https://gist.github.com/simonw/bee455c41d463abc6282a5c9c132c...

Your code is pretty small if it fits within the context of any major LLM. But very nice if it does!

https://github.com/bodo-run/yek

This is more sophisticated for serializing your repo. Please check it out and let me know what you think.


Yek is fantastic -- I've converted my whole team to using it for prompting. As input context windows keep increasing, I think it'll just keep becoming more and more valuable -- I can put most of my team's code in Gemini 2.0 now.

Doesn't it get very expensive quickly if you don't use prompt caching?

I've had the occasional large prompt that cost ~30c - I often use GPT-4o mini if I'm going to ask follow-up questions because then I get prompt caching without having to configure it.

> So for example I could ask it “show me all places in the code where x is assigned a value but a flush() command is not immediately issued afterwards”.

Could this not work? (with whatever flag to display the surrounding lines)

  rg -A 1 'x =' | rg -v 'flush'
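Not quite: the condition ("assigned, but no flush() on the next line") spans lines, which is exactly what a grep pipeline struggles to express. A minimal Python sketch of what such a structural search has to do (the variable name and sample code are made up, and a real tool would use an AST rather than regexes):

```python
import re

def find_unflushed_assignments(source: str, var: str = "x"):
    """Toy sketch: flag lines where `var` is assigned but the next
    non-blank line doesn't call flush(). Crude regex matching only,
    to show the shape of the query the grep pipeline can't express."""
    lines = source.splitlines()
    hits = []
    for i, line in enumerate(lines):
        if re.search(rf"\b{re.escape(var)}\s*=", line):
            # look at the next non-blank line, if any
            following = next((l for l in lines[i + 1:] if l.strip()), "")
            if "flush(" not in following:
                hits.append((i + 1, line.strip()))
    return hits

code = """
x = read_buffer()
flush()
x = other()
y = 1
"""
print(find_unflushed_assignments(code))  # → [(4, 'x = other()')]
```

The point of the natural-language version of the query is that an LLM could also handle "immediately afterwards" meaning "before the next use", which neither the regexes nor the rg pipeline can.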

I think Cursor can do this, if you @codebase, isn’t there something similar in copilot? E.g., your codebase being vectorized, indexed, and used as an embedding?

I have had Cursor review all my file content solving similar things, but I would think it's limited to VSCode search capabilities and IMHO it's not great. I love how Pycharm handles indexing so search is fast and accurate. If they ever get agents going at the same quality as Cursor I would probably go to Pycharm for that advantage alone.

Cursor implemented their own code indexer so its RAG is not limited by VS Code search.
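Conceptually such an indexer is just retrieval over code chunks. A toy bag-of-words version (stdlib only, no embedding model; this illustrates the idea and reflects nothing about Cursor's actual implementation) looks like:

```python
import math
import re
from collections import Counter

def tokenize(text: str):
    # crude code-aware tokenizer: identifiers and words, lowercased
    return [t.lower() for t in re.findall(r"[A-Za-z_]\w+", text)]

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyCodeIndex:
    """Index code chunks, retrieve the ones most similar to a query."""
    def __init__(self):
        self.chunks = []  # list of (name, token counts)

    def add(self, name: str, code: str):
        self.chunks.append((name, Counter(tokenize(code))))

    def search(self, query: str, k: int = 3):
        q = Counter(tokenize(query))
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[1]), reverse=True)
        return [name for name, _ in ranked[:k]]

idx = ToyCodeIndex()
idx.add("db.py", "def flush_cache(): cache.flush()")
idx.add("ui.py", "def render_button(label): return Button(label)")
print(idx.search("where is the cache flushed?", k=1))  # → ['db.py']
```

Real code RAG swaps the token counts for dense embeddings, which is what lets it match "flushed" to `flush()` rather than relying on shared literal tokens.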

Very cool! I'm working on a similar agent, but FOSS (https://github.com/ai-christianson/RA.Aid) --It'll be really interesting to see how the GitHub agent works.

On first impressions, it looks like they are taking the route of integrating tightly with VSCode, which means they'll be competing with Cline, Cursor, and Windsurf.

IMO it might be good for them to release this on the web, similar to the replit agent. Integration with GitHub directly would be awesome.


Very neat!

> It can suggest terminal commands and ask you to execute them
People were already blindly copy pasting commands from StackExchange answers, but at least those are moderated. I wonder how long it takes before someone nukes their project or root directory.

> "ask you"

I get the concern, however. But, short of nuking the actual .git directory, the upsides are worth it, in my opinion. Cursor offers filtering via a mini-prompt in its YOLO mode, so does Windsurf. The idea is killer, it allows it to progressively build and also correct its own errors. e.g. Cursorrules can be told to run tests after a feature is generated, or typecheck, or some other automated feedback-loop your codebase offers. That's pretty neat!

Better yet, set up a dev container first. Then, at most, your local DB is the only concern. If still paranoid (as you should be), suspend your network while the agent is working. :D
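The feedback loop described above (generate, run checks, feed errors back, retry) can be sketched in a few lines. Here the "check" is just Python's byte-compiler and the agent call is a stub that pretends to fix a syntax error; in a real setup the check would be your test suite or typechecker, and agent_fix an LLM call:

```python
import subprocess
import sys
import tempfile

def run_checks(path: str):
    """Run an automated check (here just the Python byte-compiler) and
    return (ok, output). A real setup would run tests or a typechecker."""
    proc = subprocess.run([sys.executable, "-m", "py_compile", path],
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stderr

def agent_fix(code: str, errors: str) -> str:
    """Stand-in for the LLM call. A real agent would get `errors` back as
    context; this stub just pretends to fix the missing colon."""
    return code.replace("def broken()", "def broken():")

def heal(code: str, max_rounds: int = 3) -> int:
    """Check -> feed errors back -> retry. Returns how many fix rounds
    were needed, or -1 if the code never passed."""
    for rounds in range(max_rounds + 1):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        ok, errors = run_checks(path)
        if ok:
            return rounds
        code = agent_fix(code, errors)
    return -1

print(heal("def broken()\n    return 1\n"))  # → 1
```

The max_rounds cap is the important part: without it, a confidently wrong agent loops forever, which is roughly the failure mode people report below.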


That's why you need Jetbrains local history feature.

Nuking a project or root is the best-case scenario.

The likely case is that it almost never does anything harmful. I've never once seen an LLM tell me to run rm -rf /

yet

I will need to rebuild that dev container.

I feel like anytime I try these "agentic" programming tools they always fall on their face.

Devin was pretty bad and honestly soaked up more time than it saved. I've tried Cursor Composer before and came away with bad results. I tried Copilot again just now with o3-mini and it just completely hallucinated up some fields into my project when I asked it to do something...

Am I taking crazy pills or do these tools kinda suck?


You might be asking it too much, or not giving it enough context.

I've found the Cursor Agent to work great when you give it a narrow scope and plenty of examples.


Perhaps, but at that point I feel like I'm spending more time feeding the tool the right prompt and context, going back and forth with corrections, etc... when I could just write the code myself with less time and hassle.

I've definitely had far more success with using AI as a fuzzy search or asking it for one-off pieces of functionality. Any time I ask it to interact directly inside my codebase, it usually fails.


I keep going back and forth on whether Agents are good for the software development discipline.

While I think it's extremely short-sighted that we continue to push full steam ahead on AI automating away jobs, I can't deny that LLMs have given my development flow a decent productivity boost. 80% of the time, my workflow with Cursor looks similar to the golden path depicted in this blog post - outline the changes I want made -> review the code -> suggest edits/iterate -> ship it. There's undoubtedly a class of problems where this feature can slot in and start chipping away immediately.

The other 20% of the time, Cursor will hit a wall and is unable to complete the task through just prompting. It will either introduce a subtle bug in its logic, or come across an error that it incorrectly diagnoses. These stumbles can happen for a variety of reasons:

  1. Poorly documented code - the LLM infers the wrong responsibility for a piece of code, or is led astray by old comments

  2. Misleading or unhelpful errors from 1st/3rd party libraries

  3. Task is too complex - perhaps I asked for more than I should have
In any case, the "self-healing" functionality that Agents rely on to iterate is often insufficient. Prompting for a fix usually just leads me in circles or further down the path of a bad solution. In these instances, I have to drop the coding assistant altogether and do things the old fashioned way - gain a sufficient understanding of the code and figure out where the LLM went wrong (or just write the solution from scratch).

I guess going back to my initial point, it feels like the easy answer is that Agents are good if you're a senior/experienced developer. This means that in the short-term the demand for junior engineers will dry up, since we have Agents to do the rote work, but doesn't this mean that we're effectively choking out the pipeline for experienced devs? Though they're low in complexity/value, the tasks we will handoff to Agents are immeasurably useful for building software fundamentals.

It seems like in 2025 we've suddenly forgotten about "teaching a man to fish"...


Can anyone speak to whether it's worth going back to Copilot from Cursor? On the face of it, $10 a month for unlimited messages looks compelling. Is it really unlimited? From these videos it's starting to look pretty similar to Cursor...

I want to know as well. For me, everything I tested so far still can't beat the autocomplete of Cursor in both speed and intelligence…

Update: I have used co-pilot agent mode for a couple of hours today.

It's definitely catching up with Cursor but not there yet. In particular:

- Edits take quite a bit longer to apply, breaking flow

- Autocomplete predictions (equiv. of Cursor Tab) not as good

But in the past 6 months or so it's gone from being pretty hopeless to very useful. If I was forced to use it instead of Cursor it wouldn't be a huge deal any more.


I've been switching back and forth a bit at work recently, and I find Cursor still has a slight edge.

I use both too.

1 year ago Cursor was way ahead. Copilot had only one model and it was 4k input 4k output and it was forgetting about the previous reply. It was horrible.

One year later, the input context is at 128k tokens if you use VS Code Insiders (64k for stable). You have multiple models, etc. They still cut corners (o1 barely answers; I guess they severely restrict the amount of output tokens) but Sonnet 3.5 works surprisingly well.

They do have rate limit, you can check their issues tracker on GitHub, there are complaints about rate limits every day.

All in all, for $10 it’s good value. Cursor is also great for $20 and you get more models and more features.

Copilot is catching up fast, but they are not there yet. But they finally woke up.


https://github.blog/changelog/2025-02-06-next-edit-suggestio...

Next Edit might finally compete with Cursor Tab. I have not tested it yet though


Thanks - having tried Co-Pilot again today after a 6-month Cursor hiatus, I think this is a good summary

I am so over this LLM obsession. Please let it end

Sorry, this is as paradigmatic a shift as microcomputers, the consumer internet and smartphones.

If you really really hate it, you'll either have to leave the industry or stay way out of the mainstream.


Those shifts were plainly and immediately obvious.

LLMs have yet to deliver on any of their promises and every interaction with them shows the fundamental limitations.


I am so over this internet/computer/combustion engine obsession. Please let it end ;)

I downloaded Insiders and installed Github Copilot Chat, getting:

"GitHub Copilot Chat is not compatible with this version of VS Code. Please make sure that you have the latest versions of the extension and VS Code installed."


Same here

Sorry about that. Can you switch to copilot chat pre-release extension. Should be a big button "switch to pre-release".

We are tracking this issue here https://github.com/microsoft/vscode/issues/239836


How is anyone surprised by their incentive to replace SWE jobs? SWEs take an immensely big piece of companies' budgets, and that pain is what has led us to where we are now.

Remember, this pain is also human made. We created how software was made, people latched onto it, acted as if it was a science, not realizing that humans can also CHANGE the way software is made.

You can't change physics, but you can change human-made concepts (up to a certain degree of course, CS itself is still key).


Was using Github copilot with VSCode. Found it really helpful for small things.

I gave Codeium Windsurf a spin to evaluate it on an internal app we built: a simple SvelteKit/TypeScript app on top of Twilio. It's been on the todo list to host it (on Lightsail), so the suits can use it themselves.

Asked Windsurf to enhance some capabilities: adding a store library backing to dynamodb.

Windsurf's code failed to run. svelte-check reported 6 errors. I asked Windsurf to fix the errors, and it did, creating 10 errors. One more repeat resulted in 16 errors. If I wasn't busy, I would have seen how many errors it would get up to before I got bored. It felt like repeatedly opening a JPEG and resaving it.

Giving up on using Windsurf. Gonna try Cursor next. After that, back to Github.


So what happens when GitHub's auto-SWE is good enough to take on "Write me a GitHub clone, but with additional features X, Y, and Z"? Will they regret it?

I know this is slightly far-fetched and AI-coders are coming regardless of if GitHub is working on it, but it does seem like these companies are destroying part of their moat (codebase and SW infrastructure). (I also realize their bigger moat is brand and existing user base).


If that works, they can always put more money into improving GitHub.

I see this as a technology that "lifts all boats" but like so many technologies, it does lift moneyed boats a lot further.


GitHub's real moat is their good and responsive support (even for free users), infra (GH Actions, GHCR, etc.), and value as social network. Indeed, people use GitHub because all their peers do (network effect).

Gitea focuses on self-hosting AFAIK


I mean, we have GitLab, Forgejo, Gitea, Sourcehut, and many others all vaguely in that space and they're not eating GitHub's breakfast. If anything, it reinforces just how wide that moat of brand, momentum, and influence/power is, and it might get even wider in an AI world.

Does anyone remember 4GL languages[1] and CASE tools that were all the rage in the '90s and were supposed to make software developers obsolete?

[1] https://en.wikipedia.org/wiki/Fourth-generation_programming_...


What's the current best free ($) coding assistant? I like Gemini + VS Code, but it seems a little hamstrung by what VS Code extensions are allowed to do.

I'm actually curious about this too.

I've been experimenting with having Claude write the code and DeepSeek analyze, pasting the analysis into Claude to modify the code, rinse and repeat.

Surprisingly, or maybe not, they have come up with an almost fully standards-compliant LALR parser for APL -- which is notoriously hard to parse. DeepSeek gets in there "nice and deep, boy" and shakes out all the corner cases while Claude does a fairly good job of implementing the fixes without breaking anything else. Just a few more rounds and...

But, as one might imagine, this isn't very practical for anything other than just playing around as I have to copy/paste code into the two different chat windows.

I suppose I could get them to write an extension for gedit (my preferred 'code editor'), hmm...
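Since both models have APIs, the copy/paste round-trip could be scripted. A runnable sketch with both model calls stubbed out (the function names, sample output, and critique strings here are invented; a real version would call the vendors' APIs with the previous output as context):

```python
def write_code(task: str, feedback: str = "") -> str:
    # stand-in for the code-writing model (Claude, in the workflow above);
    # a real version would send `feedback` back as part of the prompt
    return "parse_apl()  # revised" if feedback else "parse_apl()  # draft"

def review_code(code: str) -> str:
    # stand-in for the reviewing model (DeepSeek above)
    return "LGTM" if "revised" in code else "draft misses edge cases"

def round_trip(task: str, max_rounds: int = 5):
    """Automate the copy/paste loop: writer -> reviewer -> writer -> ...
    until the reviewer approves or we hit the round limit."""
    feedback = ""
    code = ""
    for rounds in range(1, max_rounds + 1):
        code = write_code(task, feedback)
        feedback = review_code(code)
        if feedback == "LGTM":
            return code, rounds
    return code, max_rounds

code, rounds = round_trip("LALR parser for APL")
print(rounds)  # → 2
```

With the stubs swapped for real API calls, the two chat windows collapse into one loop, and the round limit keeps the two models from arguing forever.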


I get by just fine with Codeium on VS Code, although lately I have been getting some crashes

vscode with cline or roo + openrouter + free models. The latest free gemini ones are pretty ok.

The post appears to have a lot of points in a relatively short amount of time (165 in 3h). I wonder why it has fallen of the front page of hn.

I came here by submitting the link. Indeed it’s a little suspicious

Wonderful! Another thing that will have downtime from the folks that brought you GitHub Actions (which goes down every month)

Now when GitHub Copilot and its agents go down, are you just going to be waiting for them to fix your critical issue, or will you just roll your sleeves up and do it yourself?

Can't wait for these agents to all stop working all at the same time the moment GitHub has another outage.


For someone who wants a succinct summary of all the top comments summarized in simple English:

https://chat.deepseek.com/a/chat/s/9a9c6519-9fd1-4b8a-87c0-7...



This was sorely missing. This might get me to give it another shot in case they have finally un-fucked their suggestions, which have just been getting worse ever since the release.

I don’t see how this is a net positive for programmers, and for people in general. It’s hard to fathom why anyone worked on this.

There is not a limited amount of coding work that is available to do in the world. Many more things will be automated if automation is cheaper and easier, and no matter how much software development work is done by bots, there will always be work for people to do.

And this is how we get automated out of a job. To a product announcement.

You must be a pretty bad programmer if this can take your job.

Windsurf is a solid tool in this space. CoPilot and Cursor are way behind

Time to quit using Github?

awakened by replit/cursor

Too late, we all went to cursor.

Who is 'all'?

not sure, but apparently

> Cursor is the fastest-growing SaaS company of all time from $1M to $100M in ARR, hitting the $100M milestone in roughly 12 months at the end of 2024—faster than Wiz (18 months), Deel (20 months), and Ramp (24 months).

Source: https://sacra.com/c/cursor


Makes sense cause everyone went to Wiz, Deel and Ramp.

The autonomous agent stuff is what's had me worried the past year as lots of open source projects have popped up that offer similar capabilities. It's really cool technology but it will replace humans, no matter how garbage the code quality is. I've seen garbage from humans, I've seen garbage from AI. As long as metrics are being met, business does not care how they achieve their goals.

I for one am going to welcome our new agent underlords as I’m all about self preservation.


VS Code Speech makes Siri and the billions spent on it look even more shit than it already is.

Wow, Copilot is so behind Cursor it's pathetic

And the M365 copilot is utter trash as well. It's the only thing we're allowed to use at work and it can barely write functional PowerShell.

It's light years ahead on the name. Remember that book? Guys, remember that book that says adding more programmers actually doesn't speed up a project but extends its completion time!

I read that book; I am learned. The unlearned probably think the name refers to a plane's copilot.



