keeda's comments | Hacker News

Technology is primarily an accelerator. It just accelerates things, both good and bad, that were already in motion or at least already possible. Which is why things like healthcare got better but things like wealth inequality got worse.

What needs to change is the system that the technology exists inside, because otherwise removing the technology will still keep us on the same trajectory to the same destination, only much slower and possibly with much more pain.

What we're seeing right now with layoffs and everything else is simply an acceleration of our current trajectory. We were always going to get here, AI just got us here a few decades ahead of schedule.

For once, however, we have a technology that could let us change this trajectory. I've said this before, but the capital class held so much power because it took a lot of people, and hence a lot of capital, to take on large endeavors that created new wealth. But things were rigged such that those who provided the capital also captured most of that new wealth.

Now, just as AI lets companies (i.e. capital) do the same things with fewer people, it also lets people do the work of entire companies by themselves... i.e. without capital. That is a big enough shift in power dynamics to alter the trajectory in previously inconceivable ways.


> The world only needs so much software.

Around the time of the dot com crash, there was a decent amount of rhetoric advising students and job seekers against getting into the software industry, because it was getting "too saturated." The thinking was there's just not that much work to go around, especially for the number of people flocking to the field. And the crash just reinforced that narrative.

But even as a student back then, I could tell that there was unlimited scope for software. Pretty much any cognitive thing we do manually could be done in software. I once idly tried to enumerate those tasks and quickly realized there was soooo much to do. Plus, I also understood that the more you do things in a new way, the more things pop up that we haven't even imagined yet. The possibilities were countless. It was clear that the "saturation" narrative stemmed from a lack of imagination and an incomplete understanding of what software really was.

I just knew that this field would never get saturated because it was impossible to run out of things to write software for.

But these days...

I mean, I know we will always have new software to build as things evolve, which they will do faster than ever with AI. But these days, I wonder if it's now possible to write software faster than we can imagine new things to do.


> Pretty much any cognitive thing we do manually could be done in software.

Yes, although I suggest being careful with that kind of thinking.

https://www.orwell.ru/library/novels/The_Road_to_Wigan_Pier/...


Ooh, I hadn't read that one; I've put it on my list. I couldn't read the page properly because ads kept popping up and making the page jump around... but it seems the linked section is about displacement of workers? If so, that's always been true of all technology, but that's less a problem with the technology and more with the social system it is applied in. I just posted this comment elsewhere that may be relevant: https://news.ycombinator.com/item?id=48078930

It's not about the displacement of workers. It makes a fundamental, principles-level objection to unbounded "progress". It's not an absolute argument, and Orwell himself says so, but it is worth keeping in mind.

Try reading it here: https://www.george-orwell.org/The_Road_to_Wigan_Pier/11.html


I've been laid off from every job I've held (and once I was even re-hired a month later!) so I know the feeling. There seem to be others here who are also impacted and I fear the overall trend will only continue, so I wrote up my thoughts on how to future-proof your job search. I do think the GP comment about institutional knowledge could be a key part of it. Hope this helps: https://news.ycombinator.com/item?id=48067459

A few people here have been impacted, so I want to talk about something constructive that could help them. As someone who has been observing industry trends from the outside for a while, my advice to those looking to get hired these days: build something useful from scratch, on your own, that you can show off, as soon as possible.

The buzzword everyone is looking for is "high-agency." (No, not those agents, but yes, those will help.) Basically employers want someone who will start something from scratch and take it to the finish line by themselves.

The interesting thing about this is that it is, by definition, not something you can put on your resume; it is something you show, not tell.

Yes, you need to do this even as you go through the absolute hell that is a job search. But trust me, this will a) help you get a much better job, and b) pay off in the long run, throughout and beyond your career. This will be the most valuable skill in the future.

You don’t need to use AI, but looking at the timeframes and skills in demand, yes, you very likely want to use AI.

A few other thoughts:

1. Target an area you are very familiar with. This will sharply cut down the time to MVP. This will be a challenge for the more junior folks, who should consider reaching out to senior mentors. Mentors, consider outsourcing a suitable personal project to them.

2. It could be something you are an expert on at work, if your employment contract and IP laws allow. As a bonus, releasing this as open source, or even a competing product if you’re so inclined, will have that intangible bonus of sticking it to your ex-employer.

3. Even if you're heavily using AI, keep your hands-on skills active. Most companies still do old-school leetcode interviews.

4. Bonus if you do something multi-disciplinary. Sprinkle in a domain you have no background in -- design, writing, sales, marketing, data science, frontend, whatever. You'll definitely need AI for this, and even when you make mistakes, few will harshly judge somebody down on their luck trying to expand their boundaries.

Hope this helps, and all the best!


I understand that your advice comes from the right place. However, "High-agency" is the "Full-stack Engineer" of the AI era.

A single salary covering many disparate positions and roles. It's been reworded b/c with AI, apparently you don't even need to be an engineer (or expect to be paid as one) anymore!

Nothing new under the sun.


Hmm, I think "generalist" is the more current term for "Full-Stack Engineer." But that's more about technical skills. "High-agency" is more a combination of personality traits and technical ability.

So in terms of buzzwords it would be something like generalist + self-starter + go-getter + hustler + finisher.

They won't say it, but everyone wants basically a solo founder, except one who (to your point) gets paid as an employee.

Which is why I am saying this is going to be the most important skill. If they don't pay you enough, you could just go be a solo founder for real.


The bottleneck was ALWAYS the code, which is why everything was built around it.

This is the key line right here:

> Negotiating, agreeing, communicating the shared picture of what we are building has become the work. And it’s just as hard as it was.

But if software (via code) is what we ultimately produce and sell, how did we get here? The main reason is the following lemma:

Lemma A: "The loss of fidelity of what can fit in any one person's head scales superlinearly (exponentially?) as the scope of work scales up." Or more colloquially: "It is impossible to fit a large scope of work in any one person's head." This is largely because any non-trivial task is a fractal of smaller dependencies.

The chain of logic to today's situation is then obvious:

1. Writing code requires humans who are slow and expensive.

2. To do large things we need large groups of humans.

3. As the number of humans grows (like beyond 5? 10?) it becomes impossible to keep them aligned, largely because of Lemma A.

4. We need to coordinate these humans, so: enter managers!

5. But even a manager can't manage too many people and coordinate with all other managers because of, again, Lemma A. Enter hierarchy!

6. As the size of the organization grows, so does the coordination overhead (exponentially, if Google AI overview is to be believed) until, as that quote surmises, the majority of the work is just that.

7. Coordination costs (or "Conway Overhead" as I call them) are very well understood in the literature, but this also brings in undesirable dynamics like bureaucracy, politics, organizational metrics (also due to Lemma A, but now triggering Goodhart's law!) and eventually territorial disputes and empire-building. Lots of friction and subtle misalignments.

As you can see, the overhead scales superlinearly with the number of leaf workers added. And for the same reason, once the leaf workers are decimated (because one worker can now do the work of a whole team), the entire organizational overhead above them shrinks, which is also a superlinear change! Assuming a conservative 2:1 reduction in ICs and a 1:5 manager:reportee ratio, a simplistic hierarchy that was:

1 CEO -> 5 VPs -> 25 Dirs -> 125 Managers -> 625 ICs

now becomes something like:

1 CEO -> 12 SVPs -> 60 Sr. Managers -> 310 Sr. ICs.

Not only did that eliminate some 315 ICs (mostly junior, I suspect), it took out 65 managers and removed an entire layer of Directors from the hierarchy! Worse, the leaf layer will probably get decimated 5:1, not 2:1, and this will also eliminate coordination-specific roles like Program Managers. What remains of the hierarchy is far fewer, but mostly more experienced (or politically savvy), people. They will be paid more, but not superlinearly more, of course; what do you think this is, socialism?

It's very much a pyramid-shaped house of cards built on that one bottleneck. And this bottleneck applies to pretty much all knowledge work. Once that bottleneck opens up, everything collapses. This is why I fear that the coming job changes are going to be much more disruptive than people realize, something I'm extra concerned about as a parent of high-schoolers.


I think all coding will become vibe coding, but it will be no less an engineering discipline.

Note: I still review pretty much every line of code that I own, regardless of who generates it, and I see the problems with agents very clearly... but I can also see the trends.

My take: Instead of crafting code, engineering will shift to crafting bespoke, comprehensive validation mechanisms for the results of the agents' work, such that those results are technically (maybe even mathematically) provable as far as possible, and any non-provable validations can be reviewed quickly by a human. I would also bet the review mechanisms will be primarily visual, because that is the highest-bandwidth input available to us.

By comprehensive validations I don't mean just tests, but multiple overlapping, interlocking levels of tests and metrics. Like, I don't just have an E2E test for the UI, I have an overlapping test for expected changes in the backend DB. And in some cases I generate so many test cases that I don't check for individual rows, I look at the distribution of data before and after the test. I have very few unit tests, but I do have performance tests! I color-code some validation results so that if something breaks I instantly know what it may be.

All of this is overkill to do manually but is a breeze with agents, and over time really enables moving fast without breaking things. I also notice I have to add very few new validations for new code changes these days, so once the upfront cost is paid, the dividends roll in for a long time.

Now, I had to think deeply about the most effective set of technical constraints that give me the most confidence while accounting for the foibles of the LLMs. And all of this is specific to my projects, not much can be generalized other than high-level principles like "multiple interlocking tests." Each project will need its own custom validation (note: not just "test") suites which are very specific to its architecture and technical details.

So this is still engineering, but it will be vibe coding in the sense that we almost never look at the code, we just look at the results.


This is complete insanity to anyone who actually works on production-grade, hundred-billion-dollar systems that are critical to the functioning of the global economy.

Other than for your own pet projects, almost none of what you said has a place in "vibe engineering" or "vibe coding" for serious software engineering products that are needed in life-and-death situations.


That may be true for highly critical systems, but those are a tiny, tiny, tiny minority of all software projects. I mean, how many engineers work on aviation or automotive or X-ray machine or other life-and-death code compared to pretty much anything else?

And not all "production-grade, hundred billion dollar systems" are that critical. Like, Claude Code, as we all know, is clearly vibe-coded and is already a 10-billion-dollar (and rapidly growing!) system. Google Search and various Meta apps meet those criteria, and people are already using LLMs on that code, and will soon be "vibe coding" as I described it.

AWS meets those criteria and has already had an LLM-caused outage! But that's not stopping them from doing even more AI coding. In fact, I bet they will invest in more validation suites instead, because those are a good idea anyway. After all, all the cloud providers were having outages long before the age of LLMs.

The thing most people are missing is that code is cheap, and so automated validations are cheap, and you get more bang for the buck by throwing more code in the form of extensive tests and validations at it than human attention.

Edited to add: I think I can rephrase the last line better thus: you get more bang for the buck by throwing human attention at extensive automated tests and validations of the code rather than at the code itself.


This is you:

>> I think all coding will become vibe coding...

Nope. First of all, let's get the true definition of "vibe coding" completely clear, from Karpathy's first mention of it. From [0]:

>> "There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." [0]

>> "I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away." [0]

So, by the true definition, you are arguing that all coding will become "vibe coding", and that includes mission-critical software. Not even Karpathy would go as far as that, and even he only claims it "mostly" works.

Responsibility is what cannot be vibe-coded. The major cloud providers and the tech companies that own them have contracts with their customers that are worth billions in revenue. That is why they cannot afford to "vibe-code" infra, when a key part of that infra going down or breaking can cost them $100M+ an hour.

So:

> Like, Claude Code as we all know is clearly vibe-coded and is already a 10-billion (and rapidly increasing!) dollar system.

That is not vibe-coded anymore; it is maintained by software engineers who look at the code at all times and review it daily before merging any changes, AI-generated or not.

> Google Search and various Meta apps meet those criteria and people are already using LLMs on that code, and will soon be "vibe coding" as I described it.

Nope. As Karpathy described it, that would never happen, and human software engineers will be reviewing the agents' code at all times. But then that would not be vibe coding, would it?

> AWS meets that criteria and has already had an LLM-caused outage!

Are they vibe coding now after that outage? I bet that they are not.

> After all, all the cloud providers have been having outages long before the age of LLMs.

That isn't the point. Someone was held to account for the outages and had to explain why it happened.

They would lose trust, plus billions of dollars, if they admitted that they vibe-coded their entire infra and had zero engineers who understood why it went wrong.

> The thing most people are missing is that code is cheap, and so automated validations are cheap, and you get more bang for the buck by throwing more code in the form of extensive tests and validations at it than human attention.

The risk is amplified with the company's reputation on the line, and that is very expensive to lose. I'm talking hundreds of billions annually; a 10% loss of global revenue due to constant outages can cause the stock to fall.

So do you see how the contradiction in what you said earlier about AWS actually strengthens my point about the limitations of vibe coding, especially for mission-critical software?

[0] https://x.com/karpathy/status/1886192184808149383


Even ignoring the semantic drift that has happened since he coined the term (on which there have already been a few HN threads), the key part of Karpathy's definition is "...and forget that the code even exists." Which is why I was careful to phrase it thus:

> So this is still engineering, but it will be vibe coding in the sense that we almost never look at the code, we just look at the results.

It is pretty clear that "giving in to the vibes" is simply "looking at the results." But I'm predicting that it is going to be an engineering discipline in itself. Note that I started with (emphasis added):

> I think all coding will become vibe coding but it will be no less an engineering discipline.

And then I went on to explain the engineering aspect as extensive technical validation. There is a role called Validation Engineer in many industries, including semiconductors, and I posit that it's going to be everybody's primary role soon.

> Responsibility is what cannot be vibe-coded. ... That isn't the point. Someone was held to account for the outages and had to explain why it happened.

I never implied a loss of accountability anywhere, but I completely agree, and have posted about it before: https://news.ycombinator.com/item?id=46319851

That is still orthogonal to vibe-coding. People have been sloppy without vibe-coding and were still held accountable. The flaw is in assuming all vibe-coding is slop; my point is that validation will matter much more than the code, which means soon we may never look at the code. In fact, extensive automated validation is probably a better signal for accountability than "We looked at the code very, very carefully."


> Even ignoring the semantic drift that has happened since he coined the term (on which there have already been a few HN threads), the key part of Karpathy's definition is "...and forget that the code even exists." Which is why I was careful to phrase it thus:

The point of bringing up the exact definition is to draw a clear line around what defines "vibe coding" in the first place. There is no "semantic drift"; Karpathy's entire tweet is the definition, and I don't think you can separate out any part of it at all.

In your first post, you mentioned "vibe coding", and by definition it includes not looking at the code, accepting all changes the AI agent suggests, and copy-pasting errors back to the agent, without any understanding, until they are fixed; exactly how Karpathy first defined it.

> It is pretty clear that "giving in to the vibes" is simply "looking at the results."

Seems like you don't even understand what vibe coding is. "Giving in to the vibes" is not just looking at the results; it is not looking at the code, and accepting all the agent's output without any understanding of that code.

> But I'm predicting that it is going to be an engineering discipline in itself. Note that I started with (emphasis added):

"Vibe coding" is not engineering anymore than "coding" is not software engineering.

> And then I went on to explain the engineering aspect as extensive technical validation. There is a role called Validation Engineers in many industries including semiconductors, and I posit that it's going to be everybody's primary role soon.

It already is, in the form of quality assurance. That has existed for years, alongside formal verification engineers and validation engineers, and "vibe coding" is incompatible with all of it and breaks the software development lifecycle.

These roles were already a given at many companies, so there is nothing new that you actually said.


Almost no one works on stuff like that, so congrats on finding a corner case I guess.

Complete nonsense.

There are people who write software for hedge funds, quant firms, aviation and defense systems, data center providers, major telecom services used by hospitals and emergency services, semiconductor firms, and the big oil and energy companies. That is NOT "almost no-one", and these companies make hundreds of billions of dollars a year on average.

This is even before me mentioning big tech.

Perhaps the work most people here on this site are doing is not serious, so it can be totally vibe-coded: toy projects that bring in close to $0, where the company doesn't care.

What I am talking about is the software that is responsible for being the core revenue driver of the business and it being also mission critical.


I could list dozens more sectors of the software industry that would far outnumber those you listed. And even within those you listed, those working on the mission critical parts are a very tiny fraction. Statistically, that is almost no-one.

E.g. there are hundreds of millions of lines of code in a car, but the vast majority of that concerns non-critical parts like the dashboard; the primary Engine Control Unit has like ~10K LoC, and the number of people who work on it is proportionally smaller.

And if you think that is very well-designed code, here's something to help you sleep better: https://www.reddit.com/r/coding/comments/384mjp/nasa_softwar...


I would prefer hedge funds and traders to vibe code their software. Heck, I am willing to do it if I must.

You would never say that if you interviewed at a hedge fund or quant firm; they'd laugh at you before they even looked at your resume.

I am sorry I hurt you, or confused you. See, the intent of my post was to convey how bad a programmer I would be, and how badly I would vibe-code their software: so badly that they lose all their money. Because I am not a huge fan of hedge funds, or anything that makes the 1% richer. It was meant as a joke, not a job request.

This take is premature. We forget that AI is seamless for contexts that are in the training datasets (popular programming languages, open source libraries, well-documented algorithms, etc.).

It very obviously hallucinates when it comes to new programming languages, new domains, and uncommon or poorly documented contexts. And AI is very poor at (3D) spatial visualization, making AI-assisted CAD development incredibly hard.

AI is not capable of genuine logical thinking from fundamentals yet; these are highly trained, curated models.


LoC is perfectly fine as a metric for engineering output. It is terrible as a standalone measure of engineering productivity, and the problems occur when one tries to use it as such.

It's still useful, however, because that is the only metric that is instantly intuitively understandable and comparable across a wide variety of contexts, i.e. across companies and teams and languages and applications.

As we know, within the same team working on the same product, a 1000-LoC diff could take less time than a one-line bug fix that took days to debug. Hence we really cannot compare PRs or product features or story points across contexts. If the industry could come up with a standard measure of developer productivity, you can bet everyone would use it, but it's basically infeasible for this very reason.

So, when such comparisons are made (and in this case it was clearly a colloquial usage), it helps to assume the context remains the same. Like, a team A working on product P at company C using tech stack T with specific software quality processes Q produced N1 lines of code yesterday, but today with AI they're producing N2 lines of code. Over time the delta between N1 and N2 approximates the actual impact.

(As an aside, this is also what most of the rigorous studies in AI-assisted developer productivity have done: measure PRs across the same cohorts over time with and without AI, like an A/B test.)


Snake oil may be a bit strong, because snake oil never works (except maybe as placebo?) whereas anything with an LLM, even though stochastic, has a pretty high chance of working.

> ... you also realize that promised productivity gains are also snake oil because reading code and building a mental model is way harder than having a mental model and writing it into code.

Not really, though it depends on the code; reading code is a skill that gets easier with practice, like any other. This happens any time you're reading much more code than you're writing (e.g. whenever you have to work with a large, sprawling codebase that existed long before you touched it).

What makes it even easier, though, is if you're armed with an existing mental model of the code, either gleaned through documentation, or past experience with the code, or poking your colleagues.

And you can do this with agents too! I usually already have a good mental model of the code before I prompt the AI. It requires decomposing the tasks a bit carefully, but because I have a good idea of what the code should look like, reviewing the generated code is a breeze. It's like reading a book I've read before. Or, much more rarely, there's something wrong and it jumps out at me right away, so I catch most issues early. Either way the speed up is significant.


I think the placebo effect might be a decent comparison. It works most of the time, and you don't worry about it as long as you fully believe in its efficacy. However, once the illusion is shattered, the positive effects are diminished, and you can never fully trust the solution again.

> has a pretty high chance of working.

for MVPs, mock-ups, prototypes, or in the hands of an expert coder. You can't let them go unsupervised. The promise of automated intelligence falls far short of the reality.


Not only does it "have a high chance of working", you can pay more to make it more reliable. It really is striking to run an agent harness like OpenClaw on a smaller or quantised model; it really makes you realise how much we take for granted from SOTA models, things that were totally impossible just a year ago in terms of complex, generally reliable tool use.

A "pretty high chance" isn't the impression the end user often has, though, nor what they intend to rely on.

Indeed, and it is a complicated problem to solve. A GUI or CLI can hide footguns or make them harder to misuse. But an AI agent is perfectly happy to use a wrecking ball to drive a nail, without any second thought or confirmation.

It’s a human articulation problem.

When it receives generic, vague input, it is free to interpret it according to how its corpus fires, just like in any human interaction.

Articulating better is like writing a sentence that will stand the test of model updates.


Even then. I don’t have an example off the top of my head, but even perfectly clear sentences can lead the agent to strange places. Even between humans miscommunication is easy, but anyone sensible would ask for confirmation if their interpretation seemed weird. The LLM, though, very rarely questions the user.

I don’t think it’s fair to blame the user here. The tool must be operated by normal users.


I'm trying to think of other types of tooling that normal users can all use equally well, or in the best ways possible.

Totally agreed; to me, data is just like code: extremely valuable for the functionality it provides, but in most other ways a serious liability. That said:

> I don't think "obscurity" really buys you much (especially these days, with LLMs).

Actually I think it does so even more with LLMs. As has been posited before (particularly on the threads about open source projects going closed source) security comes down to who has paid more attention to the code, the attacker or the defender. And of course, these days attention is measured in tokens.

We know that LLMs are pretty capable of reverse-engineering an application's logic, but I would bet it takes many more tokens than reading the code or other public information directly. As such, obscurity adds an important layer to security: it increases the costs on the attacker.

Security has always been a numbers game, but now the numbers will overwhelmingly be tokens and scale. If the defenders can cheaply raise the costs on the attackers by adding simple layers of obscurity, it can act as a significant deterrent at scale. I wonder if we'll even see new obfuscation techniques that are cheap to implement but targeted specifically at LLMs...


Very good point.

I never shipped an app in VB6 (I only dabbled, though extensively), but I did work professionally with a huge range of UI technologies, including vanilla HTML/JS, ColdFusion, Java Applets, AWT/Swing, C++/QT, Flex/ActionScript, Objective-C/iOS/Cocoa, React/React Native... and with each of them I would pine for the ease of doing things in VB6.

I mean, I understood technically why each framework had its complexities, especially when it came to dynamically scaling to various displays. But there were so many times, especially for very simple use-cases, I'd be like "LET ME JUST CLICK AND DRAG YOU TO THE CONFIGURATION I WANT YOU TO BE GODDAMMIT."

