
This is probably just continued post-COVID demand normalization. Basically all software and gaming got a massive bump that was obviously going to have an unsustainable component to it. But in many cases companies planned like it was totally sustainable and would continue. That’s why even super profitable companies like Microsoft and Epic are doing layoffs in 2023.

The other reason of course is that they went a bit crazy with the hiring sprees and accumulated some unproductive devs who are probably a net drag. That’s ultimately a healthy force in the economy as those people can do more in smaller firms with less in-house expertise (albeit at lower wages).

I wonder also how much of a driver new programmer productivity tools like copilot and ChatGPT are factoring into this. If your top 10% productivity workers just got a 30%+ productivity boost, that means you can easily shed the bottom 20% productive workers and still be ahead of the game.




> I wonder also how much of a driver new programmer productivity tools like copilot and ChatGPT are factoring into this. If your top 10% productivity workers just got a 30%+ productivity boost, that means you can easily shed the bottom 20% productive workers and still be ahead of the game.

I can confidently say that this factors in 0%. Layoffs were happening in the industry even before the LLM craze.

And a 30% overall productivity boost for a "top" developer is a massive overestimate. Your best developers aren't spending their days writing boilerplate unit tests.


Second this. From using LLMs within my daily IC workflow, it's not that developers are getting a productivity boost, it's that our job became more enjoyable. I didn't even realize how much frustration builds up over time while relying on Google search for things such as that one obscure API method I forgot the name of. I have to dig and dig... Somehow using GPT in its place has brought a noticeable bump to my quality of life. Most of my dev colleagues share this sentiment. Impact on the tech economy from this? Unknown.


I would imagine that if your job is more enjoyable, you're getting a productivity boost.

Unless you're suggesting that using an llm is just as efficient as not, but is somehow more fun?


Possibly, but why assume that developers are passing on that increased productivity without expecting a relative pay increase?


Why would they get a pay increase? Everyone can use LLMs.


Everyone is looking for any reason when it's obviously a massive recession due to govt. spending.


It's monetary policy tightening (effective rate 0% -> ~5%), not government spending. There is no recession, just normalization (zero interest rate policy, or ZIRP, was an abnormality). Unemployment is close to its lowest levels in history because of structural demographics. When money gets more expensive, you must perform, adapt, or die as a business. Cash flow is king, and the profits juiced by the government stimulus mentioned above are simply evaporating as the Fed drains money out of the system.

Businesses are attempting to discover how to operate in a macro that makes labor and money more expensive than they've been accustomed to over the last 15-20 years. u/DoughnutHole touches on another important point: businesses and capital are clinging to returns that might no longer be obtainable in this new macro (as we've been papering over reality with cheap credit and financialization for some time). What is going to break first? Who knows; it's the first time we've been someplace like this. Everyone is going to fight like hell to defend their piece of the pie (note the broad layoffs and the return to hiring shortly after, as well as activist investors demanding that labor costs be trimmed aggressively).

https://fred.stlouisfed.org/series/FEDFUNDS (Pick 10Y time scale)

https://fred.stlouisfed.org/series/WM2NS

https://www.goldmansachs.com/intelligence/pages/why-the-us-m...

https://fedinprint.org/item/fedkeb/96953 | https://www.kansascityfed.org/Economic%20Bulletin/documents/... ("The labor market has so far shown remarkable resilience to the Federal Reserve’s recent monetary policy tightening. Severe labor shortages in the post-pandemic era have led many employers to hold on to workers and hire less-skilled workers—even though they expect demand for their goods or services to weaken in the future. As a result, unemployment remains low, and labor productivity has declined.")


>There is no recession, just normalization (zero interest rate policy, or ZIRP, was an abnormality).

I mean, both can be true. ZIRP wasn't made over the recessions either. Tech has been in that rush for at least 12 years now, if not longer. You mention 15-20 years after all.

We had at least one recession during your time period. It doesn't preclude another one.

>labor shortages in the post-pandemic era have led many employers to hold on to workers and hire less-skilled workers—even though they expect demand for their goods or services to weaken in the future. As a result, unemployment remains low, and labor productivity has declined."

I see this as a natural consequence of their actions, though. You require "Junior" engineers to have 3+ years of experience, discourage apprenticeships in favor of short internship stints and headhunting academia, and overall try to compartmentalize engineering roles. So it's no surprise that Gen Z labor won't be as skilled as the Gen X that raised entire industries from the ground up. It's due to (but not limited to):

- tech being more complex. How many people worked on Google search in 1999? How many work on search alone in 2023, let alone the other 30+ divisions Google has now.

- top talent more and more seeking out their own business ventures. Same questions as above. More startups and more Y Combinators to fund them. People really want to be or to fund "the next big thing" at a rate not seen since the Dot Com era.

- tech simply being ubiquitous. You can work in tech without being in a tech company. If you don't like the hustle of FAANG or high-frequency trading, there are plenty of places outside of tech that just need someone to maintain their website. This won't drive growth as much, but it pays the bills.

There are many factors behind this, but I will emphasize that people requiring top talent need to invest in it. Schools don't have the same goals as employers, and schools are becoming more cost-prohibitive as is.


This might be the most succinct description of where we're at right now that I've read all year.


Kind of - but it ignores the fact that the tightening was to fight inflation due to insane amounts of government spending. And the government just upped the ante and is spending more. Right now the national debt is growing faster than the economy. Expect everything to get a lot worse.


How do you explain the inflation found in EU countries, even ones with vastly different government spend and national debt profiles?

You're overfocusing on one single component of a fairly complex system. Government spending, COVID impacts (pent up demand, supply chain disruptions), a massive war in Europe impacting multiple crucial raw materials - they all have a part in the stagnation / borderline recession we're seeing. Anyone claiming it's "simple" or due to one single factor either has an agenda or doesn't understand how complex economics are.


We've had massive government spending for >20 years now. When looking for explanations for new conditions, look for what has changed, not what hasn't changed.


"I'm not obese because I've been over eating. I've always over eaten" The printing isn't new and neither are the recessions. They're just making them worse.


Good metaphor. One does not get suddenly obese, it happens gradually and steadily and is well correlated with overeating. Your metaphor illustrates my point, not yours.


How's that? If someone says "why am I so overweight?" do you say that it can't be the eating because that hasn't changed?


Yes.

If you've been eating the same for the last 20 years and you've been gradually gaining weight over the last 20 years, then it's your diet.

If you've been eating the same for the last 20 years and you suddenly gain weight, it's likely not your diet, it's something else -- exercise, illness, et cetera.


... no


I appreciate the kind words and the opportunity to present my analysis.


https://fred.stlouisfed.org/series/FEDFUNDS (Pick 10Y time scale)

The Max timescale is even more interesting. Almost every time one of those gray-shaded recessionary periods begins, it's preceded by a graph segment that looks a lot like the last couple of years.


Even if you agree that there was a recession the US exited it a year ago. The economy has been growing since last August.

This (and other layoffs) are a response to increased interest rates imposed to combat inflation. Companies don't have access to as cheap credit, so they're trying to appease shareholders by cutting costs to improve profitability.

This puts the brakes on the economy and may cause a recession if it goes too far. But the US economy is not currently in recession by any definition.


This specifically and undeniably is what is killing Unity too. They have always run on free money, they have literally never turned a profit in the history of the company, and with the end of 0% interest rate policy the music is over.

The fact that they borrowed big time to buy this adtech company (hence the push for “if you use our ads, no fee increase!”) and are now deeply, deeply underwater is forcing the issue; they have gone from losing $100-300m per year to losing $800m-1b per year. But, again, Unity has literally never turned a profit in the history of its existence.

The same thing is happening across the industry where the end of free money is putting some boondoggles out of business, burning out the waste that has resulted from 2+ decades of unconstrained tech growth in a ZIRP environment. A correction was inevitable and healthy, excessively cheap money has all kinds of noxious effects on an economy.

The most prominent is Xbox - I don't think anyone realized just how bad things have gotten for Xbox, but the FTC leaks are devastating: they reveal the Xbox division has been on the ropes for quite a while. They have been at severe risk of being closed if they can't make Game Pass numbers perform, and they've massively undershot the scenarios in which (two or three years ago) Phil Spencer said he'd close the division. It's entirely possible that Starfield is the last straw, and if they don't see a big bump in Game Pass subscribers from it (which doesn't seem to have happened) they may spin off the game-studio side and kill Xbox, or pivot it into a "nettop" that competes more with Apple TV and Switch Pro type hardware.

It’s also resulted in some businesses making some crazy pricing model changes and other things, out of a real or perceived need to get to profitability. The Unity change will kill the business, and things like Google spinning off its domain registry business (you need to use a third-party service to get a complete offering on Google Cloud?) are pound-foolish decisions that will/could have long-term negative consequences.

But the tide is going out and we’re seeing who’s swimming naked, as they say. There were a lot of businesses and projects that were only sustainable with 0% interest money, and companies are rightfully asking if this is worth it if they have to pay 7% or 8% interest. And that's exactly the type of unproductive work that needs to be burned out of the economy.


On the Xbox stuff, I wanted to give more explanation and sourcing/perspectives without bloating the above.

PS5 has won the console war. The Series X's hardware advantage has been subsumed by upscaling into a meaningless "runs 840p instead of 720p input res" thing that doesn't sell consoles, and Sony has actually organically built studios that pump out exclusive content that does draw in customers. Microsoft has largely failed at building that, and instead tried to buy more studios to make their content exclusive (Bethesda, tried to buy Nintendo and Activision, etc).

Microsoft has pretty explicitly pushed all their exclusives to PC as well, meaning the Series S and Game Pass are the only interesting things they have going. And Game Pass simply hasn't been doing the necessary numbers - there was a bump in 2020/2021 from COVID, when silicon shortages meant the Series S was the cheapest and most available route to a diversion during the COVID disruptions, and Game Pass gave you this ready library. But 2022 and 2023 have been the great Going-Outside, and the numbers have gone down, not up.

In the meantime the Series S is also this albatross around their neck - it sets a much lower hardware baseline for game compatibility, and they can't really push forward to next-gen without losing those subscribers (who are probably not going to "convert" again if it requires a $700 purchase or whatever). Like it or not, they are trapped by the Series S and Game Pass now, and there is no true next-gen console on the radar (unlike Sony), only a refresh.

Another problem is that all those Game Pass subscribers are using the "$1/mo for 5 years" crazy promo deals too, which is another perfect example of the crazy VC growth-hacking stuff that is being rightfully burned out.

The FTC leaks are absolutely devastating and reveal a company that is so on the ropes they're looking at closing the Xbox division, or pivoting it to nettop-style consoles and going after the Apple TV and Switch Pro (docked) market, perhaps with an upscaling-based ARM console. They are drastically underperforming the scenario in which Phil Spencer said he'd pull the plug. But it sure as fuck isn't going to be running like it currently is in another 5-10 years, unless they get a whole lot of market traction awfully fast.

I do think it's entirely possible that Starfield was their last shot - big flagship title from a big flagship studio - and if it didn't do the Game Pass numbers (and it didn't), this is the final straw. MS has very explicitly avoided weighing in on a true next-gen console, and maybe they pivot or kill the division off instead. And this leak is so embarrassing, and reveals how on fire things are internally, that the leak itself may push them over the edge too, on top of Starfield underperforming this month. This is C-suite level deliberation laid bare.

https://www.youtube.com/watch?v=2tJBC9zXYQ8&t=2511s

https://www.youtube.com/watch?v=OUPxytMLWzI

But it's stunning to see this (ostensibly) gaming pillar/institution fall apart like this... apparently they were swimming naked too. But I guess that's not surprising; their only real traction was during the XB360/PS3 era, and every single other time it's been a money pit they dump cash into in hopes of being profitable next generation... everyone kinda knew it intellectually, but without the scale of the losses being broken out separately, it was just an abstract point. Now we know: Xbox is losing so much cash they will probably go under shortly.

(which is a great reason to prevent them from acquiring more studios, especially when the goal is obviously to take their content exclusive to combat Sony's organic exclusives. That's anticompetitive to the core.)


Well, they aren't going to get up in front of the FTC and make themselves look huge and successful and dominating, now, are they?

Phil's entire job is to communicate the idea that they have to acquire third-party studios to keep the business running at all. "If you don't let us buy Activision, then we'll just be forced to fold our tents, and then there will be even LESS competition. You don't want THAT, do you?"

When in reality, Microsoft's attitude toward the Xbox division has always been more like Citizen Kane. "Why, yes, we lost $100 million last year, and we're on track to lose $150 million this year. At this rate, if things don't improve, we might have to shut this place down in... let's see... 12,000 years."


> Companies don't have access to as cheap credit, so they're trying to appease shareholders by cutting costs to improve profitability.

It's not just profitability and shareholders, although those are good indicators of whether or not you're overspending. If the cost of capital goes up, something has to give to pay for that.


It's a self-fulfilling prophecy, nothing more, and it's slowly fluctuating back.

An adjustment to COVID, low interest rates, and the Russian war.


Massive government spending? Sure. But "massive recession"? Really?


Recession = two quarters of negative growth. We've had a lot more. I'm not playing with the kafkaesque political redefinitions and find them pretty pathetic actually.


They've been playing games with the numbers, but it's true; we're looking at a depression, not a recession, as it's going to last much longer. The US debt-to-GDP ratio is > 120%, which by the IMF definition means the US is in an economic death spiral. By 2028 the payments we make on all that money they printed (at our expense) will only go to the interest and no longer the principal. After that the death spiral becomes irreversible, with US insolvency by 2042. Historically, inside the next ten years is when we have a depression/massive austerity for everyone, war, and/or a new monetary system. Which if you ask me is more stupid and irresponsible than a balanced budget amendment and single line item spending bills. Social Security will be insolvent in 10 years too, and I've also heard the Medicare liabilities are being hidden because they paint an even darker picture of how far gone things are.

We've known this for a while, BTW; I first learned about the timeline in 2014 watching the budget committee hearing on C-SPAN. They showed the slide with the timeline, yet no one talked about it. Austerity back then was scheduled to start by 2020, but we got stimulus instead, like they were steering into driving off the cliff.

This guy wrote and updated his book on the topic. https://www.amazon.com/COMING-COLLAPSE-AMERICA-BALANCE-FEDER...


The economy has been growing in 2023 though

https://tradingeconomics.com/united-states/gdp-growth


Consolidated economic activity has been growing massively, while Main Street continues to fail. How is this "us" or "we"? Investors here tell one side of the story... look at the cities where people live.


They aren't "kafkaesque political redefinitions". You are just choosing to use an overly simple definition because it supports your argument. The NBER determines recession based on a sophisticated methodology, one that considers the holistic status of the economy. This whole "two quarters of negative growth" thing is unga bunga economics.


So then it's not a recession solely according to NBER - and this departs from their characterizations for the last 60 years before this last election cycle.

"A recession is a significant, widespread, and prolonged downturn in economic activity. A common rule of thumb is that two consecutive quarters of negative gross domestic product (GDP) growth mean recession, although more complex formulas are also used."

- [Investopedia](https://www.investopedia.com/terms/r/recession.asp#:~:text=W....)


A lot of big shops are choking under bureaucracy and process right now, which is the sort of things LLMs are good at.


How does an LLM help when you can't get anyone to sign off on a change, or when QA is understaffed, or when there are too many stakeholders making conflicting and sometimes nonsensical requirements?

That kind of stuff can't be solved by technology.


In short, an LLM helps in creating enough shit disguised as work to keep the people spouting nonsensical requirements busy enough to not dwell on the important parts of the system.

LLMs are phenomenal busywork generators. And as a natural consequence, if you want to stay productive, you should be suspicious of anybody that uses one.


It's the opposite. LLMs are good after you make your way through the bureaucracy and end up with well defined specs and a ton of boilerplate code & tests to churn out. I'd say most devs are spending 90%+ of their time on the former.


Idk, ChatGPT has been great boost to my efficiency filling out all those "5 min" pre-review alignment standup meeting intake documents. Now it really DOES just take 5 mins!


Standup meeting intake documents?

I think you're doing standups wrong.

Do they need a cover sheet as well? Did you get the memo?


You fix what you can fix.


Wait until Atlassian starts adding AI generated tickets.


Layoffs are rarely about productivity, especially at super profitable companies. Layoffs are primarily about short-term metrics to drive shareholder expectations. Where those metrics reflect productivity, it is generally three or four degrees of separation from any actual productivity data. The thing those metrics are best at showing is labor cost aggregates versus industry base rates.

The biggest reason for layoffs has always been, and likely will always be, the "C-Suite class" controlling expectations on labor costs (salaries/wages) in an industry. That's why they often happen in waves, many companies at once: it's a pressure release valve to keep the labor market in "control", propagated by the largest shareholders, especially those born and bred into the C-Suites. Those shareholders don't actually care how productive a company is; productivity is increasingly orthogonal to profit. They care how profitable a company is, and companies stay the most profitable when (among other things) the labor market is most effectively depressed (and layoffs are a useful depression tactic) and laborers aren't comfortable enough to fight for better wages.


I'd argue that the productivity boost is much higher for the lower end developers than for the best ones. LLMs are best at the simple tasks in the most verbose programming languages: it's easier to think of them as higher-quality auto-completion. The terser the language and the more innovative the problem, the smaller the help from the LLM.

So if anyone gains from this, it's the companies paying under-market, and therefore typically getting somewhat substandard devs to do boring things. There are major savings in productivity there, precisely because the training for those tasks is so good. If you had your highest-performance developers spending a significant amount of time on that kind of busywork, you weren't getting that great a performance in the first place.


Maybe this is the answer to a disconnect I've noticed. A number of devs where I work have started using ChatGPT to help with their work, but it hasn't resulted in any noticeable productivity or quality gains.

But the devs here are all very experienced senior-level engineers. Perhaps that's why we aren't seeing any gains?


I was asked to help evaluate a couple code-generating LLMs at work, so I set out to build a couple toy apps in languages and frameworks I wasn't very familiar with.

My experience was that it was about as useful as searching Google and Stack Overflow without knowing what a good answer looks like. You copy and paste a lot of code, a lot of it works, but inevitably, something breaks and you have no idea why. So you still have to go back to "the hard way" of reading the documentation and building an understanding.

I'm sure things will improve over time, especially for products built specifically for code generation, but my first and second impression is that so far, LLMs don't actually lower the bar for junior programmers.


Maybe the devs can now work fewer hours since they're more productive per hour.

Otherwise you're suggesting that these senior devs are using a tool which provides no value whatsoever which seems unlikely.


If it's anything like my job, it's likely that a majority of the slowness is in the non-coding steps - meetings, upgrades/config, business ideation and prioritization, etc.


I mean, that's like most non-junior software development jobs--at least it has been the case for all the ones I've had: Actual, physical, typing-in-of-code is usually about 5-10% of any software developer's job. The rest is tools management, configuration, compiling, merging/source control, code reviews, helping other developers, brainstorming, planning, design reviews, bug management, status reporting, project management rituals like scrum, and so on.

I think many software developers have this romantic, idealized notion that their entire job should be furiously typing code into a computer, and that anything else they have to grudgingly do is unproductive overhead.


> I'd argue that the productivity boost is much higher for the lower end developers than for the best ones.

I'd say it depends on the task more than on dev level. If it's maintaining, improving, or refactoring a large project, an LLM can't help much. When it comes to small utilities, GPT-4 is a time saver. You can even write a small playable game with it in a matter of hours, using any popular language. Simple web pages and JS are much easier. With ChatGPT-4 I've learned how to do certain things using ffmpeg; without it, that would have taken hours of googling.

PS: on the talk about ChatGPT degrading over time, I didn't see it on programming tasks. But it became noticeably faster.


This is a first-order, short-term phenomenon. But also, the LLMs by themselves guide developers into creating repetitive, low-quality code, and any bias toward using an LLM will bias developers into adopting verbose, boilerplate-happy languages and frameworks.


Meh, it's nothing to do with that, it's just a layoff round.

Epic went from 2200 employees in 2020 to almost 9000 in 2023. They are now basically shedding 1 out of 6 hires they've done since 2020. That's not really about sustainability, it's just a way to drop the ones that didn't really work out.


>That's not really about sustainability, it's just a way to drop the ones that didn't really work out

These layoffs are just a way for CEOs to improve "efficiency" AKA increase margins so they get a nice fat bonus - COVID overhiring has been a convenient excuse and every company has jumped on it.


I don’t think Tim Sweeney is very bonus motivated TBH! Although if you read the article he says directly that it’s efficiency motivated.


So you don’t think they actually overhired and that that massive growth in headcount was needed and justified?


It's not an excuse. They overhired.


It depends on how they're laying off. If it's by project, they might not be segregating by skill. If it's across the board, then they might.


> Basically all software and gaming got a massive bump that was obviously going to have an unsustainable component to it.

Well, this sounds logical and all, but it isn't always true.

For example, Google's revenue per employee exploded during COVID but it didn't crater post-COVID [1]. So even with a hiring spree, the company is on the same track as it was pre-COVID. You might need to lay off employees to get that same COVID high, but that COVID high is the unsustainable part, not your current employee count.

So some companies may have overhired compared to what their revenue growth without COVID would've been, but not all did. You still need to evaluate companies on a company-by-company basis.

[1]: https://www.macroaxis.com/financial-statements/GOOGL/Revenue...


Control for inflation and look at the chart again, and those numbers are after layoffs.


Anecdotal experience but most of the people I've seen being fired in one of the big ones are senior people working in customer facing roles, where the expectation is that they will be replaced by AI and wishful thinking.

Newly hired people were more junior and cheaper than previous employees so it makes no sense to fire them.


> most of the people I've seen being fired in one of the big ones are senior people working in customer facing roles

Which industry? I’m seeing a lot of middle layers being reduced, but not the rainmakers.


I've seen this in the org I work for: the whole 'engineering manager' layer was removed and replaced with wishful thinking (love that phrase!). The engineering managers had a good idea of how to manage teams and were doing team/tech lead work and coaching the team/tech leads, who didn't have much experience (mostly senior devs who didn't want to code any more).

This org in particular missed, in my eyes, where to cut. There are a number of engineers who cannot do much more than copy/paste code, and the org has no wish to train them. Unfortunately, the org thinks that going wide (lots of teams, lots of people) is a good way to scale.


As long as hiring decisions are subjective and prone to manipulation, the "rainmakers" will not be fired. Middle management is screwed, though.

Though functions like CEO and a lot of the executive suite - are under threat of just being replaced.


> functions like CEO and a lot of the executive suite - are under threat of just being replaced

By what? A Board-hired AI?


Half of your marketing can already be replaced with AI, and CMO's strategy is better generated by ChatGPT. Quite a few people on the marketing team should be shaking...

Quite a few CEOs should just sit on the board and stick to reviewing an AI generated strategy implementation plan. Because it's still the board that decides on the strategy, while CEO typically is focused on the implementation of said strategy.

CEO as a fulltime position is definitely under threat here.


> marketing can already be replaced with AI, and CMO's strategy is better generated by ChatGPT

Agree.

> Quite a few CEOs should just sit on the board and stick to reviewing an AI generated strategy implementation plan

Generating strategy is an important part of a CEO’s job. But it’s far from the main one. Their job is, centrally, about people. Managing relationships external and internal. That is AGI-territory work.


Disagree about CEO. A hired CEO, not director/owner/founder, is not a strategy creator.


Otoh middle management often provides the human layer, something chatgpt can’t do. What would you replace manager 1:1 with? Planning? Coaching? Frustration venting? Team building?


"one of the big ones" = epic? big what, departments?


No one in game and game engine development is using ChatGPT.

It doesn't even work, even if it were otherwise productive, as barely any of the problems the average gamedev faces day-to-day have surfaced in the training data of an LLM.


Yeah, this exactly. The only person I know who is succeeding at "game dev" with ChatGPT is an older bloke I know who just began their baby steps on a pet project with Unity.

Sure, if you're still learning how to move position from x to x+1, ChatGPT can help you with those basics, but anything more than that and they start hitting me up on Discord looking for real answers.

Something as simple as how the serialization of fields in behaviour classes works was too much for ChatGPT. They read what it hallucinated up, got confused, and were calling me up to explain minutes later. For those who don't know Unity, this is something you have to deal with daily when building new logic into game components, and ChatGPT was clueless about it.

The limitations of ChatGPT as a programming aid are very very obvious and it has a long way to go before it's really useful to seasoned professionals.


ChatGPT 4 may be better, but Copilot is terrible at making suggestions on algorithm work. It'll slip a -(x) into parameters where it's supposed to be positive, use similar but wrong variable names, swap parameters around. It can be a nightmare and cost you way more time debugging than it saves typing.

Specifically with type-ahead suggestions, I believe that when typing you are more engaged in thinking about what is correct. With a Copilot suggestion, the tendency is to just skim it to see if it looks good. And that's probably part of the problem; the LLM is trained on what looks good!

This may look good:

  computeNodeDistance(sourceNodesA, destNodesA, x, -1)

But it's supposed to be:

  computeNodeDistance(sourceNodesA, destNodesB, x, 1)

Good times.


Not until legal approve it, which doesn't seem like any time soon.


LLM code generation tools are producing output equivalent to a new grad who doesn't know much about how to write code that integrates well into a program. They churn out a lot of text, but every bit of it needs to be questioned and reviewed. As such, they're not exactly a productivity booster where correctness matters; they're actually a liability. But for churning out code that isn't critical, they're great.


Anecdotally, GitHub Copilot is barely a 5% performance boost. Still useful, but I think a nice coffee machine does more.


Yeah, I think it depends on your specialty. If you're an enterprise developer, you save on some boilerplate for sure, and it's a nice alternative to Stack Overflow.

If you're more of a generalist though? I can see productivity gains of 25% maybe.


I'm a senior who generalizes in full stack web development due to my wearing many hats at my company.

ChatGPT and GitHub CoPilot have regularly barfed out garbage for me while I use them. I spend more time double checking whether it's right than I do writing code with it.

It's terrible for my use case where I am doing somewhat complex fixes/refactors everywhere. It just regularly makes up stuff, or gets the wrong API version, or just outright poops out non-existent syntax crap.


This is my experience. These productivity gain numbers are being pulled out of thin air, and certainly not broadly applicable.


> If you're more of a generalist though? I can see productivity gains of 25% maybe.

I really doubt this. Copilot/ChatGPT are not useful at all for the hard problems and they're not trustworthy enough for the small problems. Even for the simplest things I have to comb over the generated code very carefully because I've seen them be subtly wrong dozens of times in my relatively short use. If I have to go over everything they generate with a fine tooth comb it's just easier and less error prone to write it entirely myself.


Mirrors my experience exactly, and in Copilot's case with VSCode, until very recently it would suppress normal IntelliSense, breaking that whole muscle-memory flow.

I'm convinced I've spent more time scrutinizing, undoing, and debugging the Copilot code than it saved me on typing.


If you're developing code for a large codebase with stable requirements, in a company with good engineering practices, and your work output is the working code itself, it's hard to see that it would be very useful.

If you have to glue together multiple unfamiliar APIs and packages, maybe even seeing a couple new ones per day, you often throw away your code, and your work output is something else with writing code only a means to that end, then it hugely speeds things up, even accounting for debugging the frequent cases where it's subtly wrong.

This latter situation is more common than many software engineers realize.


> if you have to glue together multiple unfamiliar APIs and packages, maybe even seeing a couple new ones per day, you often throw away your code,...

Maybe take the time to become familiar with them? I think that is a description of an unhealthy job. It's like asking an average mechanic to handle optimizing an F1 car on day 1, then asking him to improve the endurance of a rally car the next day! Then telling them the main point is to market the sponsors, so no need to be perfect.


I don't think the car analogy is good, because cars are expensive to build and maintain, are usually used more than once, and it's very bad if they crash. These properties are true of some software, but not the kind of software I'm talking about.

Maybe a better analogy: if you travel to new cities a lot and have to navigate, you don't want to memorize a map of all the city streets and learn in detail the transit schedules on each rail and bus line; you just care about a few routes a few times, and Google Maps can make your life a lot easier. On the other hand, if you lived in that city, it would quickly become pointless to take out your phone all the time to plan a commute to work.


"If your top 10% productivity workers just got a 30%+ productivity boost"

Is that really how it works? It seems like the biggest gains are actually in the less skilled segment.


You think the top 10% of workers are getting, at a minimum, a 30% productivity boost? You think they're getting in an extra 1.5 days worth of work a week because they're using Copilot?


You mostly have to ignore these AI solution hopefuls.

They're living in a momentary dream that AI LLM tools will solve all our coding problems. They don't and reality hasn't hit them yet.

I don't know any real seasoned dev who is getting a 30% boost because they installed Co-Pilot. The tools poop out too many errors/fake-garbage to be useful to devs working on battle hardened codebases.


My productivity boost is much more modest than 30% to be sure, but I'm less tired at the end of the day.

The productivity boost for non-seasoned devs, or devs of average skill level or above doing relatively routine but needed work? For a large chunk of the nearly-invisible-to-silicon-valley workforce doing those tasks, I would be surprised if it wasn't more than 30%.


Another way of spinning this - if "fancy autocomplete" is giving you a 30% productivity boost, then either you're not actually in the top 10%, or you're not working on "top 10% problems".


Which of course, is the overwhelming majority of people! Perhaps about 90% of people :)


I agree the first two paras sound fairly likely.

Regarding the second para, the interesting part for me is: how do these companies determine which are the unproductive devs? My impression so far is that senior management has no way of measuring this, given how little data is provided to back up decisions to RTO.

Not so sure about the third para. Why would one company be more ahead of the game than any other, if indeed LLMs are providing this supposed boost?


Sorry, but mass layoffs aren't exactly the most rational ones.

There's a lot of "you have to cut X off your salary budget", pick whoever you want. Then it's up to the managers to pick, very subjectively, who they want to work with the most.

FFS - I managed to dodge 4 layoffs at a startup, including my whole department being cut wholesale. My managers always kept me, even though I wouldn't call myself the "best" or "the cheapest". All of it just boils down to one thing - I'm the one that whoever manages me can rely on. Just that... being reliable.


That makes sense. In the absence of a reliable objective measure of an employee's impact, the next best thing is to trust the manager's judgement. I would still question how effective this is at selecting the "unproductive" workers; I guess it really depends on how good the managers are.


They don't. They basically go by highest salaries and various arbitrary measures such as unprofitable division, virtual location, etc.


> how do these companies determine which are the unproductive devs?

The senior management delegates. At every moment your direct manager knows whom they are willing to part ways with if asked. Even if everyone on the team is great there is always some criteria (costs, impact on important project, institutional knowledge) that sets some people apart.


It's not possible to measure with 100% accuracy, but let's not kid ourselves every company and every division can easily pick out the bottom 10-15% of devs who have close to zero real impact. The actual number in fact is likely far greater than that.


Anecdote coming up, but when Google laid off a bunch of people those that I know that worked there said the most critical people on their teams were the ones let go.


And yet there has been no operational impact at Google since the layoffs...so maybe they weren't so critical after all?


> And yet there has been no operational impact at Google since the layoffs...so maybe they weren't so critical after all?

How do you even know that?

Usually the only way that you will see that, as a customer, is when the people who are left are unable to keep up with all the firefighting and things ultimately explode, either with outages or a security breach.

You can accumulate a _lot_ of tech debt before customers notice. For smaller companies, that's often a fatal amount of debt.


I mean people can be critical without being load bearing.

If you decide to stop paying your water bill you don't lose service that same day. Typically these layoffs either cause or are a sign of stagnation and then some scrappy startup starts eating away at your market share.


What do you mean "no operational impact"? How did you measure that? People on this site are constantly complaining how horrible the Google services became.


Google services got worse faster in the periods when they were doubling headcount year over year than they have since the layoffs.


Again, how did you measure that? Because that sounds like the same kind of MBA crock.


How do you measure everything you are claiming?


In my opinion, if you haven't measured something then it's kinda disingenuous to make objective-seeming quantifiable claims about it. Especially without including a qualifier like "in my experience... " or "I'd speculate that... ".


Over the last 3+X years, not suddenly after 1-2 years of layoffs.

I can bet that if you bring Google Reader up, you'll get a lot more complaining - than any "horrible Google services" topic.


Layoffs were 6 months ago.


Hah - you never know! Maybe they failed to innovate a way to cram another above the fold ad onto the SERP page due to laying off too many of their “best” people


The intersection between politically/socially unpopular and low impact.


in my experience is greater than the intersection between the politically/socially popular and low impact.


This feels like an appeal to intuition. I have no doubt that senior management believes they can do this accurately, but I'm not sure why I ought to agree.


> every company and every division can easily pick out the bottom 10-15% of devs who have close to zero real impact.

Can they? In my experience, they don't tend to do this when it comes to deciding who to lay off. I assumed this was because they can't tell who the bottom 10-15% are.

Team members can tell, but can upper management?


Developers usually follow scrum frameworks, so productivity could easily be measured by overall activity, number of issues closed, merge requests completed, etc.


The software engineers working on the best projects don't follow "scrum" frameworks. Where are the story velocity ticket points in Linux kernel development?


That's about as silly a measure as number of lines of code produced.


> That's about as silly a measure as number of lines of code produced.

How else would you measure how much work someone does, other than by how much tracked work is attributed to them?

The people using these indicators probably don't care about someone who might have fewer closed issues but is instead helping 5 people work more efficiently; they care about people who are constantly shipping measurably less than others, stack ranking and all - in their minds, development should be viewed as some widget factory.

Just look at what happened to agile and how SAFe ended up: https://www.atlassian.com/agile/agile-at-scale

That's also how you get attempts to turn some abstract complexity evaluation (story points) into hours that could be billed to a client ahead of time, even though it probably shouldn't and doesn't work like that.


Ease of measurement is not enough. I can drive badly under the speed limit, so even though speed is easy to measure, it's not nearly good enough to evaluate my driving.


Measuring productivity is a fool’s errand. The reality is that performance is subjective and in a hierarchy only the whims of those above you matter.


That's literally the last thing that managers look at, when it comes to making a layoff decision.

Never have I ever been the person with highest "overall activity, number of issues closed, merge requests completed, etc.", yet I have not been laid off in a very long time.

Last time I thought I was getting laid off, I misread the manager's comments as PIP.


If ChatGPT should be able to generate any code, a function that outputs a set of prime numbers as Longs in Kotlin should be simple enough (unless you consider Kotlin, the 12th-20th most popular language according to various surveys, too obscure). One that has a good big-O runtime and is as compact as possible. After a bit of futzing I get this:

fun sieve(n: Long) = (2L until n).fold((2L until n).toMutableList()){p,i->p.apply{for(j in i * 2 until n step i)this[(j-2).toInt()]=0}}.filter{it!= 0L}

and it works OK in some situations:

>>> sieve(100L)

res5: kotlin.collections.List<kotlin.Long> = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

Let's try another test input -

>>> sieve(10000000L)

java.lang.OutOfMemoryError: Java heap space

Oops.
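
For comparison, here's a minimal sketch of what a memory-conscious version could look like. This is my own illustration, not ChatGPT output, and it assumes an Int upper bound is acceptable; it uses java.util.BitSet so each candidate costs one bit rather than a boxed Long:

  import java.util.BitSet

  // Sketch only: classic sieve with one bit per candidate, so
  // sieve(10_000_000) fits comfortably in a default JVM heap.
  fun sieve(n: Int): List<Int> {
      val composite = BitSet(n)            // bit i set => i is composite
      var i = 2
      while (i.toLong() * i < n) {
          if (!composite.get(i)) {
              var j = i * i
              while (j < n) {              // mark multiples of each prime
                  composite.set(j)
                  j += i
              }
          }
          i++
      }
      return (2 until n).filter { !composite.get(it) }
  }

  fun main() {
      println(sieve(100))                  // [2, 3, 5, 7, ..., 97] as above
      println(sieve(10_000_000).size)      // prints a count, no OutOfMemoryError
  }

The blowup in the one-liner above presumably comes from materializing every candidate as a boxed Long in a MutableList (and then filtering into a second list), not from anything inherent to Kotlin.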

Also, if the reason is that the 12th-20th most popular language (Kotlin) is too obscure, how well will it currently fare with anything but the most popular languages?

This is about as simple a function as you can get. 95% of my programming work is standups, sprint planning and grooming, looking at Swagger or other API backend documentation for information, looking through Outlook, Teams and Confluence for information about my story, looking for UI on Figma, contacting various teams for more information - then someone wants me to review their pull request, and I stop to do that, but notice they have some business logic wrong because they don't fully understand the domain. Yet even something as simple as a function generating a list of prime numbers, ChatGPT struggles with.

Not that it can't get better, but it's no threat to most programmers I know in the short term (and if it becomes one, I'll shift over to writing Python scripts that call Pytorch, numpy etc. - I already do so in my spare time).


Microsoft breaking Outlook and Teams does more for productivity than all the AI in the world combined until now.


This is standard for economic conditions. When there's a lot of easy-to-come-by money (loans), there's a lot of disposable income and excessive employment. As soon as the money supply tightens (or, in this case, the money in the economy goes negative year on year), businesses trim the fat and get conservative about only keeping people creating value/profits. Which is why unemployment figures lag in a collapsing economy.


Keep in mind companies planned like it was sustainable and hired a lot because, from a business perspective, it's much easier to overhire and lay off later than it is to be understaffed during a period of potentially unprecedented growth. It doesn't fully rationalize many companies' actions, but it's just an application of a regret minimization framework.

I personally don't believe it's a massive function of generative AI (although it could be). I think the major contributing factor is that the projections companies made in 2020/2021, and the hiring they did to resource more projects to take advantage of those rosy projections, ended up falling short of real expectations, i.e. instead of needing to staff for a potential 20% YoY future growth in their target markets, they only really need to staff for 10% YoY growth.

Also probably a bit of "monkey see, monkey do", where a company that actually needs to lay off (Meta) lays off a bunch of people, and other large companies that may not "need" to lay off (like Google) follow suit since now it's "socially acceptable."


https://steamdb.info/app/753/charts/

There is still some insane growth

Fortnite has games-as-a-service fatigue. For some reason they didn't jump on the extraction shooter hype, and it cost them today IMO. Fortnite became successful because they jumped on a trend at the right time (from survival base building to battle royale); they seem to have forgotten that.

The Epic launcher is still bad today, which doesn't make me want to have it open, let alone browse it, and that might hurt their numbers.

Steam's launcher follows the same path it seems, slow and bloated CEF bullshit.. makes me want to use it less


>I wonder also how much of a driver new programmer productivity tools like copilot and ChatGPT are factoring into this. If your top 10% productivity workers just got a 30%+ productivity boost, that means you can easily shed the bottom 20% productive workers and still be ahead of the game.

I have been evangelizing GitHub Copilot (+ Copilot Chat) and ChatGPT Pro to my coworkers, who include full-stack .NET/React devs, Python data science people, and embedded C devs.

Only the .NET/React devs have been sticking with both Copilot and ChatGPT Pro. The Python data science people are using ChatGPT Pro but have dropped Copilot. The embedded C devs are using neither.


There's a very simple reason for it - the amount of lightly customized code you write is directly proportional to how much of it can be automated.

When working with embedded systems, I barely ever have "lightly" customized code. Meanwhile, the closer you are to the frontend, the closer you get to having the majority of your code be boilerplate.


I’d love to know how these top developers are identified. What’s this mythical productivity metric? If Epic has one that accurately models productivity they should 1) share individual scores with their employees and 2) stop wasting time on games and start selling it to every other corporation.

If they really did over-hire during the pandemic and fail to plan for the future that was either 1) intentional and they lied to recruits about the duration of employment or 2) incompetence on the part of leadership and recruiting. Have heads rolled in the responsible departments?


> I wonder also how much of a driver new programmer productivity tools like copilot and ChatGPT are factoring into this. If your top 10% productivity workers just got a 30%+ productivity boost, that means you can easily shed the bottom 20% productive workers and still be ahead of the game.

Probably 0%. From what I heard, it was mostly people not performing (which is the real pandemic after COVID-19). If I'm not wrong, as at other companies, devs there are not "allowed" to use Copilot and ChatGPT for fear of code plagiarism and ending up with some license violations.


> I wonder also how much of a driver new programmer productivity tools like copilot and ChatGPT are factoring into this. If your top 10% productivity workers just got a 30%+ productivity boost, that means you can easily shed the bottom 20% productive workers and still be ahead of the game.

Epic's game programmers are already incompetent to the point that Fortnite is basically falling apart at the seams and every update adds dozens of new bizarre bugs, I don't even want to imagine how much worse it would get if they started using AI to write code.


> ChatGPT are factoring into this

Likely not a reason for layoffs. If someone was fired due to LLMs, it was probably due to the incompetence of thinking the chat bot is more than a data-leaking search engine.


Gaming is a notoriously boom/bust field. Layoffs, failures and bankruptcies happen even in the best of times. Reading anything macro into a game developer layoff is really stretching things.


This is starting to get even more irritating than the “put a blockchain on it” spammers ever were.



