I sometimes wonder whether the decline in IT worker quality is down to companies trying to force more and more roles onto each worker to reduce headcount.
Developers, Operations, and Security used to be dedicated roles.
Then we made DevOps and some businesses took that to mean they only needed 2/3 of the headcount, rather than integrating those teams.
Then we made DevSecOps, and some businesses took that to mean they only needed 1/3 the original roles, and that devs could just also be their operations and appsec team.
That's not a knock on shift-left and integrated operations models; those are often good ideas. It's just the logical outcome of those models when execs think they can get a bigger bonus by cutting costs by cutting headcounts.
Now you have new devs coming into insanely complex n-microservice environments, being asked to learn the existing codebase, being asked to learn their 5-tool CI/CD pipelines (and that ain't being taught in school), being asked to learn to be DBAs, and also to keep up a steady code release cycle.
Is anyone really surprised they are using ChatGPT to keep up?
This is going to keep happening until IT companies stop cutting headcounts to make line go up (instead of good business strategy).
One person can do the work of 3 and regularly does in startups.
I think that what the MBAs miss is this phenomenon of overconstraint. Once you have separated the generic role of "developer" into "developer, operations, and security", you've likely specified all sorts of details about how those roles need to be done. When you combine them back into DevSecOps, all the details remain, and you have one person doing 3x the work instead of one person doing the work 3x more efficiently. To effectively go backwards, you have to relax constraints and let that one person exercise their judgment about how to do the job.
A corollary is that org size can never decrease, only increase. As more employees are hired, jobs become increasingly specialized. Getting rid of them means that that job function is simply not performed, because at that level of specialization, the other employees cannot simply adjust their job descriptions to take on the new responsibilities. You have to throw away the old org and start again with a new, small org, which is why the whole private equity / venture capital / startup ecosystem exists. This is also why Gall's Law exists.
I think there is another bit to this which is cargo cult tendencies. Basically DevOps is a great idea under certain circumstances and works well for specific people in that role. Realistically if you take a top talent engineer they can likely step into any of the three roles or even some others and be successful. Having the synergy of one person being responsible for the boundary between two roles then makes your ROI on that person and that role skyrocket.
And then you evangelize this approach and every other company wants to follow suit but they don’t really have top talent in management or engineering or both (every company claims to hire top talent which obviously cannot be true). So they make a poor copy of what the best organizations were doing and obviously it doesn’t go well. And the thing is that they’ve done it before. With Agile and with waterfall before that, etc. There is no methodology (organizational, programming, testing, etc.) that can make excellence out of mediocrity.
> There is no methodology (organizational, programming, testing, etc.) that can make excellence out of mediocrity.
This is a thought-provoking phrase, and I liked it. Thinking a bit deeper, though, I'm not sure if it's accurate, practical or healthy.
I've seen mediocrity produce excellent solutions, and excellence churn out rubbish, and I suspect most people with a few years of tech jobs under their belt have too.
You could argue that if they turned out excellent things, then by definition, they're excellent, but that then makes the phrase uselessly tautological.
If it's true, then what's the advice to mediocre people and environments - just give up? Don't waste your time on excellent organisation, programming, testing, because it's a waste and you'll fail?
I think there's no 1 thing that can make excellence out of mediocrity for everyone. But I like to think that for every combination of work, people and environment, there's an optimal set of practices that will lead to them producing something better than average. Obviously a lot don't find that set of practices due to the massive search space.
I suppose I should have phrased that a bit more kindly, and that's on me. What I mean is that you can't take your average jogger and expect them to win the 100 meter dash at the Olympics, no matter how much training and resources you provide them. Is that a real problem? No, it certainly is not.
Lots of people like rock climbing but you can’t expect an average climber to win a major competition or climb some of the toughest mountains. It doesn’t mean they shouldn’t enjoy climbing, but if they start training like they are about to free solo El Capitan they are very likely to get seriously hurt.
Average developers, operators, and security people can do plenty of good work. But put them in a position that requires them to do things way outside their area of expertise, and no matter how you structure that role and what tools you give them, you are setting them up for failure.
Another thing I was thinking about was not individual people being average but rather organizations: a mediocre organization cannot reliably expect excellence even out of its excellent employees. The environment just isn't set up for success. I am sure most people here are familiar with talented people working in an organization that decided to poorly adopt Agile because it's the thing that will fix everything.
All fair points. (Although I'm not a fan of the climbing analogy; the joy of programming is that you can practice anything you want without risk of real harm, and I think that's what makes unicorn-like programmers: they've repeatedly done the equivalent of free soloing El Capitan in their spare time and lived to tell the tale.)
I think it helps to consider excellence in the context of principles and practice. Everyone should be aiming for excellent principles for their domain and environment. But the practice that implements the principles needs to be specific to the capabilities at hand.
E.g. I think that in principle, any company with any tech should have backups, and ideally the ability to rebuild after a disaster. If you have a team of unicorn engineers ("company A"), the principles might be put into practice through a meticulously tight and efficient SDLC and accompanying pipelines, tooling and other scripts and automations. If you have only one very competent engineer ("company B"), but a huge bunch of traditional ops people, that principle might have to be put into practice with a more manual, basic process (Bob from accounting once a week makes a copy of the important spreadsheets on a CD and stores it in a fireproof safe at home).
Both businesses can deliver excellent and reliable service for their customers because they implement excellent principles in a way that their employees can deal with.
Swap company A and company B's practice, and even though it's for the same excellent principle, everything falls apart. I think we're in agreement here.
It's an uncomfortable truth that - on average - 50% of environments are company B. There's a problem that 90% of companies want their practices to look like company A. Nobody wants to feel mediocre.
> There is no methodology...that can make excellence out of mediocrity.
And yet we keep trying. We continually invent languages, architectures, and development methodologies with the thought that they will help average developers build better systems.
> There is no methodology (organizational, programming, testing, etc.) that can make excellence out of mediocrity.
That’s a pretty toxic statement. No one was born writing perfect code, every one of us needed to learn. An organizational culture that rewards learning is the way to produce an organization that produces excellent software.
It doesn’t matter how excellent someone is if the organization puts them to work on problems that nobody needs solved. The most perfect piece of software that nobody uses is a waste of bytes no matter how well it’s written.
Sure, everyone can learn, but what leads to mediocrity is a lack of desire to learn.
You can certainly structure your organisation around that and achieve great results. There’s no need for excellent tech to solve most problems. That’s ostensibly true for almost everything, because our society wouldn’t be able to survive if everything needed to be excellent.
A lot of different things can lead to mediocrity, and lack of desire to learn is only one of them.
Sometimes you want to learn, but your talents fail you.
Sometimes you want to learn, and you learn from someone who isn't good at the thing.
Sometimes you want to learn, and your environment punishes effectiveness.
Sometimes you want to learn, and you learn correctly for the wrong context.
Yes, people who give a damn about doing their job well are often good employees. But the converse is not true: many people who are not good employees do in fact care a great deal. There's no need to convert a descriptive statement about results into a moral judgement of a person's work ethic, and it's often just not factually correct to do so.
> Sometimes you want to learn, but your talents fail you.
> Sometimes you want to learn, and you learn from someone who isn't good at the thing.
> Sometimes you want to learn, and your environment punishes effectiveness.
> Sometimes you want to learn, and you learn correctly for the wrong context.
Should you just blame not having learned on that and call it a day?
Sure, some people learn without much effort, but I've seen an equal amount of people that had to put in a massive amount of effort to be considered 'smart' (whatever that means).
I think I consider “well, I tried, it didn’t work, so it’s forever impossible” to be the definition of mediocre. That’s actually still pretty good, as many don’t even try, they’ll give something up as impossible before even starting.
I know people have reasons for that, but I don’t like it.
Your framing is someone making the immediate assumption that any failure derives from barriers beyond their control. And I agree, that assumption is incorrect - but it's just as incorrect as assuming that no failure is ever due to barriers beyond one's control. (It's also really demotivating.) But I think your framing is overly judgmental, because in fact most people who struggle at something have tried many times not to; they just haven't succeeded.
When speaking of other people, you often do not have good context on what they have or haven't tried. They have enormous context that you don't, often context that would be difficult or impossible for you to understand from the outside even if you did have it. And to confidently judge that they just don't care without that context, without any awareness of what they have or haven't done, assumes way too much.
For me, doing advanced mathematics is easier than not feeling self-conscious when I talk to a grocery store clerk. Founding a company is easier than reliably keeping the clutter off of my desk. Walking many miles is easier than doing a push-up. I could explain to you why these things make sense within my particular context, but it would take many, many pages of trying to tell you who I am to do that. If you were to judge that I don't care about not being socially awkward, or that I don't care about having a clean room, you would be wrong - those things are just harder for me than they are for the average person, and I choose (implicitly or explicitly) to put my efforts elsewhere.
I don't think that recognizing that is accepting mediocrity. I believe in excellence a great deal! But I think that if you want to seek excellence, you have to do it in ways that recognize what you are. Most of the time, you have to work with what you are and figure out ways to make that work. Once in a great while, something about what you are is so fundamentally at odds with what you want that you have to, at tremendous effort, change yourself. But the latter isn't something you can do every day or in every way.
"I've tried, so it's forever impossible" isn't the position I'm arguing for. The position I'm arguing for is: "I've tried, and it was very difficult, maybe more difficult for me than it is for you. I'm a human being whose resources and motivation (and judgment!) are finite, so I had to make a choice between working on this very hard thing and working on something else or taking some time to recuperate, and this time I decided some other option was better."
While we might not like it to be true, no amount of process can turn the average person into a Michael Jordan, a Jimi Hendrix or a Leo Tolstoy. Assuming that software engineering talent is somehow a uniform distribution, and not a normal one like every other human skill, is likely incorrect. Don't conflate "toxic" with "true".
> One person can do the work of 3 and regularly does in startups.
Startup architecture and a 500+ engineer org's architecture are fundamentally different. The job titles are the same, but they won't reflect the actual work.
Of course that's always been the case, and applies to the other jobs as well. What a "head of marketing" does at a 5 person startup has nothing to do with what the same titled person does at Amazon or Coca Cola.
I've also seen many orgs basically retitling their infra team members as "devops" and calling it a day. Communication between devs from different parts of the stack has become easier, and there will be more "bridge" roles in each team, with an experienced dev also making sure it all works well, but I haven't seen any company that cared about its stack and performance fire half of its engineers because of a new fancy trend in the operations community.
> Startup architecture and a 500+ engineer org's architecture are fundamentally different
Certainly. The startup architecture is often better. I don't know what exactly leads to those overcomplicated enterprise things, but I suspect it's a lack of limitations.
Large org architecture is, most importantly, just much larger. The bank I worked in had 6000 _systems_. A lot of that was redundancy due to a lot of mergers and acquisitions that didn't get cleaned up (because it was really hard to do). Compare that to your typical SaaS startup, which typically has at most a couple systems.
Right. You can't run a 100 or even 1,000 person organization the same way you run a 25,000 person organization and many of the people roles are different. The 25,000 person organization can do more but it needs a lot more process to function. For example, the person just doing what they see as needing to be done in the small organization becomes a random loose cannon in the large one.
Someone mentioned DevSecOps upthread. The breaking down of organizational barriers really was more of a smaller company sort of thing. Platform Engineering and SREs are a better model of how this has evolved at scale.
There's that, sure. But that's also because everyone is way too happy to add another new system to the pile. Large enterprises generally have only so many business functions. I can count to about 100 genuinely necessary systems before the rest start to feel redundant.
And all these people have been given fodder by the microservices revolution.
> Large enterprises generally have only so many business functions.
Well, no, the difference in the quantity of business functions can be enormous due to duplication. A small business will have an accounting business function, but a large business may have 57 separate accounting business functions due to having operations in different countries, having subsidiaries, and mergers & acquisitions (possibly multiple ongoing concurrently) where you will eventually merge the accounting functions (multiple!) from the new acquisition into yours, but it has not yet happened.
I think it has less to do with judgement and more to do with the fact that one person can do the work of 3 in startups because there is several orders of magnitude less coordination and communication that needs to happen.
If you have 5 people in a startup you have 10 connections between them, 20 people = 190 connections, 100 = 4950 connections, 1000 people = 499500 connections.
Sure, you split them into groups with managers and managers' managers etc. to break the connections down to less than the max, but it's still going to be orders of magnitude more communication and coordination than in a startup.
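For the curious, those numbers are just the pairwise-connection formula n(n-1)/2. A throwaway Python sketch (names are mine, purely illustrative):

    def pairwise_connections(people: int) -> int:
        # Number of distinct communication channels between `people` individuals.
        return people * (people - 1) // 2

    for n in (5, 20, 100, 1000):
        print(n, "people ->", pairwise_connections(n), "connections")
    # 5 people -> 10 connections
    # 20 people -> 190 connections
    # 100 people -> 4950 connections
    # 1000 people -> 499500 connections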
In a startup you get significantly more focus time than in a large company. Especially if there is no production yet - there are no clients, no production bugs, no on-call.
In a larger company, literally 80% of your job is meetings, Slack, and distractions.
I think it entirely has to do with a generation of software people getting into the field (understandably) because it makes them a lot of money, rather than because they're passionate about software. These, by-and-large, are mediocre technical people and they tend to hire other mediocre technical people.
When I look around at many of the newer startups that are popping up, they're increasingly filled with really talented people. I chalk this up largely to the fact that people who really know what they're doing are increasingly necessary to get a company started in a more cash-constrained environment, and those people are making sure they hire really talented people.
Tech right now reminds me so much more of tech around 2004-2008, when almost everyone that was interested in startups was in it because they loved hacking on technical problems.
My experience with Cursor is that it is excellent at doing things mediocre engineers can do, and awful at anything more advanced. It also requires the ability to very quickly understand someone else's code.
I could be wrong, but my suspicion is this will allow a lot of very technical engineers, who don't focus on things like front-end or web app development, to forgo hiring as many junior webdev people. Similar to how webmasters disappeared once we had frameworks and tools for quickly building the basic HTML/CSS required for a web page.
While you have a good point, I think the experts also branched off with some unreasonable requirements. I remember reading Yegge's blog post, years ago, saying that an engineer needed to know bitwise operators, otherwise they were not good enough.
I don't know. Curiosity, passion, focus, creative problem solving seem to me much more important criteria for an engineer to have, rather than bitwise operations. An engineer that has these will learn everything needed to get the job done.
So it seems like we all got off the main road, and started looking for shibboleths.
I have a hard time imagining someone who has "curiosity, passion, focus, creative problem solving" regarding programming and yet hasn't stumbled upon bitwise operators (and then immediately found them interesting). They're pretty fundamental to the craft.
I can see having to refresh on various bit shifting techniques for fast multiplication etc. (though any serious programmer should at least be aware of this), but XOR is fundamental to even knowing the basics of cryptography. Bitwise AND, NOT, and OR are certainly something every serious programmer should have a very solid understanding of, no?
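To make that concrete, here's a throwaway Python sketch of the operators in question - illustrative only (the values are arbitrary, and the XOR bit is a toy, not real crypto):

    key  = 0b10110100
    data = 0b01101001

    cipher = data ^ key          # XOR: flips the bits selected by the key
    assert cipher ^ key == data  # XOR with the same key again recovers the data

    low_nibble = data & 0x0F     # AND: mask off the high four bits
    with_flag  = data | 0x80     # OR: force the top bit on
    inverted   = ~data & 0xFF    # NOT: flip every bit (masked back down to one byte)

    times_eight = data << 3      # left shift: multiply by 2^3
    div_by_four = data >> 2      # right shift: integer-divide by 2^2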
There is a world of difference between stumbling upon, knowing about in general and being proficient with -because of using on regular basis- some tech piece.
I do know how bitwise AND, NOT, OR, XOR work, but I don't solve problems on a day-to-day basis that use those.
If you give me two bytes to do the operations manually it is going to be a piece of cake for me.
If you give me a problem where the optimal solution is a combination of those operations, I will most likely use another solution that isn't optimal, because in my day-to-day work I use totally different things.
A bit of a tangent: this is also why we have "loads of applicants willing to do tech jobs" and at the same time a "skilled workers shortage" - if a company needs someone who can optimally solve problems with bitwise operations "right here, right now", I am not the candidate; given a couple of months working on those problems I would get proficient rather fast - but no one wants to wait a couple of months ;)
It's probably a matter of domain, but most backend engineers who are working in Java/Golang/Python/Ruby systems have very little use of those, and they don't come up easily. Frontend (web or mobile), it comes up even less.
Can you tell me how this knowledge makes one a serious programmer? In which moments of the development lifecycle they are crucial?
Without judgement, this feels like a switch up. It seems to me that the prior author did not suggest they were necessary, but instead ambient, available, and interesting. Indeed for many they are not useful.
At the same time, that might be a good indication of passion: a useless but foundational thing you learn despite having zero economic pressure to do so.
In the domains you have worked on, what are examples of such things?
It is not a switch up. The author said that an engineer would find these during their journey, and I am asking when. I don't have a strongly held opinion here, I literally want to know when. I am curious about it.
Personally, I think it’s possible to not encounter them. I surely avoided them for a while in my own career, finding them to be the tricks of a low-level optimization expert of which I felt little draw.
But then I started investigating type systems and proof languages and discovered them through Boolean algebras. I didn’t work in that space, it was just interesting. I later learned more practically about bits through parsing binary message fields and wonder what the deal with endianness was.
I also recall that yarn that goes around from time to time about Carmack’s fast inverse square root algorithm. Totally useless and yet I recall the first time I read about it how fun a puzzle it was to work out the details.
I’ve encountered them many times since then despite almost never actually using them. Most recently I was investigating WASM and ran across Linear Memory and once again had an opportunity to think about the specifics of memory layout and how bytes and bits are the language of that domain.
I've mostly done corporate Java stuff, so a lot of it didn't come up, as Spring was already doing it for me.
I've first got to enjoy low level programming when reading Fabien Sanglard breaking down Carmack's code. Will look into WASM, sounds like it could be a fun read too.
I found them in 2015 when I was maintaining a legacy app for a university.
The developer that implemented them could have used a few bools but decided to cram it all into one byte using bitwise operators because they were trying to seem smart/clever.
This was a web app, not a microcontroller or some other tightly constrained environment.
One should not have to worry about solar flares! Heh.
Maybe. Every time I write something I consider clever, I often regret it later.
But young people in particular tend to write code using things they don’t need because they want experience in those things. (Just look at people using kubernetes long before there is any need as an example). Where and when this is done can be good or bad, it depends.
Even in a web app, you might want to pack bools into bytes depending on what you are doing. For example, I’ve done stuff with deck.gl and moving massive amounts of data between webworkers and the size of data is material.
It did take a beat to consider an example though, so I do appreciate your point.
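For anyone curious what that bool-packing looks like in practice, a minimal sketch (the flag names are made up, purely illustrative):

    # Each boolean gets one bit instead of a whole byte (or more).
    IS_ACTIVE = 1 << 0
    HAS_2FA   = 1 << 1
    IS_ADMIN  = 1 << 2

    flags = 0
    flags |= IS_ACTIVE | HAS_2FA   # set two flags
    flags &= ~IS_ADMIN             # clear one flag

    if flags & HAS_2FA:            # test a flag
        print("2FA enabled")

    # Eight bools fit in one byte, which starts to matter when you're shipping
    # millions of records to a webworker or over the wire.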
Coming from a double major including EE though, all I have to say is that everyone's code everywhere is just a bunch of NOR gates. Now if you want to increase your salary, looking up why I say "everything is NOR" won't be useful. But if you are curious, it is interesting to see how one Boolean operator can implement everything.
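If you'd rather see the "everything is NOR" claim in a few lines than look it up, here's a toy sketch: NOR is functionally complete, so NOT, OR and AND (and therefore everything else) fall out of it.

    def NOR(a: bool, b: bool) -> bool:
        return not (a or b)

    def NOT(a):    return NOR(a, a)
    def OR(a, b):  return NOT(NOR(a, b))
    def AND(a, b): return NOR(NOT(a), NOT(b))

    # Sanity check against Python's own operators
    for a in (False, True):
        for b in (False, True):
            assert OR(a, b) == (a or b)
            assert AND(a, b) == (a and b)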
I can understand writing your own instance and deployment "orchestrator", but I would not consider trying k8s for fun, because it just seems like some arbitrary config you have to learn for something someone else has built.
Nobody who uses bit flags does it because they think it makes them look clever. If anything, they believe it looks like using the most basic of tools.
> One should not have to worry about solar flares!
Do you legitimately believe that a bool is immune to this? Yeah, I get that this is a joke, but it's one told from a conceit.
This whole post comes off as condescending to cover up a failure to understand something basic.
I get it, someone called you out on it at some point in your life and you have decided to strike back, but come on... do you really think you benefit from this display?
I made a concerted effort to understand the code before I made any effort to adapt it to the repo I was working on. I'm glad I did (although honestly, it wasn't in the remotest bit necessary to solve the task at hand!)
> someone who has "curiosity, passion, focus, creative problem solving" regarding programming
would find this on their journey. Whereas you are describing someone who merely views programming as a job.
It's perfectly fine to view programming as "just work" but that's not passion. All of the truly great engineers I've worked with are obsessed with programming and comp sci topics in general. They study things in the evening for fun.
It's clear that you don't, and again, that's fine, but that's not what we're talking about.
Software for you is a job. I work in software because it's an excuse to get paid to do what I love doing. In the shadow of the dotcom bust most engineers were like this, and they were more technical and showed more expertise than most software engineers do today.
It might be an indication for passion, but not knowing them does not indicate lack of it.
The thing is, there is an unfathomable amount of such things to learn, and if somebody doesn't stumble upon them or doesn't spend time with them, it doesn't indicate a lack of passion.
I'm sorry but bitwise operators are the most fundamental aspect of how computing works. If you don't understand these things it does indicate a lack of passion, at least in regards to programming and computation.
Bitwise operators are not a particularly complex topic and extend from the basics of how logic is implemented in a computer system.
Personally I think everyone interested in programming should have, at least once, implemented a compiler (or at least an interpreter). But a compiler is a substantial amount of work, so I understand that not everyone can have this opportunity. Understanding bitwise operators, however, requires a minimal investment of time and is essential to really understanding the basics of computing.
Guess I have a lack of passion then, despite building countless full-stack side projects and whatnot in my teenage years, even before I knew these could make me money. I started out with PHP using txt files as storage in my teens, so none of what you said would be relevant to what I've experienced, and that was, I would say, almost 20 years ago. And in my high school years I took part in coding competitions where I scored high nationally without knowing any bitwise tricks or the like, despite deeply enjoying those competitions.
This weird elitism drives me mad, but maybe truly I haven't done enough to prove my worthiness in this arena. Maybe PHP or JavaScript is not true programming.
You're talking past one another. The whole point of being interested and passionate about a topic is that you want to learn stuff regardless of whether or not it's useful.
I can be interested in front-end development and learn everything there is to know about templating while never knowing about bitwise operators. Point is domains are so large that even if you go beyond what is required and learn for the sake of curiosity, you can still not touch what others may consider fundamental.
Yeah, that sounds like someone claiming to be passionate about math, but who never heard of prime numbers. It's not that prime numbers are important for literally everything in math (although coincidentally they also happen to be important for cryptography), it's just that it's practically impossible to have high-school knowledge of math without hearing about them.
What other elementary topics are considered too nerdy by the cool people in IT today? Binary numbers, pixels...? What are the cool kids passionate about these days? Buzzwords, networking, social networks...?
> I remember reading Yegge's blog post, years ago, saying that an engineer needed to know bitwise operators, otherwise they were not good enough.
I think he's right; nearly every engineer needs a basic understanding of IP routing more today than in the 2000s, with how connected the cloud is. Few engineers do, however.
Every time you need to ask yourself, "Where is this packet going?" you're using bitwise operators.
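To illustrate (a toy sketch, not how any particular router actually implements it): deciding whether a destination is on the local subnet is literally a bitwise AND against the netmask.

    import ipaddress

    def same_subnet(ip_a: str, ip_b: str, prefix_bits: int) -> bool:
        # True if both addresses share the same network under a /prefix_bits mask.
        mask = (0xFFFFFFFF << (32 - prefix_bits)) & 0xFFFFFFFF
        a = int(ipaddress.IPv4Address(ip_a))
        b = int(ipaddress.IPv4Address(ip_b))
        return (a & mask) == (b & mask)   # "Where is this packet going?"

    print(same_subnet("10.0.1.17", "10.0.1.200", 24))  # True  - deliver locally
    print(same_subnet("10.0.1.17", "10.0.2.5",   24))  # False - hand it to the gateway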
Yegge no longer thinks knowing these kinds of details matters so much; I can't find the exact timestamp, but it relates to the skills shifting to being really good at prompting and quickly learning what is relevant.
This is something I have some expertise in (I run a recruiting company that uses a standardized interview, and I used to run assessments for another). It's also something I've thought a lot about.
There is absolutely truth to what you're saying. But like most obvious observations that aren't playing themselves out, there's more to it than that.
-----
One: curiosity, passion, and focus matter a lot. Every company that hires through us is looking for them in one form or another. But they still have to figure out a means by which to measure them.
One company thinks "passion" takes the form of "wanting to start a company someday", and worries that someone who isn't that ambitious won't be dedicated enough to push past the problems they run into. But another thinks that "passion" is someone who wants to tinker with optimizing some FPGA all night long because the platonic ideal of making a 15% more efficient circuit for a crypto node is what Real Technical Engineers do.
These companies are not just looking for passion in practice, but for "passion for".
And so you might say okay, screen for that. But the problem is that passion-for is easily faked - and is something you can easily fail to display if your personality skews the wrong way.
When I interviewed at my previous company many years ago, they asked me why I wanted to work there. I answered honestly: it seemed like a good job and that I'd be able to go home and sleep at night. This was a TERRIBLE answer that, had I done less well on other components of the interview or had they been interviewing me for more than a low-level job, would likely have disqualified me. It certainly would not have convinced them I had the passion to make something happen. But a few years later I was in the leadership of that company, and today I run a successor to it trying to carry the torch when they could not.
If you asked me the same question today about why I started a company, an honest answer would be similar: I do this business because I know it and because I enjoy its challenges, not because it's the Most Important Thing In The World To Me. I'm literally a startup founder, and I would not pass 90% of "see if someone's passionate enough to work at a startup" interview questions if I answered them honestly.
On the flip side, a socially-astute candidate who understands the culture of the company and the person they're interviewing with can easily fake these signals. There is a reason that energetic extraverts tend to do well in business - or rather, there are hundreds of reasons, and this is one of them. Social skills let you manipulate behavioral interviews to your advantage, and if you're an interviewer, you don't want candidates doing that.
So in effect, what you're doing here is replacing one shibboleth that has something to do with technical skill, with another that is more about your ability to read an interviewer and "play the game". Which do you think correlates better with technical excellence?
-----
And two: lots of people are curious, passionate, energetic, and not skilled.
You say that a person with those traits "will learn" everything needed. That might even be true. But "will learn" can be three years, five years, down the line.
One of the things we ask on our interview is a sort of "fizzbuzz-plus" style coding problem (you can see a similar problem - the one we send candidates to prep them - at https://www.otherbranch.com/practice-coding-problem if you want to calibrate yourself on what I'm about to say). It is not a difficult problem by any normal standard. It requires nothing more than some simple conditional logic, a class or two, and the basic syntax of the language you're using.
Many apparently-passionate, energetic, and curious engineers simply cannot do it. The example problem I linked is a bit easier than our real one, but I have reliable data on the real one, which tells me that sixty-two percent of candidates who take it do not complete even the second step.
Now, is this artificial? Yeah, but it's artificially easy. It involves little of the subtlety and complexity of real systems, by design. And yet very often we get code that (translated into the example problem I linked) is the rough equivalent of:
print("--*-")
with no underlying data structure, or
if (row == 1 && col == 3)
where the entire board becomes an immutable n^2 case statement that would have to be wholly rewritten if the candidate were to ever get to later steps of the problem.
Would you recommend someone who wrote that kind of code, no matter how apparently curious, passionate, or energetic they were?
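(For contrast, the shape we're hoping to see looks something like the sketch below - hypothetical, written against the practice problem rather than our real one, and not a rubric; the point is only that the board is data and the output is derived from it:)

    class Board:
        # A toy grid where cells can be marked; rendering is derived from the data.
        def __init__(self, rows: int, cols: int):
            self.cells = [[False] * cols for _ in range(rows)]

        def mark(self, row: int, col: int) -> None:
            self.cells[row][col] = True

        def render_row(self, row: int) -> str:
            return "".join("*" if marked else "-" for marked in self.cells[row])

    board = Board(rows=4, cols=4)
    board.mark(1, 2)
    print(board.render_row(1))  # --*-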
I have failed simple coding interviews because of stress. Been coding for 2 decades or more, know I'm good at it, and have had multiple other people confirm this. But sometimes under interview stress, I can lock up. Especially if something goes fubar like their on-line editor.
If I was at work, that wouldn't be a problem, take 3 minutes and come back to it. You don't have that in an interview.
I'm thinking of the observation from the speed climbing event at the Olympics. They had set up the course wrong, and one of the holds in one of the lanes was like 2cm out of position-- and it wasn't even a hold that was used by the climbers. But that was enough that people in that lane consistently lost.
Yeah, this is one of the best arguments against timed interview problems, and I do think it's a good one. You could justifiably argue for a lot of other structures, but those structures do come with their own trade-offs (like time investment for a take-home, or "game-ability" for a behavioral interview, etc).
This has been one of the most interesting and insightful replies I've ever got in a discussion. Thank you for taking the time and explaining your points. It really resonates with what I've seen in real life.
> I think it entirely has to do with a generation of software people getting into the field (understandably) because it makes them a lot of money, rather than because they're passionate about software. These, by-and-large, are mediocre technical people and they tend to hire other mediocre technical people.
It also goes back to if you're passionate about the technology, you'll be willing to spend a weekend learning something new vs just doing the bare minimum to check off this week's sprint goals.
This is happening across the board, not just to IT workers, and I suspect it's the major factor as for why the expected productivity improvements from technology didn't materialize.
Think of your "DevSecOps" people doing 3x the work they should. Do you know what else are they doing? Booking their own business travels. Reconciling and reporting their own expenses. Reporting their own hours, broken down by business categories. Managing their own vacation days. Managing their own meetings. Creating presentations on their own, with graphics they made on their own. Possibly even doing 80% of work on lining up purchases from third parties. And a bunch of other stuff like that.
None of these are part of their job descriptions - in fact, all of these are actively distracting and disproportionately compromise the workers' ability to do their actual jobs. All of these also used to have dedicated specialists who could do it 10x as efficiently, for a fraction of the price.
My hypothesis is this: those specialists, like secretaries, internal graphics departments, financial staff, etc., were all visible on the balance sheet. Eliminating those roles does not eliminate the need for their work to be done - it just distributes it to everyone in pieces (in big part thanks to self-serve office software "improving" productivity). That slows everyone down across the board disproportionately, but the beancounters only see the money saved on the salaries of the eliminated roles - the slowdown only manifests as a fuzzy, generic sense of lost productivity, a mysterious cost disease that everyone seems to suffer from.
I say it's not mysterious; I say that there is no productivity gain, but rather a productivity loss - but because it turns the costs from legible and overt into diffuse and hard to count, it's easy to be fooled into thinking money is being saved.
I agree with the hypothesis. My country has a significant shortage of doctors, and guess what the few doctors spend a large amount of their day on? Paperwork that used to be done by secretaries, whose salary would be maybe 1/3 of the doctor's. It's a massive waste of both money and the doctors' potential, but somehow that's what the free market prefers.
> Think of your "DevSecOps" people doing 3x the work they should. Do you know what else are they doing? Booking their own business travels. Reconciling and reporting their own expenses. Reporting their own hours, broken down by business categories. Managing their own vacation days. Managing their own meetings. Creating presentations on their own, with graphics they made on their own. Possibly even doing 80% of work on lining up purchases from third parties. And a bunch of other stuff like that.
It feels like you are working at the same company as me.
Companies complain all the time about how difficult it is to find competent developers, which is their excuse for keeping most of the teams understaffed. Okay, then how about increasing the developers' productivity by letting them focus on, you know, development?
Why does the paperwork I need to do after visiting a dentist during the lunch break take more time than the visit itself? It's not enough just to bring the receipt to HR; I need to scan the paper, print the scan, get it signed by a manager, start a new process in a web application, ask the manager to also sign it in the web application, etc. I need to check my notes every time I am doing it, because the web application asks me a lot of information that in theory it should already know, and the attached scan needs to have a properly formatted file name, and I need to figure out the name of the person I should forward this process to, rather than the application figuring it out itself. Why??? The business travels were even worse, luckily my current company doesn't do those frequently.
My work is defined by Jira tickets that mostly contain a short description like "implement XY for Z", and it's my job to figure out wtf is "Z", who is the person in our company responsible for "Z", what exactly they meant by "XY", where is any specification, when is the deadline, who am I supposed to coordinate with, and who will test my work. I miss the good old days when we had some kind of task description and the definition of done, but those were the days when we had multiple developers in a team, and now it's mostly just me.
I get invitations to meetings that do not concern me or any of my projects, but it's my job to figure that out, not the job of the person who sent the invitations. Don't get me started on e-mails, because my inbox only became manageable after I wrote a dozen rules that put various junk in the spam folder. No, I don't need a notification every time someone in the company makes an edit on any Confluence page. No, I don't need notifications about people committing code to projects I am not working on. The remaining notifications often come in triplicate, because first I get a message in Teams, then an e-mail saying that I got a message in Teams, and finally a Windows notification saying that I got a new e-mail. When I return from a vacation, I spend my first day or two just sorting out the flood in my e-mails.
On some days, it is lunchtime before I had an opportunity to write my first line of code. So it's the combination of being Agile-Full-Stack-Dev-Sec-Ops-Cloud-Whatever and the fact that everything around me seems designed to make my work harder that is killing me. This is a system that slows down 10x developers to 1x developers, and us lesser mortals to mere 0.1x developers.
And yet we are told that competition will determine that capital and talent will flow towards that most efficient organisations over time. Thus, surely organisations that eschewed this practice would emerge and dominate?
So, either capitalism doesn't work, or your thesis isn't quite right...
I have two other counters to offer, first we have seen GDP per capita gradually increasing in major economies for the last 50 years (while the IT revolution has played out). There have been other technical innovations over this time, but I believe that GDP per capita has more than quadrupled in G8 economies. The USA and Canada have, at the same time, enjoyed a clear extra boost from fracking and shale extraction, and the USA has arguably enjoyed an extra extra boost from world dominance - but arguably.
The second one is simple anecdote. Hour for hour, I can now do far more in terms of development than I did when I was a hard-core techie in the 90s and 2000s. In addition, I can manage and administer systems that are far more complex than those it took teams of people to run at that time (try running a 10GB database under load on Oracle 7, sitting on top of spinning rust and 64MB of RAM, for fun). I can also manage a team of 30's expenses, timesheets, travel requests and so on, which again would have taken a dedicated person to do. I can just do these things, and my job as well, and I do it mostly in about 50 hrs a week. If I wasn't involved in my people's lives and happy to argue with customers to get things better, I could do it in 40 hrs regularly, for sure. But I put some discretion in.
My point is: we are just more productive. It is hard to measure, and anecdote / "lived experience" is a bad type of evidence, but I think it's clearly there. This is why the accountants have been able to reorganise modern business organisations to use fewer people to do more. Have they destroyed value while doing this? Totally. But they have managed to get away with it because 7 times out of 10 they have been right.
Personally I've suffered from the 3/10 errors. I know many of us on here have, but we shouldn't shut our eyes because of that.
> And yet we are told that competition will determine that capital and talent will flow towards that most efficient organisations over time. Thus, surely organisations that eschewed this practice would emerge and dominate?
That's really not how competition works in practice. Verizon and AT&T are a mess internally, but their competitors were worse.
GDP per capita has a lot more to do with automation than individual worker productivity. Software ate the world, but it didn’t need to be great software to be better than no software.
At large banks you often find thousands of largely redundant systems from past mergers all chugging along at the same time. Meanwhile economies of scale still favor the larger bank because smaller banks have more systems per customer.
So sure you’re working with more complex systems, but how much of that complexity is actually inherently beneficial and how much is legacy of suboptimal solutions? HTML and JavaScript are unbelievably terrible in just about every way except ubiquity thus tools / familiarity. When we talk about how efficient things are, it’s not on some absolute scale it’s all about the tools built to cope with what’s going on.
AI may legitimately be the only way programmers in 2070 deal with ever more layers of crap.
As long as you have the economies of scale and huge barriers to entry, companies can stay deeply dysfunctional without getting outcompeted. Especially when the same managers rotate between them all and introduce the "best practices" everywhere.
They're realizing that 10x (+) developers exist, but think they can hire them at 1x developer salaries.
Btw, the key skill you're leaving out is understanding the business your company is in.
If you can couple even moderate developer ability with a good understanding of business objectives, you may stay relevant even while some of the pure developers are replaced by AI.
It's because it's often a suboptimal career move from an individual perspective. By branding yourself as a "FinTech Developer" or whatever instead of just a "Developer" you're narrowing your possible job options for questionable benefit in terms of TC and skills growth. This isn't always the case and of course if you spend 20 years in one domain maybe you can brand yourself as an expert there and pull high $/hr consulting rates for fixing people's problems. Maybe. More likely IMO you end up stagnating.
I went through this myself early in my career. I did ML at insurance companies and got branded as an insurance ML guy. Insurance companies don't pay that well and there are a limited number of them. After I got out of that lane and got some name-brand tech experience under my belt, job hunting was much easier and my options opened up. I make a lot more money. And I can always go back if I really want to.
> I did ML at insurance companies and got branded as an insurance ML guy.
If you're an "ML insurance guy" outside of the US, it may be quite lucrative compared to other developers. It's really only in the US (and maybe China) that pure developers are demanding $200k+ salaries.
In most places in Europe, even $100k is considered a high salary, and if you can provide value directly to the business, it will add a lot to your potential compensation.
And in particular, if your skills are more about the business than the tech/math of your domain, you probably want to leverage that, rather than trying to compete with 25-year-olds with a strong STEM background, but poor real-life experience.
> In most places in Europe, even $100k is considered a high salary
I think it's worth adding here that US developers can have a much higher burden for things like health insurance. My child broke his arm this year, and we hit our very high deductible.
I would like to see numbers factoring in things like public transportation, health insurance, etc., because I personally feel like EU vs US developers are a lot closer in quality of life after all the deductions.
It’s not really about the numbers. My pile of money at retirement would definitely be larger if I moved across the pond. But here my children walk themselves to school from the age of six. I don’t worry about them getting killed by a madman. And even the homeless get healthcare, including functional mental health support. Things no employer can offer in the US.
This is a very reasonable viewpoint. As a different personal opinion, I am glad that I live on my side of the pond.
I am in my early 50s, and having worked in tech for the last 24 (with sane hours and never at FAANG salaries), I own my condo in a nice town, my kids' college is paid for, and my personal accounts are well into seven digits.
This is not all roses: schools in the US stink (I have grown up on the other side of the pond and was lucky to get a great science education in school so I can see the difference), politics are polarized, supermarket produce is mediocre, etc.
The biggest issue for me though is that I suspect that the societies on both sides of the pond are going to go through major changes in the next 10-15 years and many social programs will become unaffordable. I see governments of every first world country making crazy financial and societal choices so rather than depending on government to keep me alive and well I much prefer US salaries allowing me to have money in my own hands. I can put some of that into gold or Bitcoin, or buy an inexpensive apartment in a quiet country with decent finances and reasonable healthcare. Not being totally beholden to the government is what helps me sleep well at night. My 2c.
I'm in Western Europe and I would never let my children walk themselves to school at 6. Europe is far from being a safe place, minus some of Eastern Europe.
That depends on location within Western Europe. Where I live (also W. Europe), it's common for kids to walk to school from the age of 6, or soon after if the kids are not yet mature enough.
In similar places in the US it may not even be the risk of criminals that is the largest threat. It may simply be that the road network is built for cars only, with few safe ways to cross roads without a vehicle.
By comparison, where I live, parents are expected to act as a kind of traffic police a couple of mornings every year. That means that every place where the kids have to cross the road will have an adult blocking all cars from passing even if a kid is merely getting close (even if the speed limit is only 30km/h or 20mph)
In other words, pedestrians get the highest priority while motorists are treated as second class.
Nationwide, about 50-60% of the kids will walk or ride a bike to school in my country (and those who don't tend to either live far from the school or in a higher-crime area).
Compared to ~10% in the US.
Also, while in the US kids from low-income households are more likely to (have to) walk to school, in my country it's possible that the relationship is, if anything, inverted. Having the kids walk to school is seen by many well-resourced families as healthy, both for the physical activity in a screen-rich world and as a way to teach them to be independent and confident.
That means that in neighborhoods with a large percentage of such parents the parents are likely to ensure that the route to school is safe and walkable for kids.
I would also like to see those numbers. IMHO, the biggest difference comes from housing cost. For example, from my own back-of-the-envelope calculations, a £100k Oxbridge job affords a better lifestyle than a $180k NY job mostly because of housing.
However, some US areas have competitively priced housing and jobs that would make the balance tilt in favor of America. In EU, affordable spots with lots of desirable local jobs are becoming increasingly rare. Perhaps Vienna, Wrocław and a few other places in Central/Eastern EU.
I don't know how NY is in comparison, but housing in Cambridge is almost as expensive as in London. A detached 3br starts at £700k. NIMBYs keep killing any expansion of supply.
"Water supply issues are already holding back housing development around the city. In its previous draft water resources management plan, Cambridge Water failed to demonstrate that there was enough to supply all of the new properties in the emerging local plan without risk of deterioration.
The Environment Agency recently confirmed that it had formally objected to five large housing developments in the south of the county because of fears they could not sustainably supply water. It has warned that planning permission for more than 10,000 homes in the Greater Cambridge area and 300,000 square metres of research space at Cambridge University are in jeopardy if solutions cannot be found.
A document published alongside the Case for Cambridge outlines the government’s plan for a two-pronged approach to solving the water scarcity issue, to be led by Dr Paul Leinster, former chief of the Environment Agency, who will chair the Water Scarcity Group.
In the long term, supply will be increased, initially through two new pieces of infrastructure: a new reservoir in the Fens will deliver 43.5 megalitres per day, while a new pipeline will transfer 26 megalitres per day from Grafham Water, currently used by Affinity Water.
But, according to Kelly, a new reservoir would only solve supply requirements for the existing local plan and is “not sufficient if you start to go beyond that” – a point that is conceded in the water scarcity document. "
The NIMBYs are the ones trying to stop the reservoir from being built. They created the problem they're complaining about. A big hole in the ground with water in it is not a complicated piece of infrastructure, but the planning system is so dysfunctional and veto-friendly that the construction timeline has been pushed out into the 2030s, in the best case. Previous generations got them done in two years flat. It is an artificial problem.
Same thing with transport. "We can't build new houses because it would increase car traffic", meanwhile putting up every barrier they can think of to stop East-West Rail.
Just adding onto this because I can't edit: it beggars belief that water supply would ever be a limiting factor for urban growth in England. It's preposterous that this is even an issue. Yes, Cambridgeshire is the driest region in the country, but that's only a relative thing! It still gets quite a lot of rain! Other countries have far less rainfall in absolute terms and they grow their cities just fine, because they build reservoirs and dams and water desalination plants. Nature has not forced this situation on us, we are simply choosing not to build things.
US big tech devs typically make 3x+ what the Euro equivalent does. A $4,500 deductible isn't really materially relevant. I (a US dev) have a high deductible plan, but my employer contributes substantially to it anyway.
IMO big tech is also a small part of the US tech sector (I'm not in big tech). The US idolizes FAANG, but there are a heck of a lot of other companies out there.
edit: Yes they do employ a ton of people, but most people I know don’t make those salaries.
I live in a location that wouldn't have public transportation even in Europe. And my healthcare, while not "free," was never expensive outside of fairly small co-pays and a monthly deduction that wasn't cheap but was in the hundreds per month range. Of course, there are high deductible policies but that's a financial choice.
> I would like to see numbers factoring in things like public transportation, health insurance, etc
If that's worth more than $50k or so, anyone living in the US who's not making significantly more than the median wage would be in a pretty horrible spot financially.
> hit our very high deductible.
Isn't the maximum deductible that's allowed "only" $16k?
Also taxes are usually significantly higher in Europe with some exceptions (e.g. Switzerland vs California, though you need to pay for your health insurance yourself in Switzerland ).
> rather than trying to compete with 25-year-olds with a strong STEM background, but poor real-life experience.
I have never seen this happen. All the new grads I've ever worked with (from Ivy League schools as well) are pretty much incapable of doing anything without a lot of handholding; they lack so much of everything: experience, context, business knowledge, political awareness, technical knowledge (I still can't believe "Designing Data-Intensive Applications" isn't a required read in college for people who want big tech jobs) and even the STEM stuff (if they're so good, why do I have to explain why we don't use averages?).
Which is fine, they are new to the market, so they need to be educated. But other than CRUD apps, I doubt these people can do anything complex without supervision; we're not even in the same league to compete with each other.
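(For anyone wondering about the averages remark - and assuming the usual context of skewed operational data like latency, which is my assumption, not the parent's words - a quick illustration of why the mean alone misleads:)

    import statistics

    # Hypothetical response times in ms: mostly fast, one pathological outlier.
    latencies = [90, 95, 100, 100, 105, 110, 115, 120, 125, 4000]

    print(statistics.mean(latencies))    # 496.0  - the "average" looks terrible
    print(statistics.median(latencies))  # 107.5  - the typical request is fine
    print(sorted(latencies)[-1])         # 4000   - the tail is the real story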
I've never gotten avaricious small-town business types to believe that, if the proprietary language you spent a lot to buy is EOL, the cheap thing to do is hire some interns from your local uni who took a compilers class to write an implementation or translator for $0.00 and some good experience. Although… it's true.
Pay me enough and I'll suck up everything about municipal waste management that any human being has ever said. Pay me 'market rates' and you'll get the type of work the market produces.
Funny, my world is full of the other kind of engineer. We all come with domain knowledge built in, and programming is the complex domain many people don’t want to learn.
I've worked in higher ed. Many of the masters students I see want the credential, but they have no desire to write code. Copying and plagiarism are rampant.
Which is short sighted, because if they’re on the other side of the interview table from me it takes about 60 seconds to find them out. Competence speaks a different language than fraud.
Why would you, 1) Learn a complex skill like software development (which now includes devops stuff), 2) learn a complex business domain, and 3) get paid for only doing one?
> If you can couple even moderate developer ability with a good understanding of business objectives, you may stay relevant even while some of the pure developers are replaced by AI.
By 'stay relevant' you mean run your own? Ability to build + align to business objectives = $$$, no reason to be an Agile cog at that point.
Biz side of the house here: for sure, it's always been the case that what you're really "weeding in" is the ICs who are skilled and aware. (And: if I had to do layoffs, my stack rank would be smack dab in here also, btw.)
So it's not going to stop. The typical C-suite that holds real power has absolutely zero clue about IT complexity; to them we are overpriced janitors. Their fault, their blame, but they are probably long gone by the time these mistakes fully manifest.
In my banking corp, over the past 13 years I've seen a massive rise in complexity, coupled with an absolutely madly done increase in bureaucracy. I could still do all the stuff that is required, but - I don't have access. I can't have access. A simple task became 10 steps of negotiating with an obscure Pune team that I need to chase 10x and escalate until they actually recognize there is some work for them. Processes became beyond ridiculous: you start something and it could take 2 days or 3 months, who knows. Every single app will break pretty quickly if not constantly maintained - be it some new network stuff, an unchecked unix update, or any other of a trillion things that can and will go wrong.
This means the paper pushers and the folks who are at best average at their primary job (still IT or related) got very entrenched in the processes and won, and the business gets served subpar IT, with projects over time and thus over budget, perpetuating the image of shitty, tolerated, evil IT.
I stopped caring, work to live is more than enough for me, that 'live' part is where my focus is and life achievements are.
- my laundry app (that I must use) takes minutes to load, doesn't cache my last used laundry room, and the list of rooms isn't even fucking sorted (literally: room 3, room 7, room 1, room 2, ...)
- my AC's app takes 45 s to load even if I just used it, because it needs to connect. Worse, I'll bring the temp down in my house and raise it in the evening, but it'll come on even when it's 5F below my target value, staying on for 15+ minutes and leaving us freezing (5F according to __its own thermometer__!)
- my TV controls are slow. Slow enough that I buffer inputs and wait 2-3 seconds for the commands to play out. When pressing the exit button in the same menu (I turn down brightness at night because the auto settings don't work, so it's the exact same behavior), I don't know if I'm exiting to my input, exiting the menu, or just exiting the sub-menu. It's inconsistent!
There's so much that I can go on and on about, and I'm sure you can too. I think one of the worst parts about being a programmer is that I'm pretty sure I know how to solve many of these issues, and in fact sometimes I'll spend days tearing apart the system to actually fix it. Only to be undone by updates that are forced (the app or whatever won't connect, because why is everything done server side ( ┛ ◉ Д ◉ ) ┛ 彡 ┻ ━ ┻ ). Even worse, I'll make PRs on open projects (or open issues another way and submit solutions) that have been working for months and they just go stale, while I see other users reporting the same problems and devs working on other things in the same project (I'll even see them deny the behavior, or just respond "works for me" and close the issue before the opener can respond).
I don't know how to stop caring because these things directly affect me and are slowing me down. I mean how fucking hard is it to use sort? It's not even one line!
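(For what it's worth, the room sort really is a one-liner, even the "natural" version that puts room 10 after room 9; a minimal Python sketch, with hypothetical room labels:)

    import re

    rooms = ["room 3", "room 7", "room 1", "room 10", "room 2"]

    # Natural sort: compare the numeric part as a number, not as text,
    # so "room 10" lands after "room 9" instead of right after "room 1".
    def natural_key(label):
        return [int(tok) if tok.isdigit() else tok.lower() for tok in re.split(r"(\d+)", label)]

    print(sorted(rooms, key=natural_key))
    # ['room 1', 'room 2', 'room 3', 'room 7', 'room 10']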
I mean this is my TV menu. I'm adjusting brightness (the explicit example given). Does Apple TV control the whole TV settings?
Fwiw, my TV just ends up being a big monitor because all the web apps are better and even with all the issues jellyfin has, it's better than many of those. I just mostly use a mouse or kdeconnect.
Speaking of which, does anyone have a recommendation for an android keyboard that gives me things like Ctrl and super keys? Also is there a good xdotool replacement for Wayland? I didn't find ydotool working as well but maybe I should give it another try.
I can suggest this setup and think it'll work for many. My desktop sits behind my TV because it mostly does computational work, might run servers, or gaming. I'm a casual and so 60fps 4k is more than enough even with the delay. Then I just ssh from my laptop and do most of the work from there. Pretty much the same as my professional work, since I need to ssh into hpc clusters, there's little I need to do on the machine I'm physically in front of (did we go back to the 70's?)
This is simple: we can't just trust each other. When programming started, people were mostly interested in building things, and there was little incentive to spoil other people's work. Now there is money to be made, either through advertising or through malpractice. This means that people have to protect their code from others. Program code is obfuscated (compiled and copyright enforced) or stored in a container with a limited interface (cloud).
It's not a technical issue, it's a social issue.
Applying a social mindset to technical issues (asking your compiler to be your partner, and preparing them a nice dinner) is equally silly as applying a technical mindset to social issues.
> When programming started, people were mostly interested in building things, and there was little incentive to spoil other people's work. Now there is money to be made, either through advertising or through malpractice
Yeah, I lean towards this too. The signals I use now to determine good software are usually things that look auxiliary, because I'm actually looking for things that tell me the dev is passionate and "having fun." Like easter eggs, or little things where it looks like they took way too much time to make something unimportant pretty (keep doing this, devs. I love it and it's appreciated. Always makes me smile ^_^). But I am also sympathetic, because yeah, I also get tons of issues opened that should have been a Google search or are wildly inappropriate. Though I try to determine if these are in good faith, because we don't get wizards without noobs, and someone's got to teach them.
But it all makes me think we forgot what all of this is about, even "the economy." Money is supposed to be a proxy for increasing quality of life, and not even just on a personal level. I'm happy that people can get rich doing work they're passionate about, but I feel that the way current infrastructures are, we're actively discouraging or handcuffing people who are passionate. Or that we try to kill that passion. Managers, let your devs "have fun." Rein them in so that they don't go down rabbit holes that are too deep and pull too far away, but coding (like any engineering or any science) is (also) an art. When that passion dies, enshittification ensues.
For a concrete example: I'm wildly impressed that, 99.9% of the time I'm filling out a form that includes things like a country or timezone, my country isn't autoselected and a copy isn't located at the top (not moved! copied!). It really makes me think that, better than chasing leetcode questions for interviews, you could ask someone to build a simple thing, and what you actually look for is the details and little things that make the experience smoother (or the product better). Because it is hard to teach people about subtlety, much harder than teaching them a stack or a specific language (and if they care about the little things they'll almost always be quicker to learn those things). Sure, this might take a little more work to interview people and doesn't have a precise answer, but programs don't have precise answers either. And given how long and taxing software interviews are these days, I wouldn't be surprised if slowing down actually ends up speeding things up and saving a lot of money.
> my laundry app (that I must use) takes minutes to load
Do they have a place to mail your complaints? Who forces you to use the app? Annoy them. Annoy their boss. Write a flier pointing out how long it takes to use the app and leave those fliers in every laundry room you visit (Someone checks on those machines, right?). Heck, send an organization-wide email pointing out this problem. (CC your mayor, council-member, or congressional representative.) (You don't have to do all of these things, but a bit of gentle, non-violent, public name-and-shame can get results. Escalate gently as you fail to get results.)
> my AC's app takes 45 s to load even if I just used it, because it needs to connect
If I were in your shoes, assuming I had time, I might (a) do the above "email the company with a pointed comment about how their app sucks" or (b) start figuring out how to use Home Assistant as a shim / middleman to manage the AC, and thus make Home Assistant server and its phone app the preferred front-end for that system (c) write a review on your preferred review site indicating the app is a pile of garbage
> Even worse, I'll make PRs on open projects (or open issues another way and submit solutions) that have been working for months and they just go stale, while I see other users reporting the same problems and devs working on other things in the same project (I'll even see them deny the behavior, or just respond "works for me" and close the issue before the opener can respond)
Admittedly, the heavy-handed solution for this is to make a software fork, or a mod-pack, or "godelski's bag of fixes" or whatever, and maintain that (ideally automating the upkeep on that) until people keep coming to you for the best version, rather than the primary devs.
---
No, I don't do this to everyone I meet or for every annoyance (it's work, and it turns people away if you complain about everything), but emails to the top of the food chain pointing out that basic stuff is broken sometimes gets results (or at least a meeting where you can talk someone's ear off for 30 minutes about stuff that is wrong), especially if that mail / email also goes to other influential people.
I'm pretty chill and kind and helpful, but when something repeatedly breaks and annoys several people every day, you might hear about it in several meetings that I attend, possibly over the next year, until I convince someone to fix it (even if it's me who eventually is tasked with fixing it).
Short of writing to my mayor, I've done most of that. Big motivation to learn reverse engineering and some basic hacking.
I can't hack everything.
I can't fix everything.
I need time for my own things. I need time to be human.
But no one is having it. My impression is that while everyone else is annoyed by things there's a large amount of apathy and acceptance for everything being crud.
I know you're trying to help, but this is quite the burden on a person to try to fix everything. And things seem to be getting worse. So I'm appealing to the community of people who create the systems. I do not think these are things that can be fixed by laws. You can't regulate that people do their best or think things through.
I appeal here because if I can convince other developers that slowing down will speed things up, then many more will benefit (including the developers and the companies they work for). Even convincing a few is impactful. Even more when the message spreads.
Small investments compound and become mighty, but so do "shortcuts".
>Do they have a place to mail your complaints? Who forces you to use the app?
Presumably the building he lives in contracted out laundry service to some third party company, which is in charge of the machines and therefore the shitty app. In this situation there really isn't any real choice to vote with your wallet. They can tell you to pound sand and there's nothing you can do. Your only real option is to move out. Maybe if OP is lucky, owns the property, and is part of the HOA, he might be able to lobby for the vendor to be replaced.
Most landlords are not complete assholes. They want to do what is reasonable to keep their tenants happy. It's much easier to re-sign a happy tenant than to have to turn over an apartment and spend time and money marketing it and possibly have it vacant for a while. If all the tenants hate the laundry app and let the landlord know (politely, with a list of machines that don't work, dates/times, etc), most likely it will have an effect.
In my anecdotal experience, many New York City landlords don’t see their tenants as human beings, just a revenue source. Tenants complain? Maybe a city inspector shows up, a day or days later, so the landlord can turn the heat/water back on, and the inspector reports “no issue found.” People get mad and move out? New tenants pay an even higher rent! Heard horror stories about both individual and management company landlords. Can’t be the only city like this.
I’m pretty sure the long histories of social unrest under feudalism, the French Revolution, and the mere existence of Marxism and renters’ rights laws strongly beg to differ with your contention.
I think it depends. If your landlord is a person, then yeah, my experience hasn't been all that bad. But if your landlord is a bureaucratic organization, they couldn't care less. There's just such high variance in how much of an asshole a landlord is. And I've never experienced a chill one that is in charge of an apartment complex.
To put something in perspective: a few months ago they changed the mailing policy so that only the lease holder could pick up packages (citing safety. This is grad student housing, so you already need to get your ID scanned to pick up a package or specifically authorize someone. Note that family and children can and do get university IDs -- not just for this). I wrote an email explaining that this was illegal (citing the relevant laws [0]) and explained how this decreases safety, since not everyone is living with a responsible or even kind lease holder (luckily I'm the lease holder, but I've been in that situation before. Student housing...). I got an annoyed letter back doubling down on the safety issue, so I escalated the issue twice (including reporting it to the post office). I assume some lawyer finally saw it and freaked out. Now the policy is that anyone in the house can pick up a package ( - _ - ; ). I've had no such success with the laundry app (they took away every other payment method[1]).
But my overall point is that 95% of these issues could be resolved by people taking a little more time to understand the consequences of their decisions. People will spend more time and energy defending bad decisions than resolving them. We all make mistakes, so that's not an issue. Especially since doing "good" is incredibly hard. But I'm pissed when we deny cracks in the system or that problems exist. I'm even okay with acknowledging something is low priority. But denial and gaslighting are what I more often experience, and that's when I get angry.
[0] which my local postman confirmed and even double-checked for me
[1] fun fact: they once double-charged me. I sent them the logs from the app, a screenshot, and a screenshot from my bank showing the charges. They claimed to have no record. So I issued a chargeback. They didn't seem to care, and tbh there are lots of businesses like this, because they have such dominance over the market.
I've seen clearly illegal behavior from entire industries, and reporting it does nothing. If you want an unambiguous example, pull up archive, pick a 3D printer manufacturer, and read https://www.ecfr.gov/current/title-16/chapter-I/subchapter-B... (hell, I even tried getting this on the radar of YouTubers. No one cares). The false advertising they do is worse than the example the fucking FTC gives (when I emailed a few companies trying to let them know, in case they weren't aware, they said what they did was legal. My email to them was really just "hey, I noticed this. Figured you didn't know, so maybe this will help so someone doesn't try to sue you". Not asking for anything in return, because all I want is to stop playing these stupid games that eat up so much of my time).
... it is not "us" .. there is a collection of roles that each interact to produce products to market or releases. Add some game-theory to the thinking on that?
"Us" can mean different things depending on the context. The context here is "programmers" not "shitty programmers" (really we can be inclusive with engineers and any makers). Considering that it seems you appear to understand the context I'm a bit confused at the comment. You don't have to inject yourself if you're not in the referred to group (i.e. maybe I'm not talking about you. I can't know because I don't even know you)
I need to stress so much -- especially because of how people think about AI and intelligence (my field of research[0]) -- that language is not precise. Language is not thought. Sure, many think "out loud" and most people have inner speech, but language (even inner) is compressed. You use language to convey much more complex and abstract concepts in your head to someone who is hopefully adapting their decompression to uncover the other person's intended meaning. This isn't a Chinese room where you just look in the dictionary (which also shows multiple definitions for anything). I know it's harder with text and across disconnected cultures, but we must not confuse this, and I think it's ignored far too often. Ignoring it seems to just escalate the problems.
> Add some game-theory to the thinking on that?
And please don't be dismissive by hand-waving. I'm sure you understand game theory. Which would mean you understand what a complex problem this is to actually model (to a degree of accuracy I doubt anyone could achieve). That you understand perturbation theory and chaos theory and how they are critical to modeling such a process, meaning you only get probability distributions as results.
[0] I add this context because it decreases the amount of people that feel the need to nerdsplain things to me that they don't research.
Sorry, your response was so generic, unrelated to the conversation, and unhelpful -- especially given the context of the reply -- I mistook you for a machine with no ability to reason.
I forgive your mid sarcasm as I actually find most of your commentary on HN good. My point remains, so I'll explain it to you to help you out. All those "problems" you listed are symptoms of greater problems, as a sibling suggested, although "late stage capitalism" is now seen as a lazy phrase. The greater problem is that increases in complexity require greater increases in energy to maintain, and as a result, corners are being cut everywhere. In your specific case, UI/UX and subsequent QOL. Hence, these are signs of a predicament, rather than a problem, because removing complexity that has been ossified into a system requires even more effort than maintaining the complexity, with no guarantee of success. I believe it was either Joseph Tainter (Collapse of Complex Societies) or Peter Turchin (End Times) that mentioned the only example he could give of a civilization successfully simplifying itself, at least for a time, was the Byzantine Empire, which of course did not make the resurgence that those simplifying measures were undertaken to facilitate.
I appreciate this reply much more and so I'll refrain from further sarcasm.
But to explain the point of frustration: I found the comment indistinguishable from public intellectual masturbation. I chastised pojzon for this because I believe my OP demonstrates that I'm quite aware of the issue, and I think a reasonably intelligent person can infer that my understanding is deeper than what can be conveyed in a simple HN comment. I found your responses similar, being short quips.
But if you pay close attention to my OP, you'll find that I'm making a call to other developers to join me in pushing back. To not create the problems in the first place.
The reason I do not take kindly to responses like those is that they are defeatist. Identifying problems is the easy part -- although not always easy in itself. But there's little value in that if the information is not used to pursue a means of resolving them. I appreciate this a bit more -- though I still find it lacking -- as at least you're suggesting references that one may wish to read to understand the issues more deeply (though I feel similarly about much of this class of sources).
I don't mind adding comments to help refine understanding, but short quips don't do that. We can do better than a mic drop. And clearly I'm not shying away from verbosity. In today's age we're too terse to be meaningful. To truly be terse and effective requires high skill and time to refine. It is not the nature of a typical conversation.
And I actually wholeheartedly disagree with many of these sources (despite agreeing with some points, I find the solutions either non-existent or severely lacking). The problem isn't that we need to remove complexity. That's a false notion. One of the many issues is that we need to recognize that complexity is unavoidable. Yes, make things as simple as possible, but no more. It's important to remember that truth and accuracy are bounded by complexity, while lies are not. Shying away from complexity is turning away from growth. Late-stage capitalism and socialism have a similar fatal flaw: bureaucrats who believe everything can be understood from numbers on a spreadsheet.
We're people, not machines. The environment evolves with time. Our understanding becomes more nuanced and complex as we grow. But the fuzziness never goes away because we can never measure anything directly, even if it stayed the same. The most important thing stressed in experimental science is understanding your error and limitations of your model (at least that's what I was taught and what I teach my students). Because you only have models, and all models are wrong (though some are better).
There's so much I can say on this topic, and there's so much that has been said before. But no one wants to talk about the (Von Neumann's) elephant in the room.
No, we're still going to talk past each other. The root of all your UI problems was an increase in complexity, but your experience was degraded, and now you're up a creek. Progress, growth, and complexity do not need to be intertwined; you just believe so, the same way "capitalist realism" makes it hard to imagine a future without capitalism.
Before your laundry app, your AC app, and your TV being smart, it may not have been "seamless" to do those chores or work with those appliances, but you can't say your experience is seamless now, based on your anecdotes. A hammer is not made better by putting a computer chip in it. Simpler. Is. Better.
I wouldn't even say it's bureaucrats who think everything can be understood from spreadsheets. It's Pichai, it's Nadella, it's Altman, saying software will eat the world and data will lead us to a singularity but they're the same crowd trying to tell us Juicero is the future.
Your problem is a systems problem. The only way you will solve this systems problem is by changing the purpose of the system when you don't have any control over the inputs, interactions, or motivations of the system, because the tech brahmins that license the software and demand more money for minimal effort hold all the levers, and the AI craze is their final gambit to accrue all the power, irreversibly, forever. That's why it's a predicament.
In my experience, PMs rarely do the chasing down for you. Most of them ask why you're still blocked and if you've done anything about it. Even if they do do something to chase them down, you're still on the hook for the deadlines.
ah, see, so your job description includes, besides being a dev, also being a project manager. that's fine, there's nothing bad about it, it's just that your job requires a bit more from you than other places.
They're cutting headcount because they have no conception of how to make a good product, so reducing costs is the only way of making the bottom line go up.
That's across the board, from the startups whose business plan is to be acquired at all costs, to the giant tech companies, whose business plan is to get monopoly power first, then figure out how to extract money later.
Probably a lot of that is to do with the short-term profit mindset. There is tons of software that is far from optimal, breaks frequently and has a massive impact on human lives (think: medical record systems, mainframes at banks, etc.). None of it is sexy, none of it is something you can knock up a PoC for in a month, and none of it is getting the funding to fix it (instead funding is going to teams of outsourced consultants who overpromise and just increase their budgets year on year). Gen AI won't make this better I think.
> They're cutting headcount because they have no conception of how to make a good product, so reducing costs is the only way of making the bottom line go up.
The field is wide open for a startup to do it right. Why not start one?
Well, either (1) our free market is not healthy and the ability to form new companies that do better is weak and feeble, or (2) the current companies are doing about as well as possible and it is hard to out-compete them.
If it's the latter (number 2), then we need to start asking why American companies "doing about as well as possible" are incapable of producing secure and reliable software. Being able to produce secure and reliable software, in general and en masse, seems like a useful ability for a nation.
Reminds me of our national ability to produce aircraft; how's the competition in that market working out? And are we getting better or worse at producing aircraft?
Maybe because "secure and reliable software" isn't what makes successful companies. Just like how people complain about tight airplane seats and getting nickeled and dimmed but continuously pick airlines that have cramped seats but are cheaper over airlines with more generous space but cost more.
Companies deliver what customers are willing to pay for.
If customers are willing to pay for X, and no companies make X available, you have a great case to make to a venture capitalist.
BTW, in every company I've worked for, the employees thought management was stupid and incompetent. In every company I've run, the employees thought I was stupid and incompetent. Sometimes these people leave and start their own company, and soon discover their employees think they're stupid and incompetent.
It's just a fact of life in any organization.
It's also a fact of life that anyone starting a business learns an awful lot the hard way. Consider Senator George McGovern (D), who said it best:
George McGovern's Mea Culpa
"In retrospect, I wish I had known more about the hazards and difficulties of such a business, especially during a recession of the kind that hit New England just as I was acquiring the inn's 43-year leasehold. I also wish that during the years I was in public office, I had had this firsthand experience about the difficulties business people face every day. That knowledge would have made me a better U.S. senator and a more understanding presidential contender. Today we are much closer to a general acknowledgment that government must encourage business to expand and grow. Bill Clinton, Paul Tsongas, Bob Kerrey and others have, I believe, changed the debate of our party. We intuitively know that to create job opportunities we need entrepreneurs who will risk their capital against an expected payoff. Too often, however, public policy does not consider whether we are choking off those opportunities."
I think those who run companies are often stupid and incompetent, and the workers are probably right most of the time. Just look at the quote again, there's lots of things people don't know when they're starting a company.
It would be nice if being smart and competent was the key to success in our society, but you hint at the real key to success in your own comment--getting favor and money from those who already have it.
You didn't really engage with the other half of my comment, but I'll say it again, in general our society seems to be crumbling and our ability to get things done efficiently and with competence is waning. Hopefully the right people can get the blessing of venture capitalists to fix this (/s).
The problem isn't that those who run companies are stupid and incompetent (which is probably true). It's that the people lobbing those jabs would also be stupid and incompetent if they were running the companies, but think they have the answers.
Right. I agree that we all have our share of stupidity and incompetence. Some of us are stupid and incompetent on a worker's salary, and some of us are stupid and incompetent on a CEO's salary. See, for example: https://news.ycombinator.com/item?id=38849580
Not sure why this thread got consumed by a strawman that "Most people running businesses are stupid/incompetent". That is obviously not true.
What I have realized is that most employees never do even a basic analysis of their industry vertical, the key players and drivers etc. Even low-level employees who will never come face-to-face with customers can benefit from learning about their industry.
The flip side is that a lot of business people (I exclude people who start their own companies or actively take an interest in a vertical) are also mostly the same. They care about rising from low-level business/product role to a senior role, potentially C-suite role, and couldn't care less about how they make this happen. Many times, it is hard to measure a business person's impact (positive or negative) - think about Boeing. All their pains today were seeded more than 20 years ago with a series of bad moves but the then CEO walked off into the sunset with a pile of cash and a great reputation. OTOH, there was a great article yesterday on HN from Monica Harrington, one of the founders of Valve whose business decisions were crucial to Valve's initial success, but had to sell her stake in the company early on.
I think business, despite its outsize role in the success/failure of a company, follows the same power law of talent that most other professions carry. Most people are average, some really good, some real greedy etc.
I think there's a misunderstanding. They _are_ doing it right; it's just that the right way doesn't involve building a good product, because a good product doesn't make you any more valuable. There's no way a startup can make more money than the guy speedrunning acquisition: making gonzo dollars fast >>> making (probably less) gonzo dollars in the long term.
If you think there's a problem with this model (and based on your wording of "doing it right", this seems to be the case), it's largely in the incentive structure, not the actors.
The field is open(tm) as in: under heavy regulatory capture, expecting the right school for VC funding, laden with competitors propped up by tax subsidies (if EU) and/or H1B farms (if US)... a tough time for even a moderate hustler.
Because techno-meritocratically correct business principles are far from capitalism best practices. Even employees will hate it because you won't be bringing home the bacon.
Would UBI and LVT do the trick? I'm okay with minimum wages on the idea that a democratic government is the biggest possible de facto labor union, but in general I don't want the government making too many rules about how you can hire and fire people
What would this mean in practice? Forcing companies to justify layoffs? Mandating minimum amount of people in frontend/backend/devops/operations/security roles?
Yeah this after all. Private corporations exist to make money. They minimize investment and maximize returns. They progressively suppress wages and inflate profits. Clever individual choices and self discounts merely create temporary competitive advantages against other poor laborers in the market.
If you manage to secure the bag that way and exit that class altogether, good for you, but that solves nothing for the rest. There's no be all end all threshold that everyone can just stay above and stay ahead of inflation.
The city of Seattle decided that gig delivery drivers needed to be protected. They passed a minimum wage for them. The income of gig drivers promptly dropped, as the rise in prices corresponded with a drop in customers.
>Why did the prices for Uber/Lyft rise everywhere else too?
Presumably it rose in Seattle higher/faster than it would otherwise. The source that he provided in a sibling comment says sales dropped "immediately", which seems to corroborate this. It's lazy to argue "well prices rose elsewhere too so the minimum wage law couldn't possibly have had an impact"
Well, I don't think I'm lazy. IIRC prices were already moving up and the rent-a-car-apps were exiting. And Uber/Lyft were very far on funding, needing some margin. And then there was this other trigger event of worker pay. Which is a nice corporate spin opportunity. And we've seen that pattern 100s of times (blaming workers for price increase).
Leaving aside the relevance of an Alaskan newspaper talking about local Seattle politics, their only source is a news article from Seattle written the week of the law taking effect.
As I said in my original comment, show me something that isn't from the first couple of weeks after it took effect. Preferably something scholarly, not just anecdotes from a newspaper
For me DevOps/DevSecOps is a reaction movement against toxic turf wars and silos of functions from ambitious people - not some business people scheme to reduce headcount and push more responsibilities.
I have received e-mails "hey thats DB don't touch that stuff you are not a DBA" or "hey developers shouldn't do QA" while the premise might be right, lots of things could be done much quicker.
I have seen stuff thrown over the wall - "hey, I am a developer, I don't care about some server stuff", "hey, my DB is working fine, it must be your application" - or months spent fixing an issue because no one would take responsibility across the chain.
Like I said, I have no issue with integrated operations models, but if you try to have one person be a Jack of All Trades, you will end up with them all being 'masters of none'.
DevSecOps works very well when you have your coding specialists, operations specialists (including DBAs), and Security specialists all on the same team together, rather than being different silos with different standups and team meetings, etc. But it doesn't work at all well if you just ask the devs to also be Ops and Security, and lay off the rest.
> For me DevOps/DevSecOps is a reaction movement against toxic turf wars and silos of functions from ambitious people
Well, the DevOps grandfathers (sorry, Patrick & Kris, but you're both grey now) certainly wanted to tear down the walls that had been put up between Devs & Ops. Merging Dev & Ops practices has been a fundamentally good change. Many tasks that used to be dedicated Ops/Infra work are now so automated that a Dev can do them as part of their daily work (e.g. spinning up a test environment or deploying to production). This has been, in a sense, about empowerment.
The current "platform engineering"-buzz builds on top of that.
> - not some business people scheme to reduce headcount and push more responsibilities
I imagine that many business people don't understand tech work well enough to deliberately start such schemes. Reducing toil could probably result in lower headcount (no one likes being part of a silo that does the same manual things over and over again just to deploy to production), but by the same token, the automations don't come free. They have to be set up and maintained. Once one piece of drudgery has been automated, something else will rear its ugly head. Automating the heck out of boring shit is not only more rewarding work, it's also a force multiplier for a team. I hope business people see those benefits and aim for them, instead of the aforementioned scheming.
Like most things the decline in quality is probably multi-faceted. There is also the component where tech became a hot attractive field so people flooded in who only cared about the paycheck and not the craft.
That definitely happened in the dotcom bubble. Plenty of "developers" were crowding the field, many of whom had neither any real technical ability nor interest.
The nerds who were into programming based on personal interest were really not affected.
Those who have tech as a passion will generally outperform those who have it as a job, by a large margin.
Developers of the past worked towards the title of webmaster. A webmaster can manage a server, write the code and also be a graphic artist. Developers want to do it all.
What has changed is the micromanagement of the daily standup, which reshapes work into neat packages for conversation but kills any non-linear flow and limits exploration, making things exactly as requested instead of what could be better.
What has also changed in my opinion is the vast landscape of tooling and frameworks at every level of the stack.
We now have containers that run in VMs that run on physical servers. And languages built on top of JavaScript and backed by Shadow Dom and blah blah. Now sure I could easily skip all that and stick a static page on a cdn that points to a lambda and call it a day. But the layers are still there.
I'm afraid only more of this is coming with AI in full swing and fully expect a larger scale internet outage to happen at some point that will be the result of a subtle bug in all this complexity that no single person can fix because AI wrote it.
There's too much stuff in the stack. Can we stop adding and remove some of it?
I never thought of a webmaster as meaning that. When I think of webmaster I think of a person who updates the content on a website using a CMS of some sort. Maybe they know a bit of HTML and CSS, how to copy assets to the web server, that sort of thing. But they are not sysadmins or programmers in the conventional sense. Maybe it just varies by employer.
This comment had me laughing pretty hard, and thanks for making me feel old. Anyway, I guess I'm a "webmaster" in theory, even though I've not worked on the web since the early 2000s (still handy with the LAMP stack, though). It made me laugh because I was at a Supabase meetup recently and some kid told me he was full stack, but he can't ssh into a server? I'm supa confused about what fullstack means these days.
I have stopped calling myself a full stack developer, because the meaning and the role are kind of ambiguous now. Clients don't know what they can talk to me about (everything), and PMs seem to be unsure of what I can be assigned to (anything).
In fairness as well, the frontend tooling landscape has become so complex that while I'm capable of jumping in, I am in no way an efficient frontend developer off the rip.
The tooling has become complex, sure, but so have capabilities and expectations.
I used to “make websites” in the 2000s, then stopped for about 15 years to focus on backend and infrastructure.
Have been getting back into front end the last few months — there was a good bit of learning and unlearning, but the modern web stack is amazingly productive. Next.js, Tailwind, and Supabase in particular.
Last time I checked in to frontend dev, CSS directives fit on an A4 sheet, valid XHTML was a nerd flex and rounded corners were painstakingly crafted from tables and gifs.
I'm only 36, but you're making me feel extremely old.
To me, "developers of the past" were the people working on COBOL and JCL and FORTRAN and DB2, on z/OS or System 390/370/360, to whom "RPG" was only a 4GL[1], not a type of game, and there was no webmaster or graphic designer involved... not some dotcom era dev in the 90s when "webmasters" became a widespread thing.
Here's an interesting article on webmasters and their disappearance below[2].
From my experience it’s agile/scrum being poorly implemented.
So many companies no longer think about quality or design. “Just build this now and ship it, we can modify it later”, not thinking about the ramifications.
No thinking about design at all anymore, then having tech debt but not allocating any sprints to mitigate it.
Sometimes it feels like there is no planning... and as a consequence also no documentation. Or maybe there are tons of outdated documentation because the project is so agile that it was redesigned a few dozen times, but instead of updating the original documents and diagrams, the architects always only produced a diff "change this to this", and those diffs are randomly placed in Confluence, most of them as attached Word or PowerPoint documents.
"But developers hate to write documentation" they say. Okay genius, so why don't you hire someone who is not a developer, someone who doesn't spend their entire time sprinting from one Jira task to another, someone who could focus on understanding how the systems work and keeping the documents up to date. It would be enough to have one such person for the entire company; just don't also give them dozen extra roles that would distract them from their main purpose. "Once we hired a person like this, but they were not competent for their job and everyone complained about docs, so we gave up." Yeah, I suspect you hired the cheapest person available, and you probably kept giving them extra tasks that were always higher priority than this. But nice excuse. Okay, back to having no idea how anything works, which is not a problem because our agile developers can still somehow handle it, until they burn out and quit.
What you're asking for is not for the company to hire technical writers but reverse engineers. That's a much harder role to hire as the people who have the skills to do it are the same developers who don't want to do it.
Ultimately, mature developers just have to write docs. There's no alternative or lifestyle hack managers can come up with.
> What you're asking for is not for the company to hire technical writers but reverse engineers.
Definitely not.
First, if the architects properly document what they designed, there is no need to reverse engineer it. I don't know how frequent this is, but it's my experience that the architects often don't bother to have a model of the entire system -- they only care about the feature they are designing currently. It is then the developer's job to reverse engineer the entire design by looking at the code. And I would like this to stop.
It is so much easier to deploy now (and for the last 5-10 years) without managing an actual server and OS
It just gets easier, with new complexities added on top
In 2014 I was enamored that I didn’t need to be a DBA because platforms as a service were handling all of it in a NoSQL kind of way. And exposed intuitive API endpoints for me.
This hasn’t changed, at worst it was a gateway drug to being more hands on
I do fullstack development because it’s just one language, I do devops because it’s not a fulltime job and cloud formation scripts and further abstraction is easyish, I can manage the database and I haven’t gotten vendor locked
You don’t have to wait 72 hours for DNS and domain assignments to propagate anymore it’s like 5 minutes, SSL is free and takes 30 minutes tops to be added to your domain, CDNs are included. Over 10 years ago this was all so cumbersome
At my company, our sprint board is copy/pasted across teams, so there's columns like "QA/Testing" that just get ignored because our team has no testers.
There's also no platform engineers but IaC has gotten that good that arguably they've become redundant. Architecture decisions get made on the fly by team members rather than by the Software Architect who only shows up now and again to point out something trivial. No Product Owner so again the team work out the requirements and write the tickets (ChatGPT can't help there).
This is happening across most industries. My friends in creative fields are experiencing the same "optimizations". I call this the great lie of productivity. Workers (particularly tech workers) have dreamed of reducing the time spent doing "shit jobs" while amplifying their productivity, so that we can spend more time doing what is meaningful to us... In reality, businesses sell out to disconnected private equity and chase "growth" just to boost the wealth of those at the top. Ultimately this has played out as "optimizing" the workforce by reducing headcount and spreading workers over more roles.
Yep, it's 100% MBA-bros who don't understand the actual technical foundations of their companies "optimizing" away their specialized knowledge workers. They get stuck in fire-and-hire cycles of boom-and-bust, and tend to slowly degrade into obscurity.
You can get people who are average at several things and call them devOps. But you will not get someone with a deep background and understanding in all the fields at the same time.
I come from a back end background, and I am appalled at how little the devOps people I have worked with know about even SQL.
Having teams with people who are experts at different things will give a lot better output. It will be more expensive.
Most devOps I have met, with a couple of exceptions, are front end devs who know a couple of languages, Javascript and Typescript. When it comes to the back-end, it is getting everything possible from npm and stringing it together.
In every team I've worked with, DevOps didn't do any of the development, while being called DevOps. The only thing they were developing were automation for build, tests and deployment. Other than that, they're still a separate role from development. The main difference between pre-devops days and now, is that operations people used to work in dedicated operations teams (that's still the case in some highly regulated places, e.g. banks), and now they work alongside developers.
Sadly, it is an ongoing trend. I have become so discouraged by this subject that I am evaluating my career choices. Farming is the safest bet for now.
Presumably he means safe with respect to task consolidation, as per the topic of discussion. Which I would agree is more or less true, but only because farming already went through its consolidation phase. Hence the old saying that goes something like: "A farmer has to be a mechanic, an engineer, a scientist, a veterinarian, a business manager, and an optimist—all in a single day."
As a farmer, if you think programming, CI/CD pipeline management, and database administration being consolidated into one job is a line too far... Brace yourself!
I wonder how you view farming as the safest bet. Farming is quite challenging and the competition will drive any noob into the ground. Not just the knowledge but also capital.
Anyone that says they are the “devsecops” person is doing it wrong. Companies should still hire for all three roles they just collaborate together. Someone sold that company a mountain of lies
> Now you have new devs coming into insanely complex n-microservice environments, being asked to learn the existing codebase, being asked to learn their 5-tool CI/CD pipelines (and that ain't being taught in school), being asked to learn to be DBAs, and also to keep up a steady code release cycle.
If fewer people need to undertake more roles, I think the simplest things you can get away with should be chosen, yet for whatever reason that's not what's happening.
Need a front end app? Go for the modern equivalent of jQuery/Bootstrap, e.g. something like Vue, Pinia and PrimeVue (you get components out of the box, you don't have to build a whole design system, and you can still do theming if needed). It's also simpler than similar setups with Vuex or Redux in the React world.
Need a back end app? A simple API only project in your stack of choice, whether that's Java with Dropwizard (even simpler than Spring Boot), C# with ASP.NET (reasonably simple out of the box), PHP with Laravel, Ruby with Rails, Python with Flask/Django, Node with Express etc. And not necessarily microservices but monoliths that can still horizontally scale. A boring RESTful API that shuffles JSON over the wire, most of the time you won't need GraphQL or gRPC.
Need a database? PostgreSQL is pretty foolproof, MariaDB or even SQLite can also work in select situations. Maybe something like Redis/Valkey or MinIO/SeaweedFS, or RabbitMQ for specific use cases. The kinds of systems that can both scale, as well as start out as a single container running on a VPS somewhere.
Need a web server? Nginx exists, Caddy exists, as does Apache2.
Need to orchestrate containers? Docker Compose (or even Swarm) still exist, Nomad is pretty good for multi node deployments too, maybe some relatively lightweight Kubernetes clusters like K3s with Portainer/Rancher as long as you don't go wild.
CI/CD? Feed a Dockerfile to your pipeline, put the container in Nexus/Artifactory/Harbor/Hub, call a webhook to redeploy, let your monitoring (e.g. Uptime Kuma) make sure things remain available.
Architectures that can fit in one person's head. Environments where you can take every part of the system and run it locally in Docker/Podman containers on a single dev workstation. This won't work for huge projects, but very few actually have projects that reach the scale where this no longer works.
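To make "fits in one person's head" concrete, here is roughly what the back end and database suggestions above can boil down to; a minimal sketch using Flask and SQLite (both mentioned above), with hypothetical table and endpoint names -- swap in PostgreSQL and your own schema as appropriate:

    # app.py - a tiny monolith: one file, one database, one container if you want one.
    import sqlite3
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    DB = "app.db"

    def db():
        conn = sqlite3.connect(DB)
        conn.row_factory = sqlite3.Row
        return conn

    with db() as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT NOT NULL)")

    @app.get("/notes")
    def list_notes():
        with db() as conn:
            rows = conn.execute("SELECT id, body FROM notes").fetchall()
        return jsonify([dict(r) for r in rows])

    @app.post("/notes")
    def add_note():
        body = (request.get_json(silent=True) or {}).get("body", "")
        if not body:
            return jsonify({"error": "body is required"}), 400
        with db() as conn:
            cur = conn.execute("INSERT INTO notes (body) VALUES (?)", (body,))
        return jsonify({"id": cur.lastrowid, "body": body}), 201

    @app.get("/healthz")
    def healthz():
        # The endpoint an uptime monitor (e.g. Uptime Kuma) can poll.
        return jsonify({"status": "ok"})

    if __name__ == "__main__":
        app.run(port=8000)

Put something like that behind Nginx or Caddy, build it with a plain Dockerfile, and you already have most of what the twenty-job-title version delivers.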
Yet this is clearly not what's happening, and that puzzles me. If we don't have 20 different job titles involved in a project, then the complexity covered under the "GlassFish app server configuration manager" position shouldn't be there in the first place (I once had a project like that, where there was supposed to be a person involved who'd configure the app server for the deployments; once people just went with embedded Tomcat inside of deployable containers, that complexity suddenly dissipated).
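Even the "call a webhook to redeploy" step from the CI/CD paragraph above can be a page of code rather than a platform team; a minimal sketch, assuming Docker Compose on the host, a hypothetical /srv/app compose directory, and a shared secret sent in an X-Deploy-Token header:

    # deploy_hook.py - CI pushes the new image to the registry, then calls this.
    import hmac
    import os
    import subprocess
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    TOKEN = os.environ.get("DEPLOY_TOKEN", "change-me")
    COMPOSE_DIR = "/srv/app"

    @app.post("/redeploy")
    def redeploy():
        sent = request.headers.get("X-Deploy-Token", "")
        if not hmac.compare_digest(sent, TOKEN):
            return jsonify({"error": "bad token"}), 403
        # Pull the freshly pushed image and restart only the services that changed.
        subprocess.run(["docker", "compose", "pull"], cwd=COMPOSE_DIR, check=True)
        subprocess.run(["docker", "compose", "up", "-d"], cwd=COMPOSE_DIR, check=True)
        return jsonify({"status": "redeployed"})

    if __name__ == "__main__":
        app.run(port=9000)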
It matters what you measure. The studies only looked at Copilot usage.
I’m an experienced engineer. Copilot is worse than useless for me. I spend most of my time understanding the problem space, understanding the constraints and affordances of the environment I’m in, and thinking about the code I’m going to write. When I start typing code, I know what I’m going to write, and so a “helpful” Copilot autocomplete is just a distraction for me. It makes my workflow much much worse.
On the other hand, AI is incredibly useful for all of those steps I do before actually coding. And sometimes getting the first draft of something is as simple as a well-crafted prompt (informed by all the thinking I’ve done prior to starting). After that, pairing with an LLM to get quick answers for all the little unexpected things that come up is extremely helpful.
So, contrary to this report, I think that if experienced developers use AI well, they could benefit MORE than inexperienced developers.
Copilot isn't particularly useful. At best it comes up with small snippets that may or may not be correct, and rarely can I get larger chunks of code that would be working out of the gate.
But Claude Sonnet 3.5 w/ Cursor or Continue.dev is a dramatic improvement. When you have discrete control over the context (ie. being able to select 6-7 files to inject), and with the superior ability of Claude, it is an absolute game changer.
Easy 2-5x speedup depending on what you're doing. In an hour you can craft a production ready 100 loc solution, with a full complement of tests, to something that might otherwise take a half day.
I say this as someone with 26 yoe, having worked in principal/staff/lead roles since 2012. I wouldn't expect nearly the same boost coming at less than senior exp. though, as you have to be quite detailed at what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.
> I wouldn't expect nearly the same boost coming at less than senior exp. though, as you have to be quite detailed at what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.
Agreed. I feel like coding with AI is distilling the process back to the CS fundamentals of data structures and algorithms. Even though most of those DS&As are very simple it takes experience to know how to express the solution using the language of CS.
I've been using Cursor Composer to implement code after writing some function signatures and types, which has been a dream. If you give it some guardrails in the context, it performs a lot better.
The one thing I'm a little concerned about is my ability as an engineer.
I don't know if I'm losing or improving my skillset. This exercise of development has become almost entirely one of design and architecture, and reading more than writing code.
Maybe this doesn't matter if this is the way software is developed moving forward, and I'm certainly not complaining while working on a 2-person startup.
Honestly haven't tried out Cursor yet, it looks impressive but I've heard it has some teething issues to work out. For my use case I'd end up using it very similar to how I use Continue.dev and probably pay for Claude API usage separately, which has been working out to about $12-$15 a month.
For me, AI is like a documentation/Googlefu accelerant. There are so many little things that I know exactly what I want to do, but can't remember the syntax or usage.
For example, writing IaC especially for AWS, I have to look up tons of stuff. Asking AI gets me answers and examples extremely fast. If I'm learning the IaC for a new service I'll look over the AWS docs, but if I just need a quick answer/refresher, AI is much faster than going and looking it up.
I find that for AWS IaC specifically, with a high pace of releases and a ton of versions dating back more than a decade, the AI answers are a great springboard but require a bit of care to avoid mixing APIs.
Contrarian take: I feel that copilot rewards me for writing patterns that it can then use to write an entire function given a method signature.
The more you lean into functional patterns: design some monads, don’t do I/O except at the boundaries, use fluent programming, then it’s highly effective.
This is all in Java, for what it’s worth. Though, I’ll admit, I’m 3.5y into Java, and rely heavily on Java 8+ features. Also, heavy generic usage in my library code gives a lot of leash to the LLM to consistently make the right choice.
I don’t see these gains as much when using quicker/sloppier designs.
Would love to hear more from true FP users (Haskell, OCaml, F#, Scala).
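Not the parent, but for anyone wondering what "I/O only at the boundaries" buys an autocomplete model: the pure functions end up with signatures that fully describe them, which is exactly the kind of pattern an LLM is good at finishing. A rough Python rendering of the shape (the parent is describing Java, and all names below are made up):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Order:
        sku: str
        quantity: int
        unit_price_cents: int

    # Pure core: no I/O, no globals. Given the signature and one example,
    # a completion model has everything it needs to draft functions like these.
    def order_total_cents(order: Order) -> int:
        return order.quantity * order.unit_price_cents

    def apply_discount(total_cents: int, percent: int) -> int:
        return total_cents - (total_cents * percent) // 100

    # Imperative shell: the only place that touches the outside world.
    def main() -> None:
        orders = [Order("sku-1", 2, 1250), Order("sku-2", 1, 4999)]
        total = sum(order_total_cents(o) for o in orders)
        print(f"total after discount: {apply_discount(total, 10)} cents")

    if __name__ == "__main__":
        main()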
I used the Copilot trial. I found myself waiting to see what it would come up with, analyzing it, and most often time throwing it away for my own implementation. I quickly realized how much of a waste of time it was. I did find use for it in writing unit tests and especially table-driven testing boilerplate but that's not enough to maintain a paid subscription.
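(For anyone who hasn't run into the term, "table-driven" test boilerplate is the kind of thing below, and it is indeed the part an assistant can bang out quickly; a minimal pytest sketch with a made-up slugify function:)

    import re
    import pytest

    def slugify(text):
        # The unit under test: lowercase, collapse runs of non-alphanumerics to "-".
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

    # The "table": each row is one case, so adding coverage means adding a row.
    @pytest.mark.parametrize(
        "raw, expected",
        [
            ("Hello World", "hello-world"),
            ("  spaces  everywhere  ", "spaces-everywhere"),
            ("Already-slugged", "already-slugged"),
            ("", ""),
        ],
    )
    def test_slugify(raw, expected):
        assert slugify(raw) == expected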
I think my experience mirrors your own. We have access at my job but I’ve turned it off recently as it was becoming too noisy for my focus.
I found the tool to be extremely valuable when working in unfamiliar languages, or when doing rote tasks (where it was easy for me to identify if the generated code was up to snuff or not).
Where I think it falters for me is when I have a very clear idea of what I want to do, and its _similar_ to a bog standard implementation, but I’m doing something a bit more novel. This tends to happen in “reduce”s or other more nebulous procedures.
As I’m a platform engineer though, I’m in a lot of different spaces: Bash, Python, browser, vanilla JS, TS, Node, GitHub actions, Jenkins Java workflows, Docker, and probably a few more. It gives my brain a break while I’m context switching and lets me warm up a bit when I move from area to area.
> (where it was easy for me to identify if the generated code was up to snuff or not).
I think you have nailed it with this comment. I find copilot very useful for boilerplate - stuff that I can quickly validate.
For stuff that is even slightly complicated, like simple if-then-else, I have wasted hours tracking down a subtle bug introduced by copilot (and me not checking it properly)
For hard stuff it is faster and more reliable for me to write the code than to validate copilots code.
The fact that Copilot hallucinates methods/variables/classes that do not exist, in compiled languages where it could know they do not exist, is just unbelievable to me.
It really feels like the people building the product do not care about the UX.
> So, contrary to this report, I think that if experienced developers use AI well, they could benefit MORE than inexperienced developers.
A psychology professor I know says this holds in general. For any new tool, who will be able to get the most benefit out of it? Someone who already has a lot of skill, or someone with less skill? With less skill, there is even a chance that the tool has a negative effect.
I only use Copilot and Claude to do all the boilerplate and honestly just the mechanical part of writing code. But I don't use it to come up with solutions. I'll do my thing understanding the problem, figuring out a solution, etc., and once I've done everything to ensure I know what needs to be written, I use AI to do most of that. It saves a hell of a lot of time and typing.
Yeah, Copilot is meh. Aider-chat for things with GPT-4 earlier this year was a huge step up.
But recently using Claude Sonnet + Haiku through OpenRouter also with aider, and it is like a new dimension of programming.
Working on new projects in Rust and a separate SPA frontend, it just ... implements whatever you ask like magic. Gets it about 90-95% right at the first prompt. Since I am pretty new to Rust, there are a lot of idiomatic things to learn, and lots of std convenience functions I don't yet know about, but the AI does. Figuring out the best prompt and context for it to be effective is now the biggest task.
It will be crazy to see where things go over the next few years... do all junior programmers just disappear? Do all programmers become prompt engineers first?
I think everything you said was wrong. Cursor is amazing now with the large context windows it's capable of handling with Claude, especially in the hands of an expert programmer.
A junior writing simple code is the exact recipe for disaster when it comes to these tools.
Isn't that the opposite? The more experienced you are the higher the gains since you are able to see what it outputs and immediately tell if it is what you expected. You also know how to put on the best input for it to auto complete rest of it while you are already planning next steps in your head. I feel like a superhuman with it, honestly.
One thing I've been doing more of lately with Copilot is using prompts directly in a // comment, though I distinguish this from writing a detailed comment doc about a function and then letting Copilot write the function.
There's "inline prompting" and "function prompting".
I noticed that AI is a bit like having a junior dev in a time capsule. It won't solve your problem, but it can Google, find stuff, and write the simple stuff, all of which you'd otherwise be forced to do yourself. And it does it in minutes rather than weeks or months.
Just for clarity - are you saying that going back and forth with ChatGPT is more useful than Co-pilot? The reason I ask is I have both and 95% of the benefit is ChatGPT.
I recently switched to Cursor, and am in the process of wrangling an inherited codebase that had effectively no tests. Cursor has saved me _hours_. It's generally terrible at any actual code refactoring, but it has saved me a great deal of typing while adding the missing test coverage.
Cursor has that plus whatever files you want to specifically add. Or it has a mode where you can feed it the entire project and it searches to decide which files to add
I wish I worked at a place where it’d be enough for me to “understand the problem space” as I pull down seven figures. But those bastards also want me to code, and Copilot at least helps with the boilerplate
But despite the theory/wish that "if experienced developers use AI well", at the present, inexperienced developers are benefitting more, which is what the study found.
I wonder if the study includes the technical debt that more experienced developers had to tackle after the less experienced devs have contributed their AI-driven efforts. Because my personal experience has involved a lot of that in one of the companies listed in the study.
Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
I am curious about this also. I have now seen multiple PRs that I had to review where a method was clearly completely modified by AI for no good reason, and when we asked why something was changed we just got silence. Not exaggerating, literal silence, and then an attempt to ignore the question and explain the thing we had first asked them to do. They clearly had no idea what was actually in the PR.
All of this came from asking for a minor change (talking maybe 5 lines of code) to be made and tested. So now not only are we dealing with new debt, we are dealing with code that no one can explain why it was completely changed (and some of the changes were change for the sake of change), and those of us who maintain this code are now looking at completely foreign code.
I keep seeing this with people who are using these tools and are not higher-level engineers. We finally got to the point of denying these PRs and telling them to go back and do it again, losing any of the time that was theoretically gained from doing it that way in the first place.
Not saying these tools don't have a place. But people are using it without understanding what it is putting out and not understanding the long term effects it will have on a code base.
> Not saying these tools don't have a place. But people are using it without understanding what it is putting out and not understanding the long term effects it will have on a code base.
It is worse than that. We're all maintaining in our heads the mental sand castle that is the system the code base represents. The abuse of the autocoder erodes that sand castle because the intentions of the changes, which are crucial for mentally updating the sand castle, are not communicated (because they are unknowable). It's the same thing with poor commit messages, or poor documentation around requirements/business processes. With enough erosion, plus expected turnover in staff, the sand castle is actually gone.
Easy: exclude developers who try it. Learning--be it a new codebase, a new programming language, a new database--takes time. You're not going to be as productive until you learn how. That's fine! Cheating on your homework with an LLM should not be something we celebrate, though, because the learner will never become productive that way, and they won't understand the code they're submitting for review.
The truth is that the tools are actually quite good already. If you know what you are doing they will 10-20x your productivity.
Ultimately, not adopting them will relegate you to the same fate as assembly programmers. Sure, there are places for it, but you won't be able to get nearly as much functionality done in the same amount of time, and there won't be as much demand for it.
Do you agree that the brain-memory activity of writing code and reading someone else's code is totally different?
The sand castle analogy is still valid here, because once you have 10x productivity, or worse 20x, there is no way you can understand things as deeply as if you had written them from scratch. Without spending a considerable amount of time, and bringing productivity back down, the understanding is not the same.
If no one is responsible because it's crap software and you won't be around long enough to bear responsibility… it's OK, I guess?
if you are seeing 900% productivity gains, why did these controlled experiments only find 28%, mostly among programmers who don't know what they're doing? and only 8–10% among programmers who did? do you have any hypotheses?
i suspect you are seeing 900% productivity gains on certain narrow tasks (like greenfield prototype-quality code using apis you aren't familiar with) and incorrectly extrapolating to programming as a whole
I think you’re probably right, though I fear what it will mean for software quality. The transition from assembly to high level languages was about making it easier to both write and to understand code. AI really just accelerates writing with no advancement in legibility.
When using code assist, I've occasionally found some perplexing changes to my code I didn't remember making (and wouldn't have made). Can be pretty frustrating.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.
Thank you, this says what I have been struggling to describe.
The day I lost part of my soul was when I asked a dev if I could give them feedback on a DB schema, they said yes, and then cut me off a few minutes in with, “yeah, I don’t really care [about X].” You don’t care? I’m telling you as the SME for this exactly what can be improved, how to do it, and why you should do so, but you don’t care. Cool.
Cloud was a mistake; it’s inculcated people with the idea that chasing efficiency and optimization doesn’t matter, because you can always scale up or out. I’m not even talking about doing micro-benchmarks (though you should…), I’m talking about dead-simple stuff like “maybe use this data structure instead of that one.”
In a similar vein, some days I feel like a human link generator into e.g. the Postgres or Kafka documentation. When docs are that clear, refined, and just damn good, but it seems like nobody is willing to actually read them closely enough to "get it", it's just a very depressing and demotivating experience. If I could never again have to explain what a transaction isolation level is, or why calling Kafka a "queue" makes no sense at all, I'd probably live an extra decade.
At the root of it, there's a profound arrogance in putting someone else in a position where they are compelled to tell you you're wrong[1]. Curious, careful people don't do this very often because they are aware of the limits of their knowledge and when they don't know something they go find it out. Unfortunately this is surprisingly rare.
[1] to be clear, I'm speaking here as someone who has been guilty of this before, now regrets it, and hopes to never do it again.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.
They are/will be the management's darling because they too are all about delivering without any interest in technology either.
Well-designed technology isn't seen as a foundation anymore; it is merely a tool to keep the machine running. If parts of the machine are being damaged by the lack of judgement in the process, that shouldn't come in the way of this year's bonus; it'll be something to worry about in the next financial year. Nobody knows what's going to happen in the long term anyway, so make hay while the sun shines.
It's been a while since my undergrad (>10 yrs), but many of my peers were majoring in CS or EE/CE because of the money; at the time I thought that was a bit depressing as well.
With a few more years under my belt I realized there's nothing wrong with doing good work and providing yourself/your family a decent living. Not everyone needs passion for their field to become among the best or become a "10x"er to contribute. We all have different passions, but we all need to pay the bills.
Yeah, I think that the quality of work (skill + conscientiousness), and the motivation for doing the work, are two separate things.
Off-the-cuff, three groups:
1. There are people who are motivated by having a solid income, yet they take the professionalism seriously, and do skilled rock-solid work, 9-5. I'd be happy to work with these people.
2. There are other people who are motivated by having a solid or more-than-solid income, and (regardless of skill level), it's non-stop sprint performance art, gaming promotion metrics, resume-driven development, practicing Leetcode, and hopping at the next opportunity regardless of where that leaves the project and team.
3. Then there's those weirdos who are motivated by something about the work itself, and would be doing it even if it didn't pay well. Over the years, these people spend so much time and energy on the something, that they tend to develop more and stronger skills than the others. I'd be happy to work with these people, so long as they can also be professional (including rolling up sleeves for the non-fun parts), or amenable to learning to be professional.
Half-joke: The potential of group #3 is threatening to sharp-elbowed group #2, so group #2 neutralizes them via frat gatekeeping tactics (yeah-but-what-school-did-you-go-to snobbery, Leetcode shibboleth for nothing but whether you rehearsed Leetcode rituals, cliques, culture fit, etc.).
Startups might do well to have a mix of #3 and #1, and to stay far away from #2. But startups -- especially the last decade-plus of too many growth investment scams -- are often run by affluent people who grew up being taught #2 skills (for how you game your way into prestigious school, aggressively self-interested networking and promoting yourself, etc.).
#3 is the old school hacker, "a high-powered mutant of some kind never even considered for mass production. Too weird to live, and too rare to die" :-)
Blend of #1 & #3 here. Without the pay, I don't think I'd muster the patience for dealing with all the non-programming BS that comes with the job. So I'd have found something else with decent pay, and preferably not an office job. Sometimes I wish I'd taken that other path. I have responsibilities outside work, so I'm rarely putting in more than 40 hours. Prior to family commitments, I had side projects and did enough coding outside work. I still miss working on hobby video game projects. But I do take the professionalism seriously and will do the dirty work that has to be done, even if means cozying up to group #2 to make things happen for the sake of the project.
The hardest part of any job I’ve had is doing the not so fun parts
(meetings, keeping up with emails, solidly finishing work before moving on to something new)
As I progressed, I've learned I am more valuable for the company working on things I am interested in. Delegate the boring stuff to people that don't care. If it is critical to get it done right, do it yourself.
There’s certainly nothing wrong with enjoying the high pay, no – I definitely do. But yeah, it's upsetting to find out how few people care. Even more so when they double down and say that you shouldn't care either, because it's been abstracted away, blah blah blah. Who do you think is going to continue creating these abstractions for you?
I get the pragmatism argument, but I would like to think certain professions should hold themselves to a higher standard. Doctors, lawyers, and engineers have a duty to society IMO that runs counter to a “just mail it in to cash a paycheck” mentality. I guess it comes down to whether you consider software developers to be that same kind of engineer. Certainly I don’t want safety critical software engineers to have that cavalier attitude (although I’ve seen it).
...Someone else who thinks of actual web applications as an abstraction.
In the olden days, we used to throw it over to "Ops" and say, "your problem now."
And Junior developers have always been overwhelmed with the details and under pressure to deliver enough to keep their job. None of this is new! I'm a graybeard now, but I remember seniors having the same complaints back then. "Kids these days" never gets old.
I get what you’re saying, but at the same time I feel like I encounter the real world results of this erosion constantly. While we have more software than ever, it all just kind of feels janky these days. I encounter errors in places I never had before, doing simple things. The other day I was doing a simple copy and paste operation (it was some basic csv formatted text from vs code to excel iirc) and I encountered a Windows (not excel or vs code) error prompt that my clipboard data had been lost in the time it took me to Alt+Tab and Ctrl+v, something I’ve been doing ~daily for 3 decades without any issues.
I’m more of a solo full stack dev and don’t really have first hand experience building software at scale and the process it takes to manage a codebase the size of the Windows OS, but these are the kinds of issues I see regularly these days and wouldn’t in the past. I also use macOS daily for almost as long and the Apple software has really tanked in terms of quality, I hit bugs and unexpected errors regularly. I generally don’t use their software (Safari, Mail, etc) when I can avoid it. Also have to admit lack of features is a big issue for me on their software.
>Cloud was a mistake; it’s inculcated people with the idea that chasing efficiency and optimization doesn’t matter, because you can always scale up or out.
Similarly, Docker is an amazing technology, yet it enabled the dependency towers of Babel that we have today. It enabled developers who don't care about cleaning up their dependencies.
Kubernetes is an amazing technology, yet it enabled developers who don't care to ship applications that constantly crash; but who cares, Kubernetes will automatically restart everything.
Cloud and now AI are similar enabler technologies. They could be used for good, but there are too many people that just don't care.
The fine art of industry is building more and more elaborate complicated systems atop things someone deeply cares about to be used by those who don't have to deeply care.
How many developers do we imagine even know the difference between SIMD and SISD operators, much less whether their software stack knows how to take advantage of SIMD? How many developers do we imagine even know how RAM chips store bits or how a semiconductor works?
We're just watching the bar of "Don't need to care because a reliable system exists" move through something we know and care about in our lifetimes. Progress is great to watch in action.
> How many developers do we imagine even know the difference between SIMD and SISD operators, much less whether their software stack knows how to take advantage of SIMD? How many developers do we imagine even know how RAM chips store bits or how a semiconductor works?
Hopefully some of those who did a computer science degree?
Disclaimer: I work at a company that sells coding AI (among many other things).
We use it internally and the technical debt is an enormous threat that IMO hasn't been properly gauged.
It's very very useful to carpet bomb code with APIs and patterns you're not familiar with, but it also leads to insane amounts of code duplication and unwieldy boilerplate if you're not careful, because:
1. One of the two big biases of the models comes from the training data being StackOverflow-type content: isolated examples that don't take context and constraints into account.
2. The other is the existing codebase, and the model tends to copy/repeat things instead of suggesting that you refactor.
The first is mitigated by, well, doing your job and reviewing/editing what the LLM spat out.
The second can only be mitigated once diffs/commit history become part of the training data, and that's a much harder dataset to handle and tag: some changes are good (refactorings) but others might not be (bugs that get corrected in subsequent commits), and there's no clear way to distinguish them, as commit messages are effectively lies (nobody ever writes "bug introduced").
Not only that, but merges/rebases/squashes alter, remove, or add spurious meanings to the history, making everything blurrier.
I consider myself very fortunate to have lived long enough to be reading a thread where the subject is the quality of the code generated by software. Decades of keeping that lollypop ready to be given, and now look where we are!
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
Bingo, this, so much this. Every dev I know who loves the AI stuff was a dev I had very little technical respect for pre-AI. They got some stuff done, but there was no craft or quality to it.
For what it's worth, a (former) team mate who was one of the more enthusiastic adopters of gen AI at the time was in fact a pretty good developer who knew his stuff and wrote good code. He was also big on delivering and productivity.
In terms of directly generating technical content, I think he mostly used gen AI for more mechanical stuff such as drafting data schemas or class structures, or for converting this or that to JSON, and perhaps not so much for generating actual program code. Maybe there's a difference to someone who likes to have lots of program logic generated.
I have certainly used it for different mechanical things. I have copilot, pay for gpt4o etc.
I do think there is a difference between a skilled engineer using it for the mechanical things, and an engineer that OFFLOADS thinking/designing to it.
There's nuance everywhere, but my original comment was definitely about the people who attempt to lean on it very hard for their core work.
If it were not for all the things in your profile, I would aver that the only devs I know that think otherwise of their coding abilities were students at the time.
Hm… I think it's fair to say that as a learning tool, when you are not familiar with the domain, coding assistants are extremely valuable for everyone.
I wrote a Kotlin IntelliJ IDEA plugin in a day; I'd never used Kotlin before, and the JetBrains UI framework is a dog's breakfast of obscure edge cases.
I had no skills in this area, so I could happily lean into the assistance provided to get the job done. And it got done.
…but, I don’t use coding assistants day to day in languages I’m very familiar with: because they’re flat out bad compared to what I can do by hand, myself.
Even using Python, generated code is often subtly wrong and it takes more time to make sure it is correct than to do it by hand.
…now, I would assume that a professional Kotlin developer would look at my plugin and go: that's a heap of garbage, you won't be able to upgrade that when a new version comes out (turns out, they're right).
So, despite being a (I hope) competent programmer I have three observations:
1) the code I built worked, but was an unmaintainable mess.
2) it only took a day, so it doesn’t matter if I throw it away and build the next one from scratch.
3) There are extremely limited domains where that’s true, and I personally find myself leaning away from LLM anything where maintenance is a long term goal.
So, the point here is not that developers are good/bad:
It’s the LLM generated code is bad.
It is bad.
It is the sort of quick, rubbish prototyping code that often ends up in production…
…and then gets an expensive rewrite later, if it does the job.
The point is that if you’re in the latter phase of working on a project that is not throw away…
You know the saying.
Betty had a bit of bitter butter, so she mixed the bitter butter with the better butter.
.. the exact same content for a screensaver with Todd Rundgren in 1987 on the then-new color Apple Macintosh II in Sausalito, California. A different screensaver called "Flow Fazer" was more popular and sold many copies. The rival at the time was "After Dark" .. whose founder had a PhD in physics from UC Berkeley but also turned out to be independently wealthy, and then one of the wealthiest men in the Bay Area after the dot-com boom.
I am not necessarily arguing against GenAI. I am sure it will have somewhat similar effects to how the explosion in popularity of garbage collected languages et al had on software back in the 90s.
More stuff will get done, the barrier of entry will be lower etc.
The craft of programming took a significant quality/care hit when it transitioned from "only people who care enough to learn the ins and outs of memory management can feasibly do this" to "now anyone with a technical brain and a business use case can do it". Which makes sense, the code was no longer the point.
The C++ devs rightly felt superior to the new java devs in the narrow niche of "ability to craft code." But that feeling doesn't move the needle business wise in the vast majority of circumstances. Which is always the schism between large technology leaps.
Basically, the argument that "it's worse" is not WRONG. It just doesn't matter as much now, the same as it did not really matter in the mid 90s, compared to the ability to "just get something that kinda works."
I mean... in a way yes. The status of being someone who cares about the craft of programming.
In the scheme of things however, that status hardly matters compared to the "ability to get something shipped quickly" which is what the vast majority of people are paid to do.
So while I might judge those people for not meeting my personal standards or bar, in many cases that does not actually matter. They got something out there, and that's all that matters.
SWE has a huge draw because frankly it's not that hard to learn programming, and the bar to clear in order to land a $100-120k work-from-home salary is pretty low. I know more than a few people who career hopped into software engineering after a lackluster non-tech career (that they paid through the nose to get a degree in, but were still making $70k after 5 years). By and large these people seem to just not be "into it", and like you said are more about delivering than actually making good products/services.
However, it does look like LLMs are racing to make these junior devs unnecessary.
> However, it does look like LLMs are racing to make these junior devs unnecessary.
The main utility of "junior devs" (regardless of age) is that they can serve as an interface to non-technical business "users". Give them the right tools, and their value will be similar to good business controllers or similar in the org.
A salary of $100-$150k is really low for someone who is really a competent developer. It's kept down by those "junior devs" (of all ages) that apply for the same jobs.
Both kinds of developers will be required until companies use AI in most of those roles, including the controllers, the developers and the business side.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.
I found this too. But I also found the opposite, including here on HN; people who are interested in technology have almost an aversion against using AI. I personally love tech and I would, and do, write software for fun, but even that is objectively more fun for me with AI. It makes me far more productive (very much more than what the article states) and, more importantly, it removes the procrastination: whenever I am stuck or procrastinating about getting started, I start talking with Aider, and before I know it another task is done that I probably wouldn't have gotten done that day otherwise.
That way I now launch open and closed source projects every other week, while before that would take months to years. And the cost of having this team of fast, experienced devs sitting with me is at most a few $ per day.
> people who are interested in technology have almost an aversion against using AI
Personally, I don't use LLMs. But I don't mind people using them as interactive search engines or for code/text manipulation, as long as they're aware of the hallucination risks and take care with what they're copying into the project. My reason is mostly that I'm a journey guy, not a destination guy. I love reading books and manuals, as they give me an extensive knowledge map. Using LLMs feels like taking guidance from someone who has never ventured 1 km outside their village but has heard descriptions from passersby. Too much vigilance required for the occasional good stuff.
And the truth is, there are a lot of great books and manuals out there. While they teach you how to do stuff, they often also teach you why you should not do it. I strongly doubt Copilot imparts architectural and technical reminders alongside the code.
For my never-finishing side projects, so am I; I enjoy my weekends tinkering on the 'final database' system I have been building in CL for over a decade and will probably never really 'finish'. But to make money, I launch things fast and promote them; AI makes that far easier.
Especially for parts like frontend that I despise; I find zero pleasure in working with CSS magic that even seasoned frontenders have to try/fail in a loop to create. I let Sonnet just struggle until it's good enough instead of having to do that annoying chore myself; then I ask Aider to attach it to the backend, and done.
Yeah. There is a psychological benefit to using AI that I find very beneficial. A lot of tasks that I would have avoided or wasted time doing task avoidance on suddenly become tractable. I think Simon Willison said something similar.
Are you just working on personal projects or in a shared codebase with tens or hundreds of other devs? If the latter, how do you keep your AI generated content from turning the whole thing into an incomprehensible superfund site of tech debt? I've gotten a lot of mileage in my career so far by paying attention when something felt tedious or mundane, because that's a signal you need some combination of refactoring, tooling, or automation. If instead you just lean on an LLM to brute-force your way through, sure, that accomplishes the short term goal of shipping your agile sprint deliverable or whatever, but what of the long term cost?
> Are you just working on personal projects or in a shared codebase with tens or hundreds of other devs?
Like - I presume almost everyone - somewhere in the middle?
That was a helluva dichotomy to offer me...
> how do you keep your AI generated content from turning the whole thing into an incomprehensible superfund site of tech debt?
By reading it, thinking about it and testing it?
Did I somehow give the impression I'm cutting and pasting huge globs of code straight from ChatGPT into a git commit?
There's a weird gulf of incomprehension between people that use AI to help them code and those that don't. I'm sure you're as confused by this exchange as I am.
Working in a codebase with 10s of other developers seems... pretty normal? Not universal sure, but that has to be a decent percent of professional software work. Once you get to even a half dozen people working in a code base I think consistency and clarity take on a significant role.
In my own experience I've worked on repos with <10 other devs where I spent far more effort on consistency and maintainability than on getting the thing to work.
I'm not sure where I said that but I certainly didn't intend to give that impression.
I use AI either as an unblocker to get me started, or to write a handful of lines that are too complex to do from memory but not so complex that I can't immediately grok them.
I find both types of usage very satisfying and helpful.
It does generate swaths of code; however, you have to review and test it. But, depending on what you are working in/with, you would have to write this yourself anyway; for instance, Go always has so much plumbing, and AI simply removes all those keystrokes. And very rigorously: it adds all the err and defer blocks in, which can number in the hundreds and make up a large % of one Go file. What is the point of writing that yourself? It does it very fast as well; if you write the main logic without any of that stuff and ask Sonnet to make it 'good code', you write a few lines and get hundreds back.
But it is far more useful on verbose 'team written' corporate stuff than on more reuse-intensive tech: in CL or Haskell, the community is far more DRY than in Go or JS/TS; you tend to create and reuse many things, and much of your end result is (basically) a DSL. Current AI is not very good at that in my experience; it will recreate or hallucinate functions all over the place (when you press it to reuse previously created things, if there are too many of them, even though they fit in the context window). But many people have the same issue; they don't know, cannot search, or forget, and will just redo things many times over. AI makes that far easier (as in, no work at all, often), so that's the new reality.
I'm not the same person but I share their perspective on this. I do it by treating AI written code the exact same way I treat mine. Extremely suspect and probably terrible on the first iteration, so I heavily test and iterate it until it's great code. If it's not up to my standards, I don't ever put it in a merge request, whether I handwrote it myself or had an AI write it for me.
> I wonder if the study includes the technical debt that more experienced developers had to tackle after the less experienced devs have contributed their AI-driven efforts.
It does not
You also may find this post from the other day more illuminating[0], as I believe the actual result strongly hints at what you're guessing. The study is high schoolers doing math. While GPT only has an 8% error rate for the final answer, it gets the steps wrong half the time. And with coding (like math), the steps are the important bits.
But I think people evaluate very poorly when metrics are ill-defined but some metric exists. They overinflate its value since it's concrete. Completing a ticket doesn't mean you made progress. Introducing technical debt would mean taking a step back: a step forward in a very specific direction, but away from the actual end goal. You're just outsourcing work to a future person, and I think we like to pretend this doesn't exist because it's hard to measure.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
Is this a bad thing? Maybe I'm misunderstanding it, but even when I'm working on my own projects, I'm usually trying to solve a problem, and the technology is a means to an end to solving that problem (delivering). I care that it works, and is maintainable, I don't care that much about the technology.
No code solutions are often superior.
Around 15 years ago we were shipping terabyte hard disks as it was faster than the internet (until one got stuck in customs)
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
For them programming is a means to an end, and I think that is fine, in a way. But you cannot just ask an AI to write you a TikTok clone and expect to get the finished product. Writing software is an iterative process, and the LLMs currently used are not good enough for that, because they need not only to answer questions but, at the very minimum, to start asking them: "why do you want to do that?", "do you prefer this or that?", etc., so that they can actually extract all the specification details that the user blissfully didn't even know they needed before producing an appropriate output. (It's not too different from how some independent developers have to handle their clients, is it?) Probably we will get there, but not too soon.
I also doubt that current tools can keep a project architecturally sound long-term, but that is just a hunch.
I admit though that I may be biased because I don't like much tools like copilot: when I write software, I have in my mind a model of the software that I am writing/I want to write, the AI has another model "in mind" and I need to spend mental energy understanding what it is "thinking". Even if 99/100 it is what I wanted, the remaining 1% is enough to hold me back from trusting it. Maybe I am using it the wrong way, who knows.
The AI tool that would work for me is a "voice-controlled, AI-powered pair programmer": I write my code, then from time to time I ask it questions about how to do something, and I get either a contextual answer based on the code I am working on, or the actual generated code if I wish. Are there already plugins working that way for VS Code/IDEA/etc.?
I've been playing with Cursor (albeit with a very small toy codebase) and it does seem like it could do some of what you said - it has a number of features, not all of which necessarily generate code. You can ask questions about the code, about documentation, and other things, and it can optionally suggest code that you can either accept, accept parts of, or decline. It's more of a fork of vscode than a plugin right now though.
It is very nice in that it gives you a handy diffing tool before you accept, and it very much feels like it puts me in control.
> had to tackle after the less experienced devs have contributed their AI-driven efforts.
So, like before AI then? I haven't seen AI deliver illogical nonsense that I couldn't even decipher like I have seen some outsourcing companies deliver.
I have. If you're doing more niche stuff it doesn't have enough data and hallucinates. The worst is when it spits out two screens of code instead of saying 'this cannot be done at the level you want'.
> that I couldn't even decipher
That's unrelated to code quality. Especially with C++ which has become as write only as perl.
But that is a HN bubble thing: I work and have worked with seniors with 10-15 years under their belt who have no logical bone in their body. The worst (and an AI does not do this) is when there is a somewhat 'busy' function with an if or switch statement and, over time, to add features or fix bugs, more ifs were added. Now after 5+ years, this function is 15,000 lines and is somewhat of a trained-neural-network-adjacent thing: hundreds of nested ifs, pages long, that cannot be read and cannot be followed even if you have a brain. This is made by senior staff of, usually, outsourcing companies, and I have seen it very many times over the past 40 years. Not entry level, and not small companies either. I know a gov tax system, partly maintained by a very large and well known outsourcing company, which has numerous of these puppies in production that no one dares to touch.
AI doesn't do stuff like that because it cannot, which, to me, is a good thing. When it gets better, it might start to; I don't know.
People here live in a bubble where they think the world is full of people who read 'beautiful code', write tests, use git or something instead of zip$date, and know how De Morgan works; by far, most don't, not juniors, not seniors.
"Back to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. "
From my experience, most of it is quickly caught in code review. And after a while it occurs less and less, granted that the junior developer puts in the effort to learn why their PRs aren't getting approved.
So, pretty similar to how it was before. Except that motivated junior developers will improve incredibly fast. But that's also kind of always been the case in software development these past two decades?
Code quality is the hardest thing to measure. Seems like they were measuring commits, pull-requests, builds, and build success rate. This sort of gets at that, but is probably inadequate.
The few attempts I've made at using genAI to make large-scale changes to code have been failures, and left me in the dark about the changes that were made in ways that were not helpful. I needed suggestions to be in much smaller, paragraph-sized chunks. Right now I limit myself to the genAI line-completion suggestions in PyCharm. It very often guesses my intentions and so is actually helpful, particularly when laboriously typing out lots of long literals, e.g. keys in a dictionary.
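As a small illustration of that last point (the option names are invented): after typing the first couple of entries of a literal like this, line completion usually guesses the remaining keys one line at a time, which is exactly the tedium it removes.

    # Typical long literal where line completion earns its keep.
    DEFAULT_EXPORT_OPTIONS = {
        "include_headers": True,
        "delimiter": ",",
        "quote_char": '"',
        "encoding": "utf-8",
        "date_format": "%Y-%m-%d",
        "null_value": "",
        "compress": False,
    }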
I don't remember who said it, but "AI generated code turns every developer into a legacy code maintainer". It's pithy and a bit of an exaggeration, but there's a grain of truth in there that resonates with me.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
You get what you measure. Nobody measures software quality.
Maybe not at your workplaces, but at mine, we measured bugs, change failure rate, uptime, "critical functionality" uptime, regressions, performance, CSAT, etc. in addition to qualitative research on quality in-team and with customers
I don't think it's that clear cut. I personally think the AI often delivers a better solution than the one I had in mind. It always contains a lot more safe guards against edge cases and other "boring" stuff that the AI has no problem adding but others find tedious.
If you're building a code base where AI is delivering on the details of it, it's generally a bad thing if the code provided by the AI adds safeguards WITHIN your code base.
Those kinds of safeguards should instead be part of the framework you're using. If you need to prevent SQL injection, you need to make sure that all access to the SQL type database pass through a layer that prevents that. If you are worried about the security of your point of access (like an API facing the public), you need to apply safeguards as close to the point of entry as possible, and so on.
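To make the "one layer" point concrete, here's a minimal sketch (Python's sqlite3 for brevity; the table and repository names are made up): every query goes through a single data-access choke point that only does parameter binding, so individual call sites don't each carry their own injection safeguards.

    import sqlite3

    class UserRepository:
        """Single choke point for SQL: user input is only ever bound as parameters."""

        def __init__(self, conn: sqlite3.Connection) -> None:
            self._conn = conn

        def find_by_email(self, email: str) -> list:
            # The ? placeholder means input is never interpolated into the SQL text.
            cur = self._conn.execute(
                "SELECT id, email FROM users WHERE email = ?", (email,)
            )
            return cur.fetchall()

    # Usage sketch with an in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
    repo = UserRepository(conn)
    print(repo.find_by_email("a@example.com"))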
I'm a big believer in AI generated code (over a long horizon), but I'm not sure the edge case robustness is the main selling point.
Sounds like we're talking about different kind of safeguards. I mean stuff like a character in a game ending up in a broken case due to something that is theoretically possible but very unlikely or where the consequence is not worth the effort. An AI has no problem taking those into account and write tedious safeguards, while I skip it.
This. Even without AI, we have inexperienced developers rolling out something that "just works" without thinking about many of the scaling/availability issues. Then you have to spend 10x the time fixing those issues.
>Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
You are not a fucking priest in the temple of engineering; go to the fucking CS department at the local uni and preach it there.
You are a worker at a company with customers, which pays you a salary from customers' money.
> I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.
If I don't deliver my startup burns in a year.
In my previous role if I didn't deliver the people who were my reports did not get their bonuses.
The incentives are very clear, and have always been clear - deliver.
Successful companies aren't built in a sprint. I doubt there has ever been a successful startup that didn't have at least some competent people thinking a number of steps ahead. Piling up tech debt to hit some short-term arbitrary goals is not a good plan for real success.
Here's a real example of delivering something now without worrying about being the best engineer I can be. I have 2 CSRs. They are swamped with work, and we're weeks away from bringing another CSR on board. I found a couple of time-consuming tasks that are easy to automate and built those out separately as one-off jobs that work well enough. Instantly it's a solid time gain and stress reducer for the CSRs.
Are my one-off automation tasks a long term solution? No. Do I care? Not at the moment, and my ego can take a hit for the time being.
I hear this, but I don't think this is a new issue that AI brought; it simply magnified it. That's a company culture issue.
It reminds me of a talk Raymond Hettinger put on a while ago about rearranging the flowers in the garden. There is a tendency from new developers to rearrange for no good reason, AI makes it even easier now. This comes down to a culture problem to me, AI is simply the tool but the driver is the human (at least for now).
It's probably worth going a bit deeper into the paper before drawing conclusions. And I think the study could really do a better job of summarizing its results.
The abstract and the conclusion only give a single percentage figure (26.08% increase in productivity, which probably has too many decimals) as the result. If you go a bit further, they give figures of 27 to 39 percent for juniors and 8 to 13 percent for seniors.
But if you go deeper, it looks like there's a lot of variation not only by seniority, but also by the company. Beside pull requests, results on the other outcome measures (commits, builds, build success rate) don't seem to be statistically significant at Microsoft, from what I can tell. And the PR increases only seem to be statistically significant for Microsoft, not for Accenture. And even then possibly only for juniors, but I'm not sure I can quite figure out if I've understood that correctly.
Of course the abstract and the conclusion have to summarize. But it really looks like the outcomes vary so much depending on the variables that I'm not sure it makes sense to give a single overall number even as a summary. Especially since statistical significance seems a bit hit-and-miss.
To get a better picture of how this comes about:
Microsoft has a study for their own internal product use, and wants to show its efficacy.
The results aren’t as broadly successful as one would hope.
Accenture is the kind of company that cooperates and co-markets with large orgs like Microsoft. With ~300 devs in the pool they hardly move the population at all, and they cannot be assumed to be objective since they are building a marketing/consulting division around AI workflows.
The third anonymous company didn’t actually have a randomized controlled trial, so it is difficult to say how one should combine their results with the RCTs. Additionally, I am sure that more than one large technology company went through similar trials and were interested in knowing the efficacy of them. That is to say, we can assume other data exist than just those included in the results.
Why did they select these companies, from a larger sample set? Probably because Microsoft and Accenture are incentivized by adoption, and this third company was picked through p-hacking.
In particular, this statement in the abstract is a very bad sign:
> Though each separate experiment is noisy, combined across all three experiments
It is essentially an admission that individually, the companies don’t have statistically significant results, but when we combine these three (and probably only these three) populations we get significant results. This is not science.
The third company seems a bit weird to include in other ways as well. In raw numbers in table 1, there seem to be exactly zero effects from the use of CoPilot. Through the use of their regression model -- which introduces other predictors such as developer-fixed and week-fixed effects -- they somehow get an estimated effect of +54%(!) from CoPilot in the number of PRs. But the standard deviations are so far through the roof that the +54% is statistically insignificant within the population of 3000 devs.
Also, they explain the introduction of the week fixed effect as a means of controlling for holidays etc., but to me it sounds like it could also introduce a lot of unwarranted flexibility into the model. But this is a part where I don't understand their methodology well enough to tell whether that's a problem or not.
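For readers who haven't met the jargon: "developer fixed effects" and "week fixed effects" just mean adding a dummy variable per developer and per calendar week, so the Copilot coefficient is estimated from within-developer, within-week variation. Roughly this kind of model, sketched with invented column names in Python/statsmodels (not the paper's actual code):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel: one row per developer per week.
    df = pd.DataFrame({
        "developer_id":  ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
        "week":          ["w1", "w2", "w3"] * 3,
        "copilot":       [0, 1, 1, 0, 0, 1, 0, 1, 0],   # treatment indicator
        "pull_requests": [3, 5, 4, 2, 2, 4, 1, 3, 2],
    })

    # C(...) expands into one dummy per developer and per week (the fixed effects).
    model = smf.ols("pull_requests ~ copilot + C(developer_id) + C(week)", data=df).fit()
    print(model.summary())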
I generally err towards the benefit of the doubt when I don't fully understand or know something, which is why I focused more on the presentation of the results than on criticizing the study and its methodology in general. I'd have been okay with the summary saying "we got an increase of 27.3% for Microsoft and no statistically significant results for other participants".
The assumption is that they almost certainly did not have the sample size to justify two decimal places. If they want two decimals for aesthetics over properly honoring significant figures then it calls the scientific rigor into question.
But they say "26.08% increase (SE: 10.3%)", so they make it clear that there's a lot of uncertainty around that number.
They could have said "26% (rounded to 0 dp)" or something, but that conveys even less information about the amount of uncertainty than just saying what the standard error is.
They could still have gone for one decimal. Or possibly even none, considering the magnitude of the SE, but I get that they might not want to say "SE: 10%".
The second decimal point doesn't essentially add any information because the data can't provide information at that precision. It's noise but being included in the summary result makes it implicitly look like information. Which is exactly why including it seems a bit questionable.
That's not the major issue with the study, though, it's just one of the things that caught my eye originally.
Personally, my feeling is that some of the difference is accounted for by senior devs who are applying their experience in code review and testing to the generated code and are therefore spending more time pushing back asking for changes, rejecting bad generations and taking time to implement tests to ensure the new code or refactor works as expected. The junior devs are seeing more throughput because they are working on tasks that are easier for the LLM to do right or making the mistake of accepting the first draft because it LGTM.
There is skill involved in using generative code models and it’s the same skill you need for delegating work to others and integrating solutions from multiple authors into a cohesive system.
My hunch - it's just a hunch - is that LLM-assisted coding is detrimental to one's growth as a developer. I'm fairly certain it can only boost productivity to a certain level - one which may be tedium for more senior developers, but formative for juniors.
My experience is that the LLM isn't just used for "boilerplate" code, but rather called into action when a junior developer is faced with a fairly common task they've still not (fully) understood. The process of experimenting, learning and understanding is then largely replaced by the LLM, and the real skill becomes applying prompt tweaks until it looks like stuff works.
For myself it's an incredible tool for learning. I learn both broader and deeper using chat tools. If anything it gives me a great "sounding board" for exploring a subject and finding additional resources.
E.g. last night I set up my first Linux RAID. A task that isn't too hard, but following a tutorial or just "reading the docs" isn't particularly helpful, given that it takes a few different tools (mount, umount, fstab, blkid, mdadm, fdisk, lsblk, mkfs) and along the way things might not follow the exact steps from a guide. I asked dozens of questions about each tool and step, where previously I would have just copy-pasted and prayed.
Two nights ago I was similarly able to fully recover all my data from a failed ssd also using chatgpt to guide my learning along the way. It was really cool to tackle a completely new skill having a "guide" even if it's wrong 20% of the time, that's way better than the average on the open Internet.
For someone who loves learning, it feels like thousand league boots compared to just endlessly sifting through internet crap. Of course everything it says is suspect, just like everything else on the Internet, but boy it cuts out a lot of the hassle.
You've given two examples for "broad", but none for "deep". I've also used LLMs for setting up my homelab, and they were really helpful since I was basically at beginner level in most Linux admin topics (still am). But trying to e.g. set up automatic snapshot replication for my zfs pool had me go back to reading blog posts, as ChatGPT just couldn't provide a solution that worked for me.
I think one of the catch-22s of using an LLM as a fancy search index (which is the dev-assistant use case) is that the information it surfaces is hugely dependent on what words you use; it matches your energy. If you don't know the words you'll get very beginner-oriented content, and you can only get it to surface deeper knowledge if you know the shibboleths, which is annoying. One dumb trick that's been unreasonably useful is just copy-pasting barely-related source code, at the level you're trying to understand, and then asking an unrelated question: just yank that search vector way over to the region where you think the good information lives.
My approach is typically to follow some kind of a guide or tutorial and to look into the man pages or other documentation for each tool as I go, to understand what the guide is suggesting.
That's how I handled things e.g. when I needed to resize a partition and a filesystem in a LVM setup. Similarly to your RAID example, doing that required using a bunch of tools on multiple levels of storage abstraction: GPT partitions, LUKS tools, LVM physical and logical volumes, file system tools. I was familiar with some of those but didn't remember the incantations by heart, and for others I needed to learn new tools or concepts.
I think I use a similar approach in programming when I'm getting into something I'm not quite familiar with. Stack Overflow answers and tutorials help give the outline of a possible solution. But if I don't understand some of the details, such as what a particular function does, I google them, preferring to get the details either from official documentation or from otherwise credible-sounding accounts.
I was really hoping this study would be exploring that. Really, it's examining short-term productivity gains, ignoring long-term tech debt that can occur, and _completely_ ignoring effects on the growth of the software developers themselves.
I share your hunch, though I would go so far as to call it an informed, strong opinion. I think we're going to pay the price in this industry in a few years, where the pipeline of "clueful junior software developers" is gonna dry way up, replaced by a firehose of "AI-reliant junior software developers", and the distance between those two categories is a GULF. (And of course, it has a knock-on effect on the number of clueful intermediate software developers, and clueful senior software developers, etc...)
I think it really depends on the users. The same people who would just paste stackoverflow code until it seems to work and call it a day will abuse LLMs. However, those of us who like to know everything about the code we write will likely research anything an LLM spits out that we don't know about.
Well, at least that's how I use them. And to throw a counter to your hypothesis, I find that sometimes the LLM will use functions or library components that I didn't know of, which actually saves me a lot of time when learning a new language or toolkit. So for me, it actually accelerates learning rather than retarding it.
I think it can be abused like anything else (copy paste from stack overflow until it works).
But for folks who are going to be successful with or without it, it's a godsend in terms of being able to essentially ask stack overflow questions and get immediate non judgemental answers.
Maybe not correct all the time, but that was true with stack overflow as well. So as always, it comes back to the individual.
With the progress LLMs have been making in the last two years, is it actually a bad bet to not really want to get into them?
How many contemporary developers have no idea how to write machine code, when 50 years ago it was basically mandatory if you wanted to be able to write anything?
Are LLMs just going to become another abstraction crutch turned solid abstraction pillar?
Abstraction is beneficial and profitable up to a certain point, after which upkeep gets too hard or expensive, and knowledge dwindles into a competency crisis - for various reasons. I'm not saying we are at that point yet, but it feels like we're closing in on it (and not just in software development). 50 years ago isn't even 50 years ago anymore, if you catch my drift: In 1974, the real king of the hill was COBOL - a very straight-forward abstraction.
I'm seeing a lot of confusion and frustration from beginner programmers when it comes to abstraction, because a lot of abstractions in use today just incur other kinds of complexity. At a glance, React for example can seem deceptively easy, but in truth it requires understanding of a lot of advanced concepts. And sure, a little knowledge can go a long way in E.G. web development, but to really write robust, performant code you have to know a lot about the browser it runs in, not unlike how great programmers of yesteryear had entire 8-bit machines mapped out in their heads.
Considering this, I'm not convinced the LLM crutch will ever solidify into a pillar of understanding and maintainable competence.
And it really helps if you have a global view across the abstraction stack, even if you don't dive into the details of the implementation. I still think that having some computer organization/OS architecture knowledge would be great for developers - at least to know that memory is not free even though we have GBs of it, and that an internet connection is not an integral part of the computer the way the power supply is.
My experience has been that indeed, it is detrimental to juniors. But unlike your take, it is largely a boon to experienced developers. That you suggest "tedium" is involved for more senior developers suggests to me that you either haven't given the tooling a fair chance or work with a relatively obscure technology/language.
I think you’ve misunderstood the GP. They are saying AI is useful to seniors for tasks that would otherwise be tedious, but doing those tedious tasks by hand would be formative for juniors, and it is detrimental to their growth when they do them using AI.
Ah yeah you're right. In my defense, that sentence reads ambiguously without a couple of re-reads. There's a (not actually) implied double negative in there, or something, which threw me off.
Thanks for pointing it out with words instead of downvotes.
The most interesting thing about this study for me is that when they break it down by experience levels, developers who are above the median tenure show no statistically significant increase in 'productivity' (for some bad proxies of productivity), with the 95% confidence intervals actually dipping deep into the negatives on all metrics (though leaning slightly positive).
This tracks with my own experience: Copilot is nice for resolving some tedium and freeing up my brain to focus more on deeper questions, but it's not as world-altering as junior devs describe it as. It's also frequently subtly wrong in ways that a newer dev wouldn't catch, which requires me to stop and tweak most things it generates in a way that a less experienced dev probably wouldn't know to. A few years into it I now have a pretty good sense for when to use Copilot and when not to—so I think it's probably a net positive for me now—but it certainly wasn't always that way.
I also wonder if the possibly-decreased 'productivity' for more senior devs stems in part from the increase in 'productivity' from the juniors in the company. If the junior devs are producing more PRs that have more mistakes and take longer to review, this would potentially slow down seniors, reducing their own productivity gains proportionally.
A 26% productivity increase sounds in line with my experience. I think one dimension they should explore is whether you're working with a new technology or one that you're already familiar with. AI helps me much more with languages/frameworks that I'm trying to learn.
I'd also expand it to "languages/frameworks that I'll never properly learn".
I'm not great at remembering specific quirks/pitfalls about secondary languages like e.g. what the specific quoting incantations are to write conditionals in Bash, so I rarely wrote bash scripts for automation in the past. Basically only if that was a common enough task to be worth the effort. Same for processing JSON with jq, or parsing with AWK.
Now with LLMs, I'm creating a lot more bash scripts, and it has gotten so easy that I'll do it for process documentation more often. E.g. what previously was a more static step-by-step README with instructions is now accompanied by an interactive bash script that takes user input.
While I’ll grant you that LLMs are in fact shockingly good with shell scripts, I also highly recommend shellcheck [0]. You can get it as a treesitter plugin for nvim, and I assume others, so it lints as you write.
Bash is a terrible language in so, so many ways, and I concur that I have no interest in learning its new features going forward. I remember how painfully inadequate it was for actual computation, and its data types are practically non-existent. Gross.
You have to understand the domain in which it's intended to be used.
Bash scripts are essentially automating what you could do at the command line with utility programs, pipes, redirects, filters, and conditionals.
If you're getting very far outside of that scope, bash is probably the wrong tool (though it can be bent to do just about anything if one is determined enough).
I don't know too much about the history, but wasn't this one of the original motivations for Python? Like it was meant to be basically "bash scripting but with better ergonomics", or something like that.
You might be thinking of Perl. Affectionately known as the Swiss Army chainsaw of programming languages, it incorporated elements from bash, awk, sed, etc. and smushed them together into a language that carried the early Web.
Sounds more like Perl.
I do miss Perl. It was far nicer than Python as a bash replacement. Python is better for larger projects in my opinion, as Perl is too flexible and expressive.
I've found Copilot pretty good at removing some tedium - like it'll write docstrings pretty well most of the time - but it does almost nothing to alleviate the actual mental labour of software engineering.
Yeah this has been my experience. I bucket my work largely into two categories - "creative work" and "toil". I don't want or need AI to replace my creative work, but the more it can handle toil for me, the better.
As it happens, I think co-pilot is a pretty poor user experience, because it's essentially just an autocomplete, which doesn't really help me all that much, and often gets in my way. I like using Cursor with the autocomplete turned off. It gives you the option to highlight a bit of text and either refactor it with a prompt, or ask a question about it in a side chat window. That puts me in the driver seat, so to speak, so I (the user) can reach out to AI when I want to.
I wish I could upvote this comment more than once. There does appear to be a prejudice among more senior programmers: arguing why it cannot work, how these tools just cause more trouble, and various other complaints. The tools today are not perfect, but they still amaze me with what is being accomplished; even a 10% gain is incredible for something that costs $10/month. I believe progress will be made in the space and the tooling in 5 years will be even better.
The prejudice comes down to whether they want to talk the LLM into the right solution vs. applying what they know already. If you know your way around, there's no need to go through the LLM. I think senior devs often tend to be more task-focused, so the idea of outsourcing the thinking to an LLM feels like another step to take on.
I find Claude good at helping me find how to do things that I know are possible but I don’t have the right nomenclature for. This is an area where Google fails you, as you’re hoping someone else on the internet used similar terms as you when describing the problem. Once it spits out some sort of jargon I can latch onto, then I can Google and find docs to help. I prefer to use multiple sources vs just LLMs, partially because of hallucination, but also to keep amassing my own personal context. LLMs are excellent as librarians.
The trouble is that they seem to be getting worse. Some time ago I was able to write an entire small application by simply providing some guidance around function names and data structures, with an LLM filling in all of the rest of the code. It worked fantastically and really showed how these tools can be a boon.
I want to taste that same thrill again, but these days I'm lucky if I can get something out of it that will even compile, never mind the logical correctness. Maybe I'm just getting worse at using the tools.
As a senior, I find that trying to use copilot really only gives me gains maybe half the time, the other half the time it leads me in the wrong direction. Googling tends to give me a better result because I can actually move through the data quicker. My belief is this is because when I need help I'm doing something uncommon or hard, as opposed to juniors who need help doing regular stuff which will have plenty of examples in the training data. I don't need help with that.
It certainly has its uses - it's awesome at mocking and filling in the boilerplate unit tests.
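For instance, this is the kind of test scaffolding I'm happy to hand off (a made-up, stdlib-only example; all the names are mine):

    from unittest.mock import MagicMock

    def charge_customer(gateway, customer_id, cents):
        # Thin wrapper whose tests are pure boilerplate.
        return gateway.charge(customer_id, cents)["status"]

    def test_charge_customer_passes_through():
        gateway = MagicMock()
        gateway.charge.return_value = {"status": "ok"}
        assert charge_customer(gateway, "cust_123", 500) == "ok"
        gateway.charge.assert_called_once_with("cust_123", 500)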
I find their value depends a lot on what I'm doing. Anything easy, I'll get insane leverage - no exaggeration, I'll slap that shit together 25x faster. It's seen likely billions of lines of simple CRUD endpoints, so yeah, it'll write those flawlessly for you.
Anything difficult or complex, and it's really a coin flip whether it's even an advantage; most of the time it's just distracting, giving irrelevant suggestions or bad textbook-style implementations intended to demonstrate a principle but with god-awful performance. Likely because there's simply not enough training data for these types of tasks.
With this in mind, I don't think it's strange that junior devs would be gushing over this and senior devs would be raising a skeptical eyebrow. Both may be correct, depending on what you work on.
I think for me, I'm still learning how to make these tools operate effectively. But even only a few months in, it has removed most of the annoying work and lets me concentrate on the stuff that I like. At this point, I'll often give it some context and tell it what to make, and it spits out something relatively close. I look it over, call out like 10 things, each time it says "you're right to question...", and we do an iteration. After we're through that, I tell it to write a comprehensive set of unit tests, it does that, most of them fail, it fixes them, and then we usually have something pretty solid. Once we have that base pattern, I can have it produce and extend variants from that first solid bit of code: "Using this pattern for style and approach, make one that does XYZ instead."
But what I really appreciate is, I don't have to do the plug and chug stuff. Those patterns are well defined, I'm more than happy to let the LLM do that and concentrate on steering whether it's making a wise conceptual or architectural choice. It really seems to act like a higher abstraction layer. But I think how the engineer uses the tool matters too.
As a senior, you know the problem is actually finishing a project. That's the moment all those bad decisions made by juniors need to be fixed. This also means that an 80% done project is more like 20% done, because in the state it is in, it cannot be finished: you fix one thing and break 2 more.
I am seeing that a lot - juniors who can put out a lot of code but when they get stuck they can't unstick themselves, and it's hard for me to unstick them because they have a hard time walking me through what they are doing.
I've gotten responses now on PRs of the form: "I don't know either, this is what Copilot told me."
If you don't even understand your own PR, I'm not sure why you expect other people can.
I have used LLMs myself, but mostly for boilerplate and one-off stuff. I think it can be quite helpful. But as soon as you stop understanding the code it generates you will create subtle bugs everywhere that will cost you dearly in the long run.
I have the strong feeling that if LLMs really outsmart us to the degree that some AI gung-ho types believe, the old Kernighan quote will get a new meaning:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
We'll be left with code nobody can debug because it was handed to us by our super smart AI that only hallucinates sometimes. We'll take the words of another AI that the code works. And then we'll hope for the best.
Is that so new? We used to complain when someone blindly copied and pasted from stack overflow. Before that, from experts-exchange.
Coding is still a skill acquisition that takes years. We need to stamp out the behavior of not understanding what they take from copilot, but the behavior is not new.
You're right, it isn't entirely new, but I think it's still different. You still had to figure out how that new code snippet applied to your code. Stack overflow wouldn't shape the code you're trying to slot in to fit your current code.
Juniors don't know enough to know what problems the AI code might be introducing. It might work, and the tests might pass, but it might be very fragile, full of duplicated code, unnecessary side effects, etc. that will make future maintenance and debugging difficult. But I guess we'll be using AI for that too, so hopefully the AI can clean up the messes that it made.
Now some junior dev can quickly make something new and fully functional in days, without knowing in detail what they are doing. As opposed to weeks by a senior originally.
Personally I think that senior devs might fear a conflict within their identity. Hence they draw the 'You and the AI have no clue' card.
I haven't found it that useful in my main product development (which, while Python-based, uses our own framework, so there's not much code for CoPilot to go on; it usually suggests methods and arguments that don't exist, which just makes extra work).
Where I do find it useful:
1) questions about frameworks/languages that I don't work in much and for which there is a lot of example content (i.e., Qt, CSS);
2) very specific questions I would have done a Google Search (usually StackOverflow) for ("what's the most efficient way to measure CPU and RAM usage on Windows using Python?") - the result is pointing me to a library or some example rather than directly generating code that I can copy/paste (see the sketch after this list);
3) boilerplate code that I already know how to write but that saves me a little time and avoids typing errors. I have the CoPilot plugin for PyCharm, so I'll write what I want as a comment in the file and then it'll complete the next few lines. Again, the best results are with something very short and specific. With anything longer I almost always have to iterate so much with CoPilot that it's not worth it anymore.
4) a quick way to search documentation
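For point 2, the useful answer is usually just a pointer to a library like psutil, roughly along these lines (a rough sketch of the kind of example it points me to, not the exact code it gave me):

    import psutil

    # System-wide CPU utilisation, sampled over one second.
    print(psutil.cpu_percent(interval=1))

    # System-wide RAM usage.
    mem = psutil.virtual_memory()
    print(mem.percent, mem.available)

    # Resident memory of the current process, in bytes.
    print(psutil.Process().memory_info().rss)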
Some people have said it's good at writing unit tests but I have not found that to be the case (at least not the right kind of unit tests).
If I had to quantify it, I'd probably give it a 5-10% increase in productivity. Much less than I get from using a full featured IDE like PyCharm over coding in Notepad, or a really good git client over typing the git commands in the CLI. In other words, it's a productivity tool like many other tools, but I would not say it's "revolutionary".
> 1) questions about frameworks/languages that I don't work in much and for which there is a lot of example content
Books and manuals, they're pretty great for introductory materials. And for advanced stuff, you have to grok these first.
> 2) very specific questions I would have done a Google Search (usually StackOverflow) for ("what's the most efficient way to CPU and RAM usage on Windows using python")
I usually go backwards for such questions, searching not for what I want to do, but for how it would look if it existed. And my search-fu has not failed me much in that regard, but it requires knowledge of how those things work, which again goes back to books and other such materials.
> 3) boilerplate code that I already know how to write but saves me a little time and avoids typing errors.
Snippets and templates in my editor. And example code in the documentation.
> 4) a quick way to search documentation
I usually have a few browser tabs open for whatever modules I'm using, plus whatever the IDE has, and PDFs and manual pages,...
For me, LLMs feel like building a rocketship to get groceries at the next village, and then hand-waving the risks of explosions and whether it would actually get you there.
I used Google Search for all of the above and would usually find what I needed in the first few hits -- which would lead to the appropriate doc page or an example page or a page on SO, etc.
So it's not like CoPilot is giving me information that I couldn't get fairly easily before. But it is giving it to me much __faster__ than I could access it before. I liken it to an IDE tool that allows you to look up API methods as you type. Or being able to ask an expert in that particular language/domain, except it's not as good as the expert because if the expert doesn't know something they're not going to make it up, they'll say "don't know".
So how much benefit you get from it is relative to how much you have to look up stuff that you don't know.
Well, like I said, a well-designed IDE is a much bigger productivity booster than CoPilot, and I've never heard anyone describe JetBrains, NetBeans or VSCode as "revolutionary".
I've been using Cursor for around 10 days on a massive Ruby on Rails project (a stack I've been coding in for 13+ years).
I didn't enjoy any productivity boost on top of what GitHub Copilot already gave me (which I'd estimate around the 25% mark).
However, for crafting a new project from scratch (empty folder) in, say, Node.js, it's uncanny; I can get an API serving requests from an OpenAPI schema (and serving the OpenAPI schema via Swagger) in ~5 minutes just by prompting.
Starting a project from scratch, for me at least, is rare, which probably means going back to Copilot and vanilla VSCode.
I feel like this whole "starting a new project" might be the divide between the jaded and excited, which often (but not always) falls between senior and junior lines. I just don't do that anymore. Coding is no longer my passion, it's my profession. I'm working in an established code base that I need to thoughtfully expand and improve. The easy and boilerplate problems are solved. I can't remember the last time I started up a new project, so I never see that side of copilot or cursor. Copilot might at its best when tinkering.
If I struggle on a particularly hard implementation detail in a large project, often I'll use an LLM to set up a boilerplate version of the project from scratch with fewer complications so that I can figure out the problem. It gets confused if it's interacting with too many things, but the solutions it finds in a simple example can often be instructive.
To extract maximum value from LLMs, you need to change how you architect and organize software to make it easy for them to work with. LLMs are great with function libraries and domain-specific languages, so the more of your code you can factor into those sorts of things, the greater a speed boost they'll give you.
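A toy illustration of what I mean by the "function library" style (all names made up): small, single-purpose pieces the model can compose instead of re-deriving the logic inline every time.

    # Small, well-named helpers an assistant can reliably reuse.
    def subtotal(items):
        return sum(price * qty for price, qty in items)

    def apply_discount(amount, percent):
        return round(amount * (1 - percent / 100), 2)

    def add_tax(amount, rate):
        return round(amount * (1 + rate / 100), 2)

    # A prompt like "add an order total with a 10% member discount and 8% tax"
    # then mostly reduces to composition:
    def order_total(items, member=False):
        amount = subtotal(items)
        if member:
            amount = apply_discount(amount, 10)
        return add_tax(amount, 8)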
I start new projects a lot and just have a template that has everything I would need already set up for me. Do you think there is a unique value prop that AI gives you when setting up that a template would not have?
I do both. I have a basic template that also includes my specific AI instructions and conventions that are inputs for Aider/Cursor. Best of both worlds.
How do you use AI in your workflow outside of copilot?
I haven't been able to get any mileage out of chat AI beyond treating it like a search engine and then verifying what it said... which isn't a speedy workflow.
I only use (free) ChatGPT sporadically, and it works best for me in areas where I'm familiar enough to call bullshit, but not familiar enough to write things myself quickly / confidently / without checking a lot of docs:
- writing robust bash and using unix/macos tools
- how to do X in github actions
- which API endpoint do I use to do Y
- synthesizing knowledge on some topic that would require dozens of browser tabs
- enumerating things to consider when investigating something. Like "I'm seeing X, what could be the cause, and how do I check if it's that?" For example, I told it last week "git rebase is very slow, what can it be?" and it told me to use GIT_TRACE=1, which made me find a slow post-commit hook, and it suggested how to skip this hook while rebasing.
Same for me. I also use it for some SQL queries involving syntax I’m unfamiliar with, like JSONB operators in Postgres. ChatGPT gives me better results, faster than Google.
Sounds about right to me, which is why the hysteria about AI wiping out developer jobs was always absurd. _Every_ time there has been a technology that improved developer productivity, developer jobs and pay have _increased_. There is not a limited amount of automation that can be done in the world and the cheaper and easier it gets to automate stuff, the more stuff will be economically viable to automate that wasn't before. Did IDE's eliminate developer jobs? Compilers? It's just a tool.
The automated elevator is just a tool, but it "wiped out" the elevator operator. Which is really to say not that the elevator operator was wiped out, but that everyone became the elevator operator. Thus, by the transitive properties of supply and demand, the value of operating an elevator declined to nothing.
Said hysteria was built on the same idea. After all, LLMs themselves are just compilers for a programming language that is incredibly similar to spoken language. But as the programming language is incredibly similar to the spoken language that nearly everyone already knows, the idea was that everyone would become the metaphorical elevator operator, "wiping out" programming as a job just as elevator operators were "wiped out" of a job when operating an elevator became accessible to all.
The key difference, and where the hysteria is likely to fall flat, is that when riding in an elevator there isn't much else to do but be the elevator operator. You may as well do it. Your situation would not be meaningfully improved if another person was there to press the button for you. When it comes to programming, though, there is more effort involved. Even when a new programming language makes programming accessible, there remains a significant time commitment to carry out the work. The business people are still best to leave that work to the peons so they can continue to focus on the important things.
Does it increase the number of things that pass QA?
Do the things done with AI assistance have fewer bugs caught after QA?
Are they easier to extend or modify later? Or do they have rigid and inflexible designs?
A tool that can help turn developers into unknown quality code monkeys is not something I’m looking for. I’m looking for a tool that helps developers find bugs or design flaws in what they’re doing. Or maybe write well designed tests.
Just counting PRs doesn’t tell me anything useful. But it triggers my gut feeling that more code per unit time = lower average quality.
Great call out. I'd be extremely curious what the results looked like with something like cursor or just using claude out of the box. I'm amazed at just how easy it is to get simple small scripts up and going now with claude.
For me AI just brought back documentation. All new frameworks lack documentation big time. The last good one for me was a DOS book! I don't think newer developers even have an idea of what good documentation looks like.
Even so, AI will propose different things at different times and you still need an experienced developer to make the call. In the end it replaces documentation and typing.
For public-facing projects - your documentation just became part of the LLM's training data, so it's now extra important that your documentation is thorough and accurate, because you will have a ton of developers getting answers from that system.
For private projects, your documentation can now be fed into a finetuning dataset or a RAG system, achieving the same effect.
Every now and then there's a HN discussion "how do you manage internal documentation" where most commenters write something like "there's no point writing documentation because it quickly becomes outdated". (There were two such threads in just the last few days.) Might explain why nothing is documented anymore.
Yes, it can help you write documentation, but also, if you start with the documentation - e.g. the comment describing what the function does - AI becomes orders of magnitude better at actually helping write the function. So it actually pushes developers to better document their code!
I've noticed the same thing. I can get great results by being detailed in my comments and/or docstrings. A lot of the time I don't want the AI actually "thinking" for itself.
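A made-up example of what that looks like in practice - I write the docstring as the spec and let the assistant propose the body, which I then check:

    import re

    def normalize_phone(raw: str) -> str:
        """Strip spaces, dashes and parentheses from a US phone number and
        return it in E.164 form ("+1XXXXXXXXXX"). Raise ValueError if the
        input isn't 10 digits (or 11 digits starting with 1)."""
        digits = re.sub(r"\D", "", raw)
        if len(digits) == 11 and digits.startswith("1"):
            digits = digits[1:]
        if len(digits) != 10:
            raise ValueError(f"not a valid US number: {raw!r}")
        return "+1" + digits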
Oh hey, I really love writing great docs - of course I don't always have the opportunity to do so - but could you point me to one you consider great?
Can be anything, no need to be some modern live docs.
I want to see what is in the past that was so great but we lost, maybe I can incorporate some of it.
The result is "less experienced people got more stuff done". I do not see an assessment of whether the stuff that got done was well done.
The output of these tools today is unsafe to use unless you possess the ability to assess its correctness. The less able you are to perform that assessment, the more likely you are to use these tools.
Only one of many problems with this direction, but gravity sucks, doesn't it.
When I'm using genai to write some code for me, I lose the internal mental state of what my code is doing.
As such, when I do have to debug problems myself, or dream up ideas of improvements, I no longer can do this properly due to lack of internal mental state.
Wonder how people who have used genai coding successfully get around this?
I use Claude and there's a couple things I'd suggest.
1) You need to be the boss with the AI being your assistant. You are now a project manager coming up with strict requirements of what you'd like done. Your developer (AI) needs context, constraints and needs to be told exactly what you'd like created without necessarily diving into the technical details.
2) Planning - you need to have a high level plan of roughly how you'd like to structure your code. Think of it like you're drawing the outline and AI is filling in the gaps.
3) Separation of concerns - use software principles to drive your code design. Break problems down into separate components, AI is good at filling in components that are well defined.
Once you change your thinking to a higher level, then you can maintain flow state. Of course the AI isn't perfect and will make mistakes - you do need to question it as you go. The more creative you become with a solution the harder time the AI will have and sometimes you'll have to break out and fix things up yourself.
Go over the code again and again like you would if you'd written it yourself. Keep iterating on the code yourself without having the AI generate everything. In a current project I have everything internalised because I'm constantly going through the code and improving upon it myself (and using the AI to do boilerplate), even though much of it was initially generated. I'll still have the AI generate code in plenty of places, but I don't let it take over the thinking for me unless I'm unsure about something, then I ask it for possible solutions and ideas, then I go over those solutions myself.
They also added lots of technical debt as I'm sure they used the AI to generate tests and some of those tests could be actually testing bugs as the correct behavior.
I've already fixed a couple of tests like this, where people clearly used AI and didn't think about it, when in reality it was testing something wrong.
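A contrived sketch of the pattern (everything here is made up): the test passes only because it encodes the same bug the implementation has.

    def days_in_february(year):
        # Buggy implementation: misses the 100/400 leap-year rules.
        return 29 if year % 4 == 0 else 28

    def test_days_in_february():
        # Generated alongside the code above, so it enshrines the bug:
        assert days_in_february(2024) == 29
        assert days_in_february(1900) == 29  # wrong: 1900 was not a leap year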
Not to mention the rest of the technical debt added... looking at productivity in software development by the number of tasks completed is so wrong.
They must have had the AI write the implementation as well?
If you're still cognizant of what you're writing on the implementation side, it's pretty hard to see a test go from failing to passing if the test is buggy. It requires you to independently introduce the same bug the LLM did, which, while not completely impossible, is unlikely.
Of course, humans are prone to not understanding the requirements, and introducing what isn't really a bug in the strictest sense but rather a misfeature.
> it's pretty hard to see a test go from failing to passing
It's pretty easy to add a passing test and call it done without checking whether it actually fails in the right circumstances, and then you will get a ton of buggy tests.
Most developers don't do the start-failing-then-make-it-pass ritual, especially junior ones who copy code from somewhere instead of knowing what they wrote.
> They also added lots of technical debt as I'm sure they used the AI to generate tests and some of those tests could be actually testing bugs as the correct behavior.
Let's not forget that developers sometimes do this, too...
Empirical studies like this are hard to conduct... I'm curious though. This study was authored by at least two folks from Microsoft, and one of the sample groups in the study was also from Microsoft. This stands out to me as odd because Microsoft also owns the AI tool being used in the study and would definitely want a favourable conclusion in this paper.
Can someone potentially smarter than me explain how the data - which in Table I clearly shows the majority of the means for each experiment metric being less than the SD - could even hope to be salvaged? Taken blindly, the results range from simply unbelievable to outright lying, the sort of thing you see submitted to garbage open-access journals. The text describing the model they employ afterwards is not convincing enough for me and seems light on details. I mean, wouldn't any reasonable reviewer demand more?
I know preprints don't need polish but this is even below the standard of a preprint, imo.
"However, the table also shows that for all outcomes (with the exception of the Build Success Rate), the standard deviation exceeds the pre-treatment mean, and sometimes by a lot. This high variability will limit our power in our experimental regressions below."
What I find even stranger is that the values in the "control" and "treatment" columns are so similar. That would be highly unlikely given the extreme variability, no?
Reminds me of a situation I've been in a few times already:
Dev: Hey einpoklum, how do I do XYZ?
Me: Hmm, I think I remember that... you could try AB and then C.
Dev: Ok, but isn't there a better/easier way? Let me ask ChatGPT.
...
Dev: Hey einpoklum, ChatGPT said I should do AB and then C.
Me: Let me have a look at that for a second.
Me: Right, so it's just what I read on StackOverflow about this, a couple of years ago.
Sometimes it's even the answer that _I_ wrote on StackOverflow and then I feel cheated.
If having fun is what makes you happy, then yeah, that's great, brother! Sharing is caring, and I'm glad you are so smart that your work is the giants' shoulders other people make use of.
I'm guiding a few and sometimes they write pretty good code with the help of GPT but then in meetings have trouble understanding and explaining things.
I think it's a big productivity boost, but also a chance that the learning rate might actually be significantly slower.
This is far, far more rigorous than the experiment behind Microsoft's claim that Copilot made devs 55% faster.
The experiment in question was to split 95 devs into two groups and see how long it took each group to set up a web server in JavaScript. The control group took a little under 3 hours on average; the Copilot group took 1 hour and 11 minutes on average.
And it is thanks to this weak experiment that GitHub proudly boasts that Copilot makes devs 55% faster.
By contrast the conclusion that Copilot makes devs ~25% more productive seems reasonable, especially when you read the actual paper and find out that among senior devs the productivity gains are more marginal.
This is a decently-thorough study, using PRs as a productivity metric while also tracking build failures (which remained constant at MSFT but increased at Accenture).
Would love to see it replicated by researchers at a company that does not have a clear financial interest in the outcome (the corresponding author here was working at Microsoft Research during the study period).
> Before moving on, we discuss an additional experiment run at Accenture that was abandoned due to a large layoff affecting 42% of participants
It's very exciting that generative AI lets people get more code written, especially in unfamiliar domains. It feels great to ship loads of code that does a thing. The sure result is an unprecedented increase in the amount of code in products.
A minor drawback to that enthusiasm is that a lot of the code I read didn't need to exist in the first place, even before this wave. Lots of it can be attributed to the path dependence of creation as opposed to what it is trying to do. This should be a rich time to change to security / exploit work - the random search tools are great and the target just keeps getting easier.
What our industry really desperately needed was to drive the quality of implementation right down. It's going to be an exciting time to be alive.
> Notably, less experienced developers showed higher adoption rates and greater productivity gains.
And that is why demand for senior developers is going to go through the roof. Who is going to unfuck the giant balls of mud those inexperienced devs are slinging together? Who’s going to keep the lights on?
I've used Copilot and chatGPT to help with algorithms where I'm unsure where to start. Actual case: "Write a performant algorithm to find the number of work days given a future date". It's trickier than you think and makes a great interview question.
Both AI tools came back with... garbage: loops within loops within loops as they iterated through each day, checking whether it was a weekend, whether it was a leap year (to account for the extra day), whether it was a holiday, etc.
However, ChatGPT provided a clever division to cut the dataset down to weeks, then process the remainder. I ended up using that portion in my final algorithm.
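The week-division trick looks roughly like this (my own minimal Python sketch, not the tool's output; holidays still need a separate lookup, and datetime already handles leap years):

    from datetime import date, timedelta

    def workdays_until(target: date, start: date) -> int:
        """Count Mon-Fri days in [start, target), ignoring holidays."""
        if target <= start:
            return 0
        full_weeks, remainder = divmod((target - start).days, 7)
        count = full_weeks * 5  # any 7 consecutive days contain exactly 5 weekdays
        for i in range(remainder):  # at most 6 leftover days to check
            if (start + timedelta(days=full_weeks * 7 + i)).weekday() < 5:
                count += 1
        return count

    # e.g. workdays_until(date(2025, 1, 1), start=date(2024, 1, 1)) == 262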
So, my take on AI coding tools is: "Buyer beware. Your results may vary."
I look at this very differently. There are a lot of grumpy people jealously guarding their "skills" and artisanal coding prowess, getting really annoyed at the juniors who come in and devalue what they do by just asking an AI to do the same thing and then moving on with their day. Young people are more mentally agile, and most young people are now growing up with LLMs as a toolchain that was always there.
I'm actually fairly senior (turning 50 next month) and I notice an effect that AI is having on my own productivity: I now take on mini projects that I used to delegate or avoid doing because they would take too much time or be too tedious. That's not the case anymore. The number of things I can use chat gpt for is gradually expanding. I notice that I'm skilling up a lot more rapidly as well.
This is great because if you want to stay relevant, you need to adapt to modern tools and technology. That's nothing new of course. Changes are a constant in our industry. And there always are a lot of people that learn some tricks when they are young and then never learn anything new again. If you are lucky some of that stuff stays relevant for a few decades. But mostly a lot of stuff gets unceremoniously dumped by younger generations.
The ability to use LLMs is becoming an important skill in itself and one that is now part of what I look for in candidates (including older ones). I don't have a lot of patience for people refusing to use tools that are available to them. Tell me how you use tools to your advantage; not how you have a tool aversion or are unwilling to learn new things.
At the end of the day, all the layers of abstraction will be removed... interpreters, transpilers, compilers, microcode, instruction sets... all the layers of convenience and APIs created for humans and the overhead they introduce will be gone.
It will be a machine game, just like assembly is mostly compiler generated today.
The AI will produce faster, smaller, more power efficient and more secure binaries than a human ever can.
The AI will learn all the compilation steps, fuse everything into a simplified pipeline and what we call compilers today will be erased from reality.
"Tell me how you use tools to your advantage; not how you have a tool aversion or are unwilling to learn new things" is a false binary. It leaves out the possibility that the tool in question is not suited for its purpose.
I am an average developer with more than five years of experience in Python. I was using ChatGPT to create prototypes of what to do in something I was familiar with, and I was able to debug the result to make it work. (I wasn't specific enough to say that the e-paper display had seven colors instead of black and white.)
When I was using ChatGPT for the qualifiers of a CTF called Hack-A-Sat at DEF CON 31, I could not get anything to work, such as GNU Radio programs.
In my experience, if you have the ability to debug, it is productive; but when you don't understand what you're doing, you run into problems.
However, there's a big question as to whether these are short-term productivity gains vs. longer-lasting gains. There's a hypothesis that AI-generated code will slowly spaghetti-fy a codebase.
Is 1-2 years long enough to take this into consideration? Or to disprove the spaghettification?
> Notably, less experienced developers showed higher adoption rates and greater productivity gains.
This is what I've seen too. I don't think less experienced developers have gotten better in their understanding of anything - just more exposed and quicker - while I do think more experienced developers have stagnated.
> while I do think more experienced developers have stagnated
Is this because they are not using coding assistants? Are they resistant to using them? I have to say that the coding assistant is helpful; it is an ever-present rubber duck that can talk back with useful information.
Across teams and organizations, what I've seen this year is that the "best" developers haven't even looked. The proverbial rockstars with the most domain knowledge have never gotten around to AI and LLMs at all.
This is compounded by adherence to misguided corporate policies that broadly prohibit the use of LLMs but were only ever meant to be about putting trade secrets into the cloud, without distinguishing between cloud-hosted and locally run language models. Comfortable people would never challenge this policy with critical thinking, and it takes a special interest to look at locally run language models, even just to choose which one to run.
Many developers have not advocated for more RAM on their company issued laptop to run better LLMs.
I also haven't seen any internal language model that a company runs on its intranet. But it would be cool if there were a Hugging Face-style catalogue and server farm that companies could host, letting their employees choose which models to prompt and always having the latest models available to load.
I think this post from the other day adds some important context[0]. In that study, kids with access to GPT did way more practice problems but did worse on the test. The most important part was the finding that while GPT usually got the final answer right, the logic behind it was wrong - which, for learning purposes, is as bad as a wrong answer. This is true for math and code.
There's the joke: there are two types of 10x devs, those who do 10x work and those who finish 10x Jira tickets. The problem with this study is the assumptions it makes, which are quite common and naive in our industry. They assume that PRs and commits are measures of productivity, and they assume passing review is a good quality metric. These are so variable between teams; plenty of reviews are just "lgtm".
The issue here is that there's no real solid metric for things like good code. Meeting the goals of a ticket doesn't mean you haven't solved the problem so poorly that you are the reason 10 new tickets will be created. This is the real issue here, and the only real way to measure it is Justice Potter Stewart's test (I know it when I see it), which requires an expert evaluator. In other words, tech debt. Which is something we're seeing a growing rise in - all the fucking enshittification.
So I don't think the study here contradicts [0]; in fact, I think they're aligned. But I suspect people who are poor programmers (or non-programmers) will use this as evidence for what they want to see, believing naive things like lines of code or number of commits/PRs are measures of productivity rather than hints at a measure. I'm all for "move fast and break things" as long as there's time set aside to clean up the fucking mess you left behind. But there never is. It's like we have business ADHD. There's so much lost productivity because so much focus is placed on short-term measurements and thinking. I know medium- and long-term thinking are hard, but humans do hard shit every day. We can do a lot better than a shoddy study like this.
ChatGPT and to a lesser degree Copilot have been very valuable to me this year.
Copilot often saves me a lot of typing on a 1-3 line scope, occasionally surprising me with exactly what I was about to write on a 5-10 line scope. It’s really good during rearrangement and early refactoring (as you are building a new thing and changing your mind as you go about code organization).
ChatGPT, or “Jimmy” - as I like to call him - has been great for answering syntax questions, idiom questions, etc. when applying my general skills based on other languages to ones I’m less familiar with.
It has also been good for “discussing” architecture approaches to a problem with respect to a particular toolset.
With proper guidance and very clear prompting, I usually get highly valuable responses.
I would roughly guess that these two tools have saved me 2-3 months of solo time this year - nay, since April.
Once I get down into the deep details, I use Jimmy much less often. But when I hit something new, or something I long since forgot, he's ready to be the relative expert / knowledge base.
I've been doing a lot of new things lately and using some tech I hadn't used before (or lately... and much has changed).
Being able to ask for an example of something in that domain, and get a useful answer, is much, much faster than hunting down the current documentation (which may be thin or non-existent).
Also being able to say, "I do X in this language. What is the idiomatic way of doing it in Y language?"
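A trivial, made-up example of the kind of exchange I mean:

    # "In JavaScript I'd write users.filter(u => u.active).map(u => u.name);
    #  what's the idiomatic Python?"
    users = [{"name": "Ada", "active": True}, {"name": "Bob", "active": False}]
    names = [u["name"] for u in users if u["active"]]
    print(names)  # ['Ada']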
My pretty broad knowledge can be directed, with careful wording, at ChatGPT, and ChatGPT is the relative domain expert who can get me quite close to a correct solution very quickly.
I used to be super productive at the raw keyboard. Then RSI got to me. But with CoPilot, I’m back to my normal productivity. For me it’s a life-saver as it allows fast typing with minimal hand strain.
Interestingly, they use the number of pull requests as a statistic for productivity. Not that I know of a better metric, but I wonder whether it is an accurate one. It seems similarly misguided as looking at lines of code.
If an AI tool makes me more productive, I would probably either spend the time saved browsing the internet, or use it to attempt different approaches to solving the problem at hand. In the latter case, I would perhaps make more reliable or more flexible software. Which would also be almost impossible to measure in a scientific investigation.
In my experience, the differences in developer productivity are so enormous (depending on existing domain knowledge, motivation, or management approach), that it seems pretty hard to make any scientific claim based on looking at large groups of developers. For now, I prefer the individual success story.
This is obvious. Right, of course you get an increase in productivity, especially as a junior - when current AI is able to solve leetcode.
BUT, as a lot of people have mentioned, you get code that the person who "wrote" it does not understand. So the next time you get a bug there, good luck fixing it.
My take so far: AI is great, but only for non-critical, non-core code. Everything done for plotting and scripting is awesome (things that could take days to implement are done in minutes with AI) - but core lib functions I wouldn't outsource to the AI right now.
An interesting topic that will, given this site's community, attract a lot of strong opinions.
I, for one, only decide whether CoPilot's productivity increase is worth the $10 it costs per month.
It doesn't really matter whether you're an employer getting a 3-30% increase in productivity or whether you pay for it personally, finish 2 hours faster every week, and log off illegally. It's easily worth its money. What more is there to consider?