Ask HN: Is average code getting worse?
68 points by Aeolun 45 days ago | 83 comments
Hi HN,

There used to be a time, in the dark dark ages of history, 10 years ago or so, when I would encounter issues during the course of my work, and I could fairly confidently assume I was doing something wrong, or I just hadn’t read the manual correctly.

Contrast that to now, when I regularly write code, or include a library to do a certain thing, and I find it just does not work. Following my historical logic, I spend a day trying to figure out what I did wrong, only to discover that it was a bug, or an edge case that someone just hadn’t covered (never mind that my life seems to consist of edge cases now), or that the documentation is just plain out of date (or nonexistent).

Is this a trend? And does it have to do with the average code, or myself? Have you experienced something similar? It’s driving me nuts.

I want to rely on other code, but more and more I find that it’s safer to just assume it’ll be broken from the start.




I'm in the camp that would answer "yes" to your question.

For me, it went like this:

- The nerds did their nerd shit in the garage. Nobody saw them.

- Then the nerds figured out how to revamp many business processes with computers.

- Then those businesses needed more of that, so they started hiring other nerds.

- Not too many nerds existed because they're nerds and who wants to be a nerd? So therefore demand was high. What happens to the nerds' salaries when demand is high? Salaries are high.

- Now the nerds are making money. Tons of it. People started to notice that the nerds could sit all day and type at a computer (hey I can do that too!) but they make 100k more than I do.

- Eager to cash in on that nerd money, I start to google "how to code". Codecademy, Khan Academy, Udemy, and a plethora of highly expensive code camps show up. I pick my poison and begin.

- A Github profile is set up, my LinkedIn is updated, I have a few webapps under my belt, I'm ready for my interview.

- I get a job at a big tech company in a junior position. A few weeks go by and I'm asked if I could help interview another candidate. Of course, I'm qualified enough, right?

And the cycle continues.

This is how I perceived the shift happen. Code Camps were really detrimental since it became very difficult to vet actual skills vs. ability to pass coding interviews. When I worked at Uber this was a huge deal - a lot of people that had just finished code camps nailed the interviews but only lasted a few months because they had no idea how to actually do anything.

Of course it's all very nuanced and this isn't the only thing that happened (making programming "easier" certainly hasn't helped). But this was a large factor.


I taught for code camps. I think there's truth to this. Teaching programming is a lot like teaching math. Ideally, you teach them how to follow things logically. But due to time constraints, you just give them a bunch of exercises, especially the ones that help them land jobs. Today it's probably harder to land a job than to do the job, so why not optimize for that? Being able to code at all is hard, so why not teach them to code and hope the passion clicks once they can build things?

But I suppose it doesn't work that way. These guys were not meant to leapfrog people who have a Master's and get into Uber. It's surprising that my average 2-3 month bootcamp student makes a higher income than someone with an actual degree. I'd love to find a way to fix this, but I don't have the resources to, and most people who do don't want to give up the money.


heh, makes me want to go to a code camp now. I've got ~7 years of professional(?) experience at two medium-sized companies, plus ~11 years as a hobbyist. Spent my years reinventing a ton of things, and know a fair number of oddities about how to get things done. Combine that with a passion for idiomatic, standards-compliant, maintainable code and... I think I'd be a decent hire.

(Primary experience through the years: Rust (current), Go, Python, JavaScript (Node, etc.).)

Yet I don't think I'd pass a single interview. I really need to learn the fundamentals those interviews test for. Feels like a liability at this point.


Nobody that hired me has ever been disappointed by what they got, but there’s a ton of people that never hired me because I can’t tell them whether something is O(n) or O(n*n), just ‘faster’ and ‘slower’.

Who the hell cares what it’s called as long as it’s the fastest it’s ever going to be.

> Do you have options other than caching?

Yes, but who cares since they’re all worse (for your wallet and for speed).

I may still be a bit salty.
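(For anyone curious, here's the distinction the interviewers wanted named, as a minimal TypeScript sketch with made-up function names:)

    // O(n*n): for each item, scan the rest of the array again.
    function hasDuplicateQuadratic(items: string[]): boolean {
      for (let i = 0; i < items.length; i++) {
        for (let j = i + 1; j < items.length; j++) {
          if (items[i] === items[j]) return true;
        }
      }
      return false;
    }

    // O(n): one pass, with constant-time lookups in a Set.
    function hasDuplicateLinear(items: string[]): boolean {
      const seen = new Set<string>();
      for (const item of items) {
        if (seen.has(item)) return true;
        seen.add(item);
      }
      return false;
    }

Both are correct; the difference only shows up once the input gets large, which is exactly the ‘faster’/‘slower’ intuition, just with a name attached.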


Not wanting to rub salt in your wound, but there is a difference between being able to tell that a car went fast or slow, or definitely slower than the speed limit, or exactly 58.4 km/h.


That’s fine. If someone ever wants to know whether it was 49km/h or 51.3km/h we can figure it out. It’s just not knowledge that is very essential most of the time.


I don't know whether to laugh or to cry! This is so true. Too many people with not enough background/experience, the dumbing down of programming to appeal to the masses (hey, just use this framework and call this API), the explosion in complexity of everything because apparently all solutions have to be at Google scale, a babel of languages/tools, etc. etc. The joy of understanding/mastery/comprehension is gone.


> The joy of understanding/mastery/comprehension is gone.

Yes, and is also not rewarded. Deeply understanding a problem has much less career value than job hopping + leet code practice.

We're creating a generation of senior developers who do not understand problems deeply. (And are also "senior" within 1-2 years...)


How else will you be a staff software engineer in 4 years?


The last sentence is especially true. I hate writing code most days, now. There's nothing interesting about the problems most companies are trying to solve, for example.


How is it that the smartest and most effective programmer that I've worked with was a boot camp grad? How is it that I've worked with programmers that have master's degrees from reputable universities that simply can't code? I think it's true that salaries definitely have had an effect, but there's also the fact that the industry has grown exponentially, and the employees have to come from somewhere. Make the pool bigger and you're going to have more bad programmers among the good ones.

Currently I'm working with a team and it's a mixed bag, some of the guys are good and some are terrible but they're all well-educated.


So in the late 90s in computer science, 90% I'd say were there just to cash in. Code camp or university, the idea that a lot of people enter a field when the gold is plentiful is not new... The only thing I think is true is that there is no good code or bad code, there is only code...


Yes, absolutely that is one, probably major factor.

But the fact that those people could even get there and do that was caused by the rise of frameworks and GC languages.

A person no longer needed to:

1. Understand how things worked. In many cases you could just do some tutorials and start producing things that kinda worked but underneath were a pile of garbage.

2. Worry about memory or optimization, so they wrote more piles of garbage.

I'm not saying we want to go back there, but if everyone had to write C, C++, Rust, or one of the functional languages:

1. We'd have a lot fewer people in the industry

2. Overall quality of code would be higher, though I'm sure more C/C++ would have led to more memory and security issues as well... or perhaps not.


When I was a younger developer I used to believe things like this. There are tons of roles and different levels to fill. People come from all sorts of backgrounds. You shouldn't discount folks for following a different path. A passionate and curious developer can learn C, C++, Rust if/when they need to.

I would also argue that in the front end world frameworks have vastly improved code quality and developer experience. 10 years ago every team had their own homegrown build system, there were no libraries for type checking, and you had to write code or conditionals for the lowest-version browser you supported. Oh, and app structure? Every team did something different.

Compare that to now: We have a handful of large open source build systems, so if you jump teams/projects there is consistency. We have things like TypeScript and Flow. We have Babel, which can transpile modern JavaScript into older versions of the language, so you can support older browsers without even thinking about it. Finally, we have frameworks like React and Vue, which enforce a component based architecture. This is great because you can jump between different projects and for the most part will find code organized in the same way. All of this is amazing.
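To make the TypeScript point concrete, here's a minimal, hypothetical example of the kind of bug it catches at build time instead of in the browser (the type and function names are made up):

    // A shared type that both the API layer and the UI code agree on.
    interface User {
      id: number;
      name: string;
    }

    function renderGreeting(user: User): string {
      return `Hello, ${user.name}!`;
    }

    // Ten years ago this typo surfaced at runtime as "Hello, undefined!".
    // With TypeScript it fails to compile instead:
    //   renderGreeting({ id: 1, nmae: "Ada" });
    //   error: 'nmae' does not exist in type 'User'.

Babel or tsc then compiles that down to whatever older JavaScript the browsers you support can run.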


I've been in the Node.js world for 10 years now, and the web dev world for almost 20.

There are now hundreds of build systems, most of which copy the other ones but still aren't compatible with each other.

There are hundreds of frameworks spewing out security issues all over the ecosystem.

React and Vue don't enforce anything and certainly aren't the most stable of the selection of frameworks that exist. At least for React, a significant amount of money was dumped into it by a large, privacy invading and unethical corporation, which ultimately made it "popular". Facebook used it so it MUST be good, right?

None of it is amazing; it's quite sad. The fact that we're locked into a small set of browsers, none of which can be reproduced or hacked on in any meaningful way aside from those dedicated to them. The fact we can't create our own browsers without wading through an ocean of specs.

It's ludicrous. How anyone can say "it's amazing" in a positive connotation astounds me.


I'm not saying things are the best I think they can be. I'm saying the progress is amazing and overall has been positive.

If someone were to make you an offer to build a complex web app right now using current tooling vs traveling back in time to 2010 and building it using 2010 tooling, would you actually choose to go back to 2010?


A resounding "YES".


I've also been into web dev for 20 years, Node for 5. This is exactly how I feel.


Honestly, I love dealing with Android because it cuts close to the operating system and is one of the few mainstream jobs where you don't deal with frameworks. You deal with the mess that is the Android file system, for example, and I'm not sure if that's a good thing, but at least it's fun.

One of the benefits is we can swap in external code that "does everything", but in a poor manner, yet it works well enough to suit business purposes. And later on you can surgically remove it and replace it with an optimized home-built version.


I agree entirely.


I wrote an article collecting some of my thoughts on this topic a while ago: https://dev.to/darkwiiplayer/a-rant-on-change-and-the-good-o...

In it I also complain quite a bit about the cultural shift away from "building cool stuff" to "top 10 JavaScript interview questions to memorise for 2021".


The expanded pool of engineers, as the field has grown, has diluted the quality of software.

Post "Social Network" (the movie), and the economic downturn of 2008-2012 there is a new generation of engineers that started programming/entered the field because that's where the money is....

There is also the proliferation of code academies, which often encourage trainees to create a 'GitHub portfolio', often full of low-quality libraries.

The 2001 tech bust acted like a great filter. During 2001-2012, a lot of the people who remained in the field were there out of general passion for it, and had the skills to stay employed after the bust.

People that couldn't cut it left the field. Hence, the software was often more a work of passion, or craft. A lot of the open source libraries that we use today were created during this time (and the 90s, for the databases and Linux), and a lot of the culture of startups (good and bad) was cemented during this time as well.

I think the field massively expanded after 2012, and people started doing engineering because that's where the money was/became trendy. Hence you will naturally have a dilution of overall quality.


Also all the bad pre-2001 programmers have come back and instantly jumped into tech lead and CTO positions.


If you suspect that greater expectations of easy money and prestige in the industry have led to lower quality code, what else might fall out of that?

Would you also expect pockets of the industry where money and prestige were hard to come by (such as video games or experimental open source operating systems) to have better code on average?


Well, certainly video games are much harder than what most software engineers are doing. And game developers have a strong incentive to care about performance and efficiency and really push the machine to its limits.

I am currently working at a FAANG company and doing game programming on the side. The problems I am facing at my day job are mostly trivial compared to the game and graphics programming I am doing on nights and weekends.


I don't think it's the code necessarily. It's the feature creep. It used to be that we just supported monitors - now it's touchscreens, tablets, watches, keyboards, mice. It's LCD screens, plasma, TVs, and so on.

I find that code becomes exponentially more costly the more features you add, because of the number of architectural layers you have to add on. And if you haven't planned for that, well, now you have to deal with the pain of adding another architectural layer.

HN is likely in the sweet spot of good architecture, but it also means that it's not supporting things like infinite scrolling and reddit-like awards.


Interesting that you should mention device support as a cause of more complex code, as the best way to support the devices you list is generally to use LESS code.

Bare HTML is supported by pretty much everything...


But for some reason it's not cool to use HTML. We throw on megabytes of code just to bypass the page reload so that we can better control the user experience. This is why we end up with complex and bloated libraries like React, and with frontend and backend state that have to be kept in sync.

I wish we went back to plain HTML, especially for sites that don't need to be dynamic.


Browsers shouldn’t let you bypass reloads outside of the window unload event, and that is standardized and doesn’t leave much room to mess with. Can you elaborate on what you’re referring to?


I assume the parent is referring to the rise in popularity of single-page applications.
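Right. Stripped down to a hypothetical sketch, the "bypass the page reload" trick is: intercept navigation, update the address bar with the History API, and swap content in with JavaScript, so the browser never does a full reload (this assumes the server can return HTML fragments and the page has an #app container):

    // Minimal client-side routing: no full page reload ever happens.
    document.addEventListener("click", (event) => {
      const link = (event.target as HTMLElement).closest("a");
      if (!link) return;

      event.preventDefault();               // stop the normal navigation
      history.pushState({}, "", link.href); // still update the URL bar
      void renderPage(link.href);           // fetch and render ourselves
    });

    async function renderPage(url: string): Promise<void> {
      const response = await fetch(url);
      const html = await response.text();
      document.querySelector("#app")!.innerHTML = html;
    }

    // Handle back/forward, which pushState alone doesn't cover.
    window.addEventListener("popstate", () => {
      void renderPage(location.href);
    });

Once the browser stops doing the reload, your JavaScript owns navigation, caching and state, which is where the megabytes the parent mentions tend to come from.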


It is similar to "are teenagers worse than they used to be?".

But in all fairness - from my personal experience - the average code is getting better. It is mostly thanks to tooling: languages (compare writing readable code in PHP vs Python, or C vs Rust), version control (no need to keep tons of unused code that one does not want to throw away), linters getting more widespread, and tests and CI becoming a standard.


I don't think code has changed. Definitely not in the last 10 years. Sturgeon's law is much older than that :)

• Visibility and accessibility of software have increased. Repositories like npm and GitHub mean you can see anyone's weekend project. If you only got code from distros (curated repositories), that would have been a filter.

• It's survivorship bias. Old libraries that survived must have been good enough. New half-baked libraries written today will either get polished and survive, or be forgotten.


I also find that this varies by the ecosystem.

In PHP I would not assume that an external dependency is without flaws. In fact, I'd assume that it's mostly buggy simply due to the lack of a type system, but fine if I navigate the happy path.

In Haskell I would assume that I'm using the library in a wrong way. In Haskell I also find that I can categorize libraries by quality and that, in the most cases, I'm not running into bugs, but rather lack of good examples or lack of pushing the dependency bounds to Stackage (the package distro on top of Hackage), either because the author doesn't use Stackage or because they've become too busy.

---

Comparing code 10 years ago and now:

I find that there are many more libraries in almost every ecosystem.

That means: More stale packages, more low-grade packages, more critical selection required, but also more opportunities! Hopefully you also experience that you can do a lot more with packages than you were able to 10 years ago.

This, in some ways, has shaped modern software development a lot.


In case OP or anyone else is interested: PHP lacks strict typing (by default). For older applications I agree with you, but in the last few versions it's gotten a lot better. Depending on your framework or tools, you can customise file templates and add strict typing to each file, e.g. Laravel's `php artisan make:xxx` can now always include `declare(strict_types=1);`. Of course you can't assume that for external packages, but it's highly recommended.


Here are some quotes of people complaining about "youngsters these days".

1. The world is passing through troublous times. The young people of today think of nothing but themselves. They have no reverence for parents or old age. They are impatient of all restraint. They talk as if they knew everything, and what passes for wisdom with us is foolishness with them. As for the girls, they are forward, immodest and unladylike in speech, behavior and dress.

2. The children now love luxury; they show disrespect for elders and love chatter in place of exercise. Children are tyrants, not servants of the households. They no longer rise when their elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize over their teachers

3. I see no hope for the future of our people if they are dependent on frivolous youth of today, for certainly all youth are reckless beyond words... When I was young, we were taught to be discreet and respectful of elders, but the present youth are exceedingly wise [disrespectful] and impatient of restraint

#1 was written in the 13th century, #2 in the 5th century BC and #3 in the 8th century BC. Old people always think young people are worse than ever.

I say this not to invalidate your feelings of frustration with how people are coding these days compared to the good old days. But ask yourself if your skills haven't improved in that time. Bugs that you would miss 10 years ago are now obvious to you. Also, there's just a lot more code out there than there used to be, judging by number of repos. The friction to put something into a public repository is lower than ever. That gets us some gems that might not have existed otherwise, but also some turds.

Rest assured that 10 years from now, devs will be talking about the good old days of today and how much better it was compared to 2030.


This misses the mark, though I can't quite put my finger on why. Maybe let's compare code 8 years ago to 4 years ago to today.

I did Android dev. 8 years ago it was a nightmare. To download an image, you had to build your own async thread, transfer the bits, resize the image, and then make sure there aren't any memory leaks and keep it small enough that it doesn't suck up all the RAM and crash the device.

4 years ago, it was easy. Just plug in the URL, and where you want it to go and it'll handle all the above + caching.

Today we have Android Jetpack. Now MVVM architecture is the default. Everything is cleanly handled - we make sure that there's no memory leak when someone makes a call within the half second it takes to download the image.

But for some reason, code is more tiresome to write. A single page used to be 2-3 files, now it's easily 12 files. Trying to debug a problem means sifting through those parts. The fact that we can code a full feature in 5 min means that very little thought is going into whether this feature is necessary, has side effects, or the responsibilities of the different parts. Budgets have gone up too, and now you can expect to deal with a freelancer from Qatar or Ukraine or whatever who builds a piece just fine, but nobody remembers how it works or whether this part was built correctly.

I'd say that the individual parts work fine but as a whole, they go together poorly.


OP's comment has nothing to do with "youngsters these days" but with how code is written nowadays vs. in the past. Unlike with humans, there are changes in how code is produced, what it targets, where it runs, the motivations behind writing it, etc.

E.g. to expand on what others have mentioned, having to target Windows on a desktop PC with regular monitor at around 800x600 or at most 1024x768 resolution and a mouse for English is a bit different than having to target any desktop OS, any tablet OS, any mobile OS, perhaps also provide a web-based version, with a UI that works under resolutions ranging from ~1366x768 up to ~5120x2880 on monitor sizes ranging from 5" to 55" (or above) and inputs handling that includes keyboard, mouse, touchscreens, TV remotes, etc while having support for unicode, multiple languages, emojis, etc.

Even with the exact same people (which certainly isn't the case) writing the code, the latter is going to have much higher likelihood for bugs and other issues to crop up.


> #1 was written in the 13th century, #2 in the 5th century BC and #3 in the 8th century BC.

Not true. These all appear to be fake quotes. Apparently #1 are not words of Peter the Hermit, #2 not words of Socrates/Plato, #3 not found in Hesiod.

#2 seems to be from 1907.[0] There are claims on the web it's a paraphrase of a speech of Justice in Aristophanes' comedy The Clouds[1], but I don't see much resemblance.

This[2] page contains all 3 quotes, #2 prefaced by "You may have seen this quote, often (apparently inaccurately) attributed to Socrates", then claims the other two quotes are real. But he links to a 2004 discussion page[3], which under quote #1 says "It should be noted that the attribution of this quote to Peter the Hermit is as shaky as the attribution of the other quote to Socrates."

[0] https://quoteinvestigator.com/2010/05/01/misbehave/

[1] http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:19...

[2] https://nickfalkner.com/2013/04/21/the-kids-are-alright-with...

[3] http://answers.google.com/answers/threadview?id=398104

https://en.wikiquote.org/wiki/Socrates#Misattributed


This is a very generic ramble to (potentially) derail the thread with. I'm more interested in people's answers to the actual question the OP asked.


Yes. Average code quality is objectively worse than it used to be, and there is an objective reason why.

The barrier to entry is much lower.

In 1995, getting a C++ program compiled, tested, shrink-wrapped and shipped (in waterfall fashion) required a lot more work. The "survivor" software you bought on a floppy disk or CD-ROM at the store was high enough quality to jump through all those hoops. Some of those hoops were that software developers needed to know more about computers. Some were that you couldn't fix an issue after it had shipped into the wild, pre-internet. Some of it was that a lot of those people had an intrinsic passion for the craft, even though they weren't paid as well. They also did the work even though they were considered geeks, unlike now, where a lot of "engineers" are indistinguishable from marketers and are considered popular.


>The barrier to entry is much lower

Very true, and I blame it entirely on the rise of "Web Programming". When it started off with HTML/CSS, a whole lot of non-programmers jumped into the industry, and thus a lot of frameworks/libraries added layers of abstraction to cater to them, to make it "as simple as possible" to write a "Web App". But then things only got more complex with the need to recreate the desktop app experience within a browser and the explosion of mobile devices. The non-programmers were now out of their depth, but since the software market had expanded exponentially they "upgraded" their skills and continued churning out "code", with predictable results.


I think a lot of it has to do with higher education as well. Most courses focus solely on theory rather than teaching the best industry paradigms for writing clean code.


I generally assume code is broken from the start now, but I don't think that this is because average code is getting worse. I think it's because there are more "solutions" to problems online (also, I'm more critical than I was a decade ago).

In the past, I had to write almost everything from scratch, and that meant I'd have worse code than what is open-sourced nowadays for the time period that I was implementing (and possibly longer if the problem is gnarly).

Nowadays, I can do a search on GitHub and find something that appears to solve my problem, but when I use it it'll turn out the author was solving a slightly different problem than mine and so his solution only partially solves my problem or doesn't handle certain edge-cases/bugs that will be a problem for me.

That doesn't mean that the code is worse. I am just using something that was created to solve another person's problem and expecting it to solve my own problems with no extra work.

Basically, you can still choose to implement everything from scratch if you want to. In these situations you should make sure to learn from the people that came before you, by reading other people's code first and spending time thinking about your design/implementation.

But if you want to use other people's code off-the-shelf to implement something faster, you can't just assume that because their solution solved their problem without running into edge-cases, that it'll do the same for yours. You have to be responsible for every line of code that you integrate with, and should assume that is not guaranteed to work for your problem/data.

---

What is an issue now, and wasn't before, are the problems that occur due to particular combinations of packages and incompatibilities. These problems are difficult because they can require impossible levels of co-ordination to fix upstream. I think in many situations we should rewrite things to be more monolithic than they are.


> you can't just assume that because their solution solved their problem without running into edge-cases

I agree in general, but there’s a class of general problems with good solutions. Such libraries tend to just work: database engines, XML/JSON parsers, multimedia codecs. If you pick the most popular and compatible one, like SQLite or libpng, you’re very likely to get your problem solved without too many edge cases.


I've been in this game since 1998 and I've seen much across the years. Back in the late 90s/early 00s, people started to get a little more sloppy due to fewer constraints on HW/SW (RAM, etc.). These days it seems people pay less attention to what makes a good programmer and more to the language or framework du jour. Very few people take the time to master not just a language but good habits. They flit from language to language and framework to framework. A good programmer with just a few years of experience should be able to pick up any language within reason. Having said this, going from JavaScript to C++ or Python might be a challenge if all you've ever done is front end work. Concepts like mutable/immutable, etc. can be challenging if you've never dealt with them. Constant learning, constant challenges on and off the clock, always be curious. Too many bootcamps now don't teach anything other than getting people out the door. Notice that most of these bootcamps are for front end work. There is no way you are going to see a bootcamp cranking out a systems programmer in a few months--there is simply too much to know about how the underlying systems work together (HW/SW). A friend years ago lamented the very same thing he went through as a sysadmin, with bootcamps cranking out "paper tiger" MCSEs who didn't know the first thing about running a server or setting up a domain controller or print server.


> Concepts like mutable/immutable

I'm curious, have you used React or Immutable.js? As the latter's name suggests, both heavily rely on concepts of immutability.

Just sounds like you're FE-bashing like people on the internet love to do.


Sorry I may have come across as bashing; this was not my intent. I have nothing against FE work, as I do both myself. I work in a small two-man shop and I have to do a little of everything, including sysadmin stuff, which I don't really like, but it keeps me up to date.

Editing to add, no, I don't work with React. I primarily live in Python, Bash, and PowerShell in a mixed Linux/Windows environment. Starting to do more with IoT for sensors, etc. and this is where things like Python shine.


It's interesting how you have such strong opinions about JS developers when you're not even familiar with the most popular tools that are available today... I always find it weird how people are like that.

Especially mentioning JS and then saying immutability is a problem... Immutability is a core concept in React, so you couldn't be more wrong honestly.


Preface: I've worked on code ranging between FORTRAN 77 to a rather nuts react, typescript, webpack,... combination.

From my experience, what I have observed first and foremost is that the problems software is solving are getting more complex as time progresses, which increases software complexity. And complexity is often proportional to the number of bugs.

Many popular FORTRAN packages out there solve a particular (or set of) well(ish)-documented mathematical or physical problems. Matrix manipulation, eigenvalues, weather simulation, etc. What I've observed is that these problems, although sometimes complex, require minimal dependencies. The primary dependency is some journal paper somewhere, which you don't have to install as a software dependency!

Contrast that to a couple of recent codebases I've worked on: easily >300MB of npm dependencies for a mix of react, typescript, webpack, sass (gyp!! >:(), jest, cypress, etc.

Having said all of that, for both of the above eras of software, I've encountered annoying bugs. It's just that nowadays I see that most bugs are package compatibility issues (looking at you, steaming pile of webpack 5), whereas with FORTRAN it has instead been a mathematical error.

My point is that software is solving very complex problems right now. Back then, a web application was 3 types of files, because web applications were simpler. But now entire businesses use them, all their processes can rely on a single web app. Important processes sometimes with regulatory involvement. This necessitated typescript (scalable js), testing (jest and cypress), better performance (webpack), etc....


I see TypeScript more as a bandaid patch which solves small problems but causes bigger future problems than it solves. TypeScript and its many dependencies add to the kinds of compatibility problems you allude to.

I'm a strong believer that the language's type system doesn't matter that much. Statically typed or dynamically typed. It's possible to write equally good quality code in both paradigms. The key to good code is logical separation of concerns (high cohesion) and simple interfaces which have simple parameters and return values (loose coupling). Also, having well defined concurrent event loops is important if your system requires a lot of async logic.
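A tiny sketch of what I mean by simple interfaces and loose coupling (written in TypeScript here just because it shows the shape compactly; the same structure works in a dynamic language, and the names are made up):

    // Loose coupling: the invoice code only knows this narrow interface,
    // not Postgres, an ORM, or whatever actually sits behind it.
    interface OrderStore {
      totalForCustomer(customerId: string): Promise<number>;
    }

    // High cohesion: one job, simple parameters, simple return value.
    async function formatInvoiceTotal(store: OrderStore, customerId: string): Promise<string> {
      const total = await store.totalForCustomer(customerId);
      return `Total due: $${total.toFixed(2)}`;
    }

    // Any implementation with the same shape plugs in, including a test stub.
    const fakeStore: OrderStore = {
      totalForCustomer: async () => 42,
    };

    formatInvoiceTotal(fakeStore, "c-123").then(console.log); // "Total due: $42.00"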

The obsession with TypeScript now is mostly the result of the fact that people have a tendency to blame tools (dynamically typed languages in this case) for their own incompetence.


I feel very productive in statically typed languages like Rust and Swift but hard agree on Typescript and I’ve said it here before. It feels like the worst of both worlds, extra code and dependencies for limited type safety. The amount of times it causes some issues with compatibility or a library has missing or incorrect typings has left a bad taste in my mouth. Granted I always work on small teams or alone though.


I agree with the library compatibility issues for typescript, I've felt that pain before, but it is getting better by the day with more adoption.

I've always held the belief that the major flaw of Typescript was that it is preceded by Javascript. This caused fragmentation. If the web was Typescript first, I believe it would have been quite the "nirvana". But then again, Rome wasn't built in a day...


I just joined a major tech company, after consulting for 8 years all around the industry.

My take is that code quality is concentrated in the FAANG-level companies. They have a strong moat in just really good software engineering that leads to stable, fast, reliable products.

I ask myself why this is?

I think there's something to:

- They hire the best technologists, have high standards when hiring... these folks put the effort in to make something they're proud of

- The leadership is tech literate, and understands how/where to build and pay off technical debt, what you need to build in-house, what to outsource/purchase, etc.

- Because of higher tech literacy, the leadership/devs tend not to heavily cargo-cult buzzwords like "AI" etc (even though this is what their marketing depts push) and focus on the unsexy work that matters (reliable, maintainable software)

HOWEVER

Outside of the FAANG-verse, you have a variety of different cases with different incentives for code quality:

- The startups that just need to do whatever it takes to deliver an MVP before the money runs out. Lots of coding heroics to do this!

- The enterprisey big companies without strong tech leadership; they hire anyone with a pulse that can do development. At these places there are often a couple of smart people holding things together, with lots of mediocre devs at best augmenting the smart devs and at worst moving things backwards. You also have leadership that doesn't understand how to make good tech leadership decisions, and often doesn't weigh the tradeoffs carefully.

These are gross generalizations, of course, but on average with this pattern I think high quality code concentrates in the upper end of the market.


What about the medium sized companies?

In my opinion, when a FAANG dinosaur publishes some source code, the quality doesn't seem much different from anything else.


I find this varies by ecosystem. Certain ecosystems have incredibly high quality libraries on average. However, ecosystems also vary in the breadth and quantity of available libraries, so that may play into it.

I find Rust and Perl to both have very high quality libraries available. The JVM/Java ecosystem has a huge number of libraries and some extremely high quality ones, but also a massive number of abandoned or low quality libraries left over from decades past.

The ecosystems I find troublesome are JS, Python and to a much lesser degree Ruby. Huge numbers of low quality libraries and dependency hell.

The differences seem to stem from the culture of said ecosystems. Java code is often flexible to a fault, Rust code is strictly correct and fast but with less emphasis on flexibility/ease of use. JS/Python hail from the "get shit done" culture and as such there are many half-baked libraries.

These are just generalisations. Even the ecosystems I don't enjoy working with have some very high quality libraries of course. JS has React, Python has Tensorflow, Requests, etc.

If you want to be able to rely on library code though I would go for Java/C#/Rust maybe Go if it doesn't annoy you for other reasons.


Regardless of ecosystem, I think it's important to choose packages carefully. Python may have some trash libraries but there are also some very stable and trustworthy ones. I've used Python long enough that I have a selection of libraries I know and use. Javascript / NPM land has been a lot more difficult for me. I've been writing Go a lot lately and it's a breath of fresh air in that I rarely need packages outside of the standard library, which means I simply don't need to think about dependencies very often. That might change if the type of projects I'm working on changes.


Maybe you've become a better coder and now the "average code" has dropped in your estimation. There are a multitude of potential explanations, and anecdotes from people will not reveal the answer to your question.

My small amount of experience has shown that people are astoundingly bad at intuiting generational change in crime, capability, and morality.


Maybe it's not the quality of code that has slipped but the quality of the sources you're relying on?

I tend to rely as much as possible on the standard library of the languages I'm using (especially Python and C++, which ship with a pretty good set of libraries already), and when I have to go beyond that, I try to vet libraries as carefully as possible.

If a library deals with problems I know are going to be tricky, I try to take a peek beforehand at how it approaches those problems. If the approaches look obviously simplistic/broken, I can save myself some time.

But when I want to deal with truly hellish code problems, I let my 8-year-old sweet-talk me into installing just one more mod into his Minecraft installation...


Maybe you're just getting better at coding, so you make fewer mistakes, so the mistakes you find are more often somebody else's? Kind of a regression to the mean.


Not sure what you are doing, but I don't think things got worse. They just changed.

Compare the situation on the web.

The days of IE6, polyfills and all that garbage are mostly gone. Browsers are an order of magnitude better than ever.

On the other hand you got the frontend js situation which is broken beyond belief. I dare you to pick any js framework and try to update it a version...

If you skip that and use the old and tested stuff, things are just way better than 10 years ago. Just compare Rails or Django to JSP...

The god-awful PHP std lib is still the same, but they bolted some sort of object orientation and static typing onto PHP. So it's as bad as ever and the libraries reflect that.

But in general the whole library situation improved massively. Nowadays I can file a bug report on GitHub, or just fork it, get it fixed and make a merge request.

Back in the day you had to join someone's IRC channel, talk to someone else who had the stuff in their CVS or whatever (hopefully), and then you wasted your day just fiddling around waiting for some random person to build a binary with some obscure version of a C compiler.

So there is more crap but the good old stuff matured and got way better.

There are also better ways to discover problems; for example, it has become quite the norm to use version control, CI and automated tests.


It seems like you're comparing the current situation to more like ~20 years ago than ~10 years ago.

In 2011, both Rails and Django were well established and used heavily.

And in 2011, not too many people cared about IE6 either (some did, but even the big sites didn't support it anymore).

Even GitHub was used quite a lot in 2011, and was well known.

Still, I think you're right, it's just a different time scope...


This is why I laugh about all those articles predicting the rise of "no code" and that developer roles would be taken over by click-together apps or some AI. With devices, use cases, privacy, and userbases expanding, the industry can't keep up even with hordes of actual professionals working on bespoke solutions. Even as the engineering market really exploded in the 2010s.


The same could be said of the 1800s, when mills were popping up all over the place. Nothing was standardized and everything was designed and built by craftsmen.


> There used to be a time, in the dark dark ages of history, 10 years ago or so, when I would encounter issues during the course of my work, and I could fairly confidently assume I was doing something wrong, or I just hadn’t read the manual correctly.

Hehe, I remember those days :D Yes, code has gotten A LOT worse, and right now, whenever I pick a new library, I expect it to have bugs.

A lot of people will start coding and bring in a TON of open-source libs, each with their dependencies. And then when something goes wrong, spend an insane amount of time just figuring out who's to blame. Or, just fix the symptom and continue.

Clearly, the above won't work in the long run. But who cares about that? At the current pace of business, it'll be someone else's problem in a few months. They'll get promoted or switch jobs :D

I'm on the opposite side of the spectrum: I choose VERY FEW libraries, go for mature ones, and do quite a bit of testing. For pretty much each library, I simply add an extra layer on top of it, so I don't need to deal directly with it, just in case I may want to later switch to another implementation, or perhaps do some workaround until the issue gets fixed.
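For what it's worth, that "extra layer" is nothing fancy. A sketch of the idea in TypeScript, with an in-file stand-in playing the role of the third-party library (in real code it would be an import; the vendor API shown here is hypothetical):

    // The only interface the rest of the codebase is allowed to see.
    export interface Cache {
      get(key: string): Promise<string | null>;
      set(key: string, value: string, ttlSeconds: number): Promise<void>;
    }

    // Stand-in for the vendor's client; its API is made up for illustration.
    class VendorCacheClient {
      private store = new Map<string, string>();
      async read(key: string): Promise<string | undefined> {
        return this.store.get(key);
      }
      async write(key: string, value: string, options: { ttl: number }): Promise<void> {
        this.store.set(key, value);
      }
    }

    // The one adapter that touches the vendor API directly.
    export class VendorCache implements Cache {
      private client = new VendorCacheClient();

      async get(key: string): Promise<string | null> {
        // Quirk-smoothing and bug workarounds live here, in one place.
        return (await this.client.read(key)) ?? null;
      }

      async set(key: string, value: string, ttlSeconds: number): Promise<void> {
        await this.client.write(key, value, { ttl: ttlSeconds });
      }
    }

If the vendor library turns out to be buggy, only the adapter changes; everything else keeps talking to Cache, and a home-grown replacement just has to satisfy the same interface.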

And this brings me to Microsoft: the code since Windows 8 (2012) has been beyond horrible, and their direction has been, let's say, "flexible" (read: as the wind blows). I've discovered so many CRITICAL bugs, I've had to struggle with depression for months.

> I want to rely on other code, but more and more I find that it’s safer to just assume it’ll be broken from the start.

Yes, that is definitely a safe bet. The more you rely on 3rd party libs, the more you increase the risk of (critical) bugs. The more libs you use, the higher the risk. It's a really dim picture -- but something we unfortunately need to deal with. It's the new reality, and we just need to adapt :D


Yes, but this is always the case when more people are doing something. The less gated something is, the worse the quality will, in general, get. When writing was the privilege of the few, written words were carefully curated and edited and survived for ages. When everybody could write, there was a lot of trash, along with a lot of great stuff. The average will usually decline in such cases, because most people will not have the drive and resources to become excellent at it.

In the same way, when we are pushing coding at kids barely above kindergarten, there is going to be lots of crap code, but also much more excellent code. In general, though, the amount of crap will overwhelm the amount of excellence, so the average will trend down. If you applied some kind of cutoff at the base, removing the absolute worst, the average might trend up. This is the case with anything that gets more democratized.


While it is true that a dilution of the field has some impact on quality, I think there's other reasons too.

- We routinely depend on higher and higher level abstractions. These abstractions almost never cover 100% of the use cases you could imagine, but that doesn't stop them from getting used inappropriately for a while.

- Corporate software is often subject to the technical equivalent of a pump and dump. The kind of situation where a developer wants to open source something but the company isn't really passionate about having them finish all of the features, only the ones that are pertinent to them.

- Companies are routinely understaffed, possibly by design. When I first started programming I heard all about programmers working in teams to accomplish things. When you don't have the option to pair on tough problems that means tough problems get a singular perspective.


This talk deals with the issue you are describing as well as the underlying reason for it: "Preventing the Collapse of Civilization" by Jonathan Blow https://youtu.be/pW-SOdj4Kkk

It's very informative and engaging. Highly recommended.


Talking about averages is really tricky when the total amount of code/libraries is growing rapidly. In general though I think all of the following are true:

1. The number of high-quality libraries available has increased substantially.

2. The variance has also increased substantially.

3. The AVERAGE quality of published libraries has gone down.

Basically, the number of open source libraries has exploded over the past decade or so but the majority of new libraries are of low quality because they are slapped together or only half-baked and then abandoned. But it is easier than ever to find high-quality open source libraries for any use case you can think of. In other words, things have never been better but you cannot assume that a library is of a reasonable quality just because it is published as an open source library.


Maybe the quality is still the same, and your skills improved? Sounds like 10 years ago you blamed yourself when something went wrong, and now you can see when it's the library that's broken. 10 years ago you may have not been able to spot defects, but now you can.


Maybe some of the problems today are because of the ability to fix code after release much more easily. If you wanted to ship something on a set of floppies or a CD you had to be pretty confident it worked unless you wanted to deal with a costly patching process.

Now you can push a hot fix or day 1 patch almost immediately, and in the case of some software like games, actually prevent people from using the old version, leading to software being released that has known issues. Obviously with web based applications the fix deployment is completely transparent to the users, making the cost of deploying a fix almost zero.


Hum... Probably 10 years ago you were using a highly curated and very small set of dependencies, written by some of the top experts on your ecosystem, while nowadays you are using that same set plus a huge amount of code you download at random from the internet.

The many different communities of programmers made that transition at different times, but very few dependencies was basically the only model available in the 80s, while tons of dependencies is the model basically everybody uses nowadays (the amount of stuff embedded software imports nowadays would surprise a desktop GUI developer from the 90s).


I believe it's due to how huge the demand has become. Entry into software engineering has become really easy. Thus, people without proper training are contributing to projects and the result is a mess.


In my experience yes. Average code is much, much worse than it used to be. Regardless of the origin. Then again, there's also much more of it. It could be argued to be a mixed blessing.


My experience is similar. I seem to stumble upon two different types of libraries:

- One that is so specifically designed to fix the author's original problem that it doesn't cover a lot of other use cases.

- One that is actually designed to be a general purpose library and to be used everywhere; but because of this there is a lot of feature creep and very complicated interfaces.


> Have you experienced something similar?

In 99% of cases when I noticed that, they were third-party libraries.

The quality of first-party ones, like Microsoft, Apple or Linux system APIs, is IMO mostly good.

Some third-party libraries are good as well. The problem is that some others are not. For this reason I review at least the public API, and ideally the source code, before introducing new third-party dependencies.


Where are you getting these libraries compared to 10 years ago? What languages do you use?


It is more than likely your perception has changed with 10 more years of experience.


I think we are seeing code stick around longer than ever before in the personal computer/mainstream era of computing, which has led to more ways to hit buggy, unmaintained code than ever before. Hear me out:

If you wrote code for desktop computers in the 1980s, the platform you wrote code for probably didn't exist for longer than 5-10 years and the code had to target the CPU/arch directly because anything else was too slow. Your C64 library wasn't going to be reused on an Amiga or IBM. So any code older than ~5 years old just sort of died off. No support needed.

If you wrote code in the late 80s/early 90s, the hardware platform was starting to stabilize around the IBM PC, but the GUI layer was in rapid flux and languages were relatively immature - Windows 3 or OS/2? Protected mode? Windows 95? 16bit vs 32bit? Would C++ win? Even basic stuff like playing sounds (Adlib? Soundblaster?) or what graphics card to target (EGA? VGA? etc.) was in constant flux. Software lived a short life.

By the mid/late 1990s, the OS was stabilizing (mostly around Windows 16-bit) and basic hardware for video and sound had shaken out, but now we have the internet on the scene and a whole new round of immature and one-off tools. Remember WinSock? Netscape Navigator 4? Java applets?

By the early 2000s, 32bit Windows and OSX were on the scene and desktop software was mature in a way we rarely see anymore, but we were in the last days of single-user desktop software. Momentum was moving towards the web and software designed with the internet in mind. The mature desktop software of the early 2000s would mostly be abandoned as people moved to the internet and expected different things from computers.

By the late 2000s, we have the full-on mobile device wars with iOS, Blackberry, Android, Windows Phone, Palm, etc, all fighting over new ways of presenting software to users and all using different software approaches to make that happen. Iteration was incredibly rapid. APIs and languages changed quickly. The UI bar was raised significantly.

But around 2011-ish (10 years ago), things really started to shake out. The internet was relatively mature. iOS and Android became a duopoly. Linux became the standard for deploying web backends. Desktop software is largely dead at this point except in niche/creative industries. Rapid development moved toward new areas (AI/ML, etc) but the foundation was pretty stable.

The difference between now and 10 years ago is OSX Lion vs. Big Sur and Windows 7 vs. Windows 10. Hardly a fundamental change. People used Chrome to browse the web then and they still do, now. People mostly used iOS and Android then and they still do.

In 2011, web devs were mainly targeting Linux to run Python, Ruby, PHP, and Javascript. HTML 5 was finally a real thing. MySQL and Postgres were hot. Memcached and redis were cool. Some people used Mongo. The Windows crowd was using C# on .Net 3.5. All of those things have evolved over the last 10 years, but aside from Javascript, they are just iterations and are basically what we still use today. (Javascript of course has gone crazy, but that's a different story.). Maybe now you target a container instead of a bare machine, but the ideas are basically the same.

So my premise is that mass-market computers have broken compatibility less over the past 10 years than ever before. Along the way, more open source code has been released than ever before by a huge factor.

We have a situation now where new generations of developers are coming up and writing their new code for the same basic platforms as the previous generation. The old code still more or less works fine, but may have lapsed into unsupported territory. The new code more or less does the same thing, maybe in a slightly different way. There's more code than ever to maintain and not enough time to maintain it.

So now the number of libraries you can choose between to solve any problem has multiplied to the point where the surface area needing bug fixes and maintenance has become untenable. The hottest Django library that everyone used for file attachment uploads in 2011 has been replaced by 10 newer libraries that half-work, and then those were replaced by 400 javascript versions of the same idea. And all that half-supported code is still sitting on Github, just waiting for you to include it in your project and cause yourself headaches. There's often no good way to know which library is most likely to work reliably, and sometimes none of them do.

Another symptom of this phenomenon is that when you google about a bug or error, you often get 10 year old Stack Overflow answers that sort of apply but are also totally out of date and lead you down the wrong path. Attempts to update the question get deleted as "already answered". So now we not only have bugs to fight, but we have the endless perpetuation of wrong answers to long-since fixed bugs getting copy/pasted into new code forever.

In the past, the constant platform churn let us avoid this problem because the old software would just break and be obviously useless. But now that the basic platform is more stable and software is sticking around longer, we need to figure out better ways to deal with gradual change.


Have you vetted the vendors of the library? Is it something that a reputable company offers that you /paid/ for? Or is it some side-project of some rando?


It does not seem to matter. Paid things, if anything, seem to be more likely to be trash than the free ones, but the free things are certainly not blameless.


These metrics don't matter. Almost every library you care to pull for a popular (read: widely used and written) language will be trash. Yes, even if you pay for it. Even if it was a "reputable" company making it.


I never thought about it that way for the majority of .NET libs, even the most popular third-party ones.


It's a mix.

I think that average or high quality code has increased in quality over time, because CI/CD is a lot more widespread. Increased computing power has improved tools such as IDEs, compilers, linters, testing tools, etc. So the ceiling is generally higher, I'd say.

However there are a ton more newbies in the field, because there's a need for more people, so there is also a lot more garbage and that garbage is even worse.

Plus, there are a ton of programming-adjacent folks which can now publish their stuff. Since their focus and background are not related to programming, their libraries and apps are generally of lower quality.

TL;DR: There is more of everything, and since 80% of everything is crap, there is more crap. It's also harder to find the gems, since we have to wade through crap. But good stuff is still out there.



