Perhaps nothing about the code itself, but rather the surrounding environment?
It was generally very well written, and where it differs from the standard there's usually an interesting reason why. It had extensive unit tests that ran automatically across a very wide range of supported platforms and compilers, including all major desktop, mobile, and console platforms. It is generally much more legible than Microsoft's STL implementation (which was one of its design goals) while also often being more efficient. It's the STL, so it's mostly fundamental algorithms, data structures, and widely useful utilities of general interest rather than very domain-specific business logic.
How often do you look at some code and think "an idiot wrote this", and then realise you're looking at your own code?
Maybe all code is technical debt? That is, maybe every piece of code you inherit is bad because you now have more to learn and understand, no matter how nicely structured/documented/tested it is.
Sorry OP, I got nothing positive!
I think what's really at stake is that code and data are a sort of inventory: they are not what you are selling, but they get turned into what you are selling. Inventory always has a carrying cost, and generally people underestimate that because they are only looking at the direct cost of storage, not how the presence of the inventory itself gets in the way, makes getting to other things harder, makes bottlenecks harder to see.
And that's where you see that debt is the wrong metaphor, because debt has the particular property that you could pay all of it off, and that would be a good thing. By contrast, inventory is a good thing in the right place: it means that if one thing stops working, the system can still continue for a while. Operating with zero inventory everywhere is possible, but it's not done because it would drive you out of business. Similarly, deleting all of your code is not desirable in the way that getting rid of all of your debts is.
Designing an API to have a separate messaging layer from its business layer from its data management layer from its data fetching layer is technical debt; the fact that any change in the system now needs to be distributed across 10 different places in the code base is your interest payment. I would argue that you would like to derive all of these layers from some shared source of truth to remove those interest payments, and once you do, I no longer think it's a bad thing for you to have a homebrew HTTP framework that has those separations in its internal functions.
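To make the "shared source of truth" idea concrete, here's a minimal toy sketch (all names here are hypothetical, not from any real framework): one schema object drives both the validation layer and the data layer, so adding a field is a one-place change instead of a ten-place change.

```javascript
// Hypothetical illustration: a single schema object is the source of
// truth; the validation and storage layers are derived from it.
const userSchema = {
  name: "user",
  fields: {
    id:    { type: "number", required: true },
    email: { type: "string", required: true },
  },
};

// Validation layer, derived from the schema.
function validate(schema, record) {
  return Object.entries(schema.fields).every(([key, spec]) =>
    spec.required ? typeof record[key] === spec.type : true
  );
}

// Data layer: the column list comes from the same schema, so a new
// field never needs to be repeated here.
function insertSql(schema, record) {
  const cols = Object.keys(schema.fields);
  const vals = cols.map((c) => JSON.stringify(record[c]));
  return `INSERT INTO ${schema.name} (${cols.join(", ")}) VALUES (${vals.join(", ")})`;
}

console.log(validate(userSchema, { id: 1, email: "a@b.c" })); // true
console.log(insertSql(userSchema, { id: 1, email: "a@b.c" }));
```

The HTTP/messaging layer could be derived the same way; the point is only that the layers stay separate while the *definition* lives in one place.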
Actually when I have to modify my old codebase I am usually pleasantly surprised and it is much more tidy than I am expecting it to be. It's an amazing feeling when you read your own code and go "wow, I wrote that, what a nice way of doing that".
I am not really sure what that says about me... maybe my expectations are too pessimistic on average...
The complexity of software should not be dictated by anybody's skill at creating and navigating complex designs, but by the complexity of the problem. Second system syndrome comes from youngsters not heeding this advice. Somehow it takes an inordinate amount of experience to know what you don't need.
Actually never. The reverse actually happened: I looked at some code and thought "a wizard must have written this", and then realised I was looking at my own code. :-)
No, seriously: I could immediately reconstruct the intentions I had for code that I wrote 15 years ago and never touched afterwards. For this reconstruction process, the emotions that went into the code lines provide a strong mnemonic. This is also the reason why I actually need very few comments in my own code if it is just for me (of course, if other authors want to contribute, comments are very important - but in that case, I prefer to ask them directly what kind of guidance they actually need).
What is much harder is to get into a foreign codebase. Even if it is of high quality (it often isn't), it takes a lot of time to get deeply into the thought process on which the original authors based their code structure.
Whole new meaning to "best code is the one that doesn't exist".
The next morning, I woke up late in a panic. I was gonna email some excuse to the prof in a last ditch attempt to salvage a decent grade in the class. I ran to my computer desk in the corner and turned on my ginormous CRT monitor to find... the confirmation for my final project in my email inbox! I had somehow scored 100% on the project, despite having no memory of what must have been several hours of intense programming and debugging.
Perusing the code later that morning, I was stunned at how clever, clear, and concise it was. It was, at that time, the best code I had ever written. Were it not for a few telltale grammatical peculiarities, I wouldn't have believed I was its author.
That was the first and last time I ever got blackout drunk, but it made me a firm believer that the Ballmer curve exists.
If this story actually happened, I'd hazard a guess that someone else wrote that code, not you.
Every single day. I am my own worst enemy.
I am still removing jQuery like dirty splinters.
But I'm also using Vue.js... (ES6 modules are awesome.) BUT, I am designing everything I build with Vue to be easily replaced with web components down the road. No extra plugins, no complications that are super specific to Vue if I can help it. (Fool me twice... Prototype.js, sigh.)
I am expecting to get rid of Vue.js in 5 years or less, depending on how long it takes for web components to catch up (or for another framework to replace Vue).
That being said, some of the vanilla js code I wrote years ago still works great, no need to replace it.
So, maybe technical debt is a trade off we can _manage_ with our systems consciously instead of "by accident".
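For what it's worth, the "easily replaced with web components" plan can be as simple as keeping the logic framework-free. A hypothetical sketch (the element and function names are made up): the pure function is testable anywhere, and the custom-element wrapper depends only on the platform, so any Vue wrapper around it can be thrown away later without touching the component.

```javascript
// Pure render logic: no framework dependency, testable anywhere.
function greetingText(name) {
  return `Hello, ${name || "world"}!`;
}

// Browser-only part: register a plain custom element that any
// framework (or none) can use as <greeting-badge name="...">.
if (typeof HTMLElement !== "undefined" && typeof customElements !== "undefined") {
  class GreetingBadge extends HTMLElement {
    static get observedAttributes() { return ["name"]; }
    connectedCallback() { this.render(); }
    attributeChangedCallback() { this.render(); }
    render() { this.textContent = greetingText(this.getAttribute("name")); }
  }
  customElements.define("greeting-badge", GreetingBadge);
}
```

When the framework goes away, the element stays; only the thin wrapper gets deleted.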
It is intentionally a leaky abstraction so that there is always a way to customize it if it doesn't work the way you want.
There is only enough design to make it reusable, no claim to a "grammar" or other highfalutin abstractions.
There have been dozens of contributors and many parts are inconsistent.
Yet it still gets a ton of use 7 years on because it does 80-90% of what you want.
I've learned so much from this. Worse is better. Don't box yourself in with designs you don't understand.
Solve one big problem and after that be humble and let users work around anything you didn't think of.
And I haven't kept up with merging PRs because it is a lot of work to test and integrate code. (Help welcome!)
With that out of the way: it's dc.js
At the heart of it, there was a short list of instances to run with their purposes. Most of everything was automated around that. You could add a line to order any resources in the world and have it running in the next 5 minutes. Instance provisioned with standard OS setup and patches, DNS and aliases up, permissions for developers and services deployed.
It was extremely efficient and well organized. I'm not sure who wrote it, but I'm pretty sure he's the only guy in the world who figured out how to use AWS, accidentally.
While I worked there, I updated it to support provisioning in any region, EBS backups, automated firewall groups and a few other things. Everything was tagged consistently with purpose/team/environment for identification and billing.
It was neat. I doubt I will ever again find a company that can set up hardware or manage resources half as decently.
To conclude: a coworker told me that new guys were hired after I left, and they undid most of it over the next 2 years.
Another great plugin is Metorik's helper plugin! Bryce is amazing and he's so responsive and helpful. If you're looking for a tool to extend Woocommerce functionality, definitely check it out.
The team who was supposed to own it hated it, so I get to work on it :)
I just added a couple of linters to a new project and am looking forward to having the computer flag any obvious errors before allowing a git commit.
1) lack of extensibility. This comes from poorly scoped projects. Neither the previous dev nor the product manager understood the value in what they were making.
2) built in headaches. This is related to (1) but it's kind of the opposite problem. I see deployment tools that don't make the options object available to read in all contexts, or unhelpful automations, like silent failures. This is often from someone inexperienced trying to be clever.
3) terrible engineering practices - storing prebuilt native binaries in git, deploying a custom (read: unsupported) version of a tool like gpg or perl. This can reflect a terrible engineering culture overall, but I often find these practices can be traced back to someone with a title like Director of Research.
4) Lack of scalability - this is the least worrisome thing I run into. It takes experience in big problems to know ahead of time with any accuracy where the bottlenecks will show up. If this were the only problem I ever ran into, I'd be a happy camper.
Needless to say I was pleasantly surprised by that.
On the other hand I also inherited an angularjs frontend written by an ape.
If you don't know wmorgan, he's the one who created trollop and the leveldb-ruby gem. Any Ruby practitioner should know what I'm talking about.
I see too much symbol soup and point-free style makes it tougher to understand the code.
They gave us a stock rails CMS/static site generator that did the frontend part.
Almost a full rewrite, as we were a Python shop and there was a mess of jQuery pub/sub involved, so you couldn't tell what was happening in what order, or whether there would be a bug because something wasn't subscribed to the event bus in time. Almost goto-level.
We met with the external company a few times and they made it clear to us that they weren't getting paid to fix tech debt or make the handover process smooth for us; they were just getting paid to make it look like the design.
This is actually a better state than the other projects I can think of, more because it's a small project and the levels of technical debt are thus lower because there's not as much code/complexity, not because the code that is there was good.
All the tests were failing (very flaky by design, using browser driving to check for very specific bits of text on the page that had long since been replaced), so we also deleted all their tests and wrote our own, as well as doing the almost-full rewrite.
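That subscribed-in-time hazard is easy to reproduce with a toy bus (this is an illustration, not the actual code): if a module subscribes after the event has already been published, its handler silently never runs.

```javascript
// A toy pub/sub bus -- not the real code, just the failure mode.
const bus = {
  handlers: {},
  subscribe(event, fn) {
    (this.handlers[event] = this.handlers[event] || []).push(fn);
  },
  publish(event, data) {
    // No subscribers yet? Nothing happens, and nothing complains.
    (this.handlers[event] || []).forEach((fn) => fn(data));
  },
};

let initialized = false;

// Startup code fires the event first...
bus.publish("app:ready", {});

// ...and this module subscribes a tick too late, so its handler
// silently never runs. No error, no log -- just a missing feature.
bus.subscribe("app:ready", () => { initialized = true; });

console.log(initialized); // false: the event fired before we subscribed
```

Whether it breaks depends entirely on load order, which is what made the bugs so hard to reason about.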
The thesis explained all the high level concepts.
And every line of code was commented.
It made it quite easy to understand why and how he was doing things.
There are over 100 repositories with code in different languages, and I can honestly say that each time I've ever needed to go in and work on something, usually from a position of little or no knowledge about the code or internals other than as a black box, I just find really well-organized code that seems like it was thoughtfully put together by a bunch of people that assumed they were always going to be the ones who would be stuck dealing with the consequences of whatever decisions they made every day.
https://github.com/teamhephy/workflow (from https://github.com/deis/workflow)
If you're looking for some training wheels for your beginner Kubernetes experience, you could do a lot worse! The product itself is basically a "Bring-your-own-Infrastructure" open source Heroku work-alike.
https://web.teamhephy.com / https://blog.teamhephy.info / https://docs.teamhephy.info
But then that's Delphi; a pure pleasure to work with. I only wish I had more use cases for it. I would put it back in the lineup immediately if I did.
The people who wrote it were competent at writing maintainable software. It was well thought out in terms of design and discoverability and tooling, it was well tested, and the authors cared a lot about not obfuscating purpose.
It had good dev ramp-up docs and a pretty clean commit history.
I think probably the largest differentiating factor for this particular codebase was that it wasn't developed under duress of time-pressure and the previous authors were past the point in their programming careers where they learned how to not overestimate their own ability or underestimate unknowns in their domain. The company was relatively small, technically competent at the top, and culturally less concerned with inflating profits than making enough money to live comfortably and do their thing.
The documentation had the proper GNU framework, of course without man pages, like most such GNU stuff, but that is also easy to add. The other testsuite parts were also easy to add; DejaGnu still rocks, gtest would have been a horror show.
GNU coding standards are a godsend.
The worst code bases I inherited, on the other hand, are a lot more memorable. The absolute worst was a ~500k-ish line WordPress/BuddyPress mess of undocumented spaghetti, complete with a slew of modifications to the WP and BP core files. It took me weeks to move the latter into separate plugins in order to upgrade the mess.
id Software writes beautiful code, or did at the time.
To be fair, I usually seem to find myself given rails 2.x and 3.x codebases which have sometimes been in production for close to a decade. That is a lot of time to build serious technical debt.
I highly recommend it, it's only a couple thousand lines so it's worth the read. Never before have I seen such readable and clean C code. I read it and then implemented my own terminal emulator from scratch, which we now use on an embedded platform at work to debug problems that cannot be debugged using a PC (for example because Ethernet and serial connections to the device are broken).