I like where this is going. Most projects I've been on recently have a "hurry up and get something out" mentality. Which... I'm not completely opposed to. It's important to get some early design wins out. But the flip side of this is as you go on, you learn more about the problem domain and whether specific solutions are appropriate. You should be okay with throwing out bad code or code that internalizes old requirements that didn't survive into the present.
I once had a conversation with a "Business Guy" and I brought up the concept of "technical debt." Their response was "Oh! Great! We have lots of cash right now and can add more programmers later to pay down the debt, but only if we release something very, very fast!" -- While the concept of technical debt makes sense to me, a software person, the business guy's response was "debt is something that can help you if used strategically." We changed over to using the phrase "Delivery Molasses" because the more of it you got, the harder it was to move through it.
And rather than silently stew about how "business people" don't understand software, putting it in terms they understand so you can have a grown-up conversation is worthwhile. So... happy to see someone's doing that here.
> I once had a conversation with a "Business Guy" and I brought up the concept of "technical debt." Their response was "Oh! Great! We have lots of cash right now and can add more programmers later to pay down the debt, but only if we release something very, very fast!" -- While the concept of technical debt makes sense to me, a software person, the business guy's response was "debt is something that can help you if used strategically." We changed over to using the phrase "Delivery Molasses" because the more of it you got, the harder it was to move through it.
To continue the analogy, perhaps the disagreement was what the interest rate is on technical debt? Or that viewing it as "debt" makes you think throwing money at the problem can fix it quickly, when it has to be paid down through labor instead?
Technical debt is debt in something non-fungible. You don't just need to pay it back, you need to pay it back with exactly the right thing. You can't just get a lump of commodity code and drop it in. That makes it much more expensive to pay back.
It's a bit worse than that. What we call "technical debt" is roughly the equivalent of building a skyscraper with inferior steel and saying we're going to go back and fix it later.
The skyscraper analogy is also flawed because it doesn't capture the fact that the inferior steel somehow magically slows down work on the rest of the building until it is fixed. The molasses metaphor is good for that one, but I don't see how I can mix the two.
Totally. Guesstimating interest rates for various types of tech debt is a fun exercise:
- Using a single-letter variable: 10% interest on any code which uses that variable. That is, any work involving that variable takes 10% longer than it otherwise would.
- Tests that run long enough for you to get distracted: a 1% interest rate on the production code per second of test runtime. If this calculation could be worked out to anywhere near ballpark accuracy, it would be a huge benefit when choosing a language/framework for production/test code.
- Flaky tests: 100% interest on the test code + 10% interest on the code under test, mostly due to having to re-run the entire test suite in CI on an irregular basis.
- Not having tests for a piece of code: 1,000% interest on that code and anything that depends on it. Basically, overall and in the long run I'd expect any work involving untested code to take ~10x the time it would take for well-tested code, due to having to figure out what it's meant to do (explained and proven by good tests), how it does it (generally pulled apart into easily understood parts if it's got a good test suite), and why the changed code doesn't work.
This is obviously not scientific, but it would be cool to have the time to work out what a framework for such calculations would look like, and to apply it based on experienced people's estimates.
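As a sketch of what such a framework might look like: treat each kind of debt a task touches as an independent multiplier on the time the task takes. The rates below are the guesstimates from the list above, and the function name is my own invention, not an established model.

```python
# Toy model of "interest" on technical debt: each kind of debt in the code a
# task touches multiplies the time the task takes. Rates are the guesses
# from the list above, expressed as fractions.
DEBT_RATES = {
    "single_letter_variable": 0.10,   # +10% on work touching that variable
    "flaky_tests": 1.00,              # +100% on the test code
    "no_tests": 10.00,                # ~10x on untested code
}

def estimated_hours(base_hours, debts):
    """Compound each applicable debt as an independent multiplier."""
    multiplier = 1.0
    for debt in debts:
        multiplier *= 1.0 + DEBT_RATES[debt]
    return base_hours * multiplier

# A 2-hour change in untested code that also uses a single-letter variable:
print(round(estimated_hours(2, ["no_tests", "single_letter_variable"]), 1))
```

Whether the multipliers should compound or merely add is exactly the kind of question an empirical framework would have to settle.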
... unless some code/feature gets deprecated and has to be removed anyways.
But yeah, I think there should be some concrete examples with concrete times to help people to understand better what the impact is. And it should be noted that technical debt is not debt to a single entity. It's like debt to multiple entities. Sometimes technical debt for a certain part MUST be paid back right now, or a critical bug cannot be fixed and a client is lost. You can't buy more time by increasing the interest and paying more later. Not possible.
> - Tests run for long enough to get distracted: 1% interest rate on the production code per second of test runtime. If this calculation could be worked out to anywhere near ballpark accuracy it would be a huge benefit to knowing which language/framework to choose for production/test code.
I am not sure if I agree with this (or maybe I just don't understand what you mean).
I am working on a large, horribly legacy system (running for 25 years now).
When I add/alter some functionality, the time for materially executing a test is < 1sec.
The time needed to verify that the result is exactly what I expected can take 15-20 mins. Or balloon into hours if I find something that I do not expect and I need to confirm that the result is the correct one (and I just missed/forgot about the interaction of decades old business rules).
So if I had everything rewritten in Assembler, to take your idea to the extreme, the test would take maybe 1/10th or 1/100th of the time to run.
And the verification of the results would probably take days of human work to complete.
Absolutely. I think the "business guy" was mostly familiar with non-usurious interest rates. They hadn't heard the term "technical debt" enough to understand that the interest rate increases the more you borrow. Every bit of cruft you add to the code adds to the interest rate as well as the principal.
I think it often has to be paid back in a worse way than money or labor: it often can't be paid back without business disruption. That means slowed future expansion and growth, possible user loss due to bugs or to not addressing their needs and issues promptly, and it can come with downtime, feature freezes, etc.
Technical debt is a metaphor that's easy for everybody to understand, but it could be the wrong one, especially when talking to business people. Try unhedged call options instead, as explained in https://www.castsoftware.com/blog/bad-code-isnt-technical-de...
That introduces the concept of risk and of very bad consequences, which should be easy for them to understand. On the other hand, does the average person know what a call option is? And an unhedged one?
But it depends on your time horizon, and on how much debt you're taking on. For most places I've worked, the analogy isn't a cash-flush business; it's a household living paycheck to paycheck. The feature requests are insatiable, and there's never enough headcount for them all to get finished as fast as everyone would like.
If all you can afford is Top Ramen, you probably shouldn't be taking out loans to eat at Maestro's.
Far too many prototypes make their way into production. I know because I’m maintaining a 10yo legacy product that started as a prototype that should never have made it into production but that is an essential tool for the project and basically un-replaceable. It’s an absolute nightmare to maintain as a result.
I wrote an article recently trying to put across my thoughts that debt isn't actually a very good analogy for maintainable code, basically because it doesn't really fit the opportunity-cost calculation that real debt involves: https://dev.to/aha/technical-debt-isnt-technically-debt-4oke
As someone who also deals with security, what I like is that memory-corruption bug fixes and exploits are being mapped onto the money spent fixing them.
IMO, low code quality and technical debt is one consequence of the Agile movement. Mantras like “Working software over documentation”, the constant push to shove demos in front of customers, and the battle for metrics, the idea that everyone needs to be “full-stack”, etc. have real consequences.
Building software is analogous to building a house. Sure you can make it look good and deliver that to your customer, but poor build quality will eventually be exposed during your first hurricane or earthquake.
I'd say it's more a consequence of the need to put boundaries on the costs associated with developing the software. I've been in an internal development group where costs were not defined. We developed what the company needed internally, regardless of whether it was "worth it." If the budgeting people knew that the silly app created to ease their yearly budgeting process likely cost 30-40k of developer time, it wouldn't have been built since it likely saves 1-2k per year of their time. But quality was good because we didn't need to build it in a week at a cost of 2k.
I'm not saying that's a good thing. But my point is that quantifying those costs for external customers means that corners are cut and speed is essential. _That's_ why the code is low quality.
I don't believe that "true" Agile is the reason for it. Referring back to the Agile Manifesto has allowed us to focus on what's important. Other Agile (Scrum, etc.) in the workplace is no different from waterfall or any other method, since it is ultimately applied by people focused on the money and then fails for the same economic reasons.
I would agree with that in general, though it's not the only reason I would cite. Quantifying can be good. In an internal non-quantified project you still have constraints on the number of people available to build stuff, so there are still timelines to meet, because you can't build for the other department's needs if your people are still building that first piece of software. I'd go a bit further and analyze the parent comment's "reasons" in that light:
> Mantras like “Working software over documentation”
Fluffy documentation that has nothing to do with the software as actually built (aka "out of date," or "a waterfall spec that took 9 months to create and that nobody updated when it turned out we had to build it differently in the remaining 3 months of project time") is bad. And that's what this "mantra" is about. Whether it's an internal project or an external, quantified one, you had 12 months to build it.
> the constant push to shove demos in front of customers
It's a good idea to demo to customers if you can, because your customers can tell you whether you're on the right track while you're still building, instead of telling you after 12 months that you built the wrong wooden staircase. They wanted a metal ladder. Whether that's internal or external doesn't matter. It's always a good idea, and I would argue it's easier with internal customers. It's very easy to go over to Joe, who works for the department you're building this piece of software for. You met him at the x-mas party last year, and from time to time you eat lunch together anyway.
> the battle for metrics
Hard agree. Pure reliance on (usually badly chosen, incomplete set of) metrics is absolutely bad because there will always be some high enough people with not enough understanding of everything that will rely on those metrics and those metrics alone to make (bad) decisions.
> the idea that everyone needs to be “full-stack”
Not everyone needs to be full stack. But I do believe that you either have to be full stack enough to understand the system from top to bottom, or be really good at communicating w/ the other people in the stack to figure out how to build the system properly.
> Building software is analogous to building a house. Sure you can make it look good and deliver that to your customer, but poor build quality will eventually be exposed during your first hurricane or earthquake.
If we want to stay in this analogy, there's one thing that can help a lot. And that is to actually start building in vertical slices instead of horizontally.
A house is not built horizontally. It's built vertically. You build an entire "feature" from the bottom to the top. The first feature of a house is the structure. You build the walls for the basement, put in a floor and build the first floor, then do the same for the second floor and put a roof on. Structure done. (I may get the order wrong because I don't build houses.) The next feature could be the plumbing rough-ins. You can probably parallelize this feature w/ the electrical rough-ins, and you build it from bottom to top or vice versa, but you build the entire feature. Only after that do you build the entire insulation feature, again bottom to top or vice versa. Rinse and repeat with drywall (another feature). At each of these steps you can either rush it, make incorrect decisions, try to save money and do a bad job, or do it with quality. You might use 2x3s for the structure instead of 2x4s. You can use aluminum wiring instead of copper. You can use R2 insulation, etc. Luckily, in house building there's a building code.
In software it's the same. If you build horizontally, you need to rely on things like designing and documenting APIs very carefully, building to spec, etc. Basically you're just doing mini-waterfalls and calling it Agile or Scrum or whatever. But it has all of the same problems. The guy that built the xyz API last month is now working on something else. The UI people finally got around to building the xyz UI on top of the API and are discovering all the shortcomings in the design, the bugs that weren't apparent when "they tested the API," etc.
Build it vertically. Build the xyz feature top to bottom or vice versa, i.e. build the API and UI at the same time. The people doing it either need to communicate very well with each other, or someone does it full stack. Doesn't matter. But skip the documentation and just build what you need in tandem. Build it well. You can use the same technique as with the house. The fun part in software development is that it's much easier to split these vertical slices into much, much smaller parts. In the house you were only able to split the rough-in feature into "plumbing rough-in" and "electrical rough-in." In software the xyz feature can probably be split into many more parts that can each be done in a very small amount of time. At each step you can decide whether you've been building the right thing, the wrong thing (and stop), or something that just needs some adjustments going forward. Just as in house building, you can do each of these vertical slices either with or without quality. Unfortunately there's no "software building code" w/ inspectors ;)
And if you build vertically, you can use your house, err, I mean software, before it's finished. You probably can't move in right after the structure feature is finished. But you may, if you're fine cooking w/ the BBQ and using a porta-potty outside. You probably don't really want this if you have a family, and you don't want to ship software in this state. But after you have the plumbing and electrical done and the house is weatherproof, you can cook w/ a small electric cooktop, and you'll have a sink and a proper toilet. You have drywall and heat. In many new houses you even opt to leave some of the features out completely in some areas, like an unfinished basement. You can ship software that way too.
EDIT:
And even the full stacking works in this analogy. Think of the "structure" part of a house. Your basement is poured concrete. That's usually done by a specialist; think of this as your DBA optimizing the SQL queries (or nowadays a BE guy w/ lots of experience in that). The framers build your walls and roof structure (trusses and such); these could be seen as your BE guys. The roof itself, i.e. the plywood and shingles or metal or whatever, is put on by specialized roofers; these are your FE guys. Now, if you have the skills and are full stack, you can build all of this yourself instead. You could even do a whole house, but let's say you just build a shed. Same as a house really, just everything is smaller and easier. You can totally pour the pad for the shed (or the basement of a house, but there you will want to use a "library" or "framework," err, I mean a company that delivers concrete by truck). You can frame a wall by yourself. It takes longer if you're talking a whole house, but a shed is not much slower built alone vs. with a crew. And you definitely can put on a roof. For a shed, you can easily build a lean-to style one yourself. You might use a "library," err, I mean a company that delivers pre-built trusses, for a house. And if you're building a lean-to shed, anyone can lay plywood and shingles on that. If it's a whole house, you probably only want to do that if you're good with heights. Maybe you're not full full stack, but you can hold your own for smaller tasks.
> low code quality and technical debt is one consequence of the Agile movement.
Low code quality and technical debt were always there. The people in the early Agile movement, XP in particular (which most functional "Scrum" teams are doing approximately 60% of), had a bag full of tricks that they used to speak to and encourage enlightened self-interest from management, and to pull a fast one if that didn't work. I can and have sympathized with this sentiment.
The problem is that when you build a system partially on trickery, your coworkers don't necessarily get the memo, and defectors engage in their own trickery to undermine what you're trying to do. A common blog post thesis at the beginning of the Trough of Disillusionment for XP was that people were attracted to the fact that XP let you omit certain steps from the development process, but at the cost of adding other steps that were much less onerous but still not particularly pleasant, and then people were cherry-picking and not doing any of the onerous bits. The analogy of eating your dessert but not your vegetables came up many, many times.
I think the next phase of improvement will have to be more transparent and more empirical about what it does. At present too many items in the Agile Toolbox are bound up in virtue and ethics and neither of those speak to the pragmatists (which is how we ended up with something called Pragmatic Programming, which, like all such titles, is more aspirational than descriptive).
We need to do more 5 Why's analysis on some of these Best Practices, rules of thumb, and aphorisms, because it's clear we're missing a few things. For instance, here's a proposition that I think makes an excellent mental exercise.
Merging code is one of the most difficult problems we have not (cannot?) solved, and lacking a solution we should avoid merge conflicts as often as possible by organizing our code to avoid them.
How many of your software practices end up serving this concern? A lot of mine do, especially if I dig. For example, grouping similar functions together, avoiding God Objects, and alphabetizing entries in unordered sets decrease the likelihood of conflicts when two people work on unrelated features. Even global shared state is really a problem of merging data from two sources at runtime. Pure functions and borrow semantics both establish a clear order of operations where you can avoid or quickly identify conflicts.
But almost always I hear these techniques described as "the right thing to do" by proponents and "aesthetics" by deniers, and so the fight continues on with no resolution in sight.
Building software is not analogous to building a house. Building software is more like what architects do before the house is built. With the difference that we can press a button and a new “house” is built within seconds.
What I often see is a chronic failure to properly define and maintain boundaries.
Chicago has only two remaining buildings from the 1893 World's Columbian Exposition: what was once the Palace of Fine Arts and is now the Museum of Science and Industry, and what was once the World's Congress Auxiliary Building and is now the Art Institute of Chicago.
In order to save costs, most of the buildings were deliberately not built to last. These two were built to spec: one because its future as an art museum had been identified ahead of time, the other because its immediate purpose as an art museum meant it couldn't be slapdash; no other museums would lend their art if it was going to be housed in a potential fire hazard. But all the others, the ones that were expendable? They were torn down immediately after the fair concluded, in part so that people couldn't get themselves into trouble misusing them. And any further use would have been misuse.
I don't see the same culture of maintaining boundaries in software development. What I see much more frequently is that we make a reasonable decision to cut some corners with some purpose-specific code that doesn't need to be high quality, but fail to put up any sort of warning signs about it or otherwise prevent its reuse or misuse. When we do that, it's inevitable that someone will come along later, perhaps long after the original authors have moved to a different project, and attempt to simply reuse it rather than, say, rewriting it into something that was engineered to last.
Some attention also needs to be called to the Clean Code aficionados who spend 20 person-hours gold-plating code that was only worth four person-hours. I appreciate the desire to take pride in one's work, but that kind of misallocation of team resources does grave harm to the developer-manager relationship, and leaves the development team with little social capital they can use to argue for building things with care when doing so really is necessary.
That said, it does need to be observed that Scrum (not agile in general, but definitely Scrum) came out of the contract development community. I don't think it would be unfair to characterize the methodology as one that was made by and for people who don't normally expect to be maintaining the code they write for years to come. They ship the project, they move on to the next contract. And they work for companies where, as a matter of business necessity, teams are constantly being reshuffled around those contracts, so letting people get too specialized is somewhat antithetical to the nature of the business. It's theoretically the managers' jobs to be paying enough attention (and know their business well enough) to notice this, and reflect on whether they are working under similar conditions.
I've made peace with the fact that rewrites are almost a necessity, however well you do it the first time. Currently I'm rewriting a well-designed and written piece of code, which just was abstracted in a way that couldn't support new business requirements.
I've embraced this approach in personal projects too, and don't suffer from analysis paralysis or gumption traps as much as I used to.
You want to make something nice? Prepare to do the same thing at least two or three times, each time better than the last.
This week I worked on the code of a customer that was passing an object into a command / service hierarchy and extracting some attributes of that object in the lowest level of the hierarchy, where a method is doing the real job. Unfortunately the new feature I worked on needed most of the methods of that hierarchy but didn't have that object because it doesn't have more than half of the data needed to build it. So, passing individual attributes as arguments instead of the object would have been the right implementation and the assumptions the code was based on were wrong for this new use case.
I derived a class from the first level of the hierarchy with only the methods I needed, redefined as empty the ones that were using the attributes I didn't have and ended up with code that passed tests. No time to refactor all the code into the correct abstraction. Time to understand how the class hierarchy worked: too much. Was it overengineered? Yes. I prefer simple straightforward code with few files to navigate to understand how it works.
TCDD can be OK if you control the scope. "We must deliver a functional thing in 2 weeks, it may be small and advance us 2% towards the goal but it must be done in 2 weeks and it must work. Now let's see what we can do well in that timeframe..."
Well, in this case it was Wednesday and I'll be working for another customer on Thursday and Friday (they know it) so that feature must be completed by Wednesday night. They accept that the quality of a feature worked on for one day will be less than the quality of a feature worked on for two or three days. Costs.
Not my experience at all. I simply refactor/improve the code one small step a day guided by auto tests and eventually end up with clean and highly maintainable code. I am managing a very large C++ application and haven’t had a bug in production for 5+ years. Which means that I spend almost 100% of my time adding new features instead of being in debug hell.
Fair enough. And let me add that sometimes refactoring for me means completely rewriting a sub-module or sub-system. It is safe to do as long as I have good solid tests to compare before/after.
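A minimal sketch of that before/after safety net, sometimes called a characterization (or "golden master") test. The pricing functions and their rule are invented here purely to stand in for the old and rewritten sub-modules:

```python
# Characterization-test sketch: record the old implementation's outputs,
# then require the rewrite to reproduce every recorded case exactly.
def legacy_price(qty):
    # Stand-in for the old sub-module: bulk discount every 5 units.
    return qty * 10 - (qty // 5) * 3

def rewritten_price(qty):
    # The rewrite under test, restructured but behavior-preserving.
    discount_blocks = qty // 5
    return 10 * qty - 3 * discount_blocks

# Capture the "before" behavior across a range of inputs...
golden = {q: legacy_price(q) for q in range(100)}

# ...and hold the "after" implementation to it.
for q, expected in golden.items():
    assert rewritten_price(q) == expected, f"mismatch at qty={q}"
print("rewrite matches legacy on all recorded cases")
```

In practice the golden outputs would be captured to disk before the rewrite begins, so the legacy code can be deleted once the new module matches.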
I write in layers and modules. Each one is well-done, fairly insular, and coupling is as loose as possible. A lot of my refactoring is about reducing coupling, and increasing autonomy.
Bottom layers can often be left alone for years, while top layers can see almost constant change.
Right now, I’m working on the home stretch of an app (backend/native iOS frontend) that has been under development for a couple of years. A lot of the reason for that time, is because the people I’m working with, didn’t really know what they wanted, when we started (sound familiar?). It has undergone several massive pivots during this time, but we have settled on an operational model, and it’s been pretty much pure refinement, for the last three or four months.
I just rewrote a major backend driver[0], to make the server connection smoother and have a simpler asynchronous behavior. It will also make it easier to integrate other types of backends, in the future, but I know better than to think it is “future proof”; however, it affords a structure that will be quite amenable to change. The main app is closed-source, but many of its components are open.
It took about a month to write the driver, and it was sort of akin to reconstruction of a bridge, while traffic was going over it. If I hadn’t done the original design, in a modular, API-driven manner, it never could have been done, but I’ve also done exactly this, numerous times. The old design[1] was too restrictive, and is one of my older projects. I wanted to get it out of my app (even though I wrote it).
The big deal about this project, and the key to being able to pivot, is that it has been released through Apple TestFlight, since it was about a month old. I’m coming up on a thousand TestFlight releases. This allows the non-tech stakeholders (the ones that don’t know what they want) to run the app, as a release-quality application, and provide really meaningful feedback (like “gee … I really thought it was a good idea, but you’re right. It sucks.”). Sometimes, there’s no substitute for giving someone what they want, to convince them that it’s not what they want.
Needless to say, a great deal of high-quality work has been binned, but we haven’t been running a brand-destroying lashup MVP. The project has been internal-only, this whole time. I am quite aware that this method of development is not commercially friendly, in today’s development environment, but we have the luxury of the Principal (Yours Trooly) working for free, and knowing full well, what he signed up for, when we started, and designing accordingly.
The end result will be an application that will enjoy almost jaw-dropping Quality, out the door, with many thousands of test runs, on shipping code, for two years. We’re finding the “rough spots,” now that we are in the home stretch, and still have the cosmetics (theme, aesthetic design, interface, etc.) to go before widening to our first test phase (which will also be TestFlight).
Since the “team” doing the lion’s share of the work consists of one person, the Quality is of paramount importance. It’s a pretty big project, and this scope is usually implemented by a much larger team.
When I write something, I can generally leave it alone, until I decide to revisit it, on my terms.
I think it depends a lot on where the code you are working on lives in the stack.
The further down it is the more it makes sense to do it right.
The further up it is it becomes a question of whether that code will be around long enough to make worthwhile the more interesting invasive changes required to do it right. An extreme case is A/B tested UI code which has a strong chance of not working out.
This nuance is frequently lost, and it's a source of conflict with coworkers. You don't know up front what 'right' is, and quite a few people can't accept this, so they ghost some of their bad decisions instead of owning them and learning from them.
The concept of Reversible Decisions is based on the idea that some decisions have a small blast radius, that we should make those quickly and cheaply, and that if we are proven wrong we pivot quickly and cheaply. Don't invest energy here because you'll need it for the Irreversible Decisions. For the Irreversible ones, you need a long runway. You need time to stew on them. You need to know when the Last Responsible Moment is, and you need to gather every bit of evidence for or against a certain solution prior to that moment.
Those are the only decisions you need to get right the first time, and often you get a better decision if you delay. Not procrastinate, delay.
Moreover, I try to embed documentation and diagnostic/debugging tools, such as meaningful error messages and traces, into the code. Debugging usually takes 3x more time than coding, so it is very profitable to reduce debugging time from the very beginning.
Also, I'm automating branching, building, testing, merging, committing, releasing procedures, so for me, it feels like a game: read a ticket, make a branch, write a test case, write the business code, update documentation, tests are OK? -> merge and commit, update resolution comments, send ticket for review and manual testing, next ticket. Of course, it works only when the whole team works the same way and each member of the team plays his/her role properly.
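A sketch of what the per-ticket part of that automation might look like. The branch naming scheme, command sequence, and function names are all invented for illustration; the `dry_run` flag just returns the plan instead of touching a repository:

```python
# Hypothetical per-ticket workflow helper: plan (and optionally run) the
# routine git steps described above. Conventions here are invented.
import subprocess

def ticket_commands(ticket_id, base="main"):
    """Return the git commands for starting work on a ticket."""
    branch = f"ticket/{ticket_id}"
    return [
        ["git", "switch", base],
        ["git", "pull", "--ff-only"],
        ["git", "switch", "-c", branch],
        # ... write the test case, business code, and docs here ...
        ["git", "push", "-u", "origin", branch],
    ]

def run_ticket(ticket_id, dry_run=True):
    cmds = ticket_commands(ticket_id)
    if dry_run:
        # Show the plan without executing anything.
        return [" ".join(c) for c in cmds]
    for c in cmds:
        subprocess.run(c, check=True)

print("\n".join(run_ticket("1234")))
```

The value isn't in any one command but in making the whole loop (branch, test, merge, release) a single repeatable move, which is what makes the "game" feeling possible.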
The trouble is that requirements change. So you write it "the right way", then you get a minor requirement change that forces you to slightly bend the rules, which is okay. 100 requirement changes later your code is a real mess.
Part of writing it "the right way" is to make it easy to change. This is why I've come to dislike OOP. The rules are too rigid and result in exactly what you describe.
I often feel the same way. I've seen cases where trying to add functionality to an existing but "very clean and correct" codebase ended up being extremely difficult and the result felt far too complex in my mind.
I've come to think the real sin is coupling. If you try to keep your codebase clean at all times but don't refactor often enough, you end up with coupling where it shouldn't be, which can sometimes be a lot worse than some of the so-called "deadly sins" such as code duplication.
I find OOP fine but it's not something you can just let rot. You need to be ok with redefining classes and refactoring things around the new boundary lines regularly as requirements change.
This requires a testing suite that gives you confidence to make these sweeping changes quickly. Without it, it's not a fun way to work.
Also, you don't have to go all in on OOP. I like to use a procedural style most of the time, only defining objects for things I find easier to reason about as objects.
For example, I'll write the command-line interface in a procedural style that looks a lot like a complicated bash script, and then define objects that do the work. This often results in objects not really changing much, because they're not defined until later in development, when their needs are well known. The places that change are usually feature switches, command toggles, output formats, and the like. All of that can be objects if you want, but for the most part a function or two goes a long way and results in easier-to-change code.
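A minimal sketch of that shape, with all names invented: a procedural entry point that reads like a script and handles the churn-prone parts (flags, output formats), delegating the real work to a small object:

```python
# Procedural "bash-script-like" shell around a work-doing object.
# WordCounter and the flags are illustrative, not from any real tool.
import argparse

class WordCounter:
    """The 'does the work' object, defined late, once its job is clear."""
    def __init__(self, ignore_case=False):
        self.ignore_case = ignore_case

    def count(self, text):
        words = text.lower().split() if self.ignore_case else text.split()
        return len(set(words))

def main(argv=None):
    # Procedural shell: flag parsing, sequencing, output formatting --
    # the parts that actually change as the tool evolves.
    parser = argparse.ArgumentParser()
    parser.add_argument("text")
    parser.add_argument("--ignore-case", action="store_true")
    parser.add_argument("--format", choices=["plain", "csv"], default="plain")
    args = parser.parse_args(argv)

    n = WordCounter(ignore_case=args.ignore_case).count(args.text)
    if args.format == "csv":
        print(f"unique_words,{n}")
    else:
        print(f"unique words: {n}")

main(["The the quick fox", "--ignore-case"])  # prints "unique words: 3"
```

New toggles and formats land in `main`, while `WordCounter` stays stable, which matches the "objects don't really change much" observation above.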
"Good judgment comes from experience. Experience comes from bad judgment."
I have pretty good judgment, these days (see "toes," above).
Well-written OOP, on the other hand, is a marvel of Quality, simplicity, maintainability, performance, and refactorability.
We often like to blame a tool, when it's really PEBCAK.
It's like we go to a bar, one night, and there's this old blues guy, alone, with a beat-up Telly and an old Peavey amp, making music that sounds like it came straight from God's House Band. It looks effortless.
So we rush home, buy a PRS guitar, and a Mesa Boogie amp, strike a pose, and hit the strings.
The cat grabs his favorite dead rat, and heads for the hills, never to be seen again.
The kids delete their TikToks.
The wife calls a lawyer.
Maybe there's something to be said for learning mastery of a tool, before disparaging it.
Given how frequently I hear something along the lines of "OOP is great; you're just doing it wrong!", I'm inclined to believe it actually isn't so great.
Everyone loves to rain hate on C++. I ran a C++ shop for 25 years, and have heard it all.
Listen. It's current fashion to hate OOP, and I'll never convince anyone otherwise, so I'll just back off, and let y'all think what you will.
For me, I like some of the new stuff. If you look at the links in my comment elsewhere in this thread, you'll see some of the transitions that I've made, from an older, OOP/delegate-based model to one that leverages protocols and closures. I'm not averse at all to learning new tech, but I've found a lot of mileage in combining it with classic patterns.
You see, I am not an expert functional programmer. I already know that it won't work well for the UI-based stuff that I do, and, in my experience, UI programming and device-control programming share a lot of patterns. But that may also be because I am not actually in a position to render much judgment on the matter.
But, like I said, I am not an expert on FP. I have liked using some of its contributions to my favorite language (Swift), but that doesn't really give me the authority to make any really big statements on the pattern (like, for instance, "FP is bad for drivers").
See how it feels, when people throw shade on something that you know well?
> See how it feels, when people throw shade on something that you know well?
My feeling is mostly that of confusion. I spent several years writing OOP, and then learned pure FP, and now that's all I do. Pure FP is the best thing I have ever used for UI programming.
My confusion is not from you "throwing shade" on something I know well. Rather, it's from you throwing shade on something you — by your own admission — are not in "a position to render much judgment" on.
I'm not actually "throwing shade." I just said, and I quote:
> "I hear functional programming is great. Not so sure I'd write a driver with it, though."
That's not an insult.
Your comment wasn't either, technically, but it wasn't quite as nuanced. It said, by inference, that OOP was a bad thing.
> "Given how frequently I hear something along the lines of "OOP is great; you're just doing it wrong!", I'm inclined to believe it actually isn't so great."
Like I said, this has descended to a religious debate, and I won't play, anymore.
But this is a big world, and there are many different things going on. I know that I'm an old "OK boomer," but I have also been writing shipping software for over thirty-five years, so there's a vanishingly small chance that I may actually have some valid points to make.
There is such a thing as a bad tool. In most industries, they simply refuse to use the tool anymore. Not so much in software development, for some reason.
In my experience your options are either [high initial fixed cost + low maintenance cost] or [low initial cost + high maintenance cost]. Usually there is so much pressure to "demo" and "show something" that you end up biased in favor of the latter option.
We have an interesting situation at work. We’ve got two monoliths, both quite nice in terms of code quality, test coverage, etc. One is for the main site with listings and one is the “customer platform” that deals with offices, organizations, invoice management, etc. The split made sense at one point in time but today our most frequent developer complaint is that keeping the two states in sync is difficult. We now want to merge the two but it’s a huge project. It’s architectural debt.
My main takeaway from this is the importance of getting to grips with the codebase first. This may seem completely obvious, but right now I'm in a context where the codebase is spread across different dev teams whose code 'meets in the middle' through an API but is never considered in a holistic fashion. In other words, there is not even a coherent codebase to review. The project is overdue and over budget by 100%, of course.
The article explores quality on only one axis: its impact on development and maintenance work. But software quality affects far more areas. Poor software can require an unnecessary amount of resources to run, leading to increased operational cost, especially when running in the cloud. Slow and/or buggy software makes users unhappy and can waste a lot of their time, especially once users start building workarounds for the issues. Software with lots of operational issues wastes time on firefighting, and of course downtime can have severe business impact. And so on.
At a previous job, a bright young international software guy started. I walked him through some of the stuff I made first. He nodded. Then we dove into the legacy and "new legacy" code. He immediately looked at me and said, "Is this [nationality] code? Is this an [nationality] company!?" I explained to him that we are not allowed to speak like that in California. And yes.
That was my first thought, but California is already in the comment, making self-censoring pointless. So I parsed the sentence as a sarcastic "is this a (racial slur) company (churning out low-quality code)?". I wasn't 100% sure though!
California secures our freedom of speech, but in business there are protected classes, one being "national origin" [0]. So you can call someone a donkeydick but not insult them based on their nationality. I wasn't bothered but wanted Boris to know about the law.
He also wrote Your Code as a Crime Scene and authored the open source tool, Code Maat. I've found both extremely useful in my current job, where I took over a code base with immense technical debt. AFAIK, CodeScene is a commercial version of Code Maat.
I’ve just tested it out with our code base and it’s pretty impressive, providing very useful intelligence about my team's code. CodeScene provides metrics like the ones he wrote about in the post, and many more.
The biggest issue with technical debt is that it encompasses too many things at once. To fix it you need to be specific about your scope, goals, etc.
Availability, adaptability, maintainability... most of the NFRs/quality attributes can have a different impact on the business. I've done a lot of in-and-out "help, the house is on fire" assignments that typically last half a year.
In my last project, for example, the team was delivering high-quality code fast, but the biggest issue was that build & deployment took over a week, with lots of errors and iterations due to a lack of proper architecture and DevOps maturity.
(Assignment: fix the current "CTO"'s biggest fires, phase him out, replace him temporarily and find a replacement, without interrupting delivery of new features.)
So I made "deployability" a priority, and created some huge debt by using docker-in-docker containers to at least have an automated, reproducible deployment unit available.
Next was "lack of trust from the stakeholders", so we installed a weekly sprint, created a high-level architecture diagram, and made a preliminary planning that we updated each month, so they at least had an idea of what we were doing.
Important note: our Jira tickets were only for communicating progress with the stakeholders, so they used only business language, etc.
Next, I created a few high-level, core-to-the-business happy-path tests that ran against the staging deployment, which probably was a mistake, because the team ignored the failing tests. (Their mindset wasn't ready for it yet, even though they deployed a failing instance to production twice, resulting in a business outage.)
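A happy-path suite of that kind can be as small as a named list of checks and a runner that reports which ones failed. Everything below is illustrative; a real suite would hit the staging URL instead of evaluating local lambdas:

```python
def run_happy_path_checks(checks):
    """Run named checks; return the names of the ones that failed or raised."""
    failures = []
    for name, check in checks:
        try:
            if not check():
                failures.append(name)
        except Exception:
            failures.append(name)  # an exception counts as a failure too
    return failures

# Hypothetical business-level checks (stand-ins for HTTP calls to staging).
checks = [
    ("login page responds", lambda: True),
    ("invoice total adds up", lambda: sum([10, 5]) == 15),
]
print(run_happy_path_checks(checks))  # prints [] when everything passes
```

The value is that a non-empty failure list names the broken business flow in stakeholder language, which only helps if the team actually treats a non-empty list as a blocker.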
By then I found a replacement - someone who was almost obsessive about code quality, documentation and process/communication, showed him my idea about feature evolutions, how to coach/manage the teams, and after a month I tapered off my work from full-time to zero in two weeks, because I was experiencing the "mom syndrome" (i.e. "as long as mom is doing it, I'm not picking up my responsibility".)
Last time I checked, the team had a little bit of a rough transition phase, but is now doing great.
TL;DR: to tackle technical debt, identify the specific kinds of debt you are dealing with and prioritize/classify with stakeholders what they would like to see fixed first.
Attacking technical debt should be no different than developing a new feature.
and yet there continues to be a movement to defund all liberal arts and humanities departments in colleges, turn everything into a Google code certificate, and eliminate the English classes while claiming it's because people don't like wokeness.
English = writing. History = writing. Art History = writing. Philosophy = writing. Technical Writing = part of the English department.
You aren't going to learn to write in math class or database class or ML class.
>” and yet there continues to be a movement to defund all liberal arts and humanities departments in colleges”
What movement is this? And, is this actually a serious movement with any real traction? Or is it merely some reactionary meme?
>”eliminate the English classes while claiming it is because people dont like wokeness.”
If you look at high school graduation requirements, English requires more credits than any other subject, often the full 8 semesters / 4 years. Almost all colleges require this as well. I don’t see this going away anytime soon.
As far as the whole “wokeness” angle is concerned, when I was in high school I often felt like English class was more about making sure students learned “the moral of the story” and proving that they did in the format of an essay. I didn’t feel like we spent much time on the grammar and syntax of the English language. I can understand wanting a wider variety of expression at the expense of reading stories and reviewing them. But to some that might sound like being anti-woke, albeit indirectly?
https://github.com/dlang/dmd/blob/master/compiler/src/dmd/cp...
is vastly different from the one I wrote in the 1980s:
https://github.com/DigitalMars/Compiler/blob/master/dm/src/d...
My original evil plan was to recycle the old one, but I just couldn't.