> Strategies for building systems that can be adapted for new situations with only minor programming modifications.
The problem is I'd rather make a medium-size change to a simple system than a minor change to a complex system.
> The authors explore ways to enhance flexibility by:
> • Organizing systems using combinators to compose mix-and-match parts, ranging from small functions to whole arithmetics, with standardized interfaces
> • Augmenting data with independent annotation layers, such as units of measurement or provenance
> • Combining independent pieces of partial information using unification or propagation
> • Separating control structure from problem domain with domain models, rule systems and pattern matching, propagation, and dependency-directed backtracking
> • Extending the programming language, using dynamically extensible evaluators
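The "annotation layers" item, for example, is less exotic than it sounds. A toy Python sketch (my own illustration, not the book's Scheme code) of a value that carries a unit alongside its number, so mismatched units fail loudly instead of silently producing garbage:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotated:
    """A number augmented with an independent annotation: its unit."""
    value: float
    unit: str

    def __add__(self, other: "Annotated") -> "Annotated":
        # The annotation layer gets to veto operations the raw
        # numbers would happily (and wrongly) perform.
        if self.unit != other.unit:
            raise ValueError(f"unit mismatch: {self.unit} vs {other.unit}")
        return Annotated(self.value + other.value, self.unit)

total = Annotated(3.0, "m") + Annotated(4.0, "m")   # fine: 7.0 m
# Annotated(3.0, "m") + Annotated(4.0, "s")         # raises ValueError
```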
"The truth is, everyone is going to hurt you. You just got to find the ones worth suffering for."
To some degree or another, everything sucks and complexity is unavoidable. You just have to find the type of complexity you're willing to live with.
That being said --
I suppose if you're used to that kind of complicated stuff and used to typing out the boilerplate code, it doesn't seem quite so inscrutable. Tedious, but it doesn't blow up in your face.
It's like I had a hard time understanding the strategy pattern the first time I saw it, especially when generics were involved. It felt mind bending.
Now I happily use it and it's all straightforward. The formerly intellectually difficult parts are now autopilot for me. And it's a hell of a lot less stressful than adding another set of random if statements to an important code path.
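For anyone who hasn't made that jump yet, a minimal sketch of the idea in Python (hypothetical names, not from any real codebase): instead of another if/elif branch on an important code path, each behavior becomes a separate, independently testable strategy.

```python
from typing import Callable, Dict

# Each pricing rule is its own strategy; adding a new one is an
# addition to this table, not an edit to a shared code path.
PricingStrategy = Callable[[float], float]

STRATEGIES: Dict[str, PricingStrategy] = {
    "regular":   lambda price: price,
    "member":    lambda price: price * 0.9,
    "clearance": lambda price: price * 0.5,
}

def final_price(kind: str, price: float) -> float:
    # One table lookup replaces the growing if/elif chain.
    return STRATEGIES[kind](price)

print(final_price("clearance", 100.0))  # 50.0
```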
And keep in mind, many of these phrases are very fancy ways of saying something that makes intuitive sense once you're familiar with it.
But see, there's a difference: something that is confusing to a beginner is quite a bit different from something that is overwhelmingly complex for anyone.
And "simple" code carries far more risk of side effects than well-architected code, which is quite the opposite of what you assert.
Getting to the bottom of what you need to change is complicated if you're new, but once you get past the learning curve it's straightforward.
On the other hand, a "simple" code structure can become a steaming pile of garbage if you want to make any real changes to it.
To be very clear, I'm talking about using something like composition instead of one big file full of cyclomatic complexity -- not arguing for some of those other things, which in all honesty I'm not completely familiar with.
Really just arguing that sometimes this stuff seems impossible and dumb when you're starting out, but easy and intuitive when you learn more.
> I pity the guy trying to fix some old legacy code base, while also trying to figure out what the hell a combinator is
That goes to my other point: it's wrapped up in excessively formal language, but the actual ideas are simple. I think they might, for instance, just be advocating the use of functional programming, and juniors use JavaScript lambdas every single day without knowing what the hell a combinator is.
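To make that concrete: the classic combinator is just compose, a function that builds new functions out of existing ones -- something anyone writing lambdas already does implicitly. A trivial Python sketch:

```python
def compose(f, g):
    """The classic combinator: build the function x -> f(g(x))."""
    return lambda x: f(g(x))

# Anyone who writes lambdas is already one step away from this.
shout = compose(str.upper, str.strip)
print(shout("  hello  "))  # HELLO
```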
Even though they might feel the same, on a human level, there’s clearly a difference between being early in the learning curve and facing down a complex system.
> But maintainable code is about allowing other people to easily work on it.
But as OP said, it is selective. For instance, I would much rather have the OP use the strategy pattern. I have used it before, so I will recognize it, and it will save me tons of time when trying to understand the codebase and make a change that doesn't break things.
We cannot find a global optimum; we have to decide whether we prefer to make it easier for experts/experienced developers or for beginners. I prefer the former and then teach beginners to become experts/experienced (in some area) as quickly as possible. That costs resources but pays off in the end.
Google went the opposite way. They created Go to be beginner-friendly, but at the same time hurt the productivity of more experienced developers. Let's see how that turns out in the end.
> Google went the opposite way. They created Go to be beginner-friendly, but at the same time hurt the productivity of more experienced developers. Let's see how that turns out in the end.
well, they started the Fuchsia kernel in C++, not in Go... that should be telling enough :)
I think the goal of any such pursuit is to reduce/manage the cognitive load that any one person has to deal with while they are trying to accomplish something.
Things like "Separating control structure from problem domain" enable separation of concerns, so that the person dealing with resource utilization (efficiency, capacity planning, and scalability) can be an expert in the systems domain without having to understand the functional domain and its complexity (and vice versa).
Solving for separation of concerns in such a way that it also enables separation of roles and responsibilities and hence facilitates developing expertise/specialization in a particular orthogonal discipline is super powerful.
In my experience, this is one of the most effective ways to scale a fundamentally human endeavor of building large software projects.
> The problem is I'd rather make a medium-size change to a simple system than a minor change to a complex system.
The risk there is that you (or someone less disciplined on your team) doesn't make the medium-size change. They instead hack in the smallest possible change they can get away with to satisfy the current problem.
After a few changes like that, you have a complicated system and all changes are major.
It's hardly a small problem either. I can see it unfold every single day in my code base.
I've spent weeks trying to make minor changes to code like that because I'm afraid changing one variable is going to backfire because it's used in some unexpected way somewhere else.
On the other hand I've also been the bad guy before, too. I've realized how much work refactoring is going to be, and instead of making the project managers mad at me I just slapped in some additional cyclomatic complexity.
Everyone wins... in the present. We get to check off our to-do lists. Everyone in the future loses. Technical debt is an apt metaphor.
> The problem is I'd rather make a medium-size change to a simple system than a minor change to a complex system.
Spending one hour thinking to make a one-line change might not feel productive if one is used to just hacking away, but, IMHO, it is a net win by some orders of magnitude (assuming we are working on some software with solid abstractions).
> And this doesn't sound like a simple system.
Unless one understands it, in which case that list sounds like a collection of solutions/patterns one would come up with after the n-th rewrite anyway. If, and that is where we often fail, we manage to document the design decisions adequately, the code can be easier to read as well.
Yeah, all the quotes are contradictory to the goal of 'flexibility'. What's more flexible than being able to write the code that does the thing you want it to do, rather than creating, then mixing and matching, objects/abstractions that you very likely have only a surface-level understanding of?
There's a huge misunderstanding of 'flexibility' within codebases. You can achieve 'flexibility' via the lack of programming rules. Yet objects/abstractions/encapsulations are literally rules applied to code to achieve some benefit (less code, increased surface understandability, etc.); they're less about flexibility. And the quoted systems feel like they have little benefit to one's codebase.
If you squint, these seem to be (optional) layers of transformation (data and code).
I would rather navigate code with a clear, high level pipeline of transformations, rather than each path having its own idiosyncratic low level data assembly.
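As a sketch of what I mean (an invented Python example): the top-level flow is a literal list of named transformation steps, so navigating the code starts from the pipeline rather than from scattered assembly code.

```python
from functools import reduce

def parse(line: str) -> list:
    return line.split(",")

def clean(fields: list) -> list:
    return [f.strip() for f in fields]

def drop_empty(fields: list) -> list:
    return [f for f in fields if f]

# The high-level pipeline reads as a list of steps; each step is a
# small function with one job, swappable without touching the others.
PIPELINE = [parse, clean, drop_empty]

def run(line: str) -> list:
    return reduce(lambda data, step: step(data), PIPELINE, line)

print(run(" a , b ,, c "))  # ['a', 'b', 'c']
```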
One of the seminal books for me was Writing Solid Code[0], written about 25 years ago.
It's quite dated, but has a lot of stuff that is still relevant.
I remember that one of his sections was titled "FLEXIBILITY BREEDS BUGS".
Here's the first paragraph:
Another strategy you can use to prevent bugs is to strip unnecessary flexibility from your designs.
You've seen me use this principle throughout the book.
In Chapter 1, I used optional compiler warnings to disallow redundant and risky C language idioms.
In Chapter 2, I defined ASSERT as a statement to prevent the macro from being mistakenly used in expressions.
In Chapter 3, I used an assertion to catch NULL pointers passed to FreeMemory even though it's quite legal to call the free function with a NULL pointer.
From every chapter I could list examples in which I reduced flexibility in order to prevent bugs.
I like to put flexibility into my designs, but I've learned to be very careful about it, and usually test the unused code paths anyway.
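The same principle translates to any language (a hypothetical Python sketch, not Maguire's C): even where the language would happily accept anything, assert the narrow contract you actually intend, so misuse fails at the call site instead of corrupting state downstream.

```python
class Handle:
    """Minimal stand-in for some resource."""
    def __init__(self):
        self.closed = False

def release_handle(handle):
    # Like asserting non-NULL before free(): silently accepting None
    # would "work", but it almost always means the caller's
    # bookkeeping is wrong -- so strip that flexibility and fail fast.
    assert handle is not None, "release_handle called with None"
    handle.closed = True

h = Handle()
release_handle(h)
print(h.closed)  # True
```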
There doesn't seem to be much information about the content of this book. But the quote by Dan Friedman is intriguing:
“Hanson and Sussman's Software Design for Flexibility has introduced additive programming, a game changer. An additive style allows for making changes to existing designs without the programmer's efforts looking like the work of a contortionist. With elegance, clarity, and care, they point out long-overlooked problems in software design and offer their Scheme-friendly, clever solutions. Enjoy!”
— Dan Friedman, Professor of Computer Science, Indiana University; author of The Little Prover
The site design makes me want to throw something. The massive header bar with literally 7 words on it that is more than an inch wide, taking up more than 25% of the vertical space, with a bizarre iiiIji logo (or something) that has no mouseover. What a waste of space!
I'm an old fuddy-duddy. I prefer a small number of links written in text on the left, a nav sidebar. Frames or not, doesn't make a big difference to me. I am generally on a laptop with high DPI and have limited vertical resolution. The OS, window manager, browser, etc, already have a bunch of stacked bars and even in fullscreen mode there is a huge base overhead. Most sites seem to be optimized for tablets only, with massive SVG icons that are supposed to look so pretty, 20 point font, completely sparse layout for fat fingers. It's like people forgot about desktops. That is pretty annoying to me.
Gerald Sussman is one of the authors (along with Hal Abelson and Julie Sussman) of the famous, absolutely incredible “SICP”: Structure and Interpretation of Computer Programs.
> it will be published by MIT Press soon, with a Creative Commons Share Alike license (and all the code in support of the book is under the GNU GPL).[1]
The problem with flexibility is that while it may take you one or two steps away from a corner, you'll likely find yourself in a corner soon enough anyway. It's like buying a larger hard drive, thinking you'll never use all the space.
The only real solution, which most managers don't get, is to know all requirements upfront.
In my world a company probes the market continuously to see what the market wants. The business analysts are getting a stream of requirements and then design new features. Then devs implement those features. Development stops when market growth slows down or when the investors cash in.
The only way to predict what all requirements will be over a product's lifetime would be to invent a time machine. That is not what I call "a real solution".
The only real solution is to communicate a lot with the business analyst and make informed decisions on flexibility vs simplicity.
And yet it is impossible to know all the requirements upfront, which is why microservices and atomic functions are so popular. It seemingly reduces the complexity of redoing everything.
Not sure I think that's a big argument for microservices/atomic functions but rather just for decoupling things in general.
For me, the appeal of microservices is the organizational challenges it solves (allowing teams to move and deploy at their own pace).
My current codebase is a monolith because we're a tiny team, but the different parts are not tightly coupled more than they need to be and it doesn't feel slow to change things.
The hard drive analogy is appropriate; the solution is to buy more/bigger hard drives when you need them. You can’t foresee all requirements, “the map is not the territory”, etc.
Similarly, you build a system with the best information available, and be prepared to change it when the information changes.
Sussman built a course ostensibly around this book, but I couldn't find any indication of the specific material inside the book itself; the course pages were just a bunch of essays not written by them. Any leads?
> The problem is I'd rather make a medium-size change to a simple system than a minor change to a complex system.
> The authors explore ways to enhance flexibility by:
> • Organizing systems using combinators to compose mix-and-match parts, ranging from small functions to whole arithmetics, with standardized interfaces
> • Augmenting data with independent annotation layers, such as units of measurement or provenance
> • Combining independent pieces of partial information using unification or propagation
> • Separating control structure from problem domain with domain models, rule systems and pattern matching, propagation, and dependency-directed backtracking
> • Extending the programming language, using dynamically extensible evaluators
And this doesn't sound like a simple system.