Most professional programmers only occasionally need to look into performance issues, but need to take complex things and simplify them with every line of code they write. And yet most programming interviews don’t evaluate this ability. I think this should change.
So far the answer I've worked out is:
Because it's the optimal solution for letting talent flow through BigCos in SV. There is no domain- or tech-stack-specific material, so studying for the interview process lets an applicant apply to multiple companies. At the same time it's a pretty hardcore process, so the company knows it is getting engineering talent that is a) smart and b) obedient.
For the rest of the world it absolutely doesn't make sense and that's why where I live, you don't see that style of interviews at all.
David Epstein's new book Range talked about academic research splitting domains into kind vs wicked. Kind learning domains are ones where "feedback links outcomes directly to the appropriate actions or judgments and is both accurate and plentiful", while wicked ones are "situations in which feedback in the form of outcomes of actions or observations is poor, misleading, or even missing".
The hard parts of real-world software development are generally in the "wicked" bucket. Schoolwork and puzzle questions are both generally "kind" in the sense that there's a right answer and you're expected to figure it out. It's impossible to be too smug working on "wicked" problems because you get your ass kicked often enough to stay humble. But in "kind" domains it's quite easy to indulge one's desire to feel superior by dragging people through things you know well.
Personally, when I interview people I try to set things up so that there's no right answer; the goal is to see how well they get to good answers, and how well they collaborate during that process. I'd love to see more people do that.
For example, I never understood Calculus I until I took Calculus II. I never understood Calculus II until I took the next class. Etc.
I am reluctant to believe someone has mastered simplicity until after they've mastered complexity, hence the complex interview does have value.
While I buy the premise, this seems like a strange conclusion. If a mastery of simplicity requires a mastery of complexity first, then testing for an ability to be simple tests for both kinds of knowledge, whereas testing for an ability to handle complexity tests only one–so what's the point?
At first I had a distorted view of what is complex. I thought what seemed complex was mostly just foreign, a different perspective. With the right viewpoint everything aligns onto a small line.
The more you read and see, the more you get accustomed to that fact, and the more you see that more != more complex; quite the opposite.
The math fields, not always I believe but very often, chase minimized models of everything. Even recursion is a way to reduce the infinite to a small, finite set of rules.
I’m not sure how I could ever incorporate a shower into an interview loop.
Interviews suck at evaluating anything that requires more than 40 minutes.
That sounds like a recipe for disaster, though. Simplicity that doesn't account for every corner case in the domain of the code is false simplicity, a bad abstraction. The challenge of writing simple code is this: threading a single, unifying concept through all corner cases.
 - I.e. when writing a function, you don't necessarily have to account for e.g. the off chance of heap getting corrupted externally in the middle of execution of your code. But you'd better account for all the values your function might be called with, in all combinations.
Some corner cases influence the core algorithm, some don't. Checking for degenerate cases and so on may not influence the core algorithm; if so, it need not be done in an interview.
Your approach is just more of that.
Yes, making code that is easy to read and less bug-prone is good. But at the end of the day the customers are going to be running your code millions of times a day, and if you need to make the code slightly harder to read to improve performance, then by all means do so.
If your code is only going to be run once and must be reliable, then you can make a different trade-off.
I know it's fun and exciting to optimise a function to perform at maximum efficiency, but people tend to forget that someone has to read that piece of code in the future and understand it.
All the fancy tricks might've given a 2% increase in performance, but made it 200% less understandable by anyone except the codegolfing optimizer trying to be clever. =)
Spectrum of performance:
  LO |---*-------*--------*------------*-------| HI
         ^       ^        ^            ^
         |       |        |            |_root of all evil if premature
         |       |        |_you should be here
         |       |_you can be here if you don't do stupid things
         |_you are here
> All the fancy tricks might've given a 2% increase in performance, but made it 200% less understandable by anyone except the codegolfing optimizer trying to be clever.
This applies to hairy, last-ditch-effort optimizations, the kind your average programmer isn't even capable of doing. It's nothing like the optimizations most real-world code needs.
It's why I consider the "premature optimization" adage to be actively harmful, as it legitimizes lack of care and good craftsmanship.
From what I've seen, a lot of code can be trivially optimized with no net loss to readability (and sometimes a gain!), by simply removing dumb things, mostly around data structures. Fixes involve using vectors instead of lists or hash tables, depending on size and access and add/delete patterns. Using reference equality checks instead of string comparisons. Not recalculating the same value all over again inside a loop.
The kind of things above are ones that bleed performance all over your application, for no good reasons. I consider it a difference between a newbie and a decent programmer - whether or not they internalized how to code without stupid performance mistakes, so that the code they write is by default both readable and reasonably performant.
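A minimal Python sketch of the kind of fix described above (the example itself is invented): the structure stays just as readable, but list membership tests are replaced with sets, which is exactly the "remove dumb things" class of optimization.

```python
# Hypothetical illustration: same logic, same readability, far less wasted work.

def dedupe_slow(items, banned):
    # Membership tests against lists are O(n) per lookup,
    # so this is quadratic in the size of the input.
    result = []
    for it in items:
        if it not in banned and it not in result:
            result.append(it)
    return result

def dedupe_fast(items, banned):
    # Same behavior: sets give O(1) membership checks, and we
    # track seen items separately from the output list.
    banned_set = set(banned)
    seen = set()
    result = []
    for it in items:
        if it not in banned_set and it not in seen:
            seen.add(it)
            result.append(it)
    return result
```

Both functions return identical results; the second just stops paying O(n) per lookup.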
Then you go to actual optimizations, the kind that benefit from a benchmark - not because doing them elsewhere is wrong in principle, but because they take time and noticeably alter code structure. Using better algorithm, and/or using a better data structure, both come here. They don't have to impact readability, as long as you isolate them from the rest of the system behind a simple interface.
(Like, e.g. one day I achieved 100x boost of performance of an application component by replacing a school-level Dijkstra implementation with a two-step A* -based algorithm and data structure specifically designed for the problem being solved, and easily managed to wrap it in an even simpler interface than original. Since the component was user-facing, it pretty much single-handedly changed the perception of application from sluggish to snappy. The speedup itself probably saved many people-hours for users who were a captive audience anyway (this was an internal tool).)
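The "simple interface" point can be sketched like this (`top_k` and both implementations are invented for illustration, not the component from the anecdote): callers only ever see one function, so the algorithm behind it can be upgraded without anything rippling through the codebase.

```python
import heapq

# Hypothetical sketch: hiding an algorithmic optimization behind a stable interface.

def top_k_naive(items, k):
    # O(n log n): sorts everything just to keep k elements.
    return sorted(items)[:k]

def top_k(items, k):
    # O(n log k): heap-based selection, same contract as the naive version,
    # so swapping it in requires no caller changes.
    return heapq.nsmallest(k, items)
```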
Only then do you get to the "premature optimization is the root of all evil" part, which is hairy tricks and extreme levels of micro-management. Making sure you don't cons anything, or no more than absolutely necessary. Counting cycles, exploiting cache-friendly data layouts, etc. This can have such a big impact on a system and the surrounding code that it really does benefit from not being done until absolutely needed (except when you know you'll need it from the start, e.g. in some video games).
.. so, you measured the performance (sluggish), saw the need for improvement, and improved it (snappy). That is not premature optimization. It would be premature optimization if it happened without measurement and without need.
I agree with your examples above. If you can choose the right data structures/algorithms/patterns without sacrificing readability or development speed, by all means do so. But don't spend hours improving something that doesn't need improvement.
Beyond that, measure before you optimize, as such interventions will require larger amount of effort, so it makes sense to do them in the order of highest-impact first.
(Also note that "performance", while usually synonymous to "execution speed", is really about overall resource management. It's worth keeping memory in mind too, in particular, and power usage if your application could be used on portable devices. Which is really most webapps nowadays.)
But as it is normally understood, writing performant code is more about achieving fast performance with low constant factors on everyday small inputs, something that algorithms-and-data-structures interviews never touch on. Big O complexity is not directly related to that kind of performance, except on pathological inputs.
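A rough illustration of constants versus Big O (assuming CPython; the timings are deliberately not asserted, because which one wins on a handful of elements depends on interpreter and hardware, which is exactly why you measure):

```python
import bisect
import timeit

# Two correct implementations with different asymptotic complexity.
# On tiny inputs, constant factors dominate; Big O only starts to
# decide the outcome as n grows.

def contains_linear(sorted_items, x):   # O(n), tiny constant factor
    return x in sorted_items

def contains_binary(sorted_items, x):   # O(log n), larger constant factor
    i = bisect.bisect_left(sorted_items, x)
    return i < len(sorted_items) and sorted_items[i] == x

small = list(range(8))
t_lin = timeit.timeit(lambda: contains_linear(small, 5), number=10_000)
t_bin = timeit.timeit(lambda: contains_binary(small, 5), number=10_000)
# No winner asserted: measure on your own workload before concluding anything.
```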
You can't spend two hours just setting up the tasks to be performed.
It’s easier to understand complex functions and a simple class structure than the other way around, because jumping between multiple files/classes incurs a high understandability cost, whereas a complex function typically fits on your screen.
And reading some more about it, I found this good article: http://qualityisspeed.blogspot.com/2014/08/why-i-dont-teach-...
I've only just started doing TDD the "Growing Object Oriented Software guided by tests" way, and I find it incredibly helpful that each and every class does just _one_ thing, even splitting up those 15 line functions into two or three separate classes implementing an interface -- single responsibility -- helps me a lot in reasoning about the code.
I _have_ experienced the dependencies issue myself already though, it's very annoying to click on a method in my IDE, and then get shown the interface definition of that particular method. I'll then have to trace my way through a couple of files to find the dependency, very annoying.
let (|>) f g = g f
I say that without even looking at those studies, which is perhaps unfair. But there are So Many Studies...
My personal experience is that when I was exposed to shorter simple and (so important!) well named functions, my work became so much better. And that is now the school I subscribe to.
That's not - at all - to say you can't also find very good practices doing different things. But that's not where I found it.
My personal experience differs from yours somewhat. I believe it's not the length or the number of methods that matter, but what language (i.e. abstraction) they create. You try to subdivide the function into functions that are natural fit for the task being done, but no further. If you still end up with a long block of code - as you very well might - consider comments instead. A comment telling what the next block of code will do is kind of like inlined function, except you don't have to jump around in file, you don't lose the context. Much easier to read.
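A small invented example of the comment-as-inlined-function style described above: one linear function you can read top to bottom, with each block labeled where a helper might otherwise have been extracted.

```python
# Hypothetical sketch: block comments standing in for tiny one-use helpers.

def report(orders):
    # Filter out cancelled orders.
    active = [o for o in orders if o["status"] != "cancelled"]

    # Total the remaining order amounts.
    total = sum(o["amount"] for o in active)

    # Format a one-line summary for the log.
    return f"{len(active)} orders, {total} total"
```

The reader never leaves the function, yet each step is still named.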
I used to write code where essentially every piece of code longer than 3-5 lines got broken out into its own private function. The amount of jumping I had to do when reading the code, and the amount of work maintaining and de-duplicating small private functions, was overwhelming.
When I was shown that you can break out a function that's only used once, just in order to name it (2005, or so), it was one of the greatest revelations in my career.
It also serves as a way to tell you what that code does, without you having to know details of how it does it, until the rare day when it's important.
But I only do it when that code is genuinely hard to follow, not because my function is "over 10 lines, and that's our policy".
With IDE support answering the question for a single function is just a matter of a key combination, but that still adds friction when reading. I found that friction particularly annoying, and a file with lots of small helper functions tend to be overwhelming for me to read (it's one reason I like languages with local functions). Whereas if you didn't break the code out, and only maybe jotted a comment inline, you can look at it and know it's used only in this one place.
Not every time your function line count is > 10, as I heard from some crazy company a friend worked for...
I prefer to put the broken out function(s) immediately below the main one for a logical reading experience: Overview first, details below if needed.
Comments are of course good when they are current and correct. But they rarely stay that way for long...
Where the end becomes the means,
And the forest gets so lost among the trees;
When polishing the source,
Blunts all our creative force,
And procedure kills the genius it claims it frees.
It seems to me like "complex" and "ability to understand" mean the same thing, so this phrase doesn't have much meaning.
It's difficult to define "ability to understand" / "complex" without using either of those words in the definition. For example, you mention lines of code, nesting and multiple concepts.
I tend to agree with your examples, however not necessarily the lines of code. I've seen single large functions that represent an algorithm in a way that's easier to understand than the implementation that breaks it up into tens of little functions. It made liberal use of comments to explain each section of code in the function. I believe its advantage was that when reading, you could simply scroll down the function line by line rather than having to jump all over the file.
I suppose you could take my definition of complexity to be approximate to cyclomatic complexity
Sometimes splitting out code makes sense by making the underlying structure clearer, sometimes it makes it bothersome to find the actual important details. For instance, logic that filters what jobs to run based on conditions, it might make sense to abstract the logic for the conditions into one class per condition, to make the filtering logic clearer.
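The job-filtering idea might be sketched like this (all names invented for illustration): one tiny class per condition, and a filtering function that stays obvious.

```python
# Hypothetical sketch: one class per condition, composed by a trivial loop.

class IsWeekday:
    def allows(self, job):
        return job.get("day") not in ("sat", "sun")

class HasBudget:
    def allows(self, job):
        return job.get("budget", 0) > 0

def runnable_jobs(jobs, conditions):
    # A job runs only if every condition allows it.
    return [j for j in jobs if all(c.allows(j) for c in conditions)]
```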
Yeah, that seems to be something I'll have to spend some time learning about. Right now I'm just mindlessly splitting off everything, and it kind of works, but it's annoying to navigate all over the place just to find some detail somewhere
The IDE should be able to show you the implementations. When using Java and Eclipse for example, you just hold Ctrl and hover with the mouse over a method call. In the appearing dialogue you can then select "Go to implementation" instead of "Go to definition" (or similar) and Eclipse searches for and then shows you all classes that contain an override for that method and then you can click on one of them to view that specific overriding method.
This is an excellent example of local optimizations - i.e. local design optimizations - which end up harming the overall design and maintainability. You've discovered this yourself, but still cling to this dogma of tiny functions.
> And reading some more about it, I found this good article: http://qualityisspeed.blogspot.com/2014/08/why-i-dont-teach-...
That article bashes (among other things) IOC containers and mocking frameworks. I absolutely hate Spring IOC and mockito. But dependency inversion is not dependency injection. It wasn't until I got sufficiently fed up with Spring being everywhere in my Java code that I figured out how to hide Spring as an implementation detail.
This is where I ended up: https://sites.google.com/site/unclebobconsultingllc/home/art...
Kevlin Henney basically refuted the idea that SOLID has any meaning or use in software development.
I also agree that I'm tired of tiny classes and tiny functions. I went on a rant to my coworkers the other day about them, after spending an afternoon figuring out how some code works that fully believed in small functions.
Nothing makes 100 lines of code more readable than splitting them across 100 functions in 10 classes in 10 files! /s
I agree with another poster, though, that VisualWorks Smalltalk made it a lot more palatable. I'd honestly really like to try a system that treats a program as a database of functions rather than a directory structure of files. I don't think the directory structure adds anything that a tagging system couldn't do.
Might have been old news to you, but not to everyone. I have seen plenty of codebases which inject nothing and where all objects create their dependencies themselves. Lower-level code is especially susceptible to this: the amount of C code I've seen that made use of any kind of DI tends toward zero. And with that, often, so does the amount of tests available for the codebase (with claims like: "This is not testable").
I find the term "Dependency Injection" also still unnatural and academic, but I really like the core idea, which isn't that hard to teach (pass dependencies instead of creating them yourself).
Whether one also needs DI frameworks is another topic, on which I have no strong opinion.
No dependencies are being inverted. The name is just complete crap.
One of the thought experiments I use for teaching testing is to remind them that nobody wants to read your code. The next person to read your test is probably going to be reading it because it’s failing. They are in effect already having a bad day. Don’t make it worse. They will assign that negative emotion to you.
This has started to affect all of my design thinking. I’m probably only reading your code because I’m trying to hunt down a bug (or my attempt to add new functionality has failed spectacularly). Every time I run into a function that contains code smells, I have to stop long enough to figure out if those smells are the bug I’m looking for or something else. In a particularly bad codebase, like the one I inhabit now, by the time I finally find what I’m looking for, I no longer recall why I was looking for it in the first place.
This is not good code. It’s shitty code written by people who aren’t emotionally secure enough to write straightforward code. Or are running from one emergency to the next, all day every day. Or both. Or are young developers just copying what they see around them.
Is a source of complexity, yes. But it really depends on what you're doing.
Any orthodoxy is a poor substitute for actual thought.
The problem with declaring simplicity and clarity to be the final goal is that it's not an objective truth.
Most these dogmas are anecdotal at best with very little to no empirical backing within surrounding context, but people will stand by certain approaches as if they were well tested theories like general relativity, newtonian mechanics, or QED.
tl;dr - Context!
The problem with a lot of the principles developed from the Smalltalk days is that different environments have differing cost/benefit results for different tasks. This means that many practices which are awesome in a dynamic, programmer-is-god-of-runtime environment like Smalltalk are going to bog down in other environments. (see below)
What this reveals about the above observation, is that reading and conceptualizing large chunks of operation of the code has a higher cost/benefit in the accustomed environment. My day job is in C++, and because of the compile times, that's certainly the case in that environment. However, in an environment like VisualWorks Smalltalk, where the debugger is 10X nimbler and has literally made students in my Smalltalk classes cry out, "This debugger is GOD!" the cost/benefit trade-offs are very different. Most of the time, code is read and edited in the debugger, and flipping between multiple classes in that context happens automatically by navigating. Also, the work for navigating relationships in the object model in a clean code base is an entirely different order of magnitude. Instead of O(2^n) on n == distance by references, it's O(n), because there's no fan-out for different kinds of references. It's all done by "message sends" or calls of a method.
A lot of damage over the past 3 decades has been done by very smart OO people who tried to transplant methodologies from Smalltalk to C++ and Java. If one wants to be a step above, then look at the largest 7 or so cost/benefit trade-offs that affect the methodology. Then adjust accordingly.
When I'm in C# by contrast, I often get irritated that I can't just have a couple of functions that I can reuse; no, I have to create a static class, ideally in a namespace, and do a bit more. I do try to minimize the complexity of my classes and separate operational classes that work on data from data classes that hold values.
This is how I view Entity Component System. It's kind of re-mapped Object Oriented, with strict Areas of Concern.
You can do all 4 of those at the same time!
(In my current side project, I have relaxed performance requirements, so I'm experimenting with taking the relational aspect up to 11.)
With modern IDEs, going to class/method definitions is a breeze. In my experience people who write big walls of code are often those who don't know how to leverage modern IDEs.
Are the complex functions unit-testable? Do they depend on other units of work or other libraries? Does it have multiple responsibilities? You are probably following most of what SOLID entails.
I find it funny that HN consistently bashes SOLID, I feel like SOLID has been misrepresented. They are _guidelines_ for development, they do not dictate everything. They might influence or support a decision.
Those who bash SOLID: have you worked on gigantic projects that are in active development for decades? I advocate for SOLID because I have witnessed first-hand its great benefits. I have built and worked on plenty of projects that apply these principles, and I have seen wonderful open-source projects that embrace them. Of course I have also seen adverse effects from it (eg. your linked article complains of innumerable, non-sensical interfaces), but that is mostly due to inexperienced developers who don't get it. And of course there are some devs/architects who go overboard, introducing premature abstractions, etc. To those people I say YAGNI. The point being: following SOLID doesn't guarantee nice design. It is easy to produce shit code following SOLID, but it is even easier without it.
At the end of the day, there are trade-offs. I think using SOLID as development guidelines produces a scalable codebase divided cohesively into units of work.
And of course I have seen adverse effects from it...but that is mostly due to inexperienced developers that don't get it.
Whether SOLID is seen to pay off in the medium term, or the long term, or the very long term, is dependent on environment. In some environments, the payoff is apparent sooner. In others, it's only longer term. This is why inexperienced developers may not get it.
Of course, that brings up the question, "How can we better communicate the benefits?" Can we document and present those, from the actual history of the project?
EDIT: To better relate this to my other comment in this thread: a problem with SOLID in environments where there's a lot of bookkeeping for the compiler's sake, and where compile times slow down the edit-test cycle, is that less experienced developers are going to first notice, "Hey, this stuff makes me flip back and forth between files!" If they never see the benefits, they're naturally going to conclude it's a bad thing.
Re: your comment about "And of course I have seen adverse effects from it (eg. your linked article complains of innumerable, non-sensical interfaces) but that is mostly due to inexperienced developers that don't get it"
While this is true, there is a deeper implication here. Assume programming talent is normally distributed. Now, ask yourself above what percentile do you have to be in that distribution to truly grasp the how/why of SOLID and to be able to wield it to solve problems. Now, ask yourself what percentile do you have to be below where you just go crazy with creating non-sensical interfaces and thousands of awfully named classes with single responsibilities such as "CustomerCommandMapEmbelisherConverter"?
The real problem is that code bases tend to be horrible (also normally distributed!) because a lot of talent doesn't meet the bar and can't actually produce programs that aren't rubbish. In any org you'll find the quality of the code base is somewhere on a normal distribution. And you will find all the engineers somewhere on a normal distribution. You'll have a couple of brilliant people, a couple of horrible people, a lot of average people.
The only time you truly see exceptional code bases that everyone stops and goes "wow, this is nice!" are the rare times the stars aligned.
1. Single-responsibility Principle. Objects should have only 1 responsibility
Objects by default often have two responsibilities: changing their own state and holding their own state.
2. Open-closed Principle. Objects or entities should be open for extension, but closed for modification.
The very concept of a setter or update method on an object is modification. Primitive methods promoted by OOP immediately violate this principle.
3. Liskov substitution principle. Subtypes can replace parent types.
This principle represents a flaw in OOP typing. In mathematics, all types in a family should be replaceable by the other types in the same family; otherwise they are not in the same type family. The fact that you can implement a non-substitutable subtype that the type checker accepts as correct means that OOP, or the type checker, isn't mathematically sound; in other words, the type system doesn't make logical sense.
4. Interface Segregation Principle. Instead of one big interface have functions depend on smaller interfaces.
I agree with this principle, though composing many small interfaces can lead to high complexity. I don't think it's an absolute necessity.
5. Dependency Inversion principle. High level module must not depend on the low level module, but they should depend on abstractions.
This is a horrible, horrible design principle. Avoid runtime dependency injection always. Modules should not depend on other modules or abstractions, instead they should just communicate with one another.
If you are creating a module that manipulates strings, do not create the module in a way such that it takes a Database interface as a parameter and then proceeds to manipulate whatever the database object outputs.
Instead create a string manipulation module that accepts strings as input and outputs strings as well. Have the IO module feed a string into the input of the string manipulation module. Function compositions over dependencies... Do not build dependency chains.
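A minimal sketch of that arrangement in Python (`normalize_name` is an invented example): the string module is pure, and composition happens at the I/O edge.

```python
# Hypothetical sketch: function composition instead of injected dependencies.

def normalize_name(raw: str) -> str:
    # Pure: string in, string out. No Database parameter, no mock needed.
    return " ".join(part.capitalize() for part in raw.strip().split())

def load_and_normalize(fetch):
    # The I/O edge does the composing: fetch() is whatever source you have.
    return normalize_name(fetch())
```

Usage: `load_and_normalize(lambda: "  ada LOVELACE ")` returns `"Ada Lovelace"`.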
1. Single-Responsibility Principle: Whether or not an object can change its own state has nothing to do with how many responsibilities it has. Even a pure function that takes a single argument can have multiple responsibilities. To give a silly example, a spell-check-and-update-wordcount function/object would violate SRP.
2. Open-Closed Principle is about modification of the code. It means the function/object should do its thing so well, you never have to touch its code. But if you want to modify the behavior of your program you should have a way to insert your new function/object so the new behavior is added.
3. Liskov Substitution Principle: "the type checker isn't mathematically sound" No type checker is mathematically sound. Obviously correct statement, since even math itself cannot be automatically proven. However what LSP basically warns against is to say: "a square is a special type of rectangle". It's not, because if you take this 'rectangle' and multiply its width by 2 and its height by 3, you either end up with a 'not a square', which is unexpected or you don't end up with 2xwidth by 3xheight, which is also unexpected.
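The square/rectangle point, sketched as code: the subtype passes the type checker but breaks the behavioral contract that callers of the parent rely on.

```python
# The classic LSP violation: Square type-checks as a Rectangle,
# but cannot honor Rectangle's scaling contract.

class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def scale(self, fw, fh):
        self.w *= fw
        self.h *= fh

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

    def scale(self, fw, fh):
        # Forced to choose: stay square (ignoring fh, surprising callers)
        # or honor both factors (no longer a square). Either way, a caller
        # substituting Square for Rectangle gets unexpected behavior.
        self.w *= fw
        self.h *= fw  # stays square; fh is silently ignored
```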
4. Interface Segregation Principle: agreed
5. Dependency Inversion Principle: "modules should just communicate with one another" is exactly what DIP warns against. Your monthly-activity-calculator shouldn't 'just' communicate with the user-database module. It should take a user-collection interface and let another part of the program that is responsible (SRP!) for setting up that system provide it. That way this program-setup can decide based on configuration / the environment to pass it a redis-user-collection instead of an oracle-user-database.
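A minimal sketch of that DIP arrangement (all names invented): the calculator depends on a small abstraction, and setup code supplies the concrete collection.

```python
# Hypothetical sketch of dependency inversion: high-level logic depends
# on an abstraction; program setup picks the concrete implementation.

class UserCollection:
    # The abstraction the high-level module depends on.
    def active_users(self):
        raise NotImplementedError

class InMemoryUsers(UserCollection):
    # One concrete implementation; a redis- or oracle-backed one
    # could be substituted by the setup code without touching callers.
    def __init__(self, users):
        self._users = users

    def active_users(self):
        return [u for u in self._users if u.get("active")]

def monthly_activity(users: UserCollection):
    # High-level logic: counts activity without knowing where users live.
    return len(users.active_users())
```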
2. Why does the open/closed principle only have to apply to code? What if it could apply to everything? You gain benefits when you apply this concept to code; what stops the benefits from transferring over to runtime structures? SOLID for OOP is defined in an abstract, hand-wavy way; for FP many of those guidelines become concrete laws of the universe.
>No type checker is mathematically sound.
3. A type checker proves type correctness. Languages can go further with automated provers like Coq or Agda; those are mathematically sound. Your square example just means that types shouldn't be defined that way; it means the type checker isn't compatible with that method of defining types.
5. I highly disagree. There should only be communication between modules NEVER dependency injection. The monthly activity module should not even accept ANY module, or module interface as a parameter. It should only accept the OUTPUT of that module as a parameter. This makes it so that there are ZERO dependencies.
For example don't create a Car object that takes in an engine interface. Have the engine output joules as energy and have the car take in joules to drive. Function Composition over Dependency Injection. (Also think about how much easier it is to unit test Car without a mock engine)
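The Car/engine idea, sketched as plain function composition (the joule figures are made up for illustration): the car consumes the engine's output, not the engine itself, so no mock is needed to test it.

```python
# Hypothetical sketch: function composition instead of injecting an Engine.

def engine_output(fuel_litres):
    # Invented conversion: each litre yields 30 MJ.
    return fuel_litres * 30_000_000  # joules

def car_distance(joules):
    # Invented efficiency: 1 km per 2 MJ.
    return joules / 2_000_000  # kilometres

# Composition at the edge; neither function knows about the other:
distance = car_distance(engine_output(2))  # → 30.0 km
```

Testing `car_distance` needs only a number, never a mock engine.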
If you get rid of dependency injection, you get rid of the dependency inversion principle. DIP builds upon a very horrible design principle which makes the entire principle itself horrible.
But the rest of your examples are very far off base.
I somewhat agree with Single Responsibility being perhaps not quite right as "single" isn't always desired, appropriate or possible. But the general philosophy is absolutely on point. It's an instruction to carefully consider whether a component should be responsible for something or not and if not then think about where else that responsibility should lie. It gets pretty gnarly when you see things that just have way too many responsibilities. They become unwieldy. An object being responsible for holding and manipulating its state isn't what I would class as a responsibility. That is below the line. That's thinking far too granularly about what a responsibility is.
Same for open/closed. There is a great picture that represents open/closed of a human body (being the closed system) that you can put different layers of clothes on (open for extension) which I think beautifully captures the essence of the principle. When this is done right it's an absolute blessing. You mostly find it in frameworks that have a life-cycle and at certain points (say before anything happens or after everything has happened) they provide an overridable method with no behavior. That method allows you to insert logic the framework designers didn't think to cater for, but also keeps the framework life-cycle intact.
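The life-cycle-hook pattern described here might look like this minimal sketch (names invented): `run()` is closed for modification, while the deliberately empty hook methods leave the framework open for extension.

```python
# Hypothetical framework sketch: open/closed via no-op life-cycle hooks.

class Pipeline:
    def run(self, data):
        self.before_run(data)   # extension point, no-op by default
        result = [d * 2 for d in data]
        self.after_run(result)  # extension point, no-op by default
        return result

    def before_run(self, data):
        pass

    def after_run(self, result):
        pass

class LoggingPipeline(Pipeline):
    def __init__(self):
        self.log = []

    def after_run(self, result):
        # New behavior inserted without touching Pipeline.run().
        self.log.append(len(result))
```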
The example of dependency inversion just doesn't make sense. If you're creating a string manipulation library it should take strings and nothing else. It doesn't need anything else. If you're creating a string manipulation library in the first place you probably should just use the standard library. Maybe that's just a bad example, but I still don't agree with your sentiment with always avoid runtime dependency injection.
Forgetting the string manipulation example - I'm curious what you have in mind when you say "modules should instead communicate with one another". How does this communication take place? What language are we talking about and what does some code look like? Mostly what comes to mind when I think of that are either newing up an instance of a class or calling a static method, or perhaps making some kind of http/tcp request?
See what I wrote about composition. I also have an example about a Car and engine class later in the thread.
Function Composition > Dependency Injection.
> ...I agree with your comments on Liskov Substitution Principle, that the fact it's even possible is a weakness in OOP type systems.
It's not actually a weakness in the type system. It's a weakness in the language. The language should never allow such types to be constructed. Basically, inheritance is not compatible with the type checker. Get rid of inheritance and you get rid of this problem.
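The classic illustration of this (a TypeScript sketch of the well-worn Rectangle/Square example): the subclass type-checks, yet silently breaks an invariant callers rely on.

```typescript
class Rectangle {
  constructor(public width: number, public height: number) {}
  setWidth(w: number): void { this.width = w; }
  setHeight(h: number): void { this.height = h; }
  area(): number { return this.width * this.height; }
}

class Square extends Rectangle {
  // Keeps the sides equal -- silently changing the base class's contract.
  setWidth(w: number): void { this.width = w; this.height = w; }
  setHeight(h: number): void { this.width = h; this.height = h; }
}

function stretch(r: Rectangle): number {
  r.setWidth(4);
  r.setHeight(5);
  return r.area(); // a caller "knows" this is 20...
}
```

`stretch` returns 20 for a Rectangle but 25 for a Square, even though Square is a perfectly legal subtype as far as the checker is concerned.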
The problem with Object oriented programming is not any of these things. The problem is that an object is a bad choice for a unit of work. A good analogy is bricks and construction. If a brick represents a unit of work to construct a wall, object oriented programming represents a brick with jagged faces.
This is why, no matter how deeply you follow these guidelines you will always have to build custom "interface bricks" (aka glue code) to compose jagged bricks together.
GoLang solves the problem with objects by getting rid of objects all together, but the fundamental procedural function that it uses as a primitive of composition is also jagged in a way. GoLang procedures do not compose very well.
There is a deeper primitive that programmers should model their code around that gets rid of the usage of misshapen bricks as the building block of programs. Bricks that compose with other bricks without glue. I leave it to you to find out what this primitive is, as you use it everyday to build misshapen objects.
The original article talks about readability and simplicity. It does not talk about composability and modularity. Both of the aforementioned traits have a strange relationship with readability and clarity. More modularity does not necessarily mean less readability in all cases, but it certainly changes readability.
Yes, and by far the worst part of them is the dependency hell problem of a sufficiently mature front-end. It gets sand in your cornflakes during development, testing, and debugging.
Imagine you're writing a front-end in this mature codebase. What injection bindings do you need to instantiate a FooUIWidget, which contains a BarUIWidget and BazUIWidget, and a few new data types, relevant to the business logic of FooFeature?
Who the fuck knows! You have a rat's nest of nested dependencies, you have no idea what part of the system owns which data change, or what cascading effects that data change has. Oh, and when you decide to move FooUIWidget out of ParentUIWidget into UncleUIWidget, good luck figuring out which dependencies it needs, which need to be removed from Parent, which need to be added to Uncle, and which need alternative bindings added (because Uncle already provides them, but they are not what Foo needs - your code compiles and gets no run-time Dependency Injection errors, but your values are silently bound wrong behind the scenes).
Unless, of course, you do something sensible, and instead of having each bit of your system depend on 20 things provided by dependency injection, just build the bloody thing right the first time, by using event listeners and MVVM.
Oh, and of course, neither your compiler nor your DI framework is mathematically capable of telling you that half of the dependencies you're providing for Parent are no longer used for anything. Go get your coal miner's hard-hat, finish up your will, sign the waiver about black lung, and go delving through your dependencies.
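The event-listener alternative mentioned above can be sketched roughly like this (a minimal, invented bus; real MVVM frameworks add much more):

```typescript
type Handler = (payload: unknown) => void;

// A deliberately tiny event bus: widgets publish and subscribe by event name
// instead of receiving a web of injected dependencies.
class EventBus {
  private handlers = new Map<string, Handler[]>();

  on(event: string, handler: Handler): void {
    this.handlers.set(event, [...(this.handlers.get(event) ?? []), handler]);
  }

  emit(event: string, payload: unknown): void {
    for (const handler of this.handlers.get(event) ?? []) handler(payload);
  }
}
```

A FooUIWidget would call `bus.on("user:changed", ...)` and whoever owns the data calls `bus.emit(...)`: moving a widget between parents doesn't change its wiring, because the only shared dependency is the bus itself.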
How is SOLID responsible for these issues? The acronym represents guidelines.
What you are describing sounds awful. IoC can get nasty when developers are inexperienced and off-the-leash.
Keep in mind that everyone is ignorant. We all have different experiences with different technologies on different codebases.
It's the domain where proper architecture matters the most, because it's hard to get it right.
> It sounds like you are pointing out specific issues that you encountered when working on a particular project.
If by 'particular project', you mean every single FE project that I've worked on, that made unopinionated use of dependency injection, sure.
> How is SOLID responsible for these issues?
'LI' doesn't add any value for these problems (you don't use all that much inheritance, or define very many interfaces, when working on front-ends), and 'D' is actively harmful, because it paves the road to dependency injection. I find posting events to a bus a lot easier to deal with than a spaghetti of objects interacting with injected dependencies.
Yes they are?
If it's not referenced, it's not needed. That's pretty straightforward?
The compiler can tell if something isn't referenced - but it can't tell if a provider that goes into a DI framework is never invoked.
The DI framework can tell (at run-time) that you're asking for something that is missing a provider. It, quite obviously can't tell (at run-time) that you're never going to ask for something in the future.
It sort of can, though. It depends on the circumstance. If there is an interface with one implementation (or even multiple implementations) and neither that interface nor any of its implementations is referenced anywhere, then you can reason that those dependencies might be provided to the DI container but will never be requested, as they can't be. In that case - delete them.
In the case where you have one interface which has multiple implementations and the interface is referenced, I agree. Nothing will tell you if there is one implementation sitting there entirely unused forever.
If you wanted to solve that problem you probably could. In practice I don't find it a big issue.
2. The interface is referenced, the implementations might not even be bound to it, depending on run-time conditions.
Even trivially scoped dependency injection is a fantastic way to make it impossible for your compiler, and very hard for a human, to reason about your dependencies.
For the most part I evaluate software clarity by how many times I have to hit "goto definition" to see what is actually happening. But this takes us away from what the author was attempting to say. In my opinion, 95% of clarity comes down to writing good abstractions, and it is next to impossible to articulate what a good abstraction is.
It's like describing the taste of salt.
People tend to tolerate clever code that is out in leaf functions they don’t have to step through, and knowing they could revert the change makes them tolerate the clever longer.
It's not a good name. Nor is it concise, informative or anything vaguely useful.
But I get the strong feeling it comes from someone spouting off a line along the lines of "we need to name things for what they do" and then someone else just coming up with that.
In your case I'm going to guess doBillCustomer() is about 2.5k lines long, with a copy of its entire logic duplicated; it branches based on annual or monthly billing, subtle bugs have been fixed in one copy but not the other, and now they've diverged such that they're only 93% the same, which means all bets are off. There are 15 levels of nesting and it interacts with at least 5 external systems along the way.
Am I close?
I'd sooner say clever is a skill, that gets better with practice. Compare it to solving tricky math problems - the more you solve them, the more clever you 'expend', the better you get.
Exactly! However, keep in mind the difference between rehearsal and performance, and act according to the cost/benefit. It's very analogous to what stand-up comedians do. They do a form of practice, where they try material out with friends in private. There's another level of practice when they're on the road, but in small, obscure venues. Those are the times when they go "courageous," take risks, and try new things out. It's a different matter entirely, when it's their big HBO special filmed in some huge famous theater. That filmed show is going to be a permanent record which affects their reputation for years after. Think on this, when coworkers check code into production. Code in production might be executed and read years down the road.
There's a level of clever, where things seem complex and abstruse on the surface. There's another level of clever, where things seem clear and simple on the surface, but deep insight went into making things that way. (Then there's a level of faked deep-clever that relies on "automagic," but which isn't as clever as it seemed on the surface and costs a ton of extra debugging time.)
The issue is whether it is sufficiently general to become a standard technique.
If so, you're right. Familiarity with it makes you "cleverer", as it becomes intuitive, and less complex (as you push details down into long-term memory).
But, in that case, IMHO, it's even cleverer to "turn over the detail to the machine", i.e. create an abstraction, to hide the detail.
> There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself. https://wikiquote.org/wiki/Alan_Turing
I'm not making any claim regarding the relation between clever and clear/understandable code. Just that writing clever code, however defined, doesn't expend much of a cleverness resource - if anything, it strengthens it.
That is, looking at each piece of clever code in isolation. If the original claim was meant more that a code base has a limit to how many clever tricks it can contain before it becomes unmaintainable, I'd be more inclined to agree.
Walking strengthens the legs and increases endurance. However, no Roman commander would march his legionaries as if more marching only ever increased capability and they could therefore march forever, as much as they wanted. Instead, it's best to reserve the fast marches for when an attainable goal gives a tactical or strategic advantage.
There's only 24 hours a day to be clever, and there's only a limited number of hours per day a given person can muster the concentration to be clever.
That is, looking at each piece of clever code in isolation.
Which only applies to an isolated problem, as in a coding interview. In a programming project or a startup, it's more like a military campaign, where there will be many, many interrelated problems over many years.
Perhaps you're right, that practicing cleverness will make you cleverer.
But it still takes time and effort - perhaps those are the resources that are limited?
I've challenged quite a lot of implementations where understanding a piece of functionality required the developer to jump between more than 23 files across 8 different projects to implement a very domain-specific piece of functionality. Splitting code into single independent parts introduces simplicity if and only if you are reading each part by itself; but when you layer it all together to get the functionality it delivers and it becomes a tangled web of code, that clever solution was not really clever after all.
One of the ways I complain about particularly bad decomposition (the sort of practices that lead to parodies like Enterprise FizzBuzz) is the ridiculousness of stacktraces for errors in these systems.
We tell people to use delegation but many have trouble differentiating delegation from indirection. You know things have gotten particularly bad when you have traces with the same sequence of three or more functions appearing three times. Debugging this is a nightmare. It's literally a maze of logic. This type of code has to be memorized to be understood, which further turns a saner person's attempts to refactor it into an existential threat - moving things around to be discoverable and debuggable comes at a cost to the people who already memorized it.
There is also DAMP vs DRY and “desertification” of code, which is related to the good versus bad indirection problem.
When you get a prolific “clever” person who suffers from these problems, the whole team suffers with them (which is why I need a new job...).
Someone above mentioned flame graphs, which visualize a trace of every call in the system, typically to show where the CPU spends its time. In thinking about this thread, I now want to look into using them as a measure of time spent by the reader.
My overall philosophy on code is that we should use our best days to protect ourselves from our worst days. I expend most of my clever on trying to make things look easy, which is a bit of a challenge come review time because one of the hallmarks of really clever reasoning is that people react by saying things like, “well of course it works that way”.
/shamelessly stolen from somewhere I'm too lazy to look up.
It's not. It's possible to not know how to do it well, though. It's something you have to experiment with, learning various approaches and instrumentation.
When you are writing code, you generally know what it is you are trying to achieve. When debugging code, you're frequently trying to find out why a problem is happening in the first place; often in code that someone has written or that you wrote months or even years ago.
I'm not saying that debugging isn't a skill you can learn, but it's a superset of writing code, so it's by definition harder.
But being harder is not even about the skills themselves; it's about the mental effort it takes to do something. And designing and implementing things is certainly much, much harder than digging into something already designed and implemented.
Code that is trivial really needs no elaboration, but occasionally I feel like I gotta go crazy with a bunch of hashmaps of lambdas and all that jazz, and I don't think that there's inherently anything wrong with that.
However, when I do that, I make sure I document it like crazy with comments, so that when I have to look at the code two weeks later, I at least can figure out what I was doing.
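For the curious, the "hashmaps of lambdas" style might look something like this TypeScript sketch (operation names invented), with the comments doing the explaining:

```typescript
// A dispatch table: operations live in a map of lambdas instead of a switch.
// Unknown operations fall through to the identity function.
const handlers: Record<string, (n: number) => number> = {
  double: (n) => n * 2, // double the input
  square: (n) => n * n, // multiply the input by itself
  negate: (n) => -n,    // flip the sign
};

const apply = (op: string, n: number): number =>
  (handlers[op] ?? ((x: number) => x))(n);
```

Adding an operation is one entry in the table, which is the appeal; the cost is that control flow is now data, which is exactly why the comments matter.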
There is a point beyond which accurate documentation is more difficult than improving the code to negate the need for some of that documentation. That makes the code cleverer still (without air quotes). This is not far off from Antoine de Saint-Exupéry's comments on perfection being achieved when there is nothing left to take away.
Also, what is up with the comparator function being used this way in most articles nowadays? If I am not mistaken, `return a - b` is the much better solution - and don't say that it is considered too clever :)
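For context, in JavaScript/TypeScript the default sort compares elements as strings, which is why the one-line numeric comparator matters:

```typescript
// Default sort coerces to strings: "10" < "2" < "33" lexicographically.
const byDefault = [10, 2, 33].sort();

// The comparator form: negative/zero/positive return value decides the order.
const numeric = [10, 2, 33].sort((a, b) => a - b);
```

`byDefault` stays `[10, 2, 33]` while `numeric` comes out `[2, 10, 33]`.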
> This is reasonable when you’re dealing with functions which fit on a slide, but in the real world complicated functions– the ones we’re paid for our expertise to maintain–are rarely slide sized, and their conditions and bodies are rarely simple.
Cleverness should be used in constrained situations like performance (fast inverse square root comes to mind), and comments explaining the cleverness are important.
I should note it did a terrible job at commenting.
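For readers who haven't seen it, here is a TypeScript port sketch of that fast inverse square root bit hack (the original is C, from Quake III Arena; the magic constant is 0x5f3759df). This is exactly the kind of code that earns its keep only with comments:

```typescript
// Approximates 1/sqrt(x) by reinterpreting the float's bits as an integer,
// applying the magic-constant hack, then refining with one Newton-Raphson step.
function fastInvSqrt(x: number): number {
  const buf = new ArrayBuffer(4);
  const f32 = new Float32Array(buf); // view the same 4 bytes as a float...
  const u32 = new Uint32Array(buf);  // ...and as an unsigned 32-bit integer
  f32[0] = x;
  u32[0] = 0x5f3759df - (u32[0] >>> 1); // the infamous magic-constant line
  let y = f32[0];
  y = y * (1.5 - 0.5 * x * y * y); // one Newton-Raphson iteration
  return y;
}
```

For example, `fastInvSqrt(4)` comes out around 0.499, within a fraction of a percent of the true 0.5.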
Ah, "comp"utation. "comp"lex numbers? "comp"licated function? "comp"onent? "comp"anion numbers, is that a thing? "comp"rehensive example?
Saving three characters of typing is more important than being clear?
Simple and dirty beats complex and clean any day. - Learn C the Hard Way
But yes, O(n^2) vs. O(n) can matter for large n.
This is a terrible example and a code change I would never approve. The clarity-over-cleverness goal is good but not with these kinds of cases.
Maybe on to something there, golang code bases I've seen are a complete mess because of how "simple" the language is. Hopefully more people start realizing this and moving on to better languages.
Of course, not that there's anything wrong with properly done OO, it's just that golang's implementation of it is botched.
The calibre of the standard Soviet infantry weapon is 7.62mm. In 1930, a 7.62mm `TT' pistol was brought into service, in addition to the existing rifles and machine-guns of this calibre. Although their calibre is the same, the rounds for this pistol cannot, of course, be used in either rifles or machine-guns.
In wartime, when everything is collapsing, when whole Armies and Groups of Armies find themselves encircled, when Guderian and his tank Army are charging around behind your own lines, when one division is fighting to the death for a small patch of ground, and others are taking to their heels at the first shot, when deafened switchboard operators, who have not slept for several nights, have to shout someone else's incomprehensible orders into telephones-in this sort of situation absolutely anything can happen. Imagine that, at a moment such as this, a division receives ten truckloads of 7.62mm cartridges. Suddenly, to his horror, the commander realises that the consignment consists entirely of pistol ammunition. There is nothing for his division's thousands of rifles and machine-guns and a quite unbelievable amount of ammunition for the few hundred pistols with which his officers are armed.
I do not know whether such a situation actually arose during the war, but once it was over the `TT' pistol-though not at all a bad weapon-was quickly withdrawn from service. The designers were told to produce a pistol with a different calibre. Since then Soviet pistols have all been of 9mm calibre. Why standardise calibres if this could result in fatally dangerous misunderstanding?
Ever since then, each time an entirely new type of projectile has been introduced, it has been given a new calibre...
[West Germany and France] have excellent 120mm mortars and both are working on the development of new 120mm tank guns... [W]hat happens if, tomorrow, middle-aged reservists and students from drama academies have to be mobilised to defend freedom? What then? Every time 120mm shells are needed, one will have to explain that you don't need the type which are used by recoilless guns or those which are fired by mortars, but shells for tank guns. But be careful-there are 120mm shells for rifled tank guns and different 120mm shells for smoothbore tank guns. The guns are different and their shells are different. What happens if a drama student makes a mistake?
The Soviet analysts sit and scratch their heads as they try to understand why it is that Western calibres never alter.
(This specific chapter can be read here: http://militera.lib.ru/research/suvorov12/06.html)
There are too many developers who are "super good at writing clever code", and not enough who are comfortable with profiling and analytics.
> Even the open source Common Lisp compilers, written by arguably the lispiest of Lispers, don’t have a lot of “cleverness”.
To which I replied my agreement:
> Don't write clever. Write clear.
It's a sentiment that you find attached to Lisp programming style fairly often, although ironically there is a whole lot of barely readable Lisp code out there.
Personally, I think the code is (nearly) worthless crap if someone with skill has to spend as much time parsing it as the writer did writing it.
That said, I still agree with the basic idea. An if-else-if chain is easier to reason about than if-return, and representing information with enum or algebraic data type can be more robust than using a combination of booleans.
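That enum/ADT point is easy to show in TypeScript, where a discriminated union makes the meaningless boolean combinations unrepresentable (state names invented):

```typescript
// With separate `loading` and `error` booleans, { loading: true, error: true }
// would be representable but meaningless. A discriminated union rules it out.
type State =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "ready"; data: number[] };

function describe(s: State): string {
  // A plain if-else-if chain over the discriminant; every case is visible.
  if (s.kind === "loading") return "loading";
  else if (s.kind === "error") return `error: ${s.message}`;
  else return `got ${s.data.length} items`;
}
```

The compiler narrows `s` in each branch, so `s.message` is only accessible in the error case, which is the robustness the comment is after.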
At its base level, this is missing an important thing: context. Clear to who? Something can be very clear to one person, yet opaque to another. In writing, and in programming, you need to decide which audience that you're talking to, and write something that they will understand.
This often comes up in discussions about jargon. Jargon is a way to increase the density of communication. This is often perceived as a loss of clarity, but the question again is, clarity for who? For two experts, discussing complex things in their field of expertise, jargon can increase clarity, by referring to shared context. Higher bandwidth communication allows for more discussion of more complex topics, because you're not wasting time and mental energy re-explaining things from first principles.
Put another way, there is always some shared context going on; that's what language actually is in the first place. I have used a number of words in writing this comment, but I haven't set out any definitions; that's because I'm assuming that you know English in order to read my comment. If I were trying to communicate to a child, I wouldn't be using all of the words that I'm using here, because it is too complicated for them to comprehend. But, trying to explain the topic of this comment to that child would take much longer, and be much more difficult.
So yeah, that's just one way in which discussions like these tend to frustrate me. Writing is a rich, wonderful thing, that has a huge variety of uses. Pigeon-holing it in this way makes me feel, well, dispirited. Or should I say "sad"...
(I do believe that, for both commercial development of software, as well as commercial development of writing, "keeping things simple" can be important, for various reasons. But not everything we do in life must be in the service of business needs.)
> Put another way, there is always some shared context going on; that's what language actually is in the first place. I have used a number of words in writing this comment, but I haven't set out any definitions; that's because I'm assuming that you know English in order to read my comment. If I were trying to communicate to a child, I wouldn't be using all of the words that I'm using here, because it is too complicated for them to comprehend. But, trying to explain the topic of this comment to that child would take much longer, and be much more difficult.
Thanks for this. I always struggle to articulate it.
> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it? https://wikiquote.org/wiki/Brian_Kernighan
> Every weightlifter is fully aware of the strictly limited size of his own muscles; therefore he approaches the weight lifting task in full humility and among other things he avoids heavy weights like the plague.
While we're throwing fun quotes around:
"programs must be written for people to read, and only incidentally for machines to execute"
— Abelson & Sussman
On the other hand, why aren't programs all written in machine language, in hex or octal? Why invent assembly language? Why invent macro assemblers? Why invent high-level languages?
Programmers are not unique in this regard. Mathematicians and logicians do not write all their dealings in English. They've developed a highly specialized notation for writing compact and precise descriptions of their ideas.
Furthermore, many layfolk might even say that the language of jurisprudence isn't quite English, despite how it looks. The jargons of many fields, like “legalese”, serve the same purpose as mathematical notation, which is itself the same purpose as programming languages: to enable ease, brevity, exactness, and precision in their respective domain-specific communications.
You can see a little of all that in the same preface by Abelson & Sussman, which goes on to say:
These skills are by no means unique to computer programming. … We control complexity by establishing new languages for describing a design, each of which emphasizes particular aspects of the design and deemphasizes others. ¶ Underlying our approach to this subject is our conviction that “computer science” is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. … Mathematics provides a framework for dealing precisely with notions of “what is.” Computation provides a framework for dealing precisely with notions of “how to.”
> Because that's how people used to read things when the A&S quote was from, in 1979.
Clearly it's not how people always read things back then, as it's not how people always read things now. People read programs, sometimes on screens, sometimes on paper, just like they read mathematical formulas. In some cases, programs have been written on paper in some formal language that hadn't actually been implemented, simply because that language was seen as an effective means to communicate them. We usually identify it as pseudocode, ranging from "pidgin algol" to "plausibly python" to the M-expressions of the early LISP manuals.
M-expressions are still used in the LISP 1.5 manual of late 1962, despite the fact that 2.5 years after the LISP 1 manual, the LISP system was still incapable of reading M-expressions—the programmer had to translate them to S-expressions by hand before entering them. The Appendix B of the 1.5 manual gives the code for the interpreter, as well as some rationale:
This appendix is written in mixed M-expressions and English. Its purpose is to describe as closely as possible the actual working of the interpreter and PROG feature.
(It turns out to be possible to get an even closer description with a formal notation for the semantics, as was done with the definition of Standard ML, but such formalism has yet to catch on).
This emphasis on the importance of notation for the exact expression of thoughts and precise description of "ideal objects" is not particularly new, and it certainly predates the invention of the computer:
… I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. …
I believe that I can best make the relation of my ideography to ordinary language clear if I compare it to that which the microscope has to the eye. Because of the range of its possible uses and the versatility with which it can adapt to the most diverse circumstances, the eye is far superior to the microscope. Considered as an optical instrument, to be sure, it exhibits many imperfections, which ordinarily remain unnoticed only on account of its intimate connection with our mental life. But, as soon as scientific goals demand great sharpness of resolution, the eye proves to be insufficient. The microscope, on the other hand is perfectly suited to precisely such goals, but that is just why it is useless for all others. ¶ This ideography, likewise, is a device invented for certain scientific purposes, and one must not condemn it because it is not suited to others.
(from the preface of «Begriffsschrift» by Gottlob Frege, 1879, translated by Stefan Bauer-Mengelberg).
In 1882, Frege further explained: “My intention was not to represent an abstract logic in formulas, but to express a content through written signs in a more precise and clear way than it is possible to do through words.”
> People don't code as if code was primarily for people to read.
I agree. I am often guilty of this too, although I usually forget about it until I try to read a program I'd written some time ago and discover that it requires some careful study to figure it out.
It's a shame, really, because we should be writing readable code. But after I'd read this statement, I was thinking: how do people code, then? And I was reminded of this little bit from Paul Graham's essay “Being Popular”:
One thing hackers like is brevity. Hackers are lazy, in the same way that mathematicians and modernist architects are lazy: they hate anything extraneous. It would not be far from the truth to say that a hacker about to write a program decides what language to use, at least subconsciously, based on the total number of characters he'll have to type. If this isn't precisely how hackers think, a language designer would do well to act as if it were.
It is a mistake to try to baby the user with long-winded expressions that are meant to resemble English. Cobol is notorious for this flaw. A hacker would consider being asked to write `add x to y giving z` instead of `z = x+y` as something between an insult to his intelligence and a sin against God.
The goal of code, however, is simply to communicate a process to a computer. Or rather, when that process is subject to change over time, to communicate a process to a computer, simply.
I would tend to disagree. Code is not about communicating with a computer. It is about communicating with humans. The computer does not care how the code is written - it is the humans that have difficulty with it. In a way, programming is the translation of a language that the computer understands into a form that humans can comprehend. Not so much the other way around. In this regard, clever is fine for a computer, but it is not always understandable to a human.
For code that you only use yourself? Maybe do it, but I would guess that after one or two years you would not immediately understand what your former self fabricated.
But once that simple process was first communicated to a computer in 1965, what next? More complex processes, surely? And more and more complex processes as the computing power increases?
Point being, a little bit of "cleverness" in one place may save a lot of effort down the line. I've also never been too fond of that quote.
I write a filter/map, which is slow in some languages, but looks readable and if I need better perf I rewrite it with a for loop.
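In TypeScript that trade-off looks something like this (invented example); the two versions are interchangeable, so the rewrite can wait until a profiler asks for it:

```typescript
// Readable version: intent first, one step per combinator.
const evensDoubled = (xs: number[]): number[] =>
  xs.filter((x) => x % 2 === 0).map((x) => x * 2);

// Hand-rolled version: one pass, no intermediate array.
function evensDoubledFast(xs: number[]): number[] {
  const out: number[] = [];
  for (const x of xs) {
    if (x % 2 === 0) out.push(x * 2);
  }
  return out;
}
```

Both produce the same result for any input, which is what makes the "optimize later" move safe here.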
> verbiage: excessively lengthy or technical speech or writing.
> synonyms: verboseness, padding, superfluity, redundancy, long-windedness, protractedness, digressiveness, convolution, circuitousness, rambling, meandering; waffling, wittering, "there is plenty of irrelevant verbiage but no real information"
Oh thanks, that was so useful.