For ages now, I've been telling people that the best code, produced by the most experienced people, tends to look like novice code that happens to work --- no unnecessary abstractions, limited anticipated extensibility points, encapsulation only where it makes sense. "Best practices", blindly applied, need to die. The GoF book is a bestiary, not an example of sterling software design. IME, it's much more expensive to deal with unnecessary abstraction than to add abstractions as necessary.
People, think for yourselves! Don't just blindly do what some "Effective $Language" book tells you to do.
(For starters, stop blindly making getters and setters for data fields! Public access is okay! If you really need some kind of access logic, change the damn field name, and the compiler will tell you all the places you need to update.)
> Could it be that best practices are designed to make sure mediocre programmers working together produce decent code?
Yes, it is, but the issue is that the industry should move away from the idea that software can be done on an assembly line. It is better to have a few highly qualified people capable of writing complex software than a thousand mediocre programmers that use GoF patterns everywhere.
I'm a security analyst, not a developer, so please forgive my ignorance, but how do you become one of the highly qualified coders without first spending some time being a mediocre developer?
You don't. There's no way around having to spend some time in the trenches --- but we can at least minimize the amount of time developers spend in the sophomoric architecture-astronaut intermediate phase by not glorifying excess complexity. Ultimately, though, there's not a good way to teach good taste except to provide good examples.
"Best" practices may be a misnomer, but I don't believe it's possible to execute large projects without some kind of standardization. It is inevitable that in some cases that standardization will hinder the optimal strategy, but it will still have a net positive impact. Perhaps if we started calling it "standard practices" people would stop acting like it's a groundbreaking revelation to point out that these practices are not always ideal.
I think it's more that the best practices were adopted for specific reasons, but they are not understood or transmitted in a way that makes those reasons very clear. That is, a 'best practice' tends to solve a specific kind of problem under a certain set of constraints.
Nobody remembers what the original constraints are or if they even apply in their current situation, even if they are actually trying to solve the same problem, which they might not be.
This spirals as well: I've spent quite a lot of time recently helping people at the early stages of learning programming, and it's taken me a while to stop becoming frustrated with their misunderstandings. Sometimes it is just because something basic isn't clicking for them, but a lot of the time it's down to me trying to explain things through a prism of accumulated patterns that are seen as best practice/common sense, but which, stepping back and viewing objectively, are opaque and sometimes nonsensical outside of very specific scenarios. There's a tendency to massively overcomplicate, but you forget very quickly how complicated; then you build further patterns to deal with the complexity, ad infinitum.
Yep, the biggest issue we have with software is that every developer knows the Single Responsibility Principle, and no developer knows why. Everyone knows to decouple and increase cohesion, but few know what cohesion is.
That's it. "Best practices" is, essentially, coding bureaucracy. That's not a pejorative; bureaucracy is quite necessary.
I have a rough idea of "why OO?" but in practice, it can be pretty hard on things like certain kinds of scaling, projects that require a goodly amount of serialization/configurability and the like.
There is a spectrum from "it just works" to "I've applied every best practice". Bad programmers who spend huge amounts of time on best practices will end up wasting a lot of time over-optimizing, but it may have the benefit of reducing risk where they lack deep understanding, or it may shine a light on that risk --- that is, if they apply the practices correctly.
You're ignoring how "best practices" frequently add negative value. Design style isn't a trade-off between proper design and expedience. It's a matter of experience, taste, and parsimony.
Those "Effective $Language" books make arguments for their advice, which you are free to find compelling, or not. Back when I first read Effective Java, maybe about a decade ago, I thought it was over-complicated hogwash that I knew better than. When I read it again starting about a year ago, I found the arguments for most of its advice very compelling, based on problems I've run into time and again in my own, and other people's, code, and not just in Java. YMMV I suppose!
The backlash against "engineered" software definitely seems real, and I think that's great - questioning assumptions is critical - but I think a lot of the insinuations about peoples' motivations and talents are unnecessary and honestly kind of silly. Most of us are just trying to find ways to avoid issues we've seen become problems in other projects in the past, it's not some nefarious conspiracy against simple code.
Are you seriously saying that the best code is an untestable mess of big God classes?
Because in my experience this is by far the type of code written by inexperienced programmers.
Abstractions and interfaces are the best way to make a system testable and extensible, and it has nothing to do with using a pattern just because you read about it in the GoF book 5 minutes ago.
And using public fields in a non trivial project is a sure recipe for disaster.
Better than ugly, beautiful is.
Better than implicit, explicit is.
Better than complex, simple is.
Better than complicated, complex is.
Better than nested, flat is.
Better than dense, sparse is.
Counts, readability does.
Special enough to break the rules, special cases are not.
But beaten by practicality, purity is.
Silently passed, an error should never be.
Unless explicitly, is it silenced.
In the face of ambiguity, the temptation to guess, refuse you must.
One, preferably only one, way to do it, there should be.
Not obvious, it might be.
Better than never, is now.
But often better than right now, is never.
If hard to explain, bad it is.
If easy to explain, good it may be.
Namespaces are a honking good idea - more of them we should do!
So which rule takes precedence here? A namespace is obviously not flat; at the very least it's more nested than code without one.
That's the problem I have with people praising the Zen of Python as if it means anything. It's like a bible: you just pick the verse you like to justify your action, even if it conflicts with other rules. Then you praise the whole thing for being so wise.
It means a lot: good python code follows it. However, yes, some of the verses conflict, because it turns out that good advice sometimes contradicts itself. All I can say is don't go too far in either direction.
You would prefer PHP[1] where the standard library has no namespaces?
[1] I refer to "classic" PHP. No clue if anything in PHP5+ fixed this, though I doubt they would make such a breaking change even across major revisions.
Yes, I would. And though PHP is widely agreed to be a piece of shit (it seems; I don't work with PHP, so I'm only relaying the popular sentiment), that doesn't tarnish the idea by association (which is what I sense you might be trying to do).
ISO C and POSIX also have a flat library namespace, together with the programs written on top. Yet, people write big applications and everything is cool. Another example is that every darned non-static file-scope identifier in the Linux kernel that isn't in a module is in the same global namespace.
Namespaces are uglifying and an idiotic solution in search of a problem. They amount to run-time cutting and pasting (one of the things which the article author is against). Because if you have some foo.bar.baz, often things are configured in the program so that just the short name baz is used in a given scope. So effectively the language is gluing together "foo.bar" and "baz" to resolve the unqualified reference. The result is that when you see "baz" in the code, you don't know which "baz" in what namespace that is.
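In Python terms, the concern reads roughly like this (a generic illustration using the standard library, not something from the comment itself):

```python
# With an unqualified import, the call site hides where `join` comes from:
from os.path import join   # or was it from posixpath? or a local helper?

print(join("etc", "passwd"))          # which `join` is this, at a glance?

# Keeping the qualifier in the name makes the origin visible at every use:
import os.path

print(os.path.join("etc", "passwd"))  # unambiguous, greppable, man-page-able
```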
The ISO C + POSIX solution is far better: read, fread, aio_read, open, fopen, sem_open, ...
You never set up a scope where "sem_" is implicit so that "open" means "sem_open".
Just use "sem_open" when you want "sem_open". Then I can put the cursor on it and get a man page in one keystroke.
Keep the prefixes short and sweet and everything is cool.
I was a big believer in namespaces 20 years ago when they started to be used in C++. I believed the spiel about it providing isolation for large scale projects. I don't believe it that much any more, because projects in un-namespaced C have gotten a lot larger since then, and the sky did not fall.
Scoping is the real solution. Componentize the software. Keep the component-private identifiers completely private. (For instance, if you're making shared libs, don't export any non-API symbols for dynamic linking at all.) Expose only API's with a well-considered naming scheme that is unlikely to clash with anything.
PHP namespacing in many ways ruined the language. I'm not just talking about the poorly chosen use of the backslash '\' path separator or the fact that namespaces aren't automatically inferred which causes me to have to write "use" endless times at the top of the file which destroys my productivity when working outside an IDE.
I'm talking about the heart of PHP which is stream processing. Why in the world would you destroy the notion of simply including other source files in order to wedge in this C++ centric notion of a namespace? Before "namespace" and "use", the idea was that all of the files included together can be treated as one large file, and sadly that conceptual simplicity has been lost.
Also the lost opportunity of having objects be associative arrays like Javascript, combined with namespacing, have convinced me that perhaps PHP should be forked to a language more in-line with its roots. I haven't tried Hack or PHP7 yet but I am apprehensive that they probably make things worse in their own ways.
I think of PHP as a not-just-write-only version of Perl, lacking the learning curve of Ruby, with far surpassed forgiveness and access to system APIs over Javascript/NodeJS. Which is why it's still my favorite language, even though the curators have been asleep at the wheel at the most basic levels.
The standard library isn't namespaced. If you want to use a stdlib function inside an NS, you can without issue. If you want to use a stdlib class, or any top level class, it needs backslash before its name or you need to import it.
There is a community move towards a standard namespacing with PHP-FIG. This is useful and we are seeing lots of progress on internals thanks to the work by the community.
Like a lot of things, PHP namespaces are, or were, a big mess. Lots of progress is being made to improve them, though I think a lot of the mess must remain.
I've thought about creating a project that packages up various categories in the stdlib into namespaces with consistent inputs and outputs. That would be nice but the process isn't a lot of fun.
(an attempt to translate to standard English grammar, for the benefit of other non-native English readers, who may also struggle to parse this)
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases are not special enough to break the rules.
But purity is beaten by practicality.
An error should never be silently passed.
Unless it is silenced explicitly.
You must refuse the temptation to guess in the face of ambiguity.
There should be one, preferably only one, way to do it.
It might not be obvious.
Now is better than never.
But never is often better than right now.
It is bad if it is hard to explain.
It may be good if it is easy to explain.
Namespaces are a honking good idea - we should do more of them!
I think the advice is more relevant in the context of the specific language; it's not universal.
In python you can go from attribute access to using a property (getter/setter) without breaking anything.
The same is not true in a language like Java where obj.foo is always a direct field access distinct from calling a method like obj.getFoo(), so going from public fields to getters is not backwards compatible and can be painful.
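A minimal sketch of that migration in Python (class and field names are invented for illustration):

```python
# Before: a plain attribute; callers write acct.balance directly.
class Account:
    def __init__(self, balance):
        self.balance = balance

# After: the same name becomes a property; calling code does not change at all.
class Account:  # redefined here only to show the before/after side by side
    def __init__(self, balance):
        self._balance = balance

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

acct = Account(100)
acct.balance = 50   # still plain attribute syntax, now validated behind the scenes
```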
>In python you can go from attribute access to using a property (getter/setter) without breaking anything.
True, but that should be avoided if possible. Python 'properties' violate the principle of "explicit is better than implicit". Once you realize "Oops, I need an accessor function here", the lazy programmer says "Aww, grepping for all uses of .foo and replacing them with .getFoo() will take 20 minutes. Instead, I'll just redefine it as a property and no one will notice." If you care about quality, go the extra mile: make it clear to the people reading your code that a function is being called.
Properties are a kinda nice language feature, but they are so frequently misused that I think the language would have been better off without them. They encourage bad habits.
However, it's still a simple enough change that you shouldn't build getter/setters unless you're already pretty sure it'll be changed. OTOH, if you're using a language with generated getter/setters (ruby, smalltalk, lisp), just do it.
I use getters and setters, but still think you will waste more time arguing about this issue, than leaving it be and finding out you have to change them down the line.
Edit: also, especially for getters, I like my accessors to be simple accessors. Hiding too much code behind them can be unpleasantly surprising, so as they deviate further from accessors I like to rename them - e.g. CalculateXxxx rather than GetXxx. Fewer surprises. Given that, I potentially have an issue with continuing to call it GetXXX or SetXXX in the face of certain changes.
Using an IDE with an understanding of the language you have is even less painful. Having the IDE automatically refactor a public field to use getters and setters is a breeze with managed languages like Java and C#.
The same can be said about internal interfaces with only one implementation. If there is really a need for an interface, why not add it later, once it's actually needed, instead of creating additional overhead for something we may never need?
If there are inexperienced programmers working on it, as in my preamble, then almost surely it is in whatever language doesn't prohibit mutating objects indiscriminately.
In F#, Haskell, and other languages that enforce immutability, it obviously isn't a problem; in Python it most surely is.
Code can be underabstracted, but it can also be overabstracted - and abstracted with the wrong abstractions. And fixing the latter sometimes involves a temporary stay at "untestable mess of big God classes" when you remove the bad abstractions to clear the way for creating better ones. Not because it's the best code - far from it - but because it's slightly less terrible code.
> And using public fields in a non trivial project is a sure recipe for disaster.
Ergo, all non trivial C projects are disasters? Well, maybe, but I disagree on the reasons.
Language enforced encapsulation is a useful tool, but some people take it to the deep end and assume that if their math library's 3D vector doesn't hide "float z;" in favor of "void set_z(float new_z);" and "float get_z() const;" (one of which will probably have a copy+paste bug because hey, more stupid boilerplate code), they'll have a sure recipe for disaster. Which I suspect you'd agree is nonsense - but would also follow from reading your words a little too literally.
In my experience, in quite big projects, people had a tendency to mutate objects in the wrong place and for the wrong reasons.
A field encapsulated behind a getter without a setter certainly helps.
But if it has to be able to be mutated in some cases, you're stuck. This is a place where Scheme's parameterize may be useful: Define a closure with the setter in scope, and write it so that when it's called with a lambda as an arg, it will raise an error unless those certain conditions are met, in which case, it will use parameterize to make the setter available in the lambda's scope. Or I suppose it could pass it in, which would be slightly simpler...
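A rough Python analogue of that idea (not Scheme's parameterize itself; the class and method names below are invented) gates the setter behind a context manager:

```python
from contextlib import contextmanager

class Config:
    def __init__(self, value):
        self._value = value
        self._writable = False

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        if not self._writable:
            raise RuntimeError("value may only be set inside allow_mutation()")
        self._value = new_value

    @contextmanager
    def allow_mutation(self):
        # Temporarily expose the setter, roughly like parameterize
        # rebinding a dynamic variable for the extent of a call.
        self._writable = True
        try:
            yield self
        finally:
            self._writable = False

cfg = Config(10)
with cfg.allow_mutation():
    cfg.value = 20     # allowed here
# cfg.value = 30       # outside the block this would raise RuntimeError
```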
The issue is when you have, for example, a class encapsulating a bunch of flags and allowing the user either to set each flag separately or to set a bitfield representing all of them.
In C, you might be able to do some magic, but in Java, you’ll need setters and getters there – you can’t even do final fields there.
Luckily, in C#, you can just use accessors, and maybe in Java with Lombok, too.
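To make the shape of that concrete, here's a sketch in Python (the property syntax differs from the C#/Java machinery discussed above; the flag names are invented):

```python
class Flags:
    VERBOSE = 0x01
    DEBUG   = 0x02

    def __init__(self, bits=0):
        self.bits = bits   # the whole bitfield stays directly settable

    @property
    def verbose(self):
        return bool(self.bits & self.VERBOSE)

    @verbose.setter
    def verbose(self, on):
        # Setting the individual flag just rewrites the shared bitfield.
        if on:
            self.bits |= self.VERBOSE
        else:
            self.bits &= ~self.VERBOSE

f = Flags()
f.verbose = True
print(hex(f.bits))   # 0x1
```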
I think he is rather referring to cascades of function calls/types and "custom solutions". Of course abstractions are a great tool when used properly. Thinking of a best practice from Object oriented design: low coupling but high cohesion.
When you use abstractions, you get low coupling. But like everything in life, this has a price: you may need an extra line of code to instantiate the abstraction and sometimes have to write an extra getter because it's not perfect yet. That's a fine price to pay, unless you go and use way too many abstractions that are nested deeply. It may have some aesthetics, but it can be hell to debug and overly complicated to extend.
So that's why one must also focus on high cohesion. I really like the modern JavaScript way, the imports/requires are for low coupling and the build automation is for cohesion. Anyways, these things are best practices as well but not sure if the author took those into account... ;)
I don't think that's seriously what he's saying. You've over-interpreted to project the "logical" extreme of the POV expressed on the author. This is a trope in online debate that needs to die.
I've returned recently (for domain-specific reasons) to sets of (opaque) god-classes with good interfaces when needed. No state in a god-class is visible except through the interface.
Since the domain requires a great deal of serialization, the interfaces are usually strings over that. In cases, it's even easier to just open a socket to the serialization interface.
So far, I'm able to pub/sub periodic data streams, but it'd be pretty easy to add "wake me when" operators and such.
It forces the use of one central table of names across callbacks (which can be grown dynamically), but it's very, very nice to work with.
YMMV. The domain is instrumentation and industrial control, which just fits this pattern nicely. All use cases can be, and are, specified as message sequences.
I've been using the delegation pattern a lot lately as a nice way to combine the best bits of god classes (few dependencies) with the best of SRP (small easy to test parts).
This way A, B, C (and many others) only depend on X, the delegator (which exposes a number of interfaces for practically everything), while X depends on everything and the kitchen sink; X contains no real functionality.
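A minimal sketch of that shape in Python (all names are invented; the commenter's A/B/C correspond to things like SignupHandler here):

```python
class Storage:
    def save(self, record):
        print("saving", record)

class Mailer:
    def send(self, to, body):
        print("mailing", to, body)

class Services:
    """The delegator: exposes everything, implements nothing itself."""
    def __init__(self, storage, mailer):
        self._storage = storage
        self._mailer = mailer

    def save(self, record):
        return self._storage.save(record)

    def send(self, to, body):
        return self._mailer.send(to, body)

# Each handler receives only `services`, so it is easy to test with a fake.
class SignupHandler:
    def __init__(self, services):
        self.services = services

    def handle(self, user):
        self.services.save(user)
        self.services.send(user, "welcome!")

SignupHandler(Services(Storage(), Mailer())).handle("alice")
```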
> And using public fields in a non trivial project is a sure recipe for disaster.
Why, though? Surely it's the programmers' job to access what they need and leave alone what they don't. As someone else said, Python has no concept of public/private and it works okay.
If you make the statically typed field public you lose the ability to change the implementation without breaking the clients. (Getters/setters have the same problems, but not as pronounced). And no, you don't always have the ability to recompile everything that's using your code.
An argument for properties is to hide the implementation details. It keeps idiots from changing variables they shouldn't (if they really want to, there's sometimes reflection), and it hides implementation detail variables from the autocomplete box.
Blind application needs to die in general. SOLID is a good idea (it should just be SID, IMHO, but anyways), so are design patterns... when they make sense. When you think "I need to delegate object construction to different subclasses contextually" use a factory. Don't use a factory when you don't think that. When you need to choose an approach that should be given by the caller, no, for the love of god don't use the Strategy pattern. The strategy pattern is a hack that belongs in the past, and you should use lambdas instead because it's the 21st freaking century, not 1996, and we all have lambdas now. So don't use strategy, unless you're stuck behind, and if you do, feel bad about it.
/rant
Anyways, yeah, just use your brain, insist on defined interfaces, and if something might be suboptimal, allow for it to be changed, and you'll be fine.
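To make the strategy-vs-lambda contrast above concrete, here's a small generic sketch in Python (names invented for illustration):

```python
# Strategy-as-class, 1996 style: a whole object just to carry one method.
class SnakeCaseStrategy:
    def format(self, name):
        return name.lower().replace(" ", "_")

def render(name, strategy):
    return strategy.format(name)

# Strategy-as-function: the caller passes the behaviour directly.
def render2(name, fmt):
    return fmt(name)

print(render("User Name", SnakeCaseStrategy()))
print(render2("User Name", lambda n: n.lower().replace(" ", "_")))
```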
> The strategy pattern is a hack that belongs in the past, and you should use lambdas instead
In that case, your lambda object is your strategy. One of the good points that the "Design Patterns" authors make and that everyone else ignores is that these patterns are names for recurring structures that pop up independently of implementation language and environment. Some silly functor object is one way of implementing the more abstract "Strategy" pattern. Your lambda is a better one in many cases. It's the same high-level concept.
Well, yes, but if you ever write a class for C#, modern Java, or pretty much any language save C++ or Python (and even then you should write functions) that has the word "strategy" in it, you're doing it wrong, and should be punished for reinventing language features using OO methodology.
Was it Steve Yegge who mentioned the Perl community calling Design Patterns "FP for Java"?
There is a name for "strategy pattern" that existed long before the GoF book: higher-order function. But what's worse, in that it causes confusion, is when names are repurposed, "Functor" is a useful abstraction in programming, but is not related to your usage above.
I never said a Functor was a HOF, but it's not far wrong. The implementation of the morphism-mapping part of a functor is a higher order function. Show me an example of Strategy that is not essentially a HOF.
I asked for an example of Strategy that wasn't just a HOF, but I now see you do agree with this. Yes your example is technically a HOF as it returns a function. So I guess your point is that HOF is not specific enough, although one might argue that in general usage it usually does imply function-value arguments.
I mean, that's not the general use I see, so we clearly have different social circles. The thing is, a HOF is a mechanism, the Strategy pattern is an intent. I would in fact argue that there is at least one HOF that takes functions as arguments and isn't an instance of strategy: call/cc.
call/cc does not actually use the passed in function to determine how any action should be done. All it does is provide a capture of the current continuation as an argument. That's it. So it's not a strategy, it's just a HOF.
Well, Java didn't add lambdas to write make-counter; it was map and fold that got them envious. In fact, IIUC, mathematically make-counter is first-order, counting the nesting of function types. In other words, I don't understand why many describe it as a HOF at all. Map and fold are second order. Callcc, ignoring its Scheme implementation as a macro, would be third order. One could argue that the second-order argument to callcc is the strategy, with the continuation being the strategy.
Call/cc isn't a macro. It's actually a special form, although in CPS-based schemes, it could probably be implemented as a function if there was a lower level continuation primitive.
And semantically, a continuation is pretty much never a strategy.
As for make-counter not being higher-order, that's just not true. A higher-order function takes and/or returns a function. make-counter returns a function: it's higher-order.
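For readers following along, make-counter is the classic closure example; a generic Python rendering of it (not any particular commenter's code) looks like this:

```python
def make_counter():
    count = 0
    def counter():       # the returned value is itself a function,
        nonlocal count   # closing over `count`
        count += 1
        return count
    return counter

c = make_counter()
print(c(), c(), c())     # 1 2 3
```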
No, it's disputed, for the reasons I gave. But I've already accepted that many regard make-counter to be a HOF. I was careful and said make-counter is not a second-order HOF, which Strategy is.
Callcc can be implemented as a function, no continuation primitives needed. Haskell has examples in its libraries.
Lastly, regarding semantics, I see no difference between Strategy and a second-order HOF. Yes the scope of Strategy is supposed to much more limited, but I don't see value in this. I concede that others might do.
Having thought about this further, I think it is probably only correct of me to talk about the "order" of "functionals" as in functions to scalars in mathematics. The term "rank" is better suited here. Category theory supports the popular definition of higher-order function and gives a different meaning to "order". So make-counter is rank-1 but still higher-order. Apologies for the previous post.
The ServiceLocator anti-pattern seems a bit silly: either just depend on the object directly, or if there may be different objects used throughout the system, pass one in.
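Roughly, the difference the comment describes looks like this in Python (a generic sketch; the names are made up):

```python
class Mailer:
    def send(self, msg):
        print("sent:", msg)

# Service-locator style: the dependency is looked up from a global registry.
registry = {"mailer": Mailer()}

class ReportJob:
    def run(self):
        mailer = registry["mailer"]   # hidden dependency; invisible in the signature
        mailer.send("report ready")

# Passing it in: the dependency is explicit and trivially faked in tests.
class ReportJob2:
    def __init__(self, mailer):
        self.mailer = mailer

    def run(self):
        self.mailer.send("report ready")

ReportJob().run()
ReportJob2(Mailer()).run()
```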
Still, apply common sense. Sometimes the probability of adding something is so high and the cost of making an extension point so low that it makes sense to design with extensibility in mind. These extension points are rare, though, and usually occur at major module boundaries (e.g., plugins), and don't need to be scattered through random bits of your code.
You also have to factor in the cost to maintain your extension which you currently have no use for, which most people forget.
It also might be easy to include now, but your extension might complicate a new feature request. Or the new feature might complicate your previously simple extension. If it's still unused, don't be afraid to throw it away then either.
Because sometimes those choices have a large effect on the amount of work/mental tax in the future - for example, if you know that a new feature will have to interop with yours in the near future and the cost is low to implement the right extensibility point now, it would be absolutely stupid not to - I would call that bad engineering that costs time & effort.
Obviously one doesn't want to go down the rabbit hole too early, but the other extreme is just as bad.
The cost to add it is often much higher, once the code has high fan in (lots of code paths depend on it).
I am definitely a fan of not over engineering, but I'm more of a fan of thinking. Think about your problem and your use cases, and about your own ability to predict the future. If you can predict future changes with high probability, then go ahead and design for those. If you're not a domain expert, you probably can't predict the future at all (and you probably grossly underestimate how bad your predictions are), so you should stick to the bare minimum.
The GoF book is a dictionary. When it came out, we all got names for patterns we use when appropriate.
The thing that must die is over-reliance on formalisms where they don't add value.
(Because there absolutely are places where they do.)
Having the taste to use the right approach for each job is what sets experience apart!
That would be Design Patterns: Elements of Reusable Object-Oriented Software, whose four authors are sometimes called the Gang of Four (GoF). It established a vocabulary of common patterns.
> For ages now, I've been telling people that the best code, produced by the most experienced people, tends to look like novice code that happens to work --- no unnecessary abstractions, limited anticipated extensibility points, encapsulation only where it makes sense.
I love this.
"Perfection is reached, not when there is nothing left to add, but when there is nothing left to take away." -- Antoine de Saint-Exupery
Have to disagree a little on getters and setters though. They're tremendously useful as places to set breakpoints when debugging. Well, setters are, anyway; I guess it's rarer that I'll use a getter for that. Anyway, perhaps we can agree that the need for these is a design flaw in Java; C# has a better way.
> For ages now, I've been telling people that the best code, produced by the most experienced people, tends to look like novice code that happens to work --- no unnecessary abstractions, limited anticipated extensibility points, encapsulation only where it makes sense.
In other words, the best code is the simplest code that works. It usually tends to be very flexible and extensible anyway, because there is so little of it that understanding it all and modifying it becomes easy. The most experienced programmers are the ones who can assess a problem and write code to capture its essence, and not waste time doing that which isn't necessary.
I've observed that there is a "spectrum of complexity" with two very distinct "styles" or "cultures" of software at either end; at one end, there is the side which heavily values simplicity and pragmatic design. Examples of these include most of the early UNIXes, as well as later developments like the BSD userland. The code is short, straightforward, and humble, aptly described by "novice code that happens to work".
At the other end, there's the culture and style of Enterprise Java and C#, where solving the simplest of problems turns into a huge "architected" application/framework/etc. with dozens of classes, design patterns, and liberal amounts of other bureaucratic indirection. The methodology also tends to be highly process-driven and rigid. I don't think it's a coincidence that the latter is heavily premised on and values "best practices" more than anything else.
Here's another one of the "rebellion" articles against "best practices": http://www.satisfice.com/blog/archives/27
^^^This. The tendency for a certain type of software engineer to go around telling everyone else they're doing it wrong reminds me strongly of this scene from Justified: https://www.youtube.com/watch?v=LG4hOjJ9tEs.
The problem with their advice is that they are not master Foo. They could tone it down a little, for their wisdom is not absolute. Some rules work for some people in some situations, and other rules are just practical conventions.
One piece of good advice I found is to just plain ignore the status quo and follow common sense when the context demands it.
The getter/setter problem is solved very nicely by C#. You can change a field to a property at any time without changing the rest of the code, and a lot of the use cases for get/set can be handled compactly like public int X{get; private set;} instead of having to have 2 variables.
Getters/setters are important when the data structure is used from a different linkage unit than where it is defined, and binary compatibility across versions is important.
That does happen, but usually in fewer cases than most intermediate programmers realize. Instead, they see cases where it is used for real (like winforms or Direct3D) and cargo cult it into all code they write.
> they see cases where it is used for real (like winforms or Direct3D) and cargo cult it into all code they write.
And a lot, a lot, of my CS prof colleagues believe it is deeply important, for reasons most of them are unable to articulate, that in an intro CS1 class all instance variables be private and accessed only through getters and setters. This is actually baked into the College Board's course description for AP CS A, and a point or two (out of 80) often hinges on it in every exam, so it is near-universally taught in high-school level CS classes (in the US). Sigh.
The kinds of things you need to do in order to maintain a stable ABI are not the kinds of things you should apply to all parts of your program. That way lies madness.
BTW, you don't need accessors even for public ABIs. stat(2), for example, doesn't need "accessors" for struct stat and it's been stable for decades. The same idea applies to Win32 core APIs.