1. It gave me specific goals in terms of my own improvement, and the chapter titles did a good job of being a stand-in for challenging my own intuition.
2. In mentoring other developers, the ones I shared the book's lessons with improved dramatically, literally week over week. I personally watched one of my mentees go from producing code that was hard for me to step through without raising half a dozen issues, to producing code where my largest criticisms were ecosystem-dependent.
3. It created common language for discussing with teammates why certain code felt wrong.
4. It significantly reduced time spent refactoring later. The subtleties in some of the points make a re-read valuable as well. One of the better examples of this was "Make code just a little generic": often we attempt to code things as generically as possible and end up making what is effectively code soup that doesn't help solve the problem. Instead, building specifically for the problem at hand, while trying to spot places it can be made just a little bit more generic, has altogether eliminated refactorings I know I would've had to do later if I hadn't been thinking of it.
5. The book knows what it isn't. He even says right in the foreword, and in the related Google talk, that this first version probably isn't the most value it can deliver, but that hopefully by version 3 it'll be there.
For my time and energy, it's been more influential on how I develop software than anything else I've read. Furthermore, despite my coding primarily in functional and/or dynamic languages, the examples never made that a problem, even though they're always based on C. I recommend it highly.
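To make point 4 ("make code just a little generic") concrete, here is a minimal sketch of my own (the function and data are hypothetical, not from the book): the code is written for today's specific problem, with exactly one deliberately generic seam rather than a full abstraction layer.

```python
# Hypothetical example: a CSV export written for the concrete problem at
# hand (dumping orders), with one small generic knob (the field list)
# instead of a fully abstract "serialization framework".

def orders_to_csv(orders, fields=("id", "total")):
    """Render orders as CSV text. `fields` is the single generic seam:
    adding a column later means passing a different tuple, not
    refactoring the whole function."""
    header = ",".join(fields)
    rows = [",".join(str(o[f]) for f in fields) for o in orders]
    return "\n".join([header] + rows)

print(orders_to_csv([{"id": 1, "total": 9.5}, {"id": 2, "total": 3.0}]))
# → id,total
#   1,9.5
#   2,3.0
```

The point is the restraint: a "generic" version would take pluggable formatters, output sinks, and escaping strategies nobody has asked for yet; this version only parameterizes the one axis that plausibly changes.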
Wow, I've often felt this during my time coding. Unnecessary abstractions often just make things more convoluted and increase the boilerplate when adding new things. Great to see this being addressed.
Just Abstract It a Little
That is technically correct, but does it then come at the cost of increased code size and complexity? Sometimes.
I think it's important to realise that these principles are good in general, but you can have too much of a good thing, and there are trade-offs involved as you take them further toward the extreme. I can't say I'm an expert, so I may be wrong about this, but it seems like you really need a sense of what's needed in each scenario: how should this bit of code work, and is that communicated effectively? I find this takes practice to gain a "feel" for, but that feels less teachable than simple principles.
In the end: there is NO right way, only trade-offs. At least know what the tradeoffs are. My book would be a trade-off recognition training guide.
I would state my experience, though. Such as "In my experience, typical shops are more comfortable with pattern X over pattern Y. I have no idea why: it's a psychology question I don't have the funds to answer."
And there'd be a chapter on how to recognize and scrutinize fads.
You are talking about "trade-offs", and I had a similar idea about "values". With these values you would be able to derive the "rules" (the HOW you mentioned) depending on the context and the chosen trade-offs.
On that note, I think the 17 Unix philosophy rules are very interesting: https://en.wikipedia.org/wiki/Unix_philosophy#Eric_Raymond%E... and I also started writing a blog about it: https://github.com/JpOnline/Blog/blob/master/language.md
But another key dimension is dealing with changes in requirements over time. One shouldn't overvalue one's forecasting/guessing ability, and has to hedge the design to some degree. Some of the GoF design pattern writing assumed too much regularity in future changes, for example. I'd like to describe counterexamples that show how assumptions of future regularity can byte one in the proverbial output port.
That's not necessarily the same as UI design, but for example, less about generic API design to be used by a million other developers, and more about in-shop API or stack design for say 20 developers.
The writer takes a very narrow perspective and just "assumes" all of this can be generalized. One of the book's most recurring topics is "complexity", but it is introduced with a narrow, one-sentence, tautological definition without any interesting discussion. All kinds of assumptions are taken for granted and never examined.
So overall, the book so far reads like a bland collection of "do this, don't do that" blog posts. Papers like "Out of the Tar Pit" and speakers like Rich Hickey offer much more interesting discussions of design and complexity, and are definitely more argumentative and "philosophical".
I even feel bad for writing something with such a negative vibe about a book. But its pretentious title really led to such disappointment...
That being said, after recently reading "Out of the Tar Pit", and as a long-time Rich Hickey fan, I completely agree that the model of complexity presented by the book feels simplistic.
I can't quite put my finger on why, though. Maybe because complexity in software has a lot more to do with understanding the problem domain than with just the structure of the code. IMO it was worth the read, but I was slightly disappointed, and it is without a doubt not a be-all-end-all reference. (And it would be unfair to expect it to be.)
The idea behind "deep modules" could be expressed as: learning an interface should save you work, not cost you work. There are a couple of deep insights in that. One is that every new interface you create imposes a cost on the programmer who is reading the code, so you should be sparing with them. New functions and new classes are not free; each one you add makes it that much harder for the maintenance programmer or library user to grok the API as a whole.
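A toy sketch of what a "deep module" looks like in practice (my own example, with invented names, not one from the book): a two-method interface that hides nontrivial machinery, so learning `get`/`put` saves the caller from learning about logs and indexes.

```python
# Hypothetical "deep module": a tiny get/put interface concealing an
# append-only log plus an index, details no caller ever has to learn.
import json


class KVStore:
    def __init__(self):
        self._log = []      # append-only record of every write
        self._index = {}    # key -> position of the latest value

    def put(self, key, value):
        self._index[key] = len(self._log)
        self._log.append(json.dumps({key: value}))

    def get(self, key):
        entry = json.loads(self._log[self._index[key]])
        return entry[key]


store = KVStore()
store.put("a", 1)
store.put("a", 2)          # a later write shadows the earlier one
print(store.get("a"))      # → 2
```

The interface is narrow (two methods) while the functionality behind it is comparatively deep; a "shallow" design would instead expose the log, the index, and the serialization as separate classes the user must wire together.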
The other big insight is in embedding complexity in the code. This was a perspective shift I learned from talking with Ben Gomes (my boss's boss at the time, and now head of Search at Google) while complaining about how messy some of Google's code was. His argument was that the code is complex because the problem is complex. By embedding the complexity into the code, you let the computer deal with it on every search request instead of letting the user deal with it on every search request. The number of times a programmer will look at the messy code probably numbers in the single digits, while the number of searches that code gets per day numbers in the billions, so by making things a little better for the user at the cost of it being much worse for the programmer, you are saving millions of hours in mental tax for humanity.
As programmers, we all have a drive toward simplicity, but it's worth remembering that we exist within an economic reality too. If you want people to use your product, your product has to do something they don't want to do themselves, and if it's easy for them to whip up a quick tool that solves their problem, your product is doomed in the marketplace.
The point the review is making seems to be much more about protocols, and the ability of interfaces to serve as a firewall that lets different implementations interoperate. If you don't specify that interface precisely, you get abstraction leakage, where an interface that seems simple at first can actually become very complicated.
This is a valid and interesting point too, but it's worth remembering that the vast majority of interfaces have only a single implementation. It's not necessary to specify that interface in precise detail as long as the user of that code has a reasonably intuitive view of it. They can always go look up the details as necessary, when the strange behavior occurs, while the "deep module" saves them a significant amount of work as long as they stick to the happy path.
Your point about protocols and multiple implementations nailed down the reason I was frowning after reading that review.
Thank you. I'm going to quote it, and keep quoting it, at the "developer time is more important than machine time" crowd - the one that then goes and externalizes all this "machine time", making it "user's time (and electricity)".
interfaces should be simpler than their implementations
Which the author then goes to some lengths to absolutely reject.
I find this very interesting, because 1) this is the view I’ve always held and 2) I’m having trouble accepting the author’s proposition that an interface and a formal specification are equivalent.
...if they are, then certainly the non-code ‘level 3 artifacts’ that define code behaviour but are not code will, in some cases, be much larger than the implementation.
However, I dispute more strongly that the principle is wrong.
Surely not every implementation is longer than its interface, but in general, striving to avoid bloated, hideous interfaces is the ideal.
Certainly it’s a sliding scale... and the idea of a ‘too simple’ interface that seems simple on the surface but is actually very complex (e.g. the file I/O example) is food for thought.
...but I feel too much is made of absolutely rejecting the point that all interfaces should be simpler than their implementations.
Did the book really claim that?
I want to actually read it myself now.
That's where I could not listen to him anymore. I would entertain the idea that, theoretically, programming languages with features like contracts or specifications could be more robust or effective in some ways or for some applications. And I actually am interested in seeing examples of feature-rich applications written in languages with these types of features.
But I am not convinced of the practicality of these languages or the superiority of the reviewer. The proof is in the pudding. I would challenge the reviewer who has clearly placed their knowledge and skills above the creator of Tcl/Tk and RAMCloud to show the useful projects he has created that prove this superiority.
My experience with him is firstly through Tcl, which I like more than he does now (time for a TIP: a "-jo" switch to [unset] which removes the complaint about unsetting non-existent variables), and his book on it, and various papers. His work is at once both powerful and humble. Note how, without much emotion, he suggests how Tcl came to be less prominent, and how even at Tcl's height, he explained without ego why it was there, and how it would leave. His papers on threads and comments on performance and testing are also lovely reads. I encourage any developer or architect to read or re-read them.
 https://www.youtube.com/watch?v=bmSAYlu0NcY (thx `quaunaut)
 https://vanderburg.org/old_pages/Tcl/war/0009.html (This is The Law, and it is good.)
"In other words, take a conditional out of the spec for the function. But in Section 10.5, he tells us that we should add a conditional to the spec of a function, namely in making Java’s substring method defined for out-of-bounds indices. I’m not completely sure he’s wrong (as I discuss in my Strange Loop talk, it comes down to: is there a clean way to describe this behavior?), but I find his claims that this makes the code “simpler” only slightly more credible than his claims about the Unix file API."
Ousterhout contrasts that with the similar operation in Python. Java's substring throws exceptions if the indices are out of range; Python returns the part of the string that is in range. For example, calling it with a start, end pair where start > end results in an empty string, not an exception; in my experience it's much easier to use. And I doubt the formal specification is any larger, to address the author's point.
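The clamping behavior described above is easy to see directly in Python's slicing, which never raises for out-of-range indices:

```python
# Python slicing clamps out-of-range indices instead of raising,
# unlike Java's String.substring, which throws for bad indices.
s = "hello"
print(repr(s[3:1]))    # start > end → '' (empty string, no exception)
print(repr(s[2:100]))  # end past the string is clamped → 'llo'
print(repr(s[-99:2]))  # start clamped to the beginning → 'he'
```

Whether this counts as a "simpler spec" is the point under dispute: the behavior is defined for every index pair, so there is no error case to document, at the cost of silently accepting arguments a stricter API would reject.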
The formal spec of Unix I/O is a horror show, sure, but what does that spec, plus the spec of every filesystem that can back it, look like?
The "right" answer depends on the domain. For a web startup, you may want to live with occasional errors being ignored in order to get your product up quickly and grow market share. Early Amazon.com used to fudge up my orders, but such didn't end the company.
But a bank probably doesn't want the software to "guess" if there is a potential numerical problem. It would rather have the batch process "crash". Otherwise, it could generate thousands of bad transactions and get sued to Pluto.
People used to debate this with MySQL's default "truncation" setting. MySQL was popular with start-ups because you could get it up and change the tables quickly without dealing with persnickety details, living with stuff occasionally falling through the cracks.
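A sketch of the fail-fast side of that trade-off (my own example, with invented function names, not from the thread): a bank-style batch parses amounts strictly with `Decimal` and refuses to guess, so one malformed record aborts the run instead of silently producing thousands of bad transactions.

```python
# Fail-fast batch processing: strict Decimal parsing, no coercion.
from decimal import Decimal, InvalidOperation


def post_batch(raw_amounts):
    """Sum a batch of transaction amounts, aborting on any bad input."""
    total = Decimal("0")
    for raw in raw_amounts:
        try:
            total += Decimal(raw)   # raises rather than guessing
        except InvalidOperation:
            raise ValueError(f"aborting batch: bad amount {raw!r}")
    return total


print(post_batch(["10.00", "2.50"]))       # → 12.50
# post_batch(["10.00", "2,50"]) would raise instead of guessing "2,50"
```

The startup-flavored alternative would swallow the bad record and keep going; which behavior is "right" is exactly the domain question the comments above raise.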
This can be seen for example in the intro chapter to Fowler's 2nd edition of Refactoring, where he pulverizes some code into submission.
One oddity is the view of the class as the only real scope at which one can build abstractions. The idea that one can use little abstractions to build big abstractions, without cluttering the final abstraction to be presented to the user, is nowhere in sight.
Most shockingly to me, I've found no reference to types as a design tool. Sure, precise names help you avoid confusing physical block ids and logical ones. But wouldn't a type-based distinction be way better?
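The block-id example above can be made concrete with `typing.NewType` (a sketch with hypothetical names; the book does not use this): both ids are plain ints at runtime, but a static checker such as mypy rejects code that mixes them up.

```python
# Types as a design tool: distinct id types that a checker can tell apart.
from typing import NewType

PhysicalBlockId = NewType("PhysicalBlockId", int)
LogicalBlockId = NewType("LogicalBlockId", int)


def read_physical(block: PhysicalBlockId) -> bytes:
    return b"..."  # stand-in for real disk access


# The translation table makes the distinction explicit.
mapping = {LogicalBlockId(7): PhysicalBlockId(1042)}

# OK: translate first, then read.
read_physical(mapping[LogicalBlockId(7)])

# read_physical(LogicalBlockId(7))  # mypy error: incompatible type
print(int(mapping[LogicalBlockId(7)]))  # → 1042
```

Precise names only warn the reader; the type distinction lets a tool enforce the rule, which is the point the comment is making.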
Can I also recommend Yaron Minsky's talk on OCaml, specifically the part where he talks about "making invalid states impossible": https://youtu.be/-J8YyfrSwTk?t=1079
I've found this to be the single most valuable programming technique I've learnt.
Richard Feldman dives deeper into the topic with a lot more examples in https://youtu.be/IcgmSRJHu_8?t=73
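The talks linked above use OCaml and Elm; here is a small Python rendition of "making invalid states impossible" (names are my own): a connection is one of two shapes, rather than a bool plus an optional session id, so the state "connected but no session" simply cannot be constructed.

```python
# Make invalid states unrepresentable: a sum-type-style Connection.
from dataclasses import dataclass
from typing import Union


@dataclass(frozen=True)
class Disconnected:
    pass


@dataclass(frozen=True)
class Connected:
    session_id: str  # required: there is no Connected without a session


Connection = Union[Disconnected, Connected]


def describe(conn: Connection) -> str:
    if isinstance(conn, Connected):
        return f"connected as {conn.session_id}"
    return "disconnected"


print(describe(Connected("abc123")))  # → connected as abc123
print(describe(Disconnected()))       # → disconnected
```

Compare with `is_connected: bool, session_id: Optional[str]`, where two of the four combinations are nonsense that every consumer must defensively handle.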
Meta 2: What better way to establish yourself as an expert than disagreeing with a currently recognized one? Not saying he didn't back up his claims. In fact, I now want to read more of his material to check that.
Meta 3: This reading also gets me excited about investigating model driven development more. In this post, he talks a bit about the value of strict types related to such modeling, but lately I've been reading and writing some Clojure code and I'm slightly swinging back to more dynamic typing as a more flexible way of writing modular systems.
Object Design by Wirfs-Brock is a practical OOP book.
I would avoid anything with "clean code" in its name; such books are generally dogmatic and obsessive about TDD.
That's a bit meta, eh? The author of the book being reviewed reviewed a review of his book.
Once you learn a few different platforms and spend some time diagnosing problems, you will start recognizing recurring patterns in those problems. Hopefully this gradually grows into an ability to diagnose difficult issues on unfamiliar platforms.
They are also forbidden to stay stuck for more than one hour without requesting feedback (actually, that's a rule for everyone on my team).
Pairing with seniors is a great way to learn too, you'll discover tons about what makes your company's code special.
You can block stackoverflow.com, or at the very least avoid searching it before having tried to __understand__, not just solve, the issue.