This short book is (IMHO) one of the best on software design. To me the main point of the book is the importance of well-designed abstractions. The "surface area" of a well-designed abstraction is small, easy to understand, and helpful as you reason through your code when you use it. The underlying implementation may be deep and non-trivial, but you find that you don't have any need to worry about the underlying internal details.
In short:
A beautifully designed abstraction is easy to understand and use.
It is so trustworthy that you don't feel any need to worry about how it is implemented.
Finally, and most importantly, it enables you to reason with rigor and precision about the correctness of the code you are writing that makes use of it.
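To make that concrete, here is a minimal, entirely hypothetical sketch (my example, not one from the book) of what a small surface area over a deep implementation can look like:

    // Hypothetical sketch of a "deep" abstraction: a tiny surface (two methods)
    // over an implementation that may be arbitrarily sophisticated. Callers can
    // reason precisely about put/get semantics without ever thinking about what
    // sits behind the interface (an in-memory map, an LSM tree, a remote service).
    public interface KeyValueStore {
        void put(String key, byte[] value);   // associate value with key
        byte[] get(String key);               // return the stored value, or null if absent
    }

Everything interesting hides behind those two signatures, which is exactly what lets you reason rigorously about the code that calls them.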
That book is an almost perfect summary of what is in my head after 30+ years of programming. I often recommend it to new people, as I see them making the same mistakes I did back then.
I recommend not losing time on “Clean X” books; read this book instead. Also, as noted in other comments, you can only “get it” after some real experience, so it is important to practice and develop a “common sense” for programming.
I disagree that the "Clean X" books are a waste of time. They lay a nice ground understanding of what to aim for when writing code, in particular when you're early in your career.
When I was starting as a professional coder years ago, I had an intuitive sense of what good code was, but I had no idea how much actual thought had been put to it by other people. Reading those books was a good step in seriously starting to think about the subject and look at code differently as a craft ("it's not just me, this code smells!" or "hey that's a neat idea, better keep this in mind").
Definitely would recommend to someone starting out their career.
Edit: getting downvoted for a reasonable, justified opinion. Classy.
Don’t know about the rest of the series, but Clean Code isn’t merely a waste of time, it’s worse — it’s actually a net negative, and lies at the root of a number of problems related to incidental complexity.
Not GP but: Personally, I find that book's advice is highly subjective and rooted in aesthetics rather than pragmatism or experimentation. It encourages an excessive number of very small methods and very small classes, and brushes off the problems that this causes.
Not about the book, but: Its influence is malignant. Even Uncle Bob mentioned in a recent interview that he will break the "10 lines per method" rule if need be. But practitioners influenced by the book lack his experience and are often very strict. I even remember a specific Ruby linter that capped methods at 5 or 6 lines max, IIRC. Working in such a fragmented codebase is pure madness. This comment from another user reminded me of some of those codebases: https://news.ycombinator.com/item?id=42486032
EDIT: After living in the "Clean Code world" for half a decade I can categorically say that it produces code that is not only slow to run (as argued by Casey Muratori [1]), but also slower to understand, due to the jumping around. The amount of coupling between incestuous classes and methods born out of "breaking up the code" makes it incredibly difficult to refactor.
I think people get hung up on the small classes/methods and ignore all the rest. One important lesson is that aesthetics do matter and you have to pay attention to writing maintainable code. These are important lessons for a beginning developer. If you think otherwise, you've never worked on a code base that has 300-line functions with variables named temp, a and myVar.
Regarding short functions: yes, having them too short will absolutely cause problems, and you should not use this as an absolute rule. But when writing code it's very useful to keep this in mind in order to keep things simple: when you see your functions doing 3 independent things, maybe it's time to break it into 3 sub-functions.
Edit: I see some criticism concerning too small classes, class variables being used as de facto global variables and shitty inheritance. Fully agree that these are plain bad practices stemming from the OOP craze.
Sure, but nobody is saying that aesthetics don't matter. Quite the opposite. People have been saying this for decades, and even government agencies have code-style guidelines. Also, the idea that big procedures are problematic is as old as procedural programming itself.
The problem is that, when it comes to aesthetics, one of the two more-or-less-novel ideas of the book (and the one that is followed religiously by practitioners) is downright problematic when followed to the letter.
> when you see your functions doing 3 independent things, maybe it's time to break it into 3 sub-functions
That's true, and I agree! But separation of concerns doesn't have much to do with 10-lines-per-method. The "One Level of Abstraction per Function" section, for example, provides a vastly better heuristic for good function-size than the number of lines, but unfortunately it's a very small part of the book.
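For what it's worth, here's a rough Java sketch of that heuristic (hypothetical names, my own illustration): the top-level method reads as a sequence of steps at one level, the byte/string-level details live one layer down, and function length then falls out naturally instead of being capped at a line count.

    import java.nio.charset.StandardCharsets;
    import java.util.List;

    // One level of abstraction per function: run() is all "steps",
    // the helpers are all "details".
    class CsvImport {
        void run(byte[] raw) {
            String text = decode(raw);
            List<String> rows = split(text);
            store(rows);
        }

        private String decode(byte[] raw) {
            return new String(raw, StandardCharsets.UTF_8);
        }

        private List<String> split(String text) {
            return List.of(text.split("\n"));
        }

        private void store(List<String> rows) {
            rows.forEach(System.out::println); // stand-in for real persistence
        }
    }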
> I see some criticism concerning [...] class variables being used as de facto global variables
The criticism is actually about the book recommending transforming local variables into instance/object variables... here's the quote: https://news.ycombinator.com/item?id=42489167
If the 3 things are related such that they will only ever be called in order one after the other (and they are not really complex) it’s better to just do all the work together.
But this line of thinking is exactly what's wrong with Clean Code. Just seeing your function doing three independent things is not a signal that you should begin refactoring.
I've worked on code bases with functions that were longer than 300 lines with shorter variable names. Whether this is a problem is completely dependent on the context. If the function is 300 lines of highly repetitive business logic where the variable name "x" is used because the author was too lazy to type out a longer, more informative variable name, then maybe it's possible to improve the function by doing some refactoring.
On the other hand, if the function is an implementation of a complicated numerical optimization algorithm, there is little duplicated logic, the logic is all highly specific to the optimization algorithm, and the variable name "x" refers to the current iterate, then blindly applying Clean Code dogma will likely make the code harder to understand and less efficient.
I think the trick here is to cultivate an appreciation for when it's important to start refactoring. I see some patterns in when inexperienced developers begin refactoring these two examples.
In the first example, the junior developer is usually a little unmoored and doesn't have the confidence to find something useful to do. They see some repetitive things in a function and decide to refactor it. If this function has a good interface (in the sense of the book: it is a black box, and understanding the implementation is not required), refactoring may be harmful. They run the risk of broadening and weakening the interface by introducing a new function. Maybe they accidentally change the ABI. And if all that's changed is the implementation of a function nobody spends time looking inside anyway, because it has a good interface... what's been gained?
In the second example, the junior developer is usually panicked and confused by a Big Complicated Function that's too hard for them to understand. They conflate their lack of understanding with the length and complexity of the function. This can easily be a sign of their lack of expertise. A person with appropriate domain knowledge may have no trouble whatsoever reading the 300 line function if it's written using the appropriate idioms etc. But if they refactor it, it now becomes harder to understand for the expert working on it because 1) it's changed and 2) it may no longer be as idiomatic as it once was.
One of the biggest issues with the book is that it is a Java-centric book that aspires to be a general-purpose programming book. Because it never commits to being either, it sucks equally at both. In much the same way, it's a "business logic"-centric book that aspires to be general purpose, so it sucks at both (and it especially sucks as advice for writing mostly-technical/algorithmic code). This is epitomized by how HashMap.java from OpenJDK[0] breaks almost every single bit of advice the book gives, and yet is one of the cleanest pieces of code I've ever read.
One fundamental misunderstanding in the book, and one that I've heard in some of his talks, is that he equates polymorphism with inheritance. I'll forgive him for never coming across ad hoc polymorphism as present in Haskell, but the book was published in 2009, while Java got generics in 2004. Even if he didn't have the terminology to express the difference between subtype polymorphism and parametric polymorphism, five years is plenty of time to gain an intuitive understanding of how generics are a form of polymorphism.
His advice around preferring polymorphism (and, therefore, inheritance, and, therefore, a proliferation of classes) over switch statements and enums was probably wrong-headed at the time, and today it's just plain wrong. ADTs and pattern matching have clearly won that fight, and even Java has them now.
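To illustrate (a hedged sketch of my own using Java 21 sealed interfaces and pattern matching for switch; the names are hypothetical, not from the book): the switch-based version gets compiler-checked exhaustiveness, something the inheritance-based "one subclass per case" style never gave you.

    // ADT-style modelling in modern Java: a closed set of variants plus
    // exhaustive pattern matching, instead of an open inheritance hierarchy.
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double width, double height) implements Shape {}

    class Area {
        static double of(Shape s) {
            // No default branch needed: the compiler knows every permitted
            // variant, so adding a new one is a compile error here rather than
            // a silently missing subclass override.
            return switch (s) {
                case Circle c -> Math.PI * c.radius() * c.radius();
                case Rect r   -> r.width() * r.height();
            };
        }
    }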
Speaking of proliferation of classes, the book pays lip service to the idea of avoiding side-effects, but then the concrete advice consistently advocates turning stateless functions into stateful objects for the sake of avoiding imagined problems.
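A hedged illustration of what I mean (hypothetical code, not a quote from the book): hoisting locals into fields so that lots of tiny private methods can share them quietly turns a pure function into a stateful object.

    import java.util.List;

    // Locals promoted to fields so the extracted helpers can share them;
    // the object now carries mutable state for no real benefit.
    class StatefulFormatter {
        private StringBuilder out;
        private List<String> lines;

        String format(List<String> lines) {
            this.lines = lines;
            this.out = new StringBuilder();
            appendHeader();
            appendBody();
            return out.toString();
        }

        private void appendHeader() { out.append("REPORT\n"); }
        private void appendBody()   { lines.forEach(l -> out.append(l).append('\n')); }
    }

    // The same logic as a stateless method keeps the data flow explicit.
    class StatelessFormatter {
        static String format(List<String> lines) {
            StringBuilder out = new StringBuilder("REPORT\n");
            lines.forEach(l -> out.append(l).append('\n'));
            return out.toString();
        }
    }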
One particular bugbear of mine is that I've had literally dozens of discussions over the years caused by his advice that comments are always failures to express yourself in code. Many people accept that as fact from reading it first-hand; with others, you can clearly trace the brain rot back to the book through a series of intermediaries. This has the effect of producing programmers who don't understand that high-level strategy comments ("I'm implementing algorithm X") are incredibly information-dense: one single line informs how I should interpret the whole function.
Honestly, the list goes on. There are a few nuggets of wisdom buried in all the nonsense, but it's just plain hard to tell people "read this chapter, but not that one, and ignore these sections of the chapter you should read". Might as well just advise juniors against reading the book at all, and only visiting it once they've had the time to learn enough that they can cut through the bullshit themselves. (At which point it's just of dubious value instead of an outright negative.)
I think you are totally right. The clean X books are not a waste of time. I meant that in the sense of “start here, don’t delay this”. I would recommend: read aPoSD, then Clean X series, then again aPoSD ;)
There tend to be two camps with the Uncle Bob franchise as I see it:
Those that fall for the way he sells it, as the 'one true path', or are told to accept it as being so.
Those who view it as an opinionated lens, with some sensible defaults, but mostly as one lens to think through.
It is probably better to go back to the earlier SOLID idea.
If you view the SRP as trying to segment code so that only one group or person needs to modify it, to avoid cross-team coupling, it works well.
If you use it as a hard rule and, worse, listen to your linter, and mix it in with a literal interpretation of DRY, things go sideways fast.
He did try to clarify this later, but long after it had done its damage.
But the reality is that selling his books as the 'one true path' works.
It is the same reason scrum and SAFe are popular. People prefer hard rules to a pile of competing priorities.
Clean architecture is just ports and adapters or onion architecture repackaged.
Both of which are excellent default approaches, if they work for the actual problem at hand.
IMHO it is like James Shore's 'The Art of Agile Development', which is a hard sell compared to the security blanket feel of scrum.
Both work if you are the type of person who has a horses for courses mentality, but lots of people hate Agile because their organization bought into the false concreteness of scrum.
Most STEM curriculums follow this pattern too, teaching something as a received truth, then adding nuance later.
So it isn't just a programming thing.
I do sometimes recommend Uncle Bob books to junior people, but always encourage them to learn why the suggestions are made, and for them to explore where they go sideways or are inappropriate.
His books do work well as audiobooks while driving, IMHO.
Even if I know some people will downvote me for saying that.
(Sorry if your org enforced these oversimplified ideals as governance)
Robert M. Pirsig discusses qualia in his writings. One objection raised by his antagonists is "quality is just what you like", echoing the idea of broad subjectivity you raise. Yet there is broad agreement on what counts as quality. Among the aspects we agree on are complexity and subjective cognitive load.
Yes, it's subjective, but not entirely. After you've done it for a couple of decades, you start to have a sense of taste, of aesthetics. Some things seem beautiful, and others ugly. It's "subjective", but it's also informed by two decades of practice, so it is far from being purely subjective.
This same identity can be used to provide geometric intuition as to why i*i must equal -1. This is shown in the diagrams at the bottom of http://gregfjohnson.com/complex/.
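Assuming the identity in question is Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$ (my assumption, since the parent isn't quoted here), the geometric picture is that multiplying by $i$ rotates by 90°, so doing it twice rotates 1 by 180°:

$$ i \cdot i = e^{i\pi/2} \cdot e^{i\pi/2} = e^{i\pi} = -1 $$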
A nice way to think about eta-reduction is that it asserts something about the types of expressions in lambda calculus, namely that every expression is in fact a function.
If an expression M can appear in the left position of the function application operation, this implies that M is a function.
By way of analogy, if I have a formula x == x+0, this implies that x is a number.
Or, s == s + '' would imply that s is a string.
So, if M == lambda x. M(x), this is saying that M is a function.
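A small everyday parallel, if it helps (a hedged Java sketch, my own example): replacing the lambda `s -> length(s)` with the method reference `EtaDemo::length` is exactly an eta-reduction, and it type-checks precisely because the thing being wrapped is already a function.

    import java.util.function.Function;

    class EtaDemo {
        static int length(String s) { return s.length(); }

        public static void main(String[] args) {
            // Eta-expanded: a lambda that merely applies the function.
            Function<String, Integer> expanded = s -> length(s);
            // Eta-reduced: the function itself, as a method reference.
            Function<String, Integer> reduced = EtaDemo::length;

            System.out.println(expanded.apply("abc"));  // 3
            System.out.println(reduced.apply("abc"));   // 3
        }
    }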
I work at a medical device company that specializes in radiation treatment for brain tumors.
This thoughtful and profound essay brings home the lived reality of the patients who are treated by our systems.
The writer speaks lived truth that has a tone of heft and substantiality.
Human life is a fragile and temporary gift. Most of us are lucky enough to have a few moments of transporting and profound beauty and joy.
While life's journey has an inevitable end for all of us, we can help each other in innumerable ways to make the journey more bearable, and at times joyful.
I'm an old guy, and have an artificial hip and cataract implants. I'm deeply grateful for the quality of life I've been gifted to receive by the medical people who make these kinds of things possible.
I hope that the brain treatment system I work on will be a similar gift to the lives of at least some of the patients who require that kind of treatment.
>Human life is a fragile and temporary gift. Most of us are lucky enough to have a few moments of transporting and profound beauty and joy.
>While life's journey has an inevitable end for all of us, we can help each other in innumerable ways to make the journey more bearable, and at times joyful.
Very nicely articulated. Thank you.
The series of essays is moving and profound. What we take for granted is a miracle; we don't realize it, and we are caught up in trifles.
Thanks to whoever brought this to our collective attention.
We used LwIP for a project some years ago, and found a very nice way to do system testing.
The project involved multiple microcontrollers communicating over an internal LAN. They used a small embedded kernel named MicroCOS, with LwIP as the IP stack.
We had cross-platform build tools set up, so we could build our stand-alone microcontroller applications either for native execution or, with gcc, compiled to x64 code executable on developer boxes. In the latter case, we implemented the lowest-level link-layer part of LwIP with a mock that used standard TCP/IP! We wrote a small TCP server and would spool up the microcontroller applications, which would then talk to each other on the developer machines as though they were running inside the actual system.
This setup worked really well, and we used it for years during the development effort for the project.
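For anyone curious what that looks like structurally, here's a hedged sketch of the pattern in Java form (the real project was C on MicroCOS with LwIP; every name below is hypothetical): hide the link layer behind a tiny interface, and give the host build an implementation that tunnels frames over an ordinary TCP socket to a small local server.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.Socket;

    // The rest of the stack only ever sees this interface.
    interface LinkLayer {
        void sendFrame(byte[] frame) throws IOException;
        byte[] receiveFrame() throws IOException;
    }

    // Host-side mock: "frames" become length-prefixed messages over TCP,
    // so the applications can talk to each other on a developer box.
    class TcpLinkLayer implements LinkLayer {
        private final DataOutputStream out;
        private final DataInputStream in;

        TcpLinkLayer(String host, int port) throws IOException {
            Socket socket = new Socket(host, port);
            out = new DataOutputStream(socket.getOutputStream());
            in = new DataInputStream(socket.getInputStream());
        }

        public void sendFrame(byte[] frame) throws IOException {
            out.writeInt(frame.length);
            out.write(frame);
            out.flush();
        }

        public byte[] receiveFrame() throws IOException {
            byte[] frame = new byte[in.readInt()];
            in.readFully(frame);
            return frame;
        }
    }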
A person close to me works at a law firm. She was feeling a bit stagnant, so she connected with a recruiter. She got a very solid offer with a significant pay bump. She gave her two weeks' notice to her firm, including appointments with the partners. One of the partners asked her for a half hour. They came back with a massive pay raise, and a promotion to partner if she would stay. She was in a state of shock, but then informed the other firm that she was staying at her current firm.
By way of contrast, an engineering firm I am familiar with had an employee who had been there six years, and knew the company's very complex product inside and out, every nook and cranny. He was one of the only people who had such deep understanding of the system that he could fix any issues that might come up, hardware, firmware, software, everything. He gave his two weeks' notice, and then went to a different job. He's a very talented guy, who would command a very attractive offer, but his talent to the current company is vastly greater than his generic value on the market, because of his detailed knowledge of the product. Although he diligently documented his knowledge, the company was still left in a jam after his departure. It would have been great if the company had fought for him the way the law firm fought for the other individual described above.
“but his talent to the current company is vastly greater than his generic value on the market”
This has always been one of my fears. I would not want to become indispensable to a no-name company while paying the price of being average on the market.
The point is that the market is willing to pay more, which seems counterintuitive since the employee should be much more valuable to their current employer. But, this does seem to happen with some frequency.
Because the Industry thinks that Management/Marketing/Sales are the important "leaders" while Engineers are mere "foot soldiers" and hence replaceable/dispensable as needed. The above is justified as "Business needs", "Investor returns", "Profit next quarter" etc.
Very early on in my career, I had a few good mentors who all told me the same piece of advice - tech (IT, programming, QA, etc) are __cost centers__ to most businesses. No matter how valuable you are, your pay is not going to scale the same way no matter how good your performance reviews are. I’m very thankful for that advice because I was never shocked when raises were low or when finding out that new, lower leveled coworkers had higher starting salaries than me. My default mindset is that I’m fungible to the business. Work hard but also expect to fight to prove your worth. This is generally good advice in any career but I think it’s a must in tech.
But it wasn't like this at the beginning of the Industrial/Knowledge revolution. Engineers/Scientists were valued and founded companies with Engineering at their core, with Management/Marketing/Sales/etc. playing their proper ancillary roles. It was much later that the system was manipulated to place Management on a pedestal (undeserved), citing market/financial reasons. And Engineers have allowed this to happen, take root, and persist to this day.
We need to change the above status-quo.
However, the important point we need to be aware of is that in the current Economic/Financial System many events and their payoffs are no longer linear, and that is what companies are trying to optimize for. The best explanation of this is Nassim Taleb's Mediocristan (non-scalable) vs Extremistan (scalable) dichotomy. The video "Pareto, Power Laws, and Fat Tails—what they don't teach you in STAT 101" is a very nice overview of the essential points: https://www.youtube.com/watch?v=Wcqt49dXtm8
Well, yeah, and generally you need to seek new employment regularly, because your current company is counting on you not recognizing your worth, whereas other companies will have to make competitive offers in order to win you over.
> What prevents engineering firms from acting like law firms?
I wonder if it's harder to associate an employee's skill level with company profit.
If most law firms have established hourly billing rates for each employee, then the owners can more clearly see an employee's true value to the company?
And yet at software consulting firms, hours are billed the same way. Still, you don't see the same level of compensation raises as at law firms.
I think it's true that Engineers are mere "foot soldiers" and hence replaceable/dispensable as needed (as mentioned in another sibling comment here).
Management perceives people to be more replaceable than they are. Years of working in, or being the architect of, the company's core product will make you a true expert in it.
But, from the company perspective, your value is based on the 'market rate' for your generically defined skills and experience.
I think engineers tend to overestimate the value they bring. _Most_ companies make money from business deals that are written and signed into contracts; sometimes those deals involve automating stuff, and that's where software engineers are useful. If a company loses some rock-star engineer, the automations they worked on don't break immediately; there's some time for other engineers to figure out how they work. If something cannot be delivered in a timely manner according to the contract because the engineering branch got weaker after a rock star left, companies either have to pay some fraction as fines or may agree on deadline extensions. Either way, the contracts have been signed, the money is flowing, and all the holes will eventually be plugged by other engineers.
Anecdotally, I was recently approached by someone who was very eager for me to consult on the product they were going to build. After a few hours of talking I quickly realized that they didn't really need to build anything complex. In fact, my advice was to focus only on the core functions, which are very simple, and leave the majority of the actual work to be done manually by a much cheaper secretary-type role until the product got enough traction to actually benefit from automation.
All attorneys that work for law firms have billable hours. Only agencies and consultancies have billable hours for engineers.
It’s easier to say “we will lose $X if this person leaves” or “$Y will be at risk if they leave due to personal relationships” than it is to quantify an engineer’s revenue impact to the company.
People may hate this, but people leaving is valuable to companies as a whole.
Lawyers typically specialize and they work off the same body of work everywhere (the same set of laws). Having worked for 10 law firms doesn’t mean that they know something current employees don’t.
Tech isn’t like that. Everywhere is different, many of us touch multiple specializations, the body of knowledge we need is always shifting. An engineer that has worked 10 places very likely does know things your current employees don’t, like different tools.
Losing an engineer is bad for the individual business, but engineers moving around is good for businesses in general.
Partners at law firms know that the money comes from the lawyers, and they (probably) know who the rising stars are.
Managers at engineering firms too often think that the money comes from the wisdom of the managers and the hustle of the marketers, not from the work of the engineers.
This may be because partners at law firms are lawyers, but upper managers at engineering firms often are not engineers.
In theory, this would become some type of exploitable game.
Send out offer letters to companies' most indispensable people, just to jack up your competitors' labor costs. Arms race. Eventual mutually assured destruction.
The fraudulent offer letters notwithstanding, I think insiders would see this as an opportunity for Moneyball rather than a risk of unraveling in confusion.
Moneyball as in an optimization technique when roles are static. Obviously this is generic because static data of any value inevitably ends up in a spreadsheet. Otherwise, I’m not sure game theory holds together here.
Engineers either have patent quotas or are blue collar as far as any Enterprise is concerned.
Do you have patent quotas at your position? If not, they are counting the days until they replace you with a machine, someone overseas, a script, or AI, depending on the decade.
Why is a lawyer position irreplaceable, but an engineering/blue collar one is not?
Not trying to be spiteful or angry, just genuinely curious about the business logic and psychology behind that kind of decision making. Isn't every job replaceable?
I think the reason is not that lawyers are irreplaceable, to the contrary perhaps.
The reason is most likely that law firms are always led by lawyers, while that's rarely true for firms where most engineers work (or for most other professions). Typically, law firms can only be owned by lawyers due to regulatory constraints (the UK and Australia being notable, but not particularly successful, exceptions). While this rule restricts the possible size, scalability, efficiency and profitability of law firms compared to many other sectors, it can ensure a certain degree of independence (insulation) of the management from extra-professional influences. Like "Das Kapital" trying to change the way the firm works to achieve better returns in the next 5 years.
That leads to law firm management and owners being composed mostly of people who have previously worked in that exact field and PROBABLY being better at judging the worker's worth in terms of potential revenue he/she may earn, the client trust they have and the chance that they may transfer that to another firm...
Maybe law firms also have less of an illusion about what is valuable in their law firm as opposed to those companies relying on engineers who have or believe they have some special moat outside their people (technology, source code, inventory etc.).
No professional law firm would have any illusions that their lawyers are special. Perhaps outside trial lawyers and some very special fields of litigation, the firm just needs burnout-resistant cannon fodder with the capacity to run for at least ten years, so they may acquire sufficient experience, plus a number of bland human skills and patience in coping with other human beings. There never was a genuine myth of the "10x lawyer"; that's something only ChatGPT invents.
(Sidenote: I wouldn't call an engineering job blue collar as long as you work in an office - WFH, we are all collarless workers now.)
1) Power/Politics - This is basically Human Nature at work and institutionalized as Management/Leadership/etc to put themselves at the top. A lot of it is BS (see the books by https://jeffreypfeffer.com/) but unfortunately only the enlightened in the industry have woken up to this. It is also the case that in these domains many of the objectives are intangible/subjective and difficult to measure thus allowing the actors to create an illusion of "Importance".
2) Nature of Engineering - The output of any Engineering activity has a well-defined boundary. This makes it more tangible/manageable/measurable and easier to reason about. All the main costs are paid upfront, and once gizmo-x/software-y is done the recurring costs are generally pretty low. This gives the illusion that the Engineer is now not worth his pay compared to his current output and is hence replaceable/dispensable based on bean-counter calculations. It also doesn't help that Engineers do a pretty good job, so that a product/software, once released and accepted in the market, is generally very stable and not in much need of rework. This is the reason "Planned Obsolescence", "Subscription Model" etc. were invented by the Industry.
A lawyer very much can be, but 1) there's no machine or script capable of replacing one yet, and 2) they are half salespeople with their own client lists.
I grew up seeing founder mode first-hand. My dad founded Celestron, and he operated just as Paul is describing. At my current company, I see the CEO doing the exact same thing, which gives me great hope. Huge contrast from my previous start-up, for which I have great anti-hope unfortunately. (Current company is Zap Surgical Systems, with Dr. John Adler as the founder.) I've seen John everywhere, all day every day. I've seen him literally on his hands and knees under our robotic treatment table back in the engineering lab, discussing details with a young mechanical engineer. And yeah, we are curing cancer ;-)
To re-iterate: What I believe PaulG is saying is that there is a crucially important phenomenon that we don't yet understand, and naming it will facilitate the process of bringing it into focus so that we can help more companies be successful.
His footnote has a painful resonance: "I also have another less optimistic prediction: as soon as the concept of founder mode becomes established, people will start misusing it."
This is exactly what has happened to the Agile methodology.
I don’t believe it is that mysterious or revolutionary. What PG is describing is the “optimization of local maximum” problem.
Founders focus on the strategic goal. Managers focus on tactical goals. Rules and processes are put in place to efficiently achieve these tactical goals. The problem is, in certain situations, these local process are at odds with the broader goal. Only the Founder has the authority to break the bureaucratic rules.
My favorite example is from the film Zulu, when the British quartermaster adamantly dispenses ammo per the rule book and a long queue of desperate soldiers forms up. Never mind that the British were outnumbered 10 or 20 to 1 and should have been firing their rifles, and using up their ammo, as quickly as possible.
Do managers focus on tactical goals? Or do the engineers do that? Managers are more like the glue of the system, they don’t really provide much value besides that.
In the enterprise, the cost of failure to ones career/reputation is unreasonably high.
pg's reference to "most skillful liars in the world" stuck out to me.
The extreme conservatism employed by managers to prevent failure can only be summed up as "success at any cost". The consequence is decisions that spread the pain far and wide.
Unfortunately, these managers are not accountable for these consequences.
It's no wonder that solutions take longer, cost more, and are sub-optimal at almost every level. Furthermore, they are very painful for the people who have to suffer them.
But hey, some unrelated manager-chain can claim success.
The worst of it is, these managers rinse-and-repeat at their next gig!
I think the general issue here is that you can't observe successful businesses or individuals and copy their processes to be successful, because their processes are largely a symptom of their competency, not the cause.
This is just a different way of saying your managers need to be more engaged with the work they are doing and not just worry about abstract metrics, like measuring progress by the number of tickets they close or some such.
The thing is, this is hard to pull off even for the actual founders. For starters, it's just hard and tiring (and even pointless) to keep jumping over the ever-increasing bar. Even more so when you are aging, know you are going to die, and would be better off enjoying some of the money you earned in the time and health that remain. At that point, you let things go on autopilot and begin to reduce engagement with your work.
The middle managers and employees you hire don't have incentive to care even this much. They are neither paid highly enough to care, nor do they have much to lose if they don't. Younger people show some passion because they wish to learn, but once they have learned enough, they go into the same mode.
To summarise: for people who made big money, making a little extra doesn't matter all that much, and people who don't get paid don't even bother to try.
It comes back to the work itself being interesting and whether you actually care about the field that you are working in. I would not like to live in a world where you have to put in 80 hours a week to get a decent salary as an expert. It's true that there are a lot of people who got into the industry just because of the comp though :(
>>It's true that there are a lot of people who got into the industry just because of the comp though :(
Lots of people are just tuned out. And just don't care. With or without the comp.
>>It comes back to the work itself being interesting and whether you actually care about the field that you are working in.
There was this HR person at a company here in India. She spoke at length about what attracted people to IT. Basically the employees wanted the sense that they were creating tremendous value for the company, while at the same time earning very well and, of course, having work-life balance. It was more or less impossible to achieve this. There aren't even enough projects of substance to make this happen, at least not for everyone, all the time. The net result was that people tuned out, didn't involve themselves enough in their work, or didn't associate the outcome of their work with themselves at a personal level.
Another factor is how quickly products/projects get shut down or EOL'd in our industry. There are not many things that last for years and that you can take pride in having built.
I don't see anything distinct enough to call "founder mode" here. It's just one (very important, but small) sub-facet of Ron Westrum's organisational typology. That specific behaviour, of a top-of-the-tree authority figure getting involved in the weeds, is just one behaviour he calls out as characterising generative cultures, but there are others: an eagerness to find mistakes is another important one.
This has become more popularised by DORA, but Westrum divides cultures into "Generative", "Bureaucratic", and "Pathological". Someone who finds a mistake is praised in a generative culture, ignored in a bureaucratic one, and blamed in a pathological one. The conventional "management"-type detail-hiding hierarchy would be absolutely stereotypical of bureaucratic or pathological cultures depending on the goal.
From that point of view I think this is better understood than you'd think, but possibly not well communicated.
What I'm disappointed by is that in the hundreds of years of capitalism, there has been little or no scientific research that looks at what structures create companies that have the desirable attributes we want. The fact that Paul Graham is now coining 'founder mode' without formulating any theory and approach to test what works (you would think VCs would be a group capable of doing this research) is disappointing.
The lack of any scientific rigor means we have founders floundering when building companies. For now we can just point to individual cases like Steve Jobs or Valve and have to guess why what they did worked. We are in the snake oil period of corporate governance.
"What I'm disappointed by is that in the hundreds of years of capitalism, there has been little or no scientific research that looks at what structures create companies that have the desirable attributes we want."
-- It may well be that, beyond the obvious, it depends on the broader social (how society is structured) and financial context, and the spirit of the times. The same could be said of all sorts of "how to's": how to get rich, how to be successful in life (the vagueness of which already throws a wrench into the engine of exploration), how to win at soccer/football/whatever sport or skill you are interested in.
While technology may open new horizons with capabilities that were not there before (think 4G and true mobile internet experience), one might ask why, looking back at the "primitive" ways sports were trained for and played 50 years ago, nobody had the intuition to propose ways of training and playing that are closer to what we see today. It did not require new technology, it needed a different way of thinking about the problem. But what scientific research, such as the one you propose to use to study how organizations win, would have allowed this thinking to emerge?
And the same will probably be pondered 50 years from now when their ways will be compared to ours.
I think it's much simpler than having a separate manager or founder mode.
Good founders will want to learn, or at least understand, all the tasks that set their business apart, without becoming micro-managers. When they become overburdened by details in certain areas, the default action should be to delegate to someone they _trust_ to handle the area, rather than just hiring someone with a CV and charisma that _implies_ they are good at handling it.
This probably often works well for most founders initially when they only need to delegate practical tasks that they know how to measure directly.
However, once the money at stake and, most importantly, the number of people grow, the tasks that need to be delegated become managerial rather than practical, so the founders are faced with another kind of delegation that they have less proficiency in.
This is probably where things go awry: at least some of their early promotions or delegations probably didn't thrive as managers, and faced with those failures, the founders start seeking out managers from the managerial class outside, rather than risk finding the right people with domain knowledge to promote.
There's nothing unique about Startups in this sense.
What we are talking about is people who care about the success of the business and pleasing customers, versus people who are optimizing for "corporate success" (ladder climbing).
You can find both types of people everywhere, at all levels. So it really is about "hire the best people". You just have to have the right measure of "best".
The real problem with methodologies like "agile" or whatever the hell is that:
- they are not magic solutions to a fundamentally flawed business, or a business that is building the wrong thing
- they are not solutions to people issues
To a great degree if you have the right business and people treat each other like humans, you don't need a methodology, just keep doing whatever you are doing.
The current Celestron is owned by a Taiwanese company and just acts as a US distributor for that company, so I wouldn't say it's the same company anymore.
> His footnote has a painful resonance: "I also have another less optimistic prediction: as soon as the concept of founder mode becomes established, people will start misusing it."
Exactly what happened with Product Market Fit. With the current AI buzz, I see everyone and their nana preaching about how their product has PMF. If you had PMF, you'd have onboarding teams, not sales teams.
> His footnote has a painful resonance: "I also have another less optimistic prediction: as soon as the concept of founder mode becomes established, people will start misusing it."
Oh I can totally picture this being used as an excuse for bad micromanagement! Unfortunately a lot of people are more interested in "doing the label" than developing a deep understanding of why they'd do something and how to apply it correctly.
And yes, I know I just described cargo-culting ;-)
Doesn't it sound like the most self-absorbed footnote in the history of footnotes? "Attention, sailors. I just defined a new concept (which is about as new as the wheel, by the way) that fundamentally changes how we see organizations, but it will be misused."
How people can take someone so self-indulgent and -absorbed seriously is beyond me. I am sure he is great at his job of nurturing start-ups, but what he had to say, he already said a long time ago.
It is funny that we take such nonsense seriously, when the inventor of radio and the brilliant people who built the telecommunications system on top of it said: "try this thing, you can call your mother who lives in Australia from the United States".
They didn't say "the waves that can be used are not the eternal waves"; they built a miracle and made people use it.
Suit yourself. I personally find it quite wise and generally applicable, and was able to intuit the exact response I was hoping for despite never having completed a read-through of the Tao. (In the first section, no less!)
I find the Tao has a knack for describing my atheist worldview in exactly the way the Bible doesn't. "Where is he who has been born king of the Jews? For we saw his star when it rose and have come to worship him."
I don't think misusing a concept is that bad an outcome.
Because it does not affect those truly embracing it.
Having an excuse is not going to save bad business run by bad people. It might let them slide a bit more but ultimately it is not doing the job for them.
But it does make it more difficult to talk about a concept when it's been (re)defined and argued about and gaslit about instead of what PaulG is doing: providing a simple definition backed by some real-world examples that actually are founder-mode being done more or less right.
Yes, and when various people start to fake-claim that they work in Founder Mode, you're more likely to hire the wrong managers or CEO, once it's time to find someone for whatever reason.
I turn 70 tomorrow, and have been programming for about 53 years. Started a new job about six months ago. I immensely enjoy banging out the code every day. It still feels like a guilty pleasure! "Shouldn't I be doing homework right now rather than creating 3-D worlds algorithmically?"

My first language was APL, which I learned in a college computer science course. This hard-wired me to think in functional terms; I personally think Iverson and Dijkstra were saying the same thing, but Iverson said it better: reason about your code from an "outside of time" perspective rather than mentally imitating the fetch-execute cycle of the machine. I view software development as a form of discrete mathematics; inductive reasoning for sequential blocks of code, Pnueli-style temporal logic for concurrent and parallel code.

I've learned from some wonderful people how powerful it can be when a team likes each other and gets into a collective flow state. It is a bit like a mental version of quantum entanglement, and it is a very satisfying and meaningful thing when you get there. I've benefited from friends who helped me get that next job, and I've helped friends get their next jobs.

About 20 years ago I made a switch to medical device software development. That is a domain that requires dedication to learn relevant mathematics, it is not going to go away, and you become a valued commodity when you have specialized skills and a talent pool that is not too large. And, you get to do things like visit your grandchildren in a NICU and see neonatal ventilators that you helped develop.

So, I've been lucky, being able to play all day and do something I love! There are a million different paths through the space of software development; I've tended to traverse the space using the "what would be fun to goof around with on a Saturday" metric.
Goofing around with the Y combinator, and quines (programs that print themselves): I wanted to see how much a quine written in C could be made to look like lisp-style self application. Worked out pretty well!