The 3 stages of domain mastery:
Stage 1 - No knowledge or structure to approach a domain (everything is hard, pitfalls are everywhere)
Stage 2 - Frameworks that are useful to approach the domain (Map to avoid pitfall areas)
Stage 3 - Detailed understanding of the domain (in which you can move through pitfall areas freely and see where frameworks fall short)
Hedgehogs are at stage 2. You move from stage 1 to stage 2 by adopting frameworks; hence, hedgehogs are seen as "thought leaders" because they teach the frameworks that lead MOST people to more mastery. Except when you're at stage 3, in which case frameworks introduce inefficiencies compared to your own understanding.
All good decisions must be made by stage 3 persons, but ironically, training is most efficiently done by stage 2 persons. Hedgehogs get more limelight because 90% of the population is at stage 1 and values the knowledge of stage 2 (and can't grasp the complexities and nuances of stage 3).
Many hedgehogs struggle to touch stage 3, and instead see stage 2 as mastery. This is compounded by the positive feedback loops of success - the frameworks save time, they confer reputation, they allow them to save stage 1 persons from their ignorance, and they are the foundation of their current level and achievements. Frameworks are also convenient and broadly applicable to many problems; detailed domain mastery, in contrast, is difficult, time consuming, and highly contextualized.
All of this makes it hard to move beyond stage 2 into stage 3.
Works for almost any X - writer, programmer, driving, etc.
In my experience slavishly following the "rules" or "best practices" can actually be worse than never following them. Not understanding when it's good to deviate usually stems from a lack of understanding of why the "rules" or "best practices" exist in the first place. So much attention is spent following the letter of the rule rather than the problems it was meant to solve.
Look no further than modern day "Agile" vs the actual Agile Manifesto
DRY is about knowledge management. The intention is to avoid duplication of code that is a knowledge representation of the same thing. It does not mean every piece of coincidentally similar-looking code must be made into a function; that would result in premature abstraction, and so on. It's why the "rule of three" principle is effective: it filters which duplication represents the same knowledge and which does not.
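A minimal Python sketch of that distinction (function names and rates are hypothetical). The two functions are textually identical today, but they encode different pieces of knowledge, so the rule of three rightly hesitates before merging them:

```python
def price_with_tax(price: float) -> float:
    # Knowledge encoded here: sales tax is 10%.
    return price * 1.10

def price_with_surcharge(price: float) -> float:
    # Knowledge encoded here: rush orders carry a 10% surcharge.
    return price * 1.10

# Merging these into one apply_ten_percent(price) helper would be a
# premature abstraction: when the tax rate changes to 12%, the surcharge
# must not change with it. The code is similar; the knowledge is not.
```

If a third duplicate appeared that genuinely represented the *same* rule as one of these, that is the point at which extracting a shared function pays off.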
KISS is (ironically) a more complex topic. C2 wiki talks about this in an interesting way.
My more nuanced take, I guess, is "don't duplicate logic, except if it may in fact be mutable data". It does take some forward thinking to recognize that, e.g., a list of 10 rules should be unrolled and not compressed by meaningless helpers.
KISS and DRY are different paradigms, i.e. come from different schools of programming.
Most people would not recognize simple code if it hit them in the mouth.
KIES (keep it easy, stupid) is what they mean, and that means following idioms and frameworks that are familiar.
He does well to explain how PEP 8 can be good but some people focus on it because they may not have experience to contribute otherwise.
Another example: I recently wrote a comment that received more than 10 votes and was flagged (I presume by users) almost immediately. The comment was thus both commonly agreeable (perhaps people could empathize with the subject matter) and yet a violation of social norms. An unspoken social rule violation.
That comment also received contradictory responses: I’m not woke, but.... Contradictory responses are generally an explicit mention of internal confusion where the person cannot even agree with themselves on social conformance: I’m not lying, but.... If it’s not complete bullshit then don’t posture with a qualifier.
"Some consider it noble to have a method;
others consider it noble not to have a method.
Not to have a method is bad;
to stop entirely at method is worse still.
One should at first observe rules severely,
then change them in an intelligent way.
The aim of possessing method
is to seem finally as if one had no method."
“Learn the rules like a pro, so you can break them like an artist.” - Pablo Picasso
In my opinion, once you play a game for a while, it's easy to know exactly when and how to break ANY rule, provided you understand what the rules are there for.
Think of any agile framework; a ton of them are practiced like cults by most people. However, they are really there for 2 simple purposes:
1. you want to ensure nobody in the team is ever idle
2. you want to ensure everyone is working on the most relevant available task that hasn't been picked up by others
This is just an easy example, this is really about anything in life though!
I think mine are closer to the agile manifesto, but perhaps yours are closer to how agile is actually practiced in most places.
Problem 1: you want to ensure that you build the right thing.
Solution 1: you should get feedback from end users as frequently as possible.
Problem 2: you want to be using work processes that are appropriate for your team and task.
Solution 2: teams should be able to modify their own work processes.
And by rules, I don't mean offside, handball, freekick rules. I mean how to hold the line, when to dribble, when to shoot, to hold formation, play as a team, end product etc.
And then some stage 3 folks make a new stage 2 thing and the cycle continues. But I think people don't want stage 3 attainment at the cost of giving up the social buffer (which is very reasonable, as being a social animal is in most cases a better life path than being a lonely scholar/innovator).
EDIT: I think what I am talking about applies better to spiritual/philosophical/psychological attainment, rather than technological. The effect is probably still there for less spiritual things like tech or writing, but probably less so.
While interviewing recently, I've found a similar anti-correlation between general competency and people who focus on teaching frameworks and libraries.
The more competent candidates basically don't do any teaching based on frameworks/libraries (but they might have experience mentoring individuals); whereas the candidates who focused on teaching frameworks specifically (often to groups) were the least competent - the more they focused on teaching, the less competent they seemed to be! I found this kinda surprising and worrying, though my sample size is fairly small. To clarify, I know only so much can be evaluated in an interview; I'm talking about basic competency here.
The article talks about contingent advice being better than universal advice only in stage 3. If you're not at stage 3, then universal advice is helpful. I think that holds true for most people and most subjects, including myself.
Originally, I thought the article did a great job describing a common scenario that occurs usually in decision making and I wanted to describe my intuition on why I think it comes about. It's not really a universal theory, more my own digestion / explanation on the interrelation around hedgehogs vs foxes and my own interpretation of the issues that the articles describes.
I just realised my reply may have come across as mean-spirited rather than the light joke I intended. Sorry about that.
...Now that you bring it up, the OP is offering a piece of universal advice. The irony seems stronger there. Not sure if that invalidates his advice or not. Probably just invalidates taking it as a hard and fast rule.
In its defense, it's also extremely intellectually gratifying
It is more about some thought-leader being keen on blockchains, machine learning, supply-side economics, or what have you, and looking at every problem/situation through the lens of wanting to apply this technology/policy to solve it, possibly ignoring the downsides/details/side-effects.
The article gives the fictional example of a project “just needing a relational database” but the “domain expert” trying to push them to use SpringySearch because that can also work as a relational database (and because this hedgehog is sold on SpringySearch).
At stage 3, things aren't necessarily easy, but you have the skills to navigate much larger amounts of uncertainty than stage 1 or 2.
See for example https://news.ycombinator.com/item?id=27468360 where the person mentions pressure to simplify advice, and Tetlock's own work shows that the hedgehogs were the more famous and successful people. So some people may migrate backwards by simplifying a message for maximum impact.
Definitely at stage 3; that could also be the reason not as many people are using it.
If one stays open to new ideas while at the same time asking why those ideas can be good or bad, one can escape the stage 2 trap.
Often, moving to stage 3 is a waste of time and resources. There are a few cases where stage 3 is your business's main secret sauce, but for most other things: commoditize and focus.
I've seen so many teams and engineers trying to master stage 3 but no real business need and ROI. Engineers love mastering things but good leaders guide them in the right things to master and avoid getting addicted to useless stage 3 expertise.
You say that as if there were any agreement about what that means. But the point of the article is that this is not true; you'll always find someone that insists on using technology (or technique) X for your project, only technology X will be different for each person. Moreover, X might be actually making the project harder to understand, longer to develop or have other significant downsides.
The point of reaching step 3 is not necessarily to be able to develop your own custom solution (although that can be sometimes valuable), but to able to pick the right technologies for a given project given the myriad options available.
I've repeatedly failed to follow this approach myself and have come to regret it. As I mentioned the other day, I chose Pulumi over Terraform for infrastructure as code, because Pulumi is undeniably technically superior in some ways. I now believe that was a mistake.
So, you're right, there's no one right way to "commoditize and focus". Sorry for the wall of text; this thread hit too close to home.
Technical advantages need to be weighed against scaling up technical velocity (hiring developers). This is why you might want to forgo more niche technologies for mainstream ones. However, if you're never going to scale past 3-10 people, or the technology really suits your business case well (e.g. WhatsApp built on Erlang), you can break that rule. Thought leaders on both sides will advise for either case, but stage 3 will know their requirements very, very well, and how to choose the right thing that will avoid all pitfalls. Another way to put it: if you hit an unanticipated pitfall down the road later, you didn't really understand the domain.
Some decisions are literally millions of dollars or more for the business, and can only be made once - in this case, you definitely want a stage 3 person who sees all the pitfalls. If you decide on lisp for your language and you can't hire enough developers to scale up, you might be bleeding customers due to lack of engineering velocity, or large price tag acquisitions may fall through because they can't do anything with your code.
Other decisions are worth 0 dollars to the business, and many engineers spend too much time on them trying to do stage 3 decision making. They usually do this with an incomplete understanding of the pitfalls (most 0 dollar decisions don't have pitfalls by definition, just a bunch of "better or worse" arguments from thought leaders - react vs vue anyone?).
FWIW, I would agree that most of the time you don't need an overly fancy solution. Some technologies are good for many (but not all) situations - such as relational databases. Others are useful in much more specific scenarios. That still leaves a lot of room for debate about which specific tools to use. It also leaves a lot of room for debates along the lines of "yes, I know that the whole 2000 people company uses bongoDB at scale, but in this particular case, it's actually not a good idea", something which unfortunately many people can probably relate to if they've worked at a bigger company.
I'm fighting red tape for my team as we build out a dashboard.
Outlook is packed with 1–2 hour meetings for the next 3 months where so far I'm:
* being asked to load test our system to make sure it can handle the load (of 3 people?)
* being asked to integrate with various analytics platforms so we can alert some poor schmuck at 3 AM in case the API goes down then (it's not a vital part of any platform)
* told to have this run in k8s since everything runs in k8s
* other pedantic tasks by Sys Ops who think everything is a nail and love to argue their points ad nauseam (or worse, argue why their fav stack is the golden child)
I understand the need for standards and making sure they're followed, but there really needs to be a human element of "is this truly needed for what I'm trying to do?". So many engineering departments are all about automation, but don't truly think through how much automation is actually needed, defaulting instead to a one-size-fits-all approach.
I appreciate that this article comes to the conclusion that the more correct an answer will be, the more complicated it tends to be. I wish more people in decision making positions would understand this.
Hide concessions to various leaders in the project roadmap.
This isn’t just a “bureaucratic trick” as the OP suggested, it’s actually a way to convert unconditional advice into contingent advice, by encoding a priority.
This is one of the most important things I've learned as a developer, and one that I thought I invented myself, before I knew about agile, by keeping a whiteboard near my desk with yellow sticky notes ordered by property:
"Yes, I get that it's a must-have feature, but where do you place it in relation to these other features?"
The concept of prioritization of features, and of saying "if I stopped dead at some arbitrary point in this list, would you have been happy with your order?" seemed so eye-opening to people at the time.
It's amazing the things that stop being a "must have" as soon as they have to spend more money.
"if you do not give this an ordered priority, I will resolve items as I see fit. Should we need to stop for one reason or another, there is no guarantee of which have been resolved".
Often times that is okay. I also tell them to always take the ones they're most uncertain about first. Better to front load hard problems and uncertainties.
Add more "must have" scope, and something else has to give.
Software projects are kind of like ovens-- if something cooks perfectly at 300 (temperature units), using 25 minutes and using 5 (money units), that does not mean it will cook perfectly at 600 temperature units using 12.5 minutes and 10 money units. Most likely it will burn.
Every part is necessary, but that doesn't mean that there isn't an ordering. Finding something that's easy to manufacture is pretty useless if it turns out later that it kills the patient. On the other hand, a drug that's safe and effective, but is difficult to manufacture is still a viable drug; worst case, you do what drug companies do all the time and charge obscene prices per dose until you figure out how to scale the process.
The overall tone of the program was “we basically have infinite money, just get it done and the government will pay us back”. So they had a fucking army of consultants to accelerate a process that normally takes 5+ years down to 6 months and they were building down multiple roadmaps just in case they hit a block on one of them.
(Obviously one can go too far, blah blah blah. But just as with code, we have a much larger problem in practice grabbing too much from the project feature buffet than too little.)
Usually when your unfinished prototype ends up in production.
That's the danger of reporting progress to people who think you can go to space on a paper glider.
Probably half or more of start-ups end up failing like this, as their quickly delivered prototype fails to capture the market by not being actually better enough, or crumbles under the initial success.
Good for investors and managers who bail out early enough, very bad for users.
When you do it this way, you can decide well ahead of time if you need to bring in a contractor to build a must-have feature your team won’t have bandwidth for. It flips the narrative and puts the responsibility on the business side (which usually controls the budget anyway).
It does not work if you can not defend your priorities.
That actually seems to me like the root cause of all the calamity in the article, a culture of lying.
Rather, the “cause of all the calamity” in the article seems to be the fact that the business has a culture of requiring feedback from random individuals who have very little stake in the project or product delivery.
I could be misreading the article, though.
I agree that it is detrimental to trust to lie about the project roadmap to stakeholders.
- Ceremonial unit tests for every little thing. The whole system is buggy as hell and we don’t have any confidence that the unit tests are truly covering critical parts of the app. But alas, test coverage, the god damn Pope that can never be bemoaned.
- I’m not making this one up: A/B testing for an internal enterprise app.
While it doesn't alleviate the problems entirely, you can also run things like mutation tests that check that your unit tests actually test conditions, rather than just execute all the code.
I've written a depressingly high quantity of code in my career that blows up literally the first time it runs. I'd much rather that happen in a unit test than in production.
Any test that exercises a given branch is better than nothing.
> Any test that exercises a given branch is better than nothing.
I disagree with this. If you have a test that doesn't actually test anything, you can't tell that you're not really testing that branch. No test is better than a bad test because it's easier to fix.
During peer review I encourage Engineers to verify that the actual business logic has been tested, for example calculations.
If done correctly, low unit test coverage can actually be of higher quality than enforcing an 80% threshold.
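The "bad test vs. no test" point above can be made concrete with a small hypothetical Python sketch: a vacuous test fully "covers" the code without being able to fail, so coverage metrics count it the same as a real test:

```python
def clamp(value, low, high):
    # Buggy on purpose: low and high are swapped in the min/max calls,
    # so clamp(5, 0, 10) returns 10 instead of 5.
    return max(high, min(low, value))

def vacuous_test():
    # Executes the code, asserts nothing: passes despite the bug,
    # and inflates the coverage number.
    clamp(5, 0, 10)

def real_test():
    # Actually checks behavior: fails and exposes the bug.
    assert clamp(5, 0, 10) == 5
```

This is exactly what mutation testing automates: it perturbs the code (e.g. swaps operators) and flags any test suite that still passes, since such tests are effectively vacuous.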
They all gave sighs and shudders of disgust, but then again, they had normal programming jobs, so I suppose it seemed quite backwards to them.
Seriously though, rsync is your friend. :-)
Even rsync might not be atomic enough for some situations since it'll update files as it goes rather than in one huge transaction at the end.
 I worked on the World Cup 2006 site for Yahoo! and we had this issue - solved with 'rsync --link-dest' and swapping symlinks.
1. stop service
2. copy files
3. start service
Hot patching wins, but needs good design to work in the first place.
As an industry I suspect we tend to over-engineer rather than under. There is a huge spectrum between my single person business with a brochure site and what Google or Apple needs. I'm willing to bet most programmers are working closer to the first than the second.
That assumes you can stop the service which, for many things (like the World Cup website), isn't really possible.
It was sufficiently small (no heavy media files) that I didn't mind if I left some unused files up there. Pretty much the only thing that I had to do was make a copy of the sqlite database each time just in case.
The problematic load in a dashboard isn't users; it's querying the data sources to get up to date information. For example, if you're running a query to aggregate a bunch of things with lots of joins and that query takes 1.5s to run but your dashboard tries to run it every second so it can be 'real time' then you're in for a bad time even with just 1 user. You absolutely need to load test a dashboard application that's running against production data.
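One common fix for the scenario described above (a 1.5 s query polled every second) is to decouple polling from querying with a TTL cache, so however many dashboard clients poll, the expensive query runs at most once per interval. A minimal single-threaded sketch, with hypothetical names (a production version would also need locking for concurrent callers):

```python
import time

class CachedQuery:
    """Serve a cached query result, refreshing at most once per TTL."""

    def __init__(self, run_query, ttl_seconds: float):
        self.run_query = run_query      # the expensive aggregate query
        self.ttl = ttl_seconds
        self._value = None
        self._fetched_at = float("-inf")  # force a refresh on first call

    def get(self):
        now = time.monotonic()
        if now - self._fetched_at >= self.ttl:
            # Only here do we hit the data source.
            self._value = self.run_query()
            self._fetched_at = now
        return self._value
```

With `ttl_seconds` set above the query's runtime (say 5 s for a 1.5 s query), the dashboard can keep polling every second while the database sees a bounded, predictable load.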
being asked to integrate with various analytics platforms so we can alert some poor schmuck at 3 AM in case the API goes down then (it's not a vital part of any platform)
It might not be vital right now, but if you make a dashboard for it then it'll quickly become vital. Putting metrics in front of people focuses them on those metrics...
It's just as likely that OP did already know that what you are insisting on is not relevant to their use case. That might be why they stated it.
There are a lot of people talking about computer programs, and telling us we should do things this way or that way. Even telling us that their way is certainly the best or only correct way.
A great many of these people - perhaps the majority - are plain wrong. Some of them talk such nonsense that I suspect they don't have any actual ability to program at all!
How can they be so sure of themselves?
I've seen a production system handling one request (which takes a handful of ms) every 2 seconds (work hours only, mind) in k8s running 8 pods. It is quite breathtaking.
I'm a grizzled, scarred old codger that spent most of his career, saying "Are you sure that's a good idea?", only to be ignored, and then put in charge of mopping up the blood.
I have learned that "I told you so." is absolutely, 1000% worthless. It doesn't even feel good, saying it.
What I have learned, is that, when I see someone dancing along a cliff edge, I quietly start figuring out where the mops are kept. If that person has any authority at all, I'll never be able to stop them from their calisthenics.
One of my favorite quotes is one that pretty much describes "hedgehogs":
"There's always an easy solution to every human problem; Neat, plausible and wrong."
"The fact that I have no remedy for all the sorrows of the world is no reason for my accepting yours. It simply supports the strong probability that yours is a fake."
It's like a witch doctor's formula for headache cure is bat urine, dandruff from the shrunken head of a fallen warrior chief, eye of newt, boiled alligator snot, and ground willow bark. The willow bark is what did it, but the dandruff thing is the most eye-catching ingredient, so it gets the credit, and every time the chief gets a hangover, they start a war.
Somewhere down the road, a copycat substitutes hemlock for the willow bark, and headaches become a death sentence.
I prefer to drive into the wall with people instead, working at it together, when that’s what is going to happen despite any concerns I have. Usually when you end up being right, people will listen to you more the next time if you’ve stood there with them.
It also helps a lot when your prediction turns out to be wrong. When RPA became a big thing in the Danish public sector a few years back I was one of the stronger voices against it in most of our national ERFA networks. When we got the clear message from the top that we were going to do this, however, I jumped right in and helped us choose and build what is now the leading RPA setup in any Danish municipality aside from Copenhagen. I still think RPA is really terrible from a technical perspective, but I can also see the merit in how it's currently saved us around 90 years' worth of manual case-work at the price of a few months of developer and support time in total. Because I was quick to jump aboard what I still thought was going to be a sinking ship when it was going to sail no matter what I did or thought, people don't hold how wrong I was against me but instead lovingly tease me or sometimes cheer me up with other times where I've been right.
You have to want to do this of course. If your workplace doesn't have the sort of people you'll want to drive into a wall with, then your way is probably better than mine.
That sounds almost exactly like traditional Japanese consensus.
Everyone argues for their opinion during the planning meeting, but once The Big Boss does the "chopping motion" with his (it's always a "he") hand, then everyone is expected to fall in line, and commit to the team effort.
They actually despise "I told you so." It's not smart to do that, in a Japanese corporation.
In Japanese corporations, the culture seems to drive old, experienced people's pride high. It's not a wrong or a right thing. It's just the social mechanism, and once we know that, we might be able to leverage it.
In the end, the game in every organization is not just about being right or doing the right thing, it's also about power, authority, influence.
Back to the great-grand-parent comment: I assume, being a grizzled, scarred old codger, along the way you may have found a method to identify people who think like you. If you have, I would appreciate it much if you shared it here!
I have even had crises of confidence, thinking that the "I told you so" syndrome is a psychological issue with me. I do tend to be overcautious, and tend to underachieve because of it.
But I get a grim satisfaction in knowing that when the next time the bodily fluid hits the air circulator, my pail and mop will make it liveable again.
There was some dedication which I thought came from a John Le Carre novel, "For those who served and stayed silent". I can't find the source now, but that's my spirit.
(I am not in the IT area, I am in academia.)
"I told you" is a social sacrifice I suppose.
This is proof of bad company culture.
Doing post-mortems is important to learn what went wrong in the decision process and how to prevent it.
A postmortem is a clinical, reasoned, and scientific review. Everyone is on board, and agrees to abide by the results.
I worked for a Japanese company, for a long time. I made some colossal mistakes, during my tenure, and was told "That was, indeed, a mistake. We expect you to mitigate it, and not repeat it." Often, I would actually get more trust and responsibility, afterwards.
I agree. The fact that the parent feels like his warning were ignored and his voice was unheard is a cause for frustration.
Having a postmortem process after each incident would prevent such frustration.
> A postmortem is a clinical, reasoned, and scientific review.
I'm well aware of this.
I've had no issues admitting mistakes at my places of work, and vice versa: "This is what happened/I did, which led to this. We fix it this way. What can we do to prevent it/similar things happening in the future?".
People do most things with good intentions and for hopefully good reasons. When things go wrong it's usually down to unforeseen interactions or second order effects (or brain farts), and/or a lacking review process.
The person making "the mistake" is usually just the last person crossing a faulty bridge, and if it wasn't them it would've been the next person. It's not a problem identifying the person if everyone realizes we all want a sturdy bridge.
I see it as etiology vs. teleology - rather than thinking "how did we get here?" we think "how do we get out of here?" The two are interrelated, but the second somehow gets things moving, and reduces resentment in toxic workplaces.
I work for a large (not FAANG but spatially close) ecommerce company, and I’ve yet to see substantial changes or learnings after outages or mistakes.
I often find that wishy-washy post-mortems smear responsibility and deflect accountability. This doesn’t incentivize a change in behavior, and when people blindly get more trust, they often seem to simply repeat mistakes.
I think I’ve yet to master post-mortems and transforming “I told you so”s into improvements - I’d appreciate tips and ideas, thanks!
"If I don't make a choice, it's not my fault".
It's much better to establish a culture of "try doing good things, and we're in this together". If you trust your colleagues and are good at recruiting, you'll get a lot more done.
I don't know if I have any specific tips and ideas for post-mortems as it usually... "depends".
Someone mentioned "bad corporate culture." In fact, the OP was really a sort of indictment of dysfunctional culture.
The Japanese are heavily process-oriented (not always a good thing). They have a consensus-based approach, where all stakeholders agree to a common set of rules and remedies before the meeting begins. If a problem is found, then the meeting doesn't end until there is a plan (and responsible person) for a remedy. Assigning tasks (not blame) is a goal. People are assumed to accept personal responsibility for their own mistakes. It's not the job of the meeting to do that (there's a common cultural mythology of responsible managers committing suicide, if they screw up badly enough. I never saw it, so I can't speak to its accuracy).
I worked for Amazon and postmortems were taken seriously and done regularly - but things can be different in other teams.
If people warned about a risk in the past this would be noticed.
If nobody flagged the risk people would start asking why.
Trying to empathize with their position, I think they think failure happens because the right hedgehogs didn’t show up to the design review that day, or forgot to harp on whatever point that time. They are never satisfied with the “it depends, there’s no hard and fast rule, you have to let the experts think about it in context” responses I give them when pressed for policy. This is limiting my advancement. But worse, someday someone will join the team and will write those hedgehog policies, and then I’ll have to live under them too.
Software engineering is a thinking person’s game. I get that management wishes it weren’t, but it is.
"Software engineering is a thinking person’s game. I get that management wishes it weren’t, but it is."
The connection is a bit tenuous but I think contingent advice can be shown to be better than non-contingent advice. I also think people are too confident in their opinions.
Also another submission here: https://news.ycombinator.com/item?id=27462255
One thing I've found (as a person who advises engineering managers and startups) is that recipients of advice seem to value non-contingent advice more. They just want simple answers that don't make them think.
When someone asks me a question like "how should I interview candidates?", my default answer is "it depends". Tell me about the role. The company. The culture. The product. Remote or in-person? What's the team like. Then I can give a framework that gives you the answer. But people want answers like "use take-homes" or "do 2 behavioral interviews and 1 coding interview".
Same for technical decisions. They don't want to hear "it depends". They want to hear "use Rails and MySQL hosted on Heroku".
So I naturally find myself being pushed to give non-contingent advice.
All of us, when we're in that position, desire a solution. I'm not sure what differentiates those who want to fully understand the whole solution space, and all the context that dictates -why- a particular solution may be the 'best' (given a specific set of tradeoffs), but certainly, whether we are like that or not, we all desire the right solution ASAP.
I'd be super interested in how you respond to those who ask such questions; do they seem interested in explaining their problem in detail? If you, rather than say "it depends", instead immediately launch into questions, are they engaged in answering them? Can you then finish with a "given what you describe, because X, Y, and Z, I think (solution) would be the best fit for you. It has the downsides of A, B, and C, but those don't apply to you", or whatever. I.e., basically change the tone to always be focusing on solving their problem, while also allowing you to inform them, rather than "it depends" which could imply "there isn't a clear-cut solution to your problem".
In other words: I don’t need a rule per se, just an argument. I’ll work out the contingencies on my own.
Every fad and every champion of every technique or framework has something to teach you, and they are often very happy to teach it to you at the wrong time. Trying to please everyone at the start of the project is tantamount to design by committee, and is a sure way to kill a project.
To a hammer, everything looks like a nail.
Software advice isn't totally a prediction, but it sort of is.
It's the classic stone soup story. You see this especially with software and tools that front-load the new-user experience, making it really easy to do trivial things but failing catastrophically when you need more.
You also see the reverse of this: great ideas that don't get buy-in, failing by virtue of being too niche.
The property that seems to be common in addressing both is benchmark-setting. The "kick the can down the road" tactic for handling less productive advice is premised on knowing that it doesn't fit your success benchmarks, but not wanting the confrontation (since a hedgehog benchmark is going to boil down to a single-issue attachment). Likewise, a battery of narrow binary questions with a definite pass/fail characteristic constructs a form of fox knowledge - it's pragmatic in how it describes the "potential shape" of the outcome, so it makes for a better holistic benchmark than asking "what's the best way to do this?"
IIRC, there's a word or idiom that describes this kind of solution, I can't think of it and now it's going to bother me until I do. It's a Stack Overflow pattern: someone asks "How do I do X?" Someone will counter, "Why do you want to do X?" and upon receiving additional information, answer, "You don't want to do X, or this other thing you're doing before doing X. You want to start this way and go down this path and that way you don't have to do X." Maddening!
Tetlock seems to have a slightly different interpretation than Berlin - (paraphrased from ) "hedgehogs have one grand theory; foxes are skeptical about grand theories".
(while Aesop's https://aesopsfables.org/F89_The-Fox-and-the-Hedgehog.html is completely unrelated, though also interesting!)
The book is absolutely worth a read if you're into the subject.
A) I’d love to have a coffee with you. Virtual or otherwise!
B) What do you think about alignment of priorities -within- a team? I’ve seen some interesting behaviors and misbehaviors in a team, where initiatives that are both trivial and non trivial die a death of a thousand cuts because of various and sundry plausible reasons. If I peel back the onion on it, it seems like those situations are ones that arise because of a fundamental lack of trust. Would you challenge or support that premise? If supported would you consider external stakeholders’ objections to stem from the same root lack of trust? It seems like we get more “hedgehog” like behavior when we don’t trust each other, and more “fox-like” behavior when there’s better trust and communication.
Also Ctrl F: "beleive" -> "believe"
For example, another type of useless feedback is so general as to be insulting; "this needs to scale" or "it needs to be high quality".
"too general" vs "non-contingent" are nice distinct buckets.
I'd be interested to hear your thoughts on that take since I thought it was very insightful.
I've been surprised by how reluctant sales and marketing people are to adopt Brier Scores for their own forecasting, given their interest in delivery estimates from engineering.
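For anyone unfamiliar, a Brier score is just the mean squared error between forecast probabilities and the observed outcomes (lower is better). A minimal sketch in Python, with made-up numbers purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """forecasts: predicted probabilities in [0, 1];
    outcomes: 1 if the event happened, else 0."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# An overconfident forecaster who is right only half the time scores
# worse (higher) than a hedged, well-calibrated one.
confident = brier_score([0.95, 0.95, 0.95, 0.95], [1, 0, 1, 0])
hedged = brier_score([0.60, 0.40, 0.70, 0.30], [1, 0, 1, 0])
print(confident)  # 0.4525
print(hedged)     # 0.125
```

The appeal for forecasting discussions is that it penalizes confident misses quadratically, which is exactly what makes it uncomfortable to be measured by.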
Having been on both sides of these types of discussions, I have a few thoughts:
Advice isn't always as uncontingent as it sounds. An infra person saying that something should probably be done in some overly specific, preachy "best practice" way is sometimes thinking of things that a product person may not. For example, maybe the data guy told you to use WebScaleDB because scaaale, and you chose to use a simple YourSQL thing instead. But it turns out that in the next semester, a metal team you had never heard of is working on chaos testing and they're making sure WebScaleDB handles datacenter failovers properly (but they don't know about your snowflake YourSQL instance silently chugging along in a forgotten corner of one DC). This sort of stuff can be very tricky to anticipate, especially in large companies with siloed teams. I've found it useful to fully embrace the idea of leveraging technical debt: yes, maybe YourSQL won't scaaale and maybe it'll die horribly and without explanation when failovers start happening, but if it can carry us to the next point in the evolution cycle, then we can reevaluate our options then, instead of being trapped in analysis paralysis and getting nothing done in the meantime.
As a person giving advice, I feel that I fall in the contingent camp (looking at specifics before giving suggestions), but over the years, I've started to try to be mindful of cognitive overload: saying "it depends because X, Y, Z" often goes over people's heads, especially when they're already trying to soak up advice from a million different directions. Sometimes, it's better to just take a stance and spit out the TL;DR. If the stance happens to align with "best practices", you can just point at them and people are usually satisfied; if it doesn't align, you can often sway people to understand that there is nuance with a clever enough soundbite: "no, actually you don't want to enforce 100% coverage, full coverage tells you nothing about test quality, uncovered code is what tells you what you're lacking" (or "you don't need WebScaleDB; a billion db rows can be binary-searched in about 30 comparisons"). Even if your dumbed-down advice now lacks nuance, there's always the opportunity to course-correct as the team builds more experience on top of that advice.
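For a sense of why that binary-search soundbite works: the worst-case probe count grows only as log2 of the row count, so each factor of a thousand costs roughly ten more comparisons. A quick back-of-the-envelope check:

```python
import math

def worst_case_comparisons(n_rows: int) -> int:
    # Worst-case number of probes for binary search over n_rows sorted rows.
    return math.ceil(math.log2(n_rows))

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} rows -> {worst_case_comparisons(n)} comparisons")
# a thousand rows -> 10, a million -> 20, a billion -> 30
```

The soundbite's real claim, of course, isn't about comparison counts - it's that boring data structures scale further than people assume.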
Sometimes, you have to be the thought leader and drive the change you want. At my company, for the longest time, every team was suffering the pains of Jenkins. You can't do X because otherwise Jenkins will not be able to handle it, they'd say. We've invested a lot in Jenkins, they'd say. A scaling solution is coming soon, they'd say. My team couldn't wait anymore and we took the initiative to bring in an off-the-shelf third-party solution that had all of the pain points figured out (and then some). This turned out to be a really good call because just a week after we deployed the new solution, our Jenkins cluster - shadowing at this point - completely gave out due to scale limits. This third-party solution is now what other teams in the company are adopting - including teams that were investing in Jenkins integrations before.
Sometimes you may even be asked to make a decision or commitment on the spot based on just an "idea". STOP right there and don't fall into their trap. They just want their "idea" to win, and then they'll disappear during execution, leaving you holding the bag. Worse still, if the idea was flawed, they'll refuse to admit it. They'll come back and reinforce the idea, not allowing you to pivot or learn from mistakes. That's the nature of thought leadership – the "thought" matters more than everything else.
All ideas are open and welcome, but you don't take commitments based on just ideas. Ask them to show a spec or concrete doc, and start discussing spec vs spec, detail vs detail, plan vs plan, data vs data or anything concrete. You'll find many of these thought leaders silently disappear into the background then.
They will come back and try to abstract-ify the discussion again before decisions are taken. That's why you set ground rules before the meeting begins, and not when it's happening.
Thought leaders are all nice and fancy, until the rubber hits the road. 100% agree with just this title alone: Don't feed them.
> My husband and I took Jason and his older sister, Leslie, to the Museum of Natural History. We really enjoyed it, and the kids were just great. Only on the way out we had to pass a gift shop. Jason, our four-year-old, went wild over the souvenirs. Most of the stuff was overpriced, but we finally bought him a little set of rocks. Then he started whining for a model dinosaur. I tried to explain that we had already spent more than we should have. His father told him to quit his complaining and that he should be happy for what we did buy him. Jason began to cry. My husband told him to cut it out, and that he was acting like a baby. Jason threw himself on the floor and cried louder.
> Everyone was looking at us. I was so embarrassed that I wanted the floor to open up. Then—I don’t know how the idea came to me—I pulled a pencil and paper out of my bag and started writing. Jason asked what I was doing. I said, “I’m writing that Jason wishes he had a dinosaur.” He stared at me and said, “And a prism, too.” I wrote, “A prism, too.”
> Then he did something that bowled me over. He ran over to his sister, who was watching the whole scene, and said, “Leslie, tell Mommy what you want. She’ll write it down for you, too.” And would you believe it, that ended it. He went home very peacefully.
> I’ve used the idea many times since. Whenever I’m in a toy store with Jason and he runs around pointing to everything he wants, I take out a pencil and a scrap of paper and write it all down on his “wish list.” That seems to satisfy him. And it doesn’t mean I have to buy any of the things for him—unless maybe it’s a special occasion. I guess what Jason likes about his “wish list” is that it shows that I not only know what he wants but that I care enough to put it in writing.
(If it's customers, I just write it down)
However, I've found that being really open and collaborative with people helps mitigate the manipulation factor by a significant margin. In other words, you get them to agree that the project is not the highest priority or the highest ROI thing to be working on. You ask: "Given the list of W, X, Y, and Z, and keeping in mind that we only have enough resources to tackle two of these at a time, do you think X is the most important?" and they say "Well, X would be cool but yeah, W and Z would give us the most ROI, so let's hold off on X and Y until we have more time and resources."
The key is to be (or appear) really genuine with this. If it's obvious that you're kicking the can down the road because you don't want to do it, you won't win any friends or influence people. But if you can approach it with "I'd love to do X but the realities of our situation mean that we can't" in an authentic way, then you stand a much greater chance of having both sides walk away with a sense of accomplishment. They feel heard and valued, and you don't have to waste resources on something you don't think is a good idea.
If you can't be authentic about that, then I would just go the truthful route of "This isn't going to happen" and try and just be honest about the realities of the situation. They might feel hurt and rejected, but it's better than them feeling manipulated, IMO.
Half the time they won't bother. -Your- effort is free, but -their- effort has a cost.
The other half of the time they will, because they care about it, and so it goes into the backlog, and they get to see what stuff takes precedence (and it's a legitimately good faith effort on my part to see it ranked appropriately, and that they feel informed as to what is coming ahead of it and why).
Even a small task is usually enough to filter out requests where the requestor is basically trying to move a task from their list to your list.
But if you have a product manager (and they're doing their job), then all you have to do is tell them the truth. Let them figure out which features are priority, or will lead to the most revenue, or whatever. That's their job.
I was in an Extreme Programming estimating session one time. A particular story came up for our consideration, and several people groaned. Nobody wanted us to do the story, because it was going to be a bear to implement. I said "Just tell them the truth. They'll figure out why this is a bad idea." We estimated six months, and they decided that they didn't want the feature at that price.
"Good strategy works even when you know it's coming" - something like this from "Sanctuary for all" :) One example of that was mentioned here a few times: features need money. And resources and time.
But sometimes features can be crammed into a project without bigger investment - just talk to devs and often they will find a way. Sometimes it works perfectly, when the overall architecture is good or extendable. And often it makes a total mess of the codebase. But it costs nothing! ;)
Yeah, the future trick is kind of interesting because it solves the immediate problem and it allows people to feel like you heard their concerns and valued their advice. If that is what they are looking for, then it's a great solution. If they have spotted legit problems then you need to actually reassess things.
I guess like everything it is very contingent on the environment. It worked in this specific context.
Some people like to call these mental models or lenses, and say that you should add as many as possible—switch out the green lens for a red lens and see if that makes things look better. And I agree, but I think if you have to consciously make “mental models” you are probably going to struggle to think critically about what the problems are anyway.
The truth is we probably all are a hedgehog at various times without realizing it. The only solution is to be as widely read as possible so that you do not short-cut to a few ideas that may or may not fit the challenge you are trying to solve.
When are hedgehogs right? In the "obvious" stuff:
- You should always use correct indentation in your code
- Document your architecture decisions. Have conversations in your team to get feedback and buy in.
- When practical, try and keep down the number of languages and databases you use. It'll make onboarding easier and allow the team to build deeper expertise
And so on.
The "relational database advocate" usually isn't making an argument that relational databases are better all the time. They're making an argument that relational databases are the right default, and this particular use case isn't weird enough to justify the cost of learning and deploying something else. They might be right - it's extremely difficult to know without taking into account the task at hand and the skills of the team.
Bryan Cantrill gave a talk on this a few years ago, talking through the values that different systems encapsulate. It's an excellent talk:
I didn't tell them I'd implemented the existing system whilst working for their previous supplier.
Shouldn't that have been 2020, or did you switch to mid-terms there?
If we put "Rust all the system code!" at 2020, we have four years to think about what the next one is going to be.
The thing is that the questions/predictions are all around specific/measurable outcomes (I believe that $X will happen if $Y, I believe that $X will happen due to $Z). Asking someone "What do you think of Iraq?" will yield significantly different answers than "What predictions do you have with regards to Iraq over the next 5 years?".
I have noticed one common thing, which will cause scope creep in projects with almost 100% certainty: The shitty question.
In my mind a shitty question when building software can be a number of things:
- Something outside someone's subject matter expertise
- Open ended
- Without timeframes
- No context
I would argue that the problem is not that people are getting feedback from Hedgehogs/confident forecasters and that they should discount/ignore their advice. The problem is people keep asking shitty questions, or questions outside the Fox's scope of expertise. I think that product/engineering people actually need to be asking more questions of people with experience, not less, but they need to be good questions. This is a skill that requires more effort than most people think.
Sure, there are Hedgehogs/people who blab on about the newest tech, but not having a feedback loop is how you get disconnects between your users and the product. I have seen this play out in so many different ways and it's amazing how quickly a product team can become disconnected from reality, even in a small company.
Isn't this a very confident broad stroke forecast of the very type TFA rails against?
Sounds like they had the wrong idea of what alignment means. It should mean making sure everyone knows what problem is being solved, and focuses all feedback and concerns solely towards whether or not the project is on track to solve it. See the quote later in the article:
> The problem with all the bad advice was that it was unrelated to the problem we were trying to solve.
Yup, there's your problem. There was no true alignment on a goal.
Their solution to punt all objections downhill works, and in many places it might be the easiest answer. But the better answer is to wrestle objections as soon as they arise, focusing on whether or not they are relevant to the problem at hand. It is harder, but has a better result on all counts.
So when you get a suggestion from such an individual, it seems like an attractive option just to humor them and go build what you wanted to build.
Then I got experience and deeper insight so I became a hedgehog, because that seemed the thing to aspire to. However not for long, because I started to suffer from the lack of nuance that hedgehogs sometimes have.
Now I'm a confused wannabe fox. I want to be a fox, or rather I cannot be a hedgehog or parrot, but I'm in a continuous state of confusion and doubt and have an impatient urge to know more about everything. When it is required of me I can be pragmatic and clear, but those are snapshots given the circumstance.
One could also say I'm a curious and critical thinker. But that would be a euphemism. I wish I could be a hedgehog and act on it, while having peace of mind.
I also love his technology substitutions:
springy search = Elastic
Stoplang = Go
IronOre = Rust
BeetleDB = CockroachDB
> That was our number one secret to scaling when I was at warble
warble = google ?
> they are always talking about a thing that is the most critical thing in every case.
Yes, the quality person will always advocate for more tests. The safety person will always prioritize safety. The person managing the schedule will always prioritize deadlines. The cost manager will always prioritize the budget. It's inherent to human biases.
However, unlike the author I think “reaching alignment” is actually pretty important. But I don’t think alignment is about “the thing” central to the domain experts focus, but rather about reaching alignment about acceptable risk.
“What’s really at risk if we don’t meet the standard for test coverage?”
“What’s the real safety risk if we don’t implement that testing strategy?”
“What’s the risk to the schedule if we miss this deadline because we implemented that extra safety?”
“What’s the risk to the budget if we miss schedule?”
In each of these, if you can reach alignment on acceptable risks it does a lot for the effort. It doesn’t mean any one “thing” has to be a priority in every instance but rather put in the context of overall risk profile. Conversely, if you avoid reaching alignment I’ve worked on teams that will actively subvert the effort because they don’t believe you agree with their “thing” as a priority in any case.
Standards aren’t written in stone but are there partly to guard you against biases. Are you not meeting them because the risk changed to be more acceptable? Or are you just rationalizing some unconscious bias? Explicitly defining risk helps here.
I will say for that to work you need to have people who are willing to openly accept risk. I’ve also worked on teams where that wasn’t the case and alignment could never be reached because nobody wanted to be on the record accepting risk because if they didn’t make a decision they still had some plausible deniability.
I'm old, I could come up with endless such stories that would require epic solutions, but I'm not sure there would be much gain. Tell me why we will hit that issue and why we can't fix it any other way... not just the vague notion of a problem someone else had for who knows what reason...
And the reason I want that level of detail is because I've failed to accurately predict such issues time and again too ;)
Manager either demands or rejects an idea based on a totally different company -> I spend days/weeks proof of concepting/documenting/testing/validating for or against his mandate -> He says something like "oh, great!", and then continues to spread the mantra that x would never work for us because of previous experience.
It was really tiring. Having to constantly document why a piece of technology for a startup in a totally different industry would yield totally different results than when he'd used it at a top 5 company with insane scale and a 500 person engineering team. Especially because most of his understanding was just overhearing people say certain things, so he just said them too, but didn't really know why.
Human imagination is endless. You can't 'try catch' for everything that has happened sometime, even more so when addressing it at a high level without detail / nuance.
You easily can get nothing done.
That's exactly what we did. A year spent conducting experiments to validate or invalidate positions based in dogma rather than building, iterating and adding value.
Really wish I could just edit my comment.
As a leader of an early stage high growth biz, it’s critical to prune the team as these folks emerge. It’s not a happy event, but not everyone is the perfect fit for their current role and sometimes tough changes need to be made.
Not making these changes leads to A players - the innovators who ship - having exactly the experience described in the post. And they tend to leave as a result.
How do you identify them?
Simple answers conserve brain power.
I think we can keep using simple answers, we just have to apply different simple answers to different situations. Maybe one way we can do this is to collect simple principles in big lists and, when we feel like we might need to change our perspective on something, look through our list and choose a hypothetical principle to apply. This is basically the I Ching / Book of Proverbs, but you have to compile it yourself.
Regardless of what you think of OOP (I think it's overused), ORMs are genuinely a terrible idea for a multitude of reasons, even if a lot of people seem completely oblivious to the other options (and as a result think ORMs are a great option - when you can only imagine one option, it's automatically great, or at least that's how a lot of people seem to think).
All those people with their wishes and thoughts which bloat one single project usually have precious insights on how to unbloat the overall processes of creation and maintenance. It's of course impossible to fix it all in the span of a single project, but managing every project with the "it works and was delivered on time" mindset is the best way to tank the overall productivity of the company and lose developers, because you missed the insights they had about how to improve what they do, and do it well, and do it efficiently.
I guess it depends on context.
Yep. Occasionally there will be a complex problem with a simple solution. More often, complex problems have difficult solutions. And unfortunately, if you're in a room discussing the problem and one person gives a simple solution and you try to start a conversation about the complexities involved and resolving sub-problems... well, you could be the one who is wrong, but in my experience that's often not the case, though unfortunately people favor the easy answer. That said, I have also occasionally seen the simple solution proven right. There's just no one-size-fits-all approach.
Personally, I've been wrong both ways and right both ways in my career. When I get it wrong and I'm fortunate, I also come up with the better solution too. Regardless, I've learned to be skeptical of my own simple solutions too.
Whenever this has happened in my experience, it's come with the monkey's-paw irony that while the solution is simple, the reason why it's a solution, or at least a complete solution, is not. That almost makes it worse - people who aren't intimately familiar with the problem will see the simple solution, think the problem must have been similarly easy to understand, and then think that there must be something wrong with the solution.
A simple “that solution doesn’t seem to address $subProblemA or sufficiently handle $edgeCaseB” would do the trick, eh?
I've managed to head off some of those. Had a CEO who favored simple solutions, but would listen to options. And I had a good manager who brought me along to important meetings. The CEO would hear me out. Sometimes they agreed, and things worked out. Sometimes the opposite. Once, though, I did see them cut a Gordian Knot in a very simple way - though for some rather painful. I still think it was a drastically oversimplified solution to a problem where there were better options, but others had had a chance at those better options and dropped the ball (basically ignored it), and the problem did have to get solved...
Confidence (and its cousin, charisma) being dead ends - that's a profound thought shift from the dominant perspective that confidence is a modern leadership quality.
Anyway, I think this scenario sounds very much like the "pigs and chickens" metaphor in agile development. An external thought leader is a chicken. They have no skin in the game and their ideas are easy to communicate without really understanding the context.
In many of these scenarios, multiple product teams are supposed to exist specifically to allow you to use whatever the hell DB engine makes sense to your team. The question isn't then "is squeaky better than bongodb" but which is suitable in this scenario? Will it perform well for this project, can it be supported by this team etc.
Perhaps engineering leadership likes the idea of homogeneity across the entire company ("we use MySQL exclusively"), but is that really necessary? Let's ask the Thought Leaders!
Other than this, good article - I'll forever be more conscious of how I give advice going forward!
> Uncontingent advice is what I think of when I hear the term thought-leader - someone has a single solution that seems to fit every problem.
Apparently it's a reference to folks who always confidently advocate the same go-to strategies, regardless of context.
It takes courage to exercise this.
In decades of projects, I have yet to receive one.
When receiving offhand hedgehog like suggestions from someone important, e.g. the CEO, I tend to ask if they insist we pause and consider it, given that to do so would likely affect the cost and progress of the project.
In decades of projects, I have never been asked to do so.
One raised "serious objections" to the QA section, saying it wasn't detailed enough, we needed a full test plan, and demanding TDD be used for all development. They cc'ed the CEO with their review.
The second QA reviewer said the test plan was a "good high level approach" and that the only thing they wanted to see was an explicit "design for testability" requirement for one of the critical, and hairy, modules.
Springy Search - ElasticSearch
beetleDB - CockroachDB
bongoDB - MongoDB
StopLang - GoLang
IronOxide - Rust
YourSQL - MySQL
I come from an EEE background, and "Foo" and "Bar" just don't make sense to me in the way that i, n or f do. Sometimes Foo is a function, sometimes it's a variable, sometimes it's a whole framework.
BeetleDB, on the other hand, is clearly a DB client. It's on par with "Alice" and "Bob" in terms of clarity and minimal mindfucks.