> they hired a team of developers without having a technical person on staff to vet them
For any non-technical entrepreneurs reading this: please don't do this. If you aren't competent to hire developers, please borrow or rent a few trusted technical experts and use them as a hiring committee. Otherwise you are less likely to hire the best technologist, and more likely to get the most glib and appealing technologist.
Over the years I have met far too many smooth-talking consultants and would-be employees. And I've seen them cause enormous garbage fires when managers hire beyond their ability to evaluate. The size of these garbage fires is a direct result of what got them hired: if someone is good at telling managers what they want to hear, they can go a very long time saying, "Yup, we're building it and it will be great! Just you wait and see!"
And then, no matter who you hire, demand a "ship early and often" schedule. Ship to internal users. Ship to alpha users. Ship to external testers. Heck, ship to just one person for just one narrow use case. The earlier you can start seeing real-world success or failure, the earlier you can course correct.
And yet there is an answer: hire an experienced engineer. The author glosses over the problem by not realizing they "solved" it by hiring him.
It's the bad calculus where one "experienced" engineer is going to cost you maybe $150K-$200K, while for the same money you could get anywhere from 3 to 10 junior engineers, depending on how far out you go.
It would be awesome if there were some sort of certification for "experience," sort of equivalent to guild-master status. It isn't an education thing; it's an "oh yeah, I've seen that before, let's not do that, and here's why" kind of thing.
While I am sure that _some_ fail because of the abject inexperience of the staff running them, this is hardly ever the complete story. Moreover, projects do fail even with a fully experienced, fully vetted staff running the show.
The article just seems like a consultant marketing his acumen. There's nothing wrong with that. But keep in mind, he dropped in over a weekend and spent a few hours detailing problems that needed fixing and the staff subsequently made corrections and released on time.
It could easily go the other way too. The "inexperienced" team could have instead brought in another seasoned consultant who, after a few hours, got the wrong idea and led them down the wrong path, precipitating an even larger slippage of the ship date. THAT HAPPENS TOO, and I suspect it is even more common.
Of course, that's probably more true of my environment (third world country, enterprise software) than others.
Sure - but in that case they usually don't fail because of technical decisions (which is what OP was talking about). There are however many other ways a project can fail (bad management mostly).
Ultimately, if you cannot identify critical staff and don’t know anyone who can, you’re screwed anyway.
A few of those trusted technologists will make a fine hiring/oversight committee. You won't need a ton of their time, so you can pay very generously for the insurance they provide.
You may have to hire them as consultants. You have to dedicate a day or more to discussion with each of them -- maybe considerably more if your problem is complex. You have to reveal information about your plans and/or the condition of your systems that may make you anxious or embarrassed.
After you've gone over the problem separately with five or ten putative experts, you will have a pretty good idea which of them understand it (if any), and you will also know how to ask much better questions going forward.
If necessary repeat this process, using your better understanding to find better experts and gain a better map of the territory.
Once you can rank expertise in your domain reliably, you might hire one (or more) of the known experts, but the best ones will be hardest to hire. Or you may use your new meta-expertise in the domain to look for more junior hires who have an unusually good grasp of your issues. Also, you may be able to get the best of your experts to act as advisors.
This takes a lot of time and may cost some money depending on the domain. But not doing it will take more time and cost more money in the medium to long term (and even in the short term sometimes).
I wonder if the people who become fantastically successful simply have the ability to evaluate expertise, without having any of it. i.e. by "reading" people, in some sense.
This relies on the "expert"'s own evaluation of their expertise, which mightn't be accurate, because of Dunning–Kruger, or conversely, self-doubt. So, one would need to "read" those too.
But you don't have to become an expert. If I hire a plumber or carpenter, I cannot do what they can. But with enough work I can understand which plumbers or carpenters will do a good job.
The problem of separating experts vs non-experts might be the single biggest problem society faces right now.
Think Elon Musk learning about rockets, back before he lost his mind.
I did it once for climate change, took a couple months, some math. Got there though.
You don't realize the difficulty of the problem until you must hire someone outside your own areas of expertise.
Small problem: Ask friends for someone they have worked with on a similar problem.
Large problem (building an org): Ask 5 VCs/CEOs/VIPs for intros to the best person they know in that role, under the explicit direction of "learning what good looks like." Interview all 5 with open-ended questions. Ask each for one more contact. Eventually you will learn to pattern match, and one of the folks you like will volunteer themselves or a friend, and you will know how to interview.
At some point, you want to bring that knowledge in house; the recruiting company will then help you hire people or let you convert some of their contractors (not their full-time people) to full-time.
Some will have a combination of onsite developers, “rural source” developers who are in a cheaper part of the US, contractors and outsourced developers but they manage the entire project.
I was offered such a position - the pay is above market, but it requires too much travel for me right now.
Scenario #1 - you are a founder with a great idea; you hire a good software dev lead who then fills out the department. You work with the dev lead to hash out your plan. They organize it, and after certain checkpoints, the dev team demos progress. As the founder, you are at the top of the chain, making sure that there is product-market fit and ensuring things are going as planned from a high level.
Scenario #2. You contract with a consulting company. Your project is led by a dev lead/project manager paid by the consulting company, and they fill out the team using their own staff.
They organize the team and after certain checkpoints, the dev team demos progress. As the founder, you are at the top of the chain making sure that there is a product market fit, and ensuring things are going as planned from a high level.
There is no difference in the delegation chain or the risks involved.
The main difference is that you are writing one check to the consulting company and they are paying the contributors and in the other case, you are paying employees individually.
The usual retort is that employees have more loyalty and ownership in the company. No they don't. The employees at your company, if they are smart, have one eye toward their next job, just like the consulting company has one eye toward its next contract.
I have this problem internally at my company. Recruiting has crazy goals (hire 30 developers per month), so they push us to accept whatever they can get.
Sure, we can keep refusing the candidates but at some point you have to accept someone to get the job done. Even if you need to double check everything they do.
The reasons for that are too complex for a small post but believe me, there are many.
Hire a recruiting company. There are recruiting companies that have a staff of full time consultants that will bring in a whole development team and manage the product.
You are replying to a scenario that is just the opposite of my suggestion. The client company is not hiring developers at all. They are contracting out with the consulting company to manage the entire project. The founder is working with a project manager employed by the consulting company to develop a product. The recruiting company is doing all of the project management, development, QA, etc. The founder is hopefully coming up with acceptance criteria for the product since it was his idea.
After the project gets off the ground, the recruiting/consulting company works with the client to develop in-house expertise, starting with finding a dev manager who hopefully understands both the business and the technical side; the dev manager then works with recruiters to staff up.
You’re not hiring anyone at first. You’re basically outsourcing the entire development project to an outside agency. If you are a non technical founder, this may be the best way to go.
There is a local recruiting/consulting company that has been trying to get me to come work for them directly full time in the role of team lead, I just can’t do the travel right now.
There’s no easy way around it. In fact I’ve seen some teams too eager to rip up existing production workflows every time some new technology becomes popular and that’s not productive either.
But how do you know when you have done that? Experience alone doesn't necessarily mean they can solve your problem.
Experience alone doesn't necessarily mean they can solve the problem and ship/write good code. I've worked with plenty of "senior" engineers who have 20+ years of experience yet have never dealt with any serious issues. They usually work at companies where a small team of "ninjas" handles those sorts of things, and the run-of-the-mill Dilbert coder shows up and does his thing. This is not a knock on those people. They are important in their own way, truly. But I would not want them to be the responsible party for steering the ship in a new app.
>oh yeah I've seen that before, lets not do that here's why
Which can actually be a pretty bad way of going about it imho if you only use your own direct experiences for that. Just like science, basing your decisions on a limited number of anecdotes is problematic because small biased subsets are not representative of the whole.
And like science you can base it on the collective experience versus your personal experience. Preferably from rigorous studies but we mostly lack those in CS. So hopefully you're reading books, blog posts, studies, conference presentations, discussing with colleagues, etc.
>this is why in most serious technical interviews today, you will often get asked about your past experiences.
Interestingly the top companies seem to be focusing a lot less on experience and a lot more on problem solving/system design/etc.
> Interestingly the top companies seem to be focusing a lot less on experience and a lot more on problem solving/system design/etc.
As long as we're making conjectures: this is likely because a lot of them have tools and processes that are very different from what you see "in the wild". e.g. working at Google means you can leverage all their deployment infrastructure, something you likely didn't have access to outside (this is somewhat changing with k8s and cloud services).
The general reason given for non-representative algorithmic whiteboard coding problems is "you can't trust anybody's experience." But those questions are themselves not resulting in production-ready code, so you're flying blind on "can figure out a quick way to solve a problem" vs "knows what to do to turn a quick solution into something that will last."
And then there's a time-based calculation you have to make: hire fast and hope they'll learn on the fly, or slow down and delay until you find people you hope won't have to. You're hoping either way, after all. :)
But "experienced" and "cultural fit" are mutually exclusive for most startups.
A junior won't even know the problem and will just hack it together.
But hey, that's what reddit /r/askprogramming is for... right?
You need someone who can bring other developers along, teach with purpose, etc. Experience developing alone does NOT automatically provide that.
There are experienced developers who are OK, some who are amazing, and some who, for all their skills, straight up can't lead or help other developers for any number of reasons. For those who can't / don't want to, that doesn't make them bad or anything, just not suited to helping others.
And the bloke needs time to train. If the developer is getting chased from dumpster to forest fire and back he won't be able to pass on knowledge and best practices.
How do you deal with someone who gets to a problem, can't figure it out, and then just drops it without telling anyone or saying anything, and you don't figure it out for days or weeks?
Personally: I would advise getting lunch with them and getting to know them better. Set up a 1:1 if lunch isn't possible. Everyone likes honest feedback, or at least a chance to talk about their difficulties in getting tasks done.
If it was important and estimates on how long it will take are overrun, then don't wait to follow up on it and find out how it's going.
Unless the problem is more nuanced like a small component of a larger body of work? Like knowingly leaving flaws in an implementation, or something.
I wrote about once simple process here: http://williampietri.com/writing/2014/simple-planning-for-st...
The key points to keep things on the rails: 1) it's a work queue for the whole team, and the whole team is responsible for every work item, 2) the units of work are small, so they should generally be finished in a day or two, and 3) you do quick daily check-ins to make sure work is moving along.
If you add pair programming with frequent pair rotation (e.g., 2-3x/day) then you make it impossible for a person to get stuck for long periods. Even if two people are stuck for a couple of hours, when pair rotation happens then the pair can ask somebody with specific experience to tag in.
Not to solve this problem they don't.
I wouldn't assume I could start a restaurant or run a medical practice, why do so many people with no technical background assume that they can start a business centering around software?
History is littered with people who ignored your rule here and have been successful.
I guess we can flip that around: why do so many people with no business background assume that they can start a business?
If you want to start a company which makes software, actually knowing about software is going to do wonders for your chances of success.
As for your flipped question, “business” is not a background. Other than perhaps Math, English, and common sense, there is no knowledge that is so abstract that it can be usefully applied to any kind of business.
Unlike other people responding, I agree that people without a business background struggle when starting a business. There are a lot of developers who start a business and have no clue about pricing or marketing or sales. They have a high failure rate.
If you know one side, you may succeed if you're a quick learner and/or your core idea is good enough, but you're at a disadvantage compared to an individual or team with knowledge of tech and business.
 Note that these are all things a developer could learn without a business degree, just like I learned to program without a CS degree.
Do you have some examples? And do you have data supporting "lots"? Because I find that highly implausible.
Nowadays, though, there are dozens of different dynamics involved in the management of restaurants. As such, it is very difficult for someone who has both a culinary and a business background to open a successful food shop. To believe that someone who does not possess such expertise could do well in this arena is foolish.
Managing restaurants has always been difficult, but it is becoming harder with each passing year. Nobody in their right mind would recommend a complete amateur to open one of these shops given the current economy. They would need some sort of experience in the food industry before even attempting the feat.
 - https://en.wikipedia.org/wiki/FDA_Food_Safety_Modernization_...
Because starting a business, in and of itself, is trivial. Children do it with lemonade stands. It's "the thing that the business does" that is hard.
This is clearly a management failure. If you start a new department you should at least try to seed it with experienced employees from other parts of your company, even if they're on a loan.
Repeat this process several times until you have someone who will interview for you.
I think to some extent, it's a matter of being lucky picking the right person to vet, since you can't vet that person yourself.
They lacked experience in many areas. Ironically they ended up firing their only experienced executive (CTO). It's too bad, they had passionate engineers who could have built some amazing things (as evidenced imo by the amazing things they later built for Netflix/Google/Facebook/Amazon/etc) but they were stuck at a company that sort of wanted to be a tech company but didn't want any of those annoying "IT people" involved in decisions.
> they had never released a production application before.
It's entirely understandable to end up with a poor-performing team if you start out without any in-house technical knowledge. But how did they end up with a team comprised entirely of people without any experience at all? That to me indicates that, rather than being unable to identify good candidates, they intentionally tried to cut corners. Telling an organisation like this how to identify better candidates isn't going to make a blind bit of difference because they aren't interested in better, they are interested in cheaper.
Or maybe they suck at sourcing candidates, but don't know it. So they hired the top 1% of their candidate pool, without knowing what good looks like.
This makes it impossible for someone with skill N to hire people with skill N+1, other than by dumb luck.
Current political discourse contains lots of examples of people at level N+1 (or emulating that) convincing lots of people at level N that all the people at level N+k, k>1 are dumb, crazy or evil.
A key issue here is how one responds to not knowing how good those ideas are. Dangerous responses: "I can't tell so they must be crazy" or "I can't tell so they must be brilliant." Accepting that one doesn't know mitigates the risk.
10 years later, I still know what I need to learn to get to the next level.
Perhaps you're trying to reformulate/restate the "Blub paradox" ?
I do like your idea of shipping early and often so you can see the progress and actual results. If I had a dollar for every "we completed 80%" but not being able to demo the product excuse, I'd be so rich that I'd be buying Jeff and Bill dinners. :)
Neither of these assertions is true. The second one is easily falsified (even without being pedantic about "nobody"), as evidenced by the existence of "used car inspection" on the price lists of at least some mechanics, if anecdata is not compelling.
The first assertion, in its falsity, makes for an excellent analogy to the topic at hand, which is evaluating developers.
Even a legitimately trustworthy, expert evaluator will not be able to provide an evaluation that will reliably predict real-world performance over the next several years. This is true for interviewing candidates and for used cars. What one hopes for from the evaluation is to significantly reduce risk by filtering out obvious (for one's definition) dealbreakers.
Personally, I only buy used cars (and typically 8+ years, 75k+ miles). The logistics of getting them to my mechanic for an inspection are the hardest part of the process when it's a non-dealer seller.
In France, every used car sale must be accompanied by a recent inspection certificate. Go to any inspection center and they will do it easily for a fixed fee.
Note that inspection centers can only offer inspections, no repair of any sort, to avoid the obvious conflict of interest.
That depends on the definition of "inspected".
> recent inspection certificate
What information is provided? Are there any details, or just that it passed inspection successfully?
Details that are important to me are generally proxies for abuse or poor maintenance habits (e.g. cheap aftermarket parts), though sometimes they're merely an indication of what maintenance has or hasn't been done, in the absence of records (e.g. signs of age/crack on rubber parts).
> Go to any inspection center
This, already, makes it useless for me. I prefer a certain subset of car manufacturers, and I need an expert in the repair of that make of vehicle to adequately reduce my risk to tolerable levels. At the very least, as the buyer, I need to be able to choose the inspector, to avoid the obvious conflict of interest.
> easily for a fixed fee
This sounds remarkably like California's requirement for a recent "smog" certificate prior to a used car sale. It's easy enough, but some sellers still don't do it. Other than avoiding a minor hassle and cost, it offers almost no benefit to the buyer.
> only offer inspections, no repair of any sort, to avoid the obvious conflict of interest.
This conflict would seem to exist only because it's chosen and paid for by the seller. As a buyer, I very much want the inspection to be done by a mechanic who is experienced in actually maintaining and repairing that particular vehicle.
Cracks would most certainly be part of it.
You can choose where you do the inspection. There are a lot of choices; it's like repair shops, but specialized in inspection. It's done by real mechanics and it's certified.
Having thought about my own process, I realized that, even if all the relevant details are exhaustively documented in writing (which would be a huge waste of time/money), what's valuable to me is being able to talk to my mechanic about them.
The written report is just a summary/highlights for me (and the seller) of topics that are either directly of concern, or, as I mentioned, proxy indicators for issues that are impossible to spot with an inspection.
> If it's any important, you have to fix it and come back for a counter visit.
That is something I most definitely do not want. If there's any major maintenance or repair to be done, I trust my choice of mechanic more than the seller's.
I also very much don't want a forced repair to cover up how bad the situation had gotten previously. If the certification process doesn't include the full history, including failure/remediation, then it's a borderline scam.
> Cracks would most certainly be part of it.
And yet they aren't certain to be signal instead of noise. Context matters. Reading "surface cracks in timing belt and CV boots" is useless.
Knowing the difference between the timing belt merely showing signs of age appropriate for the age/odometer reading of the car (and that it can be replaced on schedule) versus being shockingly old and could snap at any moment, possibly bending the valves (which varies by engine design, so even that context is important), is a hugely valuable indicator.
> You can choose where you do the inspection. There are a lot of choices, it's like repair shops, but specialized into inspection.
I, the buyer, can? You've made it sound like it's the seller doing the choosing. Do sellers end up with a stack of certificates from different buyers' choices?
> It's done by real mechanics and it's certified.
At the risk of sounding like a No True Scotsman argument, I'm skeptical. If these shops are employing their mechanics full time, and they only perform inspections, they're not real mechanics, for my purposes.
Certification is meaningless, unless it provides the buyer with some kind of practical, monetary protection.
So, yes, useful inspection, that allows me to buy used cars that have a predictable long-term maintenance/repair cost of USD0.12/mile is hard (in the sense of tedious and time consuming). An inspection that merely prevents me from being cheated and buying a total piece of junk (in some large number of cases) may be easy, and, though not totally useless, may be close, if it doesn't improve my chances beyond random luck.
The seller has to get the car inspected. It's a standard inspection designed to detect many, many problems. The buyer can certainly get it re-inspected if he likes, too. The inspection is a legal document that's important in case of litigation. It provides protection to the buyer and the seller.
A car cannot be sold without an inspection certificate. I think the car is also forbidden from being on the road if it didn't pass an inspection in the last 2 or 3 years.
A timing belt that's shockingly old and on the verge of breaking is a serious issue. The vehicle should not be allowed to be sold or driven in that condition.
Oh, I understand the concept. It's government regulation of behavior with the intent of protecting the public. I'm quite accustomed to a form of it, with California's emissions checks, as I mentioned.
I just don't believe it produces the desired (to me, the used car buyer) outcome and therefore don't want it. I'd be interested in seeing data showing otherwise. Are long-term maintenance costs reduced by this system?
> The buyer can certainly get it re-inspected if he likes, too.
I posit this is the inspection you need to be comparing the ease (and frequency) of, not the initial, seller-initiated one. How many buyers in France actually bother? If it's close to zero, then it's just as hard (if not more so) to get a car inspected in France as in the US.
> The inspection is a legal document that's important in case of litigation.
Now that's something I do profess being entirely unaccustomed to: litigation over a used car. I don't think I even know someone who knows someone who has been involved in something like this.
I'm sure it's because US laws don't, in general, support it (and our courts, even "small claims" ones, wouldn't have the bandwidth for it).
I don't want it, anyway, as it could serve to drive up the average price of used cars or drive down their availability. I have a high risk tolerance, so I favor a freer market.
> A car cannot be sold without an inspection certificate. I think the car is also forbidden from being on the road if it didn't pass an inspection in the last 2 or 3 years.
This is exactly like California's "smog", and like other states' "safety" inspections (which may now mostly be in the past).
> A timing belt that's shockingly old and on the verge of breaking is a serious issue. The vehicle should not be allowed to be sold or driven in that condition.
I strongly disagree. Although even I might personally choose to tow such a vehicle to the mechanic after purchase instead of driving it, I most certainly don't want the government forcing me to do so.
I most certainly don't want the seller being forced to cover up previously poor maintenance habits by replacing only the items that are visibly bad. That actually increases information asymmetry, in a way that cannot be alleviated with additional inspection by the buyer, which is, presumably, contrary to the point of any required and documented inspection. (You never answered whether France's inspection certificate includes a full history of such failures.)
Although it doesn't guarantee the behavior, it certainly encourages the seller to repair only the required part before the sale and to use as cheap a replacement part as possible. To the extent that, as a buyer, I'm reimbursing the seller for the expense of that repair as part of the purchase price, it's usually money flushed down the toilet.
The case of timing belts on Hondas exemplifies my predicament. The water pump is driven by the timing belt. A water pump never lasts two timing belts. The vast majority of the cost of replacing either is labor, and the incremental labor cost to replace both at the same time (plus ancillary drive belts) is close to zero. Total cost is around $700 (and reliably lasts 75-120kmi depending on model). A timing belt (only) replacement might be $400, possibly even less with a cheap aftermarket part. The problem is, the water pump could fail immediately (likely enough if the timing belt was way overdue before replacement), and that repair is going to be around $600, plus towing costs and the cost of my time and the inconvenience of a breakdown. If there's an aftermarket timing belt, I'm just having the whole $700 replacement done immediately and not risking it, just as I would with an obviously-too-old belt. If neither of those signals is there, I'm left with the risk of not knowing. That risk exists regardless, but it's increased if the seller is routinely forced into this behavior.
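Back-of-envelope, the gamble works out like this (dollar figures are the rough estimates above; the $100 towing/hassle cost is my own assumption):

```python
# Expected-cost comparison: replace belt + water pump together vs. belt only.
# All dollar figures are rough estimates; towing cost is an assumption.
combined = 700     # belt + water pump + ancillary belts, done together
belt_only = 400    # timing belt alone
pump_later = 600   # separate water pump job if it fails afterwards
towing = 100       # assumed towing/hassle cost of a breakdown

def expected_belt_only_cost(p_pump_fails):
    """Expected cost of the belt-only gamble, given the probability the
    water pump fails before the next scheduled belt replacement."""
    return belt_only + p_pump_fails * (pump_later + towing)

# Break-even failure probability: above this, doing both at once is cheaper.
break_even = (combined - belt_only) / (pump_later + towing)
print(round(break_even, 2))  # ~0.43
```

Since a water pump essentially never outlasts two belts, the real failure probability over the belt's life is close to 1, and expected_belt_only_cost(1.0) is $1100 - far worse than the $700 combined job.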
I was reading about Lisp's "social" problems on c2.com recently, and there was a mention of an attitude of "you only don't like it because you don't understand it." Such an attitude could be uncharitably condescending, but I don't intend to suggest such an attitude here, as that would be uncharitable on my part.
There already exist options for a relatively wide spectrum of risk tolerances, including new cars (which are covered by "lemon laws" and have available extended warranties, a.k.a. mechanical breakdown insurance (MBI)), manufacturer-"certified" used cars (which, with some manufacturers, make them eligible for the same extended warranty/MBI as a new car), and merely not-as-old used cars with third-party MBI. I actually purchased and used MBI on my one American car (a full-sized van, so no Japanese, or even, at the time, European options), originally purchased new, and it worked out very well.
You may detect a ("small L") libertarian political bias here, which I admit to, but I actually would agree with regulations prohibiting driving a vehicle with maintenance problems that are immediate safety concerns. Tires with inadequate tread depth, for example, continuously affect the safety of the vehicle during its operation. The same can't be said for a timing belt that may fail.
Oh yes, I forgot to mention, reliability is another goal besides my 12c/mi long-term maintenance. In my entire time of implementing this goal (i.e. after my first couple cars of my youth, where my explicit strategy was to save money with minimal maintenance and by not repairing until a breakdown occurred), excluding the aforementioned American car, I've experienced only two mechanical failures that rendered the car undriveable. Interestingly, both were burst coolant hoses, and both were close enough to be emergency-repaired and driven to the mechanic without a tow. This is 20+ years and over a million miles.
The regulations work fine here. They ensure that problems can be detected and acted upon. It's simple and it's much better than not having any inspections or inspections run by the repair shops.
As for the documentation: the seller should provide the history with the car, although there is no legal requirement for it. The few expensive tasks, like the belt, are definitely things for both parties to watch out for. It's important for negotiating the price up or down, or avoiding the purchase entirely.
I once bought a car second hand where the seller had meticulously kept every service and repair bill from the time he bought the car many years earlier, all done exclusively at the manufacturer's repair shop.
Although it's true that I'm generally skeptical of this form of regulation, primarily because there's plenty of evidence of its ineffectiveness, in addition to plenty of undesirable unintended consequences, I'm not categorically against regulation.
More importantly, I fear you've missed some nuance, in your haste to apply that uncharitable characterization.
Even if the regulation is successful in its goal, that goal still differs from my own.
In this situation, I want neither what (I think) the regulation purports to achieve, nor what it actually achieves.
> The regulation works fine here.
I'm sure you think that, because that's what you're accustomed to, if I may paraphrase your earlier comment.
> It's simple and it's much better than not having any inspections or inspections run by the repair shops.
You say this, but you don't back it up with credible evidence or even reasoning.
I'm not, of course, suggesting that prohibiting any inspection would be better, merely not requiring the "simple", mandated process you've described. I've explained why it's worse than buyer-chosen inspection.
You certainly haven't explained why inspections run by repair shops would be worse, other than alluding to a conflict of interest. As I mentioned, I only saw such a conflict with the seller, not me, the buyer.
> It ensures that problems can be detected and acted upon.
This is an example of a goal that I don't have. At best, I don't care about an arbitrary definition of "problem" applied to someone else's car.
> The seller should provide the history with the cars, although there is no legal requirement for it.
If it's not included/mandated, then it's safe to assume it's not going to happen (often enough that it cannot be relied upon). "Should" doesn't enter into it. It means that the forced repair prior to certification is tantamount to a cover-up. This is an example of, at best, an unintended consequence, and certainly a goal I don't want.
> The few expensive tasks like the belt are definitely things for both parties to watch out for.
That was neither a particularly expensive example, nor is it particularly rare. Clutches and suspensions can easily be more expensive and have the same OOM of wear-out time, for example. Their remaining life is also generally unknowable from inspection, so proxies that are visible, like the timing belt, are critical to me.
As I said, the repair-before-certification part of the inspection process can make it impossible for me, the buyer, to "watch out".
 Trust is an entirely different matter. It has to be earned. Better yet, don't resort to trust in the first place or "trust but verify". We can measure outcomes of regulation. There's no excuse for me (considering myself intelligent, informed, and scientifically-minded) blindly accepting "trust us! we know what's good for you!" from any group of humans, be it politicians, civil servants, corporations, or individuals.
I did that...
In reality, there’s a balance between moving fast and moving slow. It’s difficult to communicate that balance because every type of product demands a different balance. I suppose that intuition comes from experience, which is a terrible answer for someone trying to learn.
I'm guessing this continuum also depends on the nature of the business one is in. In a fast-moving, consumer-oriented world like social networking, "move fast and break things" is probably very good advice.
In a slower-moving, enterprise-oriented world, "move slowly and steadily without breaking things" is probably the right orientation. Those kinds of companies probably die in fast-moving consumer land.
A while ago I read an article about the programming teams that work on the space station, on satellites, and that sort of thing. Alas, I cannot find the link right now, but the gist was that those teams move very slowly and work very hard towards correctness. Any bug found is generalized and their entire code base scoured for similar bugs. That's a different environment than social networking.
With those guys, it's fine to execute risky changes on production systems, as long as those changes are tested, communicated, scheduled for a time that's non-critical for the customers, and include rollback plans.
But yes, this naturally slows things down, or makes systems more complex and expensive due to redundancy and staged rollouts.
If the risk to your end users is low, or the hazards are minimal, then yeah you can push crap code to production daily because nobody is actually harmed by it. If the crap code carries a risk of say bringing down your payment or accounts system, causing you to lose revenue, then you might spend more time mitigating the risk by testing, reviewing, and improving the code.
On the extreme end of the spectrum, where bugs can crash your spacecraft into the atmosphere or break your $300 million probe, you tend to spend a lot of time validating and verifying your solution because a single mistake is very costly.
Somebody at our org converted everything to microservices because some fast-talkin' marketer got into their head. It's now a pasta mess. We didn't NEED microservices; wrong tool for the job.
I'm not so sure about that particular example, given the privacy implications. (And beyond...)
If your mantra specifically encourages breaking things in production, you've chosen to be flippant instead of thoughtful; or you actually want your product to be broken. It's likely that a more thoughtful mantra could result in better outcomes without discouraging urgency and innovation.
When a bug in your application presents itself in production, something very specific should happen: 1) identify the symptoms, 2) isolate the causes, 3) address the issue in production, 4) perform a postmortem to identify the next steps, 5) fix the problem's causes throughout the entire supply chain so it cannot happen again.
A lot of organizations get stuck only doing steps 1-3. That's not good. Operational processes are just as important, if not more so, than development processes. They're what ensure that when you find inefficiency or bugs, they not only get fixed, but never happen again. This means actually making policy changes, training changes, etc., in addition to just fixing code.
This is the heart of lean manufacturing that supposedly influenced a lot of modern software development methodology. But some people may have forgotten that the lean manufacturing chain doesn't end at development.
Surprisingly few organizations genuinely care about product or process. The market opportunities for companies which do are staggering.
Many enterprises are still stuck in effectively yearly release cycles, not to mention the number of enterprises still encumbered by legacy systems e.g. mainframes etc. Pick a "target" by making some phone calls, doing some networking, find a business which is encumbered this way, and start a competitor. It should not be difficult to steal business from a competitor which a) will take a year to release a response b) whose response will be handicapped by needing to support internal legacy systems with its response.
If you think easy targets like that don't exist, you need to get out of SV and spend some time with the rest of the world.
Technology != software.
This is a distinction that I've found the vast majority of software people implicitly assume doesn't exist.
A sibling comment points out that physical logistics sometimes can't be solved (competitively) by technology alone, but I posit that this is true even within technology. That is, not all technology problems can be (best/competitively) solved by software alone.
None of FAANMG depends exclusively on software running on someone else's hardware and network.
G famously customizes their servers, and they had hardware frugality as part of their core strategy from the very start.
> encumbered by legacy systems e.g. mainframes
That may not be as great an encumbrance as you imagine, considering the kind of performance modern mainframes can deliver.
Are they over-paying for that performance, compared to best-case commodity hardware? Of course, since mainframes (for the latest models) are a monopoly. They may also be over-paying for the software and/or developers.
A startup with AWS-based infrastructure will be over-paying, too. Cloud may not be a monopoly, but it's at least arguable if the competition is robust.
That distributed computing system, with all the inefficiency added by the distribution, may be remarkably competitive for someone that's paying below commodity prices for infrastructure. For someone paying 10x commodity to a cloud provider, not so much.
It takes an effective management team to make this step work. If the postmortem isn't blameless, this step becomes completely ineffective as employees (not unreasonably) prioritize their own careers over the health of the company.
And there's a big difference between "blame-free" and "unaccountable". It's not the fault of a team of developers if they aren't experienced enough to take responsibility for a critical production service, but the solution is not to berate them for being inexperienced: it's to give them the support they should (in hindsight) have had in the first place.
There are definitely things you can't fully automate, but the goal should be that doing the right thing is really easy and making mistakes is hard.
It reminds me of an outage a few months ago. We lost a pair of systems, couldn't log in and run anything anymore.
Turns out, an operator ran a "rm -rf /home/user".
He was following a procedure that includes cleaning a subdirectory. The subdirectory didn't exist, so he figured, let's clean whatever top-level path exists!
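A procedure like that can encode its own precondition, so the operator never has to improvise. A minimal sketch in Python (the function and path names are invented for illustration):

```python
import shutil
from pathlib import Path

def clean_subdir(base: Path, subdir: str) -> None:
    """Remove only the expected subdirectory; refuse to improvise."""
    target = base / subdir
    # Fail loudly if the expected path is missing, instead of
    # falling back to deleting something broader.
    if not target.is_dir():
        raise FileNotFoundError(f"refusing to clean: {target} does not exist")
    # Guard against the target resolving outside the base directory.
    if base.resolve() not in target.resolve().parents:
        raise ValueError(f"{target} is not inside {base}")
    shutil.rmtree(target)
```

The point is that the script, not the operator under pressure, decides what happens when the precondition fails.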
It is true that some people are not good at their jobs. It's also true that the only way to discover that people are bad at their jobs is their performance.
However, I think the process approach is applicable to a much larger domain than many people give it credit for.
One of the greatest software development quotes I've read comes from DJB:
> For many years I have been systematically identifying error-prone programming habits—by reviewing the literature, analyzing other people’s mistakes, and analyzing my own mistakes—and redesigning my programming environment to eliminate those habits.
The difference in security bugs found in qmail and sendmail speaks for itself.
This is an example of the superiority of process: it shows that what some people consider a "root cause" is actually only a proximate cause - that the root cause is the programming environment that allows the bugs to occur.
If an incompetent person is able to take down a competent person's system, the "competent" person is to blame.
This doesn't work. You shouldn't focus too much on making the causes of bugs never happen again; they will happen, and they don't matter much for operations, as new code will bring back old causes of bugs and introduce new ones. What you should focus on is making sure that when bugs do happen they don't cause any trouble. Be realistic about bugs; expect bugs.
I would say most bugs are entropy, or a human doing something wrong, or processes that haven't accounted for some edge case. A lot of these can be prevented, and a common way to do that is to watch them happen, document them, and then implement a mechanism to fix the cause when it starts to happen.
This strikes me as an extremely arrogant and naive way of thinking. "Customers, stakeholders and investors: your interests have been damaged, but rest assured, lessons have been learned. We will continue with this approach as it makes sense to us."
Having data loss while migrating a database to a new release version is never acceptable. DB migrations should always work correctly, no discussion.
Having some css issues where some menu is displayed in a crooked way is perfectly acceptable.
Once you start qualifying it with "it's only for cosmetic changes not for serious stuff" it stops being worthy of a discussion.
About your steps: the problem I often see at work is that a bug in prod doesn't get patched in the mainline and then pushed all the way up ASAP, so that it can't accidentally get promoted again from where it still exists in test or preprod.
I like the CI/CD approach of just having everything go down the same pipe, as it seems to be a decent way to address point 5.
This is not some universal law. It’s true only when the project/team is not willing to make the trade-offs (like development speed) needed to deploy bug-free code. For most software it’s better to release with bugs, but not so for all software. For some software, like safety-critical projects, things launched into space, etc., it’s better to take your time and release without bugs.
Besides, good development practices, good requirements, good change control, and good testing can prevent the vast majority of bugs from reaching production. Sadly, this idea that bugs in production are inevitable is pervasive in industry.
Anything less than perfection at any stage will be a vector for bugs, whether that bug is a syntax error, unexpected behavior, or a fully tested and functional feature that doesn’t meet expectation.
Mistakes are inevitable and not necessarily bad when you handle them well (move fast and break things says nothing about what you do with all the breakage, so feels immature). Calling them bugs doesn’t change that.
> good development practices, good requirements, good change control, and good testing can prevent the vast majority of bugs
prevention of the vast majority of bugs does not negate the claim that bugs will always happen in production.
It is not even easy to tell whether a piece of software has no bugs or just none you have seen.
If breaking stuff means that your website looks weird, that's survivable.
If breaking stuff means that performance sucks for a while, that's survivable.
If breaking stuff causes unavailability during a critical period of end-user demand, a few incidents might be survivable.
If breaking stuff causes your company to have a terrible reputation for privacy, security, or competency, that might not be survivable.
If breaking stuff causes your company to divulge financial information, that might not be survivable.
If breaking stuff ends up costing your customers any significant amount of money, that probably will not be survivable.
I reckon this stuff is particularly survivable. You can be like Linode, fuck up everything over and over again and still do just fine.
Unfortunately, most people don't share my beliefs so Linode and Equifax get away with it, and Home Depot's "it's not worth it to invest in security because there's no ROI" is completely true. Unless companies pay a heavy (I'd say existential) penalty for lack of security, nothing will change.
Source? Because every time I've read about Linode they had glowing user reviews, including on HN...
Or again in 2015, when Linode's poor security led to PagerDuty being hacked: https://www.securityweek.com/how-attackers-likely-bypassed-l...
You can find some more discussion in previous HN threads about it: https://news.ycombinator.com/item?id=11136399
I think your comment somewhat proves my point :) Nobody remembers this stuff!
Many of the worst - and most strategic - examples of moving fast and breaking things come from situations where the feature needs to be written or the product launched, or else the company goes out of business. You need to hit the milestone to close the next round of funding, or you're dead. You need to add a feature to sign a customer, or you're dead. You need to launch - at all - or you're dead.
It's rational for the milestone/feature/launch to be buggy if not doing it at all means there is no company. Remember that all startups start out "default dead", and getting to the point where they're "default alive" and killing the company is worse than the status quo requires a fair amount of effort and a lot of early decisions that may or may not be reversible.
Let's not also forget the consequences of "breaking stuff" when applied to industrial process control, avionics, car "autopilot" (cough) etc. "Survivable" is sometimes a literal term.
Granted, I'm a pretty inexperienced developer, but I see companies being founded by people who are even less competent and less knowledgeable than me.
And sure, finding a competent person is hard. But it's like the movie Seven Samurai. Can't afford to pay a samurai? Find a hungry one. There are plenty of developers who are young, but still good. And if you can't even do that, then take any and all money you have, and pool it to find one or two (or seven) developers who can find other competent people.
I was young, and I was good. (Now I am moderately young, and I'm very good.) But I am a better talker than I was a developer. Only recently have I really become as good a developer as I am a talker. How are those startup founders supposed to evaluate me?
A truly inexperienced team would have created a spaghetti monstrosity that was practically unsalvageable.
What gets delivered isn't just a set of features, it's a piece of software, which implements those features, but might have a whole bunch of baggage attached which can make it really hard (or easy) to change those features or add new features in the future, or to allow the software to adapt to changing external conditions.
I don't think this is specific to software. E.g., I'd imagine it'd be possible to build a building a lot faster if you didn't need to worry about it not collapsing in five years.
Though to be fair, the mere presence of technical debt on its own is not a problem; there is some amount of technical debt that's appropriate, and it's hard to know exactly where that is. You certainly can't get by just ignoring it completely.
Hiring junior developers because they are cheap and keeping your fingers crossed isn't going to work most of the time. Not shipping will sink any business if it goes on long enough.
I hope the article didn't paint the company's decisions in a negative light. They're a great company to work with, and I think they've done a fantastic job.
>I’m almost certain the cost of fixing the code exceeded the margin on revenue due to writing it in the first place.
Does business really not care that you have to build the same thing twice?
Especially in the paradigm of black boxes/microservices, if something comes in that doesn't line up with a known scenario, there should only be enough code to log the data and raise an alert/error. Having something bad happen, and handling it so it becomes 'less bad' and doesn't get flagged, escalated, and tested properly is going to cause issues.
100% agree and the idea behind my post was that writing code in such a way requires a lot of experience and an appropriate mindset.
They do operate that way; when an Apache or Postgres worker crashes, a new one is started.
Just this last week there was a major outage on an internal tool because the connection to the LDAP server got flaky, and the internal tool didn't know how to handle that -- instead of recovering gracefully, it freaked out until the whole server crashed.
That could've been avoided with just a small amount of recovery logic, but instead, the original author must've thought "If we can't keep our LDAP server connected, we have serious problems". But in the real world, things go sideways sometimes. In the real world, there are bugs in Microsoft's July 2018 patchset that ruin the TCP stack (cf. https://support.microsoft.com/en-us/help/4345421/windows-10-... among many others).
It's simple naivete to assume that you won't have problems as long as you keep your nose clean. There are externalities that impact even the most meticulous of us. If you want reliability, your systems must anticipate this.
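For transient failures like that flaky LDAP connection, even a small retry-with-backoff wrapper covers a lot of ground. A rough sketch in Python (the names and retry policy are invented, not from any particular library):

```python
import time

def with_retries(fn, attempts=5, base_delay=0.1):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure, don't hang
            # Back off 0.1s, 0.2s, 0.4s, ... before the next attempt.
            time.sleep(base_delay * (2 ** attempt))
```

Real deployments would also want jitter and a cap on the delay, but even this much keeps a blip from escalating into a crashed server.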
You don't have to have many crash issues for a process manager to make sense. If you just have a crash from time to time it's nice if the whole system recovers without somebody having to restart it. However, every crash should be taken very seriously and analyzed.
Apache goes further than that. When running applications like PHP, it has default settings to restart the worker after every few requests. Historically, PHP applications suffered from memory leaks.
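For reference, the knob in question is a real Apache directive (renamed from MaxRequestsPerChild in the 2.2 era); the value shown is only illustrative:

```apache
# Prefork MPM: recycle each worker process after this many
# connections, bounding the damage from slow memory leaks
# (0 means never recycle).
MaxConnectionsPerChild 1000
```

PHP-FPM has the same idea as its own `pm.max_requests` setting.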
Because I believe http://wiki.c2.com/?LetItCrash is primarily associated with Erlang/OTP.
This is nothing new, unfortunately.
Working for large corporations, I've often noticed this:
Either the devs on the project don't have the required experience and will deliver poor-quality code, creating immediate "legacy", or it'll be outsourced to another company with no governance plan to review the code quality, creating a "black box".
Because there is very little standardization and regulation in the software industry regarding competence, you can have two devs both claiming to be a "Software Developer", with one able to deliver a full project by himself and the other not capable of creating a simple web page in HTML.
Hence, I've noticed very few companies understand the importance of "tech", "IS governance", and investing in their staff in general. They often see "tech" as a constraint, like "We have to do a website for our customers, otherwise we'll lose market share", and not "We have to seize the opportunity to create a platform for our customers; it will drive growth massively".
This is sad, but this is pretty much the standard in corporations these days.
There do exist components suitable for most environments, like Postgres, but usually at the cost of 10x complexity over what'd be needed for the pure functionality. That engineering is worth it for things like databases that get used everywhere, but not for any custom component.
It's hard for me to take this article seriously without even a single example -- I can't tell if this is a strawman or not.
Is it about code that isn't thread-safe? That relies on undefined behavior? That doesn't scale? That has race conditions?
"In dev" and "in prod" can mean so many different things (is it about scale? or running on different OS's? or running unattended? or running on different hardware? or running compiled?) that, out of any particular context, they're essentially meaningless, and in any particular case, it would seem to be the particulars that matter...
Second result: https://stackoverflow.com/questions/1475297/phps-white-scree... describes how to put debugging statements in the code, which is good for local development but which shouldn't be kept for production.
A lot of times you’ll see a solution snippet with inadequate exception handling or resource cleanup (file handles, DB handles, etc.).
I’ve also seen a lot of regex examples that show a principle but don’t handle edge cases. See as canonical... email validation.
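The file-handle case is easy to illustrate: a snippet that works fine in a demo can leak the handle the moment an exception interrupts it. A contrived Python example of the difference (the function names are made up):

```python
# Copy-paste style: if read() raises, the handle is never closed
# explicitly and lingers until garbage collection gets to it.
def read_config_leaky(path):
    f = open(path)
    return f.read()

# Production style: the context manager closes the handle even
# when an exception is raised mid-read.
def read_config(path):
    with open(path) as f:
        return f.read()
```

Both return the same result on the happy path, which is exactly why the leaky version survives copy-pasting.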
I felt similarly: it just rang slightly false to me.
For one thing, a code review wouldn't be the first solution I'd reach for when diagnosing the kind of problems described (performance issues, memory leaks, random crashes). There's plenty of tooling available for most platforms that will generally get you to a solution more quickly and reliably (granted, the post mentions only spending half a day), and often highlight problems you might easily miss in a code review. I'm not saying code reviews aren't useful, but that measurement of real-life behaviour can be more useful in these circumstances.
Still, the point about not copy-pasting code directly from the internet is well made: been bitten enough times over the years that I've become extremely wary of it, even - and perhaps especially - when in a hurry.
From my experience of real production codebases, the difference is mostly in accumulated bug fixes, which may or may not be documented but often make the code less clear and in lots of logging and debug code in parts of the code base that are particularly bug prone.
If I was a cynic, I'd be tempted to conclude that we don't actually know as an industry how to write "production code", all we know how to do is write non production code and then try to patch it up until it kind of works in production.
It can be any of those. Not a blog post, but one example that immediately comes to mind as something you shouldn't use in production is the [Redis KEYS command](https://redis.io/commands/keys).
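The problem with KEYS is that it walks the whole keyspace in one blocking call, while SCAN hands back a cursor and a small batch per call, letting the server handle other clients in between. A toy model of the cursor idea in plain Python (this is not the real Redis client API, just an illustration of the pattern):

```python
def scan(store, cursor=0, count=10):
    """Toy SCAN: return a new cursor (0 when done) and the next
    small batch of keys, instead of the whole keyspace at once."""
    keys = sorted(store)  # real Redis iterates its hash table instead
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count if cursor + count < len(keys) else 0
    return next_cursor, batch

def all_keys(store):
    """Client loop: each call is short, so a real server could
    interleave other commands between batches."""
    cursor, result = 0, []
    while True:
        cursor, batch = scan(store, cursor)
        result.extend(batch)
        if cursor == 0:
            return result
```

With a million keys, the KEYS equivalent would hold the single-threaded server hostage for the whole walk; the batched loop never does.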
I know I’ve included a disclaimer like that in an internal presentation I had done in college and I’m pretty sure I did not come up with the idea myself.
The entire Supercharged playlist on Chrome's Youtube channel does this, for instance.
Crazy idea, what if we found one of these people who writes these blog posts, an especially well vetted and knowledgeable one. Maybe even one who writes the frameworks and software these people are using. cough https://pragprog.com/book/phoenix14/programming-phoenix-1-4 cough
What if we took this person, and paid them money to write a bunch of blog posts on one topic, in a way that read in a linear fashion? We could even maybe hire someone to help them, to proofread and make helpful suggestions.
We could give the knowledgeable developer a year or so to write the blog posts, then print out the blog posts, bind them together with glue, and SELL IT!
People could pay money for the well researched, well edited series of blog posts about a specific topic written by the expert. I think I'll name this blog-printed-out-onto-paper... a honkycat. After its inventor.
Sarcasm aside, I find people's insistence on googling and sifting through half-thought-out blog posts infuriating. I am a HUGE proponent of a well written book about a software product. Compare "Programming Elixir" to their free online book and documentation. It's night and day. And I can always look in the same place for reference, the book. Written by Dave Thomas.
A book has: ( At least ) one author, one editor, and a publisher. All of these people work hundreds of hours to produce a useful manuscript. A blog post is something a script kiddie shits out because they're trying to get their first programming job.
Yet people say the same thing: "A book?!? I can just find it on the internet" and do the same thing: "I'm sure I'm smart enough to just start writing this software without any prior experience in the language, or forethought" and make the same mistakes over and over.
Skip the blog. Read a book.
Blog posts and Stack Exchange questions are great and have their place but shouldn't be the foundation of one's knowledge when the work requires an in-depth understanding of that system, technology, language or framework.
Particularly in the trendy JS world, where there might not be a good book for the stack you are using, and even when there is, you ask questions online and get responses that are roughly "It hasn't worked like that for 6 months, why are you trying to do it this way?"
You give the example of the Phoenix book; I don't know if that's what this team was using. If it was not, the fact that there is a good book for that stack is not particularly relevant. If they were using Phoenix, then the fact that there is not a good book for most stacks would explain why they wouldn't consider looking for one for the stack they are using.
I am sure there are exceptions but they are hard to find.
Something breaks, so the manager naturally thinks he must add a control that will prevent that person, or anyone else, from ever being able to do it.
Very soon nobody can do anything without approvals from managers so far removed from the actual decision that they don't even know the names of the components subject to change. The change then must travel from the only people who know what the change is, but are absolutely barred from ever seeing the production environment, to people who, being so focused on operations, don't have time to actually understand what the change is.
Once you are being treated as untrustworthy, it will show in the rest of your work. Why code correctly from the start when you are not deemed trustworthy to do it correctly and there is a lengthy testing and approval process to make sure you are not going to bomb your company?
The correct step is to recognize that introducing more controls can very soon lead to low trust between employees, which is a self-fulfilling prophecy. When you are not allowed to do something because you are not deemed trustworthy enough, you soon have no experience doing it, and then this serves as proof that the initial decision was correct.
In development land, the attitude is "seniors are old, useless, and outdated" and any form of oversight at all "removes trust." E.g. A young dev suggests building their own database. After being told politely that this is an enormous task full of seriously hard problems and that it's not feasible to try, they yell about their "ideas being shot down" and that the work environment "lacks psychological safety."
What is important, though, is how the very young devs are molded the first time they come to an organization.
I have long experience working for various enterprises. One outlier company I worked for is Intel. Intel, or at least the organization I worked for, seems to aim to hire almost exclusively fresh out of university, just to be the first to "mold" the young engineer. Young engineers are immediately treated as adults trusted with their projects. Managers (not all, but I would say the majority) see themselves as shepherds rather than commanders. They will ensure projects are assigned to engineers and that engineers create value, and they will try to debug if there is any cause for concern, but they will not assume the typical "single point of contact, single point of decision and authority on everything" role seen in most corporations.
This is not true of all managers at Intel, but it was very striking to me when I joined the company with some 15 years of dev experience under my belt.
Junior engineers seem to adjust to this immediately and genuinely try to do their best and seek out wisdom, of which there is typically a lot in the close vicinity.
I really liked the high trust environment. We knew what we needed to do and we did it. It worked great.
But I had really good people on my team. Some of the other teams weren’t like that, or had a very different definition of when something was ‘good enough’, and the results were predictable.
I’ve also been in the ‘high control’ environment, and it’s hard to get anything done at more than a snail’s pace. It helps tamp down issues from people who need supervising (due to inexperience or quality) but it puts a very low cap on everyone. There is no way to prove yourself enough to change things much.
End result is everything moves slow, including necessary fixes that would speed up development.
Of the two I think the first one is far better, but you really have to be careful to get good people with the right attitude.
I do think there are people who prefer tight controls because it absolves them of some responsibility.
I personally think, with no experience yet to prove it, that it is a mix of both. Meaning, you need people capable enough to respond to the trust placed in them (meaning, really, normal people), but then the trust placed in them causes them to further improve.
I think most people react positively when given genuine trust, feedback, reward/punishment, encouragement.
It is also my opinion that most dev managers bring NEGATIVE value, and their teams would function better without them, except when there is an extreme disproportion in experience and the command-and-control manager can carry a team that is not really allowed to do any high-leverage tasks. Those are the most miserable teams I have observed.
This happens because most people get corrupted very quickly. Once they are promoted, they suddenly come to think they must be better than their team, and that their decisions must be better than everybody else's, so they should be taking all the decisions. The team feels like powerless peons, stops feeling responsible for the project, morale sinks, and new hires are immediately indoctrinated that this is how things work around here. This happens extremely quickly, in a matter of days even, and requires much, much more experience to fix. Managers who are capable of understanding and fixing this are typically promoted further, ensuring the development team stays miserable.
I was the second person in the team, and the one who hired me was very good. I’d learned discipline at my first job where I worked on (and later solely owned/ran) two systems that handled money. If something went wrong you could trivially figure out how much your screw up just cost the company. So you had to be very careful, which was a great lesson.
The people hired after me were generally quite good (to excellent), but even without much experience they came into an environment where production was very stable and we were careful to keep it that way.
If the people weren’t good enough they wouldn’t have been capable of keeping up that standard. Some didn’t cut it.
I really lucked into a lot of opportunity and fantastic lessons at my first jobs and had great bosses who did good mentoring. I know that’s hard/impossible to replicate on demand.
I agree about negative value. Having seen so much across my jobs, I can clearly see bad decisions being made that make the product less reliable, or horrible problems being ignored because customers aren't complaining, so let's build tiny feature X instead.
It seems like most people in the job just have no idea how to manage software development for the long term and can’t see past short term goals/ideas.
That’s probably common across all types of projects/industries, I just wonder if it’s even worse in our industry.
This enlarges the overlap between the inputs for which the copypasta produces the outputs the blog-post-googling developer expects and the inputs it actually sees at non-production runtime.
When you don't quickly see what's semantically wrong with the code (the problem mismatch) because you pasted it and it just worked (maybe with a few small tweaks), you quickly forget about it and black-box the snippet in your head. Then, when the input space gets real-world and things break, the copypasta never had much cognitive gravity (because you didn't think much about it), so it doesn't attract log-dump elements in your log-scanning brain the way code you had to think through yourself does.
I guess you could automate some static checks for this with a massive trawl of snippets from a few websites (at least popular SO posts in the same language?) and a search for verbatim matches in your codebase. It sounds massively impractical to do at scale, though. Does anyone know of tools that do this?
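For what it's worth, the verbatim-matching half of that idea is not hard to sketch. A minimal Python version, assuming you already have the trawled snippets as strings (the function names here are my own invention, not from any existing tool): hash overlapping k-line windows of each snippet and check how many of them appear in a source file.

```python
import hashlib

def normalize(code: str) -> list[str]:
    # Strip indentation and blank lines so trivial reformatting doesn't hide a match.
    return [line.strip() for line in code.splitlines() if line.strip()]

def shingles(lines: list[str], k: int = 3) -> set[str]:
    # Hash every k-line window; overlapping windows catch partial pastes too.
    return {
        hashlib.sha1("\n".join(lines[i:i + k]).encode()).hexdigest()
        for i in range(len(lines) - k + 1)
    }

def paste_overlap(snippet: str, source: str, k: int = 3) -> float:
    # Fraction of the snippet's k-line windows found verbatim in the source file.
    snip = shingles(normalize(snippet), k)
    if not snip:
        return 0.0
    return len(snip & shingles(normalize(source), k)) / len(snip)
```

Anything scoring near 1.0 is a likely paste worth flagging for review. The impractical part, as noted, is the trawl itself, not the matching.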
I should say my point is: "moving fast" and "testing things" are goals that aren't contradictory. You can and should do both.
The "do not use in production" is the programming equivalent of taking a strong stance on an issue but then appending "but hey, I don't know" or "don't take my word for it".
It completely defeats the purpose of the entire statement and, in the case of the production comment, of the code samples.
Let's ask a different question: what am I supposed to use in production? I had an email discussion with a blogger who wrote about a very narrow problem I needed to solve. Their code samples worked, but poorly. The entire thing was a dumpster fire, but it solved the problem and was the only solution I could find after days of Googling and Stack Overflowing.
I did not care about their "do not use in production" comment. I needed their solution in my production system. What was I supposed to do?
Sorry for ranting, but I believe it is irresponsible to put code out there that shouldn't be used, except when that is the point.
Fast forward: I had to write the whole thing twice. The second time I wasn't completely sure what it should look like either, but I put a much greater emphasis on interfaces and modularity. That way I could guarantee that parts of the program were actually doing what they were supposed to do, which made the bug hunt so much more effective.
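The "interfaces and modularity" point can be made concrete. A tiny Python sketch (the names RateSource, convert, and FixedRates are made up for illustration, not from the original code): if the business logic depends on a narrow interface rather than a concrete dependency, each part can be verified in isolation with a test double, which is exactly what makes the bug hunt tractable.

```python
from typing import Protocol

class RateSource(Protocol):
    # Narrow interface: the converter only needs this one method.
    def rate(self, currency: str) -> float: ...

def convert(amount: float, currency: str, rates: RateSource) -> float:
    # Business logic depends on the interface, not on a concrete HTTP client
    # or database, so it can be tested without either.
    return amount * rates.rate(currency)

class FixedRates:
    # Test double: a hard-coded table standing in for the real rate source.
    def __init__(self, table: dict[str, float]):
        self.table = table

    def rate(self, currency: str) -> float:
        return self.table[currency]
```

With this shape, `convert(10.0, "EUR", FixedRates({"EUR": 2.0}))` can be asserted against a known answer; when something breaks in production, you know the conversion logic itself is sound and can look at the boundary instead.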
So sometimes the longer way is actually the faster one and experience matters when it comes to recognizing such situations.
"I have an overwhelming empathy for developers in this position. They have more information than they will ever need, but it’s completely disorganized. It’s like trying to build a ten piece puzzle, except you have to find the ten pieces within a pile of 10,000,000,000 pieces, all of which are square, and you don’t know what it’s supposed to look like at the end. Good luck."
Even with a team of highly experienced developers, things quickly turn into a mess if it's not perfectly clear what to work on. I don't mean things like "button should go 1px north to align with header". That's trivial. In fact, I would go so far as to say: most production-ready programming is trivial. Trivial in the sense that someone, somewhere, has solved it in a way that's correct. But when it is not /crystal/ clear what the requirements are, what the business case is, or what the edge cases look like: oh my. I've been in the incredibly unlucky position of only ever being in settings like this: start-ups, academia, R&D positions. And, as a result, even with the best intentions and the best teams you'll still produce production-quality garbage, mostly because incentives were misaligned and there was no coherent vision. Agile story-driven development only works when there is leadership with the guts to say "no", and when there is something well defined to work on. Sometimes I yearn for some CRUD webstore ... at least then things are knowable. Anyway, just my rant. I'll probably go the meme-way of becoming a plumber/welder/etc at some point ;-)
This is typically missing from startup culture.
Anecdotally, mentorship improves everyone involved, but I don't know if there's data out there showing whether engineering mentorship within a startup improves its chances of success (or is otherwise beneficial).
To some extent, groups like YC participants and alumni end up being mentors to each other, but I doubt that extends beyond the founders. Might the OP have an opinion on something like a rent-a-mentor program/startup?
Not that certain benefits, like productivity, would be enough to motivate a change, as evidenced by the open-plan office situation.
[Cough] - Hire a few more (expensive) experienced developers? It might not be a silver bullet, but it goes a long way towards avoiding the situation described in the blog post. Also, get help from a trusted technical friend to hire the initial one.
FWIW I see something similar with almost every client I get in my own neck of the woods. Growth is flat. Turns out nobody is worrying about whether they're building a product anyone wants - they're doing it on paper but nobody is actually talking to end-users. Marketing is under-delivering. Turns out it's all juniors worrying about bringing traffic without wondering if it's the right type of traffic; or they're not measuring anything, plain and simple. Sales aren't selling enough. Turns out it's all juniors who aren't selective or systematic enough; or sales are doubling as pseudo-project managers and/or pseudo-support engineers because nobody is managing operations. Projects take forever to deliver. Turns out the project "manager" holds nobody accountable for delivering any specific task, let alone by a certain deadline. The list could go on, and on, and on.
(Admittedly this seems like it'd be very hard to verify, since most fields either legally require certification or don't have it, and there's probably tons of other variables that would get in the way too.)
This is hard for a startup in their early stage. Not sure how mature the company was, but let's assume it was young enough to have this very problem.
Direct team hiring is a double-edged sword. It works well when your team is very competent and good at weeding out false positives; basically, when you have a bunch of senior people with maybe a couple of junior and mid-level teammates. A normal distribution is actually perfect for a team. Having direct contact with your future teammate lets you get to know the person ahead of time, which matters especially for a very small team. Furthermore, a direct hire means you are hiring more than just a generalist: your SWE may not know what to ask a network engineer, even from a coding point of view. The downside is what happened in the story.
The way I'd like to go about it is to make sure we have two interviewers per round; even two from the same team would be far better than one person doing the vetting.
It seems to me that that sentence has no significance at all and can be removed safely ;)
That said, a professional can still cut corners as and when appropriate (and clearly label and explain such things) but that’s not what’s happening here, is it?
The advice I would give in this situation is simple: stop and think about what you’re doing. If you can’t do that, you’re not a professional, you certainly shouldn’t call yourself an engineer and without that skill you stand no chance of improving.
So, here is my pitch: if you don't have technical expertise, and you realize that you need someone to help understand just how the heck to get started with technology, what to build, and determine if you even need engineers, then hire me! I'll help!