From my perspective - speed is about 2 things: more, smaller iterations, and confirming you're working toward a desired outcome.
Over my career, I've been surprised multiple times when I presented an early draft to a stakeholder and they said, "oh, that's great, I've got what I need now"... and this was maybe 1/3 of the effort I was planning for.
The way I see it, if the problem is important - any early solution should provide some relief. If some initial relief isn't wanted, the problem is probably not important.
Along these lines, in my work with stuck startup founders, I often ask, "if we were going to demo a prototype to a customer (in 2 days | at the end of the week), what would it be?"
I cannot possibly emphasize this more strongly. It's so important to deliver complete and working products quickly and regularly, that it's worth the cost of the occasional lost customer who leaves because all of their needs aren't met immediately. A) they're rarer than people seem to think and, B) the people you gain due to this incremental approach more than cover the losses.
That said, some devs interpret "quick" to mean "sloppy", which is not correct. When you can cut releases hourly the cost of some kinds of bugs does go down, but fewer bugs is better, and as always some categories of bugs are, "never, ever" (e.g. data corruption/leak, security bugs, outage inducing bugs).
My highly specialised sarcasm sensors (honed in the UK) suggest that this is not in fact your current deliberate strategy; that said, I have worked for so many groups and companies where that is exactly what happens, even if they think they're planning for something else to happen, and it's obvious to pretty much everyone involved that that is what is happening and will keep happening.
I went straight from hobby programming to freelancing, so I've been "rediscovering" best practices the hard way.
I did actually intend to do things properly, but the client asked "can it be done faster?" and I thought "sure, I'll just ship fast and then rewrite it later"... ha! The project became so unmaintainable that development ground to a halt. (I am doing the rewrite now...)
This is the side of software where soft skills are critical. Navigating when and how to say no can be extremely difficult, but it’s arguably crucial to operating effectively and sustainably.
Early in my career, everything was a yes, and it cost me a lot of sanity. It rarely cost me clients, because I was so diligent about keeping things on the rails, but that diligence seemed to come with a corresponding cost to my well-being.
It took me a long time to realize that saying no is kind of like saying yes; you’re saying yes to getting the project done on a reasonable timeline and with the budget the client has, and most importantly, with your happiness intact. Saying yes at the wrong time can be saying no to finishing the project at all. It’s the right, and in a sense “nice” thing to do for everyone involved.
If you’re a chronic people pleaser, this kind of work can be extremely taxing (speaking from experience).
More to the point, it's much better than delivering incomplete yet working products quickly and regularly, complete yet broken products quickly and regularly, complete and working products slowly and regularly, or complete and working products quickly and irregularly.
> That said, some devs interpret "quick" to mean "sloppy", which is not correct.
Totally agree, but I've seen some devs completely melt when we need to go fast/iterate. They really do produce sloppy work when they go too quickly. Many "big thinker" types are like this. Not defending it, just an observation.
With a good sales team, you can land customers who don’t yet see all the features they need, because you’re routinely delivering a solid if simple product that gets major new functionality a few times a year, and "if you just hold on, we will get your feature to the top of the list."
Those paying customers are where your MRR comes from. Another case of perfect being the enemy of good.
As a customer of startups, I really abhor this attitude. Especially when it lingers for years after the initial product launch. Which is shockingly common.
For example, partially complete features are often completely abandoned for years and years because they're "good enough." Unfortunately, "good enough" often means unhappy customers actively looking for new competitors.
One company in particular I absolutely despise because of this. They are pretty much only focused on releasing new features. The old features, while functional, are in desperate need of improvement (in many very obvious ways).
This particular company had a good IPO. Their stock then dropped 90%. I wonder why.
> They are pretty much only focused on releasing new features.
My guess is that behavior is NOT motivated by the theory "deliver early drafts with more smaller iterations", but is instead motivated by the theory "the people who decide to buy our products do it based on feature checklists, rather than quality of anything."
These are actually not even aligned theories. The "smaller iterations with early drafts" approach is best done with fewer features included, not more features that are incomplete.
Many markets incentivize companies to deliver crappy software, and it is frustrating, but not, I think, the fault of an agile/iterative/deliver-early-and-often approach. If you can make the most money by delivering a giant list of crappily implemented, poorly thought-out features that don't fit well together, you'll tend to do that, regardless of your theory of project management.
Having been on the other side of this: oftentimes our users are unhappy with the unfinished features, but our customers are delighted.
That is to say, the CxO or director we've sold to has everything on their checklist and is getting "good enough" results out of their organization. Our job is to understand which of the unfinished features will cause grumbling and which will motivate users to convince an executive to switch to a competitor. It's very unusual for the former to ever be worth prioritizing.
Yeah it is the sad but true state of affairs. Bad for users is OK as long as it makes money. But without pleasing the people you sell to how do you compete?
I think you misunderstood.
His point is that the user who uses the product can be different from the one who decides on and pays for it (often the case in B2B).
So you actually do please the people you sell to, just not this specific type of user.
> Unfortunately, "good enough" often means unhappy customers actively looking for new competitors.
Here's the thing though - it's your intuition that customers are unhappy, but the startup in question has actual data. It's entirely possible that a small group of people similar to you are unhappy but for a majority of the userbase the feature has gone as far as it needs to. Resources are scarce and priorities need to be changed, which sometimes means making hard decisions.
We don't know that, actually; you assume that the company has the data, and speculate that they interpret it correctly. But in practice, "customer loves using the app" and "customer puts up with the app but will replace it as soon as they find anything else" look the same right up until they don't, just as "high interaction" presents the same data as "app is disorganized and inefficient".
Usually the company is quite aware the feature is half finished, but the data shows that hardly anyone uses it, and hardly anyone is asking for it to be improved. So that time would be better spent improving things that there is lots of usage on.
There are multiple ways to interpret the same data. Your interpretation is a valid one. There are other valid ones all from the same data.
If your company sees the simple lack of feedback as a valid signal, then a feature with lots of usage but no feedback means you should not improve it but simply leave it be. It's used, so it seems popular and nobody complained about it, so it must be good enough. Build something else.
Of course your company may also view a simple lack of feedback as not enough signal. If a new feature that is half finished sees hardly any usage this can be taken as a signal that the feature needs to be improved. It's not useful enough in its half finished state to attract usage. Or your users may simply not be able to find the feature because in its half finished state it's too hidden and thus you have neither lots of usage nor feedback on it.
I find it so weird to look at the stock market to approximate this kind of metric. Stock prices have more to do with things like the ebb and flow of the risk profile of institutional investors, than with customer satisfaction with specific features.
Institutional investors care a great deal about metrics like customer retention, if indirectly: retention influences customer lifetime value, which then impacts profitability, all of which depends on customer satisfaction.
Groupon, for example, had decent consumer metrics, except they couldn’t keep their business partners happy, which resulted in their ultimate collapse. Finding weaknesses in companies’ business models like this can both be extraordinarily profitable for institutional investors directly and create an aura of competence to attract more investors.
Theoretically. But then sometimes they're just switching from equities to bonds or whatever. My point is just that there is no simple function from customer satisfaction to stock price.
There's a joke on econ twitter whenever there is a big move in some individual stock, that the explanation is that the move clearly happened because the expected value of future cash flows changed. The joke is that under the efficient market hypothesis that's always the explanation, but in reality we all know a bunch of other stuff is going on all the time.
I agree with your point in general, a stock doubling or getting cut in half doesn’t necessarily have any obvious explanations. However, stocks aren’t a pure random walk, they are somewhat bound to the underlying business even if you don’t have enough information right now to understand what’s going on.
So, the kind of extreme stock shifts like dropping to 10% of a previous valuation are much more likely to have an understandable cause even if the trigger is random.
I'm in a cynical mood, but I'll begrudgingly accept "somewhat bound to the underlying business" :)
I guess at the root of my skepticism is that so many "growth" stocks never pay any dividends, so it's unclear to me what the connection between the stock and the company's cashflow is even supposed to be. If a company never returns any of its profits to its investors, isn't it just kind of a gentleman's agreement to pretend that the traditional way of valuing the company's equity still applies? It really does seem to me that the market for many stocks has detached from the company's business, and is instead driven almost entirely by competing memes.
Companies are ultimately controlled by their shareholders, no gentleman’s agreement needed. If that price falls far enough corporate raiders are happy to chop up the company for a quick buck.
Not really though. They're controlled by their executives, who can theoretically be removed by their boards, but often with significant difficulty. And the connection between the board and the shareholders at large is also more tenuous than I once thought.
It's true that a dropping stock price can lead to a takeover and new management, but again, that could happen to a well-managed, financially strong business that has just lost the narrative game.
That's the only point I'm trying to make, that in theory stock prices are driven by financials, but in practice it's a mix between financials and narrative, and I think narrative dominates more than I was taught back in economics classes.
Narrative dominates day to day, but it’s also very easy to overstate its importance. In a bounded random walk, the bounds and the randomness don’t have consistent impact: in the middle of the range, randomness completely dominates what comes next, while at the edge, the bounds completely dominate the randomness. That’s IMO a better model of these things.
Take, say, money: second by second, the value of USD is determined by people’s perception. However, people in aggregate are required by law to pay a fraction of US GDP in taxes based on the value of stuff besides money, like millions of cars and cans of soup. That relationship means that, without printing new money, the value of all USD in circulation must be enough to pay taxes with, or you get the monetary equivalent of a short squeeze. That represents a bound, unlike, say, cryptocurrency, which can actually fall to zero even as the economy continues normally.
I bring up the tax angle specifically because it’s normally irrelevant but changes the behavior at extremes. People tend to think of economies as fragile things because even minor changes have large implications, yet Ukraine’s economy continued even in the middle of an invasion and massive migration etc. Stocks seem divorced from reality up until the point where fundamentals matter.
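The bounded-random-walk model described above can be made concrete with a tiny simulation (a toy sketch only; the range, step size, and step count here are arbitrary assumptions, not anything from the thread):

```python
import random

def bounded_walk(start, lower, upper, steps, seed=42):
    """Random walk clipped to [lower, upper].

    Mid-range, the +/-1 noise alone decides the next value;
    at the edges, the clamp (the 'bound') overrides the randomness.
    """
    rng = random.Random(seed)
    x = start
    path = [x]
    for _ in range(steps):
        # take a random step, then let the bounds have the final say
        x = max(lower, min(upper, x + rng.choice([-1, 1])))
        path.append(x)
    return path

path = bounded_walk(start=50, lower=0, upper=100, steps=2000)
# the walk never escapes [0, 100], however long it runs
assert all(0 <= x <= 100 for x in path)
```

The point of the model: when the walk sits far from 0 or 100, the next value is pure noise; only at the edges does the clamp dominate, which mirrors "narrative dominates day to day, fundamentals dominate at extremes."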
We'd need a lot more information about this one particular case but I'd bet almost anything that a 90% drop in share price was caused by something a lot deeper than customers being unhappy at some unfinished features. I intentionally didn't address that half of his comment for that reason.
That’s reasonable, though I am not suggesting a pump and dump would only show up in the code base, just that it would also look like what they described.
Having the data isn't enough; someone also needs to look at the data and draw conclusions. If it's an environment focused on pumping up metrics for the newest thing then this may not be happening.
It's interesting that people almost never disagree with this when they're the customers. We love seeing progress, even if it's a small portion of what we need. When frequent updates show that progress is slow, we're glad we're aware, and we don't blame the updates themselves for slowing the project down. In the rare cases where we feel like the process demands too much of our attention, we don't stress about it; we just say, "Looks great. Let's talk again in two weeks or when you have completed feature X."
Those hypothetical drawbacks only loom large in our minds when we're the ones doing the work.
What about those invisible people who gave it a try and decided there were just too many gaps and bugs? They are not customers anymore and probably never will be.
When the only way to get feedback on something is to go live to random customers, yeah, that's a difficult decision. I was thinking of situations where you are getting feedback on pre-launch software from a customer or from your own product organization, or you have an established product and some of your customers opt-in to beta functionality because they want the opportunity to influence your direction. I agree there are situations where the worse mistake is giving people a bad first impression.
I once worked on an in-house team at a company where we were tasked with developing a prototype in a scripting language with a well-known web framework.
Meanwhile, another team at a consulting company was tasked with developing the final product in Java, all very enterprise.
The outside company ended up with a failed project, missing deadlines and going over budget multiple times. The prototype became the product.
Let me guess. Someone used JSF and hibernate in the same project. That is an easy way to blow up development time by a factor of 5x to 10x for literally no benefit.
If they find the appeal to authority useful, I'd grab them a copy of _Software Engineering at Google_ and leave them a bookmark in "Help Me Hide My Code".
This is interesting and I agree with it. I think the problem is that it's not necessarily irrational insecurity. In computer science especially, people can be very pedantic and quick to call out an individual, or shut down the work of someone who is learning but is not yet at the same level of skill. If nothing is hidden and all code is made public, there is a big window for such criticism.
The thing is that, often, a developer already knows how to make their code better, solve their problem or get rid of some unnecessary abstraction. But they just don't have the time to do that. They don't have time for perfection. So when their work then gets criticized pedantically, even though they're already aware of all those things, it must be very frustrating.
I certainly know what it's like to have the urge to rewrite something and I may on paper already know exactly what needs to be changed. Having something you know is imperfect out there feels stressful.
> The way I see it, if the problem is important - any early solution should provide some relief. If some initial relief isn't wanted, the problem is probably not important.
I experienced this today, just now. The whole org’s release for fall was hinging on what I proposed… the relief of everyone once the presentation was over and it all clicked, when they saw the light at the end of the tunnel and saw that the path I had uncovered to get there is reasonable and tenable, is priceless.
I understand now that I live for this shit. 10/10 would do it again and again and again.
It's also dramatically easier to have a discussion around what does or doesn't work with a concrete example. "This but with X" or "This but without Y".
I've found this also helps just internally - start with a prototype and you have engineering discussions that are more practical than theoretical.
> ..and this was maybe 1/3 of the effort I was planning for.
Quick anecdote. When just starting out, I was building a tool to automatically find the composition of essential oils used, given a GC/MS output, and was told it needed to be "fast". I spent a long time optimising it, working on parallelising it and more. I spoke to them after getting it down to maybe 15 seconds, or a couple of minutes, something like that, and was getting ready to explain why I couldn't make it fast.
"So you said you need this to be fast to be worthwhile, but"
"[interjects] yeah, being fast is important; if it takes more than a day or two it'll be a lot harder to use"
> speed is about 2 things: more, smaller iterations, and confirming you're working toward a desired outcome.
Well put. Too many managers and execs use the word "speed" but never define it for others to make actionable.
Scenarios in real life:
Coder: "what's important while we're working?"
Exec: "Speed." _feeling_cool_
Coder: "I authored features a,b,c,e,..x,y,z ~ already working fast, gimme a raise!"
Exec: _realizes none of those features were desired objectives_ ...
Or
Senior software engineer: It'll take 9 months, 5 devs, and $200k to run...
Everyone else: ... not even sure business will be kicking in 9 months ...
I think even talking about the draft gives you an idea about the severity of the problem. My experience has been the same. If the problem is severe enough a crappy draft will do. Conversely if someone is asking for multiple refinements even to arrive at basic decisions, I begin to write off the problem as not critical.
I really like the framing of "providing relief". I think it also makes sense to allow the definition of who experiences the relief to be flexible (e.g in some cases the audience that experiences relief might be developers, security engineering, sre, etc. not just customers...)
I'd rather name it an MMP - a minimal marketable product. It was good enough that it worked, and the buyer (in this case the stakeholder) was happy enough to "buy" it.
There's IMHO a slight difference between MMP and MVP in the perspective one takes.
MVP can also be minimum viable proposition which is a similar idea. An MVP of this nature can be a chat in the pub and an email order of whatever was promised!
When it comes to business (corporate), you hear "do it right". But in reality that means "do it fast even if wrong". If mistakes are made, what happens depends upon management.
In most cases, I have seen people who do a project 100% correctly, sometimes slightly slower, get ignored.
Other people, who get it done fast even with many glaring mistakes, get rewarded. Usually the people who worked on it end up working 24/7 to get it fixed. That is looked upon by most managers as very good work.
If your projects go live without issues, it is usually forgotten; people who fix their own issues by putting in fixes after go-live, working 24 hours, get noticed and rewarded.
Graphic designers sometimes ship their draft deliverables with 1 or 2 obvious bugs, and allow the customer to recommend these 'changes' on their own for final delivery. And if they don't, they're just fixed for final delivery anyway.
I read about it before I had this done to me for the first time, and I played along with it. It's a neat trick because it's plausibly deniable for everyone involved, and the outcome is good while minimizing the bullshit work in favor of a sacrificial final word.
I've done this in hardware design, presented a design for review with a problem that had already been discovered and had a fix in progress. It was a red herring that gave the customer something to latch onto and feel like they were contributing to the success of the project. Somewhat dishonest, yes, but very effective in getting customers to work with you rather than be adversarial.
There is a concept of a sacrificial feature to prevent customers from becoming fixated on other aspects of the design that would actually create problems if changed. I prefer to avoid that sort of dark pattern though because it erodes trust.
Yes, but more importantly, it solves the problem of ritualistic feedback, which is very common. People think it’s their job to produce suggestions no matter if they have any or not. This gives people license to provide harmless feedback while adhering to the religious framework.
Code reviews are a great example of this, of course. You’re supposed to comment something, otherwise how will people know that you are smart and did your job? Famously, even the most straightforward code changes always get some comment to change something stylistic or menial.
This was known as adding a 'duck'. It is unfortunately very helpful when your client stakeholder doesn't value your expertise as much as they should. It works for web design too, but you must be careful that your duck doesn't become a feature!
This is a weird thing to tie to graphic designers. Just because you noticed a graphic designer doing that doesn't mean it's a thing that's more commonly practiced among graphic designers than any other trade.
The reason is most people can tell red from green and big from small, and most people use graphic designs daily, so will have an opinion on whether a design element should be bigger or redder.
Other professionals with a similar plight could be copywriters, and devs when selling a PR.
I am a professional graphic designer. Presenting with obvious defects to fix before shipping is not common practice. In any serious design engagement, the client would probably just be confused about why something changed after approval and it would lead to an unnecessary meeting. On top of that, I'd probably end up having to revert to the shittier version they agreed on and waste time re-packaging the deliverable. Purposefully using up contractually allocated revisions with deliberate flaws, while very difficult to prove, is essentially fraud. If the graphic designers you patronize do that, get new ones.
What do you do when it doesn't work? This seems like a really bad outcome - they approved the design with the glaring flaw, which makes it hard for you to remove it, and they potentially also requested some other 'improvement' which makes the design worse.
The fact of the matter is, unless your company is very mature, getting 3 big features to prod in a month, even if that means bringing down some system for a while, is generally much more valuable than getting a single feature out with zero downtime.
As the company and number of devs grow, the probability of a dev doing something stupid increases substantially, so those early mistakes can actually be quite valuable in showing where your infrastructure has gaps.
Let the imperfect code happen early; it often leads to free chaos testing and more stable infrastructure.
How can you distinguish between "project is not important/forgotten and due to that, no issues are discovered" and "there are no issues, so it is forgotten"?
I encountered this at a previous job. I told them during the design phase that the implementation was failing basic computer science knowledge and would cost the company a lot in excess computing costs (due to redundant work).
A quarter after launch it was a crisis, because $5k a month customers were costing $50k a month in compute. The original designer put in some patches that got it down to $25k, and then $3k, and was a "hero". By my estimates, the actual work being done should have cost < $500, but we didn't have time to implement that, due to quarterly insistence and folks all nodding their heads at each other to get promoted.
In the end he got promoted and I got a negative review for not being a "team player" (i.e., being disagreeable about a design I knew had undesirable properties).
> told them it was failing basic computer science knowledge
> I got a negative review for not being a "team player"
Effective communication is good for your career. People don't want to work with someone who slings distracting insults.
You can say "This will cost $50K, but we can't afford more than $5K, and I think we can do it $500", or "a 10x customer will cost 100x to serve, but a different design can bring that down to 10x".
I know the scenario. I've been there. Sometimes people just want to move on. I've been in meetings where people got annoyed at me asking the "tough" questions and pressing them on their design for an hour. At the end you have to realize technical merit is not the final arbiter, it's mindshare, and that's usually driven by the "chosen" group or favorites.
Unpopular opinion: I find with long projects that development speed later on depends a lot on whether we made good decisions early on. Stopping to figure out an architectural thing can make things easily changeable later. Keeps your speed up, even a couple of years into a project.
"Do it fast" feels really, really good. It's motivating, and you feel like you're pushing forward. But it can really slow you down in the long run. Eventually, everything's so coupled that it takes weeks or months to add new features or fix bugs because you can't touch anything without breaking five other things.
As a mentor-of-sorts once told me, you end up feeling like the street sweeper after the parade.
Of course, this is only rational if you know what you're building ahead of time, and that you'll need the things you're preparing to build
Speed is underrated. I've worked on a lot of side projects and for a long time I couldn't get them done. I spent too long "perfecting" baseline things like folder structure (really) and overall system design. This made things slow-going and I tended to abandon them.
Over time, I started just hacking things together and shipping them, worrying less about perfecting those initial things. (I used YAGNI a lot in my decision-making.) What I learned is that there were so many more things I had to do and had to learn to do to ship. I could only get to those tasks and learn those skills by "skipping" through the earlier tasks. Working quickly helped.
I started thinking of projects as this vertical stack of work that you move up from bottom to top. If you could look holistically at absolutely everything you needed to do to ship a project, you could mark some as having a larger impact on the success of the project than others. Those are things that require more time and energy.
When you move slowly, you have a very small scope of the overall project, just stuff at the bottom, and predictions about the future. You may not really know what's ahead. If you go slowly and try to do everything perfectly down there, you spend a lot of energy on a small subset of tasks which may actually have smaller impact than something in the middle or towards the end.
Speed allows you to move through those early tasks toward a more holistic view of the entire system, so that you can determine which are high impact and which are not. You might need to double back on an earlier task if you misjudged something as low-impact and ended up spending less time than you should have on it, but at least you're not pouring energy into low-impact tasks on average.
It's not quite the same thing, but building a prototype is a good example of learning the end to end of a system without worrying too much about quality. It gives you an initial idea of what's possible and you use that to get a better picture of what's high and low impact in a project.
This is a valid approach, but not because of its "speed".
When you start a project, you have a great many things you don't know and need to learn. This is the case in 99% of projects out there. By definition, you cannot at the start of it optimize for things you don't know anything about, because... how, exactly?
If you work in a particular stack a lot, prepare a starter project template. Put some time into it, set it up so that it is optimized for letting you learn the things you don't know and helping you discover and make architectural decisions as they become necessary to make. And then just go wild.
This is the bottom-up programming approach advocated by pg in his book, IIRC. Have a setup where you can iterate quickly in the small, and make it easy to abstract and refactor as you go. You don't need a Lisp for that, just some tools with proper configs. It's also what R&D guys tend to do naturally with their Jupyter Notebooks; the difference, other than the kinds of problems faced, is that devs don't have the luxury of leaving the code in the form of a REPL session transcript.
Not OP. But applies to long term support, remember the loooong overdesigned redesign? Yeah, don't do that and rather make small quick improvements. "OH you found a typo in our config format? Update it and it'll ship in the next patch". Instead of "naw it's fine, leave it we're rewriting the configuration subsystem in that fancy new yaml/json format".
Just a note : I've noticed that when working with management, they often have issues "grokking" the issues they are asking for you to solve.
By offering an imperfect, quick solution, you let them understand the issue and readjust their needs accordingly; often, the quick imperfect solution is either good enough or enough work to show it's a bad idea and move on.
When starting a project I have 2 steps: vomiting and touching up. You puke out a solution (a minimum viable product) without a thought about optimisation. When that exists, you change hats and optimise. Works wonders.
> You puke out a solution (a minimum viable product) without a thought about optimisation.
And that's where it ends, because unless your employer vastly overhired, there's already a dozen (or more...) tasks in the backlog just waiting for you to vomit them out as well.
After a year or two of working like this, you have a mountain of technical debt and no time to work on any of it. The entire system slowly disintegrates under its own weight.
There needs to be a balance between fast and correct, and it is only up to the developer to resist the pressure from management to work fast (at the expense of correctness). I would go as far as to say that it is one of your main non-technical duties as a developer to resist and manage this pressure as best as you can.
Maybe you're a genius who can analyze problems and implement solutions both fast AND correctly, but that is very rare.
The truth is that solutions to complex problems can have complex, messy implementations. They can also have simple, beautiful implementations, or anything in between. But what they almost invariably require is complex investigations... which take time to conduct properly.
Most (not all!) "fast" programmers I've worked with over the last 15 years produced output of questionable quality, often stemming from the "what you don't know you don't know" part of the knowledge pie chart, instead of some deeply thought out cost/benefit calculation.
IMO if the code sticks around long enough to disintegrate under its own weight, then that code was a rousing success!
"Resisting management" is almost always a horrible idea. Code is only useful when it provides a solution at the right time for the right price. If not, it is no longer useful and the world moves on.
Overall quality is nothing more than a reflection of business maturity. Like a mighty old tree with a great big trunk. It has had time to grow strong and majestic.
Most businesses don't live longer than 5 years. Most products and solutions are temporary fixes or ideas that don't last longer than a few months. Investing huge amounts of time and resources into those things is a recipe for disaster.
I think this depends strongly on your company's business area. If you do B2B with companies really relying on your product, with long-running contracts, the customer wants what he paid for, including support for the heap of defects. Worse if you have a reputation to uphold and also want to keep your B2B customers. You can quickly become deadlocked maintaining your old quick fixes, unable to move anywhere.
I think it is hard to make general rules without specific market context.
Agreed. It's worth noting in the context you describe, you have valuable customers who have paid for long-running contracts. This warrants the ongoing time and investment in quality.
Most of the time, this is not the case.
My experience is of course mainly at startups, so I am most familiar with the shorter time-horizon, lower-cost initiatives.
> After a year or two of working like this, you have a mountain of technical debt
What's the reason to stay longer than two years? From what I see, it's 10x harder to get a raise inside an organisation than by changing jobs.
Probably the reason is that companies have adapted to employees getting a raise by leaving. So a raise inside the org is more a calibration for those who were badly underpaid on entry than real compensation for experience.
As much as we want to pretend that slowness correlates with attention to detail and quality of code, this is often only rationalization. Some people are perfectionists (in a bad way) and overcomplicate things that don't really matter to the end result, others have analysis paralysis and take too long to make decisions, making the code suffer, and others are just not that great and generate the same result as a faster developer at best. There are surely good but slow developers, but it's not as common as "internet common sense" paints it.
And disavowing their role in cause and effect. They play chicken with other developers, who end up sacrificing their own productivity to fix the issues. Bad managers see this productivity gap and reward the hack instead of punishing him.
But not necessarily about the solution they envision. Case in point: I worked with one customer who wanted to add a search box at the top of a list to search through ~50 elements. I showed him the standard browser search with ^F and he was delighted. Saved us a day of work.
They know their problem, you should figure out a solution.
I'm not a programmer. Could you estimate how much more complex it is / how much longer it takes to implement quicksort?
Otherwise, my point is that this is often good because, when beginning projects, management can lack perspective on how the project will actually get done. Having a "draft" version that is planned to be scrapped lets you decide where optimisation is necessary and see future bottlenecks more clearly, so you can set a better foundation for the actual project.
And if it's possible to make that version say 20x or 100x faster than a V1 of the final project, I think the insight gained is worth it.
Edit0: Sometimes you even learn that it's a bad idea at all - better to learn that quickly and start over.
Edit1: After a cursory read, it seems quicksort is about as simple as it gets in execution, being very few lines of code. But I'm also reading that in production code, quicksort is just part of a custom sorting algorithm that should be written for the specific database it's being applied to. If that's the case, using plain quicksort is the vomiting. I get what you mean by "without thought about optimization": some optimization is expected, but heavily weighted towards rapid execution and low complexity.
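For the curious non-programmer upthread: a textbook quicksort really is only a few lines. A minimal Python sketch (not production code, which would also worry about pivot choice, recursion depth, in-place partitioning, etc.):

```python
def quicksort(xs):
    # Base case: lists of 0 or 1 elements are already sorted.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    # Partition around the pivot, then sort each side recursively.
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```

This naive version copies lists at every step; the production variants being discussed sort in place and tune the details for the data at hand, which is where the real effort goes.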
Sorry, I'll try to elaborate. What you described can work _sometimes_. If a particular feature is isolated from everything else, and if the optimization step is a couple of tweaks that can be done later on, sure.
However, the dangers of applying this tactic indiscriminately are:
1. Version 2 is postponed, and a lot of dependencies on Version 1 appear. After a while, moving from V1 to V2 becomes a herculean effort, you have to do heart surgery while the heart is beating.
2. There may be nothing in common between V1 and V2. Implementing bubble sort gives you no insight into quicksort; you literally have to start from scratch. This may be acceptable if there's urgency in delivering a solution as fast as possible, but in my experience most companies have the time to do it right but choose not to (or don't know any better). Also from experience, it would be nice if people understood that it may not be so simple to "just optimize it later", and that it would be better to spend a little more time at the beginning of a project considering how the foundation of the project will scale in the future.
I wonder how the author thinks about this now, 8 years later.
Way back in 2016/2017 I saw this video on prototyping where the programmer whipped up a snake clone in around 4 minutes. I deliberately practised programming in this way: how fast can I go from idea to working prototype? Code style, organization, testing, best practices be damned: you have about how long it takes to run a load of laundry, how much can you get done?
I did this for a few months. I focused on games. I roughly tracked how much time it took to get from empty file to the first interaction with the game. I tried to focus my practice on getting to that first interaction. And the overall time I spent and how far I got with the idea.
The thing with going that fast is... the code sucks. You're not writing it for an audience, for your future self, to make it possible to extend and compose; when you want to go fast and prototype in this way you're deliberately not going to use this code for anything later on. You just want to get the idea out there to see, "is this even fun? Would it even work at all?"
... before you go ahead and spend the next 6 - 12 months spending your limited time on something. Because finding out your game isn't actually fun after all of that effort is demotivating.
In this sense speed is useful. It's useful in certain contexts when doing programming on the job. Especially at startups where the runway is limited and you don't know if you even have customers yet.
However!
Be prepared to throw it out. Once you find some traction for a feature or process; throw out the prototype/MVP code. Make sure it's cheap to do that: you can write software that is loosely coupled, you can write a better API that uses the MVP code underneath, write good tests against the new interface, remove the old code paths piece by piece, etc. If it's a core part of the product and there are customers for it get rid of that prototype/MVP code some how (it's easier to plan for this and not get too attached to the first iteration if you do it intentionally).
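One way to make that throwing-out cheap (all names here are hypothetical, just to illustrate the "better API over the MVP code" idea): hide the prototype behind the interface you actually want, so callers never touch the MVP code directly and the implementation can be swapped later without touching them.

```python
# Hypothetical sketch: the stable interface that callers depend on.
class SearchService:
    def search(self, query: str) -> list[str]:
        raise NotImplementedError

# The MVP: a naive linear scan, good enough to ship today.
class NaiveSearch(SearchService):
    def __init__(self, docs: list[str]):
        self.docs = docs

    def search(self, query: str) -> list[str]:
        return [d for d in self.docs if query in d]

# Later, a real implementation (index, database, whatever) can replace
# NaiveSearch without changing any caller, since callers only know
# about SearchService. Tests written against SearchService carry over.
svc: SearchService = NaiveSearch(["cat food", "dog food", "bird seed"])
print(svc.search("food"))  # → ['cat food', 'dog food']
```

The design choice is exactly the one described above: write the good interface first, let the implementation behind it stay disposable.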
Hmm I had a similar experience but with different results. I took fairly hard but common problems that I had to do and started timing myself.
First time it would be like 2 hours, then I'd get it down to 1.25 hours. Then 45 mins. Then finally I got it down to 18 minutes and couldn't get it any faster. I learned so many tricks and it really burned into my brain all the different faster ways of doing things.
I noticed at work I got way, way faster. I ended up outperforming my entire team, by myself. I think mostly because A) I now had a habit of working fast B) I viewed it as a competition, so no water cooler talking C) I didn't have to think about how to do it, I just knew, and I knew how to do it quickly.
It literally took me from making 75k to 400k in 2 years. I just applied that to everything I could, so when doing interviews I was able to just flood the interviewer with the depth of my knowledge and how fast I worked.
Now I really put in a tremendous effort into this, not just a few hours, it was an obsession for me. But once I did it I got those skills for life. Even now that I'm rusty I still go really fast.
Also, my code quality itself is quite good, because I'll write it as fast as I can (frequently only takes 15 mins), then refactor like crazy (5 to 10 mins) and still be an hour faster than everyone else.
Agreed - there's a certain thing about greenfield development in particular (which is squarely the realm of startup coding) where you have to approach the first, second, maybe third iteration as one that will be thrown away, so when it does get thrown away, you make it easy to do so. And that leads towards some pretty dumb code, often "copy-paste-modify, sort it out later" code.
A lot of "best practice" ideas that get made into blogposts impose a structure that hinders disposability in favor of a certain aesthetic goal: to do the thing with less repetition and slightly cleaner abstractions. The goal is often not unreasonable, but it conflicts with the action of rapid iteration.
So I tend to write now with an eye towards trying to keep the code disposable until I face a certain class of error to eliminate, that needs the more abstract method. Often part of the disposability is in not actually solving the entire problem, but cheating and returning a wrong answer, e.g:
Instead of fancyAlgorithm() being written in a way that creates any dependencies I write a skeleton "function fancyAlgorithm() {return true;}" and then later upgrade it to a lookup table, and then to the algorithm. Because most of the code that matters to the app won't be in the algorithm, but in how all the other data is being passed around. The sooner I can get that piece to return a wrong answer with known error, the sooner I can arrive at a complete but wrong app, which can be corrected into a higher quality one. When not all the pieces of the app are there, it's not clear what error you need to be solving for, which leads to squeezing effort onto code that isn't important.
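A hedged sketch of that stub-then-upgrade progression (the function names and lookup values are invented for illustration; the original comment's `fancyAlgorithm` is shown here in Python):

```python
# Stage 1: a skeleton that returns a wrong-but-known answer, so the
# rest of the app can be wired up and exercised end to end.
def fancy_algorithm(x):
    return True

# Stage 2: a lookup table covering the handful of cases seen so far,
# still falling back to the known-wrong default for everything else.
KNOWN = {1: True, 2: False, 3: True}

def fancy_algorithm_v2(x):
    return KNOWN.get(x, True)

# Stage 3 would be the real algorithm. By then, everything around it
# (data flow, error handling, UI) already exists and is testable, so
# the effort goes where the app actually needs it.
print(fancy_algorithm(2), fancy_algorithm_v2(2))  # → True False
```

The point is the known error: the app is complete but wrong, and correcting one well-isolated function is much easier than polishing a piece whose real requirements aren't visible yet.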
This kind of conflates the idea of a person doing a job quickly with the idea of tools having low latency/quick response, which confuses the issue if what you care about is working quickly. The bottleneck for work is usually the speed of our thinking, or the limits of our motivation to work, not the speed of our tooling. You wouldn't become a better programmer with a faster computer, and usually not a more effective one either. I'll ignore the second definition.
In my opinion, you want to spend about as much time as ever, but spend more of your time working on something that is close to a solution. So, you want to work on the first draft as quickly as possible, so you can either refine it, or toss it out and do another iteration. The advantage is overall quality and confidence in your solution.
For reasons mentioned in the article, I do not prefer to be thought of as working quickly and well, since that often means someone sees your first draft and says "great, looks like you're done, now let me give you something else to work on". Better to have a reputation for taking about as long as most people, but producing better results.
I'm very slow at doing things because I consider things too much. I very much believe this is the main skill I need to work on: the quicker you do something, the quicker it goes wrong, so you can course-correct.
The book Creativity, Inc. by Ed Catmull talks about this. They actually studied it at Pixar and discovered that the teams that moved faster were the best. The teams that considered everything for much longer were wrong just as often as the teams that moved quickly; the only difference is that because the fast teams moved quickly, they could course-correct sooner. They were also less tied to their original idea, so they were happy to drop it and go in another direction, whereas the slower, more considered teams took longer to drop a non-working idea, since they had dug deeper and tried to make it work, and were more invested in it.
I worked at a company where the team was asking for an internal dashboard to see real time metrics from these events we handle. My boss was like "Yeah, we are gonna need 6 months and a team of 5, and we need to plan this out right for once".
I went and wrote a version of it in a weekend and it ended up being what they needed. They had a few minor requests which took an hour or two but not much overall.
My boss was pretty mad at me that I made him look bad though lol. I could have handled that part better I guess, but I hate when people want to spend forever planning especially when you can get something out real quick.
When folks speak hypothetically about speed, they almost always assume identical outputs. Which is pretty contrary to at least my experience, if not reality.
Apple is the world's most valuable company, and they're first and fastest in essentially nothing. So it really depends on the market you're going after.
Anecdotally, (and backed by language learning research iirc), when learning is involved iterations are important. Learn a little, practice a little, get feedback a little, repeat frequently.
But afaik it all comes down to the cost of failure. Public iteration on your encryption algorithm that's guarding billions of dollars is probably a bad idea.
> I’ve noticed that if I respond to people’s emails quickly, they send me more emails.
This is why I intentionally wait before answering emails that aren't related to my core responsibilities. I want fewer of these, so I best not establish a reputation as somebody who quickly provides an answer to everything. Got other things to do :-)
As the article states, this only works for stuff that can be broken up into low effort chunks. Not everything is like that such as maintaining legacy systems or working with large teams. :)
As somebody who works with legacy systems, it might take me a while to understand everything, but I think being able to experiment with different solutions rapidly is incredibly useful and has helped me zero in on the best possible solution considering the constraints. It also helps in understanding how the system works on a deeper level that often isn't possible just by reading code. Sometimes you just have to break things while implementing a possible solution.
In fact, almost nothing that's actually _useful_ is. Anything that can be broken up into predictable low effort chunks is something that can be automated away.
If you can automate away programming, there's some people who would like to give you a lot of money.
A lot of the work that programmers do (which is not identical, but somewhat related to "a lot of programming") could be left out (which is not identical, but somewhat related to "automated"). The reason is that a lot of programming is working around the gigantic brokenness of the whole software ecosystem. Here there does exist an insane amount of possible cost savings.
Concerning the "some people who would like to give you a lot of money": I actually talked about my thoughts with a friend who knows a lot both about economy (he works as a business consultant) and programming. He clearly said that my ideas on this topic were really smart and thought through, but getting money is a lot more complicated (it demands both being a good salesman and having connections). So "some people who would like to give you a lot of money" is simply not true. :-(
> gigantic brokenness of the whole software ecosystem
If you mean the employing organization and its ignorance, I agree. For most organizations today, their value is locked up in technology and those who maintain it. The rest of the business (non-technical people) are essentially overpaid punching bags for everyone to smack around. In an ideal world programming would not be a special skill, but just another means of communication.
I don't think that's what the parent to your comment was about.
You can't automate away programming, but you can write one program that behaves equivalent to many that accepts configuration from people who don't know how to program.
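A toy illustration of that idea (the config schema and step names are invented): one small interpreter replaces many one-off scripts by reading its behaviour from data that a non-programmer can edit.

```python
# One generic program, driven by configuration instead of code changes.
# Non-programmers edit CONFIG; nobody edits the pipeline itself.
CONFIG = {
    "steps": [
        {"op": "filter", "min": 10},   # drop values below a threshold
        {"op": "scale", "factor": 2},  # then multiply the survivors
    ]
}

def run_pipeline(values, config):
    for step in config["steps"]:
        if step["op"] == "filter":
            values = [v for v in values if v >= step["min"]]
        elif step["op"] == "scale":
            values = [v * step["factor"] for v in values]
    return values

print(run_pipeline([4, 12, 25], CONFIG))  # → [24, 50]
```

Each distinct config is, in effect, a different program, which is the sense in which one program "behaves equivalent to many".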
I don't think he says predictable or low effort. Obviously you can't build and release a bridge in a week, nor a big software system. But you can still push yourself to move fast (a relative term), and these projects do in fact get broken down into steps, even if you can't predict all the steps from here to there, and even if you can't release something along the way.
Speed matters when the thing being built is isolated.
If you're building a new feature/product whatever, and the business views that both as a standalone feature, but also as a stepping stone to support another product or feature then the code you get when writing quickly is going to slow you down in that next step.
Scale that up a few levels and you end up with 'enterprise code' that no one wants to touch for fear of breaking all the parts that depend on the pieces below it.
So write code quickly when its isolated and not planned for use in additional products, features, capabilities.
It is very difficult to convince management to prioritize tech debt and the only successful way is in the frame of enabling additional product capabilities.
If you're building a stepping stone, you still have to figure out how to make it isolated in a way then do it quickly. Like if it's a software service, spend the extra time figuring out a good API then speed through the initial implementation.
Aside from this post ultimately seeming like a bit of a joke given its conclusion, I found myself reflecting on a few things:
- Prompting people to send me MORE email is a horrifying thought and set me right off thinking this would be a parody piece.
- He's right that writing quickly can lead to more ideas. But if I'm writing for someone else, "quickly" can have multiple meanings: I can write one thing quickly, then have to rewrite it several times quickly to remedy all the assumptions and missing concepts even a competently, cleanly, and professionally written piece of text will have, OR I can slow down, answer all those questions as I go, and effectively write once, well. Slowing down has been the best writing skill I've ever found for professional writing.
- Having a straight brain-to-Google interface appropriate for 2015 described as permitting almost impulsive thought is ... scary. Discipline benefits even creativity.
- I couldn't help thinking that this piece could have been simply "Work at a comfortable pace, and work on a single item, and don't get distracted by the world" and been more beneficial than "work quickly" (though, again, defining "quickly" is tricky).
From a DevOps perspective, speed is often the indicator that you're doing things right. If you integrate code quickly and often, if you deploy quickly and often, if you patch and upgrade software quickly and often, if you detect, diagnose and fix issues quickly and often, those are all indicators of a high-performing team. Quality isn't always correlated, but high-performing teams tend to care about quality, so it's not uncommon to add quality improvements to the list if you're doing everything else.
I think that part of the problem is companies/executives/manager who think that teams need to 'do' that in order to become high-performing rather than, as you stated, it's an indicator.
There needs to be a suitable environment, skills/training, support framework and everything else around to enable the team to become high-performance over time, that when starts showing that indicator.
Correlation not Causation.
All too often these are mixed by those who read the books that offer 'Acceleration' and think that enforcing these processes will somehow make the team fit.
We tend to get better at the things we do often. Companies which have annual release cycles are almost always terrible at actually shipping software in my experience. One of my first goals in any new organization is improving iteration / feedback cycle speed.
A good post overall, but the final paragraph is particularly great:
> Now, as a disclaimer, I should remind you of the rule that anyone writing a blog post advising against X is himself the worst Xer there is. At work, I have a history of painful languished projects, and I usually have the most overdue assignments of anyone on the team. As for writing, well, I have been working on this little blog post, on and off, no joke, for six years.
When I got my first big corp job at a FAANG company, in the first week a senior dev gave me the worst work advice I've ever received: "Always write code as if you expect it to be around for at least 10 years."
I think that's how that dept was doing things, and it was paralyzing. This was business logic that needed flexibility, not something like an OS kernel. Not only did new features not get shipped, but the pedantic approach actually caused tech debt because better solutions were seen as too hard. Code really did sit around for 10 years, and not because it was good code (it wasn't).
The CRUD IDE/Stacks of the late 90's were closer to the domain and that was a large reason for the productivity I used to see: more done with less key/mouse-strokes and less code. Web stacks make one waste time fiddling with janky stack parts & layers instead of domain logic, where it SHOULD be spent.
It's hard to match their productivity with current web stacks, often requiring specialists and/or big learning curves. (Sure, the 90's IDE's had their warts, but were getting better with time, until web murdered them. And I'll limit this to small-and-medium apps rather than enterprise/web-scale.)
It's usually assumed the features that made them productive are mutually exclusive with web-ness. I'm not convinced; the industry just stopped experimenting. What would help is a stateful GUI markup standard so that GUIs don't have to be re-re-reinvented via HTML/DOM/JS, which is inherently ill-suited for rich CRUD GUIs.
Having your language/tools/standards being a close fit to the domain matters. It's why COBOL lives on despite being even older than me. I'm not saying we should mirror COBOL as-is, only that it still carries important lessons, including FDD: Fit the Damned Domain.
To a point. Burnout matters too: if creating emails quickly leads to quick replies and more emails, before you know it you've emailed yourself to your exhaustion point.
This is a tactic, and strategy matters too. Speed matters on things that matter.
I don't know, this is good advice perhaps for things which are not important.
But, if you are writing an email to a person whose time is valuable, better not to do it quickly. Save a draft, come back to it tomorrow, read again, then send.
If you are writing a legal document, don't do it quickly. Write it, come back to it again to review. Is it still a good idea?
If you are doing something completely new, do it slowly and step by step -- if it is new you don't know if it is right without checking each step.
If you have an idea, read the literature, patents, see what is out there first.
Great post, the summary for me being: speed is not just speed. There seem to be many other benefits aside from just maximizing productivity. It's a change in mindset that reduces decision paralysis/fear of starting, as well as encouraging experimentation.
As an aside, I was originally not going to post this comment as it doesn't really say anything not present in the article itself. But then I decided that going for it instead of overthinking or reworking it counts as an exercise in working quickly :).
I think about this another way. If you're fast enough, you can afford to do the gardening you need to keep your codebase in a good way without having to schedule it all in. If you're fast enough, when doing data analysis, you earn the space to ask the next question, and the next, and to do a deeper analysis.
But... as a lead, I've struggled to train speed before. Some people I've worked with have had it, junior or senior, and some have not.
Has anyone had good experience helping others become faster at their work?
No. I think humans are all unique. Just like a pro basketball coach, you are looking for the right talent for your particular team and your particular context.
I have had extremely smart, talented, thorough engineers who are slower than whale shit in an ice floe. For a startup, those engineers are a death sentence. It doesn't matter how good they are; if they aren't fast, they hold everyone back.
Are you building the Roman empire, or are you building a small group of bandits that move swiftly through the night? Each case has its corresponding strategy.
I think this effect is due to the cost of brain plasticity. Your brain likes to close things out regularly. Any new theory should be met with some kind of categorization, a conclusion, or a new idea to try later.
> I’ve noticed that if I respond to people’s emails quickly, they send me more emails.
> It is a truism, too, in workplaces, that faster employees get assigned more work.
Maybe there's irony I'm failing to perceive, but how exactly are these supposed to be good things from the fast worker's viewpoint? The truism is that working quickly is only rewarded by being assigned more work that's expected to be done even faster, which is a vicious circle if ever there was one!
Eh. Anyone who observes small children will see that they learn very slowly, but constantly. And they are extremely good at it - better than nearly all adults - at a minimum they pick up language, gross motor skills like walking, running, climbing, any number of fine motor skills, within 2-3 years.
They don't work quickly, they display no haste or concern for efficiency. In fact, they're quite inefficient.
> Now, as a disclaimer, I should remind you of the rule that anyone writing a blog post advising against X is himself the worst Xer there is. At work, I have a history of painful languished projects, and I usually have the most overdue assignments of anyone on the team. As for writing, well, I have been working on this little blog post, on and off, no joke, for six years.
And, amusingly, it currently loads really slowly too.
I'm not against or in favour of speed. History has shown us many times that most people don't care about truth and correct solutions; people care about efficiency and moving things fast. Just look at the gaming industry and preordering, or Agile project management. This is what's called taking risks: making a jump into the abyss before we look at what's underneath. Taking risk is always praised. The question is which jump will be the last one and kill us all?
In addition to the many great points made in the article, I would add that speed does a couple of other things for you:
- The faster you finish work, the more time you have to make your work better
- The faster you are even at things like browsing HN or Reddit, the more value you get per hour (e.g. seeing more new ideas, reading more blog posts, etc)
> I’ve noticed that if I respond to people’s emails quickly, they send me more emails.
That's funny: when I learnt this, I started replying with a delay, to give people a chance to resolve problems on their own and generate fewer emails for future me to handle.
This reminds me of Steve Yegge's classic blog post "Programming's dirtiest little secret"[0]. Yegge's post is of course about touch-typing, but it essentially makes the same argument. Doing things slowly is a liability, not primarily because it consumes more time, but because it makes you opt out of doing certain things because of their perceived cost.
"You'd be absolutely astonishedly flabbergasted at how many programmers don't know how to READ. I'm dead serious. You can learn to speed read even faster than you can learn to type, and yet there are tons of programmers out there who are unable to even skim this blog. They try, but unlike speed readers, these folks don't actually pick up the content."
Nice article. This is basically in the article when they talk about email responses back and forth, but not explicitly called out: if you are fast, people that interact with you get faster, too. Toyota figured this out with kanban (https://en.wikipedia.org/wiki/Kanban) and revolutionized process thinking. So, if you are thinking about how to get your team to work faster, focus on what is making slow team members go slowly.
I always see arguments for working faster.
Never arguments for taking your time to get solutions correct even if it doesn't produce something immediately visible.
> Whereas the fast teammate—well, their time feels cheap, in the sense that you can give them something and know they’ll be available again soon. You aren’t “using them up” by giving them work.
Well, that is not always true. Producing results is one thing; making sure you learn enough in the process is another. Managers eager to cram in one micromanaged task after another will use up a person's knowledge without restocking it. Yes, faster can be better, as lots of people here agree, but there are limits.
"If you work quickly, the cost of doing something new will seem lower in your mind. So you'll be inclined to do more." Well, if it is work that I know I can do quickly, I'm just gonna procrastinate the hell out of it; if it is something I enjoy doing, I am already inclined to do it.
Being fast and iterating reminds me so much of the EM algorithm.
Probably a bit of a stretch but still:
You start with an initial estimate of the hidden variables (solution), check what the consequences are by implementing it, and apply the knowledge about the right consequences to improve the solution.
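That loop can be made concrete with a minimal EM sketch (the data, starting guesses, and simplifications - unit variance, equal weights, means only - are all made up for illustration): guess the hidden assignments, see their consequences, use them to improve the estimate, repeat.

```python
import math

# Toy data: two obvious clusters, roughly around 0 and 5.
data = [0.1, -0.2, 0.3, 4.9, 5.2, 5.1]

def em_step(mu1, mu2):
    # E-step: soft-assign each point to a component
    # ("check what the consequences of the current guess are").
    tot1 = tot2 = sum1 = sum2 = 0.0
    for x in data:
        p1 = math.exp(-0.5 * (x - mu1) ** 2)
        p2 = math.exp(-0.5 * (x - mu2) ** 2)
        r = p1 / (p1 + p2)  # responsibility of component 1 for x
        tot1 += r
        sum1 += r * x
        tot2 += 1 - r
        sum2 += (1 - r) * x
    # M-step: improve the estimates using those responsibilities.
    return sum1 / tot1, sum2 / tot2

mu1, mu2 = 0.0, 1.0  # rough initial guesses
for _ in range(20):
    mu1, mu2 = em_step(mu1, mu2)
print(round(mu1, 2), round(mu2, 2))  # means land near 0.07 and 5.07
```

The analogy holds: each iteration is cheap, each one is informed by the failures of the last, and a bad initial guess still converges somewhere sensible.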
Consistently I see this advice from everyone who I consider has achieved outside success in their field.
Greg Brockman (OpenAI Cofounder) : Developer Speed is the most important thing I care about: (https://www.youtube.com/watch?v=CqiL5QQnN64). I believe he said more about this topic but I can't find it now.
Frank Slootman (Snowflake CEO): In his book Amp It Up, he stresses why pushing people to move fast is extremely important
Elon Musk (let's not get into a debate about Elon and just agree that he has helped set up two $100bn+ companies): "I believe in a maniacal sense of urgency; if you can implement something right after the meeting, do that", on his work philosophy when taking over Twitter.
In my personal experience one of the most impressive coders I met, worked extremely fast while also being very accurate, almost indicating that no tradeoff even exists.