The Last 1% (jaredramsey.com)
213 points by jram930 on Aug 8, 2023 | hide | past | favorite | 90 comments



I think the whole premise of this article is wrong.

If a project is successful, it isn't 99% done; essentially it will never be done so long as it is alive, supported, and/or used by anyone (I won't get into semantics here).

As others have pointed out, many of those things should be done way earlier as part of the development, AND as part of the ongoing development/maintenance after launch.

On another note I'm also skeptical that a project should ever be 100% done even if that was possible. I had the privilege of working on a project during the whole lifecycle, including when it was deprecated and eventually decommissioned. It was very pleasant to go through all of the open Jira tickets for this product and close them as Won't Fix forever. All of those features, bugfixes and enhancements were never implemented. The project launched, lived, and died without them, and it was fine.


I think where the semantics become relevant is that people say "project" to mean "product". When it's a product, it might never be 100% done, and it might be a poor idea to try to frame it as if it could be. But some projects aren't products, and don't have perpetual stakeholders, and for some of those it can be possible to set realistic 100% done targets and reach them.


I think this is the difference between products and projects. Projects definitely have an end. Products do not.


Good call.

Software is a living organism, or it is dying. Just as a practical observed matter of how it actually works out.

There are a few exceptions, but few indeed.


In today's age, sure. But just looking back at games from the pre-monetization/MBA era, they still work. Sure, there are some bugs and glitches, but not much that breaks the game.

You can pretty much play any game from the pre-PS3 era and still enjoy it to its fullest.


Similarly there’s the feeling of DONE for games of that era. Burn that Gold Master, or mask those cartridge ROMs, and it’s done. Fini.

Whatever shenanigans were done late in the project, if it passed QA, that code base was effectively read only, with little concern for ongoing maintenance.

On to the next one.


Kind of? But only because of forced upgrades.

The best code I've written is the code that's still running 15+ years later and no one even thinks about.


Firmware is also software, and as far as I can estimate it never got updated after release in the vast majority of cases, at least up until about a decade ago.

Think BIOS/vacuum cleaner/shaver/toothbrush/alarm clock/dumbphone/non-smart TV/airplane kitchen equipment/medical diagnostics devices/PLC's that control most of the entire world's infrastructure and so on.

It's entirely possible and very common to have an actual "final" software release and be just fine (at least up until recently before the whole "everything needs to have internet for telemetry" hype).


And that's a problem. Often they are full of issues which eventually are all known but never fixed.

Updateability is the defining advantage of software over pure hardware solutions. If you don't use the advantage then you are stuck with just all the disadvantages.


Nice theory, but it is overfitted to the broad category of all software, and a product that is thriving shouldn't be conflated with a product that is improving.

Some products with software continue to "live on" successfully and thrive, without updates. Think of a digital alarm clock whose goal was to help typical users wake up on time most of the time. If you ship a product that does that and it isn't being updated, is the product really dying or doomed to failure?

No alarm clock will ever wake everyone up on time, but we can always strive to get closer to that goal if we choose to set that goal. An unreasonable goal could cause unnecessary bike shedding, etc.

But for a simple pacemaker for the heart, the goal is closer to helping as many people as possible, rather than most. Hopefully we write good software and go 15 years without needing an update. I think that is better than bad software that has to receive more updates. Which software is more "alive" and "thriving"? Is the good software with no updates for 15 years really "dying" since it isn't "improving"? Again, thriving and improving shouldn't be conflated.

So, a product setting appropriate goals helps determine how much maintenance is actually necessary, and some goals can be met without requiring any future maintenance. Other goals may benefit from frequent maintenance. Some products can thrive without improving. A product's goals determine the importance of improvements.


While that's true, there are also projects that can live and die with very few code changes for years. And as long as those projects are continuing to provide value for their lifetime, I'd call that a successful project.

On the other hand, I've seen projects get to 99% and provide value for a few months; then, because they never get to 100%, every time there's an issue, no matter how small, it's very difficult to debug. The original owner has moved on to a new project or left the company, so users are left holding the bag, or they just end up deciding to stop using the product or build a new one.


Another way of looking at it: Nothing is ever 100% done when it launches. Nor should you wait for it to be, if it's been tested to your own complete satisfaction. I frequently launch things over 3-4 different beta phases to increasingly large swaths of customers, while documentation is still being hashed out by staff who are coming up to speed on the feature themselves. My goal is that at the 1-year mark, more or less, there are almost no bugs, by which time I've forgotten the nuts and bolts of the code. Meanwhile there are other features and pieces of software that need building.


All of those things should be added. I would argue they take far more than 1%. They're easily 10, 20 maybe 30 or 40% of the work. Creating robust dashboards, alerting and documenting the work in an externally digestible and internally debuggable format is very time consuming and difficult work that pays dividends when done but often gets thrown out because we schedule our work for 'MVP and yeet' where 'minimum' is defined as 'minimum to click button', not 'minimum to support button'.


Having worked in multiple environments with differing code quality standards, it's pretty apparent how both "minimum and yeet" and "completion or bust" fail.

"Minimum and yeet" works surprisingly well if you have unsolvable debates on customer value, options to yank the feature if it sucks, and generally competent engineers. If you are really good at yanking the feature when it sucks - you may even be able to get away with incompetent engineers. However if you repeat this cycle on the same code base 100x over, every feature gets harder. Eventually you hit a point where no one really knows how to do anything anymore as a feature that used to take 2 weeks now takes 8 months.

"Completion or bust" works really well when you can't take a feature back... ever. However the definition of completion tends to grow over time. I've seen launch checklists which amounted to 6 month projects on their own (for both good and bad reasons). Sometimes completion becomes an excuse for architectural astronauts to enforce a change resistant paradigm on the code, eliminating any gains from completionism. Other times, the engineers and managers become convinced that any change will take N months and start ignoring everything that doesn't look like it will provide N months of value.

In an ideal world, I'd love to see organizations better adapt their standards to the needs of individual teams or invest in tooling which makes it "cheap" to do the right thing. However this is not easy to pull off.


I love this whole post!

Small add: "generally competent engineers" can sometimes (almost) inline most of the code completeness details for cheap, reducing their cost greatly.

Writing documentation is always a time sink, though, in my experience. Or maybe I'm just not good at it :P It's usually an additional day of work overall, though.


Depends on what you're doing... For developer-focused documentation, I often will write a bit of the documentation up front as a project is getting set up, with the intent of how it should work locally (or per developer). Then fill in details as each part gets more fleshed out. Same for library APIs, command-line scripts/tools, etc.

For end-user products with a user interface, less so, as it comes down to being flexible... Similar with dashboards: as a developer I often don't know what is wanted up front... when something is asked for and you learn the domain more, it can come together more easily. For larger projects with a front-end component and many developers, you nearly have to forget it at the dev level.


> Writing documentation is always a time sink, though, in my experience. Or maybe I'm just not good at it :P It's usually an additional day of work overall, though.

And it saves you days and days of re-discovering truths about your code weeks/months/years down the line. Sometimes those rediscovered facts are not even true, which will come back to bite you big time.

Integrate your documentation with your testing. That makes it easier to create and maintain. You don't have comprehensive automatic tests? Then that's your problem right there.
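One lightweight way to do that in Python (a minimal sketch; the function is made up for illustration) is `doctest`: the usage examples in the docstring are the documentation, and they also run as tests, so they can't silently go stale:

```python
def parse_duration(text):
    """Convert a "MM:SS" string to total seconds.

    These examples are both documentation and executable tests:

    >>> parse_duration("01:30")
    90
    >>> parse_duration("00:05")
    5
    """
    minutes, seconds = text.split(":")
    return int(minutes) * 60 + int(seconds)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # fails loudly if the docs drift from the code
```

Running the module directly re-checks every docstring example, so documentation drift shows up as a test failure instead of a surprise months later.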


There are several forms of documentation:

  * comments in code
  * team-based comments
  * project design docs
  * knowledge-base articles for handling on-call rotation around feature
  * how-to guides for customers
Each of these has a different cost and a different direct/external usefulness. I absolutely believe in good documentation and I absolutely believe that it's valuable. It does not negate the extra cost of including these forms of documentation - especially the "not-in-code" documentation.


Of course they "cost". But the issue is the mindset that they are "extra". They are not. They are an integral part of professional software engineering. You can't take them away without moving from a professional craft into some hobby hack.

When developing medicine you wouldn't consider safety studies as "extra". When you fly an airplane then the take off checklist is not "extra". Arguing that automated test suites or documentation are just "extra" on top of making software and could be skipped is similar to arguing that you could fly a plane without any checklists or releasing medicine to the public without evaluating its safety. That's just unprofessional nonsense.


Writing docs, especially detailed and complete, can really save you in the long run. Plus the people that come after you will greatly appreciate your efforts.


>invest in tooling which makes it "cheap" to do the right thing. However this is not easy to pull off.

Automated testing FTW. Then you can change and refactor and extend all day, as long as the tests are all green, you are golden.
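The "green tests means you're golden" point can be sketched like this (hypothetical function; the key is that the tests pin down observable behavior, not implementation details):

```python
def slugify(title):
    """Turn a title into a URL slug; the internals can be rewritten freely."""
    return "-".join(title.lower().split())

# The tests assert on inputs and outputs only. Refactor slugify() however
# you like; as long as these stay green, the behavior is unchanged.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("The Last 1%") == "the-last-1%"

test_slugify()
```

Tests coupled to internals break on every refactor and defeat the purpose; tests on behavior are what make "change all day" safe.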


What do you mean by this? > In an ideal world, I'd love to see organizations better adapt their standards to the needs of individual teams or invest in tooling which makes it "cheap" to do the right thing. However this is not easy to pull off.

What does that look like in practice?


For the tooling part, I suppose it's something like a "golden path"[1] where you have predefined templates from which the developers may choose the most appropriate for their problem.

[1] https://cloud.redhat.com/blog/designing-golden-paths


Not to mention good AT (automated test) coverage - it can easily take 20% of project time, and often much more.


But it saves heaps of work after. Far beyond 20%.

(The projects I work on typically spend over 50% on automated tests, but I'm in an area of the software industry where we really can't have any bugs escape into the wild. I wish more areas would work like that.)


> Creating robust dashboards, alerting and documenting the work in an externally digestible and internally debuggable format is very time consuming and difficult work that pays dividends

This is why having a solid BI and data science org is underrated. Assigning these to eng teams is a disservice in every org.


Documentation is a deliverable as much as software. Just as important too.


None of those things are going to make a bad feature successful and not doing these will not make a good feature unsuccessful. They're all a form of technical debt that should be used very sparingly. Some of them will be more of a distraction than they're worth.

Somewhere around 2010 a sort of product development pseudo-science started to take hold of the industry. Telemetry, A/B testing, surveys... oceans began to be boiled to avoid the uncomfortable fact that good product is developed intuitively by good product people. This "last 1%" is somewhat akin to waterfall development.

Stay lightweight, hire people proven to design and ship awesome product and iterate as fast as possible. Use your own product. Talk to people who use your product. Take chances with your product design. Don't let organizational turf wars shape the product (something telemetry driven development is notorious for). Good old fashioned craftsmanship and creativity can go a long way. It's how new industries are born.


Just wondering, have you ever worked on something with a longer lifetime than the typical young engineer's job rotation? One thing we run into all the time is awesome quickly iterated things outlasting many people who understand how it was done.

Admittedly there's a big difference between web apps and products that are meant to last or even contain hardware components.


Yep, I've worked on consumer products with decade plus lifespans. I was lucky enough to have worked in places that valued creativity, so shipping interesting features was always prioritized over direction by committee with data augmentation. I'll say that the end-users really loved that. They for the most part don't want you rearranging their living room (so to speak) they want new home additions.


Agree. Tech is a very broad universe. Creating a consumer-facing platform for a new business is not the same as creating a core system for a telco, utility, or bank. In those cases, the 1% (minus documentation) is crucial, as core features are almost never removed.

When you are experimenting with consumers and a feature might end up being a failure, then yes some of those items could be cataloged as tech debt of sorts.


The difference in views, I reckon, is startup vs. older business. We are now veering into "strategy". You need to think about what your "100%" looks like for a project. Lower-case agile makes these 100%s smaller by making the projects smaller, so you have more information for the next iteration.


> ship awesome product and iterate as fast as possible

And that attitude is why so much software just sucks. Spending some time to contemplate and test and find out what's good and throw away what's not before it ships, all that goes a long way with quality. "Iterate as fast as possible" is one of those immature ADHD approaches and I'd run if a manager forced my team to do this kind of rushed nonsense.


> Spending some time to contemplate and test and find out what's good and throw away what's not before it ships

Why would you build something to throw away before it ships? Hire good people and they won't build un-shippable crap. "As fast as possible" doesn't mean you rush things, it just means you don't waste time with all these peripheral activities.


> Why would you build something to throw away before it ships?

That's because you learn things along the way. About underspecified requirements, about design choices that looked good on paper but ended up brittle, about tech that promised something but couldn't hold up to it when tested out thoroughly. There are many reasons and in every field you see this effect.

> Hire good people and they won't build un-shippable crap

Ever heard of a prototype? Those exist for a reason.


> None of those things are going to make a bad feature successful and not doing these will not make a good feature unsuccessful. They're all a form of technical debt

You said what I was thinking but couldn’t come up with the words. These aren’t the difference makers.

Every “successful product” is held together with spit, glue, and an ocean of tech debt. There is no utopia.


I only partially agree with you because that list also includes things like documentation, error metrics and alerting... those have nothing to do with marketing folks pushing for more telemetry. Things like documentation and alerting are something a good engineer should worry about IMHO


> None of those things are going to make a bad feature successful and not doing these will not make a good feature unsuccessful.

Which is why the article says: "This last 1% isn't just what separates a great product from a good product, it's what separates a product that might not eventually fail from one that will eventually fail."

A feature can easily be successful while being impossible to maintain because when it comes time to pay the cost, all the main people involved have already claimed the required benefits and jumped ship.


> They're all a form of technical debt that should be used very sparingly

I'm not sure if you mean you should hardly ever, or almost always, do the "1%" things.


Intuitively I agree with you; I've seen a lot of the pseudo-science.

But curious: can you elaborate on how telemetry-driven development leads to organizational turf wars? I think I've seen that too, but I'm interested to hear your thoughts.


Everything from what gets tracked to how the data gets interpreted. It also can put a target on certain growth areas that will attract the more "ambitious" people from the company. All will try to make the case with the telemetry data.

Does more time on a page mean users are more engaged or are they struggling to find out what's going on? Depends on which product manager can make a better case, probably involving even more telemetry which has to get prioritized.

In the creative IC model, the designer and developer work to solve a problem based on their experience building product. Or someone has a kick ass idea one day and just implements it creating a step change in usage. That type of environment requires freedom and trust, it also puts a lot of control in the hands of the ICs which is why it's not popular with product managers, directors, vps...


I agree with you in spirit but you have to realize that every developer and designer pairs will think they are the ones that know what they are doing and should have the trust. I've seen trust be given for periods of years and teams simply wasting time and not meaningfully improving anything. This only works with good product people on board and most product people aren't any good at product.


Saying that you’ll use data to make decisions doesn’t change that - most people can’t tell the difference between meaningful data and random sets of numbers with a headline, so it’s still trust awarded to the most convincing salesman.


> In the creative IC model, the designer and developer work to solve a problem based on their experience building product.

This is great for helping to build products, not companies. Companies are larger than just their products; they also have obligations to existing customers, stakeholders represented by auditors, etc. That's not an argument that new products should be anchored down by these other concerns; it's an argument that "creative IC" should only be launched with clear alpha/beta/preview-style labeling, and that there should also be engineers who are not paired with Product, whose job it is to "fill in the rest", so to speak, because it's also important.


That last 1% sounds like a godawful lot of work. With this attitude, you can never call something "done", and I deeply hate this notion of "things that are never finished". Like "you are never done learning C++". It implies that everything you do will haunt you for the rest of your life, that everything you do is a liability, and that there is some moral code or obligation to developers to do things because it is convenient to other people.

This, together with the ever-increasing complexity of well, everything, and the increasing number of "things developers should know about X", together with the notion that developers should always work fulltime and learn in their own free time, is non-sustainable.

It's a painful fact that in this "modern" environment, we just can't build anything anymore. There have been three critical vulnerabilities just today (Downfall, TunnelCrack, Inception). If you make a website it's probably hundreds of kilobytes big and you get people whining over accessibility and how it breaks dark mode of version 23.42.23 of their obscure browser.

Have you ever noticed how productive people like Fabrice Bellard just don't care about that stuff? The last percent is just a trap to suck you into the tarpit of spending time on useless shit. Choose a stable target and reasonable feature set, release, and never touch your project again. Bliss.


I find it works to have a division of labour between builders and maintainers. Just like in property management, the different phases need a different approach.


If builders don't experience the maintenance costs of their decisions, how will they ever learn? And even if they do learn, where's their incentive to do it better next time?


You limit builders by forcing anything they build to hold an alpha/beta/preview label, where it doesn't graduate to having the full backing of the company if maintainers can't fill in the rest of what's necessary for long-term maintenance. The incentives revolve around how many "builder" projects eventually end up losing the alpha/beta/preview label and how long it took to lose the label.


Maintenance concerns should be part of the requirements for the project to be considered successful, the same as pretty much any other engineering. If the only reason to have SEs care about maintenance is to have them do it later, then incentives are wrong.


Communication between the builders and maintainers, including "costly signals" (commercial consequences for poor work, poor reputation, losing contracts etc). There are a lot of builders out there.


This is precisely the reason for the existence of SREs... to be able to push back on builders some of the costs and concerns of maintainability.

In the home builders example... that would be lawyers and lawsuits.


Similar take was shared recently, except it put more emphasis on marketing in the last 10%:

Stopping at 90% [https://news.ycombinator.com/item?id=36967594]


I can't tell if the author is agreeing with my blog post or mocking it :D

https://austinhenley.com/blog/90percent.html


It seems like a different take with similar sentiment: The project isn't over once it's functional.

It's a nice coincidence tho, you and OP should collaborate on joint research to get to the bottom of it.


Yes, I initially thought that the OP was someone resubmitting that post, in fact


Interesting, I didn't even see this the other day. Guess it's a common sentiment :)


Really? The structure is almost an exact replica of my post. The wording of the first paragraph and the text before the list is also very, very similar.

I thought this was a direct response to my post.


The article is, on its face, completely wrong.

> This last 1% [is] what separates a product that might not eventually fail from one that will eventually fail.

It then proceeds to list a dozen different types of instrumentation, dashboards, and documentation.

I don't know about anyone else here, but I've worked on many, many successful software projects that have made their owners many millions of dollars with out-of-date or missing documentation, minimal instrumentation, and minimal or no dashboards to speak of. Most projects I've worked on have achieved at least some level of commercial success, and the vast majority missed at least one of those major categories, usually several.

Yes, you'll be in a much better place if you have it, but if your competitors are building features and iterating the product while you're building out instrumentation and dashboards, you're likely going to lose over time.


Or win, while the other company has had

  * churn
  * turnover and nothing of theirs is documented, everything is tribal knowledge
  * their former "star performers" are now trapped working exactly on this single project because they're the only one that knows how it works.
  * customer loss, because fixes take weeks when you don't even have the metrics to know that their systems are down, and their customers have come to believe them to be unreliable.


> I don't know about anyone else here, but I've worked on many, many successful software projects that have made their owners many millions of dollars with out-of-date or missing documentation, minimal instrumentation, and minimal or no dashboards to speak of. Most projects I've worked on have achieved at least some level of commercial success, and the vast majority missed at least one of those major categories, usually several.

Since we're being pedantic, plenty of software projects that have made their owners many millions of dollars have subsequently failed. Who uses WordPerfect or Lotus 1-2-3?


And no automation or performance dashboards would have made them “not eventually fail”. (Among other reasons - I like how this article just assumes your product is Saas)


Those competitors will be known for crap quality and eventually their product beasts will be unmaintainable and hard to extend. So, while you keep steadily improving and extending they will eventually fold under their own unsustainable mess. I've seen it many times and been on the other side of it often enough.

Considering automated testing as "technical debt" is why there is so much crap out there.


Putting "automated testing" into the last 1% is the issue right there. It needs to be part of what you are doing from the start. You are done when your automated tests are complete and reliably green, not when some underpaid tester in Bangladesh gives their thumbs up.

Ideally your documentation is tightly integrated with your tests too, then it will be done as well and won't go stale.


Awfully similar to a post we just had on here: https://news.ycombinator.com/item?id=36967594. Including the fact that both are short and finish with a list of items.

Interestingly neither post seems to be at 100% (not a criticism!), which kinda relates to what I said in the other discussion: https://news.ycombinator.com/item?id=36971378


I honestly thought this was going to say “marketing, sales, support”! The things I hate to do… but have to do.

But actually this blog is apparently talking about a single feature of a larger product. In which case the larger product should already have the necessary infrastructure (metric collection and dashboards), and so perhaps it is just 1% of the time.

For the kind of indie style product I’ve worked/am working on, I think these are more like 20% issues, but I don’t think that you need all - or even most - of the items listed until after your product gets a bit of traction. Better to launch an 80% product now than a 100% product in 6 months.

Until you’ve got many customers, you can get most of the usage, performance and error instrumentation from watching the appropriate logs, perhaps adding a bit of dedicated perf logging in performance critical areas. Building all this other infra too early is just a waste of time.
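That "bit of dedicated perf logging" can be as small as a decorator around the hot paths, so the timings land in the same logs you're already watching (a sketch; the handler and logger names are made up):

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("perf")

def timed(fn):
    """Log the wall-clock duration of each call; grep the logs instead
    of building dashboard infrastructure on day one."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s took %.1f ms", fn.__name__, elapsed_ms)
    return wrapper

@timed
def handle_request(payload):
    # Hypothetical performance-critical handler.
    return {"ok": True, "size": len(payload)}
```

When real dashboards become worthwhile later, the same instrumentation points are already in place.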

In fact I’d say that, in the early days, continuous deployment and a bit of testing is more important than instrumentation. It’s way more difficult to retrofit CD, and it saves so much time, especially when you’re pushing lots of updates, i.e. at the start of a project.

But like I say, my comments apply to a new product. Not a new feature of an existing product. In which case I’d expect engineering standards and infrastructure to be well defined.


I will do these for myself and my team but only if management recognizes this as work. If my performance review comes back with a low rating despite me having done this work, I am not going to do this any longer. Management can feel free to deal with consequences.


And this is the crux of a lot of software developer angst. Most management doesn't know the benefit of code quality (refactoring), unit tests, metrics/monitoring, security, etc. and will push back against the extra work they take.

Good software developers do know the benefit of those things. They resist management's efforts to ignore them. In the end, the developers often give in because they know who has the power.

There's balance to be had, and cases where it's not worth it to put in more effort on certain things... but the boss rarely has any visibility into the tech side of things.


My personal conclusion is to never work under management who are not software engineers themselves. All my bosses understand what building great software entails, since they could do it themselves. So what I do to improve quality has always had their backing. And nobody can BS them into some nonsense.


>So what's in this last 1%? Here are some of the most frequently skipped things I've seen:

Internal (maintenance) documentation

External (how-to/FAQ) documentation

Performance metric instrumentation

Easy-to-decipher performance metric dashboard

Usage metric instrumentation

Easy-to-decipher usage metric dashboard

Error metric instrumentation

Easy-to-decipher error metric dashboard

Alerting

Automated testing

None of the above are even important in launching a product, much less an MVP.

Startups have raised hundreds of millions and gotten millions of users without any of those. In fact, those things would probably just have slowed them down.


> None of the above are even important in launching a product, much less an MVP.

Hence an MVP is a MINIMUM viable product

> Startups have raised hundreds of millions and get millions of users without any of those

A very narrow definition of what makes a good product or long term enduring company. Unless all we should care about is VCs and founder exit sales


>Hence an MVP is a MINIMUM viable product

Minimum in terms of public features. Not minimum because it lacks a performance dashboard, alerting, and automated testing.

>A very narrow definition of what makes a good product or long term enduring company.

None of the things listed are essential for "a good product".


Huge emphasis on metrics: 6 of his 10 points. However, those only apply to a rather small subset of projects, mostly big web apps or SaaS projects.

In my recent experience, the biggest point that applies to the vast majority of software projects is his second point: user documentation. As often as not, you get pointed to a website, on which it is impossible to find anything useful. Maybe there's an FAQ, but the questions were never asked by actual users.

If there's a problem, or something isn't properly documented? Maybe there's a support email, or an online ticketing system. Whether you'll get an answer is a lottery. For one piece of software, I submitted a ticket in February asking for clarification of their poorly documented API. No answer, so in March I submitted another ticket requesting an answer to the first ticket. Promptly answered: I just need to be patient. To date (August), nothing.

Development schedules for anything "online" are always crazy: You've got to get something out there, fast. Decent documentation never happens, because the company is already off on the next project.


I find that a lot of that can be added as the code is written.

I tend to use a lot of headerdoc-type of stuff, and the last documentation is often running Jazzy on my codebase, and uploading that to the "docs" directory in GitHub.

Error handling should not (IMNSHO) be put off until the end. It should be designed in from the start. We can add "do-nothing stubs," but they should still be there, for when we need to go back, and hook up the reporting.
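A minimal sketch of the "do-nothing stub" idea in Python (all names here are illustrative, not from any particular codebase): route every error through one reporting function from day one, even if it does nothing yet, so the hook points already exist when real reporting is wired up later.

```python
import json

def report_error(context: str, error: Exception) -> None:
    """Do-nothing stub: call sites exist from day one.
    Swap in real logging/telemetry here later, e.g.
    logger.error("%s: %s", context, error)."""
    pass

def load_config(path: str) -> dict:
    """Example call site: errors are handled locally but still
    routed through the reporting hook."""
    try:
        with open(path) as f:
            return json.loads(f.read())
    except (OSError, ValueError) as e:
        report_error("load_config", e)
        return {}  # sane fallback; the error was still reported
```

The point is that "hooking up the reporting" later is a one-function change, not a hunt through every call site.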

Localization is also something that I don't think should be left until the end. I design my code so that every string is a placeholder token, replaced at build/run time. This also allows for easy integration of marketing "talking points" and a "corporate glossary." These may not be a big deal to the coders, but people who sign checks like them.
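The placeholder-token approach described above can be sketched in a few lines of Python (the token names and string table are hypothetical; in an Apple-platform project this would typically be `NSLocalizedString` and `.strings` files instead):

```python
# Every user-facing string is a token, resolved against a per-locale
# table at run time. Marketing can re-word entries without code changes.
STRINGS = {
    "en": {"SLUG-GREETING": "Welcome back!", "SLUG-SAVE": "Save"},
    "de": {"SLUG-GREETING": "Willkommen zurück!", "SLUG-SAVE": "Speichern"},
}

def localized(token: str, locale: str = "en") -> str:
    # Fall back to the raw token, so a missing entry is visible
    # on screen instead of crashing.
    return STRINGS.get(locale, {}).get(token, token)
```

A missing translation then shows up as a literal `SLUG-...` token in the UI, which is easy to spot in testing.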

Etc.

I have a little screed on how I do documentation, here: https://littlegreenviper.com/miscellany/leaving-a-legacy/


Those don't sound like 1%. They sound like a lot more than that. Which is why they don't get done. Who has the time?


Everybody has time to write an automated test. If you don't, then you don't have time to write code in the first place.

That's the difference between professional software engineering and just hacking something quick in your free time.

You wouldn't ride a motorbike at 150mph on a German autobahn if it was put together by your neighbor with parts from a junkyard in an afternoon. Unclear to me why you'd treat software any different.


Fair, automated tests are important, but it's still ludicrous to call automated testing "1%". It's a lot. And let's be real - there isn't always the time, requirements being what they are, especially for properly isolated unit tests. But I'm always an advocate for a "worse-is-better" integration-test-script that requires a full test environment rather than having to build a full mock architecture. Unit tests are the gold standard, but a few integration tests per-feature are a good bare minimum.
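A "worse-is-better" integration script of the kind described above can be as simple as a flat list of end-to-end checks run against a real test environment, no mocks at all. This is a sketch under assumptions: the base URL and the two checks are made up, and any HTTP client would do in place of `urllib`.

```python
# Flat list of end-to-end checks against a live test environment.
import urllib.request

BASE = "https://staging.example.com"  # hypothetical test environment

def check_health():
    assert urllib.request.urlopen(f"{BASE}/health").status == 200

def check_login_page():
    body = urllib.request.urlopen(f"{BASE}/login").read()
    assert b"<form" in body

def run_checks(checks):
    """Run every check, collect failures instead of stopping early."""
    failures = []
    for check in checks:
        try:
            check()
        except Exception as e:
            failures.append((check.__name__, e))
    return failures
```

One such check per feature is a reasonable floor; it won't localize a bug the way a unit test does, but it tells you the feature works at all.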


If something isn't generating, say, the cost of a full-time salary in revenue, then the last 1% is a waste of time.


Until something undocumented, unmonitored or untested fully collapses, and then you lose potential revenue or harm the brand and the company's market cap.

Revenue is not the only metric to consider IMHO.


Sure, if you want to spend tons of money on a product that doesn't sell. I've seen corporations do it multiple times. I myself was staffed on a project that was going to be "the future of our department": we worked on an MVP for almost six months, then the project was shuttered when the two major clients they were planning to sell it to didn't sign the contract in the end (one eventually tried to build their own in-house, and tried to get us to just hand them our business logic).

If we had spent an extra four months or so getting this 'last 1%' done, it wouldn't have mattered. It would have just been even more of a waste of time and resources. At least they realized it before we polished the hell out of it.


Do the stuff you hate as part of the project. A lot of that stuff is at least "during" and maybe best as "beginning". Write the docs first, or the landing page first, or whatever and you may even change the very feature you are working on as you realize it makes no sense.


If you've finished 99%, and your product isn't successful, the last 1% wouldn't matter.

Actually, I would even go with a 20-30% threshold.

Many great products got traction from the start. Think Twitter, Stripe, Facebook, Craigslist, etc.

Even the first version of iPhone was far from done.


Doing devops/sysadmin, I frequently find myself spending 90% of my time on the last 1% of systems that I can't declare out of scope but that are broken/nonconforming for "business reasons". I'd be 10x more productive if I could just turn those off. Every time, I try to make the fixes/write the code that will smooth the next time, but the next thing always arrives before I finish. I've written enough code to recognize the same pattern in any large codebase.


I would suggest doing the last 1% first, but therein lies an infinite regress.

More practical: have a well defined checklist of must-do's. And a separate list of significant nice-to-do's.

Then wind up a project any time all must-do's are done, with however many nice-to-do's actually got done.

So the last 1% (10%? 25%?) consists of optional nice-to-do's that can be scaled back without concern. The optionality of nice-to-do's pads schedules, making target dates easier to hit.


screw all that stuff for now and ship it :)

seriously tho, that stuff is very important and should get some priority but only after you have customers. if you are not in a position to worry about revenue (maybe you won the lottery or this is a project backed by Google), then do as many of those things beforehand as you can. but your hardest sale is the first one, which will be even harder the longer you wait to put it on the market.


Am I the only one who hates these minimalist/retro-looking blogs with a few posts that sell you truisms or already-known facts?


The author is confusing the journey for the destination.

There is no finished. Only more done. Or less done.


I don’t think it’s very likely that a developer will ever implement automated testing.


I'm a developer and I require automated testing even on my personal projects. Every change I make, I _know_ it will not fundamentally break for my users. I never manually run through to check things before release, tests cover it. (Yes bugs happen; that is different than releasing code that can't talk to the database because you can't be bothered to check that the app works at all)

If you are not pushing for automated tests from the start on a project that will be released to users, I think you have some professional maturing to do. It can be small and simple but it must exist.


Seconded.

Recently while refactoring I realized I was breaking nearly everything. Most code didn't have tests -- I'm working on an experiment.

So, I coded 3-4 tests of the "run the main program and if it doesn't crash then it passes" style. These tests are silly, but they catch lots of refactoring bugs! And each one took only about 30 seconds to write.

If later I decide the actual _values_ matter, I can add another test, but this "break glass" test is already giving me benefits. I can refactor bravely and see how much stuff goes up in flames. Very useful!
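A crash-only smoke test in that spirit is tiny. In this sketch, `main` is a hypothetical stand-in for your program's real entry point; the test functions follow the usual pytest `test_*` naming convention:

```python
def main(args):
    # Stand-in for the real program's entry point.
    data = [int(x) for x in args]
    return sum(data)

def test_main_smoke():
    # No assertions on values yet: "doesn't crash" is the whole test.
    main(["1", "2", "3"])

def test_main_empty_input():
    # Edge case that refactors tend to break.
    main([])
```

Upgrading a smoke test to assert on actual values later is a one-line change, which is exactly the "break glass" property described above.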


Not on the list: reflection, in the vein of action research.



