> Pick three key attributes or features, get those things very, very right
I think the trouble with this advice is that, when you’re developing a product, it isn’t really clear which attributes are key. You have a hypothesis, but you could easily be wrong.
Or perhaps this advice could be interpreted as: decide on your hypothesis and go all in on it, and only it. If you’re right, it’ll turn out well (as discussed in the article); if you’re wrong, you would have failed anyway, and you’ll probably fail quicker following this advice.
The supplementary advice (which I guess is left out of this post for brevity, but which I've heard PB say in public talks) is that you should work very, very hard, both before and during the dev project, to determine what those three key attributes should be. And they can change over time.
In the case of Gmail, he spent countless hours sitting next to Google colleagues at their desks, watching them use MS Outlook throughout their work day, and asking them why they were using it the way they were and figuring out what could be improved.
Then when he had a first working version of Gmail, he'd get people to use it and watch intently to figure out how they responded to it, what they liked/disliked, and how it needed to be improved further.
(The first version was mainly just a search box for PB's email inbox. The first thing he was told that needed to be improved was that it should be able to search the user's own inbox instead. But it was still a useful test; they loved the search experience enough to want it for their own inbox.)
Anyway, there was every opportunity to chop and change what the three key attributes should be along the way; the point is to not try and be all things to all people and build every feature anyone ever asks for, lest you end up with your own equivalent of the Homer car.
Part of the reason Agile/XP/etc. exist is to argue against this line of thought. Iteratively putting a working product in front of people is more effective at getting you high-quality information than millions of dollars of upfront research.
Agree but it's not really a dichotomy. From initial conception to final release, you should be researching by iteration. Eventually that means user-testing prototypes, then betas, then RCs.
Agreed but part of that process is forming and re-forming your goals. It's ok if you aren't exactly sure what the goal is or if it turns out that the ones you thought you had were wrong.
It certainly fits the startup-style concept where you expect the product to take off or eventually be "pulled" out of the company by customers. Customers in those situations don't jump on board for a bazillion features (that actually sounds like a headache); it's probably just for the core of whatever the product is. If they're not interested in that... a bazillion side features won't get them either.
I was working on an ancillary product connected to my employer's main product a while ago. It was barely done, missing a lot of features, but they wanted to ship it. I was all "ok, but...", and the folks who wanted it ate it up. No complaints about all the missing tidbits I was thinking of, not a peep. As far as adoption went, it didn't matter. I probably could have spent a month or more longer on that stuff, but it wasn't what was important to get started.
Having gone through that process a few times before, I learned the importance of understanding which among the many equally time-consuming ways to build something will give you the best tradeoffs as the product grows (or doesn't).
Should a record be updated in place, or a new version inserted? Hard or soft deletes? A column for each setting, or a JSON blob?
No one answer is correct for every situation. You can be future proof, ready to implement reporting/backup and restore/power user features when the time comes, or setting yourself up for pain.
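For illustration, here's a minimal sketch of two of those choices (versioned inserts instead of in-place updates, and soft deletes) using Python's standard-library sqlite3; the table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE settings (
        id         INTEGER PRIMARY KEY,
        user_id    INTEGER NOT NULL,
        key        TEXT NOT NULL,
        value      TEXT NOT NULL,
        version    INTEGER NOT NULL DEFAULT 1,
        deleted_at TEXT                 -- NULL means "live" (soft delete)
    )
""")

def set_setting(user_id, key, value):
    # Versioned insert: never UPDATE in place; each change appends a new
    # row, so history is preserved for auditing/reporting later.
    row = conn.execute(
        "SELECT MAX(version) FROM settings WHERE user_id=? AND key=?",
        (user_id, key)).fetchone()
    next_version = (row[0] or 0) + 1
    conn.execute(
        "INSERT INTO settings (user_id, key, value, version) VALUES (?,?,?,?)",
        (user_id, key, value, next_version))

def delete_setting(user_id, key):
    # Soft delete: mark the row instead of removing it, so "restore"
    # (and reporting over deleted data) stays trivial.
    conn.execute(
        "UPDATE settings SET deleted_at = datetime('now') "
        "WHERE user_id=? AND key=? AND deleted_at IS NULL",
        (user_id, key))

set_setting(1, "theme", "dark")
set_setting(1, "theme", "light")   # version 2; version 1 kept for history
delete_setting(1, "theme")
```

The point isn't that this particular shape is right; it's that each of these choices is cheap to make up front and expensive to reverse once reporting or backup/restore features start depending on it.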
> Disclaimer: This advice probably only applies to consumer products (ones where the purchaser is also the user -- this includes some business products). For markets that have purchasing processes with long lists of feature requirements, you should probably just crank out as many features as possible and not waste time on simplicity or usability.
This is a problem with almost all open source software: for all the talk of doing one thing and doing it well, they almost all end up being kitchen sinks in the end. The only major success story I can think of was when Mozilla was forked and trimmed down to Firefox. Hopefully the same thing will happen to Firefox, and we can get a browsing experience where the focus is the average user experience, with easy ways to change it into an expert user interface. And even more hopefully something similar happens to every desktop manager, email client and shell tool out there. Make things extensible, then remove the cruft. Continue until the core functionality is all that remains.
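To make the "extensible core, removable cruft" idea concrete, here's a toy sketch (all names invented, not any real browser's API): the core knows how to do exactly one thing, and every other feature is an opt-in extension that can be deleted without touching the core.

```python
from typing import Callable

class Browser:
    def __init__(self):
        # Optional features live here, outside the core.
        self._plugins: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, transform: Callable[[str], str]) -> None:
        """Install an optional feature as a page-content transform."""
        self._plugins[name] = transform

    def unregister(self, name: str) -> None:
        """Removing cruft is a one-liner; the core never depended on it."""
        self._plugins.pop(name, None)

    def load(self, content: str) -> str:
        # Core functionality: render a page. Plugins run as a pipeline.
        for transform in self._plugins.values():
            content = transform(content)
        return content

browser = Browser()
browser.register("uppercase-headings", str.upper)  # an "expert" feature
print(browser.load("hello"))                       # HELLO
browser.unregister("uppercase-headings")
print(browser.load("hello"))                       # hello
```

Structured this way, removing a feature is a deletion, not a refactor, which is what makes the "continue until only the core remains" step actually feasible.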
I find Vivaldi (not my daily browser) pretty good at that: by default it's a browser like any other, but you can personalise pretty much everything, and unlike something like KDE it feels thoughtful and easy to use.
It's not open source, but as a counter-example I would offer Directory Opus: loads of features, totally customizable, extendable, complete documentation, while somehow managing not to get bloated (they've been at it for decades, after all). Old Opera was also very good in the same way, and long before Firefox even showed up.
What he means: If your product (first version) is Great (at a few important things), it doesn't need to be Good (at everything)
I think the unstated “first version” part is important. You should be gradually adding the missing parts, and not just leave the product forever in that great-and-simple-but-missing-too-many-things state.
True, but I think another important factor is that by holding off on the missing parts, you might realise they're not really missing after all. What once seemed essential actually isn't.
This works sometimes. As the article notes, it worked for the iPod. For the iPad, it landed much softer than its predecessors, even if they still made a killing on sales.
The large format was awkward to hold as a content-consumption device, and while it could have made for a neat content-creation device, that never became a mass-market phenomenon. Later, the smaller models were eclipsed by phones that grew in size. Then, recently, the line was repositioned to import some more traditional computing expectations, after the Surface line proved that there was demand for that paradigm.
If you squint, you'll notice that the iPad wasn't a failure, but the critiques leveled against it at the time -- questioning whether consumers would recognize an iPad as something they needed -- turned out to be right. Those who waited and never bought an iPad were rewarded by phones that grew in size until they could reliably satisfy much of the iPad's use case. Those who used the iPad for content creation found that software was a significant part of their workflow, which translated poorly to competing platforms that later emerged to innovate with hardware -- so their long-term retention was a blend of lock-in and merit.
The high price of the product initially contributed to a significant self-selection of its customer base -- by dissuading unsure prospects from an impulse buy -- but this gatekeeping effect was lessened when smaller, cheaper models were introduced later. Gmail achieved gating through invitations; Facebook achieved gating by requiring an ".edu" email address. The iPad, Gmail, and Facebook all benefited from the marketing value of gating, but in the latter two's case, the network effect kicked in in earnest once the gating was lifted. In the iPad's case, once less-invested people began buying iPads, that now-shifting market began moving closer and closer to the consensus that it's simply a Really Big iPhone, with all the benefits and drawbacks that come with that.
I'd generalize that lesson to say that your product must appeal to a self-selecting, highly-invested fan, so that it's profitable to solely cater to them and ignore everyone else. Then, to survive an intentional or unintentional pivot to a more mass-market appeal, your product must readily offer a smooth and coherent way to satisfy a set of usability needs people currently have. Gmail, Facebook, the iPod, and the iPhone did this, but the iPad fell well short.
These are really good observations. One thing that made me appreciate the _current_ iPad a little more is my recent experience with MS's Surface, which is an explicit tablet/desktop hybrid. There is a lot I like about what MS is doing, and I recognize the hybrid model is extremely difficult to accomplish, but their tablet experience just doesn't stack up to the iPad's. I felt frequently frustrated by the lack of gestures which would help me quickly navigate between applications, and (even worse) the click targets don't change for tablet mode, so pressing things with my finger is nearly impossible. Watching YouTube is like rocket surgery because of how the controls hide until hover and remain very small. Even UIs which the OS could ostensibly alter, like click targets for window controls, stay small and difficult to hit.
Microsoft is trying to hit two products at once, and (to a degree) the desktop mode is still strong, but the tablet mode just isn't. It gives me more respect for Apple's decisions than I previously had.
Speaking as someone who contributed to those killer sales, the justification for my iPad 1 purchase was presentations - never again did I need to present on a laptop screen, and slides flipped elegantly. Of course, with that purchase I discovered the Kindle store, with the complete works of Aristotle, Nietzsche, etc., as well as classic literature. In fact, I credit the iPad single-handedly with turning me into a reader.
> We took a similar approach when launching Gmail... The secondary and tertiary features were minimal or absent. There was no "rich text" composer. The original address book was implemented in two days and did almost nothing
Even today, Gmail doesn't do a lot of things that Outlook does, like rendering web fonts or inline calendaring, but I still vastly prefer Gmail (even with the new clunky UI) over any other client.
Also shoutout to Paul for envisioning the iPad as a quarantine savior in 2010 haha
Exactly. Paul B gets a lot of credit for designing a nice AJAX product, but most people, myself included, switched from Hotmail mainly because of the storage, and partially because of Google's cool aura at the time (remember "don't be evil"?).
The search bar was nice, but it would have been pointless in Hotmail, where you were forced to delete your old emails anyway.
Reminds me of something a colleague once told me (not sure if he was quoting someone): "your product needs to be the best, but that doesn't mean it needs to be good."
That is to say your success is relative to your competition (and sometimes all solutions are "not good")
This advice seems to me like the parallel for enterprise products.
Yeah, I like to think of that as the "Salesforce clause." Salesforce is a very powerful system, and can be bent to do just about anything. But, actual users frequently hate it, and customization is involved enough that you need someone who's an expert in doing it.
I consider myself lucky that I don't have to use very much software at work that was chosen for me by a CTO, or, worse yet, a CFO, or other executive.
Very true. Microsoft Word is a great example of this. It has a ton of features. Nobody uses them all. Everybody uses a small fraction of them. The problem is that my useful subset is probably not the same as your useful subset.
And, this is about a piece of software that's fundamentally just about producing documents for people to read. You know, text, and the occasional picture or graph. Excel is worse: never become known as the person who "knows Excel" in the office, unless you want people bugging you forever about it.
Why do so few software projects ever reach a state that's considered "finished," anyway?
One way to consider this is to put yourself in the mind of a product manager with kids and a mortgage. If you declare the product is "done", the company no longer needs you to improve it. That means trying to find a new role in the company at best and getting laid off at worst.
The same goes for UI designers who keep reinventing the wheel in different ways or devs who keep refactoring the same functionality from monoliths to microservices and back.
In an ideal world, the company owners (those who would financially benefit from reducing ongoing development costs) would be up to date enough to realize the product is "done", but in all but the smallest companies there are too many layers between them and the products for them to make this assessment accurately.
> Why do so few software projects ever reach a state that's considered "finished," anyway?
I tend to write all my software as “finished.” Even my WIP projects are of “finished” quality quite early in their development, so they become quite useful, quite quickly, even if only for parts to be cannibalized. Some are destined to lie fallow, as I learn more and render them obsolete.
I wrote this in another post:
Shipping is boring.
Ever watch a building go up? A good prefab looks complete after three months, but doesn’t open for another nine months. It looks awesome and shiny, but is still behind a rent-a-fence. What gives?
That’s because all that interior work (the finish carpentry, the drywall, the painting, etc.) takes forever, and these are the parts of the building that see everyday use, so they need to be absolutely spot-on. The outside is mostly a pigeon toilet. It doesn’t need to be as complete; a solid frame that’s watertight is sufficient. They just needed it to keep the rain out while the really skilled craftspeople got their jobs done.
I like to make stuff polished, tested and complete. I don’t like making pigeon toilets.
> Why do so few software projects ever reach a state that's considered "finished," anyway?
I think this question contains both the cause and its effect in one.
We generally all agree that all software has bugs. We know the march of time and changing of standards means software will need updates to stay relevant - whether making it work on newer systems, following "better" UI trends, handling new file formats, etc.
So you're going to need maintenance.
But your maintenance team will suck if they only touch the code once a year. They'll take forever and add more bugs than they fix. So you keep a team around and you have to give them something to do so they stay on their game. Ergo bloat.
I remember years ago, when 4:3 CRT monitors were still prevalent, opening up Word at work and turning on every optional menu plus the ridiculous paperclip assistant just to make this joke in a snarky email (before there were snarky Slack messages instead).
The resulting blank space left for actually typing something with every feature enabled: zero.
If you have any old browser toolbars installed, remove them ASAP.
Often they were subcontracted out, even by name-brand companies, and the result was a security bugfest. In particular, I'm thinking of one that starts with Y.
A couple of days ago, I gave up on Keynote, and reverted to PowerPoint.
Keynote is fun, compared to PP, but it’s a fairly typical Apple product, with a few, highly-polished features, a prescribed workflow, and difficult to repurpose.
It just didn’t have a couple of crucial features that I needed, like being able to treat individual build steps as phases of the presentation.
I dream of dumping PP, but Keynote isn’t the one.
I do like this guy, though. I feel as if the tech industry could benefit from more people like him.