When users never use the features they asked for (utk.edu)
583 points by jermaustin1 on Sept 29, 2021 | 219 comments



Before the launch of the Xbox 360, the company I worked for was one of their suppliers, and without going into enough detail to make me identifiable, the management of our two companies had arrived at a sort of stalemate: we didn't want to tell them exactly how our algorithms worked, and they didn't want to tell us exactly what they were doing with them. Think of it like a data compressor where we don't want them to know the exact compression algo, and they don't want us to have their data.

So, somehow, it was decided that a website would be created that encodes MS's proprietary data without storing it, while our algorithm stays implemented server-side, so there's no leakage of proprietary stuff in either direction.

A month later, the app is done and tested and I'm decoding data from the website on real hardware. I move on to the next project.

Three years later, the Xbox 360 is out. I decide I'm tired of filling out reimbursement forms for Heroku, and it's got to be a security risk with no updates in three years... so I take a look at the app's stats to see if it's feasible to shut it down--exactly zero users. Nobody ever attempted to use it, not even once.

Ironically, and unknown to me, the source code had been lost in a freak accident, and when I deleted the Heroku account that was the only copy of the source left.

Even worse, a year later someone claimed to have a use for the app and where was it please? And I had to explain we no longer had the source. I remember a very long e-mail about how irresponsible we were.


The worst programming experience of my life was also related to lost source code.

I was working for a place that had a service running from a Java app that was customized for each customer, about 200 copies of roughly the same app. There was source control at some point, and when a new customer was being onboarded they'd just make the customizations they required, compile the app, and deploy it.

By the time I worked there (years later), all source code had been lost, and I was tasked with creating a CI/CD system that included managing these apps (they’d now decided they needed to be maintained).

For each one I had to decompile the whole thing, rewrite the code into a human-readable format, redeploy, and test it. They were filled with all sorts of horrible anti-patterns too, like hard-coded file paths, and some of them were only used once a year (but for an absolutely business-critical process).
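
(A minimal sketch of the kind of cleanup being described, in Java since these were Java apps: a hard-coded file path from decompiled output moved into a properties file. All names and paths here are invented, not from the original apps, and it expects an exportjob.properties next to it.)

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.file.Path;
    import java.util.Properties;

    public class ExportJob {
        // Before (typical decompiled output, hypothetical):
        //   Path out = Path.of("D:\\dc1\\exports\\out.csv");
        public static void main(String[] args) throws IOException {
            Properties cfg = new Properties();
            // Externalized, so a failover to another data center only needs a config change.
            try (FileInputStream in = new FileInputStream("exportjob.properties")) {
                cfg.load(in); // e.g. output.dir=E:\dc2\exports
            }
            Path out = Path.of(cfg.getProperty("output.dir"), "out.csv");
            System.out.println("writing to " + out);
        }
    }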

About halfway through, we had a major data center failure that meant we had to do a failover, which of course broke most of these apps with all their hard-coded configuration. So instead of having months to work through all of this garbage, I was asked to get them all working immediately.

I got them all working in 3 days, which I thought was quite a monumental accomplishment. But my CEO was very dissatisfied that it took so long. I quit a couple of weeks later, and still almost regret putting so much effort into saving them. A friend of mine still works there, and apparently all of these applications are in exactly the same state as I left them.


That's brutal. You were right to quit.

There's a certain personality type that will never appreciate or reward hard work.


Yeah I was definitely right to quit, my career has been pretty great since then.

But I did get a lot out of the few years I worked there, even though a lot of it sucked. It was my first proper job and they’d basically let me work on anything I wanted to, even though I was incredibly green and was just figuring it all out as I went. I definitely fit a lot more experience and learning into those years than you’d reasonably expect a person to. I think a lot of my success today came from turning their incompetence into opportunities.


I've also had a job early in my career where I was basically allowed to work on anything that interested me, and 20 years later I'm still relying on that experience.

Maybe this is something every junior should do - try a bit of everything and see what you like.


That's why I think a good consulting agency can be a great first place to work. Lots of projects, hopefully different technologies, perhaps international clients with travel. Then once you know what you like and specialize a bit in it, it's easier to step to some product company as a senior.

Plus, in my experience consulting companies are more focused on employee well-being, as it's their only asset.


> There's a certain personality type that will never appreciate or reward hard work.

Now, how would someone figure out these personality types ahead of time? Any red flags to detect people or workplaces to avoid in a professional capacity?


Ask them flat out how they reward initiative and accomplishment. If they start fumbling, there's your answer. If they don't, ask them for a concrete example. If that's when they struggle, there's your answer.


1. "...Able to work under pressure and meet deadlines.."

PASS!

2. "I(boss) coded the first version..."

50% of the time it means:

Your solutions or design decisions will never be "as good as his", and you will forever hear "I did it like this and that. It should be fast to implement xyz."


I think the problem is tied largely to mental illness... Sociopathic and psychopathic behavior are pretty much the norm in C-level management.

There's a book called "Dangerous Personalities" which takes a dive into practical psychology for these sorts of people.


On the topic of lost source code... we had a few blessed binaries that were vital to building System 8 and System 9 at Apple. There was a single build machine that could generate these binaries from other object code but the source was long gone. Oh please oh please old Quadra 700, don't give out on us! Those days are over right? Surely nothing like that is happening with OSX...


Every company has a shrine to vital things that cannot break under any circumstances. We have a holy Win7 ThinkPad.

Often there are gurus too and monks live in the caves next to the relics, trying to guard the sacred grounds and spreading the gospel.


That's awesome. :-)


Hilarious. Why didn't you refactor the customizations into a config file and just deploy a single jar, though? If you went through the effort of rewriting the entire thing!


The OP wasn't re-writing them.

Instead, the OP fed the old version into a tool (a decompiler) which spat out terrible source code, worse than the original, with no comments and bad variable names.

This was the only way to get source at all.

Then they made the decompiled source compile in a really nice way.

Finally the boss asked them to make huge changes to this horrible source with time pressure.


Pretty much this.

A lot of the work would have been roughly equivalent to trying to de-minify some JS.

With a modern IDE the task would have been a lot easier. I don't know if Eclipse had refactoring assistance built in back then, but if it did, I'd never even heard of the idea. I was sitting on not much more than a year of programming experience at the time.


Eclipse had refactoring from the start. It was a pretty advanced IDE for the time.


It had to be, given that it was basically VisualAge for Smalltalk rewritten in Java.

Hence it had, and still keeps, Smalltalk-like code navigation.


Textually comparing decompiled code variants can be complicated because the decompiler might (or will) generate different names for locals.

Because of that, a better approach might be to compare the bytecode of the variants first, to find out in which classes the variants differ.

Then treat the different class files as source code until you need to change any of them (so, variant 1 has foo1.class, bar.class, and baz.class, variant 2 has foo.class, bar2.class and baz.class, etc). Don’t immediately try to clean up their code, but ‘just’ try to figure out what they are doing (for many of them, that, hopefully, can be done from function names)
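
(A minimal sketch of that "compare the bytecode first" idea, assuming each variant is an exploded directory of .class files; the layout and names are assumptions.)

    import java.nio.file.*;
    import java.security.MessageDigest;
    import java.util.*;

    public class VariantDiff {
        // Hash every .class file under a variant's directory.
        static Map<String, String> hashes(Path dir) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            Map<String, String> out = new TreeMap<>();
            try (var files = Files.walk(dir)) {
                for (Path p : files.filter(f -> f.toString().endsWith(".class")).toList()) {
                    out.put(dir.relativize(p).toString(),
                            HexFormat.of().formatHex(md.digest(Files.readAllBytes(p))));
                }
            }
            return out;
        }

        public static void main(String[] args) throws Exception {
            Map<String, String> base = hashes(Path.of(args[0]));    // e.g. variants/base
            Map<String, String> variant = hashes(Path.of(args[1])); // e.g. variants/customer42
            for (String cls : variant.keySet())
                if (!variant.get(cls).equals(base.get(cls)))
                    System.out.println("differs from base: " + cls);
        }
    }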


The issue I found when I first looked at it was that they were all a little bit different in random and confounding ways. Also, whatever the "base version" of the application was had been undergoing development over the years while all these forks were being made.

Looking back on it, that likely would have been the best approach. But at the time the complexity of designing something like that was a bit beyond me.


I think the OP is saying each program was slightly different. Like it had been forked 200 times.


I can't edit my post anymore-- before someone realizes my mistake, it was the XBox One, not the 360. Rails barely existed at the time of the XBox 360 development.


Thanks for updating! I was a little confused since Heroku wasn’t around before the 2005 launch of the Xbox 360, but figured it was shorthand for something else.

Given the Xbox One SOC had a lot more security in mind, I could see how Microsoft was more cautious about these things. [1]

[1] https://m.youtube.com/watch?v=U7VwtOrwceo&feature=emb_title


Just a lapse of memory, and thanks for the benefit of the doubt. Another commenter mentioned Turbolinks and I was like "whoooaaaa yes I used that and this timeline doesn't line up at all." This led to a long internal discussion about when exactly I played BioShock and whether that lines up with my memories :-)

I guess I don't mind talking about it so much-- we made a sub-$1 component that could compress certain specific waveforms and play them back. The trick was that they were highly structured, so the compressed data basically just parameterized the silicon in what might be considered essentially an assembly instruction: frequency, carrier wave, data to be sent, etc. The chip would then generate the signal, and you could shove it out the electromagnetic radiator of choice with a little amplification, a diode, and a couple of resistors. The problem with these signals was that they required very precise timing-- they weren't super complex, but not something a general CPU could handle; the variability in the timing was too high.

Now yes, you could use a general-purpose DAC and an amp and generate the signals, but we sold you a turnkey thing: with 10 lines of code you could be transmitting signals, and the whole thing only used a few KB of RAM.
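
(To make the "compressed data parameterizes the silicon" idea concrete, here's an illustrative guess in Java at what such a command might carry: a few bytes of parameters the chip expands into a waveform, instead of raw samples. Every field, size, and value below is invented, not from the actual product.)

    import java.nio.ByteBuffer;

    public record WaveformCommand(int carrierHz, int markMicros, int spaceMicros, byte[] payload) {
        byte[] encode() {
            // 10 bytes of header plus the payload stand in for thousands of raw samples.
            ByteBuffer buf = ByteBuffer.allocate(10 + payload.length);
            buf.putInt(carrierHz);            // e.g. 38_000, a common IR carrier
            buf.putShort((short) markMicros);
            buf.putShort((short) spaceMicros);
            buf.putShort((short) payload.length);
            buf.put(payload);
            return buf.array();
        }

        public static void main(String[] args) {
            var cmd = new WaveformCommand(38_000, 560, 560,
                    new byte[] { 0x20, (byte) 0xDF, 0x10, (byte) 0xEF }); // made-up code
            System.out.println(cmd.encode().length + " bytes instead of raw samples");
        }
    }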

It wasn't super high tech, but our value add was high. I'd say most of the major electronics companies used our chips at one time or another.


Waveform generator for haptic feedback?


The description sounds like software defined radio. A CPU's frequency jitter doesn't matter at all for haptic feedback. Presumably it's not for WiFi, as there are plenty of cheap well-tested WiFi chipsets, so maybe for wireless controllers.

Edit: I was wrong about the frequency range, a sibling comment to yours mentions IR, not radio. Though, their mention of "radiator of choice" sounds like this same chip (or related chip) could be fed into an RF modulator/demodulator to build a software defined radio.


I was going to say the TPD158 HDMI redriver, but that doesn't compress anything and is made by TI.


So essentially you are a hardware shop. I wonder how a web service on your side fits into all this...


Yea, we were a consumer electronics company that was mostly subcontracted to make components or complete hardware. I remember working on products for Denon, Crestron, Roku, Samsung, LG, DirecTV, Microsoft, HP, Nintendo, Sony and Audiovox (and their subsidiaries--Monster etc), and in Europe, the One For All brand. It was honestly pretty exciting.

Regarding web services, like I said, we had a library of these waveforms and that was our cash cow-- you got the chips for only a few cents over cost, you got the API that operated them for free, but access to our database cost a fair bit.


Ah, so I'd guess you were producing some chip that generated IR signals to control various consumer electronics devices, and the library is a library of control commands for thousands of devices from different manufacturers.

Basically what you need to build a universal remote control, or a remote control for a specific device that includes controlling capabilities for other manufacturers' devices.


I think you hit the nail on the head with that.


I'm pretty sure about that, especially since One For All is a brand dedicated to universal remotes.

If I had to take another guess, I would even nail it down on him having been employed by a company called Universal Electronics, which produced a series of universal remotes called "JP1 remotes" in hardware hacker circles (see for example http://www.hifi-remote.com/files/help/The%20WHAT%20and%20WHY...) and also acted as an OEM for a lot of other companies, so their tech could be found in various remotes of different brands.

[edit] I was slightly wrong. He was working for Zilog (which sold their universal remote business to Universal Electronics Inc. in 2009 however) and probably involved with a cool-named product called "Crimzon RC Blaster": https://www.zilog.com/docs/ir/PB0171.pdf


Aye, you win. Zilog it was :-)


This is super interesting! Shame that you can't tell us more - I'm very curious what could link products from these companies.


You just described how music is compressed on YouTube.


Cool story thanks for sharing - so how did they get around the issue for three years without the app?


Thank you!

If they were that serious about security: the chips we made also implemented this same compression algorithm. It would have been trivial to hook one up to an Arduino and feed it the data you wanted to encode. I assume someone realized this, or they never really had a need to encode their own data (we provided an extensive library of encoded data).

I strenuously disagreed with the business justification for keeping all this stuff secret. But it was my job to respect those decisions.

It was a lot of fun writing the (rails) app itself.


back when turbolinks were hot shit :D


I remember a conversation with an inexperienced PM, who was still in the mode "we need to add X because a customer asked for it".

What I've learned - although I've never had a PM role - is not to blindly just give customers what they ask for, but to figure out what they really need. Their feature requests are often a proxy for something else they aren't able to articulate.

Another thing to be careful about is adding that one thing a single customer asks for but nobody else does. You end up wasting cycles and bloating the product with these one-off asks. Sometimes you have to stand your ground and say "no", because the ROI just isn't there. Startups in particular are vulnerable to this, because they haven't got a large enough customer base and capital yet to comfortably say "no", and end up with features jammed into the codebase that are still there, years later, when the original customer that asked for them is long gone.

There are exceptions of course - perhaps the feature is a legal requirement, e.g. "we need to show this specific set of Terms & Conditions before the user clicks Yes", something they must have and can't live without.


Exactly these 2 points.

I, as a PM for a platform, may sound arrogant, but I strictly adhere to the phrase: only what customers need, not what they want. I also rely on numbers and classification: does this feature help one or more users? Which type? Power users? Everyday normal users? Is this just emotional, a want or a need? How much does a user benefit from the change? Can we do better, are we missing something (symptom vs. root cause)?

Product development is incredibly hard to get right. As general advice, however: do less. Think about Word and its feature overload. There is a downside to customer feature wants: clutter and distraction. Users of all kinds are allergic to noise. Keep that in mind. I call this "friction free".

I think in systems, general abstractions, and modules. I classify features. For example, power users can get a special role and admin area where you can abstract their needs away. Their productivity is different.

I learned as a rule of thumb, that you should talk a lot to your customers and blend that with data. Data itself (usage tracking) is highly misleading.

This is hard, and I aim for a general satisfaction level. People who like your products tend to "get" your idea and don't fight it.

Have fun! :)


> clutter and distraction. Users of all kind are allergic to noise

It's actually worse than this: additional information and options hinder users. They make users less able to find, understand, and successfully perform otherwise easy tasks. At some point, they also make users stop using the product. Those major drawbacks are important to state to all stakeholders.


That's weird, I always thought the absence of additional information and options were what was hindering me from performing easy tasks.


Good doctors don't ask patients what medications they want, they probe about symptoms, context, desired outcomes, etc. I find this a reasonable (not perfect) analogy in my work as a consultant.


I absolutely hate modern design that tries to get rid of all the "distractions". I have had a bunch of my favorite applications get MUCH worse due to hiding or elimination of key features in the name of making a simple slick design.


> Data itself (usage tracking) is highly misleading.

This is so true. Why is a feature unused? It could be:

* It is not needed

* Nobody can find it

* Nobody understands what it does or how it works

* The name is misleading people into thinking it's not what they want

* You left the feature flag off, the feature isn't even live

* It doesn't actually work

* It works sometimes, but frustrates users when it breaks so they don't come back

* Your data is wrong, and they do use it

* etc.

Lots of folks stop thinking at the first one.
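
(One way to keep the data from misleading you on the first few items in that list: instrument exposure separately from use, so "nobody used it" can be told apart from "nobody ever saw it". A minimal sketch; the event names and in-memory sink are invented for illustration.)

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    public class FeatureStats {
        private static final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

        static void record(String feature, String event) { // event: "exposed" or "used"
            counts.computeIfAbsent(feature + ":" + event, k -> new LongAdder()).increment();
        }

        public static void main(String[] args) {
            record("bulk-export", "exposed"); // the button was rendered on screen
            record("bulk-export", "exposed");
            // exposed > 0 with used == 0 suggests discoverability or fit;
            // exposed == 0 suggests the flag is off or the UI never shows it
            counts.forEach((k, v) -> System.out.println(k + " = " + v.sum()));
        }
    }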


Also the "feature spiral of death" when a customer's feature request is really just them trying to say "no" politely. There is no amount of features that will make them say "yes". But once they've started down this road it's very difficult for them to admit that they didn't want any of the features they're asking for, they just want you to leave them alone.


Wasting time and money on this sounds like just deserts for being too pushy with one's sales techniques.


I never even considered this. Given the number of times we've built some really stupid feature because a potential big client had it as a requirement, I bet at least a few were just telling us to go away.


The standard way of finding this out is to ask them to pay at least something towards developing the feature. If they're genuinely interested and genuinely need this feature, this is reasonable and they'll be agreeable. If they're not actually interested, then this doesn't make any sense for them and they'll refuse.


One option is to implement a non-working version of the apparently unnecessary feature. In the unlikely event someone tries to use it, an exception is trapped, and the appearance is given that there's a bug inhibiting the feature use. A brief apology is issued.

Then, and only then, does the development team actually implement the feature.
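
(A sketch of that stub, assuming a desktop-style click handler; all names are hypothetical, and the flow is exactly as described above: trap the exception, log the attempt, apologize.)

    public class ReportExportAction {
        public void onClick() {
            try {
                // the feature intentionally doesn't exist yet
                throw new UnsupportedOperationException("report export not built");
            } catch (UnsupportedOperationException e) {
                log("feature=report-export attempted");  // the signal that it's worth building
                showDialog("Sorry, something went wrong exporting. We're on it!");
            }
        }

        private void log(String msg) { System.err.println(msg); }
        private void showDialog(String msg) { System.out.println(msg); }

        public static void main(String[] args) { new ReportExportAction().onClick(); }
    }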


Real life lazy evaluation, I like it


A quick way to teach users never to click that button again, I'd say.


If they'd rather not click the button than get you to fix the button, clearly they don't need a working button.


I would run from that product.


You would never know that this was happening.


"but to figure out what they really need."

I actually had a senior manager at a large company I once worked for tell me, after we demoed a new feature that had been added for his team: "that may be what I asked for but it's not what I want".

It can be hard to come up with a constructive reply to stuff like that.... :-)


This is why I always build things as iterative prototypes. Get something in front of the customer asap, so they can tell you what's wrong with it. Trying to get humans to accurately and completely describe what they want in a spec is doomed to failure.


Indeed, and if this chap hadn't seen the multiple mockups and iterations during development I might have had some sympathy.

Edit: We eventually found out it was pure office politics, nothing to do with what we had actually built.


Since we're sharing :)

One "trick/lesson" I learned from working with a "big dist system" (think scraping-jobs) with lots of clients(read unique errors) and requirements was:

Never-ever-ever do:

    if (client->id == 123) {
        // do special workaround, process, feature or magic-dance
    } else {
        // normal non-magic-dance process/flow
    }

The only "allowed method" for implementing such code was to add a "JobOption" to the client-config.

Thus you have:

    if (client->joboptions->magic_dance) {
        // magic-dance feature/workaround
    }

You'd be surprised how many clients turn out to have "similar requirements" over time, just in different combinations, and being able to tick a checkbox on their job config (enable magic-dance) was gold for maintenance and keeping out the spaghetti code!

It was the greatest of sins in our org to write "if client->id == 123".
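
(The same discipline sketched in Java, since the snippet above is pseudocode; the option name is the commenter's, everything else is invented.)

    import java.util.Set;

    public class ScrapeJob {
        // In reality this would be loaded from the per-client job config.
        record JobOptions(Set<String> enabled) {
            boolean has(String name) { return enabled.contains(name); }
        }

        static void run(JobOptions opts) {
            if (opts.has("magic_dance")) {
                // workaround path, reusable by any client whose config checks the box
            }
            // normal flow
        }

        public static void main(String[] args) {
            run(new JobOptions(Set.of("magic_dance")));
        }
    }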


One piece of advice I was told years ago as an intern. If a customer wants it, they have to pay for all of it, and once it's done, you have the right to provide it as a feature or service to other customers as you see fit.



This isn't somewhat related - it's the root cause :-)


> "but to figure out what they really need."

I've had a boss who was very "pragmatic". Used to come up with concrete and practical solutions all the time. "We need to add a button here" or "We need an extra tab that only X can see" and so on.

He wasn't pragmatic at all; he just listened to the customers, then translated their requests into what he thought would be the simplest (cheapest, fastest) thing to implement. While this sounds Lean, it really is a sure way to accumulate an immense mess of complexity, legacy and clutter in no time.

What we really needed was some design sessions to translate the customers' requests into something that fit the product and matched a vision (or to disregard them, if that fit is not found).


Customers are good at communicating there's a problem. They are bad at identifying a solution.


They are actually bad at both. They usually start by telling you an imagined solution to the problem they think they have. Then you need to pick that apart to figure out what their real problem is and convince them of a solution that will actually solve their issue.

Sales people, depending on their experience, are not a whole lot better. They'll happily sell solutions that don't yet exist for problems that the customer insists they have, and then confront their company with requirements that don't make sense. It closes the deal but sets everyone up for failure.


I wonder how many of these feature requests are because the client does not understand how the software should be used? It could be that it is not designed to solve their problem, and having purchased it, they are trying to get some utility from it. It would not be the first time someone has been forced to use a system that was not suitable.


This just shows how ass-backwards the whole industry is with its "standard practices", a lot of which I see repeated as advice in the thread. It goes like this:

1. Push a product (or these days, service) at people. Preferably a captive audience. In B2B, that may involve growing an "internal champion" at a customer company, preferably a manager that doesn't need the software themselves, that can be bamboozled by your sales people.

2. Don't explain how the product works or what it does to your sales people. That's techie stuff. Also, once you give your sales people the license to bullshit, it literally doesn't matter what the product does[0].

3. Don't ask your users what they want or like. They don't know what they need. Rely on thorough surveillance instead, as a good "data-driven" shop you are.

4. If some users manage to somehow deliver feedback to you, ignore it. Users don't know what they want, and can't articulate solutions.

5. If the client does not understand how the software should be used, that again means they just don't know what they want. Interfaces are designed to be intuitive. No training is ever necessary. Software manuals? That's so 1990.

6. The so-called "power users" are making point 5 harder. Ignore them with extreme prejudice. After all, them being more productive doesn't earn us more money.

It's a self-reinforcing pattern of industry-wide self-delusion. No wonder so many non-tech people hate technology.

On the "XY problem" someone mentioned in this thread: one of the biggest problems on StackOverflow and other forums, and previously on IRC, is that quite often, the person asking for X actually wants to know X, not the Y, and they especially don't want to have their sanity or competence questioned.

--

[0] - Per HG Frankfurt, bullshit is when you don't care if what you say is true or false, as the reality is orthogonal to the goal you want to accomplish.


See this comment in the thread for more of the same attitude.

https://news.ycombinator.com/item?id=28704103


No, it's a failure to get the right people involved at the right time. I'm currently a CTO and I also do product management. So whenever this stuff goes wrong, it's actually both my problem and my fault.

My solution is to listen to sales and get involved early in the sales pipeline, to figure out what it is that our customers need and manage our roadmap accordingly. After the deal is closed, it's too late to do anything. Two things I've noticed with this are that sales people sell what you don't have and don't sell what you do have. Whenever that happens, either you have the wrong product or you need to get your sales people up to speed with what the product is actually about. The worst is having features that would solve a problem that sales is simply not aware of, or misunderstands, and therefore isn't selling to the companies they talk to.

The key thing is involving the right people throughout the sales and requirements process, closing deals that align with the roadmap and product, and ensuring that near-future product work aligns with what our sales people need to close deals.

When talking to customers, it's important to realize that the person purchasing is typically not going to be the person using the software. So there's a big risk of a "he said that she said that he thinks that's what our users need", summarized by the sales person as "the customer needs X, when can you deliver it?" Most SaaS products sell a promise to IT managers who don't actually use the stuff directly, but instead manage people that manage people that actually have to use the software. With sales in between development and all that, that's four levels of indirection.

Breaking through that and getting to the core of the issues is important. Having people in the room who understand the business and technical constraints, as well as the actual domain and users, is key.


One of my engineering bosses once said that the scariest thing in the office is to pass by a sales guy on a phone call who is vigorously nodding his head.


I'm in this exact situation. We run a niche software product in the telecoms industry, and we have one major customer who is also the most opinionated on how it should work. They keep sending in feature requests (for the old, poorly written application), and 'we' (i.e. the boss) keep saying yes. I'm still here and there working on features that we apparently agreed to build for them over two years ago. At least the boss is learning to say no more often.


If it's in the enterprise sales phase then sales will just say "yes, no problem". Ideally they will talk with engineering, but...


To be totally fair to sales people, it's very, very hard to say "no" when an enterprise customer is dangling a contract in front of your nose that means your company keeps the lights on for another year. But enterprise is really something you should look at if you really understand the niche, have lots of contacts, and plenty of patience and runway.


>Their feature requests are often a proxy for something else they aren't able to articulate.

Our usual answer for this is: What are you trying to do?™


Agree wholeheartedly on both points. My product owner needs to understand this.

One of the difficulties is the users don't really know what's possible, so they'll come up with ideas based on the current solution. Often if you listen to them you'll realise they're actually battling against the current solution anyway and adding more features won't help with that.


> not to blindly just give customers what they ask for, but to figure out what they really need

Classically called an xy problem: https://xyproblem.info/


I once spent a week working on a fairly complex feature for a product that the users were demanding almost daily updates on. I delivered it and, by coincidence, happened to be where the users were a couple of weeks later. I stopped by to say hello and saw that the user who had been asking for the feature every day was still using the product the "old way". I asked why, and he didn't seem to follow what I meant, so I reached over and showed him the feature that he had been demanding I add. He said, "oh, wow, that's awesome, that's going to be such a huge timesaver for us!"


This is why changelogs and release notes are not optional, nor to be tucked away in a dark corner. And users who request features need to be notified (actively, specifically) that something they requested or a bug they hit was resolved or addressed.

I know, that's a lot of work. But what's the point of fixing things if you never tell anyone it's fixed?


That's great advice, but it's untrue that nobody reads changelogs.

I read them religiously for software I'm passionate about as a user, and when I solo'd a product for a decade I was regularly surprised how familiar some of my customers were with mine (most often when they had dedicated IT resources or were similarly small boutique outfits themselves). I've also managed large, custom enterprise projects where subject matter experts on the other end relied on them (in addition to other channels of communication).

What drives me nuts is how some companies have decided to water them down, e.g. Windows Update's long list of "security related update" entries where you have to google KBs to find out what they are, or another company that just always puts in "various fixes and improvements".


I've started collecting screenshots of app updates that are simply "bug fixes and performance improvements" because it's such a joke at this point. What bugs? How much performance improvement? Did you also nuke one of my favorite features while you were at it, or require other actions of me that I wasn't intending to perform? Who knows? Guess I'll roll the dice once again...


When I read release notes like that I immediately assume malice. I don't believe that the information is not available or that the devs are too lazy to bullet point _at least_ the major changes or vulnerabilities fixed. I think this is generally an excuse to slip in tracking/etc without resistance.


Nah, it seems more likely that management saw the list of bugs and said "don't admit to those!", and for the performance improvements, "don't say we used to use a stupid O(n^3) algorithm!"


From my experience, it's probably tens of bugs, each of which happened to less than 0.01% of customers and requires an overly detailed explanation for anyone who didn't have first-hand experience with it.


yeah, it looks like windows is the virus


Changelogs and release notes don't get read. Unless you're sending personal emails to the users who requested the feature, the experience in the essay here is the only way: make it automatic or turned on by default.

And if it's not good enough to be automatic or enabled by default, then keep iterating until it is.


Changelogs are for developers, not users.

They're a bad way to communicate specifics because they're literally a long list of changes, most of which are boring/irrelevant to most people.

If you're releasing new features you need more ceremony. Maybe an email with targeted specifics, or at least a section on your website which highlights big important new features.


I wish software did something more like video games. Well-designed video games will notice that there's a skill you're not using when you should be and show hints on-screen about using it, e.g. press L1 to change weapons, R2 to take cover.
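
(A rough sketch of that nudge pattern transplanted to an app: if someone keeps doing a task the slow way while a relevant feature sits unused, surface one hint and then stay quiet. The thresholds, names, and keybinding below are invented.)

    public class HintNudge {
        private int manualRenames = 0;
        private boolean usedBatchRename = false;
        private boolean hintShown = false;

        void onManualRename() {
            manualRenames++;
            if (manualRenames >= 5 && !usedBatchRename && !hintShown) {
                hintShown = true; // show once, then get out of the way
                System.out.println("Tip: select several files and press F2 to rename them together.");
            }
        }

        void onBatchRename() { usedBatchRename = true; }

        public static void main(String[] args) {
            HintNudge nudge = new HintNudge();
            for (int i = 0; i < 6; i++) nudge.onManualRename();
        }
    }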


I’m not so sure. It’s mildly irritating getting numerous pop ups of “hey, look over here at this thing we added” even when permanently dismissible. Tracking a user and letting them know that we know they haven’t ever used feature X is likely to just give them the creeps. In video games you are in “play” mode, and very aware you’re being tracked, judged, awarded points etc. In software that’s often less clear and (from certain behemoths) deliberately so. You’re also likely trying to achieve a task, rather than have fun. I personally would find it irritating if not downright invasive if I hadn’t explicitly agreed to something clearer and more explicit than a cookie warning.


We've sent release notes out for a while, but no one ever read them. Now we do showcases of the features, so people have no choice but to see what's coming.


The wonders of digital transformation. One of my biggest insights here was that digital transformation is not only the technical upgrade or simplicity of implementation, but rather the cultural implementation, i.e., process re-development and re-training.


The main things getting transformed in “digital transformation” are the people.


You did the customer success role.


How subtle was the update message letting the users know about the new feature?


I mean, they stopped demanding the feature every day. Clearly they were aware that something changed.


I've had something similar happen to me, and quickly learned not to assume that your customer POC will effectively relay information to the end users in their organization. They were pinging us constantly about a new feature because it was on their list of things to ask about. Once we told them it was done, they crossed it off their list and we never heard about it again, until months later we discovered through a chance interaction that none of the product's users in the organization even knew we had added the feature.


The PM, or whatever nag, got the answer he wanted and was able to check the box.

Nobody was yelling at anyone to actually use it.


Not at all.


Yeah sounds like the lesson here is that nobody reads the update messages.


If quantum scientists thought the double-slit experiment was exciting, they should study IT user preferences and be amazed how, depending on the mode of communication/observation, customers will both scrutinize every letter you write while simultaneously not reading it at all, will know every wrong pixel of your UI but not know where the button for "Thing" (labeled "Thing") is, and how anything you make is somehow the best worst average thing they've ever seen.

If you're ever user-facing, it's a harsh but quick lesson that you can't really count on any users to follow any real pattern. Different people latch onto different things, with adamant proponents and opponents of everything and a huge swath of grey shades in between. At a certain point you just need to decide when the "grey" is no longer grey for you on a particular subject, and start to optimize for the part that cares while checking why the grey part isn't as interested.

A lot of the conversations I have with clients are with those in the grey territory, where they don't quite know what they want but they know they want it. Oftentimes they're really looking for a reference architecture and something to compare their current processes/workflows to, and this is usually pretty simple to work with. Those who are proponents of a feature N are typically vocal and picky, but the feedback tends to be pretty good and the guidance clear on how they envision it, since there's often a clear workflow on their side that they want to optimize. Opponents of feature N typically bring out the edge cases and demand a solution, and are equally, if not more, vocal than the proponents.

It's really a tricky thing, and it's never really clear which is going to be "best" for users/the dev team.


It is a special case of Morningstar's law: you can't tell people anything. http://habitatchronicles.com/2004/04/you-cant-tell-people-an...


This comment made my day.

Users not knowing what they want but knowing that they want it is a useful phrase.


Couldn't have anything to do with how app update screens share the exact ergonomics of subscribe-to-the-newsletter nags and other ad popups.


To clarify, was he not aware that the new feature had already been released? Or did he think the "old way" was the new feature (and that he'd simply been blocked before), thus suggesting that the urgency of his ticket was unnecessary?


Well, I definitely told them that the feature they had asked for had been finished.


One of many things I learned from writing software just for myself is that I'm just about as bad as anybody at figuring out what I really want, even when I'm talking to myself. I've even had one or two cases where I ignored a feature I asked myself for, and much later realized: Hey, I could actually use that. The worst is asking myself, "How many times do I have to ask for this before you quit procrastinating and just get it done???" I'm not sure which of me is the bigger jerk.


Sometimes it's wise to ignore what you want. So many times I started a project only to end up with 3 libraries and a custom build system. It's like I can't help myself. It got to the point I was too embarrassed to publish these things on GitHub.


Oh no. I was just tweeting about my new build system. Already got 3 libraries.


Oh hey, it's me. I've done some painstaking work to add a feature (hell, even just an Excel macro that at the time seemed useful) and realized like 9 months later that I've used the feature a whopping 3 times.


Try to enjoy the journey, not just the destination.


The challenge there is when people start using your personal project. Yes, you remain in control and you remain the primary user. But… now regression and features become a much greater concern, because the last thing you want to do is harm other people, even if they are only a secondary audience. As the project grows this becomes the most pressing concern, regardless of who the primary user is.

That raises the question: why bother growing the application beyond the smallest set of explicit use cases? When you are the primary user AND the only use environment or input samples are written by you, life is simple. The moment you must analyze something not written by you, even if this usage is only for you, the problem cases blossom. If your application refuses to solve those cases, then you need to add more features or use a different application.

Those two scenarios seem to feed each other, which becomes evident in traffic or usage numbers as you pull your hair out keeping up with some hobby application.


So I want to share a story about a user asking for a feature and then not using it.

I run an email forwarding service (https://hanami.run); basically you add your domains in and add some records.

We had this one heavy user who has like hundreds of domains. Our UI isn't designed for that--who has hundreds of domains? So they approached us and asked for a way to organize those domains into a hierarchical structure.

All good.

They paid for our highest tier ($30 per month), so we prioritized the request and worked on it.

2 days later, that same user downgraded to the lowest plan and deleted all of their hundreds of domains...

That complicated feature remains unused to this day...


If I'm being honest, that sounds like a net win for you and your company. Even if they aren't still around, now you've got a better and more attractive interface!


Yes but the question is - what do they NOT have because of this. Sounds like they bumped something more valuable to get this done.


And how much does it cost to maintain an unused, complicated feature going forward?


This. Now every new feature in that area of code is more complicated than it could be, for no benefit.


It's a pretty cool tool. You just got a new customer :)


I was part of a team that developed a workflow management system for managing a telecom's infrastructure. We were really quite proud of it; it pushed our boundaries during development and we learned a lot of new skills. It was implemented exactly as per the client's spec.

It was also quite expensive, my employer billed them €300,000 for the work.

They accepted it.... and then didn't use it.

There was some use in the first few weeks, but then it petered off. Despite this, they kept paying us to manage the hosting and support of the system.

After a few months our manager approached them to ask if everything was okay. Their response was to say they thought it was an amazing system, but now they realised it was way over-specced and using it was more of a burden than they expected it to be.

I suppose it was the software development equivalent of ordering the biggest thing on the menu and then regretting it.


We buy books that we don’t read, gym membership that we never use, clothes we never wear… I guess corporations are no different?


I'd like a well researched blog post on this, outlining when/where this does or doesn't happen.

So if a savvy writer is reading my comment, here is my hope in that it'll be picked up! :)


I once had to develop a full application involving really hard problem solving in a really complex domain that I had a hard time understanding. This was for a big national phone operator.

We delivered it under a lot of pressure from the sales team (I was in a shitty company, with few employees) and the contract was worth multiple millions of euros. That was literally one of our 2 or 3 customers.

We delivered. One day we eventually got paid (our customer was well known in the industry for paying only when forced to--yeah, shitty customer too).

The project is finished, customer is happy with what we demoed and delivered.

Then maybe a year later, we decided to add a feature to this product to be able to sell it to another client. I got to pull and build this damn thing. Nothing worked. Like, nothing. The login form was broken.

Then I remembered. A full year and not a single support ticket. The company owner "joking" about how our sales team used alcohol, parties and embarrassing photos to make their customers buy our software. The insane amount of pressure we had to deliver this technically hard project.

I left this company the day this popped into my head.


That's a crazy story. My guess would have been, not that the product never worked, but that the delivered version was hacked together in the last 24 hours before delivery and never made it into source control. You know the technical details - do you think something like that could have happened, or really no one tried to use the product even once?


No, I don't think it was changed. I just think that the people who bought the tool weren't the engineers who needed it, and that for some reason the engineers never ended up using it, maybe because they didn't even know about the existence of the tool, or because they felt it was useless to them. I don't even remember them being involved in the project, but once again, shitty company: I had no direct contact with the end users. Everything was proxied through their buying dept, our sales dept, and our product "managers" (who really liked their job of not being anything more than well-paid proxies between sales and programmers). With this setup, I'm pretty sure our product, as complex as it was, was probably useless to them.

But meh, I left this company and I now work at a nicer company where I’m truly motivated knowing our end users enjoy using our products.


>The company owner "joking" about how our sales team used alcohol, parties and embarrassing photos to make their customers buy our software.

That was an episode of House of Lies, where the consultants sell their idea only through blackmail.


Great article, but I disagree with the premise.

Code review is too late for most automated analysis (at the level of "parameter isn't validated", as seen in the screenshot); it should ideally be done as a compiler/lint check in the IDE, and at worst as a git pre-commit hook.

In most cases it's not worth sending out a code review if there is automated feedback which can and should be addressed before a human sees it. It streamlines reviews for reviewers, and gives new contributors a much better experience, as they have less feedback to address and more confidence that their code is correct.


One automated thing I've been really wanting to add to our code review process is copied-code detection, where a piece of code is very similar to code somewhere else in the product (though likely not part of the current review).

Obviously there are cases where this is a code smell, but plenty of times it isn't. It's not something you would want to block shipping. But if you do want to ship code like this, you should do so with the reviewer(s) fully seeing where it came from, to make sure everyone agrees that's the best way.

No one has implemented a bot for us that would add such comments, but I do think it would be a huge win.
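
(For flavor, one naive way such a bot could work: compare sliding windows of N consecutive lines and flag any window seen twice. A toy sketch only; real tools like SonarQube or IntelliJ's inspections are far more sophisticated.)

    import java.util.*;

    public class DupFinder {
        public static void main(String[] args) {
            List<String> lines = List.of(
                "int total = 0;", "for (Item i : items) {", "total += i.price();", "}",
                "// ... elsewhere in the codebase ...",
                "int total = 0;", "for (Item i : items) {", "total += i.price();", "}");
            int window = 4; // lines per comparison block
            Map<String, Integer> seen = new HashMap<>();
            for (int i = 0; i + window <= lines.size(); i++) {
                String key = String.join("\n", lines.subList(i, i + window));
                Integer first = seen.putIfAbsent(key, i);
                if (first != null)
                    System.out.println("lines " + i + ".." + (i + window - 1)
                            + " duplicate lines " + first + ".." + (first + window - 1));
            }
        }
    }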


SonarQube has, among other things, code duplication detection, where it detects very similar, but slightly different, blocks of code.

It's possible to wire things up so that it runs when you open a merge request and notifies you of the results via email.

It's not a silver bullet though.

It also has a linter plugin for VS Code (and possibly other editors) that DID NOT work well, at least the last time I checked, which was before February this year. Just thought you should know, in case you decided to just use the linter.

https://www.sonarqube.org/


It’s not automated but IntelliJ does this.


There's nothing more frustrating than a stupid machine telling me it knows better when it doesn't. I used to work on a team where the build defaulted to failing if something wasn't used, and debugging was a fucking nightmare, because the moment you comment out a block of code there's a never-ending cascade of warnings turned errors (it was TypeScript, so it leaked all the way back to the module definition). I had to piss away some time writing a script to turn a bunch of the rules off and then remember to re-enable them later, or I'd break the build by breaking someone's OCD.


Well, don't turn warnings into errors. Doubly so in your development environment. That's not a good reason to avoid running a linter while developing.

But of course, if the decision was out of your hands: people who do that are usually the same ones who enable all linter rules. Neither decision makes for good development practice.


Had an issue like that. The workaround snippet that fixed it was aptly named stfu.js by the author.


I went with the commands:

> implaying

and

> srsbsns

to toggle it off/on.


Your comment is more true today than it was back then, not that I disagree with you posting it. Back then linters were a twinkle in the hopeful eye of software devs. Now they're battle hardened.


Back then was 2018. Lint dates from the seventies. I doubt it was the first static analysis tool, either.


You're right. I read 2008 in the post and thought the writer was talking about a time shortly after then. By the late 2010s linters were prevalent.


It seems like autofixing linters got popular about 10 years ago? That was when it hit my radar at least, but before that I wasn't really a software dev. Whenever it happened that was a quantum jump in usability.


It's a 2018 paper, but definitely: it feels like something that's come to the forefront in the past 6/7 years and I don't think people are leveraging it enough.

It's so nice to be able to write a compiler/lint rule with a quick fix and then see it catch both my own mistakes, and potentially remove a concern/checkbox from the code review template


This was a nice article, but I'm not sure the title is representative of its contents. I found the first bullet in the author's summary (keep your users in the loop, always; do not go build in isolation) and the second-to-last (a user's workflow is everything) to be more enlightening and representative.

Internal tools definitely require a close relationship with some set of users to really validate what’s being worked on. It’s just too easy for teams to make unused internal tools. The feedback loop is generally just not there initially. You really have to go and find some users. In this way it’s similar to pre-seed stage startup life.


This happened to me so many times.

Like when multiple users asked me to add monitor brightness dimming when the sky is covered with clouds in the Location Mode of my app for controlling monitors: Lunar (https://lunar.fyi)

Turns out cloud cover reported by weather APIs isn't a good predictor of the ambient light. The feature was so useless (and expensive because weather APIs are not cheap) that it never made it into a release.

What the users really wanted was an external ambient light sensor that they could place where they're working and have the app adjust the monitor based on that: https://lunar.fyi/sensor

Even with the high entry barrier (buy sensor parts, assemble, flash firmware) it was still more accurate and used by more people than the location based approach.


> expensive because weather APIs are not cheap

Just FYI, you can get free hourly (or 30/15 minute, I forget) cloud coverage data from the likes of EUMETSAT (and equivalent US/asian agencies).


Thanks for the info! I'll keep that in mind for future projects.

Since then I also found Open Meteo (https://open-meteo.com/en/docs) to be a nice alternative for smaller projects.
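
(For the curious, fetching hourly cloud cover from Open Meteo is a one-request affair with the JDK's HttpClient. The "cloudcover" hourly variable name here is from memory of their docs and should be treated as an assumption; check open-meteo.com before relying on it.)

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CloudCover {
        public static void main(String[] args) throws Exception {
            // Berlin coordinates, as in the Open Meteo examples; no API key needed for light use
            String url = "https://api.open-meteo.com/v1/forecast"
                    + "?latitude=52.52&longitude=13.41&hourly=cloudcover";
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.body()); // JSON containing an hourly cloud cover array
        }
    }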


I once spent 6+ month working on a feature that was to be one of the flagship features of our next major release. We'd talked to several major customers about it and gotten lots of positive feedback. Once released, virtually no one used it and no one cared.

On the flip side the by far most impactful feature I've ever added to any software in any point in my career came a couple of years later when working at the same company. It was inspired by an offhand comment by someone in a meeting, and it wasn't anything anyone had ever asked for or even really solving a problem customers claimed to be having. I'd 'coded' the feature on a piece of paper by the end of the meeting and implemented it 2 days later, mostly just to see if it would work. I quietly added it in a minor point release with only a 1 line mention in the update doc.

That feature went on to become an everyday part of most of our customers workflow, and transform how they used and interacted with our app. Nobody, least of all me, had any idea how useful our customers would find that feature.


I find this fascinating. Could you lay out what that feature did, generally?


It was an app for viewing, exploring and analyzing certain large, complex, heterogeneous datasets. Both features were, at their core, essentially an additional novel way to view and explore that data. It's just that the second one really clicked with a lot of people in ways no one predicted.

If I were to draw some conclusions, it would probably be that the successful feature made a common task a little bit easier for a lot of people in basically all use cases, while the 'failed' feature attempted to add an entirely new type of analysis/exploration to the application that, it turned out, people didn't need as much as they thought they might.


When you ask people to imagine a feature, they're going to imagine the absolute perfect version of the feature, for them personally. A feature that requires no effort to set up or use, provides exactly the information or functionality they want, and never gets in the way of anything else.

Everyone takes for granted how smooth their own workflow is to them, both because they're already used to it, and because they sunk effort into it to make it smooth (via learning or planning), which has probably been forgotten. So, people just take for granted that new features will maintain that acquired smoothness --- unfortunately, they usually don't.

As the author found out, most of the work in building features is often not in the "raw functionality" of the feature, but rather in making something that performs its functions while also not costing the user significant extra mental or physical effort to use.


Since everyone is sharing their stories.

Back in our 2000s startup, our AOLServer-like application server had something similar to what Rails' Active Record brought to the world, just in Tcl.

Basically, each RDBMS driver had its own low-level Tcl bindings, and then the infrastructure code would wrap them up in the upper layers.
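
(Translated to Java for illustration, the original being Tcl: a narrow driver interface underneath, with the record layer coded only against it, so supporting a new RDBMS means adding one implementation. All names here are invented.)

    import java.util.List;
    import java.util.Map;

    interface DbDriver {
        List<Map<String, Object>> query(String sql, Object... params);
    }

    class OracleDriver implements DbDriver {
        public List<Map<String, Object>> query(String sql, Object... params) {
            // low-level Oracle bindings would live here
            return List.of();
        }
    }

    // The upper layer never sees a specific database, only DbDriver.
    public class RecordLayer {
        private final DbDriver db;
        RecordLayer(DbDriver db) { this.db = db; }

        List<Map<String, Object>> findAll(String table) {
            return db.query("SELECT * FROM " + table);
        }

        public static void main(String[] args) {
            System.out.println(new RecordLayer(new OracleDriver()).findAll("customers"));
        }
    }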

So sales got a very important customer. There was a caveat though: they were using Sybase SQL Server, which we didn't have any driver support for.

Given that we had to deliver no matter what, a war-room project was quickly started to add support for Sybase SQL Server across all layers of the product: drivers, model generation, SQL translation, the whole package.

In the meantime, the customer got talked into using Oracle while we were putting the finishing touches on the delivery.

When they finally got the new version, with the Sybase SQL Server support they had requested as a hard requirement, they decided to just keep using Oracle instead.


Did you at least have use for the Sybase support at some point?


Not really.

I guess the only benefit was that in later releases, we could share the infrastructure with Microsoft SQL Server.

For those that don't know, Microsoft SQL Server grew out of a partnership with Sybase, and the early versions were quite alike; the Microsoft version just had better GUI tooling.


We found (in the world of building Hyperscan, a high-performance regular expression matching library) that asking users what features they wanted was a disaster.

You have to ask them what they want to achieve. Their suggestions as to how to do it were almost always terrible: partly, they don't have the domain knowledge to know what's going to be simple and practical vs complex and nasty.

Also, they usually have a comparatively clear picture of what they want to achieve, but their notions of how best to break that into "your problem (library)" vs "our problem (app)" are often very naive - you wind up owning way too much of the problem (usually), or occasionally going in the other direction and not really getting enough context information into your library.


This is relevant perhaps 10% of the time. The other 90%, the devs have absolutely no idea how to actually perform the task the user of the software is trying to do, don't talk to customers, don't dogfood it, often don't even understand the industry, and the result is a horrible UI which forces people to severely adjust their existing processes to fit the software. Then of course the devs complain about "the stupid users." To this day, 99% of address forms expect me to find my state in a drop-down list of 50 entries plus DC, Puerto Rico, Guam, etc., instead of simply letting me type in the two-letter abbreviation.


Not to mention international address forms which expect you to scroll all the way to U for Brits and Americans. I understand that the alphabetical listing is the most internally consistent way to display to users, but either putting duplicate entries for the most popular countries at the top of the list or providing a text field that suggests the normalized country as you type ought to be the go-to, not a purely alphabetical dropdown menu.
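
(A sketch of that type-ahead normalization for US states; the same shape works for countries. The tiny lookup table is illustrative, not complete.)

    import java.util.Map;
    import java.util.Optional;

    public class StateField {
        static final Map<String, String> STATES = Map.of(
            "AL", "Alabama", "AK", "Alaska", "TN", "Tennessee", "TX", "Texas");

        // Accept the two-letter code directly, else prefix-match the full name.
        static Optional<String> normalize(String input) {
            String q = input.strip();
            if (STATES.containsKey(q.toUpperCase())) return Optional.of(q.toUpperCase());
            return STATES.entrySet().stream()
                    .filter(e -> e.getValue().toLowerCase().startsWith(q.toLowerCase()))
                    .map(Map.Entry::getKey)
                    .findFirst(); // a real UI would show every match as a suggestion
        }

        public static void main(String[] args) {
            System.out.println(normalize("tx"));   // Optional[TX]
            System.out.println(normalize("Tenn")); // Optional[TN]
        }
    }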


Warning: vague, hand-wavy half-memory incoming.

One place I worked at had a tool, I forget the name, that could record and report on which features and parts of the site got used and how much. At one point we were discussing a particular bug we had found and how much effort it would take to fix. I asked my co-worker, who happened to have the usage reports at her fingertips (because that was her thing), how much the feature was used. Just wanted to prioritize it. Zero. Users had never touched the screen in the entire time since it was rolled out. I expressed my sympathy for the developers who worked on it, because they were told it must be done, but it was an easy call to throw the bug into the WONTFIX pile.


Slightly frustrated to read this, because this is user research 101! It's really impressive when someone independently discovers practices from an outside discipline, just by observation and inference. However, somebody could have saved this poor guy a lot of frustration by not forcing him to develop a product design methodology from first principles.


They were an intern, but I think it leads to a bigger point I was thinking about as I was reading this:

In every org I’ve been in, we’ve had interns do these kinds of things: build big intrusive things that suck and no one uses. The problem obviously isn’t the interns. It’s the projects we give them, which are meant to be basically “throw-awayable”, and the complete lack of actual guidance and oversight.

Why are tech companies so bad at actually mentoring interns? They _should_ have had someone teaching them about user experience 101, and instead they completely let them flounder. How are we supposed to continue to grow and evolve as an industry if we make every new class start from scratch?


Is there a book(s) you could recommend for product design methodology?


For this specific case (product discovery interviews) take a look at Interviewing Users by Steve Portigal, and Validating Product Ideas by Tomer Sharon.


I found "Don't Make Me Think" by Steve Krug to be a helpful guide. As an engineer who was shy to approach product design, I felt the language really helped demystify things.


I once stayed at work for 36 hours straight to finish assembling a batch of circuit boards for a customer's prototype. I dropped them off at FedEx for next-day AM shipping right before they closed on a Saturday and felt pretty good to have just barely made the deadline!!

Well, Monday morning the customer assembles the first prototype and it catches fire. Turns out there were lots of other problems too, and all my work was for nothing.

I learned a lot from the experience!


> all my work was for nothing

No way. Not for nothing! The customer did not have the added problem of late circuit boards. Even if they could not build their prototype, you did good.


Also, who doesn't like to gather around a nice flaming circuit board on a Monday morning to warm everybody up and enjoy a coffee?


Sometimes I wonder if software is becoming worse because logging is becoming better. We have so much data about the problems users are having, the feedback they're providing, etc. Perhaps following the mass of people is the root cause of software's constant, ever-worsening UI changes.


Jira takes the biscuit with their text editor. Always changing and hard to make sense of. Recently ctrl-enter and enter actions were swapped!


My favorite thing about their text editor is that it will delete all of your changes with no option for recovery if you accidentally hit the escape key. I also like how this has been an open issue on Jira for many years, but they refuse to change it for some reason (or at least that was the state of it when I last checked, about a year ago; thankfully, I don't have to use any Atlassian products anymore).


I HATE that! Too many times my vim muscle memory has cancelled out of a long comment or issue description :(


The funner one is unique to enterprise developers. Begged to add a feature to the enterprise software, the development team puts in a hell of a lot of hours to get this time-sensitive feature into production. Then, out of nowhere, a manager of some import informs everyone that they cannot use that feature as it's "against best practices" or some other language to that effect. Bonus points if said manager was on the email chain about the feature and its progress.


What do you do at this point? People (including me) complain about meetings, saying "this could have been an email." But I believe some people just don't read email/chat/Jira comments. Maybe they're busy or just don't care at all, who knows. How else would you get your point across?


I think people don't really follow meetings either. We ended up going back to the transcripts a ton to prove topics had been discussed.

There is no magic bullet. If someone has the power to make decisions, you are always at risk of having them change course at any point in time. The only thing you can do is deeply convince them that it's in their interest for you to succeed.


“Some people say, "Give the customers what they want." But that's not my approach. Our job is to figure out what they're going to want before they do. I think Henry Ford once said, "If I'd asked customers what they wanted, they would have told me, 'A faster horse!'" People don't know what they want until you show it to them. That's why I never rely on market research. Our task is to read things that are not yet on the page.”


This runs pretty much opposite to the lessons the author took from their experience ("Keep your users in the loop, always. Do not go build in isolation.", "If you make assumptions about your users, they will find a way to surprise you.")

I kind of feel your view is a variation on "if you build it, they will come", which is true in some cases and will yield spectacular failures in a ton of other cases.


It's a Steve Jobs quote. Sorry, I should have noted that, but it's so well known I guess I didn't think to. I think it's a little of both: too much of the Jobs approach can lead you down errant paths, but if you never anticipate what the customer wants, you could miss great opportunities.


I think what Steve Jobs calls "market research" is mostly customer panels, polls and actual straightforward "what do you want" questions to users.

Apple did extensive research when designing their products, just not the naive/lazy kind that would be offered if they went to an established marketing firm.

This bias is also what bit them when they designed the trashcan Mac Pro, and in their subsequent coming to terms with needing a real Mac Pro at all.


Then you have Slack which forces everyone to use features nobody ever asked for.

AWS which ignores what people ask for until it happens to align with something AWS wants.

Microsoft which only implements what their double-platinum partner program people ask for and forces everyone else to use that.

Oracle which gives people what they want and sues everyone who gets it from someplace else.

Google gives you what their algorithms determine you want. Like community support for all their products. You wanted that. They can prove it mathematically.


It's even worse where I work: I'm coding whole apps that are never used. My boss has a continuous stream of ideas that are always urgent and important but end up abandoned once he has toyed with them a few times.

The worst one is this app where employees religiously enter data (if the server is down for even 30s, I receive logs from them about not being able to input data), but nobody uses the data (I check regularly).

Before I cleaned it up, my project folder contained about 250 Visual Studio solutions. Only 20 apps are regularly used. I have come to "sense" which projects I should put serious effort into and which ones I can just do the minimum on.


As the author eventually understands: users almost always want the features they ask for, but they may not want the interpretation of them you've created.

I recently started working in design after being a developer for more than a decade. Back then, planning how things should work for users felt frivolous compared to coding. Beyond that, getting working code in the editor just felt so good that I never wanted to put it off. These are the excuses I'd make to avoid really thinking about the users:

  - I can clean up the interface later

  - I'll figure out what features I can support when I figure out how it's going to work

  - I've got a good intuitive sense of what would be most useful and how people want things to work

Unless the tech requirements are extreme, knowing what features are helpful and how users will use them should inform development, not the other way around. Also, overestimating your understanding of how people want something to work almost always leads to suboptimal results. Unfortunately, this mindset led to clunky, miserable interfaces that many people rejected. Worse, core users would learn to love them because they had no choice, and they'd develop bad-UI Stockholm syndrome. Nothing carves a lousy user experience into stone for all future users like reliable core users demanding nothing changes.

Obviously, solo intern projects are an edge case, but this case study perfectly illustrates how necessary user-focused design is in software projects. All the better if you can find someone who specializes in it.

UX expertise brings:

  - deep experience reasoning about the way people use things 

  - knowledge of many workflows, working styles, and personalities

  - understanding of how much deviation users will tolerate

  - familiarity with user research techniques and their shortcomings

  - ability to distinguish between research signal and noise

  - knowing where to dig deeper or seek more data

  - understanding what needs to be prototyped vs. mocked up vs. textually described for tests, and ideally the ability to do all three

Having that perspective built into this project from the beginning would have completely changed its trajectory. Users would be less irritated, and the developer could have spent the wasted rewrite time making a more valuable end product.


Nice post. I liked this takeaway: "Users say things for a reason, but there may be more to it than face value."

There's often more to what a human says than face value. The key is to always be asking questions: always ask why, multiple times. Ask questions until you feel like you're on the edge of pissing someone off.

Often, a good bit of time up front gets you to a few outcomes that save you a massive headache:

1) You understand the feature so well that you're ready to go and know it will be used.

2) The REAL ask was something totally different, and now you know what you need to do (or not do, in some cases; sometimes it's training, or using the product properly/as intended, etc.).

You're also giving the requester the space to fully explain themselves, and sometimes they can talk themselves round as well. At the end of the day, everyone's a winner when we ask more questions and listen.



A metaphor for this situation (building exactly what customers ask for) could be the car Homer Simpson designed ("The Homer").

https://simpsons.fandom.com/wiki/The_Homer

Not that all users are equal to Homer, but what they think they want and what they actually need are very different.

It's better to get more detail on the problem, then present a series of possible solutions to evaluate and iterate on.


Went down the rabbit hole from your link:

> > It is a parody/exaggeration of the Edsel, a similarly disastrous automobile designed by Ford Motor Company.

Points to https://en.m.wikipedia.org/wiki/Edsel:

It has a "Design controversies" section that's very relevant to today:

> Complaints also surfaced about the taillights on 1958-model Edsel station wagons. The lenses were boomerang-shaped and placed in a reverse fashion. At a distance, they appeared as arrows pointed in the opposite direction of the turn being made. When the left turn signal flashed, its arrow shape pointed right, and vice versa

https://en.m.wikipedia.org/wiki/Edsel#/media/File%3A1958_Eds...

Why is it relevant? Related thread from 3 days ago about blinkers with the exact same design flaw, but in 2021: https://news.ycombinator.com/item?id=28661282


Not just that. If the feature is delivered in an unusable or otherwise poor form, they won't use it even when they have it right there. Maybe it's not very obvious, or it's complicated, or too time-consuming. Design makes all the difference.

If you give them something they never needed or asked for, but it's so obvious and simple to use, they will use the hell out of it.


People will buy products because of features they will never use. Not building those features costs money.


This is particularly true if the “users” aren’t the ones who are signing the cheques. Sometimes shiny stuff that looks good in a product demo or on the trade show floor is what it takes to actually move units.


Case in point: 90% of the pickup trucks sold in the USA.


Or cars in general. So many huge SUVs with three rows of seating being driven by a single person. In other countries people would just use a moped.


"When users never user the features they asked for..."

At least they are users asking for a feature. I've lost count the number of times where a sales guy swears up and down that he'll close the deal if we put features X, Y, and Z. The C-Suite gets hot and bothered about closing and I get overruled. Stories for X, Y, and Z get written, backlogs reprioritized, features developed. Ta da! Then what happens?

"Customer" goes dark and we never hear from them again. ::laughcry::


You know what's worse? When you have no metrics and have NO IDEA whether users are actually using the features they asked for.

I realized this a while back when a feature we'd built on customer request had been in release for some time. It had a bug in it, and nobody ever told us about it, which told us nobody ever used it.

And the only way we could tell nobody used it is that missing bug report, because we collect no metrics. And yeah, I'm working on that with the company...
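
Even crude instrumentation beats nothing. A minimal sketch of per-feature usage logging (hypothetical names, not any particular analytics product):

  import json, time

  def track(feature, user_id, path="usage.log"):
      # Append one event per feature use; aggregate offline to spot dead features.
      event = {"feature": feature, "user": user_id, "ts": time.time()}
      with open(path, "a") as f:
          f.write(json.dumps(event) + "\n")

  track("csv_export", user_id="u123")  # call at each feature's entry point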


One pitfall I've seen user researchers fall into is that they are obsessed with recruiting a fresh sample of users for their ongoing rounds of user interviews.

This recruitment requirement discards much of the understanding of user needs, priorities, workflows, etc. that came from previous rounds of exploration and testing.

The real value comes when you gather a small but roughly representative sample of users and collaborate closely with them on the development of feature(s).

For example, I recently built a ‘User Advisory Board’ to guide the development of internal tools at a large organization (using software I developed for that purpose).

This subset of users was engaged throughout the development process, collaborating with designers and engineers along the way, resulting in a finished product that was closely aligned with user needs.

I don’t have data to back this up, but I got the sense users were more satisfied with the end result because they felt they had a hand in ‘co-creating’ it (aka ‘the IKEA effect’).

OP claims they did this early on, but my comment is pointing to a larger issue I’ve witnessed that may help those reading this to avoid the issue of users not adopting features they asked for…


I work mainly in BI. Data mangling, dashboards for the simple-minded, pretty reports, such things. I somewhat like the more informal atmosphere of SMBs, and normally I'm regarded as some strange kind of Wizard (of Oz, of course). Thank god, seldom do I have to face a Dorothy. More often cowardly lions and Tin Men. So, I more or less do what I want. And what's right, of course. That means I monitor user behaviour meticulously and remove unused features, without feedback, from time to time. In the past I always asked first, but that meant imposing a decision on them, and they don't like that. It awakens their bureaucratic white-collar sub-instinct. Some of them can't even remember that such features were requested by them, or were used one or two years ago. That's the timespan I use as a rule of thumb before I make a feature disappear. They can be outright indignant if I imply they don't remember what they did or knew.

I even had the chutzpah to suggest the feature again some time later. Never had a problem.


I'll have to jump in and point out that this is exactly the type of study that I feel academia nowadays is more or less forced to perform, but which at the same time has a questionable setup and limited impact.

Overall, the study was performed over a very limited time frame, with the last portion of the data analysis done through emailing back and forth. The research scope is limited to a 300+ sample, within an organization that in fact has 30,000+ engineers in total. And the tool was developed without much of the usual industry understanding, i.e., introducing a code analysis tool that requires users to explicitly invoke it (I think anyone at a typical senior level in a large corp knows that without management pushing, such a change is never going to work). The mentors of the author might be full-time researchers?

I am not really nitpicking or condescending. I am dismayed by the waste of time and effort on both sides.

I'll read the paper later. But I suspect it would basically be well-known facts in the industry...


"Don't underestimate engineering challenges that you only have an external view of."

I wish more people would heed this lesson, especially in HN comments.


Requesting features is like packing for a trip with a bottomless suitcase. You might wear those shoes, read that book, or use that racquet, or you might not -- costs you nothing to pack it, though.


I find it's usually best to release with the minimum necessary features first. Once users digest those, add the next set based on feedback. The more incremental, the better. But do keep likely future features in mind, such as leaving space in the toolbar, etc. Certain things are hard to retrofit after the fact.


I moved from engineering to product management precisely because I realized that the problem of "what should be built" is often harder than building the thing.

Asking users what features they want is pretty much not fair, because most people don't have the skills to think through and answer that question, nor is it their job to do it. It's like asking a novel reader what they'd like to see in the next chapter. It's a cop-out with a guaranteed suboptimal outcome.

Obviously product management is half art, half science, but in a nutshell: much better than asking an individual what features they want is asking them, especially at a senior level, what problems they have and what keeps them up at night. Getting an understanding of that and then thinking about how those problems/risks can be addressed by your system is a much better starting point.

The reason I say it's important to partner with your users at the senior level is the difference in perspective on the class of problems they want to tackle.

For example I used to work on a trading platform. If I went to my heaviest users and asked them what their problems were, they would probably say "I do 1000 trades a day, and I have to tweak each one manually - can you make that process faster" and maybe even have an idea of what that could look like. So let's say I did that and shaved a second off each trade handling so my user is happy cuz I saved them 16 minutes a day. Sounds like a job well done.

But if instead (or in addition) I talked to their boss, I might hear a very different story. E.g.: "We do 1000 trades here every day, but 900 of them are straightforward. I wish I could automate those so the trader can focus on the 100 complicated ones. But instead, he's so busy doing the 1000 trades that a lot of them get fucked up, especially the complex ones."

That's a major change of perspective, from optimizing an existing workflow to optimizing the entire operation. The implementation is quite different - one is a UI optimization and the other is establishing some sort of automation capability that can be extended to all my clients over time. One may be doable with the team you have, one may require standing up a new group, etc.

In the end, the user is happy (they get to do 10% of their previous load, but it's the high-value 10% they really want to focus on). The key point is that neither the user nor their boss could tell me exactly what to build, but the high-level perspective allowed me to understand their problem in a deep way and figure out a scalable solution for them and other clients.

Anyway that's just an example but yeah, your users can't tell you what to build and yeah you need a good product manager - or someone who can play that role well.


>Asking users what features they want is pretty much not fair - because most people don't have the skills to think through and answer that question, nor is it their job to do it.

100%. Usually early in the "project discussion" I try to boil it down to just ONE CORE PROBLEM they have.

For example: "If you had a magic wand that could generate ANY SOFTWARE to SOLVE ANY problem in your business (but restricted to ONE problem), what would that problem/solution be?"

From there it's usually easier to generate a feature list (requirements), as long as you solve the damn core problem :P


>It's like asking a novel reader what they'd like to see in the next chapter.

I love this analogy.


>> That's a major change of perspective, from optimizing an existing workflow to optimizing the entire operation.

Automating a workflow is the best optimization from the user's point of view.


Summary at the end.

Writing this story makes all of the problems and solutions sound so obvious. Hindsight...

Let's review a few of the lessons I learned:

Keep your users in the loop, always. Do not go build in isolation.

Don't underestimate engineering challenges that you only have an external view of.

Voice your concerns to your team regularly and often. They might be able to solve them far more quickly than you, or they might be able to identify what will turn into a major roadblock.

Be ready to pivot.

Users say things for a reason, but there may be more to it than face value.

If you make assumptions about your users, they will find a way to surprise you.

Features will go unused if they aren't easy to use, no matter how great they are.

A user's workflow is everything. (I keep relearning this lesson...)

Users are far more clever than you think.


At no point in the first half of the story (ran out of time to read the whole thing) did I see the author using their own code review tool!


I wrote an article some time ago on this exact mistake that product teams do. I called it the baby bias: https://www.linkedin.com/pulse/baby-bias-how-start-product-d...


I remember the time we urgently had to develop an application for our client. We had to work overtime to make the deadline. After 3 years we had to upgrade the server and found a bug that crashed the whole application. But apparently in 3 years nobody had even looked at it.


Total aside, but the chocolate milk and coffee concoction he describes in one of the pictures was also my drink of choice. I dubbed it the Microsoft Mocha, although I don’t think the name caught on.

It definitely fueled the pursuit of multiple good (and bad) ideas during my time in Redmond.


Never? Exaggeration much?

Also, features don't have to be used (much) in order to be useful. I don't do a Google Takeout on a daily basis, but that feature existing is still a signal I appreciate.


This usually happens with smartphones: a lot of consumers buy the latest model because of its features, but as time passes they tend to ignore them.


It's a great story - thanks for sharing. People don't want to offend other people, so normally you're not gonna get negative feedback when talking to someone face to face.

Also, for the code review tool, it's a sort of "consultant issue". I work on the floor and am probably overworked (and maybe stressed). Then a random person shows up with an "improvement", pretending to be of some use to me. Thanks, but you should ask me first what needs to be improved in my day-to-day job. That only causes negativity. Not sure if this was the case here, but close enough.


I was a consultant, and I used the rule of 3 whys: when a customer asks for a feature, get back to the root concern by asking "why?"



In short: don't let your users become your product managers. The real PMs exist for a reason.


> Turns out there were a few hiccups: everyone involved in that project we wanted to use had left the company, we had no idea how to build the source code, it had something like 500,000 lines of code, and the one person we found that knew anything about the project was 13 hours ahead and stopped replying to our emails.

When you don't talk to users this is what you end up with.


Building the feature right is hard. They might want it, but not the way you built it.


Thanks for this great write-up. I wish I could have read this 10 years ago :).


Users are terribly unreliable when it comes to suggesting feature adds.


A fine post and I'm sure a fine tool, but why is an internal linter tool the subject of a proper academic paper?

I can't imagine writing an academic paper for any of the projects of similar complexity and general interest that I've worked on.


You should consider it, then! Having concrete data about anything in peer-reviewed form is always valuable.


It seems like a waste of time when just a blog post could be 95% as helpful and be way faster and easier to publish.

You don't need peer review for everything.


I was hoping for a good old fashioned, negative developer rant.


The author is at least referring to those rants as "angry mails" he got.

I think these automated CI systems are a losing battle. They make the build chain too complex. There are too many corner cases that give false positives, unless you keep it dead simple, like enabling warnings-as-errors in gcc or something.
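
The dead-simple version can be as small as a gate like this (a sketch, assuming gcc is on the PATH and you invoke it once per source file):

  import subprocess, sys

  # Promote every warning to an error; any warning fails the check.
  result = subprocess.run(
      ["gcc", "-Wall", "-Wextra", "-Werror", "-c", sys.argv[1], "-o", "/dev/null"]
  )
  sys.exit(result.returncode)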


... and this is how we got WebUSB.


Microsoft didn't run automatic code analysis tools as part of their code review process until 2017? What on earth was wrong?


> What on earth was wrong?

The automatic code analysis tools. Sure, they existed, but they needed to fit into a workflow and get approval for use (which takes about ten years in enterprise these days).


They wrote a bunch of them (StyleCop, among others) for C#. And IntelliSense, which relies on being able to analyze code in all kinds of ways. I find it hard to believe they didn't.



