If you want developers to adopt your software, make it less risky (2017) (reifyworks.com)
117 points by mooreds 12 days ago | 43 comments




Underlying paper: http://se-naist.jp/pman3/pman3.cgi?DOWNLOAD=579

Survey N = 20, selected by cold-emailing open source developers.


Bad questions, too. Stating in a question that something is low risk does not make the respondent react as if the risk were small.

It can easily go the other way: people read "risk" and instantly assume it's large.


Off-the-cuff bullshit thesis:

Engineers read risk and assume it’s large.

Managers of engineers read risk and assume it’s small.

That bit in the middle where they are both right and wrong? ...


Wow, that's not very many.

Also biased as heck

Not at all. To be statistically significant, you typically need 1000 or more test cases / individuals.

That isn’t how statistical significance works at all:

https://www.surveysystem.com/sscalc.htm
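
For reference, calculators like that one implement the standard sample-size formula. A minimal sketch of it (TypeScript; the 95% confidence z = 1.96 and worst-case proportion p = 0.5 are conventional defaults, not from the thread):

    // n = z^2 * p * (1 - p) / e^2, rounded up (effectively infinite population).
    function sampleSize(marginOfError: number, z = 1.96, p = 0.5): number {
      return Math.ceil((z * z * p * (1 - p)) / (marginOfError * marginOfError));
    }
    console.log(sampleSize(0.05)); // 385 -- well under 1000
    console.log(sampleSize(0.03)); // 1068

A "need 1000+" rule of thumb only falls out of demanding roughly a 3% margin of error; it is not a universal threshold.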


It depends on what you measured. For example, if you ask 20 people a yes/no question and they all say yes, the chance that they would all agree by chance is 1 in 2^19.
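
As a quick sketch of that arithmetic (assuming each answer is an independent fair coin under the null hypothesis):

    // Chance that 20 independent yes/no answers all agree: all yes or all no.
    const pAllAgree = 2 * Math.pow(0.5, 20); // = 2^-19
    console.log(pAllAgree);                  // ~1.9e-6, i.e. 1 in 524,288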

If they are selected randomly

(On the other hand, if you go to a board game convention and ask 20 people if they like board games...)


Sure, but that has little to do with sample size.

It’s very difficult to randomly sample 20 people. And impossible if anyone is or even could be dropped from the study.

Do you include prisoners, the homeless, those with diminished mental capacity, or even people on the run from the law, etc.?


The questions are horrible

Doesn't pass the sniff test. By developer use, npm, for all of its security issues and orphan packages, is the most successful package ecosystem of all time.

The time risk of NPM (which is the factor considered by this paper) is small, though. Security risk is way down most people's list of considerations.

The paper extrapolates from its survey questions about developer time to loss of profitability, so I think it's fair to discuss risk as more than just developer time. Even so, the developer-time risk of poorly maintained packages is enormous (and not just security-related). Consider the time lost to a security issue in a package you have to replace, fix yourself, or wait for someone else to fix. Not to mention the non-time losses. The npm ecosystem has thrived despite this and other risks -- if you want developers to adopt your software, make it interesting, easy to talk about, frictionless to get started with, and shiny. Time is a logical consideration that takes a back seat.

It's well known that success is not a proof of quality. The js ecosystem is an excellent example.

A common quip, but care to elaborate? What is so bad about it?

Yeah, the js ecosystem does a lot of things right that are done poorly in other languages. Compare Yarn to pretty much any package manager besides Cargo, for instance. Another example is that TypeScript integration into VSCode is top-notch, better than in any other language I've used.

Does yarn allow reproducible builds?

Yes -- deterministic builds were the primary selling point for many (including myself) when it was first released a few years ago. npm has since improved somewhat in this regard, but yarn remains the clear leader.

Yarn (and now NPM) allows reproducible builds via a lock file.
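
For example (a minimal illustration; --frozen-lockfile is a Yarn 1.x flag, and npm ci requires npm 5.7+):

    yarn install --frozen-lockfile  # fail rather than update if yarn.lock is out of date
    npm ci                          # install exactly what package-lock.json records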

What do you mean? By the definition of "risk" in the paper, seems like npm wouldn't be risky.

Questions asked to ascertain risk:

    10 hours are needed to make a module.

    Making a module is not hobby programming.

    Conditions such as the time needed to learn a tool are the same for each tool.

    The tools are prediction tools such as a static code analysis tool.

npm is full of packages (developer tools and libraries in general) that are popular with developers but carry enormous risks -- short support cycles, breaking changes, security issues, and packages that themselves depend on packages with similar problems, among other things. You don't have to let a paper define how you evaluate risk, but even if you co-opt their definition of risk as loss of profitability, npm's popularity is at odds with "minimizing risk."

Plenty of absolute crap is absurdly popular; that is not much of an argument.

I think the answers to these questions should probably be read with 'glossy folder' skepticism. If someone asks me about

> A tool that always reduces working time by 2 hours

VS

> A tool which saves 6 hours with 90% chance and costs 4 hours with 10% chance

I pick the first, even though its expected saving is lower (a guaranteed 2 hours versus 0.9 * 6 - 0.1 * 4 = 5 hours), because there is a lot less room to fudge the first claim. Certainly, when such probabilities are quoted, I am careful not to assume they will be accurate in my usage of the product.


Why would I want developers to adopt my software? Like "Doctors make the worst patients", my experience is that developers are the worst users. Doubly so, if you're looking for them to ever pay you.

I believe it's because there are products created for developers; they are the primary target audience. This applies to the product I'm working on.

I've had the opposite experience.

Usually, developers have a greater understanding of the effort that goes into making changes to software, and so are more understanding if a bug isn't fixed right away.


As a developer, when I hit a potential bug in an OSS lib I'm using, the first thing I do is assume I'm doing something wrong (mileage varies by quality of the lib, of course). Then I work back to a minimal reproducible example, if at all possible. By the time I actually submit a bug, I've already debugged for hours or days, sometimes even over weeks (it once took me about 6 weeks of off-again/on-again debugging to find a bug in a Python DB lib I was using; the bug was in the underlying C lib, and in the particular circumstance I was hitting, uninitialized memory was being read).

Another case: I had a user raise an issue with mangling of datetimes in a C++ lib. The short of it is we were hitting an integer overflow issue. Under the hood, our datetime lib was using Boost to do the heavy lifting; our lib was mostly a facade over Boost. Boost being what it is, there were massive template instantiations. Templates all the way down... Anyway, somewhere along the instantiation chain, a 64-bit int got truncated to a 32-bit int and then promoted back to a 64-bit int. I think the bug was reported, and I don't think there's been a resolution some 10 years later (I haven't been doing C++ for a while, so haven't kept up). Definitely an issue when trying to compute schedules some 30 years in the future (the Unix 32-bit Y2K-type issue hits in 2038, when 32-bit time_t overflows).
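
To illustrate, a minimal sketch of that class of narrowing bug (in TypeScript rather than the original C++/Boost; the date is just an example):

    // A 64-bit seconds-since-epoch value silently truncated to 32 bits.
    const future = 2208988800n;                 // 2040-01-01T00:00:00Z in Unix seconds
    const narrowed = BigInt.asIntN(32, future); // what an int32 truncation yields
    console.log(narrowed);                      // -2085978496n -- the datetime is mangled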

Also, I assume failure. I mean, software is written by fallible human beings. There will be bugs, crashes, etc. Sometimes the bug is in the implementation; sometimes it can even be a bug in the requirements: users asked for X, but they really needed Y. If you want a favorable resolution to an issue, you need to be ready to work with the developer, not against them. This may mean providing logs, inputs, steps to reproduce, etc.

I've had times where, as a developer, I've had users provide me fairly detailed steps to reproduce, but, well, I couldn't reproduce it. In those cases, I ended up sending something along the lines of "let's cut the email; I'll come to your desk and watch exactly what you do." Usually they had omitted a step they thought was inconsequential, but really wasn't (not that they'd have known that).

I had one very expensive (for the client) incident at a former employer where we eventually sold the source code to the client to divest ourselves of a joint venture. The client then turned around and hired a 3rd-party developer to continue development. As part of the handoff, we documented in painstaking detail the environment, versions of dependencies used, etc. It was a Python app and only had about 2 non-stdlib dependencies. However, those 2 dependencies each had bugs that had not been fixed upstream (patches had been submitted). We left very explicit instructions: download this specific version, then apply the supplied patch, then install. We started getting emails along the lines of "Nothing works!!", and of course, we were like: "Did you follow the instructions?" and the response was: "Yes, of course we did." We were unable to reproduce on our side, as we actually followed the instructions to the letter. After several weeks of back and forth and not being able to reproduce, I offered to visit the 3rd-party developer to sit with them while they attempted to set up their environment. We started from scratch, and immediately it was apparent that they had completely disregarded our instructions and downloaded the latest version of everything without applying the patches (which wouldn't have worked anyway because of the differences between the versions). 3 weeks of my time, billed at $500/hr 10 years ago. Yeah, I billed $60,000 over 3 weeks because a 3rd party didn't follow very explicit and detailed instructions. Not that I saw a dime of that...


That's been my experience as well, possibly because I mostly sell into corporates. They seem to love wasting my time and are often cheapskates.

I find it strange that searching the text of the paper and the commentary on it for "documentation", "tutorial", "errors", "help", "faq", and "doc" turns up nothing. Anecdotally, salespeople who try to persuade companies to adopt a payments API have found it very useful to have both tutorial and reference documentation. I think if you're asking questions about software adoption, you'd want to study the impact of documentation availability and clarity.

You are 100% right. I made a rubric for my company's sales department to grade third-party APIs against.

Documentation, a tutorial, a working example on GitHub, and an SDK in <teams_favorite_lang> were all worth points.
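
As a hypothetical sketch of what such a rubric could look like (the criteria follow this comment; the point values are made up):

    // Illustrative criteria and weights only -- not the actual rubric.
    const apiRubric: Record<string, number> = {
      referenceDocumentation: 3,
      tutorial: 2,
      workingExampleOnGitHub: 2,
      sdkInTeamsFavoriteLanguage: 3,
    };
    // Sum the points for the criteria a candidate API satisfies.
    function scoreApi(satisfied: Set<string>): number {
      return Object.entries(apiRubric)
        .filter(([criterion]) => satisfied.has(criterion))
        .reduce((total, [, points]) => total + points, 0);
    }
    console.log(scoreApi(new Set(["tutorial", "workingExampleOnGitHub"]))); // 4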


One question is which motivations the researchers recognize.

In qualitative interviews, perhaps some subjects will say they frequently choose new and less-proven or less-personally-known frameworks because they're looking for greater advantage (which I think is often true).

But how many subjects are motivated to add frameworks simply because they are (or might become) good things to have on their resume, regardless of risk to their current employer (also often true, at least in the US, I think)?

Would researchers' focus on perception of risk (to the project?) miss this motivation, and be trying to fit the wrong psych quiz questions to the wrong models?

(Alternatively, the researchers appear to all be based in Japan, which might not have US dotcom prevalence of short-term job-hopping, so perhaps their results are more applicable there than in the US?)


I don't know if software developers are all that risk-averse. I mean, we're typically changing development environments and tools a lot, usually at a very high cost to change, with little evidence of time savings. Sometimes, however, those changes may feel like time savings, or are just a cooler way to code. For example, consider someone coding standard REST services vs adopting GraphQL. The adoption itself is a huge risk, with probably an exponential cost of introduction and probably slightly more convenient code once that initial risk is taken on. Yet companies are adopting it like gangbusters. Similarly with React and a lot of other (Facebook) tech.

Tons of risk, arguably lots of reward, but adoption doesn't seem to be perfectly correlated with risk alone.


As a software engineer working in finance, I tend to find myself very risk averse, because, well, there's a ton of money on the line if you fuck up.

At a previous job, I was on call 24/7 (we traded all of about 2 hours on Saturdays).

I was once out on a date on a Friday night around 9PM local (US), and I got a call from the UK on the work phone. Turns out, we were missing about $1.5 billion. Yes, billion. The call went something along the lines of: "We're missing a billion dollars. You need to find out where it went. NOW."


You'll frequently see people that make $1,000 a day complaining that software that saves them lots of time costs $100.

I don't see that much.

But I do see people making $1,000 a day complaining that everyone wants to upsell you to a $20/mo SaaS offering with a bunch of crap you don't need, rather than sell you the thing you need.

For something you rely on, easy decision. For occasional use though? Often not worth it.


> Yet companies are adopting it like gangbusters.

Are they though? I don't see any evidence for that. If you are just going by what seems to be popular on HN or tech blogs, that's not really how things pan out in the real world. Most companies are running on "boring" tech.


I don't understand what they're getting at.

Working out the premise, assuming these probabilities are uniform over 10 uses of the tool (a quick check of these numbers follows the list):

1 a. reduces time by 2 * 10 = 20 hours

1 b. reduces time by 5 * 5 + 5 * 0 = 25 hours

2 b. reduces time by 4 * 7 + (-1 * 3) = 25 hours

3 b. reduces time by 6 * 9 + (-4 * 1) = 50 hours
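
A minimal sketch checking that arithmetic (TypeScript; the (probability, hours) pairs are read off the expressions above):

    // Expected hours saved over `uses` runs of a gamble over (p, hours) outcomes.
    type Outcome = { p: number; hours: number };
    const expectedSavings = (outcomes: Outcome[], uses = 10): number =>
      uses * outcomes.reduce((sum, o) => sum + o.p * o.hours, 0);
    console.log(expectedSavings([{ p: 1.0, hours: 2 }]));                        // 20 (1a)
    console.log(expectedSavings([{ p: 0.5, hours: 5 }, { p: 0.5, hours: 0 }]));  // 25 (1b)
    console.log(expectedSavings([{ p: 0.7, hours: 4 }, { p: 0.3, hours: -1 }])); // ~25 (2b)
    console.log(expectedSavings([{ p: 0.9, hours: 6 }, { p: 0.1, hours: -4 }])); // ~50 (3b)

Every gamble beats the guaranteed option in expectation, which is presumably the point: the paper is probing whether developers still prefer the certain 2 hours.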


Related note:

This is a variation of a winning sales technique taught by Jay Abraham [1]: reverse the risk to get a competitive edge.

People are unwilling to take on risk. Do what you can to reduce that risk (even if it means taking more of it upon yourself, e.g. offering free returns), and those potential customers will be more willing to buy what you're selling.

[1] https://booksummaryclub.com/getting-everything-you-can-out-o...


As others have pointed out, the study itself is too small to draw significant conclusions, but I at least agree with the hypothesis that developers are risk-averse with their time and data.

I think developers especially are worried about getting locked into an ecosystem, so as a new product it may be useful to make it easy and obvious how to both IMPORT and EXPORT data from competitors.


They (the OPs) have an article for that as well:

https://www.reifyworks.com/writing/2019-02-25-the-first-ques...


If we assume this is right, it could be interesting to investigate why.

Is it the same for small teams and big businesses? Are indie developers less biased towards risk avoidance?

It feels to me that when you are in a large organisation you have incentives to be risk-averse, especially in your domain of expertise, where you have to justify every delay, problem, risk, etc.



