Survey N = 20, selected by cold-emailing open source developers.
It can easily go the other way too: people read "risk" and instantly assume it's large.
Engineers read risk and assume it’s large.
Managers of engineers read risk and assume it’s small.
That bit in the middle where they are both right and wrong? ...
(On the other hand, if you go to a board game convention and ask 20 people if they like board games...)
Do you include prisoners, the homeless, those with diminished mental capacity, or even people on the run from the law, etc.?
Questions asked to ascertain risk:
10 hours are needed to make a module.
Making a module is not hobby programming.
Conditions such as the time needed to learn a tool are the same for each tool.
The tools are prediction tools such as a static code analysis tool.
> A tool that always reduces working time by 2 hours
> A tool which saves 6 hours with 90% chance and costs 4 hours with 10% chance
I pick the first because there is a lot less space to fudge the first claim. Certainly, when such probabilities are quoted I am careful not to assume those probabilities will be accurate in my usage of the product.
Usually, developers have a greater understanding of the effort that goes into making changes to software, and so are usually more understanding if a bug isn't fixed right away.
Another case: I had a user raise an issue with mangling of datetimes in a C++ lib. The short of it is we were hitting an integer overflow. Under the hood, our datetime lib was using Boost to do the heavy lifting; our lib was mostly a facade over Boost. Boost being what it is, there were massive template instantiations. Templates all the way down... Anyways, somewhere along the instantiation chain, a 64-bit int got truncated to a 32-bit int and then promoted back to a 64-bit int. I think the bug was reported, and I don't think there's been a resolution some 10 years later (I haven't been doing C++ for a while, so haven't kept up). Definitely an issue trying to compute schedules some 30 years in the future (the Unix 32-bit Y2K-type issue hits in 2038, when 32-bit time_t overflows).
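A minimal sketch of that kind of narrowing bug (hypothetical names and values; the real failure was buried deep in Boost template instantiations, here mimicked with ctypes' fixed-width c_int32):

```python
import ctypes

def narrow_then_widen(seconds: int) -> int:
    # Mimic a 64-bit second count passing through a 32-bit intermediate:
    # ctypes.c_int32 silently wraps values past 2**31 - 1, and widening
    # back to 64 bits afterwards cannot restore the lost high bits.
    return ctypes.c_int32(seconds).value

# A schedule ~70 years past the 1970 epoch (roughly the year 2040) is
# beyond INT32_MAX seconds, so the round trip mangles the value.
seconds_2040 = 70 * 365 * 24 * 60 * 60      # 2_207_520_000 > 2**31 - 1
print(narrow_then_widen(seconds_2040))      # -2087447296, not 2207520000
```

The value comes back as a large negative number, which is exactly the sort of "mangled datetime" a user would report without any idea why.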
Also, I assume failure. I mean, software is written by fallible human beings. There will be bugs, crashes, etc. Sometimes the bug is in the implementation, sometimes it can even be a bug in the requirements. Users asked for X, but they really needed Y. If you want a favorable resolution to an issue, you need to be ready to work with the developer, not against them. This may be providing logs, inputs, steps to reproduce, etc.
I've had times where, as a developer, I've had users provide me fairly detailed steps to reproduce, but, well, I couldn't reproduce it. In those cases, I ended up sending something along the lines of "let's cut the email; I'll come to your desk and watch exactly what you do." In those cases, they had usually omitted a step they thought was inconsequential, but really wasn't (not that they'd know that).
I had one very expensive (for the client) incident at a former employer where we eventually sold the source code to the client to divest ourselves of a joint venture. The client then turned around and hired a 3rd party developer to continue development. As part of the handoff, we documented in painstaking detail the environment, versions of dependencies used, etc. It was a Python app and had only about 2 non-stdlib dependencies. However, those 2 dependencies each had bugs that had not been fixed upstream (patches had been submitted). We left very explicit instructions: download this specific version, then apply the supplied patch, then install. We started getting emails along the lines of "Nothing works!!", and of course, we were like: "Did you follow the instructions?" and the response was: "Yes, of course we did.". We were unable to reproduce on our side, as we actually followed the instructions to the letter. After several weeks of back and forth and not being able to reproduce, I offered to visit the 3rd party developer to sit with them while they attempted to set up their environment. We started from scratch, and it was immediately apparent that they had completely disregarded our instructions, downloaded the latest version of everything, and not applied the patches (which wouldn't have worked anyway because of the differences between the versions). 3 weeks of my time, billed at $500/hr 10 years ago. Yeah, I billed $60,000 over 3 weeks because a 3rd party didn't follow very explicit and detailed instructions. Not that I saw a dime of that...
Documentation, tutorial, working example on github, and sdk in <teams_favorite_lang> were all worth points.
In qualitative interviews, perhaps some subjects will say they frequently choose new and less-proven or less-personally-known frameworks because they're looking for greater advantage (which I think is often true).
But how many subjects are motivated to add frameworks simply because they are (or might become) good things to have on their resume, regardless of risk to their current employer (also often true, at least in the US, I think)?
Would researchers' focus on perception of risk (to the project?) miss this motivation, and be trying to fit the wrong psych quiz questions to the wrong models?
(Alternatively, the researchers appear to all be based in Japan, which might not have US dotcom prevalence of short-term job-hopping, so perhaps their results are more applicable there than in the US?)
Tons of risk, arguably lots of reward, but it doesn't seem to be perfectly correlated with risk alone.
At a previous job, I was on call 24/7 (we traded all of about 2 hours on Saturdays).
I was once out on a date on a Friday night around 9PM local (US), and I got a call from the UK on the work phone. Turns out, we were missing about $1.5 billion. Yes, billion. The call went something along the lines of: "We're missing a billion dollars. You need to find out where it went. NOW."
But I do see people making $1,000 a day complaining that everyone wants to upsell you into a $20/mo SaaS offering with a bunch of crap you don't need, rather than sell you the thing you need.
For something you rely on, easy decision. For occasional use though? Often not worth it.
Are they though? I don't see any evidence for that. If you are just going by what seems to be popular on HN or tech blogs, that's not really how things pan out in the real world. Most companies are running on "boring" tech.
Working out the premise, assuming these probabilities are uniform over 10 uses of the tool:
1 a. (always saves 2 hours) reduces time by 2 * 10 = 20 hours
1 b. (saves 5 hours with 50% chance, 0 otherwise) reduces time by 5 * 5 + 5 * 0 = 25 hours
2 b. (saves 4 hours with 70% chance, costs 1 hour with 30%) reduces time by 4 * 7 + (-1 * 3) = 25 hours
3 b. (saves 6 hours with 90% chance, costs 4 hours with 10%) reduces time by 6 * 9 + (-4 * 1) = 50 hours
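The arithmetic above is just an expected value over the assumed number of uses; a quick sketch (the function name and parameters are mine, not from the study):

```python
def expected_saving(p: float, gain: float, loss: float, uses: int = 10) -> float:
    # Expected total hours saved: with probability p the tool saves
    # `gain` hours, otherwise it costs `loss` hours, over `uses` uses.
    return uses * (p * gain - (1 - p) * loss)

# Rounded to sidestep floating-point noise; numbers match the list above.
print(round(expected_saving(1.0, 2, 0), 6))  # 1 a: 20.0
print(round(expected_saving(0.5, 5, 0), 6))  # 1 b: 25.0
print(round(expected_saving(0.7, 4, 1), 6))  # 2 b: 25.0
print(round(expected_saving(0.9, 6, 4), 6))  # 3 b: 50.0
```

So on expected value alone every probabilistic tool beats the sure one here, which is exactly why the observed preference for the sure tool reads as risk aversion.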
This is a variation of a winning sales technique taught by Jay Abraham: reverse the risk to get a competitive edge.
People are unwilling to take on risk. Do what you can to reduce that risk (even if it means taking more of it upon yourself, e.g. offering free returns), and those potential customers will be more willing to buy what you're selling.
I think developers especially are worried about getting locked into an ecosystem, so as a new product it may be useful to make it easy and obvious how to both IMPORT data from competitors and EXPORT it back out.
Is it the same for small teams and big businesses?
Are indie developers less biased towards risk avoidance?
It feels to me that when you are in a large organisation, you have incentives to be risk averse, especially in your domain of expertise, where you have to justify every delay, problem, risk, etc.