
Cognitive Biases in Software Engineering - charlieirish
http://www.jonathanklein.net/2013/06/cognitive-biases-in-software-engineering.html
======
paulhodge
The negativity bias really irks me. I went through a phase in my career where
I tried to communicate everything in a positive way, without assigning blame
or passing judgement. And I found that people don't respond to that as well -
they are more likely to ignore you or forget what you said. But if instead
you talk about how X library sucks and how Y developer did such-and-such
wrong, people perk up their ears and listen, and they are more likely to
start regarding you as the local expert. The effect seems even stronger when
talking to non-technical folks.

There's definitely way too much crankiness in the software world, and I don't
think it's because the people are naturally negative people. It's because
we're pragmatic people and we unconsciously gravitate towards the styles of
communication that get the most results.

~~~
jamesbritt
_There's definitely way too much crankiness in the software world, and I
don't think it's because the people are naturally negative people. It's
because we're pragmatic people and we unconsciously gravitate towards the
styles of communication that get the most results._

Interesting. I'd gotten the impression that negative comments were a form of
dick-waving. I.e., if you can spot the numerous flaws in some tool or
concept, then you are _clearly_ the superior-minded hacker.

But I've never thought about it in terms of idea adoption or whether positive
comments might have a lesser effect on people.

I wonder if negative comments play off people's fear of being wrong or fear of
being seen on the "losing" side.

Maybe if you hear that there are good points for both Dreamweaver and
[emacs|vi|sublime] and it's a matter of personal preference and what works for
you, then it's just some bit of data to file away. OTOH if you hear that
Dreamweaver is simply evil and profoundly flawed then your adrenaline spikes;
you don't want to be one of _those_ people who use _that_ tool.

Kind of disappointing if true.

~~~
mreiland
Because if "something sucks", you clearly are making the right decision by
going with the other thing.

As opposed to worrying about whether or not your decision was the right one
because all of the options had really good upsides.

------
mbesto
Umm, how can we forget the greatest cognitive bias ever in software development:
[http://en.wikipedia.org/wiki/Planning_fallacy](http://en.wikipedia.org/wiki/Planning_fallacy)

 _The planning fallacy is a tendency for people and organizations to
underestimate how long they will need to complete a task, even when they have
experience of similar tasks over-running._

------
calinet6
Love that first one (attribution error: the tendency to overestimate
individual influence and underestimate systemic effects). It's one of the
biggest pitfalls I've seen in software (or any industry, really, but let's
talk about software).

When something goes wrong, it's so easy to pull the "accountable party" into
the CTO's office and give him an earful, but it is so wrong, so ineffective,
and so anti-progress it's not even funny. Not funny at all.

The correct solution is the blameless post-mortem that Etsy does, as
mentioned. Kill all fear of punishment, bring out the honesty and systemic
analysis, and recognize the complex factors involved to come up with a real
solution.

This type of thing is exactly what's required for a successful engineering
culture, and for quality to be able to flourish. Give in to your reptilian
instincts to get mad at a figurehead, however, and you'll kill your company
culture and your quality all at once.

------
afarrell
> In my opinion the way around this is to deliberately stop and do an
> estimation exercise. First think about how long the refactor will take, and
> be extremely generous (e.g. double your first estimate)

That isn't estimation. That is pulling a number out of your ass. Try something
for me. For each of these data points, write down a low estimate and a high
estimate that give you a 90% probability of being within that range.

Surface temperature of the Sun: Low [___] -- High [___]
Latitude of Shanghai: Low [___] -- High [___]
Area of the Asian continent: Low [___] -- High [___]
The year of Alexander the Great's birth: Low [___] -- High [___]
Total value of U.S. currency in circulation in 2004: Low [___] -- High [___]
Total volume of the Great Lakes: Low [___] -- High [___]
Worldwide box office receipts for the movie Titanic: Low [___] -- High [___]
Total length of the coastline of the Pacific Ocean: Low [___] -- High [___]
Number of book titles published in the US since 1776: Low [___] -- High [___]
Heaviest blue whale ever recorded: Low [___] -- High [___]

Now, you might think this is a silly exercise because you don't have
experience with the scale of these things - but how often do you estimate the
time it takes to build something you aren't yet familiar with, using a tool
you're not yet familiar with, with people you aren't familiar with?

The answers are here:
[http://my.safaribooksonline.com/book/software-engineering-and-development/project-management/0735605351/answers-to-chapter-2-quiz-ow-good-an-estimator-are-you/app02](http://my.safaribooksonline.com/book/software-engineering-and-development/project-management/0735605351/answers-to-chapter-2-quiz-ow-good-an-estimator-are-you/app02)

(This quiz is taken from Software Estimation by Steve McConnell (Microsoft
Press, 2006) and is copyright 2006 Steve McConnell. All rights reserved.
Permission to copy this quiz is granted provided that this copyright notice
is included.)
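If you want to score yourself afterwards, calibration is just the fraction of
your ranges that contain the true value; a well-calibrated 90%-confidence
estimator should hit about 9 out of 10. A minimal sketch (the sample triples
below are illustrative rounded figures from one hypothetical estimator, not
the book's answer key):

```python
# Score a set of 90%-confidence interval estimates: the fraction of
# (low, high) ranges that actually contain the true value.

def calibration(estimates):
    """estimates: list of (low, high, true_value) triples."""
    hits = sum(1 for low, high, truth in estimates if low <= truth <= high)
    return hits / len(estimates)

# One hypothetical estimator's answers (true values are rounded public
# figures, not the quiz's official answer key):
sample = [
    (4000, 9000, 5778),  # surface temperature of the Sun, kelvin: hit
    (20, 40, 31),        # latitude of Shanghai, degrees north: hit
    (500, 1000, 356),    # year of Alexander the Great's birth, BC: miss
]
print(calibration(sample))  # 2 of the 3 ranges contain the truth
```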

~~~
pessimizer
Sometimes, for business reasons, it's crucial that you pull a number out of
your ass. My policy is to make that number as high as management will accept,
and to refuse the project if that number is less than twice the length of my
worst-case-scenario nightmare. Then deliver early. Basically the same as the
OP.

Are you advocating something different here?

~~~
chriswarbo
That's known as the Scotty Principle, or Scotty Factor ;)

[http://c2.com/cgi/wiki?ScottyFactor](http://c2.com/cgi/wiki?ScottyFactor)

~~~
pessimizer
Never heard that before! It just seems like the optimal way to look like you
know what you're doing :)

------
mathattack
I found that overconfidence is the worst. In part everyone thinks "I know my
work, leave me alone" but interdependencies mean that even if everyone is 90%
confident, large projects slip. And most people's 90% confidence is really
only 50% accurate.
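The interdependency point is just multiplication: assuming (generously) that
tasks slip independently, a chain of n tasks that are each 90% likely to be
on time finishes on time with probability 0.9^n. A quick sketch:

```python
# Probability that all n tasks of a project finish on time, if each task
# independently finishes on time with probability p.

def on_time(p, n):
    return p ** n

print(round(on_time(0.9, 10), 3))   # ten "90% confident" tasks -> 0.349
print(round(on_time(0.5, 10), 4))   # if that 90% is really 50% -> 0.001
```

So even honest 90% confidence gives a ten-task project only about a one-in-three
chance of hitting the date, and real interdependencies (one slip delaying
downstream tasks) make the independence assumption optimistic.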

The first antidote is educating people about interdependencies and
overconfidence. The second is introducing management.

~~~
alphagenerator
I think projecting overconfidence is a social/political survival strategy.
The 50%-accurate thing is just the hidden reality everyone tacitly
acknowledges. It's sort of like how people are polite when confronted with
ugly babies.

------
superuser2
My personal favorite is organizations' vast overestimation of the size,
complexity, and performance needs of their software projects, probably
related to delusions of their own grandeur.

Rails is more than capable of serving an intranet application for your 150
employees. But nooope, we're an enterprise, so we need J2EE/Oracle.

------
deciplex
> At larger established companies, do the refactor. At a startup that is
> trying to release an MVP, maybe do the hack (your code will probably be
> rewritten anyway). That said, always think of who else will be working on
> your code...

Sadly, at the larger companies, especially ones where software is a cost
center, this is basically impossible. They'd rather replace their last
bullshit app with a new bullshit app (it was the previous director who was
responsible for that POS anyway), making all the same mistakes as before,
and rely on collective delusion (i.e. threats) to get everybody to go along
with the idea that the old mistakes are being corrected, and that any
mistakes made _now_ are at least new ones.

Maybe it's the same at startups. I wouldn't know.

------
bluesnowmonkey
> Poor design decisions and shortcuts are sexy because they give you a small
> amount of value right now (not having to do the work to _architect things
> properly_), and you dramatically discount the value you would get in the
> future by doing it right the first time.

What do you call this bias? People like to think that if a different person
had been in charge, or if they'd had more time, then things could have been
done "properly". But really there is no "proper" way to do anything in
software engineering. Every decision is a judgement call. Everywhere you look
it's shades of gray. Especially without the benefit of hindsight.

~~~
jdmichal
Honestly, I call it "lack of proper documentation of decisions." It's pretty
endemic throughout software engineering, myself included. It's why we seem
doomed to "refactor" current solutions, only to end up with the same mess at
the end: we see the 80% use case, think we can do better, and then by the
time we hit those 20% edge cases, the code is as bad as when we started.
This could be avoided if all the decisions and use cases leading to the
current mess were well documented, which would let people actually engineer
a better solution covering all the use cases - or just leave it alone
because they can't.

------
dustingetz
> The Bandwagon Effect

I don't know if this is a negative bias. Popular tech is well understood with
known risks. Successful outcomes in software are hard enough without adding
unknowns to the equation.

~~~
chriswarbo
The effect isn't referring to popular technologies remaining popular; the
effect is that, when we make decisions in a group, we unconsciously tend to
agree with other people more than we otherwise would.

For example, let's say you're asked to submit a survey on the suitability of
a bunch of languages for a new project; say Python, Java, PHP, C# and Ruby.
You might give Python 8/10, Java 7/10, PHP 6/10, C# 7/10 and Ruby 8/10.

Now, imagine that instead of that survey, you were invited to a meeting where
the chief architect informs everyone that they haven't decided whether to
use Java or C# yet, and that since the CEO suggested PHP they decided to
open it up for comments. Would you still give the same scores?

The classic experiment for this used a social music website, where people
could vote for what they liked. Unknown to the users, they were internally
separated into groups, so that people in group A only saw scores based on
other voters in group A, and so on. In each group there were a couple of
really high-scoring songs, with the rest scoring very badly; but each group
had chosen different songs as its favourites! Songs which were regarded
highly in group A were seen as terrible by group B, and vice versa. As a
control, one of the groups was never shown any scores at all, so their votes
were less biased. That group showed much less extreme results, with some
songs scoring better than others, but not massively so.
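That rich-get-richer dynamic is easy to reproduce in a toy simulation (my own
construction with made-up parameters, not the actual experiment's protocol).
In the influence condition, a listener's vote is weighted by each song's
current vote count as well as its intrinsic appeal; in the independent
condition, by appeal alone:

```python
import random

# Toy bandwagon model: with social influence, a song's current vote count
# multiplies its chance of attracting the next vote, so early leads snowball.

def run(n_songs=50, n_listeners=2000, influence=True, seed=0):
    rng = random.Random(seed)
    appeal = [rng.uniform(0.1, 1.0) for _ in range(n_songs)]  # intrinsic quality
    votes = [1] * n_songs  # seed each song with one vote
    for _ in range(n_listeners):
        if influence:
            weights = [a * v for a, v in zip(appeal, votes)]  # bandwagon
        else:
            weights = appeal  # independent judgement
        choice = rng.choices(range(n_songs), weights=weights)[0]
        votes[choice] += 1
    return votes

def top_share(votes):
    """Fraction of all votes captured by the single most popular song."""
    return max(votes) / sum(votes)

# Influence concentrates votes on a few early winners; independent voting
# spreads them roughly in proportion to appeal.
print(top_share(run(influence=True)), top_share(run(influence=False)))
```

Rerunning with different seeds plays the role of the separate groups: the
influence condition reliably produces runaway winners, but _which_ songs win
varies from seed to seed, just as in the experiment.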

