
Why don’t software development methodologies work? (2014) - kissmd
http://typicalprogrammer.com/why-dont-software-development-methodologies-work
======
osrec
For me, the real issue is ownership. I come from a finance background and have
seen "business people" often run technology into the ground because they
simply can't let go of ownership or control. They want to define too much and
leave little in the hands of the developer. Development managers can be just
as bad with their overarching methodologies. Really bad ones can stifle the
creativity of their developers by micromanaging and just end up making people
miserable. I personally don't follow any particular methodology; instead, I
agree a well-defined deliverable and deadline with each team member, and just
let them get on with it (with whichever methodology they prefer). The only
things we stipulate as a firm are the version control system and test
procedures. The whole team has an informal chat once a week or so to make sure
things fit together properly - no daily stand-up rubbish. It's not complicated
or even that structured, but it works and my colleagues seem happy. Basically
our philosophy is, hire good people, make them responsible for something, and
let them find the best way to deliver. And definitely don't tie them up in
stupid admin tasks as stipulated by the latest fad.

~~~
maltalex
Ownership is good, but only up to a point. In this approach, developers end up
being independent contractors on a payroll.

If you focus only on deliverables and deadlines, you'll end up with developers
using a mix of different approaches, libraries and even languages. It hurts
team cohesion and makes the logistics of project management much harder since
Joe can't take over Tom's code now that Tom has the flu.

As I see it, one of the main tasks of a technical manager is to set
conventions so that everyone feels at least semi-comfortable with everyone
else's code. That's not micro-management; that's management.

~~~
bluesnowmonkey
One view of management is that it's about making sure the cogs of the machine
are completely interchangeable. Everything is uniform, orderly, perfect.
Everyone is replaceable. No one's contribution can be distinguished from that
of any other. Everything must be measurable and measured. See how the graphs
in our status updates always show progress!

This is actually an extremely inefficient and demoralizing environment for
those involved. Yeah, it's easy to take over when someone leaves, because they
were so hamstrung by the environment that they never built anything
interesting. And you're going to be doing a lot of this taking over, because
people are always leaving, because they were hamstrung. So this idea that we
can't trust individuals to stick around and do a good job, and that we have to
make sure they never have enough power to do damage when they make a mistake,
is a self-fulfilling prophecy. It makes them untrustworthy and drives them away.

It really is about ownership. Programmers who are achieving at a high level
move _much_ faster and do _much_ greater things. It's worth letting them make
mistakes to retain the best people and get their best work. The only catch is
figuring out how to keep them accountable for their decisions. OK, you want to
use this new tech or try this new architecture. How do we tie your
compensation and career progress to the success of those decisions?

~~~
maltalex
> One view of management is that it's about making sure the cogs of the
> machine are completely interchangeable.

That's not what I was talking about. Employees are neither bricks nor cogs.
But having said that, if you run any long-term project you should expect
people to come and go; it's part of the game. Thinking that this will not
happen in _your_ project is simply negligent.

> This is actually an extremely inefficient and demoralizing environment for
> those involved.

It doesn't have to be. Working with common conventions and tools doesn't have
to be demoralizing. Conventions should be set for a good reason, after an open
discussion, and possibly even a vote ("Do you think that it's worth bringing
in this library?", "Should we all use the same IDE?"). Also, conventions
aren't set in stone, they can change over time.

This gives _the team_ ownership of their project. It makes passing knowledge
between team members easier, it makes it easy for team members to help each
other when they've run into an issue, not to mention that it makes code
reviews and various technical discussions a lot easier.

If Bob is writing in a different language, or with a different set of
libraries, than the rest of us, who can review Bob's code? Who can help him
out when he's stuck on some bug? Who can help him flesh out his design ideas
for some feature he's writing?

Without team cohesion, you're creating not a team of developers, but a bunch
of individuals who happen to work on related stuff. I don't know about you,
but to me that sounds a lot more demoralizing. To me, one of the great
benefits of working on a team is that we all get to learn from each other,
plus we get to share a common goal. Agreeing to some common conventions seems
like a small price to pay for that.

~~~
bluesnowmonkey
Team cohesion is definitely not about using the same IDE.

I worked at a place that essentially standardized on Vim. They emphasised pair
programming, so IDE standardization helped with that. If you already use Vim,
that sounds great. If not, it probably sounds terrible. So what looked like
team cohesion was really homogeneity. They subtly turned away a lot of people
who didn't fit the mold over what should be a minor detail.

I've worked at two companies that basically banned Redis. Having a
conversation with an SRE who had a bad experience, trying to convince him it's
worth having this tool available, is one of my worst professional memories.
Consensus-driven technical decisions suck. It's better to give people
authority and responsibility. I would have happily admined it myself.

Nobody's going to review Bob's code, not really. If they're not working
directly with him on the project and it's doing anything reasonably complex,
then they don't have the context to say anything useful beyond catching typos.
Things get rewritten every few years anyway.

------
JepZ
> I think programmers should pay much more attention to listening to and
> working with their peers than to rituals and tools, and that we should be
> skeptical of too much process or methodologies that promise to magically
> make everyone more productive.

Sounds pretty much like:

> Individuals and interactions over processes and tools [1]

I don't know why, but somehow people don't get that agile is not a methodology
but a spirit.

[1] [http://agilemanifesto.org](http://agilemanifesto.org)

~~~
hawski
> I don't know why, but somehow people don't get that agile is not a
> methodology but a spirit.

Because of all the agile coaches, boards, trainings, conferences and
companies. It then feels more like a religion.

~~~
BaronSamedi
I'm sympathetic to agile but it does have a quasi-religious feel to it. I've
noticed that its proponents make claims about the right way to do things with
an unwarranted level of certainty. When I ask "why" (politely) I seldom get a
satisfactory answer and often they get offended by the very question.

~~~
zzzcpan
All methodologies are created for people who do not understand the "why"s;
they are inherently religious. Otherwise, why would you need a methodology if
you did understand all the "why"s? You could just develop one much better
suited to your people and your projects and fine-tune it over time.

------
gregdoesit
There is such a thing as physics in software: between time, scope and people,
one of them almost always has to give. The exceptions I've seen are mature,
well-bonded teams working on familiar scope they understand clearly, with a
timeline they themselves defined.

I've found the best "methodology" for delivering decent results is sticking to
short iterations. Software is often about doing something we've either not
done before, in a way we've not done it before, or with people we've not done
it with before. So we will have surprises (aka delays) along the way. The more
frequently we check what these delays are, the more realistic we can be about
whether we can make it on time, or whether we need to cut scope or pull in
more help to make it on time.

~~~
saas_co_de
> There is such a thing as physics in software: between time, scope and
> people, one of them almost always has to give

This can be true but can also be completely false. Massive differences in
productivity are possible depending on how individuals work together on a
team.

~~~
Drdrdrq
True. Also, there is a fourth variable where you can cut corners (even if it's
almost never a good idea): quality.

Great teams can produce much more than mediocre ones, but they too have a
limit. When the deadline is set too close, one of these four things has to
give, and it is good to know in advance which one that is, so the team can set
its priorities accordingly.

~~~
Clubber
I would say that is part of the scope - the scope being well-tested,
functional, and relatively bug-free software that does x, y and z.

~~~
Terr_
Unfortunately most business people don't make things like "doesn't corrupt the
database when an exception is thrown" into a feature bullet.

So while you can argue it's part of the scope, it's not part of the scope that
anybody else seems to think about.

------
Arnt
This sits uneasily with me. I cannot remember any time when a Methodology was
followed in practice. Just my bad luck?

The religious people (mentioned in the article) were harmed by false
adherence. They adhered to the headlines and warped the substance of what the
Methodology said.
said. I remember (with pain) a place that wouldn't develop development
scaffolding. They had rules for software development, good ones, motivated by
achieving near-perfect uptime for customer-facing services. Implementing a
scaffolding service or crontab to that standard was a lot of work.

Then there's the non-adherents who eroded the Methodology. Like the scrum
shops that eroded scrum by deemphasising the product owner and stories until
the result looked more like a waterfall.

The Methodologies may be broken as a whole but the practice I've seen was
generally so distorting that I feel it's unfair to blame the Methodologies.

~~~
TulliusCicero
If a methodology cannot be followed effectively by (presumably intelligent and
sincere) individuals, why blame the people?

This reminds me of the people on the far right or left who believe,
"[Capitalism/Communism] can't fail; it can only _be_ failed."

~~~
Arnt
This would be an excellent point if there existed a methodology that people do
manage to follow.

Some of the failure I've seen can be partly explained by people who wanted to
have their cake and eat it. Who wanted, say, the promised advantages of Scrum
but were not willing to pay its costs (lack of long-term plans and fixed
finish dates).

That's not all. It's part of the explanation for some of the suckage I've
experienced.

I do blame people for not making up their minds. The people who invented scrum
were willing to give up some parts of long-term planning, and got remarkable
results for that. They are not to blame when others later failed by not giving
up blah.

Maybe some blame should go to consultants who oversold the benefits of
Methodologies without stressing the costs: "YES, YOU ACTUALLY HAVE TO DO THIS;
IT WILL WORK BADLY OTHERWISE."

~~~
taeric
I take the other view. I actively blame some of the people that "invented"
these methodologies.

Being introspective about your capabilities and your achievements is an
extremely valuable skill that I wish more of us had. However, selling your
practices with vague promises that they will help development, if only they
are followed, is a deceitful way to make money off of your reputation.

Worse, many of them do this by attacking those that came before them, while
taking the stance that their own "teachings" are above attack - and that
anybody who isn't getting the same benefits they did just isn't applying
themselves correctly.

------
crdoconnor
I think methodologies could benefit by being treated like software, since
that's effectively what they are - 'human' software to manage teams. That
means:

* Frequent releases (i.e. do iterations or 'sprints' or whatever).

* Accept that your methodology has bugs and 'fix' those bugs between releases. Most software is horrible and buggy. Don't trust the "methodology gods" to have written a perfect piece of working software. It's probably half-assed and worked semi-well for their specific use case, so they 'released' it along with a reality distortion field.

* Accept that different use cases require different methodologies. Writing space shuttle code? You need vastly different team dynamics to a group of 10 people at a marketing agency running short lived campaigns.

* Follow the UNIX philosophy: don't have ONE methodology that you follow to a T - string together a bunch of small, self-contained rules and team processes that serve your purposes and iterate upon them.

tl;dr fuck scrum. it's the internet explorer of methodologies.

~~~
deanCommie
Scrum is great for non-software companies that need to write software.

They will not attract the most talented software developers (on average, not
in all cases), and the business people for whom the software is a means to an
end care more about consistency and predictability rather than quality.

As a result, fungible resources (humans), deeply regimented stories, regular
delivery milestones (sprints), and consistent velocity ARE the best possible
outcome.

~~~
crdoconnor
Scrum is not 'great' in any sense of the word.

I don't think it really matters what kind of company you work for. I've worked
for many software and non-software companies and the same issues crop up in
both.

The main one is that scrum accelerates the accretion of technical debt, which
"business people" can somehow not care about right up until the point where it
drives them out of business.

It has some good ideas (retros, sprints, no deadlines) and some terrible ideas
(treating team members as fungible, story pointing/velocity, too many
meetings, PO has to make decisions about specific pieces of tech debt).

My main problem with it is the teachers, coaches and promoters who take an
all-or-nothing view of it and who treat deviations from the official 'scrum'
policy as, by default, a problem with the team rather than, potentially, a bug
in SCRUM.

I used to think that it was a good base to work from, but after arguing
fruitlessly with the people who take a religious approach to it I've come to
the opinion that it just needs to be trashed wherever possible, because the
problems it does have will only be resolved by moving on to something else.
Better to move on sooner rather than later.

So, fuck scrum.

~~~
GordonS
What's the issue with story points and velocity? I've been using them for
years and personally I think they're some of the best parts of agile!

~~~
crdoconnor
Your team's velocity going up can mean that:

* Your team did more work this week.

* Your team worked more efficiently this week.

* Your team inflated its story point estimations this week.

Any one or all of those things could have happened in varying degrees.

Given all of that, what useful knowledge is it actually supposed to impart?

The best part is that the number of story points / velocity can vary wildly
depending upon what the team believes the measure is being used _for_.

~~~
GordonS
In my experience, if you track velocity long enough, and 'categorise' the
values based on things like team size, technology, etc., you get values that
are useful for predicting how much work you should commit to in an iteration.

Yes, things change: productivity, morale, team members join and leave, some
teams are _shit_ at estimating, etc. - but at least in my experience, if you
take an average you do arrive at a useful figure.

~~~
crdoconnor
In my experience if you have some sort of measure that looks a bit like
productivity then senior management will latch on to it and treat it as a
proxy for productivity.

That inevitably means that developers have an incentive to inflate their
estimates, which means story point inflation.

Averaging does not fix this dynamic.
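The confounding described above can be illustrated with toy numbers (all figures hypothetical). Velocity is just the sum of self-reported estimates, so inflating estimates raises it with no change in real output:

```python
def velocity(stories):
    """Velocity = sum of story-point estimates completed in a sprint."""
    return sum(points for points, _ in stories)

# (estimate, true effort in ideal days) -- the true effort is unobservable
sprint_1 = [(3, 2.0), (5, 3.5), (2, 1.5)]  # honest estimates
sprint_2 = [(5, 2.0), (8, 3.5), (3, 1.5)]  # same work, inflated estimates

print(velocity(sprint_1), velocity(sprint_2))  # 10 16: a 60% "improvement"
```

Averaging over many such sprints only smooths the inflation; it cannot recover the unobservable true effort.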

------
jpswade
I think this is why DevOps is becoming so popular. It's less about
methodology.

There's no certificate, role, set of tools or prescriptive process. There's no
specification, it's not a product, or job title. There's no one true voice on
what DevOps is or isn't. It's about attitude, ideas, customs and behaviours.
Culture, paradigms and philosophy. It's a way of thinking, a way of doing and
a way of being. Practicing as well as preaching. It's a conversation. It's
about taking the best experiences and sharing those with others.

~~~
emodendroket
A lot of companies just rechristen their "ops" guys "devops" and then say
they're doing devops.

~~~
ivl
The company I work at did this. The CTO and I even joked about it being
"DevOps in name only". It's gotten better, but they're really still an ops
team.

~~~
emodendroket
I mentioned to some people at my company that "devops" was meant to be more of
a philosophy than a separate team and they were bewildered. But I guess I'm
happier with ops being a separate team anyway.

~~~
ivl
Right? Ops, but an ops team you'd trust touching the code.

I feel like devops to most just means "ops but closer to the code now that
there are so many code/release management tools and someone has to manage it
all".

------
thisisit
I find that the problem with development "methodologies" is the people
implementing them. Frequently it's an Agile Master, or someone who has read
the book and been granted a certification, but who doesn't understand the
subtleties of company culture, team capabilities, the project, etc. They just
want the "process" to be followed.

~~~
raducu
Methodologies help. But ultimately there is no methodology that lets you make
money, deliver products that buyers want, outsmart your competition, eliminate
team/corporate politics and so on.

I don't know why people buy into the "when we're agile enough, everything's
going to be OK; until then, let us use this whip and self-flagellate for not
being agile enough" mindset.

------
jandrewrogers
My own view is that something analogous to Conway's Law is at work: software
design reflects the constraints inherent in the software methodology used.

The problem is not software methodologies per se, it is trying to apply a
software methodology to software development where the priorities of the
methodology are fundamentally at odds with the requirements and goals of the
software being built. The root of the problem is the notion that there is one
software methodology that is efficient and productive for all possible types
of software development. I would argue that there is an optimal methodology
for most software but it is a different methodology for different types of
software.

If we discard the oft-argued proposition that a PHP website, an embedded
system, and a high-performance database kernel can -- and should -- all be
developed with the same software methodology then this entire discussion goes
away. A software methodology is a tool; they work best when you select the
best one for the job.

------
edejong
Seems to me yet another confirmation of the first rule of the Agile Manifesto:
"Individuals and interactions over processes and tools."

The methodology cannot be effective without creating the right personal
dynamics.

~~~
crdoconnor
I've seen many teams claim to have switched from 'waterfall' to 'agile', but
the only ones that haven't slipped back into a 'waterfall'-like mode after a
few bad releases have been ones that put a strong emphasis on tools -
specifically testing tools.

Good interpersonal dynamics are important, but the sheer level of irony that
the _first_ rule of the Agile Manifesto deprioritizes the _main_ thing that
will actually get you away from waterfall-like development is pretty
staggering, IMHO.

Never mind. Forget the tests. If you have a meeting at 11am every morning
where anybody who sits down is shouted at then you've done it. You're "agile".

~~~
edejong
The manifesto emphasises the importance of sharing the values of a method
before actually applying the method. It's the difference between wisdom and
dogma. Take tests, for example: blindly going for coverage metrics will be
much less effective than having a deep, shared understanding of the pros and
cons of testing. Knowing that your co-workers have a similar understanding of
the methodologies makes the methodology itself of lesser (or no) importance.

Personally, I am of the opinion that a strong emphasis on test-driven
development will, in the long run, cause waterfall-style development. Tests
are all about risk prevention rather than risk mitigation. Prevention
eventually becomes exceedingly expensive, whereas mitigation is all about
building robustness into the running system. Because of that, mitigation-based
systems, such as true microservices or actor systems, are inherently more
dynamic and cause less latency in development.

I don't understand your last statement. It seems to confirm my position: "...
anybody who sits down is shouted at ... ". The process (standing up, not
sitting down) is less important than good team dynamics (not getting shouted
at).

(edit: down-voters, please share why you down-vote! I'd like to know. Also,
please don't down-vote based on opinion, but on weakness of argumentation
instead.)

~~~
tome
> edit: down-voters, please share why you down-vote!

The one thing that I religiously downvote on HN is complaints about down
votes.

~~~
edejong
Well, at least I now know why you down-voted :) Sometimes I ask because I feel
a comment was properly written, non-offensive, and reasonably argued. I put
time and effort into finding the right words and thought through my
argumentation multiple times before writing it down. Since English is not my
mother tongue, I'd like to know if I caused a possible misunderstanding.
Hopefully others share my belief that down-voting should be reserved for
punishing abusive behaviour.

~~~
tome
I understand the desire for an explanation, but more often than not in my
experience a comment that has quickly been downvoted in a fit of
follow-the-leader will soon be voted back up into positive territory. You just
have to be patient :)

------
11thEarlOfMar
In the age of the Internet, I believe there is a way to conduct experiments
that would yield an answer based on data. Spitballing here, but how about a
kind of contest for 'points' where a statistically significant number of devs
volunteer to participate? They each provide a 'resume' (GitHub, LinkedIn,
CV, ...) to the system. The programming task is presented, the devs
self-assemble into teams, and they endeavor to complete the task.

As an example, let's say 100 devs jump in. The task is to create a simple
Android app (requirements statement provided) with a server back end, launch
it into the app store, support it for some period with bug fixes and
improvements, and then declare it '1.0 released' to wrap up the experiment.

What you'd wind up with is a variety of team sizes, a variety of team
experience, a variety of development systems used, a variety of outcomes. But
all building the same software.

The key would be that as many attributes of each team's efforts as possible
would need to be recorded and entered as data to be studied in search of
patterns.

Repeat this _n_ times and I believe valuable insights could be gained.

Rather than trying to control for all the variables of team size, experience,
method, you control for the end product being targeted and then look for
insights into the variety of approaches that teams took.

~~~
jonex
The problem with the observational approach is generally that it's really hard
to decorrelate the results. In your proposed experiment you control only for
the project itself. What if good developers generally prefer modern
programming languages, so they chose Roslin? Does that imply that Roslin is a
better language for programmers who are not as good? Does it even show that
they wouldn't have been even more productive using Java?

From the other direction, even if you get the value of controlling for the
project itself, that might also add some bias. Could be that for a project
with that setup waterfall actually works pretty well, but is it representative
of projects overall? Are most software projects comparable to developing a
simple Android app with a well defined specification up front?

I do agree that it would be good to do this kind of experiment, where multiple
teams get tasked with building similar systems, to figure out what works. But
I don't think it makes sense to actively avoid controlling for variables. That
would make the results very hard to interpret and much less usable.

------
mruniverse
Agile always seemed like a cargo cult.

When you don't understand how or why something works, this is how you go about
it. Let's make airplane-shaped things out of coconuts.

~~~
Joeri
Really being agile basically amounts to "use best judgment at all times". It
is highly unstructured because it provides absolute freedom. That means that
the only people who can actually be agile (have the experience and discipline
necessary), are the ones who don't benefit from being told to be so. Any time
a team is told to "be Agile" to fix their problems, that team lacks the
maturity to actually "be Agile".

------
snarfy
> But in terms of the only measurement that really matters—satisfying
> requirements on time and within budget—I haven’t seen any methodology
> deliver consistent results.

Good, Fast, Cheap. Pick two.

That's what you are doing when you are pitting requirements, a time frame, and
a budget against each other.

The first problem is that this is a company-wide process, not just software
development. The only thing development tells you is how long it will take
given the budget. Development doesn't define the requirements or the budget.

The third statement in the Agile Manifesto, which tends to get overlooked:

"Customer collaboration over contract negotiation"

That is entirely a business process and ultimately determines both the
requirements and the budget. It is something that is sorely lacking at most
companies, regardless of how hard their engineering department tries to follow
agile. It doesn't work without the full company buy-in.

~~~
emodendroket
In many software projects none of those are satisfied.

------
nickpsecurity
There’s a difference between methods that are shown with evidence to help
developers and methods that are merely marketed as such. The writeups on
methodologies should distinguish between these. It’s also possible these
people have thought about methodologies a long time without ever discovering
the ones that work. Fagan Software Inspections, Mills’ Cleanroom Software
Engineering, Meyer’s Eiffel Method, and Praxis’ Correct by Construction all
worked if we’re talking about developers delivering products in acceptable
timescale with low defects. They all let developers do their job in an
iterative way providing extra tools or restrictions that help ensure quality
and/or maintainability.

[http://www.mfagan.com/pdfs/software_pioneers.pdf](http://www.mfagan.com/pdfs/software_pioneers.pdf)

[http://infohost.nmt.edu/~al/cseet-paper.html](http://infohost.nmt.edu/~al/cseet-paper.html)

[http://se.ethz.ch/~meyer/publications/acm/eiffel_sigplan.pdf](http://se.ethz.ch/~meyer/publications/acm/eiffel_sigplan.pdf)

[http://www.anthonyhall.org/c_by_c_secure_system.pdf](http://www.anthonyhall.org/c_by_c_secure_system.pdf)

On the concurrency side, there were also SCOOP for Eiffel and Ravenscar for
Ada, which eliminated race conditions by design. Some methodologies in the
high-assurance sector were using tools like the SPIN model checker for it.
People spent a _long_ time talking about those bugs while some design methods
just removed them entirely. A lot less debugging and damage might have
happened in industry if the aforementioned methods had gotten way better
through industry investment.

[https://www.eiffel.org/doc/solutions/Concurrent%20programming%20with%20SCOOP](https://www.eiffel.org/doc/solutions/Concurrent%20programming%20with%20SCOOP)

------
seanwilson
> Try this thought experiment: Imagine two teams of programmers, working with
> identical requirements, schedules, and budgets, in the same environment,
> with the same language and development tools. One team uses waterfall/BDUF,
> the other uses agile techniques. It’s obvious this isn’t a good experiment:
> The individual skills and personalities of the team members, and how they
> communicate with each other, will have a much bigger effect than the
> methodology.

Another thought experiment: imagine getting two teams of programmers using the
same methodologies and everything else and expecting the results to be the
same. It's just not practical to perform studies like this because there are
too many variables.

~~~
gordaco
That's because you are using an extremely small sample; of course the
individual differences are going to matter. Now, get a random sample of 1000
teams using waterfall and 1000 teams using agile, and you'll get much more
meaningful data. The larger the number of samples, the more the individual
differences are going to be smoothed out.

Of course, not many people have the resources to do that kind of experiment.

~~~
seanwilson
> Of course, not many people have the resources to do that kind of experiment.

That's what I meant about it not being practical. Who would invest that amount
of money? There's infinite variations of different methodologies as well.

I'm not sure what the solution is but it's tiring seeing bad studies used to
promote certain approaches.

~~~
coldtea
> _That 's what I meant about it not being practical. Who would invest that
> amount of money?_

Tons of organizations and governments could.

20 ten-person teams per methodology * 5 methodologies = 1,000 people; for a
1-month project at $100,000 per person, that's 1,000 * $100,000 = $100M.

In the grand scheme of things this is an insignificant amount -- in a world
where businesses spend $15 billion on buying Instagram. A military could do
that kind of spending on a single airplane -- and such research could
potentially have a huge impact on (and yield huge savings for) future
soft-eng projects.

And it could even be subsidized or made tax-deductible. Or the compensation
could easily drop to something like $70K or $50K per month.

------
Pamar
I have a question: can someone describe something akin to what we call
“methodology” in a completely different field?

Does “scrum for surgery” exist? What is an equivalent of “waterfall” in
warfare?

Does something like this exist at all?

~~~
projectileboy
Not really, although in the book "The Checklist Manifesto", Atul Gawande
reports that having something akin to a "stand up meeting" before a surgery
has been shown to reduce the likelihood of errors occurring during that
surgery. And that brings us around to "agile" methodologies. I feel like most
people agree that the underlying value system outlined in the agile manifesto
made sense, and that many of the practices outlined can be useful, but at some
point (scrum, maybe?) a particular set of practices got bundled up and sold as
The One True Way of Doing Things, and then sanity went out the window. For
example, test-driven development can be a useful tool for thinking about how
to approach a problem, but of course it isn't the only way to write a program
(see Ron Jeffries' hilarious attempt to TDD a sudoku solver as a
counterexample). But sadly, many teams aren't in a position to think
critically about what they do or don't do - instead, they have to show that
they're "doing Agile".

------
latch
Software methodologies don't work because we're not getting the fundamentals
of software development right. Reorganizing your kitchen layout won't help
your restaurant if your chefs are still struggling to make scrambled eggs.

The most important thing any software team needs is proper logging, monitoring
and metrics. No matter how great your process and engineering culture, you'll
need logging, monitoring and metrics; things will go wrong. The worst part is
that this is relatively cheap and simple to do (at the scale that most of us
operate at), with huge rewards, and most teams still do it wrong: logging too
much noise, collecting metrics that show what's working rather than what's
broken, swallowing exceptions, and so on. This is the litmus test.
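
A minimal sketch of that litmus test in Python, using the stdlib `logging`
module; `process_order` and the counter dict are hypothetical stand-ins for
real business logic and a real metrics system:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")

# Simple in-process counters standing in for a real metrics backend.
metrics = {"orders_ok": 0, "orders_failed": 0}

def process_order(order):
    # Hypothetical business logic: reject non-positive amounts.
    if order["amount"] <= 0:
        raise ValueError("amount must be positive")
    return order["amount"]

def handle(order):
    try:
        total = process_order(order)
        metrics["orders_ok"] += 1
        return total
    except ValueError:
        # Count and log the failure instead of swallowing it silently.
        metrics["orders_failed"] += 1
        log.exception("order %s rejected", order.get("id"))
        return None

handle({"id": 1, "amount": 42})   # succeeds, increments orders_ok
handle({"id": 2, "amount": -5})   # fails, logged and counted, not swallowed
```

The point is not the specific tools but that failures are both visible (the
log line with a traceback) and countable (the failure metric).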

Next are automated tests. Unit tests, integration tests and fuzz testing. The
downside with this is that it takes a long time to master. Yes, it costs time
at first, but that's why you have senior developers, who should be able to use
tests to save time and help others learn from past mistakes (like too much
mocking).
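
What "use tests to save time" can look like without heavy mocking, as a
sketch; `apply_discount` is a hypothetical pure function used only for
illustration:

```python
import random
import unittest

def apply_discount(price, percent):
    """Hypothetical pure function: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_examples(self):
        # Unit tests: pin down known cases. Pure logic needs no mocks.
        self.assertEqual(apply_discount(100.0, 25), 75.0)
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_fuzz_invariant(self):
        # Cheap fuzzing: the discounted price never exceeds the original
        # (allowing 0.005 slack for rounding to two decimals).
        rng = random.Random(0)
        for _ in range(1000):
            price = rng.uniform(0, 10_000)
            percent = rng.uniform(0, 100)
            self.assertLessEqual(apply_discount(price, percent), price + 0.005)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
unittest.TextTestRunner().run(suite)
```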

Finally, code reviews and pair programming. Almost every line of code is an
opportunity to teach and to learn. No methodology or tool can help if you hire
junior programmers and don't do pair programming (or some other really
involved mentoring, but I don't know of any).

Technical debt is real. Most of the time, the reason people don't have time to
do things right now is that they didn't take the time (or didn't know how) to
do things right in the first place.

~~~
sgt101
What metrics do you use for software?

Most of the struggle with methodology research is due to the difficulty of
objectively measuring code productivity or quality.

------
rafiki6
They don't work because it seems most places treat software development as a
function separate from whatever the company does (with the exception of
software companies which rarely follow one "methodology" but just do what
works for them). Software is a way to tell a computer how to produce an
outcome. The more important thing is how that outcome fits into the business,
not the method of reaching it.

------
a_imho
I don't particularly think the methodology merchants are in the business of
improving software development. Their bottom line is affected by selling
trainings, consultancy and certificates, thus the single metric of a
methodology working is whether and how fast it can grow its user base, keep
them happy and coming back for more. It might loosely correlate with improving
software quality, but that is entirely secondary.

------
cies
I like the article and mostly agree, though I've come to look at it a little
differently.

Agile/waterfall/etc. are project management methodologies used to manage
software development projects. Agile in particular has limited use beyond
software development.

Methods of developing software are things like: domain-driven design, data
models first, TDD, etc.

Then there are programming paradigms: OOP, FP, Actor based, etc.

So the "techniques" the article lists together are of different types. All of
them have to be evaluated against the people who have to use them (don't force
OOP on an FP team, or vice versa) and against the type of problem to be solved
(TDD is less useful for a simple UI project than for a complex algorithm
involving time and lots of corner cases).
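
As a sketch of why tests-first pays off on corner-case-heavy logic involving
time, here is a hypothetical date helper (`days_between` is invented for
illustration) with its corner cases pinned down up front:

```python
from datetime import date

def days_between(start, end):
    """Hypothetical helper: inclusive day count between two dates."""
    if end < start:
        raise ValueError("end before start")
    return (end - start).days + 1

# Corner cases written down first, TDD-style, before trusting the code:
assert days_between(date(2024, 3, 1), date(2024, 3, 1)) == 1    # same day
assert days_between(date(2024, 2, 28), date(2024, 3, 1)) == 3   # leap day
assert days_between(date(2023, 12, 31), date(2024, 1, 1)) == 2  # year boundary
try:
    days_between(date(2024, 1, 2), date(2024, 1, 1))
except ValueError:
    pass  # reversed range is rejected as expected
else:
    raise AssertionError("expected ValueError")
```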

------
ssijak
"Whether a methodology works or not depends on the criteria: team
productivity, happiness, retention, conformity, predictability,
accountability, communication, lines per day, man-months, code quality,
artifacts produced, etc. Every methodology works if you measure the right
thing. But in terms of the only measurement that really matters—satisfying
requirements on time and within budget—I haven’t seen any methodology deliver
consistent results."

You start with the premise that the requirements, time constraints and budget
are all magically set right before the project begins.

~~~
maxxxxx
That's actually a good point. In my company most projects start with a fixed
budget, a hard deadline and unclear, always-changing requirements. Everything
is already set before the dev team gets involved.

~~~
JakeAl
That's a failure of the project manager. Before any deadline is set, the dev
team leads are supposed to meet with all of the leads on the project, who in
turn meet and come to a consensus with their teams. The results of the
negotiations are supposed to start at the top with the idea and goals, go all
the way down to the people who will be executing them, and then go all the way
back up to the top, over and over, until a consensus is reached.

Things may change, but in proper/formal project management a change management
process is in place where the leads must come to a consensus before any change
is approved. Good management enforces this process, and good leadership at any
level makes everyone above and below them aware of the implications of what
everyone wants. Everyone has to be on board, working towards a common goal,
and in agreement on how best to reach that goal.

The biggest problem I see is at the organizational level, where the project
manager does not have the authority to control the process. When that
authority is missing, the people above and below them are not protected from
the implications and things go bad for everyone. This is taught in PMP
courses, but what isn't taught is what happens when the organization doesn't
give the project manager authority and instead turns them into a project
coordinator or project expediter. A formally trained, knowledgeable and
experienced project manager with the authority to control the process can and
will make sure everyone is on board, but they can't do that when their
authority is undermined by the organization, client, or teams. Likewise for
the team leads. That is the definition of cooperation.

------
andy_ppp
I find if you have excellent programmers who enjoy the problem they are
working on and are put under the right amount of pressure you get great
software no matter what process is followed.

~~~
qwer
In that scenario I bet they're even more effective without the condescending
"manager" adding nothing.

------
rtpg
The thing that I feel these methodologies solve is reminding developers of the
users.

Ultimately a lot of people tend to lose track of higher level objectives when
working on the (admittedly complex at times) implementation details. This is
probably the biggest productivity killer in the business.

How many of us have had that first demo with people external to the dev team,
only for all the feedback to be super obvious things that could have been
caught before a single line of code was written?

------
coldtea
Because software has uncertainty (about requirements, environment, tolerances,
complexity) and creative elements thrown in.

It's not a regular pipeline kind of workflow, like some Taylor-inspired
assembly line, or regular old civil engineering.

Besides, all those methodologies are unscientific BS invented by consultants,
not something derived from actual studies (even when some comparative studies
are involved, they are laughable in scope by scientific standards).

~~~
TulliusCicero
Exactly. Software development is fundamentally unpredictable because you're
always making something that's new, at least to the team doing the making.

After all, if you were repeating yourself, you'd just re-use the methods and
classes and packages you'd already written; worst-case, you could copy+paste
the code and tweak it.

And since you're doing something novel, of course you're not going to be able
to predict how long it will take, beyond extremely broad guesses.

------
zzzcpan
> satisfying requirements on time and within budget—I haven’t seen any
> methodology deliver consistent results

There seems to be a false premise here. Methodologies don't deliver consistent
results on time and within budget because they attempt to help figure out the
actual software requirements, so that a real problem can be solved rather than
useless requirements satisfied.

------
nickbauman
The author misunderstands what methodology is. TDD is a single practice, not a
methodology. OOP is a way of structuring code, not a methodology.

What I have seen work is methodologies adopted to get the benefits of what the
methodology actually delivers, not as a checkbox to say "we are X".

~~~
gregjor
Thanks for the comment. I'm the author of the original article. I was
surprised to find this old piece on the HN front page today.

With all respect, I don't misunderstand TDD or OOP. I agree that those aren't
methodologies in the strict sense. But rigid adherence to OOP design or TDD
can paralyze a team and focus development on goals that aren't the customer's
priorities. OOP and TDD can influence how the team works and what shape the
project takes just as much as waterfall or agile. When I read articles
claiming that TDD is the sure path to reliable development, that's a
methodological claim, not a technical practice.

~~~
nickbauman
I think we're really in "violent agreement" here. Rigid adherence to these
"things" (not necessarily methodologies) never works. The benefits of what
these "things" offer, to the degree that they are understood by the
practitioners, are real. But they are often (perhaps categorically)
misunderstood, applied haplessly and held to claims they never made in the
first place.

~~~
gregjor
Yes, we're in violent agreement.

------
watertom
Software is no different from any other engineering discipline. We absolutely
do have software development methodologies that work; we have simply decided
to trade quality for speed.

Better, faster, cheaper, you get to choose 2 and only 2, we've selected faster
and cheaper.

------
Brian_K_White
This article is unreadable on my phone. All I see is the immovable side-bar
filling 80% of the screen with "hire me" in the middle. Ah, no thanks!

------
didibus
[http://programming-motherfucker.com](http://programming-motherfucker.com)

The one true methodology. Preach!

------
luord
I spend more time now on asinine meetings than writing code. Must be some kind
of ironic hell. Needless to say, I agree.

------
gregjor
Some great comments here. Does anyone mind if I link to this thread from the
original article?

------
sheeshkebab
It sounds like the author is not too fond of code reviews, test approaches,
and linting rules. These things are based on personal preference and
semi-religious beliefs, often leading to conflicts, even on a two-person team.

------
charlysl
No silver bullet

------
tboyd47
> Maybe social skills come harder to programmers than to other people (I’m not
> convinced that’s true)

This is, in fact, the answer.

You haven't truly SEEN office politics until you've worked on a team of
developers. I'm shocked a reality show hasn't come out yet about software
development. It would make Survivor look like Family Matters.

~~~
ris
I think this is bullshit. I tend to get on pretty damn well with most of my
co-developers. The friction comes with managers trying to fit us into weirdly-
shaped boxes.

~~~
tboyd47
What do you mean, "weirdly-shaped boxes?"

I get along with my co-workers too, but social skills are more than just
playing nice together.

