
Choose Boring Technology - luu
http://mcfunley.com/choose-boring-technology
======
aaronbrethorst
Every time I spin up a new project, I try to answer the following question
honestly:

    "Am I using this project as an excuse to learn some new
    technology, or am I trying to solve a problem?"

Trying to learn some new technology? Awesome, I get to use _one_ new thing.
Since I already understand every other variable in my stack, I'll have a much
easier time pinning down those 'unknown unknown' issues that invariably crop
up over time.

Trying to solve a problem? I'm going to use what I already know. For web
stuff, this'll be a super-boring, totally standard Rails web app that
generates _HTML_ (ugh, right? How _last century_), or maybe a JSON API if I'm
trying to consume its output in a native app. For mobile stuff, this'll be an
Objective-C iOS app.

Waffling about it and saying 'well, I am trying to solve a problem, and I
think maybe a new whiz-bang technology is the best way to do it' is the
simplest path to failing miserably. I've watched incredibly well-funded
startups with smart people fail to deliver a solution on time because an
engineer was able to convince the powers that be that a buzzword-laden
architecture was the way to go.

You don't know what the 'right' solution is unless you understand the tools
and technology you'll use to deliver that solution. Anything else is just
cargo-culting.

~~~
shebson
I generally agree with your post, but I think there is a critical difference
between your argument and that of the blog post. Of course teams are more
productive with technologies they know, but that isn't necessarily an
arbitrarily-defined "boring" technology.

To pick on one specific example in the post: Node.js is popular enough that
there are lots of teams and engineers that are most comfortable and productive
working with it. For these teams, choosing Node certainly wouldn't cost an
innovation token, while deciding to build some service in Python, Ruby or PHP
(if we take at face value that this is more "boring") may end up being more
costly.

~~~
wdewind
> For these teams, choosing Node certainly wouldn't cost an innovation token,
> while deciding to build some service in Python, Ruby or PHP (if we take at
> face value that this is more "boring") may end up being more costly.

It absolutely does if it is only one team in the organization. If the entire
organization is using PHP and, let's say, you acqui-hire a team built on
NodeJS, then unless they are doing something fundamentally different they
should learn PHP and push code into your existing infrastructure. That way
you have one way to deploy, one type of application server to support, one set
of gotchas relevant to your domain, one set of QA tools, etc. Building good
products is about far more than just shipping the product; it's also about the
cost of long-term support. Because what you are doing is fundamentally
automation, the less you have to manage, the more benefit you get from that
automation, and the more you can forget about it and focus on shipping other
things.

What you are describing is pretty much definitionally local optimization and
is exactly what you shouldn't do in large engineering organizations.

~~~
curun1r
> It absolutely does if it is only one team in the organization. If the entire
> organization is using PHP and, let's say, you acqui-hire a team based on
> NodeJS, unless they are doing something absolutely fundamentally different
> they should learn PHP and push code in your existing infrastructure.

Substitute PHP with Java, and you've described the situation at my company
exactly. The acquiring company had a legacy Java application and a lot of
automation invested in making that platform work. The acquired company was a
NodeJS shop that was using it long before this article or the comments in this
thread would advise (this was pre-npm days). To give you an idea of the
numbers, the acquired team was 4 engineers as compared to the 100 engineers of
the acquiring company (50/50 split with an off-shore development team). I
won't say which side of that divide I was on or go into the full year of
culture shock that we went through, but fast forwarding these past 4+ years
and now the bulk of the company's main product has been re-written in Node and
developers are significantly more productive. Features that used to take
months to push out in complex releases using a convoluted process of
branching, meetings and tons of arguments are now delivered continually using
the Github flow with little-to-no drama and far fewer production
bugs/downtime. Our customers have never been happier with us and developers
have never been happier to work here. All of this came from the fact that the
CMO who advocated for the acquisition supported the small team of 4 in every
effort to spread the team's technologies and practices across the larger
organization. Having been in organizations that performed at a much higher
level, he recognized just how much opportunity there was for improvement, and
that the team of 4 had the vision to create the necessary blueprint for the
rest of the organization to follow. It wasn't
easy, and most of the developers who were here at the beginning of the shift
are no longer part of the company. But it worked...and while a sample size of
one is hardly conclusive, I have a hard time agreeing with your point having
seen it play out so well in the real world.

~~~
wdewind
> Features that used to take months to push out in complex releases using a
> convoluted process of branching, meetings and tons of arguments are now
> delivered continually using the Github flow with little-to-no drama and far
> fewer production bugs/downtime.

I have a really hard time believing that Java was the culprit and Node the
savior rather than the organizational stuff you mention...

~~~
curun1r
It was most certainly the organizational stuff that was the main problem. But
if, as the poster I replied to suggested, the small team had simply started to
submit Java code instead of their Node code, that organizational stuff
wouldn't have changed.

Much like a change of location can help break someone's self-destructive
habits, the change of platform helped break a lot of the toxic organizational
habits that had built up over the years. The shift could have been to many
other platforms; if the existing platform had been something other than Java,
a shift to Java could have improved the situation as well. The important part
was the new mindset and practices around more frequent, frictionless
development and delivery.

I do think that it's easier to have that mindset when you use Node rather than
Java, but Java has gotten better in this regard over the past few years.

~~~
wdewind
You're talking about an organization whose existing infrastructure is bad.
This thread is about an organization whose existing infrastructure is good
but not 100% optimal for NewProjectX, and whether or not it makes sense to
use a better-fitting technology for NewProjectX.

------
saidajigumi
The innovation tokens concept seems to be a stand-in for both good engineering
judgement and iterative exploration of the design/implementation space _before
committing to a path_. I've been in several (successful) startups that
leveraged both of these principles to great effect.

Both "innovative" and "boring" can shoot you in the foot. TFA focuses on
"innovative" as a risk, but that's just daft. This industry is constantly
rolling its lessons learned back into its shipped and shared technology. Ever
gone back to a pre-Rails-era web/backend codebase and screamed in horror? Ever
gone to a "new" shop that never assimilated those lessons, used "boring"
technology (thus dodging their shared/encapsulated forms), and recreated the
old horror? (Personally: check and check.)

Trite policies are not a replacement for spending dedicated up-front (and
occasional ongoing) time cycling between 1) evaluating/understanding your
problem, 2) researching the current state of the art {processes, technology,
etc.} related to your problem, and 3) using good engineering judgement to
choose the best path then-and-there.

~~~
Retric
I have seen some incredibly good 'legacy' codebases written with very old
tech. There is a huge advantage when someone works with a technology for 10+
years, knows all the rough edges to avoid, and bakes that knowledge into
their design.

Java may be the worst example of a 'Blub' language I can think of. However, I
recently spoke with a team which had an awesome response to all the things I
hated about the language. The closest analogy I can think of is mechanics
working on popular cars: they get to the point where they can diagnose
problems in seconds because they know the kinds of things that break. Cars
come with plenty of sensors to help diagnose problems, but in this case
familiarity often beats better tools.

~~~
Johnny_Brahms
We are still a freepascal shop and it works flawlessly. Every now and then
someone wants to do a rewrite in something fancy, but we really have nothing
to gain. The current codebase is actually quite pretty, compiles fast, and is
easy to maintain.

New hires don't need more than a week or two to get the gist of everything.
The web frontend has been migrated from perl though. It got the job done, but
it wasn't pretty and nobody dared touch it.

------
MCRed
This seems to be written from the "engineers are monkeys" perspective. As if
they spend their time flinging poo and you really need "solid", boring
technology that's already well designed so the poo doesn't mess it up.

You shouldn't avoid node.js or MongoDB because they are "innovative" -- you
should avoid them because they are poorly engineered. (Erlang did what Node
does, but much better, and MongoDB was a poorly engineered global-write-lock
mess that is probably better now but whose hype far exceeded its quality for
many years.)

The "engineers are monkeys" idea is that engineers can't tell the
difference -- and it seems to be supported by the popularity of those two
technologies.

But if you know what you're doing, you choose good technologies -- Elixir is
less than a year old, but it's built on the boring 20 years of work that has
been done in Erlang. Couchbase is very innovative, but it's built on nearly a
decade of CouchDB and memcached work.

You choose the right technologies and they become silver bullets that really
make your project much more productive.

Boring technologies often carry a performance cost, in time-to-market terms.

Really you can't apply rules of thumb like this and the "innovation tokens"
idea is silly.

I say this having done a product in 6 months with 4 people that should have
taken 12 people 12 months, using Elixir (not even close to 1.0 at the time)
and Couchbase, and trying out some of my "wacky" ideas for how a web platform
should be built -- yes, I was using cutting-edge new ideas in this thing that
we took to production very quickly.

The difference?

Those four engineers were all good. Not all experienced -- one had been
programming for less than a year -- but all good.

Seems everyone talks about finding good talent and how important that is but
they don't seem to be able to do it. I don't know.

What I do know is: don't use "engineers are monkeys" rules of thumb -- just
hire human engineers.

~~~
wdewind
Having come from Etsy and witnessed the success of this type of thinking first
hand, I think you missed the point of the article and I think you are using a
tiny engineering organization (4 people) in your thinking, instead of a medium
to large one (120+ engineers).

The problem isn't "we are starting a new codebase with 4 engineers; are we
qualified to choose the right technology?" It's "we are solving a new problem,
within a massive org/codebase, that could probably be solved more directly
with a different set of technologies than the ones the rest of the company is
using. Is that worth the overhead?" And the answer is almost always no. I.e.:
is local optimization worth the overhead?

Local optimization is extremely tempting no matter who you are, where you are.
It's always easy to reach a point of frustration and come to the line of
reasoning of "I don't get why we are wasting so much time to ship this product
using the 'old' stuff when we could just use 'newstuff' and get it out the
door in the next week." This happens to engineers of all levels, especially in
a continuous deployment, "Just Ship" culture. The point of the article is that
local optimization gives you a tiny boost in the beginning for a long-term
cost that eventually moves the organization in a direction of shipping less.
It's not that innovative technologies are bad.

> But if you know what you're doing, you choose good technologies

No, if you know what you are doing you make good organizational decisions. It
matters less what technology you use than that the entire organization uses
the same technology. Etsy has a great engineering team and yet the entire site
is written in PHP. I don't think there is a single engineer working at Etsy
who thinks PHP is the best language out there, but the decision to be made at
the time was "there is a site using PHP, some Python, some Ruby, etc.; how do
we make this easier to work on?" Of the three, Python and Ruby are almost
universally thought of as better languages than PHP, but in this case the
correct decision was picking the worse technology, because more of the site
was written in it and the existing infrastructure supported it more
completely, so as an organization and a business we could get back to shipping
products more quickly by all agreeing to use PHP. Etsy certainly does not
think of its engineers as monkeys; quite the opposite.

~~~
mcfunley
> I don't think there is a single engineer working at Etsy who thinks PHP is
> the best language out there

Nota bene: the creator of PHP works for Etsy.

(Hey thanks for the comment, Will)

~~~
panic
Does _he_ think PHP is the best language out there?

~~~
AnonJ
He probably doesn't think PHP is the best language. But PHP is definitely the
language he's most efficient in.

------
api
Case in point:

I recently went back to SQL from NoSQL after I realized that a lot of NoSQL
was just reinventing wheels. There might be cases where NoSQL databases
shine, but in most use cases SQL is better. It's _slightly_ more work up
front (only slightly), but it pays off later in keeping your data organized
and making it easy to query. It's a great example of a very old technology
with excellent longevity. That's in part because it's built on math and logic
(set theory, etc.). There are universal mathematical/logical truths encoded
elegantly into the structure of the SQL language, and they describe things
you are going to need.

Your tools shouldn't be the exciting thing. The thing you are building with
them should be the exciting thing.

~~~
collyw
I realized this before I even used NoSQL (having played about with BerkeleyDB
and Tokyo Cabinet before the NoSQL name got popular).

------
antirez
By linking Aphyr's "call me maybe, Redis" article as an example of possible
troubles with new technologies, the author of this article shows that he does
not actually understand the failure modes of MySQL itself very well, which are
identical to those of Redis failover (and of every other master-slave system
with asynchronous replication, more or less). In theory this contradicts the
whole article, but I actually think the idea _happens_ to be reasonable, just
not formulated very well. The point is not what is new and what is old; it is
that switching to new technologies without a good reason is a useless risk. If
you analyze the failure modes, and the strengths, of what you used in the
past, and there is something new that performs much better, IF you are a good
programmer you can analyze it, test it for a few days, read the docs, check
some code, and understand whether it is a better fit. This is why it's always
the set of the best programmers that adopt the new technologies that later
turn into the next "obvious" stack: they are brave not because they are crazy,
but because they can analyze something regardless of whether it is new or old.

------
threefour
I love the way Maciej Cegłowski describes his setup at Pinboard:

"Pinboard is written in PHP and Perl. The site uses MySQL for data storage,
Sphinx for search, Beanstalk as a message queue, and a combination of storage
appliances and Amazon S3 to store backups. There is absolutely nothing
interesting about the Pinboard architecture or implementation; I consider that
a feature!"

[https://pinboard.in/about/](https://pinboard.in/about/)

~~~
sigil
I also immediately thought of Maciej and Pinboard. He expands a bit in this
interview [1]:

> _Can you explain why you think that's a feature?_

> I believe that relying on very basic and well-understood technologies at the
> architectural level forces you to save all your cleverness and new ideas for
> the actual app, where it can make a difference to users.

> I think many developers (myself included) are easily seduced by new
> technology and are willing to burn a lot of time rigging it together just
> for the joy of tinkering. So nowadays we see a lot of fairly uninteresting
> web apps with very technically sweet implementations. In designing Pinboard,
> I tried to steer clear of this temptation by picking very familiar, vanilla
> tools wherever possible so I would have no excuse for architectural wank.

[1]
[http://webcache.googleusercontent.com/search?q=cache:98zuG6u...](http://webcache.googleusercontent.com/search?q=cache:98zuG6u6-U4J:readwrite.com/2011/02/10/pinboard-creator-maciej-ceglow+&cd=1&hl=en&ct=clnk&gl=us)

~~~
threefour
"architectural wank" -- that's one for the dictionary!

------
pnathan
This is extremely good and reflects the author's experience.

Another way to think about it is this: You get to change three axes in a
product: new underlying technology, new product, or new process.

- Choosing one will allow you to progress with likely success.

- Choosing two opens you up to non-trivial risk.

- Choosing three means you will likely fail in this project.

There's a nifty talk by Steve McConnell about Software Engineering Judgement -
[https://www.youtube.com/watch?v=PFcHX0Menno](https://www.youtube.com/watch?v=PFcHX0Menno)
- that goes into this kind of analysis.

You can debate which axes matter - you can debate the weighing and scaling of
them - but you can't get away from the conclusion that "pushing all your risk
boundaries at the same time equals failure". As a matter of fact, this is
structurally identical to the famous "fast, good, cheap" triangle.

n.b., this analysis really starts hitting home in multi-team environments,
say, over 50 engineers.

------
ignostic
I understand how someone might believe in "innovation tokens," but it's really
just a confused way to look at ROI. There's no inherent cost in some
innovation, though. If our programmers already know an "innovative"
programming method, there's no cost in doing things that way.

The author seems to be conflating the cost of innovation with the cost of
doing something you're less familiar with, which are not necessarily the same
thing. The risk of chasing shiny new objects is real, but sometimes those
shiny objects can actually _reduce_ the cost and time needed to accomplish a
goal (like an MVP or a new version).

Sometimes it's worth innovating if you already have experience in the area.
Sometimes it's worth innovating even if you have to learn and try new things.
Sometimes the time/monetary cost of innovation is 0, and sometimes it's so
high that you shouldn't innovate even if it improves your product.

This idea of limited innovation resulting in cumulative costs is overly
simplistic. The smart founder will recognize the difference between
innovations that will yield net returns and those that won't.

~~~
ZenoArrow
> "This idea of limited innovation resulting in cumulative costs is overly
> simplistic. The smart founder will recognize the difference between
> innovations that will yield net returns and those that won't."

I agree with you completely. Furthermore, I'd add that the view on maintenance
is too simplistic as well. Effective maintenance requires more than just a
tech stack whose limitations are known; you'll also want something testable
and refactorable. Ballooning codebases are a real problem, and sometimes the
smart move is to clean up the cruft. If you're smart about integrating new
tech into your stack, there's no reason you can't end up with a solution that
is both more robust and more efficient.

Then there's the whole scaling issue. Perhaps the mode du jour is to assume
that increased server costs (regardless of where you're hosted) are just a
necessary part of scaling a website to more users, but rethinking your tech
stack can help keep those costs under control. Perhaps this is a decision that
can wait until you have a decent userbase, but it's still a good reason to be
open-minded about what benefits a new solution could bring.

------
sheepmullet
This article is a good _starting point_ to talk about technology choices. But
there are many issues with applying the advice in the real world.

First, he is intertwining two separate issues: limiting tech choices in an
organisation, and incorporating new technologies. Keep in mind that you can
have separate strategies for both.

Secondly, there is no notion as to how big a change a token is worth.
Obviously switching languages is a much bigger change than switching caching
libraries.

Thirdly, there is no mention of project size. Should a 3-month project get the
same tokens as a two-year project? This year we have created ~300
microservices. If each were allowed 3 tokens, we would have 900 new tech
changes this year alone. That's unmanageable.

Fourthly, what is your organisational strategy and culture? If an engineer
prototypes in a new language is that a problem because it is seen as wasteful?
Perhaps it is something that will make the other devs jealous? Or is it
considered an investment in the company and a risk mitigation strategy? Do you
have the kind of engineers and tech leads who will do a lot of this
prototyping and experimentation on their own time?

~~~
NeutronBoy
Unfortunately I think the answer to all of these is 'it depends'. How much
inertia does change meet in your organisation? That will help place a value
on the tokens.

For your third point specifically, I think taking a pragmatic view is best.
You mentioned you created ~300 new microservices this year. I imagine they're
all based on the same pattern, so perhaps your tokens apply to that pattern
rather than to each individual project (e.g. you get to change the stack for
future microservices). On the other hand, at a rate of at least one new
service per day, the current setup is obviously pretty efficient for you, so
why change it unless necessary?

------
austenallred
The "innovation tokens" concept expands even past technology.

Want to innovate in the way your board is structured or remove standard
protections from the term sheet? Or even set up your Twitter account in this
never-before-seen way? Want to remove the idea of management, or rethink the
way offices work? You lose an innovation token.

------
_kerbal_
The site is currently down for me (503). While we're talking about boring
technology, please consider hosting your blog on a static file host + CDN. It
will be faster, easier to maintain, and virtually impossible to take down.

[0] [https://eager.io/blog/build-static-websites](https://eager.io/blog/build-static-websites)

~~~
freehunter
I see people recommend static sites in general, but I've recently done some
research and couldn't find a static site generator that can give me a WYSIWYG
editor in my browser. What I need is a blog that lets me edit posts from a PC,
a tablet, or a phone, including picture uploads and one-click publishing.
Everything I found either left the editing portion up to the user or said
"just use WinSCP to upload your HTML/markdown".

I just went with Wordpress. My personal blog is not my job; I just want to
write down my thoughts. What specific technology would you recommend to
generate a static blog with a WYSIWYG editor and picture uploads, on my own
server (not S3 or some proprietary paid hosting)?

~~~
qznc
You could try to use Wordpress for generating a static blog [0]. ;)

More seriously, I work in vim+git all the time, so managing my blog with it
feels natural to me. Editing in my favorite editor is more important than
drag&drop image upload for me.

[0] [https://wordpress.org/plugins/really-static/](https://wordpress.org/plugins/really-static/)

~~~
worklogin
Just like the author, I don't have time to learn the ins and outs of new
"local optima" technologies all the time. I want a static site generator that
"just works". So yes, give me a Wordpress-style GUI for creating a blog, then
"compile" it to a static site, then deploy.

Seems like all the static site generators have lots of directory structure
conventions and hoops to jump through for simple things like pagination and
dates.

~~~
afarrell
And at this point we are really talking about what fits naturally in our own
hands. I use a documentation tool (mkdocs) for my blog, but that's because
like GP, I prefer working in git+vim.

------
grandalf
Ironically, Rails is now in the category of "boring technology" but each major
version introduces enough breaking API changes that many apps never get
updated. So all the pain of spending a token and little of the pleasure.

With smaller, more loosely-coupled modules, one can spend a fraction of a
token here or there and still revert back to the boring way when necessary.

~~~
frandroid
I'd argue that Rails is in the business of making new technology boring, so it
kind of sits at the edge of both; hence the breakage.

~~~
grandalf
That's true. Though it's just monolithic enough that the breakage can be
painful.

------
ytimoschenko
Sticking with the same set of technologies is a premature death for your
career as a programmer.

The whole article builds on the point that people tend to fail more when they
are using new tools. That point is false. In reality, when you use wrong but
'accustomed' tooling in an inappropriate situation, you end up writing code
that you would never write if you had chosen the right tools. You are
effectively reinventing the wheel.

You also have this idea of 'innovation tokens' that builds on a static
representation of the weight of a new technology in a project. That is
ridiculous.

There is no definition of ’boring’ in this article. I don't understand why you
call PHP, Postgres and Cron ’boring’. What is ’interesting’ then?

It seems like you have made a wrong choice while thinking about the problem.
The problem is clear: people fuck up projects by using modern, hyped
technologies that are inappropriate for the project's domain. They are just
as wrong as you are.

~~~
collyw
On the one hand I agree with you, but having looked at some CVs recently, I
see people who list every language and web framework under the sun. If you
have learned 10 new frameworks in the last year, you can't have any in-depth
knowledge of them.

------
cubano
This insightful article should be subtitled "The difference between seeing
things from a company-centric vs. developer-centric POV."

My hat's off to its author for having the courage to write it.

------
tie_
3 innovation tokens? The supply is fixed for a long while? People on
_HackerNews_ of all places buying that?

It's plain wrong. Innovation is good for any kind of organization, if done
properly. What the author should focus on is the lack of _agility_ that
prevents companies from experimenting and failing quickly. It's not the
innovative technology that gets you in the end, it's your inability to
evaluate/adopt/discard fast. Granted, that ability is hard to find in largish
organizations, but willingly limiting your innovation sounds like a recipe
for a slow death. It's like a gentleman boxer from the 19th century limiting
himself to jab, cross, and hook while entering a modern MMA fight.

~~~
SwellJoe
So, call them "agility tokens". You've still got a limited supply when it
comes to trying out new languages, new databases, new whatever. If you've got
ten years Python and MySQL experience, and 95% of your codebase is in Python
with data stored in MySQL, what do you gain and what do you lose by
introducing Node.js and MongoDB into the mix? Sometimes it's worth the
trade...other times it's not. But, Node.js and MongoDB is _probably_ not going
to provide enough of a productivity boost to make up for the costs of
maintaining two codebases, two build/test/deploy environments, two databases,
etc. You're making a trade; sometimes it's beneficial (usually long term), and
often it's not (usually short term).

In short, yes, I'm buying this. I think it's a perfectly sensible analogy; a
somewhat leaky abstraction, if you will, since none of us actually have any
"tokens" that we are trading in for a new database. But, the meaning is clear
to me, and I can't find fault in it.

~~~
tie_
> Sometimes it's worth the trade...other times it's not.

That's exactly what the article disputes - "It's not worth the trade if you
had 3 of those already".

Sure, you could drown in meaningless (for your project) new technologies, and
this is a risk you should be aware of. If that's the meaning of the article
that you are referring to, I agree it is sane (and also widely accepted). But
that's not what the article actually _says_. What it says is that you get a
specific number of shots at new technologies, and that number is limited by
time/growth.

------
chetanahuja
[https://consul.io/](https://consul.io/) is mentioned as an "exciting"
technology. What is a "boring" alternative for this... that is, a
multi-datacenter service discovery/health-check/config-distribution software
that'll "just work"?

~~~
meatmanek
You can try using DNS with dynamic zones as a simple service discovery
mechanism (sharing one master with all your environments), but you'll soon
find out that:

- healthchecking really is a good idea in service discovery
- clients are awful about refreshing state from DNS
- single-master systems are a bad idea in a large environment
- DNS replication is finicky; DNS caching is slow
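
To make the DNS-refresh pitfall concrete, here's a toy client-side cache
(names and addresses invented, the resolver is a stand-in function): a
correct client must re-resolve once the TTL expires, which is exactly the
step many clients skip.

```python
import time

class TTLCache:
    """Toy client-side cache: records are re-resolved once their TTL
    expires, so a failover that moves a name to a new address is seen."""

    def __init__(self, resolve, ttl_seconds):
        self.resolve = resolve      # function: name -> address (stand-in)
        self.ttl = ttl_seconds
        self._cache = {}            # name -> (address, expiry time)

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        hit = self._cache.get(name)
        if hit and now < hit[1]:
            return hit[0]           # still fresh: serve from cache
        addr = self.resolve(name)   # expired or missing: re-resolve
        self._cache[name] = (addr, now + self.ttl)
        return addr

# Fake resolver whose answer changes between calls, the way a
# failover would move a service to a new address.
answers = iter(["10.0.0.1", "10.0.0.2"])
cache = TTLCache(lambda name: next(answers), ttl_seconds=30)
print(cache.lookup("db.internal", now=0))    # resolved: 10.0.0.1
print(cache.lookup("db.internal", now=10))   # cached:   10.0.0.1
print(cache.lookup("db.internal", now=31))   # expired:  10.0.0.2
```

A client that caches the first answer forever keeps hammering 10.0.0.1 after
the failover, which is the "awful about refreshing state" behavior above.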

Puppet with puppetdb can sorta fill this gap, too, as long as you don't need
fast convergence (or fast puppet runs, if your puppetdb is more than a few
milliseconds away from any of your nodes).

Consul may be new, but it's built on really solid ideas and technologies. You
can read papers[1][2] about the underlying technologies to get a sense for how
Consul will fail. I'd like to think that counteracts some of the problems you
get with newness.

[1]
[https://ramcloud.stanford.edu/raft.pdf](https://ramcloud.stanford.edu/raft.pdf)
[2]
[https://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf](https://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf)
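
And for intuition on the healthchecking point, the health-check half of what
a system like Consul provides can be sketched as a toy in-process registry
(all names invented; a real system gossips this state across nodes rather
than keeping one dict):

```python
import time

class ToyRegistry:
    """Minimal stand-in for health-checked service discovery: instances
    heartbeat periodically, and entries that go stale drop out of the
    answer, so clients only see addresses believed to be alive."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._last_seen = {}  # (service, addr) -> time of last heartbeat

    def heartbeat(self, service, addr, now=None):
        self._last_seen[(service, addr)] = time.time() if now is None else now

    def healthy(self, service, now=None):
        now = time.time() if now is None else now
        return sorted(addr for (svc, addr), seen in self._last_seen.items()
                      if svc == service and now - seen <= self.ttl)

# Two instances of "api" register; only the one that keeps
# heartbeating is returned once the other's entry goes stale.
reg = ToyRegistry(ttl_seconds=10)
reg.heartbeat("api", "10.0.0.1:8080", now=100)
reg.heartbeat("api", "10.0.0.2:8080", now=100)
reg.heartbeat("api", "10.0.0.2:8080", now=109)
print(reg.healthy("api", now=111))  # ['10.0.0.2:8080']
```

Plain DNS gives you the lookup half but nothing like the staleness check;
that's the gap Consul's agent health checks fill.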

------
beat
This also makes me think of the problem of legacy code. Once you've written an
app, it's feature-complete, and it's profitable, expending effort to rewrite
it in a new tech is not only a questionable value proposition, it can be
actively dangerous. Replacing "ugly but works" with "beautiful but fails" is
not good engineering.

~~~
xamuel
Plus, if you're going with some trendy new framework that emerged in the last
month, chances are that when you do need maintenance, no-one'll be around who
wants to work on it.

------
code_reuse
Why not let the data make the technology choices for you? The way I go about
making technology choices is by examining the data that I'll be working with,
in conjunction with the data access patterns inherent in the features that
I'll need to support.

I look at things like projected read/write throughput, latency
characteristics, total data volume, concurrency, and whether or not the
problem domain actually requires highly relational queries.

I think a lot of shops don't put enough thought into figuring out what kind
of data access patterns they'll need to support throughout the life-cycle of
the business. This is no big deal if the product doesn't experience growth.
But for rich web applications that do begin to grow, the team inevitably ends
up with a massive scaling problem unless their system architecture was
designed to support those access patterns from the ground up.

It seems that this "growing pains" scaling nightmare has become almost a rite
of passage for successful tech startups. Founders are generally led to believe
that it's a good thing for them to need to sell equity to outside investors in
order to "scale out" a much larger team to build out the infrastructure
required to perform in-flight rocket surgery on the application before it
either explodes or becomes increasingly cost-inefficient.

While this whole process greatly benefits VCs, the high end tech engineering
job market, and recruiters, it's absolutely terrible for the founding team because
it means they inevitably get massively diluted as a consequence of
experiencing success. I'm not saying it's a conspiracy, but I am saying there
is massive financial incentive to keep this kind of technical knowledge about
best practices an open secret within the highly paid IT consultancy world.

TLDR: It's my supposition that small teams can build scalable, composable
systems by thinking about web-scale data access patterns from the beginning.

------
reillyse
Agreed. Best expression I've heard to sum up this concept is "This is not an
after-school club". Playing with shiny new technology is not the point. The
point is to make money for the company and you use the best tools for the job.
Most of the time that means solid tools that everyone understands.

------
adventured
My favorite piece of 'boring' technology: Sphinx (the search software).

I've been using it for maybe six years non-stop. I've thrown large data sets
at it and it always runs fast; it's trivial to set up and always has more than
enough options for my search purposes. It has also become a much better
product over the time I've used it, with an active development group behind
it. Sphinx works so well as is that I've never had a reason to look at the
latest hot search tech; it would be a waste of my time to do so.

~~~
brianwawok
Never used it myself, but Solr is also boring and good.

------
rthomas6
As someone who has never done web development and knows nothing about it, how
do I learn what to do? Every time I try to determine what people are using and
what is the path of least resistance to make a website, I am overwhelmed by
choices. How do I determine what is good? If I build a website, I don't want
to spend my time learning an interface that has been superseded by something
better that all pros now use. What backend do people use? Rails? Django? PHP?
Perl? Some Javascript? What javascript libraries do people use? etc.

~~~
nostrademons
Really old boring tech (c. 2002): Perl or PHP, MySQL, very little JS (and
usually plain vanilla JS if used).

Old boring tech (c. 2007): Rails or Django (the two are largely
interchangeable; it's largely personal preference), jQuery, Postgres.

Old tech (c. 2012): Node.js, Angular, Express.js, MongoDB. Not boring, because
you will still face lots of problems deploying this stack at scale.

Boring tech (c. 2012): Native iPhone/Android apps, JSON-RPC, often Java on the
server. Usually Guice or Dagger is used for dependency injection with Java.
Not really old (except for the Java part), there's still a lot of innovation
going on in this space.

Bleeding edge stuff (today): React, Polymer, Go, Rust, Erlang/Elixir (Erlang
is interesting in that the runtime and standard libraries are _rock solid_ ,
but because it's so different from most mainstream languages, you can face a
lot of integration pain when looking for third-party libraries), Haskell (old,
but very different from anything mainstream). Basically everything you read
about on Hacker News.

~~~
vezzy-fnord
Erlang isn't bleeding edge at all. In fact, it's a battle-hardened and
conservatively evolving platform dating back to 1986, which is one of its
selling points amongst all the technical benefits.

~~~
nostrademons
Depends on your problem domain. It absolutely is battle-hardened and
conservatively evolving, but it grew up in the telecom industry, and most of
its "mainstream" uses (Facebook chat, Whatsapp) are in messaging.

Erlang strings, for example, are lists of bytes, which will blow up your
memory requirements and algorithmic complexity if you do any serious string
parsing. You're shut out of common libraries like protobufs. There are
libraries available for things like HTML parsing, MySQL, Postgres, and even
Apple Push Notifications/Google Cloud Messaging, but many of them are some
guy's personal project on GitHub rather than something that's gotten
widespread use & testing and has plenty of StackOverflow posts for help.
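The list-of-code-points cost is easy to see by analogy in Python (a rough
sketch of the overhead, not Erlang's actual memory layout):

```python
import sys

text = "x" * 10_000

# A string as a list of integer code points: one machine word per
# character just for the list's pointer array (the int objects are
# extra, though CPython caches small ones).
as_list = [ord(ch) for ch in text]
list_size = sys.getsizeof(as_list)

# The same text packed into a binary: one byte per character plus a
# small fixed header.
as_bytes = text.encode("ascii")
bytes_size = sys.getsizeof(as_bytes)

print(list_size > 5 * bytes_size)  # True: the list is several times larger
```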

~~~
vezzy-fnord
Lists of Unicode code points (integers), specifically. That said, most
real-world string manipulation is done by passing them around as binaries. It
really isn't much
of an issue, nowhere near as much as is the hell that is NULL-terminated
character arrays in C, which, mind you, power most of our software
nonetheless.

Erlang does have a good Protocol Buffers library, by the way:
[https://github.com/basho/erlang_protobuffs](https://github.com/basho/erlang_protobuffs).
Even if it didn't, you'd use a more native serialization format like BERT.

As for abandoned projects and library sprawl, that is true. However, I'd say
that this is far more bearable in Erlang than in other languages. For one, the
module system makes deducing how to use a program's API from source code much
easier even if there is no explicit documentation - every Erlang program
basically gets a user interface for free just by virtue of being a module. In
addition, if the library in question is a properly structured OTP application
or if it uses vanilla process primitives efficiently, I can have relative
confidence that it is less likely to blow up in my face than, e.g. a random
Java library.

Even still, there's all sorts of libraries despite the small community. If
you're doing web development and there's some RESTful API that has no Erlang
bindings, those are relatively trivial to roll yourself.

------
kailuowang
One of the reasons to choose interesting technology is to lure interesting
engineers, especially when your domain is boring. But maybe boring engineers
work well enough on boring business problems.

------
elevensies
My rule of thumb is that if the project has a deadline, then I use components
I already know. And to test something new I use it for something that is
internal only.

------
nadam
This does not really apply to me because I tend to bind boringness/excitement
to problems and not technology.

For example if my task is to create a typical website, that is relatively
boring by default. Speaking about databases I choose postgresql not because it
is boring but because that is the most convenient, elegant, and general-
purpose solution I know. I dont't get excited about NodeJS or MongoDB by
default. What I get excited about if I encounter a hard problem: maybe a very
hard scalability issue that is impossible to solve with PostgreSQL. Searching
for the solution to that problem is interesting and then I might get exited
about some NoSQL solution. Also I don't really get excited about new
programming languages as almost all programming languages try to solve the
same problem. On the other hand I got excited about recent advances in Deep
Machine Learning (and frameworks like Caffe, cxxnet) because with these new
advances and tools it is possible to solve problems which we had absolutely no
chance previously.

Also it is pretty standard for me that the stuff I do at the workplace is
relatively boring, and the stuff I do at night by myself is exciting.

------
slyall
The rule of thumb I used to have (which is a little dated now but imagine 5-10
years ago):

" Any server should have only one thing that is not installed via the default
OS packages "

So you could have one weird bleeding edge version of your language or some
unusual daemon that nobody else used but that was it. The idea was the rest of
the server was stock and it had only one weird thing.

~~~
deathanatos
We would fail your rule on so many cases. We run Ubuntu, which might be our
mistake, but off the top of my head, I think our installations of nginx[1],
Python[2], pip[3], rust[4], mongo, consul[4], openldap[1], gcc[3] and several
other things fail your criteria. Not all are on the same server, I suppose,
but there's definitely overlap. Most of these are simply because the version
in Ubuntu is unacceptably out of date, some are because there are bugs in the
provided version, some just flat aren't available, and some are re-compiled
with additions. (Like, non-default USE flags, if you're a Gentoo user; Ubuntu
lacks the concept.)

I think the issue I have with rules like yours, and that proposed in the
article, is that they're fine when you're working with no information, but
when an engineer lays out a need, shows how the "boring" package available
does not fit that need, and then proceeds to choose an "interesting" package
that meets the requirements of the problem, the last thing he wants is
nebulous objections over how the choice is an "interesting" tech. For example,
the article calls out consul (we also considered etcd and Zookeeper…) as an
"interesting" choice, but we need a multi-node distributed database with a
_good_ consensus algorithm for things such as service discovery and locking;
what other techs fit the bill that aren't "interesting"? Consul does. Its
HTTP and DNS interfaces interest me because they play well with our existing
_boring_ tools, like http (or curl) and dig…

IIRC, [1]: default package lacks features [2]: unacceptable issues [3]:
default package too far out of date [4]: default package is non-existent
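Consuming that HTTP interface really is mostly JSON plumbing. A sketch of
extracting endpoints from a Consul `/v1/health/service/<name>?passing`
response, run against a trimmed sample payload (field names follow Consul's
HTTP API docs, but verify the exact shape against your Consul version):

```python
import json

# Trimmed sample of what /v1/health/service/<name>?passing returns.
sample = json.loads("""
[
  {"Node": {"Address": "10.0.0.5"},
   "Service": {"Service": "web", "Address": "", "Port": 8080}},
  {"Node": {"Address": "10.0.0.6"},
   "Service": {"Service": "web", "Address": "10.1.0.6", "Port": 8080}}
]
""")

def endpoints(entries):
    """Yield (address, port) pairs. Consul leaves Service.Address empty
    when the service listens on the node's own address."""
    for entry in entries:
        addr = entry["Service"]["Address"] or entry["Node"]["Address"]
        yield addr, entry["Service"]["Port"]

print(list(endpoints(sample)))
# [('10.0.0.5', 8080), ('10.1.0.6', 8080)]
```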

~~~
kibwen
Off-topic, but might I ask what you're using Rust for? We love getting
feedback on the language, especially so for people crazy enough to use it in a
production setting. :)

(I'm also curious exactly how old your version of Ubuntu is, as testing the
language on older versions of Linux is currently a bit of an annoyance and
it's good to know that someone out there is benefiting from that effort.)

~~~
deathanatos
Sorry about the slow reply. We're using a mix of Precise/Trusty[1]; we're not
(presently!) using Rust in a production setting, I'm only using it to
experiment currently, but it's still something outside the package manager
that _if_ I wanted[5] would apply (and it's tempting…), so I decided to
mention it. I'm presently using it to parse an internal file format we have,
mostly as a project (one of two) to teach me Rust. (I'm using Rust on
Trusty & Gentoo presently, both using the binary downloads from the
website. I tried brew-installing it on OS X, but that doesn't give me cargo;
haven't tried again in a few weeks, and haven't filed an issue…)

While it isn't presently in a production setting, I would be completely
willing to put it into production. There are some spots where we need more
performance than Python can offer, and the memory safety and good static
typing are very appealing (I'm very wary of memory-unsafe languages anywhere
near the path of user input…). (I personally use C++ presently here, but the
included gcc in Precise/Trusty IIRC lacks some of the more modern C++11/14
stuff, so heavy use of Boost is needed. Also, some third party libs — mongo,
to name one — have less than elegant C++ interfaces…) The Rust standard
library has made great strides in the few months since I started using it (I
started picking it up in December?), and since my normal work is in Python,
the static typing is a very welcome change. I'm still wrapping my head around
lifetimes[2][3], regex! tripped me up a bit[4]. I found some iterator based
functions — such as zip — odd: it's a member function on the Iterator trait,
and only allows two args. I find Python's free-standing zip function which
takes any number of iterables much more natural. Compare Python:

    
    
      zip(a, b, c, d)
    

to my understanding of Rust:

    
    
      a.zip(b).zip(c).zip(d)
    

I also wonder (since I've not tried it) what effect this has on unpacking.
Instead of (a_i, b_i, c_i, d_i) being the items of the iterable, you end up
with (((a_i, b_i), c_i), d_i); I wonder if a destructuring let would be hard
on that? (I've not actually gotten that to work…) Also, I
wish enumerate had Python's start argument; I use enumerate a lot for doing
numbered lists for humans, which start at 1.
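Sketching it in Python, chained pairwise zips nest to the left rather than
the right, and the destructuring pattern has to mirror that shape:

```python
a, b, c, d = [1, 5], [2, 6], [3, 7], [4, 8]

# Python's variadic zip yields flat tuples:
flat = list(zip(a, b, c, d))
# [(1, 2, 3, 4), (5, 6, 7, 8)]

# Chaining pairwise zips nests to the LEFT:
nested = list(zip(zip(zip(a, b), c), d))
# [(((1, 2), 3), 4), (((5, 6), 7), 8)]

# so the destructuring pattern must mirror that shape:
(((a0, b0), c0), d0) = nested[0]
assert (a0, b0, c0, d0) == (1, 2, 3, 4)
```

The analogous Rust pattern would be `let (((a0, b0), c0), d0) = item;`.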

Overall, it's shaping up to be quite a language. I sincerely hope it uproots
C. (I'm a big proponent of modern C++, and so I'm still attached there. :D)

[1] I really wish upgrading happened faster; I tried to cull items that
weren't relevant to Trusty because I do consider Precise out of date. We're
limited to LTS simply because people aren't comfortable with non-LTS (and I
don't know that we could upgrade quickly enough to stay on a supported OS…)

[2]: [http://stackoverflow.com/questions/28956195/how-do-i-
create-...](http://stackoverflow.com/questions/28956195/how-do-i-create-use-a-
list-of-callback-functions)

[3]: [http://stackoverflow.com/questions/29001667/how-do-i-
store-a...](http://stackoverflow.com/questions/29001667/how-do-i-store-a-list-
of-callbacks-on-a-struct)

[4]: [https://github.com/rust-lang/rust/issues/23326](https://github.com/rust-
lang/rust/issues/23326) — (I had to follow up in the IRC room here, it's not
quite as simple as the answer in the issue. Not only do you need to depend on
the crate, you need the "#![feature(plugin)]" "#![plugin(regex_macros)]" as
_crate_ attributes, and this can be confusing if your use of regex is in a
module-in-a-module-in-a-module; the use of regex in that module
(foo/bar/baz.rs) causes you to need to edit a different file (lib.rs).)

[5]: one of the things I miss about Gentoo's ebuilds in Ubuntu is that it's so
darn easy to throw additional packages into the purview of the package
manager. Building .deb files is vastly more complex than an ebuild.

~~~
kibwen
Loving the feedback, thanks! That's a good point about destructuring the
returned type of a chained zip, I'll need to take a look at that. And I'm
happy that you've found the IRC channel, don't be a stranger if you need help
or have any more feedback (especially wrt stdlib APIs that you'd like to see).

------
dataker
Boring is a dangerous word: who is your subject?

For a 'wannabe entrepreneur learning WordPress', anything with more than 50
lines of code is insane. They may be able to build an e-commerce site, a
forum, and so on, but usually no real value comes of it.

Still, even if one takes exception to this rule (e.g. Groupon), it can't be
compared to real technical innovation.

------
serve_yay
Aha. So now, the toothpaste in the tube (if you will) shifts to the issue of
which technologies are boring, and which use up innovation tokens. Surprise:
the technologies you prefer are boring (note the direction of causality
there), and the ones you don't like require innovation tokens!

------
AnonJ
The general point is quite good. The only thing is that PHP should be
considered as a particularly bad outlier and people still should avoid it at
all costs. It's just way too old. It's not "boring", it's antiquated. With
Rails/Django being around for almost a decade, they're much more sensible
choices than PHP. Sure if your whole massive codebase started out in PHP
that's one thing, but if you are spinning up a new project I definitely see no
reason for using PHP... Even if it means you'll have to spend a little more
time getting up to speed in the beginning, you'll benefit a whole
lot in the end. The same goes for MySQL vs. Postgres, to a lesser extent of
course.

------
kyllo
For me, Python is boringly productive. When I need to get something done, I'd
probably have more fun figuring out how to do it in Haskell or Clojure, but
chances are there is already a Python library that solves my problem with a
few lines of method calls.

------
Erazal
The article was very interesting - I find the whole idea of "innovation
tokens" very compelling. However, I didn't understand how Rumsfeld came into
play here. Did somebody understand this point? I feel like I'm missing
something here.

~~~
aero142
He was ridiculed for talking about "unknown unknowns" because of the
ridiculous phrasing. See
[http://www.youtube.com/watch?v=GiPe1OiKQuk](http://www.youtube.com/watch?v=GiPe1OiKQuk)
Interestingly, this ridicule became so common that people now know more about
known unknowns and unknown unknowns, which is perfectly sound logic, just
weirdly associated with Rumsfeld now.

~~~
thoman23
I'm no Rumsfeld fan, but I never understood why that statement would be
ridiculed. The quote below makes perfect sense to me.

"Reports that say that something hasn't happened are always interesting to me,
because as we know, there are known knowns; there are things we know we know.
We also know there are known unknowns; that is to say we know there are some
things we do not know. But there are also unknown unknowns -- the ones we
don't know we don't know. And if one looks throughout the history of our
country and other free countries, it is the latter category that tend to be
the difficult ones."

~~~
antod
Yeah that statement also made sense to me despite not liking the guy. I like
to think of "known unknowns" as knowing the question but not the answer, and
"unknown unknowns" as not even knowing the question.

Of course knowing the answer but not the question is best left to Douglas
Adams.

------
impostervt
Cached version:

[http://webcache.googleusercontent.com/search?q=cache:http://...](http://webcache.googleusercontent.com/search?q=cache:http://mcfunley.com/choose-
boring-technology)

------
erichmond
These kinds of articles are a bit foolish.

The reality is, you need senior engineers, people who have built systems
before, and are over the stage of their career where they just want to build
stuff for fun, and are actually focused on building systems that provide value
for the companies they work for.

It's not about new tech, or old tech, or boring tech, or exciting tech. It's
about looking at the specific problem at hand and making an assessment about
which technologies make sense to use.

As much as our industry doesn't want to admit, there are advantages to having
real work experience.

------
shangxiao
Postgres is definitely not boring as there is a lot of interesting stuff
happening with this database. I think the author here should really be making
the distinction between sexy & unsexy.

~~~
nathan_long
Anyone who finds any database sexually attractive seriously needs counseling.

~~~
shangxiao
Only the people like you who would take that literally would need counselling.

~~~
nathan_long
Sorry, let me try again, without humor this time. :)

"I dislike the use of the word 'sexy' to describe things which are considered
attractive for non-sexual reasons, partly because it dilutes the original word
and partly because it doesn't convey any reason for the attraction to the
actual thing."

------
esfand-r
A few of my friends had shared this on Linkedin and since it was also
referenced in some of the mailing lists I am a member of, I felt compelled to
write up a response to this rather shallow article and line of reasoning.
[http://esfand-r.github.io/2015/04/don't-choose-boring-
techno...](http://esfand-r.github.io/2015/04/don't-choose-boring-technology/)

------
megablast
Was forced to use Xamarin Forms in a project recently. It is less than a year
old, and not suited to what we need. But somehow the idea was it will save
time.

It was a world of pain.

------
scrame
My rule of thumb is: you can do something right, or you can do something new.

You need to find a balance, but if you're doing something new in business you
should use reliable tech, if you're doing new tech you need to isolate that.

Corollary: you can hire for standard tech, but have to ramp up in-house tech.

------
grandalf
Using multiple tokens does slow things down, but it is sometimes worth it if
you get a bit lucky and make good choices.

These days I generally optimize for fewer lines of code (boilerplate or not),
as few dependencies as possible, and a general respect for the CPU cycles
needed to run it.

------
aaron695
Life's short.

Fuck it. I'll do what's fun.

I get the point: if the end goal is the reason, don't get caught up in hype.
But I won't lie, my job is not an end goal, and as an employer you should
never believe it is.

Plus there are complicating issues, like the fact that a bored worker is a
bad worker.

------
slantedview
I've always had a soft spot for the choose boring argument, but for some
problems boring tech is a poor match. Rather, I try to look at each problem
objectively - decide what I want out of a solution, and select accordingly.

------
guilt
We started building with that approach. We're looking for people to tell us
what's cool and what's not!

We at GeoG are building a safe, easy and sustainable platform for IoT. We can
totally do everything if we put time and effort into it - which we are ready
for! What we want you to do, is start hacking and building things with us. No
strings attached!

We recently set up a Community page
[http://community.geog.co](http://community.geog.co) where we want to start
listening to you, your needs and suggestions.

Did we mention we have an API? We'd like you to start beaming data and create
cool things with us. [http://api.geog.co](http://api.geog.co)

Also, if you have reasons to hate us, bring it out! We're listening.

------
nijiko
None of the technologies you mentioned are boring, pick the right technology
for the job and don't worry about what others are using.

------
smcg
503 Temporarily Unavailable. Anyone have a mirror?

~~~
jessaustin
They must have used something "exciting".

~~~
mcfunley
Author here--yeah the irony is killing me. It's PHP though. I'm fighting quota
issues with my dumb host. I assure you migrating to something better is not my
day job. Here's a pdf of it:

[https://www.dropbox.com/s/1k7ngf52o822ccy/choose-boring-
tech...](https://www.dropbox.com/s/1k7ngf52o822ccy/choose-boring-
tech.pdf?dl=0)

~~~
nijiko
That's what boring technology will get you.

~~~
sgift
Snark by people who wouldn't do better on the best of their days? If that's
the only problem: +1 for boring technology

------
fixxer
I've always attributed "known knowns" and "unknown knowns" etc to Nassim
Taleb, not Rumsfeld.

~~~
MrZongle2
Yes, but how else could the author get a political dig into the article?

A lazy writing blemish on an otherwise good article.

~~~
gfodor
Not really, first hit on Google for "unknown unknowns" is this wikipedia
article which immediately mentions Rumsfeld:

[http://en.wikipedia.org/wiki/There_are_known_knowns](http://en.wikipedia.org/wiki/There_are_known_knowns)

------
ask5
Great article, however the Jerry Seinfeld gif was a bit annoying. Although I
must admit it is funny and probably appropriate ;-) it was hard to read
anything around it. I could only concentrate on the content when the gif was
outside the screen.

------
Salter1772
Bringing politics and profanities into technical discussion != classy

------
chinathrow
Nice caption there next to Rummy.

"To be clear, fuck this guy."

------
elchief
Ya, like an elastic cloud to host your blog

------
m0skit0
If people were thinking like this guy, we would still be living in caves.

------
AceJohnny2
Hah, I'm right now in the process of helping another group move from CVS to
Git.

Um. ;)

