
The Number One Trait of a Great Developer - r4um
http://tammersaleh.com/posts/the-number-one-trait-of-a-great-developer/
======
gfodor
The catch-22, of course, is that at some point, MySQL _was_ the "new thing" to
Diane, so how did she learn it in the first place?

The answer, of course, is that judgement is less about making decisions based
on familiarity per se and more about knowing when unfamiliar technology seems
to provide good trade-offs given what is known about it. For example,
PostgreSQL may be unfamiliar, but choosing it is a wildly different kind of
decision than choosing a less mature unfamiliar data store, since at a high
level PostgreSQL is known to be a well-understood, mature technology that may
provide significant leverage over MySQL in certain scenarios. Being able to
understand the various dimensions of technology choices and make good ones
with limited information and experience is what makes good judgement, not just
an outright aversion to unfamiliar things.

~~~
raverbashing
I see this as a 3D optimisation problem.

1st dimension: power of the tool

2nd dimension: ecosystem of the tool (developers/libraries/support/etc)

3rd dimension: developer's ability to use the tool (time to learn, current
ability, etc)

If you score each factor from 0 to 10, MySQL could be [4, 7, 8] and Postgres
could be [7, 7, 4] for a given developer.

Of course, the 3rd factor depends on the developer (and on the time
available). For example, I know Go is very powerful, but I wouldn't use it
unless the options available to me were not good enough for a given problem
(so that the time to learn compensates for the loss that using other
technologies would incur).

~~~
gfodor
I think another important dimension is the suitability of the tool to the
technical culture and composition of the organization. For example, statically
typed Java-ish languages tend to be more attractive if you have many
disconnected groups contributing to a code base and, if I'm being cynical, a
lower average of programmer skill, with people just hacking out features.
Whereas a small team of top-tier developers is better positioned to leverage
the benefits of fancier tools and languages.

------
cmdkeen
This is a problem because writing working, maintainable code that solves the
given problem rather than an imagined problem should be the trait of a
competent developer.

I had the pleasure of going on a course run by a German developer recently,
and his brutal focus on quality and maintainability made me rethink what we
class as a good developer. To put it brutally: in the article, Jack is not a
rockstar - Jack is incompetent. A great developer solves complex problems in a
way that is maintainable; the problem is that most problems aren't that
complex. Dianne, however, didn't do anything that made her "great" - any
developer who can't apply the kind of engineering practices she did should be
fired.

Hacking on something in your spare time is completely different, but as soon
as you want to work on something that others will work on as well, you need to
adopt that engineering mindset. It's one of the things that impresses me about
Patrick McKenzie's writing, and his experience of learning that the hard way
in Japan.

~~~
brador
Maybe we should be asking why the things we make need to be maintainable. Why
can't I just make program module x that inputs a and outputs b, and never
touch it again?

I believe modular code is the future and the sooner we get there the better.

~~~
sanderjd
Here's how I read your comment: why do we worry so much about making things
maintainable - why don't we just make things maintainable? Building a system
out of small modules _is_ what makes it maintainable.

------
rachelbythebay
Joe asked the same questions and then wrote a collection of CGI
programs/scripts in one of those languages which start with the letter P. It
was dull and uninteresting. There were a half-dozen commands the devices
needed to call, so he hand-wrote the SQL, making sure to use the appropriate
techniques to insert parameters instead of "string building", which would have
opened the client to injection attacks. His code thus resembled very small
filters: HTTP requests mapped into SQL, and then results mapped back to the
client's output requirement (unspecified in the story).
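
To make the "parameters, not string building" point concrete, here is a
minimal sketch in Python (one of those P languages). The table and column
names are hypothetical; any DB-API driver (sqlite3, MySQLdb, psycopg2) binds
parameters the same way, though the placeholder character varies.

    import sqlite3

    def fetch_recipes(conn, device_id):
        cur = conn.cursor()
        # The driver binds device_id as data rather than splicing it into
        # the SQL text, so a hostile value cannot inject extra statements.
        # (sqlite3 uses ? placeholders; MySQLdb uses %s.)
        cur.execute("SELECT name, body FROM recipes WHERE device_id = ?",
                    (device_id,))
        return cur.fetchall()

    # Usage: rows = fetch_recipes(sqlite3.connect("recipes.db"), device_id)
    # The vulnerable "string building" version would have looked like:
    #   cur.execute("SELECT ... WHERE device_id = '%s'" % device_id)  # don't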

His code wasn't super fast and wouldn't win any awards for being clever, but
he got it done quickly and it worked. They were able to ship. Over the years,
as they moved from one web server to another and from one platform to another,
the maintenance programmers they brought on later (for Joe had moved on, as
people do) were able to adapt it without too much trouble. The CGI aspect
meant it could run on IIS or nginx or whatever else they found.

It would never scale to huge numbers of queries per second, but it wasn't
supposed to. It needed to handle 500 devices in the first year, polling about
once an hour. That's 12,000 queries per day, or about 0.14 QPS (queries per
second).
You could almost service those requests _by hand_ if the timeouts were high
enough and people were cheap enough: just cut and paste!
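
The back-of-envelope math, for anyone checking:

    devices = 500
    queries_per_day = devices * 24      # one poll per hour, per device
    qps = queries_per_day / 86400.0     # 86,400 seconds in a day
    print(queries_per_day, qps)         # 12000 0.1388... (~0.14 QPS)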

Bob asked the same questions, wrote CGI and hand-coded SQL too, but made a
mess of it. It took him three times as long to write. It had numerous security
problems and took the site down when someone discovered they could pull a
"Bobby Tables" attack and change everyone's recipes to involve dog poop.
Customers rioted and demanded refunds. The company went out of business.

They used the same technologies and got two different results... and this
happens constantly.

Some technologies are used to create only garbage, sure, but the technology
itself won't guarantee success.

------
chrisrhoden
Dianne's solution creates a maintenance problem if the number of devices grows
too much or if network stability is a problem, or if there is an expectation
down the line of, for instance, push sync.

My point is not that her solution was not valuable, but that to suggest that
the maintainability of the two solutions is black and white like this is
wrongheaded.

The really unfortunate implication here is that learning new technologies is
either something that shouldn't happen or something that shouldn't happen on
the job. I'd say that something with these requirements would have been a
perfect opportunity for Diane to, "learn postgres," if she felt that it would
have been better. The extra day or two of effort would likely pay off – if not
on this project, then on the next.

> but she also knew that anything much more complex would be beyond her
> current skills.

And with this mindset, her skills for the foreseeable future.

~~~
willvarfar
Actually, why wouldn't a web service with a MySQL server scale to many orders
of magnitude more users than they expect?

~~~
bhauer
MySQL can easily scale well beyond the "500 users" unless each of those users
puts an especially heavy burden on the data store. The scenario at hand is
synchronizing recipes, and that is not a heavy burden.

Further, with a high-performance web framework, and foregoing resilience (not
recommended, but just for argument's sake), a single web-application instance
should be able to handle a much larger number of users as well.

But all of this isn't really relevant to the point made by the OP. He's just
using these fictional data points as an illustration. Although I am personally
in favor of using examples that are more representative of reality, I think we
can mostly ignore the fictional data points to appreciate his point.

------
gngeal
_Dianne knew it wasn't the most elegant solution, but she also knew that
anything much more complex would be beyond her current skills._

Since when are complex solutions elegant? I always thought the simple ones
were the ones deserving to be called "elegant". Also, the cheapest, fastest,
and most reliable components are those that aren't there.

------
outworlder
I don't agree with the article at all. At least, not in the way it was
presented.

Sure, designing such a small system to use Cassandra and Protocol Buffers
_might_ be overkill. Then again, it might not. Without more information, it
would be impossible to know which one is the better developer.

For instance: how much time did the Node+Cassandra+Protobuf implementation
take? How does that compare with the junior dev who only knows MySQL? Was Jack
on a deadline, or did he have time to explore alternatives? Was Jack really
inexperienced, or did he have experience with the new tech but want it on his
resumé?

Were they billing hours? What about the coworkers: do they only know Ruby and
MySQL too, or are they comfortable learning new things? Do they even have
coworkers?

Was Jack's solution modular enough that components could be replaced, if they
were a problem? (Cassandra, for instance)

And most importantly: what was it that made Jack's implementation
unmaintainable? Surely it wasn't just because the technologies are "new" (the
article is rather old). Was the code badly documented? Was the design flawed?

Jack's biggest flaws, as can be inferred from the article, are not asking
enough questions before diving into the implementation and not getting
permission before using "new" tech.

I know the author did not get in depth about the personalities or the project
itself because he wanted stereotypes to illustrate a point, which was taken.
But the problem with it is that other people could get the impression that
"new" technologies should never be considered, or that any attempt to make the
application "scalable" from the beginning is a "Jack trait" and thus, wrong.

Not to mention that Jack might even be right, should the company's
expectations prove to be too low.

------
dasil003
Any article stating one trait as being most important for anything is probably
being hyperbolically reductive, but in this case it's particularly painful
because "judgement" is absolutely meaningless. Every single thing every person
does requires judgement, and if you don't have it you'll probably get hit by a
bus before you have a chance to get a job.

What this article is really arguing against is early-adopter developers who
chase every new fad for its own sake and prioritize their own amusement over
the best solution at hand. But this says nothing about how good the developer
is, only how good their actual contribution to your company will be relative
to their potential. A shit developer can do their best but it still won't
measure up to a great developer's trend-chasing effort.

Now, even assuming some base level of competence, there are many different
mindsets and personalities that lend themselves to different types of
programming. I could go on and on about specific skills, but we're mostly
programmers here so I'll just hint at it: debugging skills, low-level skills,
architecture skills, analyst (business requirements) skills, HCI skills,
modeling skills, clairvoyance, luck. All of these things have varying degrees
of importance depending on the job and team at hand, and some combination of
them will make someone the best developer _for some specific job_.

But if you find my answer too wishy-washy and you want specifics about what
makes a great developer, I'll boil it down to two things. First, to achieve
competence as a developer, one needs to have the tenacity and logical thinking
to debug any issue, no matter how arcane. Second, to achieve greatness, one
needs to be able to comprehend the entire problem space in such a way as to
account for many more factors than an average developer, let alone a
non-technical person, could ever hold in their head at once, and to have the
ability to translate those constraints into a system which is at once easier
to understand and more elegant than the vast majority of competing solutions.

------
6cxs2hd6
Nearly all the comments so far focus on neophile vs. neophobe. But I took the
main point of his post to be orthogonal to that: _Dianne took time to ask
questions_. She didn't immediately dive in and start using her {comfortable
old tool, sexy new tool}. She asked about the requirements: estimated user
count, load, connection quality, etc.

That's the key point. A developer with good judgment will start there. A
developer with good judgment might even raise their hand and say, "As I
understand this problem, it requires technologies I'm not so good at. What
should we do?"

There are exceptions to every rule, and all sorts of projects. But in general,
that seems like the sort of developer you want to hire.

------
EnderMB
There seems to be a self-imagined divide between these two "camps" of
developer. We see:

1. Hipsters that throw tools at a problem, and are eager to know as many
different languages/frameworks/tools as possible, to stay ahead of the curve.

2. Developers that have picked a framework and stick with it for life. They
learn when necessary, but their framework can handle most problems thrown at
it.

I would consider myself a part of both camps. I read Hacker News because
people talk about these tools in use, and I love sitting down at home and
toying around with new JavaScript frameworks, and different
environments/tools. However, I have a job where I use a proven framework, a
solid language, and a role where I model problems with a team of like-minded
developers.

Now, imagine the scenario. I toy around with a new framework and absolutely
love it for some use case. I decide to write a tool using Node.js, I use
Bootstrap, and then I post it on my blog, Twitter, and on Hacker News. Some
will check my work out and think it's cool, and others will look at it and
will vomit at the very thought of JavaScript running on a server when another
server-side language/framework would be far more suited to the problem at
hand.

The point I'm trying to make is that developers are so keen to slate
projects/code that was written for fun. My Node.js application isn't very
good. Hell, I can probably write it a million times better in the language I
actually use day-to-day in my job. I just wanted to release it so others can
learn from it, and to show that I'm capable of keeping up with everyone else
that is releasing apps-a-plenty in all these crazy new languages I see every
day.

I agree with the general idea of the article, but only because developers are
starting to embody these camps. I see entry-level/junior developers applying
for jobs with CVs that list every language they've touched for at least half
an hour, and I see battle-hardened developers complaining about people trying
to legitimise new tools because they're "not as good" as the tools that
everyone else uses.

------
venomsnake
How is Dianne's solution not maintainable or scalable? I have used Flask,
gevent, and MySQL to handle a similar load. MySQL is not bad at all if you
check everything you send it with EXPLAIN while developing.
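
A rough sketch of that habit, assuming the MySQLdb driver and a hypothetical
`recipes` table (the query and column names are made up):

    import MySQLdb

    conn = MySQLdb.connect(db="recipes_app")
    cur = conn.cursor()
    cur.execute("EXPLAIN SELECT * FROM recipes WHERE user_id = %s", (42,))
    for row in cur.fetchall():
        # Look for 'ref'/'range' access with a named key; an access type
        # of 'ALL' means a full table scan and usually a missing index.
        print(row)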

------
lhnz
No, that's the number one trait of a great employee - and the sign of a
leader.

The number one trait of a great developer is literally their ability to
quickly write amazingly elegant code and manage complexity.

~~~
gngeal
_No, that's the number one trait of a great employee - and the sign of a
leader._

Exercising self-control is being the leader of one person.

~~~
lhnz
You're right. I meant to say it's one of the signs of a leader; you also need
charisma && (authority || prestige).

------
pnathan
Jack drops a couple of turds into your codebase, but so does Diane. Hers just
take longer to arrive. An inability to seek the actual best tools, based on a
personal fear of the unknown, does not a great developer make.

Basing one's code on the familiar or the purported "best practice", without
actual knowledge, leads to dreck down the road. A great developer chooses the
right tools for the task (along with other things...). Diane's choice, for
instance, has these flaws:

- Requires a relatively heavy runtime.

- Requires an HTTP stack.

- Wastes CPU/power.

These are not negligible considerations for embedded devices.

Someone who knew what the crap they were doing would likely have chosen a
binary protocol standard (many such exist), then pumped that information out
over an appropriate topology - someone suggested UDP. The database should
probably be SQLite, or maybe BerkeleyDB, depending on the complexity of the
data model and the CPU used. Bluntly, the proposed "great" solution isn't.
It's an inelegant solution putting heavy demands on the hardware
unnecessarily.

Judgement is fine. What you use as a proxy in hiring for judgement is
experience. E.g., you don't hire a Ruby dev to do embedded work. Experience
tells you that.

What makes a developer great is great wisdom: knowledge applied skilfully.

n.b.: I wouldn't hire either Jack or Diane based on this article; Diane is
fearful, and Jack picked a JavaScript engine too immature for embedded work.
Jack might be led to understanding and might be worth more; I don't want to go
around trying to convince people that better is better - I have no hope for
Diane if she can't spend a day learning postgres.

~~~
mwcampbell
I thought that in this contrived example, Jack and Diane were writing the
software that runs on the server, not on the kitchen devices. Also, what if
the "devices" are small-form-factor PCs? That would relax the constraints
considerably.

~~~
pnathan
Ah, see, I read the spec as saying that it would be things stored on the
kitchen devices (toaster, etc.).

You do want to make sure your comms layer is easy to make reliable when
talking from the device, regardless. Might be HTTP, might not be - I'd likely
take a gander at ProtoBuf myself, fed over TCP/IP, to minimize comms
complexity in the phone-home code.

~~~
mwcampbell
I think I'd use HTTP as the synchronization protocol, because GET and PUT are
specified as being idempotent, meaning it's safe to retry them. Also, HTTP is
a ubiquitous standard, meaning there are several implementations to choose
from, on both the client and server sides.
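
As a sketch of why that matters over a flaky device link: the client can
blindly retry a PUT because the end state is the same however many times it
lands. The URL and payload here are hypothetical, and this assumes the Python
requests library.

    import time
    import requests

    def put_with_retry(url, payload, attempts=5):
        for i in range(attempts):
            try:
                r = requests.put(url, json=payload, timeout=10)
                if r.status_code < 500:
                    return r
            except requests.RequestException:
                pass  # network hiccup; retrying an idempotent PUT is safe
            time.sleep(2 ** i)  # exponential backoff between attempts
        raise RuntimeError("sync failed after %d attempts" % attempts)

    # put_with_retry("https://example.com/devices/42/recipes", {"name": "toast"})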

~~~
pnathan
The ubiquity of HTTP is indeed a definite point in its favor (as is its
popularity and level of documentation).

> GET and PUT are specified as being idempotent, meaning it's safe to retry
> them.

Entirely OT: Abstractly, yes, that's how they _should_ be implemented, but
they don't have to be, and consequently it's entirely up to the receiving
server whether they are or not. Which, if you assume that someone somewhere
will make unverified and bad assumptions, means that some implementer will
eventually let them slip and become non-idempotent.

------
w_t_payne
Yes, yes, yes, and thrice more yes.

We embrace complexity at our peril. Dullness is a virtue.

~~~
arethuza
"Dullness is a virtue"

It's not as simple as that, though - it all depends on the context of the
project. Sometimes it is perfectly OK to go crazy and use bleeding-edge
technology; other times that would be daft. It all depends on what you are
trying to achieve, mixed in, as the article says, with a heavy dose of
judgement.

~~~
cmdkeen
It's about how you use bleeding-edge technology though, especially after
playtime is over and you've established it does what you want. All too often
the engineering - things like abstracting away the bleeding-edge tech, or
documenting what is going on - gets forgotten. Then some other poor soul is
left to pick up the pieces when said bleeding-edge tech releases a breaking
API change.

------
JulianMorrison
Raw UDP+checksum might have been even better. Easier to build on a tiny
microcontroller. Lighter on the network. Offers an expansion path to NAT hole
punching for push updates.
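
A rough sketch of the idea in Python (host, port, and framing invented for
illustration; a real device would do the same in C, but the shape is
identical):

    import socket
    import struct
    import zlib

    def send_report(payload, host="sync.example.com", port=9999):
        # Append a CRC32 trailer so the receiver can detect corruption.
        crc = zlib.crc32(payload) & 0xFFFFFFFF
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(payload + struct.pack("!I", crc), (host, port))

    def verify(datagram):
        # Return the payload if the trailer matches, else None (drop it).
        payload, trailer = datagram[:-4], datagram[-4:]
        (crc,) = struct.unpack("!I", trailer)
        return payload if zlib.crc32(payload) & 0xFFFFFFFF == crc else None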

------
terranstyler
I agree with the author. A few (actually still good) devs I know prefer to
solve a problem in a sophisticated manner - scalable and all that - in 160
LOC, instead of using the non-scalable 40-LOC solution that will also solve
the problem, if only for the next 12 months or so.

I think the problem is the incentives. As a founder I need the stuff done by
some deadline, while as an employee I prefer learning new stuff on the job.

I have found myself on both sides of this story, and I have never been more
productive than as a founder, because there I only choose the small-scale
solution and can complete two tasks per day instead of one task every two
days.

Note that 40 LOC is meant as a placeholder for "quick to program and easy to
maintain" code, and 160 LOC for "fast to execute, scales, worse to maintain".

EDIT: Worse to maintain as in "the solution is not immediately obvious, or I
need to understand the abstract concepts employed, or I need to know some
special lib to understand it". All three forms of "worse to maintain" are
examples of a greater investment of time and brain to understand what's going
on.

------
lazyjones
This kind of judgement is neither a sufficient nor even a necessary trait of
great developers. It's nice to have, but 10 such developers will not let you
build anything better than Dianne's solution.

Many great developers have misjudged their abilities or the practicality of
particular ideas and solutions. I won't name (and shame) any, but look for
examples among famous game developers.

------
waterlion
I never quite worked out what "rockstar" meant, but surely choosing new tech
for the sake of it isn't it.

~~~
chris_wot
[http://www.urbandictionary.com/define.php?term=Rockstar%20Programmer](http://www.urbandictionary.com/define.php?term=Rockstar%20Programmer)

~~~
waterlion
Haha I like definition #1.

------
mbesto
_Listen to the why's, and not the what's, and you'll hear judgement._

In my experience this is very ill-advised. The answers to "why" questions
vary so dramatically that it's extremely difficult to get an unbiased,
realistic answer. I normally ask a bunch of "how" questions, and eventually
the "why" unravels itself.

The questions presented sound good on paper but don't translate that way in
person. For example:

 _What's your least favorite part of Ruby and the Ruby on Rails framework,
and why?_

 _Tell me about the last time you used an interesting bit of technology, and
what you learned from it._

Are much more naturally spoken as:

 _How do you feel about your least favorite parts of Rails?_

 _How have you benefited from an interesting bit of tech?_

It's amazing how much more human responses become when you use the words "how"
and "feel".

------
progx
This is normal.

Some people (you call them rockstars) are interested in the newest technology,
and they want to use it to solve problems, no matter how efficient it is or
how well somebody can maintain it.

But you forgot one thing: the "Rockstars" use technology that will be normal
for the masses 2 or 3 years later.

Ruby, at first, was used by rockstars too.

It depends on your company and its size, but you need some rockstars and a
mass of good developers. The rockstars look for new things, try them, and show
how to use them. The mass of good programmers can look at the results and
choose what makes sense to use in a production environment.

There is nothing wrong with a rockstar programmer, only with people who think
they are rockstars and talk like "I am the king, baby". ;)

~~~
VLM
"But you forgot one thing: the "Rockstars" use technology, that will be normal
for the masses 2 or 3 years later."

Or sometimes they pick the wrong tech, and it dies, and someone else gets to
pick up the pieces 3 years later. Predictions are difficult, especially about
the future.

------
stormcrowsx
The problem the article forgets to mention is that Diane's skills stagnate.
Sure, she did better this time, but Jack has learned more. Next time through,
his solution will be faster and have fewer errors.

If everyone was like Diane and just chose what they knew we'd still be coding
in Assembly and be nowhere near the throughput and capability we have now.

I will often try multiple implementations if I have the time: break the
problem down to something simple I can do with a few languages and tools in a
day, contrast the results when I'm done, and then go full scale with the one
that seems a good fit. More often than not I end up with a combination of all
the solutions I tried, because the exercise gives me a chance to discover the
strong points of each.

------
leokun
Yes. Because good judgement requires understanding complex situations, and
that requires skill. And judgement is improved by experience.

For example, you see a bad thing in the code. If you can understand the
problem and the risks and benefits, you can make a good decision. If you can
make it better, understanding the time available to you and the risk to the
product, you should make it better, not put it off and let the bad thing
become more entrenched technical debt.

Making that kind of decision requires good judgement. It requires
understanding your own skill, time management, risk, and more.

Being able to exercise good judgement brings extreme rewards for any project,
as there will be fewer costly mistakes, more successes, and better code.

------
yashg
In my opinion a great developer is one who finds a solution to a problem. It
doesn't matter which technology he/she uses. If your organization is already
using some tech, then you expect the developer to use the same tech so you
don't have to hire 10 people for 10 different techs. And any decent developer
should be able to look at any code, figure out what it does, and be able to
tweak a few things here and there.

More important things are proper comments, readability of the code and
documentation. If there's a bug and you are not around, someone else should be
able to fix it by looking at your notes. The bug fixer doesn't need to be a
guru of that tech.

------
MrWhargarbl
I find the list of comments calling this "narrow minded", "contrived", or
generally "a bad article" telling.

It isn't; it's spot on. Outside the bizzaro world of overvalued,
under-producing nitwits, the vast majority of technology is written with
what's already understood by the team/developer. Because, you know, they have
budgets, and project managers, and families.

Jack isn't a rockstar, he's a self-important ass using his employer's/client's
funds to learn a new technology that no one can maintain.

------
akent
Maybe it's just because the post is out of date now (2011) but I'm not sure
why "new technology", "rockstar" and "produces unmaintainable code" are all
conflated together.

~~~
chris_wot
Because rockstar programmers tend to use the latest technology in bizarre ways
which makes for unmaintainable code?

~~~
StavrosK
Does "rockstar" mean "bad" in this context? Because "using the latest
technology for the sake of it" is not a trait of a good developer, and I have
no clue what the hell "rockstar developer" means any more (unless they're also
in a major rock band, then I know what it means).

~~~
chris_wot
It started out as a term of admiration, but rapidly devolved to one of
derision.

------
jaegerpicker
This is a pretty bad article. Sure, judgement is important, very important,
but sometimes using a new, better set of technologies is the better judgement.
Things aren't inherently unmaintainable just because they're new. That's where
the judgement part comes in: knowing which new technologies to use in order to
build a better/more maintainable system. This smacks too much of the author
saying "this is what I know, so it's the best and everything else is crap".

------
nanoscopic
Hacker News commentary options on coding advice:

1. Spin the advice to make it sound like the advice is bad

2. Turn the advice around to make the author look bad

3. Criticize technology choices (promoting your own choices of course)

4. Some variant of "duh" or "don't reinvent the wheel"

5. Lament the sad realistic state of the programming world

6. Cast the author and others into various categories (such as what I'm doing
now)

------
restlessmedia
On this subject: what I find funny, slightly scary, and hugely rewarding is
visiting old code and removing massive parts of it, only to leave it doing
pretty much the same thing.

It touches on a Coding Horror article about wanting to write code. Don't
'want' to write it; want to write as little of it as possible. If you do have
to write it, write as little as possible, and with the notion that you'll be
looking at it in 6 months going wtf.

------
moron4hire
Learn new technologies on your own time. Don't use the client to fund you
picking up the latest-greatest whatever. That's what personal projects are
for. You take the knowledge gained from your personal projects (and from
unforeseen circumstances in paid projects that you had to code your way out
of) and do what you _know_ will work for the requirements, versus what will be
exciting for you.

------
solve
Not too sure about the specific content, but that blog theme is beautiful. I
wish there was a collection of more text-based web designs.

~~~
gueno
It uses BigText
([https://github.com/zachleat/BigText](https://github.com/zachleat/BigText))
to make each line of text fit the container width.

------
ef4
You lost me when you put "elegant" and "totally unmaintainable" in the same
sentence. Does not compute.

------
otikik
> Dianne started as a Unix Admin

> Dianne wrote [...] with a MySQL DB. PostgreSQL would have been better, but
> she "knew mysql."

Is that even possible? I would bet that knowing PostgreSQL is basic for any
"Unix Admin" - assuming he means a Unix operations person there.

------
nealabq
The #1 trait of a great developer is posting subtly self-referencing comments
at HN.

Also up-voting said comments.

------
casca
This is clearly very dated. Everyone knows that Jack should be a ninja, not a
rockstar.

~~~
mwcampbell
Of course, the usage of both words in this context needs to die. "Jack thinks
he's a hotshot developer" would have been fine.

------
joshdance
These articles should always be taken with a grain of salt - maybe a few
grains when they don't have a real story to back them up. I want to hear about
2 real developers, what they did, and why one result was better than the
other.

------
blisterpeanuts
A star, in my book, is someone who gets the job done quickly, methodically,
and maintainably.

This is an old story, but I guess it has to be retold every few years, to
inform newer developers and to remind older developers (and management).

~~~
chris_wot
Sadly, in most cases you can only choose two of these qualities.

------
_random_
Diane will be happy to stay for 10 years while getting an average salary. Jack
will move on to a cooler place while bumping his salary up quite a notch,
because he knows the latest tech.

------
mattjaynes
Hire engineers that will optimize your apps and systems for the _highest
business value_ and the _lowest cost_.

It seems silly and obvious to say that, but many engineers seem to have
forgotten that goal.

The 2 mistakes I see over and over are systems that are optimized for
_Novelty_ and systems that suffer from _Neglect_.

## Mistake: Optimizing for Novelty

Bored engineers often optimize for novelty. That means they add technologies
to your systems not because they're needed, but because the engineers want to
play with them. Rather than consider the implications of making the systems
more complex and more costly to manage, they are easily lured by shiny new
technology and come up with justifications to use it even if it provides
_negative_ business value.

These engineers aren't bad people. They will legitimately think that the new
technology is a good idea. They may even make an impassioned business case for
it. But listening to these engineers is the equivalent of a naive young girl
believing a teenage boy "really loves her" at the end of a first date when he
wants to go to Make-Out Point "just to talk". The boy may actually believe
this in his heart, but any woman with more experience will see exactly what's
really going on.

To see companies that suffer from this, it's as simple as looking at a few job
postings for developers. Look for the postings that list requirements for
waaaay more technologies than the company could possibly need. It's a sign
that novelty has cursed that company.

For more on this, see my post:
[http://devopsu.com/blog/boring-systems-build-badass-businesses/](http://devopsu.com/blog/boring-systems-build-badass-businesses/)

If someone offers to make your systems "exciting and cool!", be very afraid.
Your systems are the foundation of your business and they should be simple,
secure, scalable, and generally pretty boring.

## Mistake: Neglect

The other problem I see frequently is systems that are simply neglected.

No one knows if the database backups work. No one knows if there are critical
security updates that need to be applied. No one knows if the site is down
until a customer complains. No one knows if the guy who quit a year ago still
has a copy of the production database. On and on...

You will often see neglect in systems that became complex due to novelty.
Every new technology that is added to a system requires monitoring,
documentation, and security updates. The more technologies get added to a
system, the more are at risk of becoming orphaned and neglected.

## Shameless Plug

Manage servers? One of the biggest wins against neglect is to use a
configuration management tool like Ansible, Salt, Chef, or Puppet.

If you want to make sure your systems are fast, scalable, and secure, the
first step is having full control and power over them.

Tomorrow, Sept 4th, I'm launching my book "Taste Test: Puppet, Chef, Salt,
Ansible" which is designed to save you the days or weeks of research when
picking one of these tools.

In the book, I implement an identical project with each tool so you can see
what each one is like to work with. You may be surprised at which ones were
super easy and which ones were really difficult to work with.

To get a discount for the book release, just sign up on the mailing list:
[http://devopsu.com/books/taste-test-puppet-chef-salt-stack-ansible.html](http://devopsu.com/books/taste-test-puppet-chef-salt-stack-ansible.html)

------
ianstallings
_Slender fingers_

------
a-priori
I completely agree that pragmatism is essential to being a great developer.
When designing a system, you should be asking yourself questions like:

1) What sort of load will this system need to handle now and in the
foreseeable future, and what sort of latency can it tolerate? How will that
load flow through the system? In this contrived scenario, the server and
database need to handle an average throughput of ~9 requests a minute with lax
latency requirements (it's okay if things are backed up by a couple of
minutes). The devices only need to handle one operation per hour. Planning a
safety margin of perhaps 10-100% above that is prudent; more than that is
premature.

2) What are the upgrade characteristics of each component: how will you
upgrade or replace each component? how long would it take? can you even
upgrade them? Some parts will be in a central location, and can be easily
upgraded with only minor downtime. Some will be deployed on hardware you don't
control or otherwise can't be easily upgraded. In the worst case, they're
embedded in firmware and you need to support them as-is for their entire
lifespan.

3) What are the failure modes of each component: what could cause them to
fail? how likely is that? what will happen if (when) they do? how will you
recover?

The answers to those questions determine how much risk you can take when
designing each component.

Also, look at where the answers are different for two components that talk to
each other. Any time there's a boundary between two components that can't be
upgraded at the same time, or where one component talks to another with
different failure modes, there's an interface. You need to think hard about
the interface between them: how will they communicate? how will you extend
that communication protocol over time while maintaining backwards
compatibility? how will they behave if the other has failed? Those interfaces
are locations where over-engineering a bit can pay off.
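
One hypothetical way to buy that flexibility at such a boundary is a version
field plus tolerant parsing, so components upgraded at different times keep
interoperating (the field names here are purely illustrative):

    import json

    def parse_sync_request(raw):
        msg = json.loads(raw)
        version = msg.get("v", 1)         # missing field => oldest version
        recipes = msg.get("recipes", [])
        # Ignore unknown keys rather than rejecting them, so a newer
        # device can talk to an older server and vice versa.
        return {"v": version, "recipes": recipes}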

So for this example, I would think hard about what runs on the devices and how
they talk to the back end, because that code is probably very difficult to
upgrade. I would use rock-solid, dependable technologies for those. Similarly,
the database probably stores important information, and one of its failure
modes is that all the data is lost due to hardware failure. So I would use a
mature database there, either MySQL or PostgreSQL.

But the back-end service? It's easy to upgrade, and because it's stateless,
its two failure modes are that it crashes and needs to be restarted, or that
the hardware fails and it needs to be re-deployed. Neither failure mode is
that bothersome, so it doesn't matter much what technology you use there. Use
whatever the team is most familiar with and whatever will let you iterate the
fastest.

------
arkj
c'mon!!! it's not all about trust, it's also about the code.

------
corresation
This is one of those "why _I_ am a great developer" sorts of posts that allows
people to pat themselves on the back and congratulate each other on how great
they are, passive-aggressively sending it out to the team in hopes of
denigrating some team member who likes newer things. In this case "judgement"
means "uses the things that I know", and there seems to be little evaluation
beyond that high-level use case.

Why was -- in this contrived scenario -- Jack's solution a "maintenance
nightmare"? Just because.

~~~
seclorum
Yeah, I agree with you: 'just because' .. the author of the article doesn't
know anything about protocol buffers, node.js and Cassandra. Probably because
those things are 'too new' and 'not proven' technologies - to him.

But if I had to take over a project from someone, to maintain it, I'd _MUCH
RATHER_ have it be done using those technologies than MySQL and Sinatra.
Yikes!

I recently had to make a bid on a project that would give users the ability
to share their works, produced in an embedded environment, with their
communities - a "Cloud" for the users. I designed the system architecture of
the embedded system to be as self-maintainable and as simple as possible.
Since it's an embedded OS environment that the users would be using, instead
of writing code I integrated such tools as git, and designed a system topology
that would shield the user from all the git hassle while still giving a
push-button experience.

I lost the gig to someone who decided that it would be better to re-invent
"topography ordering" (his words) in a custom MySQL+PHP application, and when
I was given an opportunity to challenge his design, I said "great, you're
going to re-invent Git, poorly".

He spent the rest of the day "pissed that you compared his project to ..
GitHub", which is what he thought I meant when I said "git". Nope, I said
"git", I meant "git" - not "github". He actually didn't know how to use git.

Either way, off they go .. re-inventing the DAG with MySQL+PHP in an embedded
environment, when they could've just wrapped the feature in a git script ...
yikes++!

So this article just demonstrates to me that there are a lot of blowhards out
there who just don't get why it's so important to apply the NIH principle to
themselves, not just to others ..

~~~
Amadou
_But if I had to take over a project from someone, to maintain it, I'd MUCH
RATHER have it be done using those technologies than MySQL and Sinatra.
Yikes!_

Except that isn't really an option in most cases.

The two most realistic choices are: take over a high-quality implementation
using somewhat stale tools, or an average-to-shoddy implementation with newer
tools that the developer was learning on the fly.

~~~
corresation
_Take over a high-quality implementation using somewhat stale tools or an
average to shoddy implementation with newer tools that the developer was
learning on the fly_

My experience has been that the developers who are most interested in
innovations in this industry are dramatically more likely to build better
solutions - the passion and interest cut both ways - versus the career
developer who does the absolute minimum necessary.

But of course with any given situation and set of developers your mileage will
vary (I've known on-the-edge developers who build ridiculously nice code, and
hanging-two-generation-back coders who make disgusting code. And vice versa).
This particular scenario bothered me because the nodejs / cassandra canards
were propped up under the illusion that the only advantage they offer was
nefarious "Web Scale". Only in reality for many projects they make it
_ridiculously_ efficient to build solutions. The same goes for Go right now --
there are practitioners who can build in moments what would have taken months
on a, for instance, ASP.NET team.

~~~
oinksoft
My experience is that for every developer who adopts a new technology for its
true merits, there are at least two others who use it only because it is at
the top of Hacker News/Ruby Weekly/other trendy publication, and those
developers take advantage of very few of the technology's benefits.

