
Coding Horror: Hardware is Cheap, Programmers are Expensive - Anon84
http://www.codinghorror.com/blog/archives/001198.html
======
swilliams
"Also, having programmers who believe that their employers actually give a
damn about them is probably a good business strategy for companies that
actually want to be around five or ten years from now."

This is probably the most overlooked reason to get at least semi-good
hardware. Part of the reason I quit my last job was management's stubborn
refusal to get us tools that would make us more productive.

~~~
bockris
I applied for a job in the spring of 2006, and when they were showing me
around the development area I noticed that everyone had a single smallish CRT
and what seemed like older computers. I asked about it, and they told me that
most people had P3-500's! Unsurprisingly, their salary offer was also low, and
I did not take the job.

~~~
swilliams
Yeah, at that point you can pretty much conclude that the company is doing
something incredibly wrong, and it's only a matter of time before it bites
them.

If you have crappy pay and perks, then you won't attract good talent, and
whatever good talent you had will eventually leave. Without good talent, it is
highly unlikely you'll have a good app. If your app sucks, you will slowly
bleed customers and revenue.

Of course, that's not a scientific fact. For exceptions, start with contracts
and government work.

------
gaius
This is a ridiculous myth that I wish we could just put to rest once and for
all. If your overpaid programmers are writing O(n^2) algorithms then that
cheap hardware is going to get expensive real quick. Even if hardware were
_free_, you'd still need to put it somewhere, power it, cool it, and so on.
There's no excuse for sloppy coding.
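
For anyone who hasn't run into this first-hand, here's a toy sketch of the
same job done quadratically and linearly (the function names, sizes and
timings are mine, purely for illustration). On a few thousand items the
difference is already seconds versus milliseconds, and no hardware budget
changes the shape of that curve:

    # Toy illustration: the same duplicate check written O(n^2) and O(n).
    import time

    def has_duplicates_quadratic(items):
        # O(n^2): compare every pair of items.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_linear(items):
        # O(n): one pass, remembering what we've already seen.
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False

    data = list(range(5000))  # no duplicates, so both do their full work
    for fn in (has_duplicates_quadratic, has_duplicates_linear):
        start = time.perf_counter()
        fn(data)
        print(fn.__name__, time.perf_counter() - start)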

~~~
mechanical_fish
_If your overpaid programmers are writing O(n^2) algorithms then that cheap
hardware is going to get expensive real quick._

Well, that depends on the value of _n_, right? Not every piece of software is
Google or Twitter: They don't _all_ have a userbase that grows virally to
encompass every breathing human in existence, plus animals and robots.
Especially if you charge the users money.

~~~
gnaritas
_Not every piece of software is Google or Twitter:_

Worse, almost _all_ software isn't like a Google or Twitter. In almost all
cases hardware is vastly cheaper than programmers. The Googles and Twitters of
the world are extreme outliers and are nowhere near the median.

~~~
axod
I don't believe that's true. Google, sure, but Twitter is just another
message-routing system, like any one of the IM networks.

The average IRC server probably handles more traffic than Twitter - certainly
email software does.

There's a ton of software that needs to scale, and be able to handle a large
volume of data.

~~~
gnaritas
_There's a ton of software that needs to scale_

Yes, but there's vastly more that doesn't. You need to stop thinking that most
software runs on the Internet and has tons of users; it doesn't. The vast
majority of software is written for businesses and runs either on an intranet
or is available on the internet to the users of that business.

For every public app you know about that needs to scale there are a hundred
you don't know about that don't need to scale. Most programmers work on
systems that will never need to scale, EVER. It is a tiny tiny minority of
programmers that actually work on systems that become publicly popular on the
Internet and actually face scaling issues.

Secondly, we're talking about programmers here, not the software. Most email
systems use _existing_ mail software written by that tiny tiny minority of
programmers who write such systems. The vast majority of programmers will
_NEVER_ write a mail system, a chat system, or anything that handles large
volumes of data or users. Most of them are doing one thing: biz apps for
in-house projects at big and small companies, exposing relational data to
in-house users.

How many IM networks are there? Now how many programmers are there? I'll say
it again: most software does not need to scale. The fact that a lot of software
does need to scale doesn't make that statement any less true, because a
shitload more doesn't.

~~~
gaius
_The vast majority of software is written for businesses and runs either on an
intranet or is available on the internet to the users of that business._

That doesn't mean a thing in terms of scalability. One user can max out a fast
machine with OLAP cubes or other DSS. Or any sort of simulation or modeling.
Think in terms of datasets, not in terms of users (many users are really just
a large dataset with a requirement for a quick turnaround on each computation).

~~~
gnaritas
If an app only has one user, it doesn't need to scale, it needs to perform.
Scalability is all about users and only loosely related to performance.
Datasets are irrelevant.

~~~
sadfsa
I don't know what dictionary you're using, but here are a few definitions from
mine:

scalable - how well a hardware or software system can adapt to increased
demands

scalable - the ability of a product or network to accommodate growth

scalable - the ability to scale hardware and software to support larger or
smaller volumes of data and more or fewer users

scalable - the ability of a computer system to shrink or grow as requirements
change

------
bdfh42
When I started as a programmer in the late 70s, the relationship between
hardware costs and programmer costs was just about the reverse of what Jeff
describes. It was always cheaper to optimise the code than it was to
invest in new hardware. I must admit that it is hard to shrug off the frugal
approach to the hardware that I adopted at the start of my developer career -
I think it still pays off to err on the side of low hardware overhead whenever
the option arises.

~~~
davidw
Hrm. I guess the guy cited in my online journal was really thinking ahead:

[http://journal.dedasys.com/2008/12/04/the-economics-of-programming-languages-in-the-1950s](http://journal.dedasys.com/2008/12/04/the-economics-of-programming-languages-in-the-1950s)

It's an extract from a paper from the 1950s by Backus (yeah, that Backus)
saying pretty much exactly what Atwood says.

------
kragen
This argument is applicable only to code that runs on only a small number of
machines. Using roughly his numbers, you can afford to pay somebody US$100 000
to work for a whole year to make your code 10% faster if that code uses 100%
of a million dollars' worth of hardware, which is only 200 high-end US$5000
servers or gaming machines, or 2000 low-end servers.
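
As a back-of-the-envelope sketch of that break-even (the one-year amortization
and the assumption that 10% faster code means 10% less hardware are
simplifications on my part):

    hardware_cost = 1000000    # e.g. 200 high-end $5000 servers
    speedup = 0.10             # assume 10% faster => ~10% fewer machines needed
    programmer_year = 100000   # roughly one year of one programmer

    hardware_saved = hardware_cost * speedup
    print("hardware saved:", hardware_saved)                        # 100000.0
    print("pays for the year:", hardware_saved >= programmer_year)  # True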

But that's assuming that the benefit of the speed improvement is merely
improved throughput resulting in data-center savings. If the issue is that
your non-AJAX UI is frustratingly and inconsistently slow and so your ex-
customers have switched to your competitors' AJAX app because it has
consistent 200ms response times when they click on buttons, it may not matter
if their app actually uses more CPU cycles than yours (because half of it is
written in JavaScript and your ex-customers aren't running Chrome yet). If
your game gets 20 frames per second and your competitor's game gets 35 frames
per second with nicer-looking graphics, you're going to lose sales, even if
the customers run the game for only 10 hours each.

And that's why gamer games and desktop apps are _still_ written largely in C,
C++, or Java with some Lua, and Google's large-scale apps are mostly written
in C++ and Java, while small-scale server-based applications and the stuff
developed by the in-house IT guys are written in all kinds of higher-
productivity, lower-performance languages.

~~~
teej
He is really writing this from the perspective of web applications. If your app
is humming along nicely on a single EC2 instance, it isn't even worth an HOUR
of your time to DOUBLE the capacity of your app.

The last big app I was in charge of ran 1M pageviews a day on $500/month. At
my standard rate, it isn't even worth a day of my time to double the capacity
of my app on the same hardware.

He certainly isn't saying don't optimize. He's simply saying that scaling web
apps STARTS with throwing hardware at the problem. Where and when you cut over
to developer optimization depends on your -actual- costs, size, and growth.

~~~
kragen
Yes, I agree, although it might be worth an hour of your time to improve the
app's responsiveness if it makes the customers happier.

------
mdasen
What this article lacks is an eye toward things that _must_ be optimized when
you get to a certain level.

For example, when you replicate databases across a full mesh of servers, the
replication overhead grows quadratically (n^2) with the number of servers,
because each of the n servers has to push its writes to the other n-1. So
eventually you can't just throw hardware at it, since at some point it's all
overhead replicating between the servers and you can't do any
selects/updates/deletes. Most people will never get to this level, and the
article is probably written for people who won't have to deal with this.
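
To make that growth concrete, a quick sanity check under the full-mesh
assumption (my illustration; the exact overhead depends on your replication
topology):

    # With n servers each replicating its writes to the other n-1,
    # the number of replication streams grows as n*(n-1), i.e. roughly n^2.
    for n in (2, 4, 8, 16, 32):
        print(n, "servers ->", n * (n - 1), "replication streams")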

So, at a certain level, you need to optimize your code to shard data, avoid
joins, etc. because you simply can't throw hardware at it.
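
Here's a minimal sketch of the "shard data" part (the shard names, count, and
hashing scheme are illustrative only): route each key deterministically to one
database, and accept that cross-shard joins are now your problem.

    import hashlib

    SHARDS = ["users_db_0", "users_db_1", "users_db_2", "users_db_3"]

    def shard_for(key):
        # Deterministic routing: the same key always lands on the same shard.
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for("user:42"))
    print(shard_for("user:1337"))

The naive modulo scheme also shows the cost: add a shard and most keys want to
move, which is why real systems reach for consistent hashing or a directory
service.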

Most people will never get to that level so throwing hardware at the problem
is often the answer, but if you have a lot of quadratic algorithms in your
code, you're going to hit scaling problems that will make throwing money at it
expensive to the point where you'll lose money.

------
cia_plant
Do software programmers really worry too much about optimization? If that's
the case, why are so many programs slow as a crippled sloth? Why does it take
10 seconds to open a freaking email client, when 1980s technology can do this
in less time than is humanly noticeable?

~~~
gaius
Well, no, they don't bother, because people keep telling them that hardware is
cheap and they're too expensive to work on anything that isn't new features.

------
old-gregg
This argument works only for enterprise/server development, where hardware is
under your control.

When developing for the consumer market, hardware is not "expensive", hardware
is simply not available: users control their hardware, and even the argument
of "ever increasing gigabytes and gigahertz" doesn't work anymore, as more and
more prefer to trade speed for increased portability or (surprise!) smaller
price.

------
12ren
There's an issue of need satiation. Once a need is met, further improvements
can't be absorbed - like water when you aren't thirsty, more just doesn't
matter.

Hardware helps performance - but this only matters when more performance is
needed. For example, if what's missing is features or usability, extra
performance doesn't help. This is obvious, but it's surprisingly easy to get
caught up in
improvements that are better, but that don't matter.

For productivity of programmers, faster hardware only helps when performance
is a problem. The article says: _If [it] makes them merely 5% more
productive..._ - that's an "if". Will a faster PC make you more productive?
These days, compilation is too quick for me to squeeze in a swordfight. For
other hardware (e.g. large screens) there is also a limit beyond which extra
real estate isn't useful (Joel on Software talked about several of his coders
who don't use their second monitor). It depends on the task, of course.

Faster is _always_ cool and impressive. But like a 300 miles per hour station
wagon, will it help you?

------
DaniFong
It's rare that I come across a problem where resource and speed constraints do
not become an issue. JavaScript, the DOM, and latency have been issues with
client-side web apps. DB bottlenecks have been an issue on the server side.
Numerical and AI performance are issues in CAD/CAM/CAE, games, and
scientific analysis. In every scenario there have been algorithmic and
software-level improvements that have sped up the system by a factor of ten
or more. Certainly it plateaus after a while, but one cannot hope to throw
hardware at complicated problems and expect them to be solved without some
serious elbow grease on the part of the programmer. Programs become
increasingly ambitious, and expand to fill the resources allocated to
them: a kind of Hacker's Parkinson's law.

------
lallysingh
The simplest method is still the best:

Write something that works. Run it at production loads. If it's not good
enough, work on it. Don't worry about speed until you hit that point. The only
exception is if you know N is going to be big enough to be problematic.

In that case, do just enough optimization to make it reasonable.

Why? Optimization and maintainability* are often at odds. It's best to avoid
the costs of optimization when it's not needed, and the increasing capacities
of computers should bias you towards maintainability.

* and ease of debugging, and concerns for low software complexity, and likelihood that it will be well documented, and .....
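
One minimal way to act on the "run it at production loads, then work on it"
step is to profile before touching anything, so the effort goes where the time
actually is. A sketch (the workload function is just a stand-in for your real
entry point):

    import cProfile
    import pstats

    def workload():
        # Stand-in for whatever your app actually does under production load.
        total = 0
        for i in range(1, 50000):
            total += sum(range(i % 100))
        return total

    profiler = cProfile.Profile()
    profiler.enable()
    workload()
    profiler.disable()

    # The ten functions costing the most cumulative time -- start there.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)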

------
natch
Am I the only one thinking those salary figures are around 30-40K too low?

~~~
ciscoriordan
It's the whole US; it isn't broken down by geographic region or anything.

~~~
redrobot5050
But even then, 30-40K is weak. 40K should be your starting salary. Anything
less, and you're being taken.

~~~
DougBTX
He said "30-40K too low", as in "30-40K lower than I would reasonably expect".

~~~
natch
Yep, you read that correctly.

------
zandorg
I practise the 'path' method of speed optimisation: if you can avoid executing
a piece of heavy code in even 10 out of 100 runs, you save a lot, just by
putting a simple data/value check in front of it.
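
A tiny sketch of what I mean (the function and tolerance are made up for the
example): the cheap check up front means the heavy path only runs when it can
actually change the result.

    def normalize(values, tolerance=1e-9):
        total = sum(values)
        # Cheap check first: already normalized (or empty), so skip the heavy pass.
        if not values or abs(total - 1.0) < tolerance:
            return values
        # Heavy path only runs when the check says it's needed.
        return [v / total for v in values]

    print(normalize([0.2, 0.3, 0.5]))   # guard fires, input returned untouched
    print(normalize([2.0, 3.0, 5.0]))   # heavy path actually does the work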

As for memory opts, pass by reference instead of value (in C++).

I've sped up code by a factor of hundreds by doing some crazy stuff (without
going to assembly).

~~~
gaius
Yep, memoization is often a big win, and so easy to do in Python with a
decorator.
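
Something like this, as a rough sketch of a hand-rolled memoize decorator:

    def memoize(fn):
        cache = {}
        def wrapper(*args):
            # Compute once per distinct argument tuple, then reuse the result.
            if args not in cache:
                cache[args] = fn(*args)
            return cache[args]
        return wrapper

    @memoize
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(200))  # instant; newer Pythons can use functools.lru_cache instead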

------
richcollins
Managing servers can be a real pain in the ass. If you can get everything to
work on one server, life is much simpler.

------
hs
i'd say programming environment (unix/mac, vim/emacs) and skill level (algos,
data structures, etc) matter more than hardware

it costs less to hire a skillful (but expensive) developer on an ok machine
than less skillful one(s) on expensive machine(s)

the supposed 10-100x gain from skill is pretty much true

------
arjungmenon
Now on Slashdot:
[http://developers.slashdot.org/article.pl?sid=08%2F12%2F20%2F1441203](http://developers.slashdot.org/article.pl?sid=08%2F12%2F20%2F1441203)

