
Joel on Solid State Disks - dmytton
http://www.joelonsoftware.com/items/2009/03/27.html
======
smanek
I'm a big fan of incremental builds - there's no point in recompiling 1M LOC
when only a few source files have changed. Image-based systems (like Common
Lisp) are particularly nice because you can just recompile the functions that
have changed and have the changes take effect instantly on a running image.
Unfortunately, ASDF (the most common build/packaging system) doesn't have
particularly good support for incremental builds yet, but it's an area of
active development (particularly with XCVB and other new build systems).

But don't GCC, Java, and Visual Studio all support parallel builds pretty
easily? I've never looked into it, but I know I've read articles about it.

What sort of language/env. does FogCreek use? I vaguely remember hearing that
they had written their own compiler in Java that transformed ASP to PHP or
some other such ludicrous thing.

Edit: Some googling showed that they decided their software had to run on
Windows and Unix boxes, and that the best way to do that was to support ASP
(pre .NET) on Windows and PHP on *nix. They already had a large ASP code base,
so they wrote a compiler (named Thistle) that could translate a subset of ASP
into PHP. Then, it seems like MS deprecated ASP (and they started to realize
how much it sucked) so they wrote their own language (Wasabi) that is based on
VBScript and can be compiled into ASP, PHP, or JS. My God. I would gouge my
eyes out if I had to work there. From the fragments of code I've seen, it
frankly seems horrible. See
<http://www.fogcreek.com/FogBugz/blog/category/Wasabi.aspx>

Although, I guess I can't really judge. I recently wrote some ridiculously
convoluted Java code that automatically and transparently tricks the JVM into
making certain recursive functions tail-call optimized (i.e., using constant
stack space regardless of how deep the recursion is) by transforming the
function to throw and catch exceptions that manually unwind the stack when I
want to. Ah, I miss Common Lisp - I never knew how good I had it.

~~~
thwarted
If you define your dependencies correctly, make can do parallel builds (using
the -j option). Of course you will have to use Makefiles in that case, which a
lot of people don't like.

~~~
smanek
Yep, right you are. That seems like a much more reasonable way to go.
Parallelize the build - not the compiler.

------
inklesspen
Ah, the cost of premature optimization: a couple thousand bucks and a couple
of days of monkeying around. Better to first figure out where the bottleneck
is (and maybe listen to your developer, since he thinks parallelization will
help).

~~~
pkaler
Listen to the last Stack Overflow podcast. Every single month that they ship
FogBugz earlier is an extra $200k in revenue share for the developers.

The developer may know how to solve the problem in code. But, Joel is the CEO.
He has a better idea of how this affects the Balance Sheet and Income
Statement. It's kinda, sorta his job.

~~~
inklesspen
Except that (a) he spent a couple days of his time being wrong about it, and
(b) he then wrote a blog entry about how he was wrong about it.

Imagine if instead he'd said to the guy "okay, spend three hours profiling the
build process and get me some suggestions with time estimates", and they'd
found some likely prospects, and three days from now he gets to post about how
they'll be shipping the next release a month earlier because of the
improvements they made to the build process.

~~~
spolsky
No matter how much you speed up a 30 second build, you're not going to save a
month on the release process. Be realistic, now.

SSDs provide so many other performance benefits--even just launching apps--
that they're going to make our developers a lot happier anyway. And my time is
far less valuable than a developer who is in the critical path to shipping.

~~~
calambrac
A 15 second build over a 30 second build saves way more time than 15 seconds a
build. It means that I'm more likely to hit compile after a smaller change
before moving on to the next thing. It means I can iterate more rapidly on my
approach to fixing a bug, keeping the issue hotter in my mind. It means I have
half the time to get distracted by something shiny. Any time you can make the
build perceptibly faster, you win big.

~~~
Timothee
Playing the devil's advocate, I would say that you can also look at it the
other way and say that a developer might not build as often but instead make
sure his/her code is right before building, thus being more careful about what
s/he's writing. (just for the sake of argument ;))

~~~
calambrac
Right, because I was advocating typing line noise until it makes it past the
compiler.

I can't count the number of times I used to make a tiny change that I didn't
think was worth running the build for, only to have it be the first thing to
pop up as wrong next time I compile (now I always run a build, because we made
it super fast).

And iterating on a bug can involve a lot of little changes; it can involve
writing a ton of unit tests trying to duplicate the problem; it can involve
subtle interactions that all seem right until you figure out what the issue
is. Yes, you have to think, but sometimes you just need to churn through it
too, and frankly it's ridiculous to suggest that a faster build process
wouldn't help with this.

And I, personally, get distracted pretty easily. This comment comes courtesy
of the five-minute test suite I'm plowing through in the background right now.

------
mmelin
I think this was just an excuse for Joel to get SSDs for his laptop and
workstation - and a good one at that!

~~~
spolsky
Yeah, otherwise my boss never would have let me spend the money ;/

~~~
mmelin
Yes - I love everything about being self-employed except for the fact that my
boss is an asshole.

~~~
wglb
Same here. And he wants me to work all the time.

------
CRASCH
I can say my build times went down considerably. However I moved to a
completely new machine.

I didn't plan on writing an article, so I don't have any official benchmarks.
I'm guessing they dropped from over a minute and a half for a full recompile
to under 30 seconds. Incremental builds used to take about 20 seconds and now
take between two and ten seconds.

I went from an Athlon 64 at 2.2GHz with 2GB of RAM, a 74GB Raptor, and
another 74GB Raptor for code, to a Core i7 Extreme at 4GHz with 12GB of RAM,
2x80GB Intel SSDs in RAID 0, and one 80GB Intel SSD for code.

A couple of important things to keep in mind: Intel is still the way to go.
Other SSDs can actually be slower at writes than a normal hard drive if the
usage pattern is lots of small files - which is pretty much what you get when
you compile.

One other point: crappy anti-virus software can hurt IO performance badly. In
my tests, Trend Micro slowed IO by about 12x and Norton by about 24x - yes,
Norton cut my IO throughput to roughly 1/24th. AVG Free cost about 19%. So he
could have negated the performance gain from the SSD by running bad
anti-virus software.

I can say my productivity feels like it nearly tripled.

The important measurement from a programming perspective is not necessarily
just a second or two here or there - my workflow changed. I used to hit
build, walk out to get a drink from the fridge in the garage, and it would
still be compiling when I got back. Now I barely have time to glance at my
email. Those interruptions can take you out of the zone, and it can take a
long time to get back in: getting up could easily cost me fifteen minutes or
even half an hour just because I lost my train of thought.

Update: I'm running Windows 7 and Visual Studio 8 on this machine.

------
tsally
Suggestion: check whether a process is IO bound _first_ before spending
hundreds of dollars trying to make it faster with SSDs.

~~~
rgoddard
My guess is that this was the cheaper option. It only cost Joel a couple of
days and several hundred dollars on the hard drive. What it did not do is
interrupt his programmers. As Joel has already described, he is now a grunt
worker: his job is to make it as easy as possible for others to do their
jobs. Their time is probably worth 10 to 100 times his, because they are the
ones actually making something that directly results in more sales and money
for the company. So all Joel had to do was save the programmers a couple of
hours of work and he would come out ahead.

Plus these additional side benefits: The programmer who was complaining does
not feel like he is being ignored. Joel is now going to upgrade most of their
computers, making it better for the entire company. So lots of warm fuzzies to
go around, leading to happier programmers, leading to a better product.

~~~
litewulf
The cheaper option would be to run a compile and open any of the many
different programs in Windows that tell you whether you're using lots of CPU,
lots of RAM, or spending the time IO-waiting.

Then you go AHA! I'll spend money on CPUs or making my compiler/build
parallel.
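The Unix-side version of the same check doesn't even need a monitoring tool: time the build and compare CPU time (user + sys) to wall-clock time. If they're close, the build is CPU-bound and a faster disk won't help; a large gap means the process is mostly waiting. The toy commands below are stand-ins for a real build.

```shell
# Heuristic sketch: classify a command as CPU-bound or waiting by
# comparing user+sys against real time. A busy loop stands in for a
# CPU-bound compile; `sleep 1` stands in for an IO-wait-heavy one.
export LC_ALL=C
bash -c '
  TIMEFORMAT="%R %U %S"   # real user sys, space-separated, one line
  { time bash -c "i=0; while (( i < 500000 )); do ((i++)); done"; } 2> cpubound.txt
  { time sleep 1; } 2> waiting.txt
'
classify() {
  awk '{ if ($1 > 0 && ($2 + $3) / $1 > 0.5) print "CPU-bound"; else print "waiting" }' "$1"
}
classify cpubound.txt   # typically prints "CPU-bound"
classify waiting.txt    # typically prints "waiting"
```

The 0.5 cutoff is an arbitrary illustrative threshold; in practice you'd just eyeball the two numbers after running `time make`.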

------
MichaelApproved
So he didn't fix the compile speed but he fixed the speed of _everything_
else. I know every second I have to sit and wait for my hard drive to catch up
is a second I'm spacing out instead of working. As a developer I just want to
get my idea out and code. Anything that can be done to make that faster makes
me a happier coder.

------
blasdel
He doesn't mention that the "compiler" that they're bottlenecking on is the
one they wrote themselves, for "Wasabi" -- compiling their private language
into VB or PHP.

On top of that, _you don't fucking parallelize compilers!_

You use a goddamn build system to do that (make, et al.). Of course, that
requires that your compiler be decent enough to compile modules
independently, and not recompile unmodified source. What do you want to bet
that their compiler is just ridiculously awful?

------
jrockway
_There’s an open source app called Clonezilla, which, I have to say, is only
free if your time is worthless._

What about dd if=/dev/old-disk of=/dev/new-disk bs=16M ?

~~~
spolsky
If disk sizes are different, which they almost always are, AFAIK dd doesn't
quite hack it; otherwise I would have just booted off an Ubuntu live CD and
done that.

~~~
wmf
Yes, after the dd you have to run parted to adjust the partition table and
then possibly use some other tool to grow the filesystem.

~~~
blasdel
If the new volume is smaller (which it almost always is when switching to a
SSD), you'd need to _shrink_ the filesystem _before_ you block-copied it with
dd.
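Putting the subthread's steps together, the workflow is: shrink the filesystem first if the target is smaller, dd the device, then fix the partition table and grow the filesystem. Since running that on real disks is destructive, the runnable part below demonstrates only dd's byte-for-byte copy on ordinary image files; the device names in the comments are placeholders.

```shell
# Sketch of the clone workflow from the thread. On real hardware you would:
#   1. shrink the filesystem if the new disk is smaller (e.g. resize2fs)
#   2. dd if=/dev/old-disk of=/dev/new-disk bs=16M
#   3. adjust the partition table with parted, then grow the filesystem
# This demo only exercises step 2's copy semantics, safely, on files.
cd "$(mktemp -d)"
dd if=/dev/urandom of=old.img bs=1M count=4 2>/dev/null   # stand-in for the old disk
dd if=old.img of=new.img bs=1M 2>/dev/null                # the clone step
cmp old.img new.img && echo "images identical"
```

The `cmp` at the end is the point: dd copies raw blocks exactly, which is precisely why it neither shrinks nor grows anything for you.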

------
richcollins
I guess you shouldn't always trust your assumptions, but did he really think
compilation might be I/O bound?

------
ilaksh
With the current abundance of incremental and/or parallel tools, the fact that
the compilation process takes up a significant amount of time is your first
clue that there is something fundamentally wrong with the technology selection
and overall thought process at this shop.

Also, he could have saved $350 and bought the 80GB drive (unless he just HAS
to store 40+ feature-length pirated movies on his boot/app drive?).

------
markessien
This could have been quickly tested by using a memory disk and seeing whether
the compile goes faster when the files are located there.

------
jwr
To summarize the article: yeah, SSDs are faster, but not quite for everything,
because if you're CPU bound, SSD won't make your code run faster.

So, why does this get upvoted, again?

------
rozim
They should try distcc

------
chiffonade
So... he concludes that CPU bound tasks are CPU bound, and that parallelizing
compilation (technically it would be the dispatch of compilation) is better?

Every time I read this guy I fail to get what his allure to developers is.

------
mynameishere
Seriously? Compiling? What language are they using that requires actual,
regular, compile processes?

~~~
smanek
What's wrong with that? All else being equal, compiling will generally be
faster. Most languages (Python, Java, Ruby, most Common Lisps, etc) _do_
compile - just (usually) to some intermediate and portable bytecode rather
than raw machine code. That's probably what their compiler (Wasabi) does too
(except, it seems like they are using PHP/VBScript/JS as their bytecode ;-)).

~~~
spolsky
In FogBugz 6 the compiler output was PHP and VBScript.

In FogBugz 7 the compiler output is .NET/CLR bytecodes.

It also emits compressed JavaScript which runs on the browser.

~~~
blasdel
You should write a new article about Wasabi as it is now, so we can stop
making fun of you for it.

