
The machine's view of time, if nanoseconds were seconds - rictic
https://plus.google.com/112493031290529814667/posts/LvhVwngPqSC
======
kirubakaran
I liked this better:

L1 - You've already started eating the sandwich, and only need to move your
mouth and take another bite. (2 seconds)

L2 - There is a sandwich on the counter, so you need only find it, pick it up,
and begin eating. (10 seconds)

RAM - You're near the fridge, but you need to open it and quickly throw
together a sandwich. (3 minutes)

HD - Drive to store, purchase seeds, grow seeds, harvest etc. (1 year)

[http://www.reddit.com/r/programming/comments/90tge/hey_rprog...](http://www.reddit.com/r/programming/comments/90tge/hey_rprogramming_i_made_a_visualization_of_the/c0b1t0v)

<http://news.ycombinator.com/item?id=702713>

~~~
rryan
Actually, this sandwich analogy is not as good as the library/book one.

In this analogy, you are busy while you are making your sandwich (because you
are putting it together from stuff you got from your fridge).

In the library-with-free-delivery-service analogy, you can do other work while
you wait for data from the library/RAM to be delivered.

Modern superscalar processors can re-order non-dependent instructions while
waiting on a memory lookup, and that is what the free-delivery-service aspect
of the analogy illustrates.

~~~
slipperyp
Very much agree. The analogy I'm used to (and use) is:

CPU = person (researcher at library)
RAM = bounded physical space on desk
Swap = cart for stacks
Disk = the stacks (requiring scheduling of the elevator)

In my mind, the discussion of second/nanosecond is unimportant and makes this
seem more technical than it needs to be to illustrate the point that "fetches
from (non-SSD) disk are very slow and waste a lot of time." But this doesn't
seem to be quite as focused as "A complete idiot's guide to the main
components in a computer and what they do." (though I'm not sure that it's
either time scales or components or SSD)

~~~
gjm11
No! That version of the library analogy makes the ratios much too small.

CPU to main RAM: actually about 150:1; more like 30:1 in your analogy.

Main RAM to HD: actually about 200,000:1; more like 120:1 in your analogy.

The reason why "the discussion of second/nanosecond" is worth having is
precisely that if you just say "very slow" and "a lot of time" then you're
likely to think about ratios of the sort in your analogy, when the reality is
_much much much worse_. (Extreme case: HD to CPU registers. Actual ratio:
about 30 million to 1. Ratio in your analogy: about 4000 to 1.)
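
Those ratios fall out of ballpark latency figures. A quick sketch; the
numbers below are assumptions of mine, not from the thread:

```python
# Assumed ballpark latencies, in nanoseconds:
REGISTER_NS = 1 / 3       # one cycle of a ~3 GHz CPU
RAM_NS = 50               # a main-memory access
HD_NS = 10_000_000        # a ~10 ms spinning-disk seek

print(f"CPU to RAM:      ~{RAM_NS / REGISTER_NS:,.0f}:1")   # ~150:1
print(f"RAM to HD:       ~{HD_NS / RAM_NS:,.0f}:1")         # ~200,000:1
print(f"HD to registers: ~{HD_NS / REGISTER_NS:,.0f}:1")    # ~30,000,000:1
```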

------
dazbradbury

      And yet, if you can wait three years for the first wooden
      boat, it can often be at the head of a convoy which will 
      keep you busy for many thousands of years, sometimes even 
      orders of magnitude more if you take a minute to request 
      that another convoy sets out.
      -  James Gray
    

I was going to make a point about random access of one bit vs. sequential
access of large portions of data, but the comment from google+ above summed it
up perfectly.

Thanks for posting. A very insightful analogy, really putting things into
perspective.

------
jgw
Cool analogy. Makes a great reference point.

As an ASIC guy, I like to occasionally casually mention to software guys that
at 3GHz, light travels about four inches in one clock cycle, and it frequently
really blows their minds.

~~~
johngalt
Worked on me. I had to do the math.

186,000 mi/sec * 5280 * 12 = 11,784,960,000 in/sec / 3,000,000,000 cycles/sec
≈ 3.9 inches.

Extending that a little further: on a 45mm i7 chip, going from one end to the
other and back would be ~3.5 inches of travel. Gives me an idea of how much
of a constraint packaging is.
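
The same arithmetic as a quick sketch, using the textbook value for the
speed of light:

```python
C_M_PER_S = 299_792_458      # speed of light in vacuum, m/s
CLOCK_HZ = 3e9               # 3 GHz
M_PER_INCH = 0.0254

metres_per_cycle = C_M_PER_S / CLOCK_HZ           # ~0.1 m per clock cycle
inches_per_cycle = metres_per_cycle / M_PER_INCH  # ~3.9 inches

# Round trip across a 45 mm die:
round_trip_inches = 2 * 0.045 / M_PER_INCH        # ~3.5 inches
```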

~~~
dxbydt
Didn't get any of that so had to redo in metric.

speed of light = 3 x 10^8 m/s

3 GHZ = 3 x 10^9 /s

So 3 x 10^8 / 3 x 10^9 = 0.1 metre = 10 centimetre = okay, 3.9 inches

But then metric always saves your ass. Back in physics class, they'd ask you
how deep the well was if you dropped a stone and heard the water splash 10
seconds later. Before the American students could even begin their work, all
the Indians would yell "500 metres!" And that's because the gravitational
acceleration is 10, so one half times 10 times 10 squared is 500.

~~~
narkee
Maybe the Americans were busy incorporating the delay due to the finite speed
of sound.
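
That delay is not negligible, either. A quick sketch with assumed figures
(g = 10 m/s^2 as in the classroom version, sound at ~343 m/s):

```python
import math

def naive_depth(t_total, g=10.0):
    # Treat the whole interval as free fall: d = 1/2 * g * t^2
    return 0.5 * g * t_total**2

def depth_with_sound(t_total, g=10.0, v_sound=343.0):
    # Fall time sqrt(2d/g) plus the sound's return trip d/v_sound
    # must add up to t_total; solve for d by bisection.
    lo, hi = 0.0, naive_depth(t_total, g)
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.sqrt(2 * mid / g) + mid / v_sound < t_total:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(naive_depth(10))        # 500.0 m -- the instant classroom answer
print(depth_with_sound(10))   # ~392 m once the sound delay is counted
```

So the quick answer overshoots by roughly a hundred metres.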

~~~
rrrazdan
Is it just me who finds this suggestion offensive? Maybe the Indians had
already thought about that, and since they were already taking g as 10
m/sec^2, they figured the difference due to the sound delay would be
insignificant. As would be the variability in the speed of sound near the
water surface due to moisture.

~~~
narkee
My response was designed to highlight the equally offensive suggestion that
"all the Indians" were metric geniuses, while American students were all slow
and ignorant of the metric system.

------
zackzackzack
I really liked this. This is the first instance of a time scale for computing
that really made sense to me. It's a really good mental metaphor that cleared
up, for this script kiddie, how computers work.

Extending that thought to multiple cores/threads. Comparable to a small
business in a way? You have one guy who can go tell other people to do certain
tasks. They take anywhere from a few minutes to a few hours. You can set it up
so that there is a task list of things for people to do so that you don't have
to continually reassign each one, just tell them to pick up the next thing to
do. It's much harder and requires more organization, but ultimately, like the
division between a small business and a one man show, you get more shit done
with multiple people/threads/cores working in parallel than one single unit
working by themselves.

Thanks for posting this.

~~~
cube13
>Extending that thought to multiple cores/threads. Comparable to a small
business in a way? You have one guy who can go tell other people to do certain
tasks.

Depends on the architecture, actually. What you're describing is a lot like
the Cell processor design. For x86-based processors, it's much less organized
than that, because each core is, for all intents and purposes, an independent
processor.

The best analogy for multithreaded programming I've come up with is a(take a
drink) car factory. Each core is a generalized assembly line that is capable
of producing any part, but it takes time to switch to a different task. The
end goal is a car, which means that you can have one core working on the
transmission, one core working on the interior parts, one on the exhaust, and
one working on the motor. If you can balance them out, you can have all of
them finish around the same time. But to actually finish the car there is
some unavoidable overhead at the end, where the body is made and all the
parts produced by the other lines are actually put into it.

Compared to a single-assembly line factory, it's possible to make cars much
faster with multiple lines. But there will always be some percentage of time
that you cannot split across multiple cores.
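
That unsplittable percentage is Amdahl's law. A minimal sketch; the 90%
parallel fraction below is a made-up illustration, not a real measurement:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    # Amdahl's law: the serial fraction (1 - p) caps overall speedup
    # no matter how many assembly lines / cores you add.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# If 90% of the car can be built in parallel:
print(amdahl_speedup(0.9, 4))      # ~3.1x on four lines
print(amdahl_speedup(0.9, 1000))   # ~9.9x -- never more than 10x
```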

~~~
zackzackzack
Ahhhh. There is a nonzero cost to organizing all the tasks. And no matter how
many ways you divide the work up, there is always a nonzero time at the end
when you have to have everything on hand to assemble the final product. Is
that a good way of thinking about it?

------
scott_s
I've used this analogy in reverse. My roommate was also a CS PhD student,
and I explained that when it comes to toilet paper, _we can't afford to let
cache misses go to disk_.

------
daeken
Wow. I've been doing low-level work where I have to intimately understand
computer architecture and optimization work where every nanosecond counts for
as long as I can remember, but I've never put it into perspective. This is
awesome.

------
buff-a
OCZ Vertex 3's have been pounded for reliability problems[4], so much so that
they've just started a special deal on Newegg [2]. And coincidentally, I'm
sure, a jolly story about "a machine's view of time" replete with olde-worlde
charm, shows up on the front page of a major tech site, and oh, by the way,
let me end by saying "I use OCZ Vertex 3's"...

Tom's Hardware suggests that Crucial's m4 series is faster than OCZ Vertex
3's, and doesn't come with a horrendous approval rating. A 256GB m4 is $319
on Newegg [1].

Intel's new 520 SSDs appear to have given them a proper SSD instead of the
floppy-disc-like performance of the 510.[3] Though it's $499 for 240GB. [5]

All drives have failures, and while it sucks to be the one who gets the dodgy
drive, there will always be someone who can post "it didn't work for me".
However, the OCZ Vertex drives have an unusually high number of "it didn't
work for me" reviews. Is it a stitch-up? It'd be easy for "a motivated third
party" to buy 27 drives off Newegg and post negative reviews. It'd also be in
OCZ's interest to fan the flames of doubt on the SF2281, as they are releasing
new SSDs based on their own, newly-purchased Indilinx controllers. But taking
off the tin-foil hat, it does look like Vertex 3's have problems.

[1]
[http://www.newegg.com/Product/Product.aspx?Item=N82E16820148...](http://www.newegg.com/Product/Product.aspx?Item=N82E16820148526)

[2]
[http://promotions.newegg.com/OCZ/022912/index.html?cm_sp=Cat...](http://promotions.newegg.com/OCZ/022912/index.html?cm_sp=Cat_SSD-_-OCZ/022912-_-http%3a%2f%2fpromotions.newegg.com%2fOCZ%2f022912%2f696x288.jpg)

[3] [http://www.tomshardware.co.uk/ssd-520-sandforce-review-bench...](http://www.tomshardware.co.uk/ssd-520-sandforce-review-benchmark,review-32373-4.html)

[4] [http://www.newegg.com/Product/Product.aspx?Item=20-227-707...](http://www.newegg.com/Product/Product.aspx?Item=20-227-707)

[5] [http://www.newegg.com/Product/Product.aspx?Item=N82E16820167...](http://www.newegg.com/Product/Product.aspx?Item=N82E16820167088)

~~~
rictic
I have exactly zero hardware-manufacturer connections. I give you my word
that this was just a fun post that I've been thinking about off and on for a
long time. No one influenced me to write it in any way.

Why did I recommend the OCZ? AnandTech writes about OCZ products a lot, and I
trust them. I wrote the post a few days ago thinking that mostly my friends
would read it. It only occurred to me today to post it on hacker news, when I
saw that I got like 7 +1s from friends.

~~~
buff-a
This AnandTech?

"Back in October SandForce announced that it had discovered a firmware issue
that resulted in unexpected BSODs on SF-2281 drives on certain platforms. Why
it took SandForce several months to discover the bug that its customers had
been reporting for a while is a separate issue entirely. SandForce quickly
pushed out the firmware to OCZ and other partners. Our own internal testing
revealed that the updated firmware seemed to have cured the infamous BSOD."

Yay!

"As luck would have it, our own Brian Klug happened to come across an
unexpected crash with his 240GB non-Intel SF-2281 based SSD two weeks ago when
he migrated it to another machine. The crash was an F4 BSOD, similar in nature
to the infamous BSOD issue from last year."

Oh. This was written 2/6/12.

...

"Whatever Intel has done with the 520's firmware seems to have fixed problems
that _still remain_ in the general SF-2281 firmware."

...

"While it's nearly impossible to prove most of this, the fact that we're still
able to reproduce a BSOD on the latest publicly available SF-2281 firmware but
not on the SF-2281 based Intel SSD 520 does say a lot about what you're paying
for with this drive."

[http://www.anandtech.com/show/5508/intel-ssd-520-review-cher...](http://www.anandtech.com/show/5508/intel-ssd-520-review-cherryville-brings-reliability-to-sandforce)

~~~
rictic
Point taken; recommendation withdrawn. For the record, I ordered my drive on
the 4th, before that article was posted and after, I'd believed, the issues
had been resolved.

I don't expect that I'll receive an apology for being called an unethical
shill.

I'd still love to see some statistics on failure rates. All this anecdata is
obnoxious for a buyer to wade through.

~~~
buff-a
I apologize for calling you an unethical shill.

------
martin_k
Nice analogy. From a technical standpoint, however, access patterns often
make a bigger difference than the type of storage device. The gap between
sequential access on disk and sequential access on SSD isn't nearly as big
as the gap between random and sequential access on disk.

~~~
bostonvaulter2
But the orders-of-magnitude gap between disk and RAM is so large that
sequential versus random access on hard drives doesn't matter that much in
comparison.

~~~
martin_k
Well, it depends on the hard drives you're using and the exact access
pattern, but sequential access to disk can be even faster than random access
to RAM.[0] This of course doesn't really matter to the average Joe, since he
has little influence on how data is read from his disk. But if you're, say, a
database developer, it matters quite a bit.

[0] <http://dl.acm.org/citation.cfm?id=1536616.1536632>
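
The cited result comes down to simple throughput arithmetic. A sketch with
assumed figures of mine (4-byte records, ~100 ns per cache miss, ~150 MB/s
streaming disk), not numbers from the paper:

```python
def effective_mb_per_s(bytes_per_access, latency_ns):
    # Throughput when every access pays full latency and yields only
    # bytes_per_access useful bytes (no overlap, no prefetching).
    return bytes_per_access / (latency_ns * 1e-9) / 1e6

random_ram_mb_s = effective_mb_per_s(4, 100)   # 4-byte values, ~100 ns misses
sequential_disk_mb_s = 150.0                   # assumed streaming rate

print(random_ram_mb_s)   # 40.0 MB/s -- slower than streaming off the disk
```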

------
phreeza
To me this evoked the image of a monk doing work in a monastery with a huge
library. If you picture this monk as your CPU, and assume he works 12 hours
every day over a lifespan of 60 years, that's about the work a 1 GHz CPU is
capable of doing every second.
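
The arithmetic roughly checks out:

```python
HOURS_PER_DAY = 12
YEARS = 60

# Seconds of work in a 60-year life of 12-hour days:
seconds_of_work = YEARS * 365 * HOURS_PER_DAY * 3600
print(seconds_of_work)   # 946,080,000 -- within ~5% of the 1e9 cycles
                         # a 1 GHz CPU runs every second
```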

------
jakeonthemove
Nice analogy :-). If only SSDs had the storage capacity of a typical hard
drive...

By the way, get an Intel or a Samsung when your Vertex fails...

~~~
alexchamberlain
3.2TB is reasonably big... <http://bit.ly/yisUYo>

~~~
swalsh
also reasonably priced!

------
wxlittlemanxk
You can get a 200+GB SSD for a reasonable price, and while it might not sound
like it, 200+GB is a LOT of data. Chances are it will hold everything except
some media files (music / movies), and you can cheaply store those on an HDD
or a remote file server. Once you have that file server, you can play those
movies on any computer or any PS3/Xbox/iPad/Boxee (etc.) in your house, which
is really nice. So while it might cost a little extra and take a little more
work, there is a lot to be said for going down the SSD path. Of course you
can also get a smaller SSD and use it as your only disk drive, but SSDs
really do make that old file-server idea appealing, especially if you have
more than one computer.

------
nooneelse
And about 5000 years for the great user in the sky to notice something you
produce/do.

~~~
abecedarius
Actually 30 years or so for one human-second. (Rule of thumb for that: pi
seconds is a nanocentury.)

~~~
nooneelse
Hey, yeah, how did I get an extra block of "000" on my number with just a copy
and paste? Sorry everyone.

Anyway, I was going for human reaction time of about 0.15sec real world. So
about 4.75 years in the analogy.

------
ackdesha
Wouldn't it be great if I could just click on a file system object (like a
drive in the file browser) and view a diagram of concentric circles for
registers, cache, RAM, the I/O layer (disk buffers/caches), and finally the
actual physical device?

The size of each circle would be relative to the avg. amount of time needed to
move data to the CPU. To me this would be more intuitive than charting system
data.

------
nirvana
I think this is in error. The delivery times for SSDs are correct only if you
consider the periods when the SSD is working. When the SSD fails, the
delivery time comparison is the AGE OF THE SUN. Ok, I kid. You never get your
data. So let's call "age of the sun" an average between really fast and
infinity.

I've owned 3 SSDs and have had 2 failures, so far, over 2 years[1]. In the
past 20 years, I've owned around 100 hard drives and have had only 4 failures.

This is the Achilles heel of the SSD for me. I've gone back to spinning rust
because I need the reliability more than I need the performance.

The performance was nice, very nice. But having to restore from backup is
something that I do not like doing every year. I'd like to do it once a decade
or less.

Until then, I'm no longer using SSDs.

I did a bunch of research into why SSDs fail, and inevitably it seems to be
software bugs due to the SSDs being clever. I suspect the Samsung SSDs that
Apple uses are not clever and thus do not fail. I will use an SSD if it comes
with an Apple warranty. But I had an Intel SSD fail, and I had a
SandForce-based SSD fail. Both failed catastrophically with zero data
recovery (fortunately I had backed up, though in both cases I lost a couple
hours of work for various reasons). In both cases, near as I can tell, the
SSD had painted itself into a corner: it actually hadn't been used enough to
have flash failures sufficient to be a problem, let alone in excess of the
extra capacity set aside. Nope, it
was a management problem for the controller that caused the failures. These
kinds of problems can be worked out by the industry, but given that the
market has existed for 3-4 years now and we're still having these kinds of
problems, I'm going to wait before trying something clever again.

[1] The one that is still working is in my cofounder's machine, and I'm
dreading the day that it too fails. I am afraid it is just a matter of time,
and as soon as we can reshuffle things they'll be using spinning rust again
as well.

~~~
regularfry
HP sell SSDs with guaranteed numbers of write cycles that will tell you when
they're going to fail, but they aren't cheap. It's one of those cases where
there are _markedly_ better options available in the server space than for
consumers.

~~~
rdl
I've also had both SandForce and Intel (310, 320, 520, and 710!) SSDs fail --
the issue is not running out of cycles (which could be predicted, or mitigated
by buying more Enterprise style SSDs), but rather weird controller errors.

One (milli-)second the drive is totally fine, then the next it is just gone.
Magnetic disks usually give some warning, and usually fail for mechanical vs.
controller reasons.

I still use SSDs, but am constantly wary, especially in the first month of a
new drive's service.

