

Facebook Shakes Hardware World With Own Storage Gear - kevin_morrill
http://www.wired.com/wiredenterprise/2012/02/facebook-builds-storage-gear/

======
raganwald

      “People who are really serious about software should make their own hardware.”
    

Alan Kay,
[http://www.folklore.org/StoryView.py?project=Macintosh&s...](http://www.folklore.org/StoryView.py?project=Macintosh&story=Creative_Think.txt)

------
dsr_
This is a placeholder for the interesting story that will come in May at the
Open Compute Summit. Until then, no real content.

~~~
alexgartrell
I think this is a little unfair. Lots of people skim the comments before going
to the article, and they'd assume you meant there's no content there (other
than "Facebook will talk about it in May"). But the authors did talk to a
source or two at Facebook, and they do mention Rackspace's open virtualization
effort.

I thought it was a cool article.

------
mmc
Does anyone know any technical details behind this paragraph? Specifically,
are they talking about a new kind of interconnect technology with low power
over ~1m distance?

(Searching for "rackspace virtual I/O" was not so useful.)

"Rackspace is leading an effort to build a “virtual I/O” protocol, which would
allow companies to physically separate various parts of today’s servers. You
could have your CPUs in one enclosure, for instance, your memory in another,
and your network cards in a third. This would let you, say, upgrade your CPUs
without touching other parts of the traditional system. “DRAM doesn’t [change]
as fast as CPUs,” Frankovsky says. “Wouldn’t it be cool if you could actually
disaggregate the CPUs from the DRAM complex?”"

~~~
bri3d
I don't think this would be good at all for most real workloads - you'd be
taking the performance hit of high-latency memory at all times. Even the most
hardcore NUMA vendors try to keep DRAM CPU-local, and writing high-performance
software for NUMA generally involves ensuring that your data stays close to
your CPU. Otherwise, missing a branch or getting preempted by another task
that flushes your cache lines becomes really, really expensive.

I do think this could be useful for a memcached workload, though, in tandem
with some smaller amount of fast CPU local memory - you could basically share
"memory bricks" between CPUs, and swap CPUs out independently without evicting
an entire system's worth of memcache.
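The latency trade-off described above can be made concrete with a
back-of-the-envelope average-access-time calculation. The figures below
(~100 ns for CPU-local DRAM, ~1,000 ns for a hypothetical remote "memory
brick" reached over a rack-scale fabric) are illustrative assumptions, not
numbers from the article:

```python
def avg_access_ns(local_hit_rate, local_ns=100.0, remote_ns=1000.0):
    """Average memory access time when a fraction of accesses hit fast
    CPU-local DRAM and the rest go to a remote, disaggregated pool.
    Both latency figures are illustrative assumptions."""
    return local_hit_rate * local_ns + (1.0 - local_hit_rate) * remote_ns

# All-local DRAM: the baseline every access pays.
print(avg_access_ns(1.0))   # -> 100.0

# Fully disaggregated memory: every access crosses the fabric.
print(avg_access_ns(0.0))   # -> 1000.0

# The memcached-style scenario: a small amount of fast CPU-local
# memory absorbs most accesses, so the shared "memory bricks" are
# only paid for on misses.
print(avg_access_ns(0.95))  # ~145 ns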

------
victork2
Tomorrow: "In other news, it appears that Facebook has lost every picture
hosted on its service."

More seriously, though, there was an interesting article on Google's data
centers and how they customize their hard drives to fit their needs. There was
also a white paper on how, at that massive scale, cosmic rays quite often
corrupt stored data. The same survey said that every 3 minutes a hard drive
fails in one of their data centers somewhere in the world.

Pretty neat stuff, but I can't find the source, sorry.

~~~
genkaos
About that cosmic rays thing... wasn't that about DRAM?

[PDF] <http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf>

~~~
victork2
Ah damn, yes, you are right! Never mind then ;), still interesting though!

~~~
genkaos
Thanks, I almost forgot about that paper! :D

------
larrys
"Now, Facebook has provided a new option for these big name Wall Street
outfits. But Krey also says that even among traditional companies who can
probably benefit from this new breed of hardware, the project isn’t always met
with open arms. “These guys have done things the same way for a long time,” he
tells Wired."

Maybe one reason is because they've been around long enough to know what
happens with bleeding edge technology.

And as the (old) saying went, "nobody ever got fired by going with IBM".

But the truth is that the reliability requirements, and the financial loss and
"shit hits the fan" fallout if a Wall Street system goes down, are much
greater for a traditional business system than if the same thing happens to a
free service like Facebook, or to somebody's "Show HN what I built this
weekend" app.

So of course they are going to move slower. And they should. They have more to
lose.

~~~
pnathan
If Facebook transitions to its new hardware and the hardware begins to crash
and burn, users will start getting affected, and so will Facebook... there are
competitors who would _love_ to eat Facebook's lunch. So reliability is pretty
high up there for Facebook too.

~~~
unexpected
If Facebook is down for 2 hours, you're not suddenly going to sign up on
MySpace. By contrast, other companies can lose millions of dollars in that
same timespan!

~~~
mrdodge
Major stock exchanges have gone down for hours, and I've had trouble reaching
my bank's web site for hours. Let this myth of the enterprise having any idea
what it's doing die.

And Facebook could lose millions of dollars in that time span, depending on
what time of the day the downtime occurs. They make money from advertising,
down-time means no clicking and no eyeballs.

------
GnomeChomsky
What are some examples of the hardware Facebook's leaving out, and that
'traditional' suppliers were insisting on leaving in? (cf. Peter Krey's quote)

~~~
flyt
Bezels; complicated, difficult-to-configure LOM (lights-out management)
systems usually tied to proprietary vendor management software; fiddly
components that are easy to optimize and build once but that introduce extra
overhead in long-term maintenance (e.g. small screws on drive carriers,
chassis that require tools, etc.)

------
thrusong
I hope this means that they'll open source Haystack, but I assume it will only
be the hardware designs.

------
wslh
Sorry for the joke, but did they include a "Like" button there?

------
shingen
It's fascinating to me how software companies like Google, Facebook, Apple,
and others have had to push the hardware industry forward, because hardware
makers often seem so loath to eat their own children.

I suspect a lot of that correlates with the net cost of changing hardware
compared to changing software (not to mention the relative margins in those
businesses).

~~~
iamgoat
This can be said for the energy and medical industries, too. We are smart
enough to solve all the world's problems _, but we won't do it until we are
backed into a corner and need to.

_ Not including war and religion.

