
Zuck’s photos from Facebook’s futuristic Arctic data center - newhouse
https://techcrunch.com/gallery/facebook-lulea/
======
ChuckMcM
Pretty interesting. I love how bespoke data centers are converging along very
similar lines. I'm sure if you were inside a Microsoft, Google, Amazon, or
Facebook data center you would recognize the new design touch points. The old
data center is dead, long live the new data center.

This was the second big thing Google had learned early on: _" The equipment is
reduced to its basics so it runs cooler. It can also be easily accessed and
repaired quickly."_ -- slide 12/17

The whole sheet-metal box around a server is a real waste if your employees
are the only ones with access to the area, and the only reason they touch a
machine is to repair it. This is in contrast to NetApp (where I had worked
before), which was busily designing impressive cabinets that would "stand
tall" on the raised flooring of the data center.

~~~
nickpsecurity
I think the lesson came earlier, in the NUMA and MPP machines, where they
kept trying to cram more stuff onto boards that were themselves pluggable into
the larger system. This convergence has happened from several directions. It's
not all that different from the earlier one that started in the 1960s, where
they fought cost and inefficiency by getting as few components per box as
possible, sharing as much as possible. Moore's Law temporarily reversed it
(transistors and memory are free!), then the reality check hit: this seems to
be a fundamental principle.

My design a while back was to put it all on PCI cards on a PCI backplane. I
had seen backplanes that basically look like motherboards full of PCI slots
that load into racks. I wanted to make the cards nothing but CPU and memory,
whose software communicated over efficient networking (not TCP/IP) through PCI
DMA. My design had the IO/MMU functionality in the backplane or the PCI cards,
with at least one card having a full-featured stack for management and at
least one I/O card for the external interface. I figured the backplane itself
could be extended for that, too, with a dedicated port, like motherboards do
with integrated GigE. Management and I/O could come through remote DMA over
dedicated wires, like many servers do with Ethernet, so all the PCI slots
could be dedicated to compute.
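
A toy model of that message-passing scheme (all names are mine, not from any
real design; actual remote DMA would move bytes between the cards' physical
memory over the backplane, which a shared Python object only stands in for):

```python
from dataclasses import dataclass, field

SLOT_SIZE = 64  # bytes per message slot in the shared region

@dataclass
class Backplane:
    """Stand-in for the shared PCI address space the cards DMA into."""
    slots: list = field(default_factory=lambda: [b""] * 8)

class ComputeCard:
    """Hypothetical CPU+memory card; talks only via DMA writes, no TCP/IP."""
    def __init__(self, card_id: int, bp: Backplane):
        self.card_id = card_id
        self.bp = bp

    def dma_send(self, dest: int, payload: bytes) -> None:
        assert len(payload) <= SLOT_SIZE
        self.bp.slots[dest] = payload  # models a DMA write into dest's slot

    def dma_recv(self) -> bytes:
        msg, self.bp.slots[self.card_id] = self.bp.slots[self.card_id], b""
        return msg

bp = Backplane()
a, b = ComputeCard(0, bp), ComputeCard(1, bp)
a.dma_send(1, b"job:matmul chunk 17")
print(b.dma_recv())  # -> b'job:matmul chunk 17'
```

The point of the model: compute cards never see a network stack, just reads
and writes against slots in a shared address space, with management riding a
separate dedicated channel.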

The dumbest thing about Facebook's model is destroying drives. The first
thing to notice, per Ross Anderson's Security Engineering, is that those
pieces still contain a _lot_ of data if they weren't degaussed first. The next
is to remember the fastest way to destroy data: use clustered, encrypting
filesystems so that secrets never touch the drive in the clear. Then you just
have to delete the keys to lose the secrets. No need to trash the drives at
all. The crypto can happen at the storage manager or at the hardware
interface, with HW acceleration available for both. I'm surprised they haven't
already built this, with all the smart people they have working on big-data
stacks.
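
The key-deletion idea ("crypto-shredding") can be sketched in a few lines.
This is a toy: the SHA-256 keystream XOR below is NOT a real cipher, just a
stand-in for AES so the sketch stays self-contained, and `key_store` stands in
for a key-management service:

```python
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed by SHA-256(key || counter).
    Illustrative only -- real systems use a vetted AEAD cipher."""
    out = bytearray()
    for off in range(0, len(data), 32):
        pad = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out.extend(c ^ p for c, p in zip(data[off:off + 32], pad))
    return bytes(out)

key_store = {}  # stands in for the key-management service

def write_encrypted(name: str, plaintext: bytes) -> bytes:
    key = secrets.token_bytes(32)
    key_store[name] = key
    return keystream_xor(key, plaintext)  # this is what hits the drive

def shred(name: str) -> None:
    del key_store[name]  # deleting the key "destroys" every ciphertext copy

on_disk = write_encrypted("user123", b"private message history")
print(keystream_xor(key_store["user123"], on_disk))  # readable while keyed
shred("user123")
print("user123" in key_store)  # False -- the ciphertext is now useless
```

The drive only ever holds `on_disk`; once the key is gone, every replica and
backup of that ciphertext dies with it, which is the whole appeal.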

~~~
nocarrier
To your last paragraph: relying only on forgetting the keys works great, as
long as you have absolute 100% confidence in the mechanism used to do that. I
read your posts on HN often, so I know you're quite familiar with defense in
depth--I feel that user data is one of those areas where it's OK to do more
than one thing to protect it.

That said, there are a number of systems at FB where deleting a crypto key
loses the linked data forever--but they still crunch the hard drives just to
be really sure. The drive crunching is an incredibly tiny expenditure compared
to the massive CapEx and OpEx required to build, stock, and run the
datacenters. It's worth it if only for the peace of mind.

~~~
nickpsecurity
Well, thanks for chiming in with the insider view.

"as long as you have absolute 100% confidence in the mechanism used to do
that"

It's true. These mechanisms fail far less often than shredders, though.
Ideally, the drive encryption would pull its KEYMAT from a dedicated system on
boot (via the kernel, the network, whatever). That system should be medium-to-
high assurance. An easy way is rad-hard ASICs (or antifuse FPGAs) with ECC RAM
and ChipKill that implement a safely-coded protocol engine for moving keys
around in memory. These run in a high-availability configuration with
electrical and optical isolation. A separate box manages things, does backups
of encrypted data, etc. A good HSM combo at Level 3 or 4 is already mostly
there, though. Remember, even Ross Anderson's people couldn't break IBM's HSM
outside of some stupid, unevaluated banking software. My ideal just assures
the protocol itself a bit more.
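
A minimal sketch of that boot-time key pull (everything here is hypothetical
-- `KEY_SERVER`, `fetch_keymat`, and the volume names are mine; a real
deployment would authenticate against an HSM or hardened key appliance over an
isolated link):

```python
# Hypothetical boot-time key fetch: KEYMAT lives only on the key server
# and in the host's RAM; it never touches the host's own disk.
KEY_SERVER = {"disk7": bytes(range(32))}  # stands in for the HA key appliance

def fetch_keymat(volume: str) -> bytes:
    # A real design would make an authenticated request to the appliance here.
    return KEY_SERVER[volume]

class UnlockedVolume:
    def __init__(self, volume: str):
        self._keymat = fetch_keymat(volume)  # held in memory only

    def lock(self) -> None:
        self._keymat = None  # dropped on shutdown; nothing persists locally

vol = UnlockedVolume("disk7")
print(vol._keymat is not None)  # True while mounted
vol.lock()
print(vol._keymat)  # None
```

The design property being illustrated: a powered-off or stolen host holds no
key material at all, so its drives are just ciphertext.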

"I feel that user data is one of those areas where it's ok to do more than one
thing to protect the data."

It's fine, except to environmentalists, to do it as an _extra_ on top of
crypto for added assurance. By itself, crushing is insufficient, given the
pieces might still be recoverable with just how much data they cram into tiny
spaces. That's why DOD/NSA standards were to suck the magnetism out of the
platter with qualified degaussers and then destroy it. Crypto followed by
destruction can't be directly compared, but it should make recovery hard too.

"there are a number of systems at FB where deleting a crypto key loses the
linked data forever"

Great that they do. Thanks for telling me.

"The drive crunching is an incredibly tiny expenditure compared to the massive
CapEx and OpEx required to build, stock, and run the datacenters."

I believe that. What groups like Facebook pull off in datacenter hardware,
software, and administration continues to amaze me.

------
1_2__3
"Marketing photos released by Facebook's CEO" is a much more accurate
headline; this is clickbait.

~~~
nzjrs
But then he wouldn't succeed in cultivating the nickname he always wanted!

Good old Zuck!

------
okket
The colour and settings remind me of the movie "Brazil".

------
tonylemesmer
Interesting that it's called "Pinnacle Sweden AB" on Gmaps [0]

[0]
[https://www.google.co.uk/maps/place/Pinnacle+Sweden+AB/@65.6...](https://www.google.co.uk/maps/place/Pinnacle+Sweden+AB/@65.6159578,22.1099815,1301m/data=!3m1!1e3!4m8!1m2!2m1!1sfacebook,+Lule%C3%A5!3m4!1s0x0:0xf6ef64b232313a3d!8m2!3d65.6193632!4d22.114234)

------
zelias
The aesthetic feels like it was meant to be a Bond villain's Arctic lair

~~~
draugadrotten
nah, a much better 'villain' data center actually exists in Stockholm.

[https://www.wired.com/2012/11/bahnhof/](https://www.wired.com/2012/11/bahnhof/)

------
diyseguy
Interesting that they aren't using commodity hardware. Is that a myth then?
The farms of cheap linux boxes?

~~~
jerdfelt
These are farms of cheap Linux boxes.

They go out of their way to design and manufacture their own motherboards and
equipment[1] to reduce their capital and operational expenses.

While the motherboard and enclosures are probably custom made, the components
inside (CPU, memory, etc) are all "commodity".

[1] [http://www.opencompute.org/](http://www.opencompute.org/)

------
gonvaled
Is this how we are finally going to melt the Arctic?

~~~
kaybe
I wonder whether they are using the heat for surrounding buildings. Otherwise
it would just be wasted, which is sad.

~~~
SamBam
Put one rack of servers in every house in Sweden. Each one warms the house a
little bit.

Of course, you lose out on all the other advantages...

------
pimlottc
Great photos but the default resolution is frustratingly small. Here are the
links to the fullsized versions:
[https://tctechcrunch2011.files.wordpress.com/2016/09/1446837...](https://tctechcrunch2011.files.wordpress.com/2016/09/14468376_10103136675908131_1017204796411632535_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1446872...](https://tctechcrunch2011.files.wordpress.com/2016/09/14468726_10103136675758431_9005661492814123754_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1450070...](https://tctechcrunch2011.files.wordpress.com/2016/09/14500704_10103136676077791_481490702728193485_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1442533...](https://tctechcrunch2011.files.wordpress.com/2016/09/14425336_10103136675873201_6012411229515107747_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1446840...](https://tctechcrunch2011.files.wordpress.com/2016/09/14468401_10103136676062821_2921648280975177438_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1452461...](https://tctechcrunch2011.files.wordpress.com/2016/09/14524617_10103136676432081_4482533052638067228_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1443508...](https://tctechcrunch2011.files.wordpress.com/2016/09/14435088_10103136675558831_8236639323516846744_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1448048...](https://tctechcrunch2011.files.wordpress.com/2016/09/14480486_10103136676292361_3173932299854703497_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1448073...](https://tctechcrunch2011.files.wordpress.com/2016/09/14480737_10103136675768411_5425692135538561053_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1442542...](https://tctechcrunch2011.files.wordpress.com/2016/09/14425428_10103136676242461_4852340690901958969_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1452503...](https://tctechcrunch2011.files.wordpress.com/2016/09/14525039_10103136675753441_1884992352847746370_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1452440...](https://tctechcrunch2011.files.wordpress.com/2016/09/14524406_10103136675893161_8227743870037033580_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1442545...](https://tctechcrunch2011.files.wordpress.com/2016/09/14425453_10103136681347231_6250889468166724265_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1452496...](https://tctechcrunch2011.files.wordpress.com/2016/09/14524966_10103136675563821_5871425778878901352_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1450042...](https://tctechcrunch2011.files.wordpress.com/2016/09/14500420_10103136676172601_5172988181273031856_o.jpg)
[https://tctechcrunch2011.files.wordpress.com/2016/09/1443528...](https://tctechcrunch2011.files.wordpress.com/2016/09/14435283_10103136675997951_5698881809545986046_o.jpg)

~~~
tempodox
Thx. The slideshow presentation is also crap because the banner ad on top
makes the whole page jump around.

------
type0
This is from 2014, on Swedish radio news (Google-translated):

[https://translate.google.com/translate?sl=sv&tl=en&u=http%3A...](https://translate.google.com/translate?sl=sv&tl=en&u=http%3A%2F%2Fsverigesradio.se%2Fsida%2Fartikel.aspx%3Fprogramid%3D98%26artikel%3D5803221)

Government millions to Facebook's new data center in Luleå

Scroll down to read it; basically, taxpayers' money went into Zuckerberg's
pocket.

~~~
blackoil
And Zuckerberg's money went to taxpayers in the form of taxes, salaries, tax
on salaries, and spending from salaries, so a profitable deal overall.

~~~
type0
> so a profitable deal overall.

I don't think that Facebook is good for the openness of the web and/or society
as a whole; the amount of power without oversight that this Zuck has is simply
scary.

Edit: here's some food for thought-
[https://en.wikipedia.org/wiki/Criticism_of_Facebook](https://en.wikipedia.org/wiki/Criticism_of_Facebook)

~~~
rbanffy
Weren't you talking about investing public money?

~~~
type0
Also, I'm all for it when taxpayers' money goes to new, struggling but
promising startups; FB is not one of them. I can't really see why they would
need governmental financial help other than to support corrupt politicians.

~~~
rbanffy
Is the question about what returns on the investment or about who the
government invests in?

------
iamleppert
I've always wondered this: why don't we just scale up some data storage
device? Like create a massive hard drive, 10 feet across? It has to be more
efficient than tons of tiny little drives.

~~~
nickff
From what I understand, there are a few issues, including:

1) The head has to move further to read the next sector of interest, which is
particularly problematic for fragmented data.

2) It is more difficult to manufacture high data density (AKA high-precision)
disks in large formats, as some surface defects are cumulative, and get worse
as the disk gets larger.

3) Manufacturing defects, which occur pseudo-randomly, increase in proportion
to the surface area of the disk, so the reject rate increases as the square of
the radius.

4) Smaller drives can be spun much faster, allowing for higher data rates, as
the centripetal accelerations in the disk are proportional to the square of
the radius (and I believe the stresses are proportional to the cube of
radius).

For these reasons and many others, HDDs have been moving to smaller and
smaller form factors.

------
chatmasta
Do people think Facebook will ever enter the cloud hosting market?

~~~
flyt
It seems unlikely considering their acquisition and later shutdown of Parse.

~~~
tempodox
I do wonder what that was good for. Parse looked like an interesting idea -
before the acquisition, anyway.

