Hacker News

So every cluster machine has 40Gbit Ethernet(?) - does anyone else do that?

Looking at Table 2 http://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p183....

Some perhaps, but it is likely broken down into 4x10G. A 16x40G ASIC, with twelve of the ports split into 4x10G, would get you 48 host ports and 4x40G uplinks.
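As a sanity check on those numbers (this assumes the common top-of-rack layout where twelve of the sixteen 40G ports are split into 4x10G host links, which is an illustrative assumption, not anything from the paper):

```python
# Hypothetical port budget for a 16-port 40G switch ASIC:
# 12 ports split into 4x10G for hosts, 4 ports left at 40G for uplinks.
ports_total = 16
uplink_ports = 4
host_facing_ports = ports_total - uplink_ports  # 12

host_links = host_facing_ports * 4      # each 40G port splits into 4x10G
downlink_gbps = host_links * 10         # 48 x 10G = 480G toward hosts
uplink_gbps = uplink_ports * 40         # 4 x 40G = 160G toward the fabric

print(host_links)                       # 48
print(downlink_gbps / uplink_gbps)      # 3.0 (i.e. 3:1 oversubscription)
```

So a single such ASIC gives you a 48-host rack at 3:1 oversubscription, which is why 4x10G per host is the more plausible reading than a dedicated 40G port each.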

IBM has had 4x DDR (20Gb) and QDR (40Gb) Infiniband as an option on their blades and frame nodes for about seven years.

I used to work on a program that used IBM blades with 4x DDR as an MPI processing cluster. The cluster was significantly smaller than what Google is discussing, though.

Yes, this. Infiniband has scaled up significantly since then as well. I believe you can do 4x25 (100Gb) today.

Having this background knowledge gives you a much better idea of why the interconnect is going to be such a big problem going forward. When I was working with QDR IB 5 years ago, it didn't matter if they had faster cards - the PCI bus at the time couldn't support anything faster. So you were literally riding the edge of the I/O capability all the way through the stack.
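A rough back-of-the-envelope for why the QDR-era card saturated the host bus (assuming the card sat in a PCIe Gen2 x8 slot, which was typical for HCAs of that generation):

```python
# QDR InfiniBand 4x: 4 lanes at 10 Gb/s signaling, 8b/10b encoded,
# so only 8 of every 10 bits on the wire carry data.
qdr_data_gbps = 4 * 10 * (8 / 10)     # 32 Gb/s usable

# PCIe Gen2 x8: 8 lanes at 5 GT/s, also 8b/10b encoded.
pcie2_x8_gbps = 8 * 5 * (8 / 10)      # 32 Gb/s usable

print(qdr_data_gbps, pcie2_x8_gbps)   # 32.0 32.0
```

The link and the slot come out at the same 32 Gb/s of usable bandwidth, so a faster card would simply have moved the bottleneck onto the bus.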

Perhaps not related, but I remember the LHC's raw data rate is about a PB/s, and they use ASIC hardware to filter it.

This is different to the sort of thing Google is doing in the data centre. Most of the PBs/sec don't really see the light of day since you may have a multi-megabyte "image" being captured at something like 40MHz, but practically much of the image is zeros for any given capture. So zero-suppression in the first instance already brings the data down to much more manageable numbers, before they hit off-the-shelf computing hardware.

(At least, that was the case a few years ago. I don't know how much it has changed, but I would be surprised if they had totally overhauled things.)
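To get a feel for the scale being described above (the event size and the post-suppression occupancy here are illustrative assumptions, not CERN's exact figures):

```python
collision_rate_hz = 40e6      # LHC bunch crossings, ~40 MHz
raw_event_bytes = 25e6        # assume a ~25 MB full detector readout

raw_rate = collision_rate_hz * raw_event_bytes
print(raw_rate / 1e15)        # 1.0 -> ~1 PB/s before any filtering

# Zero suppression: keep only channels above threshold.
# Assume ~2% of channels fire in a typical crossing.
suppressed = raw_rate * 0.02
print(suppressed / 1e12)      # 20.0 -> ~20 TB/s, still needs hardware triggers
```

Even with aggressive zero suppression, the rate is far beyond what commodity networking can move, which is why the first stages of filtering live in custom hardware close to the detector.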

It is not uncommon for storage nodes in distributed storage systems, especially when they're stuffed with SSDs.

4x10G per node is not totally uncommon from what I've read. Easier to do in smaller clusters though.

