
There are several interesting "dead" comments on this thread from people who say they have specific knowledge:

secretamznsqrl 20 minutes ago [dead]

  Here is a Glacier S3 rack:
  10GbE ToR
  3 x Servers
  2 x 4U JBOD per server that contains around 90 disks or so
  Disks are WD Green 4TB.
  Roughly 2 PB of raw disk capacity per rack
  The magic, as with all AWS stuff, is in the software.

jeffers_hanging 1 hour ago [dead]

  I worked in AWS. The OP flatters AWS, arguing that they take
  care to make money and assuming that they are developing
  advanced technologies. That's not how Amazon works.
  Glacier is S3, with added code that makes S3 wait. That
  is all that was needed. A second or third iteration could
  be something else, but this is what Glacier is now.



The math on the first "dead" post only works out to 1 PB per rack, unless there is somehow a way to jam 90 disks into 4U. The Backblaze and Supermicro 45-disk 4U chassis suggest that would be pretty tough. Besides, at 24U of JBOD plus 3 servers and a ToR switch, there is still a good bit of rack left.
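
Spelling out the arithmetic for both readings of "2 x 4U JBOD per server that contains around 90 disks or so":

  # if "90 disks" means per pair of JBODs, i.e. ~45 per 4U chassis:
  3 * 2 * 45 * 4   # servers * JBODs * disks * TB = 1080 TB, roughly 1 PB

  # if "90 disks" means per single 4U JBOD:
  3 * 2 * 90 * 4   # = 2160 TB, roughly 2 PB, matching the claimed figure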


http://www.supermicro.com/products/chassis/4U/847/SC847DE26-... 90 disks in a 4U chassis. If you don't care about power / hot swap (I doubt there's a datacentre monkey running around swapping disks for Glacier, or for anything else really -- Google anecdotally was repairing servers by removing two racks' worth of them at a time), then instead of piling the drives four deep you can pile them six deep (36" server depth / 6" disk depth), reaching 135 or so disks.
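
Rough numbers on that packing claim, taking the stated dimensions at face value (a 3.5" drive is roughly 6" long):

  36 / 6       # about 6 drives deep in a 36" chassis
  90 * 6 / 4   # 90 drives at four deep scales to ~135 at six deep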


This is awesome, wow. I wonder how you cool something like that without most of it being powered down.


The disk density, at least, isn't entirely out of line with what Amazon have publicly stated. They have said their density is higher than Quanta's, and Quanta make a 60-disk 4U chassis, the M4600H: http://www.quantaqct.com/en/01_product/02_detail.php?mid=29&...
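
For a rough comparison with the 2 PB claim, assuming ~40U of a rack given over to 4U disk chassis with 4 TB drives (the 40U figure is an assumption, not from the thread):

  10 * 60 * 4   # ten Quanta-density chassis = 2400 TB raw
  10 * 90 * 4   # ten 90-disk chassis = 3600 TB raw

so "denser than Quanta" leaves plenty of headroom for ~2 PB per rack even after giving some of the rack back to servers and the ToR switch.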


2 PB per rack sounds about right for a very tightly packed rack. Most DCs can't supply power and cooling for that kind of density, though.
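
Back-of-the-envelope on the power, assuming ~540 drives per rack (3 x 2 x 90 from the quoted spec) and a typical 5-8 W per spinning 3.5" drive -- the per-drive wattage is an assumption, not something stated in the thread:

  540 * 5 / 1000   # ~2.7 kW for the disks alone at the low end
  540 * 8 / 1000   # ~4.3 kW at the high end, before servers, HBAs and fans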


The power requirements would be precisely why a Glacier HDD rack has only a fraction of its HDDs powered on at any given time. This also explains the 3-5 hour latency: there is a queue of jobs, and yours has to wait for other jobs to finish (e.g. reading gigabytes of data) before your drive can be powered on.

It all makes sense.
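
A toy sketch of that queue-plus-power-budget idea, purely illustrative -- the budget, function names and drive IDs below are all made up, not Glacier internals:

  # Toy model: only POWER_BUDGET drives may spin at once, so restore
  # jobs queue up and wait their turn -- hence hours of latency.
  from collections import deque

  POWER_BUDGET = 4          # assumed limit on drives spinning per enclosure
  pending = deque()         # FIFO of (drive_id, object_key) restore jobs
  spinning = set()          # drives currently powered on

  def submit(drive_id, object_key):
      pending.append((drive_id, object_key))

  def tick(finished_drives=()):
      """One scheduling pass: spin down finished drives, start queued jobs."""
      for d in finished_drives:
          spinning.discard(d)
      started = []
      while pending:
          drive_id, key = pending[0]
          if drive_id not in spinning and len(spinning) >= POWER_BUDGET:
              break         # head of the queue waits for a drive to spin down
          spinning.add(drive_id)
          pending.popleft()
          started.append((drive_id, key))
      return started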


That density is doable if the drives spend most of their time powered down, which would fit with Glacier's restore delay: you wait for the disks you need to come around in the power cycle.


I never worked on Glacier, but I can second that at launch it was commonly discussed internally as being an S3 front-end.



