
I really would love to help out, but I don't have 650TB of storage lying around :(

The logistics of this archive are quite crazy; most 2-4U JBODs I've worked with hold something like 24 or 45 SFF SAS disks.

Standard size (unless things have changed) for 10k SFF SAS disks seems to be about 1.2TB, so you'd need around 544 of them to build a raidz big enough. So we're talking 12 4U JBODs, well over a full rack.
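The back-of-envelope math here can be sketched quickly (the 650TB and 1.2TB figures come from this comment; parity drives and spares would push the count higher still):

```python
import math

TARGET_TB = 650   # archive size quoted in the thread
DRIVE_TB = 1.2    # typical 10k SFF SAS drive
JBOD_SLOTS = 45   # common 4U JBOD slot count

# Raw drive count before any raidz parity is added.
drives = math.ceil(TARGET_TB / DRIVE_TB)
print(drives)  # 542; at 45 slots per 4U JBOD that's already about a dozen chassis
```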

I guess I can just hope some rich techie with a volcano lair / private datacenter somewhere is keeping a copy..





24TB drives are quite available, $300 on newegg.

Buy a Data60 (60-disk chassis) and add 60 drives. Buy a 1U server (2 for redundancy). I'd recommend 5 stripes of 11 drives (55 total) with 5 global spares. Use RAIDz3, so 8 data disks per 11 drives.

Total storage should be around 8 * 24 * 5 = 960TB, likely 10% less because drives are marketed in 10^12 bytes instead of 2^40, and another 10% because ZFS doesn't like to get very full. So something like 777TB usable, which easily fits 650TB.
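A quick sanity check of that arithmetic (the ~777TB figure applies both 10% haircuts to the decimal number; carrying the 10^12 vs 2^40 conversion exactly lands a bit higher, so the estimate is comfortably conservative either way):

```python
# Layout from the comment above: 5 RAIDz3 vdevs of 11 drives (8 data + 3 parity),
# 24TB drives.
DATA_PER_VDEV = 8
VDEVS = 5
DRIVE_TB = 24

raw_tb = DATA_PER_VDEV * VDEVS * DRIVE_TB   # 960 "marketing" TB
binary_tb = raw_tb * 10**12 / 2**40         # ~873 TiB after the 10^12 vs 2^40 gap
usable_tb = binary_tb * 0.9                 # keep ~10% free so ZFS stays happy
print(round(raw_tb), round(binary_tb), round(usable_tb))  # 960 873 786
```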

I'd recommend a pair of 2TB NVMe with a high DWPD as a cache.

The disks will cost $18k, the Data60 is 4U, and a server to connect it is 1U. If you want more space, upgrade to 30TB drives ($550 each) or buy another Data60 full of drives.


There are also enterprise SSDs now that pack more than 200TB into a single NVMe drive. $$$$$, though (for the foreseeable future?)

Kioxia LC9 SSD Hits 245.76TB of Capacity in a Single Drive - https://news.ycombinator.com/item?id=44643038 (22 days ago, 7 comments)

-> https://www.servethehome.com/kioxia-lc9-ssd-hits-245-76tb-of...

SanDisk's "reply": Sandisk unveils 256 TB SSD for AI workloads, shipping in 2026 - https://news.ycombinator.com/item?id=44823148 (10 days ago, no discussion)

-> https://blocksandfiles.com/2025/08/05/sandisk-pre-announces-...


> Enter how many TBs you can help seed, and we’ll give you a list of torrents that need the most seeding! The list is somewhat random every time, so two people generating at the same time will still cover different parts of the collection.

I don't think you need 650TB!


Why limit yourself to the SFF?

45 Drives is the company that builds the hard drive pods used by Backblaze, and they offer a 4U, 60x3.5 inch drive array. It has an advertised capacity of 1.44 PB, which would be 60 24TB drives configured without redundancy.

https://www.45drives.com/products/storage-server-storinator-...


Seagate have an Exos M in 36TB for about £450, so 24 of those could do it. Three vdevs with one parity drive each? Call the project £13k?
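The proposed layout checks out against the 650TB target (a rough sketch using the figures in this comment, ignoring ZFS overhead and the decimal/binary gap):

```python
DRIVE_TB = 36         # Seagate Exos M capacity quoted above
DRIVES = 24
VDEVS = 3
PARITY_PER_VDEV = 1   # one parity drive per vdev, i.e. raidz1

data_drives = DRIVES - VDEVS * PARITY_PER_VDEV   # 21 data drives
usable_tb = data_drives * DRIVE_TB               # before filesystem overhead
print(data_drives, usable_tb)  # 21 756 -> clears 650TB with headroom
```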

Not exactly a production grade setup, but it'd do the job and you'll see fewer failures each year than in 544 10k SAS drives.


> 10k sff sas disks

There's your problem. Ordinary consumer LFF SATA disks go up to 30TB-ish now, though that may not be the most cost-effective size (or it may be, when you consider the cost of the drive bays as well).


Well, I don't think you have to dedicate 650TB or nothing :P

A full rack or more sounds like a lot, but I don't know much about hardware, so I'll take your word for it.


At work I have about 1PB per full rack, and that's with plain HP/Dell servers with 12 3.5" hard disks in each.

Maximising storage isn't the purpose of this setup; much denser configurations are possible, as others have commented.



