
Thanks for the great suggestions.

We're considering the FatTwin systems so we get both a lot of CPU and some disk. GitLab.com is pretty CPU-heavy because of Ruby and the CI runners that we might migrate over in the future, so we wanted the maximum CPU per U.

The 2028u has 2.5" drives. For that I only see 2TB drives on http://www.supermicro.com/support/resources/HDD.cfm for the SS2028U-TR4+. How do you suggest getting to 4TB?




Also, whatever you do, don't buy all one kind of disk. Disks will be the component that dies first and most frequently. Buy from different manufacturers and through different vendors to try to get disks from at least a few different batches. That way you don't get hit by some batch of parts being out of spec by 5% instead of 2% and all of them failing within a year, at the same time.

If you did somehow manage to pick the perfect disk, then sure, having everything from a single batch would be best, since that would give you the longest MTBF. But how sure are you that you'll pick the perfect batch by blind luck?
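The correlated-failure argument above can be sketched with a toy Monte Carlo simulation. All the numbers here (lifetimes, spreads, the 30-day window, the "3 failures" threshold) are made-up illustrative values, not real drive statistics; the point is only that a single shared batch clusters failures in time:

```python
import random

def simulate(num_disks, num_batches, trials=10_000, seed=1):
    """Toy model: each batch has its own mean lifetime (a 'bad' batch
    fails early), and disks draw lifetimes around their batch mean.
    Returns the fraction of trials in which 3+ disks die within 30
    days of each other -- roughly, more concurrent failures than a
    double-parity array can absorb. All parameters are hypothetical."""
    rng = random.Random(seed)
    correlated = 0
    for _ in range(trials):
        # Per-batch mean lifetime in days; batch quality varies a lot.
        batch_means = [rng.gauss(1500, 400) for _ in range(num_batches)]
        # Disks are assigned round-robin to batches and vary a little
        # around their batch's mean.
        lifetimes = sorted(
            rng.gauss(batch_means[d % num_batches], 60)
            for d in range(num_disks)
        )
        # Did any 3 consecutive failures land within a 30-day window?
        if any(lifetimes[i + 2] - lifetimes[i] < 30
               for i in range(num_disks - 2)):
            correlated += 1
    return correlated / trials

print("one batch:   ", simulate(12, 1))
print("four batches:", simulate(12, 4))
```

With a single batch, all twelve lifetimes cluster tightly around one mean, so triple failures inside a short window are common; spreading the same disks across four batches pushes that probability way down, even though the per-disk failure model is identical.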


I had this problem with the Supermicro SATA DOMs. The whole batch had problems.

That said, I bought the same 6TB HGST disk for two years.


As long as you're not buying all the disks at once, sticking with one manufacturer and brand should be fine. If you're buying 25% of your total inventory every year, the purchases spread out to just a few percent per month.

But when you're buying 100% of your disk inventory at once there's a serious "all eggs in one basket" risk.
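The staggered-purchase math above is simple to make concrete. The fleet size here is a hypothetical number, not GitLab's actual inventory:

```python
# Hypothetical fleet to illustrate the staggered-purchase arithmetic.
fleet_size = 400          # total disks in service (made-up figure)
annual_refresh = 0.25     # replace 25% of the fleet each year
monthly_share = annual_refresh / 12   # fraction bought each month

print(f"disks bought per month: {fleet_size * monthly_share:.1f} "
      f"({monthly_share:.1%} of the fleet)")
# About 2% of the fleet per month, so any single bad batch is small.
```

Buying monthly caps your exposure to any one batch at roughly 2% of the fleet; buying 100% up front makes the whole fleet one batch-sized basket.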


Sorry, I was confused by the part numbers. I was thinking of the 6028U-based systems that have 12x 3.5" drives. Those are what I used for the OSD nodes in my Ceph deployment.

As for CPU density, I still feel like you're going to need more spindles to get the I/O you're looking for.



