
Reusing existing hardware is a great game plan. Really happy with my build and glad I didn't go for an out-of-the-box unit.

>In general, you want to get the fastest boot drive you can.

Pretty much all NAS-like operating systems run in memory, so in general you're better off running the OS from some shitty 128GB SATA SSD and using the NVMe for data/cache/similar, where it actually matters. Some OSes are even happy to run from a USB stick, but that only works for ones designed to accommodate it (Unraid does, I think). Something like Proxmox would destroy the stick.

Also, on HDDs - it's worth reading up on SMR drives before buying. And these days it's worth considering an all-flash build if you don't have TBs of content.





> Something like Proxmox would destroy the stick.

Never used Proxmox myself, but is that the common issue of logs being written to flash and eating up write endurance? Or something else? If it's just logging, it's probably a one-line config change to fix.
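If it is just logging (an assumption on my part - Proxmox's cluster services get blamed for a lot of the writes too), the usual fix on any systemd-based distro is to keep the journal in RAM, something like:

  # /etc/systemd/journald.conf
  [Journal]
  Storage=volatile     # keep the journal in /run (RAM) only
  RuntimeMaxUse=64M    # cap how much RAM it can use

then restart systemd-journald. The trade-off is you lose logs across reboots.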

> And these days it's worth considering an all-flash build if you don't have TBs of content.

Maybe we're thinking at different scales, but don't almost all NASes have more than 1TB of content? My own personal NAS currently has 16TB in total; I don't want to even imagine what the cost of that would be if I went with SSDs instead of HDDs. I still have an SSD for caching, but the main data store in a NAS should most likely be HDDs unless you have so much money you just have to spend it.


> Maybe we're thinking at different scales, but don't almost all NASes have more than 1TB of content?

Depends on what you're storing. With fast gigabit internet there just isn't much need to store ahem Linux ISOs locally anymore, since anything can be procured in a couple of minutes. Most people just aren't producing that much original data on their own either (exceptions exist of course - people in the video-making space, etc.).

Plus it's not that expensive anymore. I've got around 6TB of 100% mirrored flash without even trying (was aiming for speed and redundancy). Most of it is used enterprise drives. Think I paid around $50 a TB.
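So assuming that $50/TB applied to the raw drives, the 6TB mirror is roughly 12TB of flash, call it $600 or so all in.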

Re Proxmox - some of its multi-node orchestration stuff is famous for chewing up drives at wild rates for some people, with reports of losing 1% of SSD life every couple of days. Hasn't affected me, so I haven't looked into the details.
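If you want to keep an eye on it yourself, SMART exposes a wear counter - the NVMe "Percentage Used" field is standard, while SATA attribute names vary by vendor (the ones below are just common examples):

  # NVMe: check "Percentage Used" in the health section
  smartctl -a /dev/nvme0

  # SATA SSD: look for attributes like Wear_Leveling_Count or Media_Wearout_Indicator
  smartctl -A /dev/sda

Note the value every few days and you'll see quickly whether your setup is in the "wild rates" category.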


There are enough people out there (not me) that there's a market for all-SSD NASes.

Unraid weirdly requires booting off a USB stick for the base OS. I think it's to manage licensing.

SSDs are generally expected to be used as write caches in front of the main disk pool. However, if you have a bunch, you can add them to a ZFS array and it works pretty much flawlessly.
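If you do go the all-SSD ZFS route there's nothing special about it - a mirrored pool is a one-liner (pool and device names below are placeholders, not a recommendation):

  # two-way mirror of two SSDs
  zpool create -o ashift=12 flash mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B

  # or use a spare SSD as a read cache (L2ARC) for an existing HDD pool
  zpool add tank cache /dev/disk/by-id/ssd-C

The only SSD-specific bit to watch is that ashift matches the drive's sector size.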



