Don't throw away your old PC–it makes a better NAS than anything you can buy (howtogeek.com)
121 points by makerdiety 23 hours ago | 132 comments




I appreciate the message of this article. I've played with half a dozen types of home NAS / RAID / storage solutions over the decades.

The best way I can describe it is:

There are people who just want to use a car to get from A to B; there are those who enjoy the act of driving, maybe take it to the track on a lapping day; and there are those who enjoy having a shell of a car in the garage and working on it. There's of course plenty of overlap in that Venn diagram :-).

My approach / suggestion - Understand what type you are in relation to any given technology vs. what the author's perspective is.

I will never resent the time (oh God, so much time!) I've spent in the past mucking with homelabs and storage systems. Good memories and tons of learning! Today I have family and kids and just need my storage to work. I'm in a different Venn circle than the author - sure, I have the knowledge and experience and could conceivably save a few bucks (eh, not as much of a given as articles make it seem;), as long as I value my time appropriately low and don't mind the necessary upkeep and the potential scheduled and unscheduled "maintenance windows" for my non-techie users.

But I must admit I'm in the turn-key solution phase of my life and have cheerfully enjoyed a big-name NAS over the last 5 years or so :).

The catch with old computers repurposed as a NAS is the often increased space, power, and setup/patching/maintenance requirements, weighed against (hopefully) some learning experience and a sense of control.


> But I must admit I'm in the turn-key solution phase of my life and have cheerfully enjoyed a big-name NAS over the last 5 years or so :).

You know, I thought I was too, so I threw in the towel and migrated one of my NAS boxes to TrueNAS, since it's supposed to be one of those "turn-key solutions that doesn't require maintenance", and everything got slower and harder to maintain, and it even managed to somehow screw up one of my old disks when I added it to my pool.

The next step after that was to migrate to NixOS and bite the bullet to ensure the stuff actually works. I'd love to just give someone money and not have to care, but it seems the motto of "If you want something done correctly, you have to do it yourself" lives deep in me, and I just cannot stomach losing the data on my NAS, so it ends up being really hard to trust any of those paid-for solutions when they're so crap.


I wouldn't call TrueNAS, or anything where you're installing an OS on custom hardware, "turn-key". That's saved for the Synologys and UGREENs and Ubiquitis of the world.

You can purchase TrueNAS hardware + software pre-configured. It is not clear what the individual you were responding to was doing, but I have personally experienced many off-the-shelf, supposedly ready-to-go IT solutions that require as much tweaking and admin time as a custom solution. But different folks have different skill sets, too.

The problem with Synology type NAS is that they still treat you like a product. If you go this way, you have to accept the limitations. Or you have to do everything yourself.

Are people doing more than serving SMB shares with their NAS’s? I feel like I’m missing out on something.

Depending on how you build it, you could run homeassistant next to your smb, which lends itself to all sorts of add-ons such as calibre-web for displaying eBooks and synchronizing progress.

Of course, gitea and surroundings, or similar ci/cd can be a fun thing to dabble with if you aren't totally over that from work.

Another fun idea is to run the rapidly developing immich as a photo storage solution. But in general, the best inspiration is the awesome-selfhosted list.


Running a home server seems relatively popular for all kinds of things. Search term "homelab" brings up a culture of people who seem largely IT-adjacent, prefer retired DC equipment, experiment with network configurations as a means of professional development and insist on running everything in VMs. Search term "self-hosted", on the other hand, seems to skew towards an enterprise of saturating a Raspberry Pi's CPU with half-hearted and unmaintained Python clones of popular SaaS products. In my experience — with both hardware and software vendoring — there is a bounty of reasonable options somewhere in between the two.

People want all kinds of things besides literal SMB shares:

- Other network protocols (NFS, ftp, sftp, S3)

- Apps that need bulk storage (e.g., Plex, Immich)

- Syncthing node

- SSH support (for some backup tools, for rsync, etc)

- You're already running a tiny Linux box in your home, so maybe also Pihole / VPN server / host your blog?

You've got compute attached to storage, and people find lots of ways to use that. Synology even has an app store.


I'm running TrueNAS Scale on my old i7 3770 with 16GB of DDR3.

Obviously got a bunch of datasets just for storage, one for Time Machine backups over the network, and then dedicated ones for apps.

I'm using it for almost all my self-hosted apps.

Home Assistant, Plex, Calibre, Immich, Paperless NGX, Code Server, Pi-Hole, Syncthing and a few others.

I've got Tailscale on it and I'm using a convenience package called caddy-reverse-proxy-cloudflare to make my apps available on subdomains of my personal domain (which is on Cloudflare) by just adding labels to the Docker containers.

And since I'm putting the Tailscale address as the DNS entry on Cloudflare, they can only be accessed by my devices when they're connected to Tailscale.

I think at this point what's amazing is the ease with which I can deploy new apps if I need something or want to try something.

I can have Claude whip up a docker compose and deploy it with Dockge.
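As an example, a hypothetical compose file for one more app, using the caddy-docker-proxy style label convention described above (the app, domain, and port here are made-up placeholders, not my actual config):

    # compose.yaml - deploy via Dockge or `docker compose up -d`
    services:
      freshrss:
        image: freshrss/freshrss:latest
        restart: unless-stopped
        labels:
          # picked up by the Caddy proxy container, which generates the vhost
          caddy: rss.example.com
          caddy.reverse_proxy: "{{upstreams 80}}"

Point the subdomain's DNS record at the Tailscale address and the proxy handles the rest.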


I just retired my 3770 server last month; it was a good system.

I still have one powering a firewall. The only pressure to replace it is power consumption.

I use mine for a ton:

- Home Assistant

- GitHub backups

- Self-hosting personal projects

- File sync

- golink service

- freshrss RSS reader

- Media server

- Alternative frontends for reddit/youtube

- GitHub Actions runners

- Coder instance

- Game servers (Minecraft, Factorio)

Admittedly, this is more of a project for fun than for the end result. You could achieve all of the above by paying for services or doing something else.

https://github.com/shepherdjerred/homelab/tree/main/src/cdk8...


I personally don't get what people are serving with a home NAS. Movies/music/family photos are all I can think of, personally... and those don't seem that compelling to me compared to cloud.

> and those don't seem that compelling to me compared to cloud

I tend to be cloud-antagonistic because I value control more than ease.

Some of that is practical due to living on the Gulf coast where local infra can disappear for a week+ at a time.

Past that, I find that cloud environments have earned some mistrust because internal integrity is at risk from external pressures (shareholders, governments, other bad actors). Safeguarding from that means local storage.

To be fair to my perspective, much of my day job is restoring functionality, lost due to the endless stream of anti-user decisions by corps (and sometimes govs).


Another use case is hobby photography. Video storage (e.x. drone footage), or keeping a big pile of RAW photos. The cloud stuff becomes impractical quickly.

Also ebooks and software installers, but those and movies/music are my main categories.

Cloud costs would be... exorbitant. 19 TB and I'm nowhere near done ripping my movies. Dropbox would be $96/month, Backblaze $114/month, and OneDrive won't let me buy that much capacity.


And you can buy those disks for… $300 or $400 apiece, I guess? A onetime purchase.

I host mine locally to back up the cloud, or in case Google just screws my account one day.

Any substantial movie/series collection can be well over a TB and thus not cost-efficient to host in the cloud.

I've been running a server with multiple TB of storage for many years and have been using an old PC in a full tower case for the purpose. I keep thinking about replacing the hardware, but it just never seems worth the money spent although it'd reduce the power usage.

I have it sharing data mainly via SSHFS and NFS (a bit of SMB for the wife's windows laptop and phone). I run NextCloud and a few *arr services (for downloading Linux ISOs) in docker.

(Currently 45TB in use on my system)

Edit: as no-one is asking, I base my system on mergerfs which was inspired by this excellent site: https://perfectmediaserver.com/02-tech-stack/mergerfs/


Not much more, but the extra bits probably differ for different people.

My (Synology) NAS also serves as a Time Machine backup target and hosts an LDAP backend.


How does that work for you? Last I tried, any interruption during a remote Time Machine backup corrupted the entire encrypted archive, losing all backup history.

I'm hosting a couple of apps in Docker on mine. (Pihole, Jellyfin, Audiobookshelf, and Bitwarden.)

I run NFS and Postgres to enable multiple-machine video editing.

It's curious that you would choose NixOS for a system that "just works". As much as I like the core ideas of Nix(OS)—reproducibility, declarative configuration, snapshots and atomic upgrades/rollbacks—having used it for a few years on several machines, I've found it to be the opposite of that. It often requires manual intervention before an upgrade, since packages are frequently renamed and API changes are common. The Nix store caches a lot of data, which is good, but it also requires frequent garbage collection to recover space. The errors when something goes wrong are cryptic, and troubleshooting is an exercise in frustration. The documentation is some variation of confusing, sparse, outdated, or nonexistent. I'm sure that to a Nix veteran these might not be issues, but even after a few years of usage, I find it as hostile and impractical to use as on the first day. Using it for a server would be unthinkable for me.

For my personal NAS machine, I've used a Debian server with SnapRAID and mergerfs for nearly a decade now, using a combination of old and new HDDs. Debian is rock-solid, and I've gone through a couple of major version upgrades without issues. This setup is flexible, robust, easy/cheap to expand, and requires practically zero maintenance. I could automate the SnapRAID sync and "scrub", but I like doing it manually. Best of all, it's conceptually and technically simple to understand, and doesn't rely on black magic at the filesystem level. All my drives are encrypted with LUKS and use standard ext4. SnapRAID is great, since if one data drive fails, I don't lose access to the entire array. I've yet to experience a drive failure, though, so I haven't actually tested that in practice.
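For the curious, the core of the setup is just a small config plus two commands. A rough sketch (mount points and paths are examples, not my actual layout):

    # /etc/snapraid.conf
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2

    # run manually (or from cron, if you prefer)
    snapraid sync    # recompute parity after files change
    snapraid scrub   # verify data against checksums to catch bitrot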

So I would recommend this approach if you want something simple, mostly maintenance-free, while remaining fully in control.


You only really need to deal with breaking APIs biannually, though. I have really gotten use out of being able to quickly recover when my boot disk fails and instantly having the same machine up and running again.

Another way to put it - My home lab has production and non-production environments.

Non-production is my kubernetes cluster running all the various websites, AI workflows, and other cool tools i love playing with.

Production is everything in between my wife typing in google.com and google; or between my kids and their favorite shows on Jellyfin.

You can guess which one has the managed solutions, and which one has my admittedly-reliable-but-still-requires-technical-expertise-to-fix-when-down unmanaged solutions.


I personally think big-box computer retailers that build custom turn-key computers (e.g. Microcenter) should get into the NAS game by partnering with Unraid and Fractal. It's as turnkey as any commercial NAS I've ever used but comes with way more flexibility and future proofing, and the ability for users to get hyper-technical if they want and tweak everything in the system.

It's wild how much more cost-effective this would be than pretty much any commercial NAS offering. It's ridiculous when you consider total system lifecycle cost (with how easy it is to upgrade Unraid storage pools).

Looking right now, my local Microcenter builds essentially three things: desktop PCs, some kind of "studio" PC, and "Racing Simulators". Turnkey NASes would move a lot of inventory, I'd wager.


I think the Terramaster NASes are about as close to this as you can get, they even have an internal USB header that seems purpose-added for the Unraid boot disk.

That said, I prefer straight Debian to Unraid. I feel Unraid saves you a weekend on the command line setting it up the first time (nothing wrong with that!), but after playing with the trial I just went back to Debian; I didn't feel like there was $250 of value there for me ¯\_(ツ)_/¯. Almost everything on my server is in Linuxserver.io Docker containers anyway, and I greatly prefer just writing a Docker Compose file over clicking through a ton of GUI dropdowns. Once you're playing with anything beyond SMB shares, you're likely either technically savvy or blindly following a guide anyway, so running commands through ssh is actually easier to follow along with a guide than clicking in a UI, since you can just copy and paste. YMMV.


I think there is a slight modification to this, at least for me: there are tech-related things I want turnkey. I've used macOS for years because I want a Unix system with a decent GUI that I don't have to manage (the fact that the machines work well is a nice add-on). I've got a Synology; it works.

I don't have unlimited bandwidth or time and want to continue the tinkering phase on things that interest me rather than the tools that enable such.


> My approach / suggestion - Understand what type you are in relation to any given technology vs. what the author's perspective is.

Similarly, what I was once told when looking at private planes was "What's your mission?", and it's stuck with me ever since, even if I'm never gonna buy a plane.

One person's mission might be backing up their family photos while someone else's mission is a full *arr stack.


I'm not the relentless explorer and experimenter that you're sort of patronizing with this comment. I'm somebody who knows that you can put together a NAS with an old desktop somebody will give you for free, slap Debian Stable on it, RAID5 (4 or fewer) or RAID6 (5 or a few more) a bunch of drives together, and throw a Samba share on the network in less than a day (minus drive-clearing time for encryption.)

It is not some sort of learning and growing experience. The entirety of the maintenance on the first one I put together somewhere between 10-15 years ago is to apt-get update and dist-upgrade on it periodically, upgrade the OS to the latest stable whenever I get around to it, and when I log in and get a message that a disk is failing or failed, shut it down until I can buy a replacement. This happens once every 4 or 5 years.

The trick with big-name NAS is that they go out of business, change their terms, or install spyware on your computer and you end up involved in tons of drama over your own data. This guide is even a bit overblown. Just use MDADM.* It will always be there, it will always work, you can switch OSes or move the drives to another system and the new one will instantly understand your drives - they really become independent of the computer altogether. When it comes to encryption, all of the above goes for LUKS through cryptsetup. The box is really just a dumb box that serves shares, it's the drives that are smart.

I guess MDADM is a (short) learning experience, but it's not one that expires. LUKS through cryptsetup is also very little to learn (remember to write zeros to the drive after encrypting it), but it's something that turnkey solutions are likely to ignore, screw up, or lock you into something proprietary through. Instead of getting a big SSD for a boot drive, just use one of those tiny PCIe cards, as small and cheap as you can get it. If it dies, just buy another one, slap it in, install Debian, and you'll be running again in an hour.
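To make it concrete, the whole stack is roughly this (a sketch; device names are placeholders, adjust for your drives):

    # RAID6 across five drives
    mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]
    # layer LUKS on top, open it, then write zeros through the mapping
    cryptsetup luksFormat /dev/md0
    cryptsetup open /dev/md0 nas
    dd if=/dev/zero of=/dev/mapper/nas bs=1M status=progress
    # plain old ext4 on top, then share it out with Samba
    mkfs.ext4 /dev/mapper/nas
    mount /dev/mapper/nas /srv/storage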

With all this I'm not talking about a "homelab" or any sort of social club, just a computer that serves storage. The choice isn't between making it into a lifestyle/personality or subscribing to the managed experience. Somehow people always seem to make it into that.

tl;dr: use any old desktop, just use Debian Stable, MDADM, and cryptsetup. Put the OS on a 64G PCIe or even a thumb drive (whatever you have laying around.)

* Please don't use ZFS, you don't need it and you don't understand it (if you do, ignore me), if somebody tells you your NAS needs 64G of RAM they are insane. All it's going to do is turn you into somebody who says that putting together a NAS is too hard and too expensive.


Md is great, but does not detect bit rot.

Consider mergerfs + snapraid.

I'd also argue that if you can set up md you can probably figure out how to set up ZFS. It looks scary on the RAM, because it uses "idle" RAM, but it will immediately release it when any other app needs it. People use ZFS on Raspberry Pis all the time without problems.
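And if the RAM use still bothers you, the ARC size is tunable anyway. For example, on Linux (the 4 GiB value is just an example):

    # cap the ZFS ARC at 4 GiB (value in bytes)
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max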


I've self-hosted web apps (typically IIS and SQL Server) for over 20 years.

While using desktops for this has sometimes been nice, the big things I want out of a server are

- low power usage when running 24/7

- reliable operation

- quiet operation

- performance but they don't need much

So I've had dual Xeon servers and 8-core Ryzen servers, but my favorites are a Minisforum with a mobile Ryzen quad-core and my UGREEN NAS. They check all the boxes for server / NAS. Plus both were under $300 before upgrades / storage drives.

Often my previous gaming desktop sells for a lot more than that ... I just sold my 4 year old video card for $220. Not sure what the rest of the machine will be used for, but it's not a good server because the 12-core CPU simply isn't power efficient enough.


> So I've had dual Xeon servers and 8-core Ryzen servers, but my favorites are a Minisforum with a mobile Ryzen quad-core and my UGREEN NAS.

I just ordered my first Minisforum box (MS-02 Ultra) to serve as my main storage NAS + homelab... first time ordering any of these Chinese boxes, but nothing else checked off all the requirements I had as well as it did. Hopefully it works out well for me.


Agree, and I wonder what the cost tradeoff is between using your old hardware and buying new power-efficient equipment. I even recently thought about buying a Mac mini to use as a home server.

I feel like ARM wins over even a mobile x86 chip here, right? Like a base Mac Mini sounds ideal.

I would imagine so depending on software / use case.

I run Windows Server 2022 to support IIS / SQL Server so it's not a perfect fit for me personally, but I suspect for many home servers or NAS setup it would work well.


> They check all the boxes for server / NAS

I was pretty disappointed to find out that none of the MS-01, MS-A1, or MS-A2 have an ATX power button header. This means you need to solder wires to the tiny tactile switch and connect those to something like a PiKVM to get true power control/status and IPMI/Redfish.

Just seems like something simple they could have easily included if they wanted to really target the homelab space


Wake-on-LAN not available to you?

The UGREEN NAS OS doesn't do encryption right?

Well that isn't on my checklist!

https://www.reddit.com/r/UgreenNASync/comments/1nr2j39/encry...

It's possible because you can install a different OS, TrueNAS, etc. but it's not something I personally worry about.


As a DXP2800 owner with TrueNAS: TrueNAS is so nice on the 2800 for my needs.

It's even relatively straightforward: start it up with a keyboard and video attached, enter the BIOS, and turn off the watchdog settings. I'd also recommend turning off the onboard eMMC altogether, given the following FYI.

Just FYI: If you blow away the UGREEN OS off the eMMC, restoring it requires opening a support ticket with them, and it's some weird dance to restore it because apparently they've locked down their 'custom' Debian just enough for 'their' hardware.

As per someone on a Facebook group, "you CANNOT share the file as their system logs once you restore your device and flags it as used. It will fail the hardware test if the firmware has been installed again".


Thanks, I've been tempted, but wasn't sure if they work 'local only' and without an app, and this sounds like it dials home? Anyway, it seems like the long wait list for suitable HDDs will save my money for now. Plus I was a little more tempted by their Arm offering.

Ah, no -- the "watchdog" here is basically a system hardware watchdog. The OS 'feeds' the watchdog in the BIOS every X amount of time, if the dog isn't 'fed' in Y time, the computer will fully reboot itself (assuming it crashed).

Because I've installed something that can't feed the watchdog, I just turn the watchdog off.

As for their OS install crap, I assume they're just trying to make sure you can't put it on your own hardware (sort of like how people pirate Synology DiskStation).


Find a cheap low power CPU to swap in. Or tune it in BIOS to use less power (some CPUs have an eco mode that make this easy).

Sell the gaming GPU and put in something that does video out, or use a CPU with an iGPU.

Big gaming cases with quiet fans are quiet.

Selling the GPU and tuning or swapping the CPU can put money in your pocket to pay for storage.


It is water-cooled and whisper quiet aside from the GPU fans. So yes, there are options... but right now selling the RAM alone might pay for a whole mini-server. I'm going to try to sell it locally to a PC gamer though, get some proper use out of it!

Server boards (like those with Xeons) won't have eco modes and aren't made for such a use case. Nor are server CPUs. Idle servers are wasted money, so they aren't designed for such use cases.

Big case also means big space.


This is literally impossible with most server grade stuff. It’ll never be as efficient as the low power modern stuff.

“I repurposed an old gaming PC with a Ryzen 1600x, 24GB of RAM, and an old GTX 1060 for my NAS since I had most of the parts already.”

Wouldn’t running something like this 24/7 cause a substantial energy consumption? Costs of electricity being one thing, carbon footprint another. Do we really want such a setup running in each household in addition to X other devices?


In addition to energy, the biggest reason I no longer use old desktops as servers is the space they take up. If you live in an apartment or condo and don't have extra rooms, even having a desktop tower sitting in a corner somewhere is a lot less visually appealing than a small NAS or mini-PC you can stick on a shelf somewhere.

Tastes differ. I personally find the 36U IBM rack in the corner of my apartment more visually appealing than some of my other furniture, and consolidating equipment in a rack with built-in PDUs makes it easier to run everything through a single UPS in an apartment where rewiring is not an option.

sure, but the CO2 emissions from a new machine would take about 10 years to offset, by which time this thinking has made you replace it... twice.

? It’s not like the machine would be custom built for him.

Are you saying it’s fine to drive a huge truck if you’re single and just need to get around the block to buy a pack of eggs, just because the emissions are nothing compared to those required for making that smaller, more efficient car that you could buy instead?


If your only use for a vehicle is a weekly or even daily trip around the block to buy a pack of eggs, the best environmental choice is to use a vehicle that is already manufactured. If the only vehicle available to you is a semi truck, that’s the best choice. Even over a lifetime of daily trips, the difference in emissions between the semi truck and a golf cart won’t make up for the emissions of manufacturing the golf cart and transporting it to you.

Of course this is a contrived example that ignores the used vehicle market or the possibility of walking around the block.


no, they're saying the emissions needed to create that smaller, more efficient car may vastly exceed their car's emissions during its entire lifetime under their use. so it may be a net loss.

The break-even point for the small car vs truck is much lower, so there it makes a lot more sense to switch.

It’s actually fine to do that because people are allowed to make their own choices no matter how much you disagree with them.

Sheesh. The described "old gaming PC" is much more powerful than the machine I'm using to post this.

> Wouldn’t running something like this 24/7 cause a substantial energy consumption?

Obviously depends on the actual usage, and parent's specific setup, lots of motherboards/CPUs/GPUs/RAM allow you to tune the frequencies and allows you to downclock almost anything. Finally, we have no idea about the energy source in this case, could be they live in a country with lots of wind and solar power, if we're being charitable.


> could be they live in a country with lots of wind and solar power, if we're being charitable.

Because solar, wind and hydro have no impact on the environment at all. Or nuclear.

I wish people would understand that waste is waste. Even less waste is still waste.

(I don't argue for fossil fuels here, mind you.)

Plus, the countries have shared grids. Any kWh you use can't be used by someone else, so may come from coal when they do, for all you know. It's a false rationalization.


> Because solar, wind and hydro have no impact on the environment at all. Or nuclear.

> I wish people would understand that waste is waste. Even less waste is still waste.

So if I have 10 mining rigs connected to the state power grid, the source of that energy doesn't matter at all for the environment? If I use a contract that 100% guarantees it comes from solar, does it have the same environmental impact as a cheaper contract that guarantees 100% coal power?

I'm not sure if I misunderstand what you're saying, or you're misunderstanding what I said before, but something along the lines got lost in transmission I think.


> I repurposed an old gaming PC with a Ryzen 1600x, 24GB of RAM, and an old GTX 1060 for my NAS since I had most of the parts already

> I wish people would understand that waste is waste

I think the point is that the configuration from the post can easily run as low as maybe 30-40W on idle, but as high as a couple hundred depending on utilization. An off-the-shelf NAS probably spikes at most in the ~35W range, with idle/spindle-off utilization in the 10W range (I'm using my 4-bay Synology DS920+ as a reference). Normally the biggest contributor to NAS energy usage is the number of HDDs, so the more you add, the more it consumes, but in this configuration the CPU, the RAM, and the GPU are all "oversized" for the NAS purpose.

While reusing parts for longer helps a lot for carbon footprint of the material itself, running that machine 24/7/365 is definitely more CO2-heavy w.r.t. electricity usage than an off-the-shelf NAS. And additional entropy in the environment in the form of heat is still additional entropy, whether it comes from coal or solar panels.


I will sell my old desktop as a gaming PC and use the funds to offset the cost of a new NAS.

Humanity currently produces roughly 30,000 TWh of electricity per year, with roughly 60% of that from fossil fuels. You connect 10 mining rigs. There are two options for what happens to the world's power generation:

1. You affect the mix! Your rigs create new solar and decommission coal plants! The world is cleaner!

2. You claim a "clean slice" of the existing mix. You feel good because you use only solar, but MRI machines still use power, so their mix is now "dirtier" without changing the actual state of the world.

In real systems, it's probably a combination of the above. I assume our decisions only meaningfully matter by exerting market pressures over longer timescales.


Does your GTX 1060 help in any way for the NAS use case?

If you're running a media server (like Plex or Jellyfin) you can do hardware accelerated transcoding on the GPU.
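For example, a hypothetical Jellyfin container with the GPU passed through for NVENC (assumes the nvidia-container-toolkit is installed; the name and paths are placeholders):

    docker run -d --name jellyfin --gpus all \
      -p 8096:8096 -v /srv/media:/media \
      jellyfin/jellyfin

Then enable hardware acceleration in Jellyfin's playback settings.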

Reusing existing hardware is a great gameplan. Really happy with my build and glad I didn't go for an out-of-the-box one.

>In general, you want to get the fastest boot drive you can.

Pretty much all NAS-like operating systems run in memory, so in general you're better off running the OS from some shitty 128GB SATA SSD and using the NVMe for data/cache/similar, where it actually matters. Some OSes are even happy to use a USB stick, but that only works for an OS designed to accommodate this (Unraid, I think, does). Something like Proxmox would destroy the stick.

Also, on HDDs: worth reading up on SMR drives before buying. And, these days, consider an all-flash build if you don't have TBs of content.


> Something like Proxmox would destroy the stick.

Never used Proxmox myself, but is that the common issue of "logs written to flash consuming writes"? Or something else? If it's just that, it's probably a one-line config change to fix.

> And, these days, consider an all-flash build if you don't have TBs of content

Maybe we're thinking in different scales, but don't almost all NASes have more than 1TB of content? My own personal NAS currently has 16TB in total; I don't want to even imagine what that would cost if I went with SSDs instead of HDDs. I still have an SSD for caching, but the main data store in a NAS should most likely be HDDs unless you have so much money you just have to spend it.


> Maybe we're thinking in different scales, but don't almost all NASes have more than 1TB of content?

Depends on what you’re storing. With fast gigabit internet there just isn't much of a need to store, ahem, Linux ISOs locally anymore, as anything can be procured in a couple of minutes. Most people just aren't producing that much original data on their own either (exceptions exist, of course - people in the video-making space, etc.)

Plus it's not that expensive anymore. I've got around 6TB of 100% mirrored flash without even trying (was aiming for speed and redundancy). Most of it is used enterprise drives. Think I paid around $50 a TB.

Re Proxmox - some of their multi-node orchestration stuff is famous for chewing up drives at wild rates for some people. People losing 1% of SSD life every couple of days. Hasn't affected me, so I haven't looked into the details.


There are enough people out there (not me) that there's a market for all-SSD NASes.

Unraid weirdly requires booting off of a USB for the base OS. I think it's to manage licensing.

SSDs are generally expected to be used as write-through caches with the main disk pool. However, if you have a bunch you can add them to a ZFS array and it works pretty much flawlessly.


I've been running homebuilt NAS for a decade. My advice is going to irritate the purists:

* Don't use raid5. Use btrfs-raid1 or use mdraid10 with >=2 far-copies.

* Don't use raid6. Use btrfs-raid1c3 or use mdraid10 with >=3 far-copies.

* Don't use ZFS on Linux. If you really want ZFS, run FreeBSD.

The multiple copy formats outperform the parity formats on reads by a healthy margin, both in btrfs and in mdraid. They're also remarkably quieter in operation and when scrubbing, night and day, which matters to me since mine sits in a corner of my living room. When I switched from raid6 to 3-far-copy-mdraid10, the performance boost was nice, but I was completely flabbergasted by the difference in the noise level during scrubs.
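Concretely, the two layouts look something like this (device names are placeholders; four example drives):

    # btrfs with three copies of data and metadata
    mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sd[b-e]
    # or mdraid10 with 3 far copies
    mdadm --create /dev/md0 --level=10 --layout=f3 --raid-devices=4 /dev/sd[b-e]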

Yes, they're a bit less space efficient, but modern storage is so cheap it doesn't matter; I only store about 10TB of data on it.

I use btrfs: it's the most actively tested and developed filesystem in Linux today, by a very wide margin. The "best" filesystem is the one which is the most widely tested and developed, IMHO. If btrfs pissed in your cheerios ten years ago and you can't figure out how to get over it, use ext4 with metadata_csum enabled, I guess.

I use external USB enclosures, which is something a lot of people will say not to do. I've managed to get away with it for a long time, but btrfs is catching some extremely rare corruption on my current NAS, I suspect it's a firmware bug somehow corrupting USB3 transfer data but I haven't gotten to the bottom of it yet: https://lore.kernel.org/linux-btrfs/20251111170142.635908-1-...


I use mergerfs + snapraid on my HDDs for “cold” storage for the same reason: noise. Snapraid sync and scrub runs at 4am when I am not in the same room as the NAS.

The drives stay spun down 99% of the time, because I also use a ZFS mirrored pool on SSDs for “hot” files, although Btrfs could also work if you're opposed to ZFS because it's out of tree.

Basically using this idea, but with straight Debian instead of ProxMox: https://perfectmediaserver.com/05-advanced/combine-zfs-and-o...

I also use mergerfs 'ff' (first found) create order, and put the SSDs first in the ordered fstab list of the mergerfs mount point. This gives me tiered storage: newly created files and reads hit the SSDs first. I use a mover script that runs nightly with the SnapRAID sync/scrub to keep space on the SSDs open.
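The fstab line for that looks roughly like this (mount points are examples; the SSDs are listed first so 'ff' fills them before the HDDs):

    # /etc/fstab - SSD branches first, then HDDs
    /mnt/ssd1:/mnt/ssd2:/mnt/hdd1:/mnt/hdd2  /mnt/pool  fuse.mergerfs  category.create=ff,minfreespace=50G  0 0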

https://github.com/trapexit/mergerfs/blob/master/tools/merge...


Btrfs raid was also the one that had data loss bugs.

I have had zero issues running ZFS on Linux for the last 10 years. (Not saying there were no issues that have annoyed or even caused data loss.)

I was wondering what the parent's beef was with ZFS on Linux. I have a box I might change over (B-to-L) and I haven't come across any significant discontent.

No beef: I just simply don't run out-of-tree kernel code; I've been burned too many times. Linux ZFS is mostly used by hobbyists and tinkerers; it doesn't get anything close to the amount of real-world production testing and follow-up bugfixing on Linux that a real upstream filesystem like btrfs does today.

If ZFS ever goes upstream, I will certainly enjoy tinkering with it. But until it does, I just don't see the point, I build my own kernels and dealing with the external code isn't worth the trouble. There's already more than enough to tinker with :)

All my FreeBSD machines run ZFS, FWIW.


I've even been using ZFS on Linux with USB enclosures for 5+ years with no issues.

This is the first time I've ever had a problem with the USB enclosures. And it's fantastically rare, roughly one corrupt 512b block per TB of data written. With a btrfs-raid1 it's self-correcting on reads; if I didn't look at dmesg I'd never know.

I've figured out it only happens if I'm streaming data over the NIC at the same time as writing to the disks (while copying from one local volume to another), but that's all I really know right now. I seriously doubt it's a software bug.


At San Francisco electricity prices of ~$0.50/kWh, using an old gaming PC/workstation instead of a lower power platform will cost you hundreds of dollars per year in electricity. The cost of an N100-based NAS gets dwarfed by the electricity cost of reusing old hardware.
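Back of the envelope, assuming the old machine idles around 100W and an N100 build around 15W: 100W × 8,760h/yr ≈ 876kWh ≈ $438/yr at $0.50/kWh, versus ≈ 131kWh ≈ $66/yr for the N100. The roughly $370/yr difference pays for the new box in well under a year.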

But do you really need to keep it on 24/7? What about a wake-on-LAN solution?

You wouldn't want to consume your hard disk spin-up count every day.

Does WoL... actually work with non-industrial server and networking gear? Any time I've looked into it, it's seemed interminably finicky.

Anecdata: I use it every day to wake my Windows desktop PC

and to hibernate it: ssh -f desktop 'shutdown /h'
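The wake side is just a magic packet sent to the NIC's MAC address, e.g. with the wakeonlan tool (the MAC here is a placeholder):

    wakeonlan 00:11:22:33:44:55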


Better? No, absolutely not. Capable? Without a doubt. I have a multi-bay NAS and it's like 1/6th the size of my PC case. My NAS also makes removing and replacing drives trivial. There are a million guides online for my particular NAS already, and software written with it in mind. It also draws a lot less power than my gaming PC and runs a lot quieter.

It's difficult for me to accept it's better given all the above.


I've been building and running various home servers for years. Currently I have an eBay special: a FreeBSD quad-core Xeon (based on the desktop socket) with 64GB ECC and a cheap SAS/SATA card running two ZFS arrays.

On a side note: I hate web GUIs. I used to think they were the best thing since sliced bread, but the constant churn combined with endless menus and config options with zero hints or direct help links led me to hate them. The best part is that the documentation is always a version or two behind and doesn't match the latest and greatest furniture arrangement. Maybe that has improved, but I'd rather understand the tools themselves.


After a reckoning with bitrot, I would strongly recommend using something with ECC memory for a NAS. And a checksumming filesystem with periodic scrubbing that won't get corrupted on you silently.

Same, but I also discovered a wonderful bonus in the difference between True ECC DDR5 and just the on chip BS stuff.

ECC DDR5 boots insanely fast since the BIOS can quickly verify the tune passes. This is even true when doing your initial adjustment / verification of manufacturer spec.


> checksumming filesystem with periodic scrubbing

Do you know a system that does this? I'm looking for this too.


ZFS, Btrfs, or SnapRAID in a cron job (not a file system, but it accomplishes something similar).
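For example, crontab entries along these lines (schedules and pool/mount names are arbitrary examples):

    0 3 * * 0   zpool scrub tank              # ZFS: weekly scrub
    0 3 1 * *   btrfs scrub start /mnt/pool   # Btrfs: monthly scrub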

ZFS is the “gold standard” here


SnapRAID is awesome, it's been bulletproof in recovering multiple failed drives for me, but note that you have to have your sync and scrub operations appropriately configured to get bitrot protection.

I have had good experiences with SnapRAID as well. I use this script to run it (in a cron job), which is highly configurable:

https://github.com/auanasgheps/snapraid-aio-script


btrfs and zfs

Came here for this comment. I wouldn't run a NAS without ECC.

Adding to the chorus of responses here: I did what this article suggested for a time, but found a purpose-built NAS was way nicer than repurposing an old gaming PC.

Using an old gaming PC for a NAS is kind of like taking an old track car that you've stripped the interior out of, added a roll cage to, and welded the doors shut on, and using it as your kid's first car. Like, yeah, it will totally work, and they can impress all of their friends as they cosplay as the Dukes of Hazzard, but it's really not optimal for the task at hand.

I just upgraded my NAS setup to a Terramaster F4-425 Plus (running Debian) and it's great. The N150 CPU in it sips power, and the whole thing is tiny and easy to hide away in a media cabinet. One ultra-quiet Noctua fan is all that's needed to keep it cool. It's so nice to use the right tool for the job.

EDIT: I'd recommend all of these guides / articles, I basically cherry-picked what I liked from all of them and ended up with something I'm really happy with:

* https://perfectmediaserver.com

* https://github.com/trapexit/mergerfs/blob/master/mkdocs/docs...

* https://blog.muffn.io/posts/muffins-awesome-nas-stack/

* https://drfrankenstein.co.uk

* https://trash-guides.info


The first thing you should consider doing with your old devices is selling them or giving them away. This helps lower the need for manufacturing more hardware, it prevents the hardware from becoming e-waste in a drawer, and it puts pressure on the market to lower its prices. Sure, you can reuse one as a NAS, but someone probably needs it more.

Electronics have gotten so cheap recently that selling to strangers is rarely worth the effort. Then there's the question of what OS you're going to put on an old PC. And even if the recipients are, say, only using a browser and would be okay with Linux, modern browsers need at least 8GB of memory.

I know I am in the minority and my uses/needs/requirements are not average, but I am perfectly fine with running Xubuntu on the following hardware: 1) a 4GB 2011 ThinkPad with an HDD (yeah, really) and 2) a 4GB 2009 Phenom desktop (was Win10 until a month ago).

By fine I mean running all these at the same time: Firefox with several tabs, development tools, Blender and GIMP. All snappy and fast. Even the HDD in the laptop is only an annoyance during/after a cold boot; after that it makes no difference. I've been daily driving both for the past 8-15 years. The laptop sits at ~10-15W idle and the i5 in it is a workhorse if needed.

Of course there are uses for better hardware, I am not dismissing upgrades. But the whole modern hw/sw situation is a giant shipwreck and a huge waste of resources/energy. I've tried very expensive new laptops for work (look up "embodied energy"), and Windows 11 right-click takes half a second to respond and Unity3D can take several minutes to boot up. It's really sad.

edit: To be honest I have to add a counter-example: streaming >=1080p60 video from YT is kind of a no-no, but that's related to the first sentence of my post.


I am running Win 10 LTSC on an "HP 205 G3 All-in-One Desktop PC" with 4GB RAM. Not the best experience, but it plays YouTube and can output to HDMI.

I am not saying you are wrong in general.


> old gaming PC with a Ryzen 1600x, 24GB of RAM

"Old", right. That old PC I'm about to throw away has 2 GB of RAM.


I've yet to own a machine with 24G of RAM in my life.

I've been a computer geek for 30 years.


I've managed to make good use of 32GB of RAM. But I wonder, what does the NAS need 24GB RAM for?

I'm looking for the quietest 6 bay NAS possible.

I have a be quiet! case and six 30TB HDDs, and I plan to put Ubuntu with a Plex server on an NVMe SSD and do a ZFS 4+2.
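For reference, and assuming 4+2 means RAIDZ2 across the six drives, the pool creation would be roughly this (device names are placeholders):

    zpool create -o ashift=12 tank raidz2 /dev/sd[b-g]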

Can anyone point me to a better/quieter set-up? Thank you in advance.


My home server / NAS is essentially just my old gaming desktop + some extra hard drives. It runs Unraid with Nextcloud, Plex, and a few other services. It's great, and generally pretty low maintenance.

I'll also point out that there are a lot of folks out there who don't have very large demands when it comes to computing, and would be served perfectly well by a 5-10 year old system. Even low-end gaming (Fortnite, GTA V, Minecraft, Roblox, etc.) can run perfectly fine on a computer built with $300-400 of used parts.


Sure, if you're going to reuse something which would be thrown away or left to dust otherwise (foolish but I'd imagine someone does that).

But don't do this just so you can upgrade your current pc.

I'd vouch more for old laptops, which are generally not upgradeable, come with a built-in UPS, and, if you remove the screen, are as thin as a notebook and can handle low usage. Then you can connect a bunch of disks, either directly or via other interfaces, and you're golden.


Yes, it is a NAS, and it is cheap and convenient to repurpose hardware.

But for anything where your data is important, isn't ECC memory still critical for a NAS in this day and age?


Yes, and my desktops use ECC too for that reason. I only lack ECC in the places where it's really difficult to avoid that tremendous drawback.

E.g. a Steam Deck or a smartphone: both are relegated to being toy devices that are not for serious computing.


A Synology NAS is very low wattage. In a year, it saves enough electricity to pay for itself, compared to leaving my old PC on 24/7.

Old MacBook Pros are very quiet and could make good NAS machines. Unfortunately Apple doesn't support modern versions of macOS on them, and also doesn't offer any security patches. So the hardware is still quite capable, but the software is too insecure and unstable to really recommend old Macs as a NAS.

Depends on the person administering the machine. Linux is an option that can run a recent version with security updates.

I've used a HP EliteDesk 800 G4 SFF (with an i5-8500) as a NAS/home server for several years now. It's quiet, power efficient, has space for 2 HDDs plus additional nvme slots and regular PCI-e slots. These type of machines are cheap to get on eBay, I highly recommend getting one.

An old PC makes more noise than anything I can buy.

A modern fanless NUC clone with a low power consumption processor will eclipse any recycled PC for utility computing.

Unfortunately PCs have mechanical devices that give out after a few years. I am referring of course to fans. I use a Raspberry Pi 4 running Ubuntu and Samba as my NAS. It is cheap and reliable.

I do too, but I'm looking to get a proper solution soon. A Pi is a pretty lousy NAS. It can't even power two drives, so you can't have redundancy unless you get a powered USB hub. And even then, I used one of those for a while and the drive connected to it failed prematurely. I think maybe because the power supply wasn't stable.

I have a Pi4 running a RAID 1 NAS with two SSD drives and an externally powered USB hub. Unfortunately, it crashes every 6 months or so and needs a power cycle. I haven't been able to track down why, but I also suspect a power supply issue.

Initially I naively tried to run the two drives right off the USB3 ports in the Pi, and that basically crashed within a day - but that is of course because I was exceeding the power draw. An external hub and supply helped, but didn't fully fix the issue.


I've had more SD cards die on me than fans. I don't think any have died in the past five years, even.

Is there a particular brand you buy? Mine always fail after about 5 years… and I try a new brand each time. Not cheap fans, either; usually $15-20 per 120mm unit.

Noctua, but $20 might not be enough for the cheapest one depending on where you live.

I'm not buying anything else, and I'm also swapping out any non-Noctua fan in my parts when possible (e.g. I bought a Scythe cooler due to 'interesting' dimensional constraints and swapped its fan for a Noctua one.)


I always buy the cheapest PWM fans available in a nearby store (so usually Arctic) and I've never had a fan fail on me in my life.

They almost never run 100%, though, and I have a recurring task set up to clean dust outta my filters, computers and servers.


OT, but what do you use to manage recurring tasks? I haven't found any solution that I love.

I tried using different SD cards with the RPi but kept having issues with a broken filesystem a few months later; it was probably caused by a bad power supply and electrical surges.

You don't HAVE to boot RPi4+ from an SD card. RPi4 and RPi5 can boot from an external SSD just fine. I don't recall the last time I used an SD card in an RPi but it must have been years.

Yeah, that's what I have done, but it becomes bulky, which doesn't always fit the project.

I can’t remember replacing fans because they stopped spinning but I have EOLed them because the bearings went bad and they started to screech.

Don't use an SD card, then. It's that simple.

I have fans from the early 00s.

Also a fan is like $10?

Things which are more vital than that are the disks, power supply, and RAM.


I've always found NAS interesting but have never had a personal use for a large amount of storage.

All the music and videos I watch are through streaming. I don't have a personal business or anything that requires more than 1 TB.


While there are use cases for a NAS, generally, if you have a desktop PC it's far better to put the hard drives in it rather than set up a second computer you have to turn on and run too. Putting the storage in the computer where you'll use it means it'll be much faster, much cheaper, incomparably more reliable, with a more natural UI, and it'll use less electricity than running two computers.

Now if your NAS use case is streaming media files to multiple devices (TV set top boxes, etc), sure, NAS makes sense if the NAS you build is very low idle power. But if you just need the storage for actual computing it is a waste of time and money.


Why do you think it'd be more reliable? That's one of the main advantages of a NAS

It's pretty simple: two computers have twice the parts, and twice the parts means more chances for something to die. But it goes beyond that too: far less software stack complexity (the big one), no flaky network link, no complex formatting that cannot be recovered with common tools, etc.

KISS.




