FreeNAS and TrueNAS Are Unifying (ixsystems.com)
221 points by whalesalad on March 6, 2020 | 107 comments

Excellent news. This is something far more open source projects should do. All this fragmentation is good for nothing, it is far better to have one really good piece of software than 20 half baked ones all doing roughly 80% of the whole.

> it is far better to have one really good piece of software than 20 half baked ones all doing roughly 80% of the whole.

Except it wasn't 20. It was 2: one for free and one for the enterprise.

Right, but both were being managed by a single group of developers - that's just inefficient and prone to cause bugs.

> Right, but both were being managed by a single group of developers - that's just inefficient and prone to cause bugs.

I totally agree with you, but that's a different statement than the one in the comment I originally replied to.

OpenWRT and its LEDE fork eventually merged back together for similar reasons. I couldn't be happier -- the result is amazing.

egcs and gcc "merged" in that the modern gcc is the descendant of the egcs codebase.

Rick Moen wrote a good essay titled "Fear of Forking"


It's a defense of forking, but it contains a few capsule histories of important forks of the past (from the perspective of when it was written) which are interesting.

That’s a great essay, alright – and still stands the test of time. I remember Rick Moen’s contributions to mailing lists in the early 2000s being insightful and informative (for me).

When I first started using GNU/Linux in 1999 (Red Hat 6), I found it confusing to understand what all the fuss was about with libc/glibc and egcs/gcc. I think the differences had been resolved by that stage but all the related documentation referred to the split which made learning this strange new system very confusing (at the time, I knew what a compiler did but I didn’t know about anything about architectures other than x86 and I didn’t know why a C library was so important).

Rick includes a reference to the Emacs split of the early 90s. Jamie Zawinski's reflection on this fork is quite interesting: https://anonym.to/?https://www.jwz.org/doc/lemacs.html (using the anonymiser as Jamie doesn't seem to like news.ycombinator.com as a referrer).

Any recommendations for a recent 5G router that can be flashed to OpenWrt? The table of hardware [1] has many routers, but many have notes that 5G is not working.

Amazon [2] has some micro travel routers only - so what are good options under 100$?

[1] https://openwrt.org/toh/views/toh_available_864?dataflt%5BDe...

[2] https://www.amazon.com/s?k=openwrt

5 GHz band wifi? Or 5G wireless?

>it is far better to have one really good piece of software than 20 half baked ones all doing roughly 80% of the whole.

I have to disagree on this one. Multiple diverging projects create _stability_, whereas a single project creates _fragility_. One bad step by the only project, and everyone suffers.

Try to provide a package of a GUI app "for Linux". After you support CentOS, Fedora, Debian, Ubuntu, Gentoo and Arch, you'll maintain a mess of brittle packaging scripts for years, breaking at each release and some updates. Just making the first release for each is a challenge. Even AppImages, the least well integrated system, will fail at some point.

You'll see how stable diversity is.

Then package for Windows XP. It probably works on Windows Vista, 7, 8 and 10.

Now I do see a lot of value in diversity. Resilience, ethics, competition, collaboration...

Stability isn't one.

If you're actually interested in packaging a generic Linux GUI app, check out snapd. It's originally an Ubuntu project, but has been pretty widely adopted at this point. I believe every distro you mentioned can run snaps, except maybe Gentoo.


Snapd has its own set of problems.

It is sandboxed, so your use case must be compatible.

It uses AppArmor; what if the system uses SELinux?

It's sandboxed in a certain way, your app must support it.

It's very recent; only the latest distros have a snapd release that works well.

It's slower to execute.

Snapd assumes systemd.

But wait, did you say snapd? I thought you said Flatpak. Or AppImage.

Anyway, despite all that, it is easier to write a snap than a deb+rpm+whatever. I actually like this project a lot.

And again it proves my point: to get stability, we use snapd, a tool to compensate for diversity.

Do developers not know how to statically compile software anymore?

Also, you shouldn't be writing your own packaging scripts: leave those to distribution packagers. There are thousands of people who work on packaging these things, and the user of the distribution is much safer if they don't touch software distributed by random people.

This stuff is completely stable, just don't break the assumptions of the system without knowing what you're doing.

Depending on distributors means that your users will only use ages-old versions of your software, so you receive support requests for stuff probably already fixed/changed. Sending users back to distros also takes time.

And for static linking: that is kind of happening with those modern Snap and Flatpak and whatever systems for handling applications, but it's bad for security. I want to update my system zlib in case there is an issue, instead of depending on all applications consuming compressed untrusted streams to update their packages in time. And I certainly don't want each little tool bundling its own Qt.

Oh, but you still need to make a deb, rpm, nix, whatnot then compose with the os tooling, conventions and expectations. And use those to provide your conf/update/failure, menu integration, permission system, logging, init system, notifications, etc.

Or provide an app image and ignore most integration.

This is my point exactly: to get stability, you remove diversity.

Static linking, one packaging system, and you don't have to care about how diverse the universe is.

But it also means you don't benefit from what those differences add: security updates, dependency graphs, automation, signing, jails, user documentation and training, OS integration, native window theming, etc.

Oh, but you still need to make a deb, rpm, nix, whatnot then compose with the os tooling, conventions and expectations. And use those to provide your conf/update/failure, menu integration, permission system, logging, init system, notifications, etc.

This shouldn't need to be done? The distribution's packagers handle most of this. Except for Nix, maybe? I hear they have a particularly fucked up ecosystem for packaging.


The chances your project matches the criteria to be included in any distro's repo are very low, and it's a lot of work and problems by itself.

They are not app stores you pay to get into. The gatekeepers have a very strict opinion on what to let in and how. And it's all done manually, and Fedora's policy is not Debian's is not Gentoo's.

Plus, statically linked projects are almost never accepted. Back to square one.

Besides, what if it's not free software ?

> The distribution's packagers handle most of this.

And often fuck it up and introduce bugs or even vulnerabilities, intentionally ignore the developer, or simply fail to update packages in a reasonable time frame.

Relying on free labor to package software for you is a terrible terrible idea that helps keep Linux Desktop a shitshow.

Disagree strongly and as a user I feel distributions and package maintainers are a necessary defense against overly opinionated developers.

I'm glad there is a layer there that will patch and configure to better integrate into the system and in some (very rare) cases remove user hostile "functionality".

Sometimes it's better to trust the developers, not the packagers. Example is the Debian SSH vulnerability from 2008: https://www.debian.org/security/2008/dsa-1576

This was a bug introduced in packaging. See https://lwn.net/Articles/281436/ for more details.

I might agree with you in theory, but in the specific instance, there were never really two distinct projects. They were artificially maintained as two distinct projects to have an arbitrary “enterprise” project and the community-supported project.

The relationship between FreeNAS and TrueNAS is more like the difference between Fedora and RHEL. Both sponsored by the same company, but the later is the more mature, “enterprise” version.

So instead of having FreeNAS and TrueNAS we’re going to have “TrueNAS Core” and “TrueNAS Enterprise”.

It’s honestly not that big of a shift from a practical perspective.

> like the difference between Fedora and RHEL

That sounds more like RHEL vs CentOS, really. ...which is fitting, since those two have also been slowly coming closer and closer to each other.

> I have to disagree on this one. Multiple diverging projects create _Stability_, whereas a single project creates _fragility_. One bad step on the only project, and it will be all who suffer

Anti-fragility is a lot more complicated than just having a lot of implementations.

The move away from CVS and SVN to distributed revision control is one of the best things that ever happened to large open source projects.

The reason for this is that when it's very easy to lose contributors and users to forks, it enforces a lot of project management discipline on the part of the project leadership. Before, when you held all the keys to the castle and it was difficult to move away, it was very tempting for people to use their position to impose "political" restraints on other people.

And vastly reduced hosting costs, thanks to things like GitHub, GitLab, the spread of cloud providers and so on, make it cheaper now than ever before.

And these things make it easier to 'unfork' as well.

In this way we get the odd result that easy forking makes forking unnecessary.

And when there is a major dispute in a particular community, cheap and easy forking (and recombining) means that people can actually have competing governance models and see which approach is actually better. Rather than just fighting until everybody gets burned out and abandons the project.

Lede vs Openwrt is a good example of this.

Libre Office vs Open Office.

Gnome vs Unity.

These things exist more due to competing governance models than anything else.

So this can be summarized as saying "improvements in anti-fragility in modern large open source projects are due more to the fluidity with which projects can be managed, forked, and recombined than to the number of implementations users and contributors can choose from".

Ok, one or two. A couple. But not the tower of Babel that we have today. No need for a monoculture but no need for forks for forks' sake either. Or complete re-implementations of stuff that already works because it is cool to be maintainer of a project, rather than collaborator. In the end it is all about having something that is viable and that can compete on merits with closed source.

> In the end it is all about having something that is viable and that can compete on merits with closed source.

I thought it was about scratching your own itch? "I want to be a maintainer" kinda sounds like an itch.

Even funnier, forks over social identity/ideological differences

Not even software exists in a vacuum; it's influenced by real world events. There are lots of people who see some of these changes as a threat to the existence of the software they use and want to secure it themselves.

That depends on how many people are involved. Enough people work on desktops that more is better. However for more obscure things merging projects can be the difference between two projects that are almost dead and one project that shows signs of life.

Oh man, the classic monopoly vs oligopoly debate.

yes, but ... one piece does not (and should not) fit all. And after a while things get bloated and ... this comes to mind: https://xkcd.com/927/


Looking a bit deeper (eg https://www.reddit.com/r/freenas/comments/fdx8rj/freenas_and...):

> Some of the features of TrueNAS Enterprise include support for dual-controller/HA systems, native Fibre Channel support, integrated chassis monitoring and management, as well as certifications with platforms like VMWare, Veeam, and Citrix.

The "native Fibre Channel support" bit sounds a little worrying. Isn't that just the inclusion of appropriate drivers in the kernel config used to compile the image?

People do ask about that on the FreeNAS forums (Fibre Channel support), so it's not like there's no demand for FC from FreeNAS users.

Why is that worrying? They consider FC support to be an Enterprise feature which I'd say is a pretty fair assessment. They are in the business of making money and their way of doing it is by charging for certain features. If you don't want to pay, build your own freebsd nas from scratch and enable FC. I don't understand why that's concerning.

My point is there isn't an "enable FC" option in the base FreeNAS image. Otherwise people would be doing it already. :(

It sounds like FC is being "kept back" from people who want it, in more of an open-core approach.

All of the "Enterprise" features are being "kept back" from Core. That's kind of how tiered software packages work...

Ugh. It's not just "Enterprises" that use FC, though it's the most common case.

It's also in use by people that buy FC gear on Ebay, as they want some of the capabilities that FC offers for whatever reason.

If they've really decided to lock OSS customers out of using FC gear (which would be crappy), this definitely sounds like a move to "Open Core" or something like it and would be a shame. :/

Come on man, you can make that exact same argument for damn near every "Enterprise" feature locked away behind a paid tier in every piece of software.

Let us be clear:

They are already doing that and always have been.

This is just unifying the name and the source code.

Because FreeNAS supports fibre channel today.

Let's be clear - you can hack FreeNAS to add FC functionality, but it absolutely doesn't support it out of the box. I don't see why that would change just because the name changed from FreeNAS to TrueNAS Core. They obviously aren't using a kernel with FC support removed or it wouldn't just be a license key to move to TrueNAS enterprise, it would be a completely different image.

Ah, this is fair - "supports" is the wrong concept. I hadn't quite put 2 and 2 together with the fact that if the kernel features and client tools exist on TrueNAS, they'll also exist on FreeNAS (because it's the same image).

happy and long-time (home) FreeNAS user/admin here. last year i managed to convince our internal corp IT dept to not renew their overpriced yet inexplicably not that good Dell storage and go with purchasing some TrueNAS, and they have been very satisfied. great product, great support, great company. a really solid recommend.

Seriously? Can a FreeNAS setup actually support the kind of uptime and performance features of modern enterprise storage? I've never heard of an open solution supporting redundant controllers and multipath, for instance.

well, re: "enterprise storage", note that i did specify "internal corp IT dept".

so i guess it depends on what you mean by "enterprise"? and what that enterprise does?

we serve a decent amount of video content out (~35Gbps during peak) and we have a lot of video content stored and edited in-house. we made the decision to use TrueNAS for our _internal_ work video storage and sharing, but (not yet) for _public_ serving of video.

all the in-house edit masters, work files/partial renders etc are stored on the in-office TrueNAS systems, but the actual _serving_ of the final encoded HLS content out to customers is via in-DC clustered NetApp sitting behind a lot of striped SSDs (FS-Cache) serving out http(s). we have an autoencoding pipeline that picks up the finished masters from the TrueNAS, does the various encodes and throws them up on the NetApps for public serving once they're done.

i'm a longtime enterprise storage user (and specifically a big fan of NetApp), but i would very much like to see if its possible to transition out of such a giant dollar-suck as NetApp and give something like TrueNAS a chance at the frontend.

these baby steps (so far successful!) are our foray into these uncharted waters!

Yes. My employer uses TrueNAS and we have some setup with redundant controllers in an HA configuration as well as multipathing. The HA feature is considered enterprise so it won't be in the FreeNAS ... I mean TrueNAS Core edition.

Ahem, uh, Linux supports multipath?

Question to those who know:

I used FreeNAS and liked it, but my recent build is Unraid because I had 24 bays, and didn't want to buy all my drives at once, and afaik FreeNAS/ZFS is not super great with adding drives to the pool, especially if they are different drives. Unraid, though, is not great with drive failure and I'm starting to think I want to go back to FreeNAS, but I am torn:

1. I don't know how to move 80TB.

2. I still dont want to buy all my drives at once, is FreeNAS better about adding new drives these days?

With ZFS you add VDEVs not drives. Pools are collections of VDEVs.

Buy 8 drives, have a RAIDZ2. Fill. Buy 8 more, add as another RAIDZ2 VDEV. Capacity is the sum of both, it's one pool.
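A minimal sketch of that workflow (device names like `da0` are hypothetical placeholders; the pool name `tank` is just an example):

```shell
# Create a pool with a single 8-drive RAIDZ2 vdev.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Later, grow the pool by adding a second RAIDZ2 vdev of 8 new drives.
# Note: this is effectively permanent -- raidz vdevs cannot be removed.
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15

# Capacity is now the sum of both vdevs; writes are striped across them.
zpool list tank
```

This is why the usual advice is to size vdevs up front: you expand a pool a whole vdev at a time, not one drive at a time.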

Nas4Free user here. Home server, nothing fancy, but it has worked perfectly for a few years on an Atom CPU based board with 4GB RAM, which cannot be expanded further and is listed as the minimum required for ZFS. Would this hardware run FreeNAS/TrueNAS, or would its differences and additional features require beefier iron?

Current configuration is two ZFS RAID 1 pairs plus one additional disk for quick buffering. The system currently boots from a USB dongle; the board very conveniently has a USB receptacle directly soldered on, so it doesn't expose the flash key to the outside.

No problems in about 4 years, but some day, probably this year, I'll have to get new disks due either to aging or available space, then get the ball rolling and manually upgrade to XigmaNAS (automatic update from this NAS4Free version is risky), or move to FreeNAS/TrueNAS, or jump ship completely and get a Helios64 box plus OMV, since BSD support on ARM isn't there yet. Any comments?

ps. I'm more of a Linux guy; my BSD knowledge is limited to this NAS4Free installation and a pfSense firewall I ran ages ago on PCEngines embedded boards, which worked like a charm as well.

More RAM is better, it will probably work, but be slow. However the same can be said for your current configuration which you seem happy with.

Don't use a USB boot disk though. FreeNAS writes to the boot disk enough to destroy most USB drives in a few months. A high quality USB drive wouldn't have that problem, but they don't seem to exist.

You're right, but in my case it's the embedded "install" (actually a dd write to the USB flash device), which lets the system run entirely in RAM, so the boot disk is never written except when upgrading the system or when the user modifies then saves the configuration, which usually after the first install happens rarely. This makes the NAS less versatile as I can't install lots of extensions, but the included ones are more than enough for me. Yes, I know more RAM would help a lot; unfortunately I have no way to expand it, save for moving to bigger faster platforms which then would cost more in energy as I keep it on 24/7. The system however appears to use just about 50% of the available RAM and is very snappy for my purposes.

I've used my FreeNAS with the same USB drive for a few years now, so seems to be fine. Most activity is against the system dataset (which is on my RAID-Z), no?

When did you last update? If you have used FreeNAS for years you might be on an old version that almost never wrote to the USB and so you are fine. A couple years ago they made some changes and now FreeNAS writes to the USB often and as a result most USB drives last for only a few months.

Just as an additional data point...

I started out with FreeNAS ~18 months ago but I'm not sure at the moment which version that was (I could check later). About a year ago, I updated to 11.2-U2 and just two days ago I've updated to 11.3-U1.

Since the beginning, I've had the "freenas-boot" pool on a pair of USB3 flash drives -- I don't recall which manufacturer/model I'm using but I did specifically buy "higher end" flash drives to use for this purpose. I do a weekly scrub of the "freenas-boot" pool and, thus far, I've experienced no issues with them whatsoever.

FWIW, it's very rare for me to actually log in to the FreeNAS system to make any changes or perform any "administration". I set it up initially and, since then, I've mostly left it alone and it "just works".

I've kept up to date, last update was a month ago or so? It's now running FreeNAS-11.3-RELEASE.

Using the graphs I do see some hourly write activity to the USB device. But say it writes 10MB of data (the graph shows ~100kB) every hour for half a year; that's still just ~44GB, just over one drive write in my case.

So maybe some services cause it to write much more data?
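As a back-of-the-envelope check of that write volume (the 10 MB/hour figure is the commenter's own worst-case assumption, not a measured value):

```python
mb_per_hour = 10       # assumed hourly write volume to the boot USB
hours = 24 * 182       # roughly half a year of uptime
total_mb = mb_per_hour * hours
total_gib = total_mb / 1024

print(f"{total_mb} MB written, i.e. about {total_gib:.1f} GiB")
```

That works out to roughly one full-drive write for a typical 32GB stick over six months, which by itself shouldn't wear out flash; so if drives are dying in months, the effective write rate (or write amplification) must be much higher than this estimate.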

FWIW, the minimum requirements call for 8 GB of RAM, with 16 GB as the "minimum recommended".

Like so many other things, though, I'd say try it and see if it works well enough for you.

I’ve been using FreeNAS and it’s been great. There have been some small drawbacks but over all it’s been great. Hopefully this will improve stability.

I run FreeNAS virtualized on an ESXi host (passing a dedicated SAS card through to it); it's pretty solid and I'm very happy with it.

I've also been using FreeNAS and I'm fairly happy with it. Their botched version 10 was not fun, and lately I'm annoyed by having to recreate all my Warden jails.

I hope this transition doesn't ruin things.

I read about the botched update and it was the first thing that came to mind. Hopefully FreeNAS will have less of a transition than TrueNAS.

I'm considering switching from a QNAP NAS that reaches end of support to FreeNAS. It looks like a well thought out package. What are the drawbacks in your opinion?

Hardware support can be tricky and you need to make sure your mobo supports FreeBSD. The community has the angry sysadmin mentality, but they generally just want to help you build a robust system. Gotta do it their way. You can build your own, which is what I did: i3 / Supermicro mobo / ECC memory / 6 WD Red drives.

I’m not really sold on their jails (VMs). Personally I feel more comfortable on Linux but ZFS and the features that come out of the box with FreeNAS won me over. I’ve looked at Open Media Vault (Debian based) a little bit but not interested in moving data.

Linux supports ZFS, so I suppose you would not have to move data except for installing the OS on a different drive.

Opensuse with btrfs is what I use.

These NAS targeted distros are amazing. However, my needs are simple. I just don’t need 90% of the features they have.

Maybe I’m just a simpleton but I run an rsync script each night. The beauty is that if I accidentally delete something I can easily recover it and if a drive fails I lose less than one days worth of data. This trade-off is well worth it for me.
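One way such a nightly script might look (the `/data` and `/backup` paths are hypothetical; `--link-dest` makes each night a browsable snapshot while unchanged files are stored only once, as hard links):

```shell
#!/bin/sh
# Nightly rsync snapshot sketch: each run creates a dated directory,
# hard-linking unchanged files against the previous run, so accidental
# deletions and old versions remain recoverable. Paths are placeholders.
SRC=/data
DEST=/backup
TODAY=$(date +%F)

rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$TODAY/"

# Repoint "latest" at tonight's snapshot for the next run to link against.
ln -sfn "$DEST/$TODAY" "$DEST/latest"
```

Run it from cron once a night and prune old dated directories as space requires.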

If I had several drives I’d use SnapRAID to cut down on costs.

Oh and I’m lazy so I installed CentOS on it. In several years when CentOS reaches EOL I’ll just buy new hardware and install the newest version.

Synology type devices make me nervous. Krebs[1] recently did a piece on IoT gear being the new target for ransomware. I also wouldn’t feel comfortable exposing anything running on these devices to WAN.

[1] https://krebsonsecurity.com/2020/02/zyxel-fixes-0day-in-netw...

I've had FreeNAS running on a 2U Supermicro box for a year and a half or so (I ran FreeBSD on it prior to that). My needs are pretty simple and I don't need 90% of the features either. I only use iSCSI (for the VMware servers in my lab) and NFS (for pretty much everything else); occasionally I'll rsync my workstation's files to it for an "extra" backup.

I've got automated ZFS snapshots set up and when I finally replaced a failing drive -- just two days ago, I think -- it was a fairly simple matter. I updated it to the latest version of FreeNAS and that was simple and straightforward too.

FWIW, this box isn't exposed to the Internet. In fact, it doesn't even have a default route; besides, my router / firewall filters outgoing traffic too -- not just incoming.

I could have just used FreeBSD and configured the pieces I need instead of using FreeNAS but it's been quite stable and reliable and has mostly been a "set it up and forget it" experience.

I use CentOS 7 on a Dell server with a bunch of drives, and it works well with NFS. The nice thing about a vanilla distro for it is that you can do what need a lot more easily.

If I already know my way around FreeBSD and around a command line, why would I want to use FreeNAS instead of just sticking to vanilla FreeBSD?

Is the value added that it is easier to configure? Is there better default tuning for a NAS setup?

I've occasionally browsed the FreeNAS forums and they seem to have stridently opinionated people in them telling everyone exactly how things should and shouldn't be done. But I've never seen a simple clear explanation of why.

What I'm looking for is something like a table of items that says:

Here's something that FreeBSD does .... Here's how FreeNAS improves that thing.

FreeNAS is what experienced/wise long-time unix admins will setup at home for their giant anime SAN/NAS because:

- they've been around the block long enough to know that FreeNAS can do 95% of what they need better than they can do it, with far less chance of errors introduced, but with 95% less of their valuable time needed to do it

it's that simple honestly.

if you already know how to do setup/configure/tune/tweak all of the components inside FreeNAS specific to your environment, congrats, you're already a great unix admin. you've been in your career a while, and you're probably pretty well paid.

but why would you spend your valuable per$onal time to faff about setting up/configuring/tuning/tweaking what FreeNAS can do out of the box, unless you have an edge case scenario that FreeNAS doesn't cover or can't do well?

for the same reasons, again i must specify -- unless you have very specific edge case scenario(s)/situation(s) or wanted to use this as a learning exercise -- why would you:

1) hand-craft your own *BSD or Linux-based firewall/VPN/router, instead of using pfsense/iptraf/etc

2) hand-craft, build and create your own Wifi router OS image, instead of using OpenWRT

3) roll your own port monitoring and OS proc monitoring notification system, instead of using Nagios/BigBrother/etc

so to get back to your OG question of

> If I already know my way around FreeBSD and around a command line, why would I want to use FreeNAS instead of just sticking to vanilla FreeBSD?

because your time is valuable, and because FreeNAS can do 95% of what you need better than you can do it.

(edited for formatting/clarity)

> Looks at the contents of his FreeNAS.

Damn you got me.

This was pretty much the logic that I used. I was sick of having to worry about data integrity and backups, the amount of arcane knowledge required to do it correctly and safely is immense. FreeNAS has good defaults and good documentation so I got myself a FreeNAS Mini (eventually upgraded to the MiniXL because it is able to take better advantage of zfs).

Exactly, the less time you can spend setting up your giant anime nas means you have more time actually watching anime!

> giant anime SAN/NAS

How did you know???

There can't be two of us in the world with a personal SAN/NAS that isn't a giant anime collection. Since I know I'm the one with a personal NAS without such a collection it stands to reason that if you have a personal NAS you have it full of anime. QED

but of course the 2020 / "cloudy" way to run a giant anime NAS at home is not to run a NAS at home at all:

put your 50TB+ of anime up into Google Drive on a single _unlimited storage_ G suite account, "mount" that onto a Windows 10 VM using Google Drive Filestream, setup Plex on the W10 VM to point to Google Drive, and laugh all the way to the bank because you no longer have to deal with ... well, anything.

It turns your computer into a (tweakable) appliance. No need to remember the ashift when creating ZFS pools. No need to remember the samba config options and syntax. No need to remember which kernel flags to tune for ZFS. No need to fiddle with cron to set up periodic scrubbing.

It's not perfect but it's certainly a lot more "fire and forget" than the FreeBSD login prompt. A power user might find it a bit restricting, but if you just want a NAS it's quite nice.

Correct. Better tuning out of the box for NAS, a web GUI to manage everything, and opinionated sysadmin types on the forums with strict rules to follow. If it's not retired Xeons with ECC memory and fans that sound like hair dryers, you're not doing it their way. Once you get past the opinionated stuff it's not so bad, and they want to make sure your data won't get lost.

I'm not familiar with FreeBSD or its command line, and that makes FreeNAS a better fit. Setting up the ZFS pools, permissions, and sharing is easier and I'm not going to mess it up and lose data.

The built-in reporting, alerting, and monitoring all wrapped up in a nice web UI is what tipped the scales for me. I didn't have to do anything to get historical system metric graphs, system health alarms, or alerts by email.

I suppose I could have set up all these things myself (it is my line of work, after all), but it was nicer to not have to.

Probably not an answer you are looking for, but here are things that FreeNAS has and FreeBSD does not (except for a couple): https://www.freenas.org/about/screenshots/

But why rename FreeNAS to TrueNAS Core? It makes it confusing to search/google, and we'll now need to search for both FreeNAS and TrueNAS Core. I wish the name had remained FreeNAS, as it had been for the past 15 years of its history.

They explained it in the post.

Basically the commercial version was initially called "FreeNAS Pro" but because of the "Free" in the name it was not treated seriously in the enterprise, so it was renamed to "TrueNAS". So after the merge, they still don't want to use word "free" so they took the commercial name and added "Core" to it.

People are posting that the unification is great news for the consumer. I believe at best it is a neutral thing for the consumer; the unification is great for iXsystems, because it will reduce their overhead of maintaining two products separately.

Probably for marketing/sales reasons. The survival of both products depends on businesses purchasing TrueNAS, so increasing exposure to that project is good for business.

Or maybe the "core" version won't be free much longer?

Everything in this thread is discussed (in a few paragraphs) in the article.

now, the only question is: would they be any better than Synology for non-technical users?

truth be told, I consider myself highly technical, even with things like FreeNAS.

I went to check my custom built FreeNAS one day, and it's off, and it wouldn't properly boot so I needed to haul a monitor out to see what was going on...

I finally hit the age where I don't care to deal with shit like that anymore, and finally pulled the trigger on a synology. Couldn't be happier.

It was fun setting it up and configuring FreeNAS the way I wanted, but overall bleh because I didn't get the right HW (went dual core atom at the time, not beefy enough for plex re-encoding).

How does that help? Now if the Synology doesn't boot what do you do? Is the hope/assertion that the Synology is sufficiently less buggy that you don't mind being less able to fix it yourself?

Me too, this was the most annoying thing about FreeNAS. It just seemed to often not start up properly and require plugging in a monitor to see what's wrong. Often when I did plug in a monitor, it would boot fine, so I had no idea what the issue was.

Recently I replaced it with a barebones Arch install and it works just as well for home use, and starts fast and reliably. I think FreeNAS middleware is overly complex for home use if you are at all comfortable with administering things yourself (ZFS really makes it easy).

FreeNAS might make more complex things you'd do in an enterprise environment easier, but I think for anything you'd want to do at home, a basic Linux install would probably be easier. (eg. installing media server software, setting up a VPN server, even running a VM, etc.)
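For the curious: a "plain Linux + ZFS" home NAS really is just a handful of commands. A hedged sketch, assuming ZFS on Linux is installed; the disk paths, pool name, and dataset name here are placeholders, not anything from the thread:

```shell
# Create a two-disk mirrored pool (disk paths are placeholders).
zpool create tank mirror /dev/sdb /dev/sdc

# A dataset for media, with cheap transparent compression.
zfs create -o compression=lz4 tank/media

# Export it over NFS using ZFS's built-in share property.
zfs set sharenfs=on tank/media

# Periodic integrity check and a point-in-time snapshot.
zpool scrub tank
zfs snapshot tank/media@$(date +%F)
```

Snapshots and scrubs are the pieces that FreeNAS schedules for you; on a bare install you'd wire them up with cron or systemd timers.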

I ran into similar problems with my custom built FreeNAS too. It's still a great setup, but more work than I would like. When my server eventually dies, I'll probably replace it with one of the official FreeNAS Minis.


On the other hand maybe buying a consumer NAS can give you a false sense of security: https://kevq.uk/i-nearly-lost-all-of-my-data/

> [...] consumer NAS can give you a false sense of security

Your conclusion is not supported by the article at that URL.

What is described at that URL could have happened with a FreeNAS or TrueNAS or OpenMediaVault system, or just about any system. RAID is never a backup, and one backup is not enough.

Reading between the lines, the parent comment implies that a Synology feels worry-free compared to FreeNAS.

My point is that both options have similar chances of breaking unexpectedly.

It is also a fact that both meteorites and rain fall from the sky sometimes, but that doesn't mean the chance of a meteorite hitting my head is the same as that of a raindrop.

You're only going to get actual, reliable and useful numbers about failure rates between competing products by doing actual studies and analyzing the aggregate results. Anything else is meaningless. And as a buyer, I'm more worried about things like warranty and technical support quality, frankly, if I'm going to buy a prepackaged system anyway, which are easier to gauge from afar.

The board on the NAS died, and took out a locally attached USB backup. He had recently turned off offsite backup.

Instead of buying a replacement Synology and moving the drives (which are all fine), he dumped the data to various other devices using raw LVM on Linux, and restored 100% of the data.

This seems like it worked as intended to me (not counting the simultaneous loss of the USB stick and motherboard). Anyway, any NAS that contains important data should be backed up offsite.

> It’s clear that I not only need to replace my old solution, but I also need to come up with a more robust one too.

Is there anything better than RAID? That's what every file system seems to use, and it's honestly not very good. High-capacity arrays can take over 24 hours to rebuild, and during this time the risk of additional failures is even greater due to read errors. RAID also requires matching capacities for all storage devices, making it hard to expand the capacity of the whole array.

The only alternative I can think of is object storage which is not exactly made for home use.
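The 24-hour figure is easy to sanity-check: a full rebuild has to read/write the entire drive, so it's bounded by sustained sequential speed. A back-of-the-envelope calculation with illustrative numbers (16TB drive, ~150MB/s sustained; both are assumptions, not measurements):

```shell
# Rough rebuild-time estimate: capacity divided by sustained throughput.
capacity_tb=16                                      # drive size in TB (assumed)
speed_mbs=150                                       # sustained rebuild speed in MB/s (assumed)
seconds=$(( capacity_tb * 1000000 / speed_mbs ))    # TB -> MB, then divide by MB/s
hours=$(( seconds / 3600 ))
echo "${hours} hours"                               # prints "29 hours" with these inputs
```

Real rebuilds are usually slower still, since the array stays online and serving I/O while it resilvers.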

Seems like a false comparison. You're not comparing OSes; you're comparing an OS on your home-built hardware to custom hardware running that OS.

You could just buy a custom-built, FreeNAS-driven NAS, like the FreeNAS Mini for example.

Doubtful; non-technical users really aren't the target for things like FreeNAS/TrueNAS.

Agreed, if you run FreeNAS at home you are likely to be subscribed to /r/homelab. That’s not to say it’s impossible for a non-technical user to deploy and manage it, but it will have a learning curve consumer and prosumer gear doesn’t.

Where is the TrueNAS source repo? I can only find the FreeNAS repo.

From TFA

> FreeNAS and TrueNAS have been separate-but-related members of the #1 Open Source storage software family since 2012. FreeNAS is the free Open Source version with an expert community and has led the pursuit of innovations like Plugins and VMs. TrueNAS is the enterprise version for organizations of all sizes that need additional uptime and performance, as well as the enterprise-grade support necessary for critical data and applications.

> In the 11.3 release, FreeNAS and TrueNAS share over 95% of the same source code but are built into separate images, each with their own name. The Version 12.0 release will change this process by moving to one unified image with two different editions: a free, Open Source edition (this will never change!) and an enterprise edition.

> Both editions will have common Open Source code, unified documentation, and a shared product name.

So I suspect that the source code for TrueNAS is not available.

Ah, I see. I thought they were completely different products, but it sounds like TrueNAS is an enterprise version of FreeNAS and this unification seems like it is just deprecating the FreeNAS name.

The True, Free, NAS. The Free, True, NAS.

I use unraid. While not free, I think it’s the best home nas software.


Why do you "think it’s the best home nas software"?

Because it just works. If you want more advanced features and are fine with tinkering, unraid likely isn’t for you.

Does Unraid still not offer real-time RAID? From what I read before, it seemed similar to SnapRAID, which requires an externally invoked sync command. Is this still the case?

Yes, when you use a cache disk (or pool, since Unraid 6 apparently). Data is moved automatically at night. Cache pools are BTRFS mirrors, so they can now provide redundancy for the cache.

No, Unraid /does/ offer real-time RAID. The "sync" command you are talking about is related to the cache drives. You can set up a pool of SSD drives and tell different "shares" (think a folder that can span one or more drives based on rules you set) to use that pool (a BTRFS RAID-1-type pool, so 2x1TB SSDs = 1TB cache pool). The "sync" (called the "mover" in Unraid terms) is a process that runs on a schedule you set (most people run it at night) and moves data between your cache drives and your RAID array. The mover will not move files that still have an open handle on them (i.e. you are actively using the file).

You can specify for each "share" whether it should be No/Yes/Only/Prefer; an explanation of each is below (pulled from my Unraid install):

<Start copy>

* "No" prohibits new files and subdirectories from being written onto the Cache disk/pool.

* "Yes" indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the Cache disk/pool and onto the array.

* "Only" indicates that all new files and subdirectories must be written to the Cache disk/pool. If there is insufficient free space on the Cache disk/pool, create operations will fail with out of space status.

* "Prefer" indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the array and onto Cache disk/pool.

<End copy>

For me I just keep all my docker containers/VMs/docker config on my cache pool ("Only"). Everything else goes directly to the array ("No"). I do this because I gain nothing from having a new TV show sit on the cache for <24 hours. I am rarely going to watch something that fast and so it just fills up the cache drive for no good reason. Yes the array is slower to write to but I don't really mind that.

I am able to lose 1 cache drive (I have 2) and/or 1 data/parity drive without experiencing data loss. I currently have 1 SSD showing as disconnected and 1 disk drive that is failing, yet my system is chugging along without issue. Note: this is not my long-term plan. I think I bumped the SATA cable for the SSD (when replacing a different drive), so I just need to open the case; as for the data drive, I've got another one ready to go but just haven't had the time in the past 2 days. Unraid is really great to work with, IMHO.
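The mover described above is conceptually just "move everything from the fast cache to the array, skipping files in use". A toy sketch of that idea (the directories and test file here are made up for illustration; the real mover is part of Unraid, and it checks for open handles, e.g. via fuser/lsof, which this sketch omits):

```shell
# Stand-in directories for the cache pool and the main array.
cache=$(mktemp -d)
array=$(mktemp -d)

# Pretend a download landed on the cache earlier today.
echo "episode data" > "$cache/episode-01.mkv"

# "Mover" pass: migrate all cached files onto the array (flat here for simplicity).
(cd "$cache" && find . -type f -exec mv {} "$array/" \;)

ls "$array"    # the file now lives on the array; the cache is empty
```

In the real system this runs on the schedule you configure, which is why a cache-written file typically appears on the array the next morning.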
