Except it wasn't 20. It was 2. One for free and one for the enterprise.
I totally agree with you, but that's a different statement than the one in the comment I originally replied to.
Rick Moen wrote a good essay titled "Fear of Forking"
It's a defense of forking, but it contains a few capsule histories of important forks of the past (from the perspective of when it was written) which are interesting.
When I first started using GNU/Linux in 1999 (Red Hat 6), I found it confusing to understand what all the fuss was about with libc/glibc and egcs/gcc. I think the differences had been resolved by that stage, but all the related documentation referred to the split, which made learning this strange new system very confusing (at the time, I knew what a compiler did but I didn't know anything about architectures other than x86, and I didn't know why a C library was so important).
Rick includes a reference to the Emacs split of the early 90s. Jamie Zawinski's reflection on this fork is quite interesting: https://anonym.to/?https://www.jwz.org/doc/lemacs.html (using the anonymiser as Jamie doesn't seem to like news.ycombinator.com as a referer).
Amazon only has some micro travel routers - so what are good options under $100?
I have to disagree on this one. Multiple diverging projects create _stability_, whereas a single project creates _fragility_. One bad step by the only project, and everyone suffers.
You'll see how stable diversity is.
Then package for windows xp. It probably works on windows vista, 7, 8 and 10.
Now I do see a lot of value in diversity. Resiliance, ethics, competition, collaboration...
Stability isn't one.
It is sandboxed, so your use case must be compatible.
It uses AppArmor; what if the system uses SELinux?
It's sandboxed in a certain way, your app must support it.
It's very recent; only the latest distros have a snapd release that works well.
It's slower to execute.
Snapd assumes systemd.
But wait, did you say snapd? I thought you said Flatpak. Or AppImages.
Anyway, despite all that, it is easier to write a snap than a deb+rpm+whatever. I actually like this project a lot.
And again it proves my point: to get stability, we use snapd, a tool to compensate for diversity.
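For reference, the "write it once" packaging mentioned above is roughly this shape: a minimal, illustrative snapcraft.yaml sketch (the app name, part, and command are all made up).

```yaml
# Hypothetical minimal snap recipe: one file instead of deb+rpm+whatever.
name: hello-nas
base: core18          # runtime the snap is built against
version: '1.0'
summary: Example app packaged once for every snapd-capable distro
description: |
  Illustrative only; real recipes declare per-part build steps.
grade: stable
confinement: strict   # the sandboxing discussed above
parts:
  hello-nas:
    plugin: dump      # just copy files in; real apps use language plugins
    source: .
apps:
  hello-nas:
    command: bin/hello-nas
```

Running `snapcraft` against a file like this produces a single .snap that any distro with snapd can install.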
Also, you shouldn't be writing your own packaging scripts: leave those to distribution packagers. There are thousands of people who work on packaging these things, and the user of the distribution is much safer if they don't touch software distributed by random people.
This stuff is completely stable, just don't break the assumptions of the system without knowing what you're doing.
And for static linking: that is kind of happening with those modern snap and Flatpak and whatever systems for handling applications, but it's bad for security. I want to update my system zlib in case there is an issue, instead of depending on all applications consuming compressed untrusted streams updating their packages in time. And I certainly don't want each little tool bundling its own Qt.
Or provide an app image and ignore most integration.
This is my point exactly: to get stability, you remove diversity.
Static linking, one packaging system, and you don't have to care about how diverse the universe is.
But it also means you don't benefit from what those differences bring: security updates, a dependency graph, automation, signing, jails, user documentation and training, OS integration, native window theming, etc.
This shouldn't need to be done? The distribution's packagers handle most of this. Except for Nix, maybe? I hear they have a particularly fucked up ecosystem for packaging.
The chances your project matches the criteria to be included in any distro repo are very low, and it's a lot of work and problems by itself.
They are not app stores you pay to get into. The gatekeepers have a very strict opinion on what to let in and how. And it's all done manually, and Fedora's policy is not Debian's is not Gentoo's.
Plus, statically linked projects are almost never accepted. Back to square one.
Besides, what if it's not free software?
And often fuck it up and introduce bugs or even vulnerabilities, intentionally ignore the developer, or simply fail to update packages in a reasonable time frame.
Relying on free labor to package software for you is a terrible terrible idea that helps keep Linux Desktop a shitshow.
I'm glad there is a layer there that will patch and configure to better integrate into the system and in some (very rare) cases remove user hostile "functionality".
This was a bug introduced in packaging. See https://lwn.net/Articles/281436/ for more details.
The relationship between FreeNAS and TrueNAS is more like the difference between Fedora and RHEL. Both sponsored by the same company, but the latter is the more mature, "enterprise" version.
So instead of having FreeNAS and TrueNAS we’re going to have “TrueNAS Core” and “TrueNAS Enterprise”.
It’s honestly not that big of a shift from a practical perspective.
That sounds more like RHEL vs CentOS, really. ...which is fitting, since those two have also been slowly coming closer and closer to each other.
Anti-fragility is a lot more complicated than just having a lot of implementations.
The move away from CVS and SVN to much more easily distributed revision control is one of the best things that ever happened to large open source projects.
The reason for this is that when it's very easy to lose contributors and users to forks, it enforces a lot of project-management discipline on the part of the project leadership. Before, when you held all the keys to the castle and it was difficult to move away, it was very tempting for people to use their position to impose "political" restraints on others.
And vastly reduced hosting costs, thanks to things like GitHub, GitLab, the spread of cloud providers and so on and so forth, make it cheaper now than ever before.
And these things make it easier to 'unfork' as well.
In this way we get the odd result that easy forking has a way of making forking unnecessary.
And when there is a major dispute in a particular community, cheap and easy forking (and recombining) means that people can actually have competing governance models and see which approach is actually better. Rather than just fighting until everybody gets burned out and abandons the project.
Lede vs Openwrt is a good example of this.
Libre Office vs Open Office.
Gnome vs Unity.
These things exist more due to competing governance models than anything else.
So this can be summarized as: "improvements in anti-fragility in modern large open source projects are due more to the fluidity with which projects can be managed, forked, and recombined than to the sheer number of implementations users and contributors can choose from".
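Mechanically, the "fork and recombine" described above is cheap because in git a fork is just another remote; a minimal runnable sketch (all repo paths and names are made up):

```shell
# Create an "upstream" repo, fork it, diverge, then "unfork" by merging back.
set -e
rm -rf /tmp/unfork-demo && mkdir -p /tmp/unfork-demo && cd /tmp/unfork-demo
git init -q -b main upstream
git -C upstream -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "base"
git clone -q upstream fork                     # the fork: a full copy of history
git -C fork -c user.email=b@example.com -c user.name=b \
    commit -q --allow-empty -m "fork's own work"
git -C upstream remote add fork ../fork        # upstream can track the fork...
git -C upstream fetch -q fork
git -C upstream -c user.email=a@example.com -c user.name=a \
    merge -q --no-ff -m "unfork" fork/main     # ...and merge its work back in
git -C upstream log --oneline                  # base, fork's own work, merge
```

Compare that with CVS/SVN, where history lived only on the central server and a fork meant starting a new repository from a snapshot.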
I thought it was about scratching your own itch? "I want to be a maintainer" kinda sounds like an itch.
> Some of the features of TrueNAS Enterprise include support for dual-controller/HA systems, native Fibre Channel support, integrated chassis monitoring and management, as well as certifications with platforms like VMWare, Veeam, and Citrix.
The "native Fibre Channel" support bit sounds a bit worrying. Isn't that just the inclusion of the appropriate drivers in the kernel config used to compile the image?
People do ask about that on the FreeNAS forums (Fibre Channel support), so it's not like there's no demand for FC from FreeNAS users.
It sounds like FC is being "kept back" from people who want it, in more of an open-core approach.
It's also in use by people that buy FC gear on Ebay, as they want some of the capabilities that FC offers for whatever reason.
If they've really decided to lock OSS customers out of using FC gear (which would be crappy), this definitely sounds like a move to "Open Core" or something like it and would be a shame. :/
They are already doing that and always have been.
This is just unifying the name and the source code.
so i guess it depends on what you mean by "enterprise"? and what that enterprise does?
we serve a decent amount of video content out (~35Gbps during peak) and we have a lot of video content stored and edited in-house. we made the decision to use TrueNAS for our _internal_ work video storage and sharing, but (not yet) for _public_ serving of video.
all the in-house edit masters, work files/partial renders etc are stored on the in-office TrueNAS systems, but the actual _serving_ of the final encoded HLS content out to customers is via in-DC clustered NetApp sitting behind a lot of striped SSDs (FS-Cache) serving out http(s). we have an autoencoding pipeline that picks up the finished masters from the TrueNAS, does the various encodes and throws them up on the NetApps for public serving once they're done.
i'm a longtime enterprise storage user (and specifically a big fan of NetApp), but i would very much like to see if its possible to transition out of such a giant dollar-suck as NetApp and give something like TrueNAS a chance at the frontend.
these baby steps (so far successful!) are our foray into these uncharted waters!
I used FreeNAS and liked it, but my recent build is Unraid because I had 24 bays, and didn't want to buy all my drives at once, and afaik FreeNAS/ZFS is not super great with adding drives to the pool, especially if they are different drives. Unraid, though, is not great with drive failure and I'm starting to think I want to go back to FreeNAS, but I am torn:
1. I don't know how to move 80TB.
2. I still don't want to buy all my drives at once; is FreeNAS better about adding new drives these days?
Buy 8 drives, have a RAIDZ2. Fill. Buy 8 more, add as another RAIDZ2 VDEV. Capacity is the sum of both, it's one pool.
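In ZFS terms, the grow-by-vdev approach above looks something like this (device names are hypothetical, and these commands need real disks and root, so treat it as a sketch rather than something to paste in):

```shell
# First batch of 8 disks: one RAIDZ2 vdev, pool "tank".
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
# Later, 8 more disks: a second RAIDZ2 vdev added to the same pool.
# Capacity becomes the sum of both vdevs; writes stripe across them.
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15
zpool status tank   # shows both raidz2 vdevs under the one pool
```

The catch, as the parent comments note, is that you grow in whole-vdev increments; you can't just drop one extra disk into an existing RAIDZ2 vdev.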
ps. I'm more of a Linux guy; my BSD knowledge is limited to this NAS4Free installation and a pfSense firewall I ran ages ago on PCEngines embedded boards, which worked like a charm as well.
Don't use a USB boot disk though. FreeNAS writes to the boot disk enough to destroy most USB drives in a few months. A high quality USB drive wouldn't have that problem, but they don't seem to exist.
I started out with FreeNAS ~18 months ago but I'm not sure at the moment which version that was (I could check later). About a year ago, I updated to 11.2-U2 and just two days ago I've updated to 11.3-U1.
Since the beginning, I've had the "freenas-boot" pool on a pair of USB3 flash drives -- I don't recall which manufacturer/model I'm using but I did specifically buy "higher end" flash drives to use for this purpose. I do a weekly scrub of the "freenas-boot" pool and, thus far, I've experienced no issues with them whatsoever.
FWIW, it's very rare for me to actually log in to the FreeNAS system to make any changes or perform any "administration". I set it up initially and, since then, I've mostly left it alone and it "just works".
Using the graphs I do see some hourly write activity to the USB device. But say it writes 10MB of data (the graph shows ~100kB) every hour for half a year; that's still only ~44GB, just over one drive write in my case.
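Spelling that estimate out (assuming the pessimistic 10 MB per hour, which is already 100x what the graph shows):

```shell
# 10 MB/hour * 24 hours * ~182 days (half a year), in MB:
echo $(( 10 * 24 * 182 ))   # 43680 MB, i.e. roughly 44 GB
```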
So maybe some services cause it to write much more data?
Like so many other things, though, I'd say try it and see if it works well enough for you.
I hope this transition doesn't ruin things.
I’m not really sold on their jails (VMs). Personally I feel more comfortable on Linux but ZFS and the features that come out of the box with FreeNAS won me over. I’ve looked at Open Media Vault (Debian based) a little bit but not interested in moving data.
Maybe I’m just a simpleton, but I run an rsync script each night. The beauty is that if I accidentally delete something I can easily recover it, and if a drive fails I lose less than one day's worth of data. This trade-off is well worth it for me.
If I had several drives I’d use SnapRAID to cut down on costs.
Oh and I’m lazy so I installed CentOS on it. In several years when CentOS reaches EOL I’ll just buy new hardware and install the newest version.
Synology type devices make me nervous. Krebs recently did a piece on IoT gear being the new target for ransomware. I also wouldn’t feel comfortable exposing anything running on these devices to WAN.
I've got automated ZFS snapshots set up and when I finally replaced a failing drive -- just two days ago, I think -- it was a fairly simple matter. I updated it to the latest version of FreeNAS and that was simple and straightforward too.
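For context, the drive swap described above usually comes down to a couple of commands (device names are hypothetical and this needs a real degraded pool, so it's a sketch only):

```shell
zpool status tank                 # identify the failing disk, e.g. da3
zpool replace tank da3 da9        # resilver the data onto new disk da9
zpool status tank                 # watch resilver progress until done
```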
FWIW, this box isn't exposed to the Internet. In fact, it doesn't even have a default route; besides, my router / firewall filters outgoing traffic too -- not just incoming.
I could have just used FreeBSD and configured the pieces I need instead of using FreeNAS but it's been quite stable and reliable and has mostly been a "set it up and forget it" experience.
Is the value added that it is easier to configure? Is there better default tuning for a NAS setup?
I've occasionally browsed the FreeNAS forums and they seem to have stridently opinionated people in them telling everyone exactly how things should and shouldn't be done. But I've never seen a simple clear explanation of why.
What I'm looking for is something like a table of items that says:
Here's something that FreeBSD does .... Here's how FreeNAS improves that thing.
- they've been around the block long enough to know that FreeNAS can do 95% of what they need better than they can do it, with far less chance of errors introduced, but with 95% less of their valuable time needed to do it
it's that simple honestly.
if you already know how to do setup/configure/tune/tweak all of the components inside FreeNAS specific to your environment, congrats, you're already a great unix admin. you've been in your career a while, and you're probably pretty well paid.
but why would you spend your valuable per$onal time to faff about setting up/configuring/tuning/tweaking what FreeNAS can do out of the box, unless you have an edge case scenario that FreeNAS doesn't cover or can't do well?
for the same reasons, again i must specify -- unless you have very specific edge case scenario(s)/situation(s) or wanted to use this as a learning exercise -- why would you:
1) hand-craft your own *BSD or Linux-based firewall/VPN/router, instead of using pfsense/iptraf/etc
2) hand-craft, build and create your own Wifi router OS image, instead of using OpenWRT
3) roll your own port monitoring and OS proc monitoring notification system, instead of using Nagios/BigBrother/etc
so to get back to your OG question of
> If I already know my way around FreeBSD and around a command line, why would I want to use FreeNAS instead of just sticking to vanilla FreeBSD?
because your time is valuable, and because FreeNAS can do 95% of what you need better than you can do it.
(edited for formatting/clarity)
Damn you got me.
This was pretty much the logic that I used. I was sick of having to worry about data integrity and backups, the amount of arcane knowledge required to do it correctly and safely is immense. FreeNAS has good defaults and good documentation so I got myself a FreeNAS Mini (eventually upgraded to the MiniXL because it is able to take better advantage of zfs).
put your 50TB+ of anime up into Google Drive on a single _unlimited storage_ G suite account, "mount" that onto a Windows 10 VM using Google Drive Filestream, setup Plex on the W10 VM to point to Google Drive, and laugh all the way to the bank because you no longer have to deal with ... well, anything.
It's not perfect but it's certainly a lot more "fire and forget" than the FreeBSD login prompt. A power user might find it a bit restricting, but if you just want a NAS it's quite nice.
I’m not familiar with FreeBSD or its command line, and that makes FreeNAS a better fit. Setting up the ZFS pools, permissions, and sharing is easier, and I’m not going to mess it up and lose data.
I suppose I could have set up all these things myself (it is my line of work, after all), but it was nicer to not have to.
Basically the commercial version was initially called "FreeNAS Pro", but because of the "Free" in the name it was not treated seriously in the enterprise, so it was renamed to "TrueNAS". So after the merge, they still don't want to use the word "free", so they took the commercial name and added "Core" to it.
People are posting that it is great news that they're unifying for the consumer. I believe at best it is a neutral thing for the consumer; the unification is great for iXsystems, because it will reduce their overhead of maintaining two products separately.
I went to check my custom-built FreeNAS one day, and it was off, and it wouldn't properly boot, so I needed to haul a monitor out to see what was going on...
I finally hit the age where I don't care to deal with shit like that anymore, and finally pulled the trigger on a synology. Couldn't be happier.
It was fun setting up and configuring FreeNAS the way I wanted, but overall bleh because I didn't get the right HW (went with a dual-core Atom at the time, not beefy enough for Plex re-encoding).
Recently I replaced it with a barebones Arch install and it works just as well for home use, and starts fast and reliably. I think FreeNAS middleware is overly complex for home use if you are at all comfortable with administering things yourself (ZFS really makes it easy).
FreeNAS might make more complex things you'd do in an enterprise environment easier, but I think for anything you'd want to do at home, a basic Linux install would probably be easier. (eg. installing media server software, setting up a VPN server, even running a VM, etc.)
Your conclusion is not supported in the mentioned URL.
What has been described in the URL could've happened with a FreeNAS or TrueNAS or Open Media Vault system or just about any system. RAID is never a backup, and one backup is not enough.
My point is that both options have similar chances of breaking unexpectedly.
You're only going to get actual, reliable and useful numbers about failure rates between competing products by doing actual studies and analyzing the aggregate results. Anything else is meaningless. And as a buyer, I'm more worried about things like warranty and technical support quality, frankly, if I'm going to buy a prepackaged system anyway, which are easier to gauge from afar.
Instead of buying a replacement Synology and moving the drives (which are all fine), he dumped the data to various other devices using Linux raw LVM and restored 100% of the data.
This seems like it worked as intended to me (not counting the simultaneous loss of the USB stick and motherboard). Anyway, any NAS that contains important data should be backed up offsite.
Is there anything better than RAID? That's what every file system seems to use and it's honestly not very good. High capacity arrays can take over 24 hours to rebuild and during this time the risk of additional failures is even greater due to read errors. RAID also requires matching capacities for all storage devices, making it hard to expand capacity of the whole array.
The only alternative I can think of is object storage which is not exactly made for home use.
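The "over 24 hours" figure above is easy to sanity-check: a rebuild has to read or write an entire disk, so at an assumed (and fairly generous) sustained 150 MB/s, a hypothetical 16 TB drive takes:

```shell
# 16 TB = 16,000,000 MB; divide by MB/s throughput, then by seconds per hour:
echo $(( 16000000 / 150 / 3600 ))   # ~29 hours, best case with no other load
```

Real-world rebuilds are slower still, since the array usually keeps serving reads and writes during the resilver.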
You could just buy a custom-built FreeNAS-driven NAS, like the FreeNAS Mini for example.
> FreeNAS and TrueNAS have been separate-but-related members of the #1 Open Source storage software family since 2012. FreeNAS is the free Open Source version with an expert community and has led the pursuit of innovations like Plugins and VMs. TrueNAS is the enterprise version for organizations of all sizes that need additional uptime and performance, as well as the enterprise-grade support necessary for critical data and applications.
> In the 11.3 release, FreeNAS and TrueNAS share over 95% of the same source code but are built into separate images, each with their own name. The Version 12.0 release will change this process by moving to one unified image with two different editions: a free, Open Source edition (this will never change!) and an enterprise edition.
> Both editions will have common Open Source code, unified documentation, and a shared product name.
So I suspect that the source code for TrueNAS is not available.
You can specify for each "share" if it should be No/Yes/Only/Prefer, explanation of each below (Pulled from my UnRaid install):
* "No" prohibits new files and subdirectories from being written onto the Cache disk/pool.
* "Yes" indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the Cache disk/pool and onto the array.
* "Only" indicates that all new files and subdirectories must be written to the Cache disk/pool. If there is insufficient free space on the Cache disk/pool, create operations will fail with out of space status.
* "Prefer" indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the array and onto Cache disk/pool.
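The four modes above differ on two axes: where new writes land, and which direction the mover transfers. A rough model of the first axis (my reading of the descriptions above, not Unraid's actual code):

```shell
# Where does a new file land?
# $1 = share cache mode, $2 = "yes" if the cache pool has free space.
write_target() {
  case "$1" in
    No)         echo array ;;                                  # never cache
    Only)       [ "$2" = yes ] && echo cache || echo fail ;;   # no fallback
    Yes|Prefer) [ "$2" = yes ] && echo cache || echo array ;;  # cache first
  esac
}
# The mover is the second axis: "Yes" drains cache -> array,
# while "Prefer" pulls files array -> cache.
write_target Yes no      # prints "array" (falls back when cache is full)
write_target Only no     # prints "fail"  (create fails with out-of-space)
```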
For me I just keep all my docker containers/VMs/docker config on my cache pool ("Only"). Everything else goes directly to the array ("No"). I do this because I gain nothing from having a new TV show sit on the cache for <24 hours. I am rarely going to watch something that fast and so it just fills up the cache drive for no good reason. Yes the array is slower to write to but I don't really mind that.
I am able to lose 1 cache drive (I have 2) and/or 1 data/parity drive without experiencing data loss. I currently have 1 SSD showing as disconnected and 1 disk drive that is failing, yet my system is chugging along without issue. Note: this is not my long-term plan; I think I bumped the SATA cable for the SSD (when replacing a different drive), so I just need to open the case, and as for the data drive I've got another one ready to go, but I just haven't had the time in the past 2 days. Unraid is really great to work with IMHO.