PC-BSD is very good, but there are some weird quirks compared to Linux. For example, I have still never really figured out how to get my screen brightness to adjust properly, and none of my laptops' integrated WiFi adapters seem to work out of the box with any BSD. The latter problem I solved with a cheap USB WiFi dongle; the former I just haven't solved, since this laptop doesn't need particularly good battery life anyway.
However, there are some nice perks to having a system that runs on ZFS. Due to its frequently-snapshotting nature, it's much easier to do risky stuff without having to worry about nuking the whole system, since you can always revert to the previous snapshot (kind of like Git).
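That workflow is only a couple of commands; a minimal sketch, with made-up pool/dataset and snapshot names:

```shell
# Snapshot the dataset before doing something risky
zfs snapshot tank/home@before-experiment
# ...do the risky thing, and if it goes badly:
zfs rollback tank/home@before-experiment
# See what snapshots exist
zfs list -t snapshot
```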
I have also come to really appreciate the ports system, especially after I started doing a lot of work with video processing (specifically FFmpeg). Being able to have freshly compiled packages, whilst still having them managed by a package manager (which happily resolves compile-time dependencies), is incredibly useful, and makes it kind of hard to go back to Linux (though I hear Gentoo might be up my alley).
Overall, the transition isn't terribly hard, but I wouldn't recommend it if you don't want to get down in the weeds of the system. I enjoy that stuff, and I suspect most of the HN audience does too, so I recommend trying it out.
Agreed, this is an amazing feature, as are jails to keep things compartmentalized. I hate having to clutter my FreeBSD home server root install with a billion dependencies for the various services it hosts, so I use jails as lightweight VMs to partition it into different service groups. It also makes it easy to back up and move jails whenever I upgrade because things are contained on ZFS subvols.
For people who don't know, snapshots are also well supported on Linux, and not just in ZFS. I use it with BTRFS on my main workstation and it's saved me a few times. There is a great tool called Snapper (by the SUSE people, but it works on other distros - I use it on Arch) that automates managing snapshots and can even e.g. hook into your package manager to automatically snapshot before an upgrade. It's like Windows System Restore Points done right.
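For the curious, the Snapper workflow is only a few commands (the config name "root" is the conventional default, and paths assume a btrfs /):

```shell
# Create a snapper config for the root filesystem
snapper -c root create-config /
# Manual snapshot before a risky change
snapper -c root create --description "before kernel upgrade"
# List snapshots, then diff two of them
snapper -c root list
snapper -c root status 1..2
# Revert the files changed between snapshot 1 and now
snapper -c root undochange 1..0
```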
RAID-Z is a great soft-RAID that lets you lose up to three disks without data loss (depending on the parity level you configure: RAID-Z1, Z2, or Z3), and `zfs send` is really useful for keeping disks on different machines synchronized.
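A sketch of that send/recv sync loop, with hypothetical pool and host names:

```shell
# Initial full replication to another machine
zfs snapshot -r tank@monday
zfs send -R tank@monday | ssh backuphost zfs recv -F backup/tank
# Later: send only what changed since the last common snapshot
zfs snapshot -r tank@friday
zfs send -R -i tank@monday tank@friday | ssh backuphost zfs recv backup/tank
```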
Btrfs has similar features with `btrfs send` and RAID, though the parity RAID modes most similar to RAIDZ have kind of a troubled history (https://btrfs.wiki.kernel.org/index.php/RAID56) and I wouldn't recommend using Btrfs RAID5/6 for valuable data. ZFS RAIDZ is also superior in terms of reliability because it does a full-stripe write and ZFS has some other features for server use (e.g. zvols). Btrfs is more flexible with regards to growing or shrinking a RAID volume, but again the code is not considered production ready and has some pretty severe drawbacks.
That said, I used Btrfs RAID5 on a small compute cluster and it's never had any issues, but if I was going to do it again I'd use ZFS there.
I actually did not know that Btrfs had a send function, I'll have to play with that on one of my laptops.
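It works much like the ZFS version; a rough sketch (paths and host are made up), with the caveat that btrfs can only send read-only snapshots:

```shell
# Create a read-only snapshot and ship it to the laptop
btrfs subvolume snapshot -r /home /home/.snap1
btrfs send /home/.snap1 | ssh laptop btrfs receive /mnt/backup
# Incremental: only the delta against the parent snapshot crosses the wire
btrfs subvolume snapshot -r /home /home/.snap2
btrfs send -p /home/.snap1 /home/.snap2 | ssh laptop btrfs receive /mnt/backup
```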
Aside: I ran a homebrew FreeNAS (forget the version, it was several years ago now) with 12x3TB drives, without ECC, and that version of FreeNAS had a bug where it didn't pick up the drive designated as a hot spare when the first drive died... They were horrible Seagate 3TB 7200.xx drives, and they all died way too close together... In the end I lost most of the system. When I finally got around to testing the drives (didn't have the heart/energy for a while), all but two had significant errors.
That's more a knock against Seagate (will NEVER buy from them again) than ZFS, though. I also will likely never do a homebrew NAS server again; it didn't save enough over a Synology to make the cost savings vs. time worthwhile, but YMMV.
Now this server runs on ZFS on Linux using Proxmox, though admittedly I did up the RAM to 32GB.
Thinking of another Synology box in a few months, bumping to more or larger drives. Though space would be less of a problem if I were willing to re-rip everything to H.265; I finally got my settings for the nv encoder pretty well balanced, it's just a lot of time.
However: the questioner wanted to know about TrueOS, not about FreeBSD. The desktop with TrueOS is significantly different to the desktop with FreeBSD. There is extra desktop stuff that comes in the box with TrueOS. And the desktop stuff is done somewhat differently.
With some FreeBSD-derived operating systems such as GhostBSD one gets "flavours" of the operating system according to the desktop environment, like "GhostBSD MATE" and "GhostBSD XFCE". With TrueOS, there aren't such multiple "flavours". Rather, PCDM allows one to select from multiple desktop environments at login time, and the desktop environments are available side by side in a single "flavour" of the "desktop" operating system. TrueOS is in fact "flavoured" as "desktop" versus "server", rather than as "MATE" versus "XFCE" versus whatever.
There are a number of "TrueOS-own" tools added to those desktops as well, including (for example) the Life Preserver, the PC-BSD Control Panel, AppCafe, and the System Updater.
Going further down the rabbit hole, if you are willing to forego an easy OOTB experience and want to really get your hands dirty, give OpenBSD a go. I've found its installation process to be the fastest and easiest of any BSD, and it gives you a bare bones foundation to build into whatever you want. For desktop use it does require some tweaking, but it's minor stuff and you end up with a rock-solid OS that gets out of your way and lets you get on with your work.
Some of my go-to resources for setting up a proper OpenBSD desktop:
https://news.ycombinator.com/item?id=13223351 (ymmv, I use Xfce instead of Lumina)
The reason I chose PC-BSD over vanilla FreeBSD (or Ubuntu) was its integrated ZFS / beadm system, which creates ZFS snapshots at update time that are integrated with the boot loader. That means you can roll back if an upgrade fails, or breaks a crucial piece of software. There is less reason to do this now, as FreeBSD current now has decent beadm and zfs boot support baked in as of a few months ago.
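The boot-environment workflow with the beadm tool is pleasantly small (the BE name here is an example):

```shell
beadm list                 # show existing boot environments
beadm create pre-upgrade   # capture the current system as a new BE
# ...upgrade; if it breaks, activate the old environment and reboot:
beadm activate pre-upgrade
shutdown -r now
```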
The things that I hate compared to MacOS and Linux:
- No support for U2F in Chromium. In fact, if you have set up a security key, simply getting the U2F prompt will crash. See https://bugs.chromium.org/p/chromium/issues/detail?id=451248
- Chromium version historically lags Chrome on supported platforms by weeks or months (though, as of today, I have chromium-56.0.2924.87_1, which is pretty recent).
- No support for Google Hangouts (or other video conf. systems)
- "Complex" Linux commercial binary software (like Chrome, slack desktop client, my kid's Kerbal Space Program) does not work with the Linux API support.
To work around these things, I've installed a bhyve VM running Ubuntu for Linux apps (and gone through the pain of routing PulseAudio over a network socket so that sound works). With PCI passthrough, I can pass a USB controller into the Linux VM, so U2F and a video camera for conf. calls work. For KSP, Wine works reasonably well on the Windows binary.
If I had it to do over again, I might just reverse things, and run Ubuntu for my desktop, and pass the disk controllers through to FreeBSD and run my FreeBSD stuff in a VM.
1. I also LOVE KDE (but I mostly use a tiling window manager, i3)
2. I tried liking Btrfs (similar to ZFS) for my non-server systems and just went back to my old file system. I feel my two-copies-and-one-offsite setup is a better system for desktop. I use openSUSE and never had a problem I couldn't figure out.
3. You can use ZFS on Linux
My main data loss comes from hardware problems or a command mistake, like running mv onto photo.jpg and forgetting the no-clobber flag. That's a simple fix on my side: I copy the affected file back over.
That said, it's always an exercise in frustration, and an unexpected personal hardware cost, when a drive dies. Still, it's less downtime than a full recovery.
1) I have a second hard drive that holds a copy of the files from the first drive (they are not in RAID; a plain copy saves me from mechanical errors). It isn't for restoring the OS but for saving the files. I did play with a dd backup solution on the second drive, but that wasn't very practical; rsync was way more practical for my needs as a content maker.
2) I have a home server that uses Crashplan that sends the files to their server. This is my offsite backup. If I accidentally delete a file this is where I get my backup copy.
(Not an expert in this field so please bear with me if I'm too fuzzy or not rigorous enough).
- COW + snapshotting basically gives you git-like rollback/failsafe capability. You could e.g. trigger it manually before updating or tweaking stuff, or have it automated every time you run pacman/apt/yast/whatever.
- subvolumes give you the features of partitioning without the physical/ordering limitations (each subvol can expand into free space, etc.). Combined with snapshots (you can snapshot each subvol discretely as its own fs), especially towards another physical btrfs drive, and some automation tool (like Snapper from openSUSE, or simply cron jobs), your 'machine-wide git' becomes an effective backup solution (first-order at least, but you can obviously snapshot to several machines, including off-site).
So it's a "set it up and forget it", very powerful filesystem base for your machine, regardless of what you do with it. Instant rollback, automated backups, self-healing in case of corruption, and btrfs in single-disk or RAID 0/1/10 is incredibly performant. It basically allows you to utterly destroy your install and get it back up and running within minutes (if your snapshotting strategy is well devised, it can be as painless as changing the boot drive and fixing a single UUID entry in the /boot files; literally 5 minutes).
Note: on your root drive, at OS install you'd typically make subvols of /home, /var, /var/log (so you can debug externally), or domain folders like /var/www and so on and so forth. It basically frees most 'nix fs layouts from most physical drive limitations.
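That layout is only a handful of commands at install time; a sketch (subvol names follow the common @-prefix convention, and the new filesystem is assumed mounted at /mnt):

```shell
btrfs subvolume create /mnt/@          # will become /
btrfs subvolume create /mnt/@home
btrfs subvolume create /mnt/@var
btrfs subvolume create /mnt/@var_log
# Each subvol can then be snapshotted on its own:
btrfs subvolume snapshot /mnt/@home /mnt/@home-snap
```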
- I don't see much activity on the root volume, most of it in my case is either from VMs or userland.
- For VMs I use discrete subvolumes, different fs/snapshot/backup strategies given the intricacies of this use-case. You'd typically snapshot VMs from the hypervisor itself.
- /home is its own subvol, again with specific settings. I typically disable COW on /home but adopt a much more aggressive snapshot policy (e.g. every hour), whereas the system (/) can COW a lot with little to no performance impact, and snapshotting it only once a day is usually fine.
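Disabling COW is done with the +C file attribute; note it only affects files created after the flag is set, so it's best applied while the directory is still empty:

```shell
chattr +C /home        # new files under /home are created without COW
lsattr -d /home        # the 'C' flag should now show in the listing
```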
Note that I only use SSDs (either 2x RAID 0 over SATA 3, or NVMe) and my workstation is a beefy one; so that may explain why I don't see any resource issue whatsoever with these admittedly heavier filesystems (btrfs being much leaner than ZFS in this regard, depending on how you setup the latter).
My personal intuition is that major OS suppliers will eventually move to such 'modern' fs under the hood for most users, if only to provide easy rollback/backup features that actually work fine and fast. But it takes years to validate a new fs on an existing architecture, so I wouldn't expect that for Windows or MacOS until the next 'major' version (post-10.x, amusingly in both cases).
Boot environments - saved me many times while learning about BSD
Periodic data checks
Datasets - like standalone filesystems, which you can tailor for a specific use (e.g. enabling compression, ...)
I have TrueOS installed on a workstation and I use ZFS raid10 for reliability. I also have offline backups, but it sure is nice not to worry about that unless something really out of the ordinary happens.
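A sketch of those last two features, with a made-up pool name:

```shell
# A dataset tailored for a specific use: compressed, no atime updates
zfs create -o compression=lz4 -o atime=off tank/projects
zfs get compressratio tank/projects
# Periodic data checks: a scrub reads every block and verifies checksums
zpool scrub tank
zpool status tank
```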
It's missing NFSv4 ACL support. It's missing per-dataset user and group permissions, forcing all operations to require root privs; on the other systems, the ZFS tools work for all users, with appropriate checks. There is no delegation of privileges, e.g. to allow users to perform admin actions like snapshotting or send/recv. No transparent NFS export. It behaved badly when a disc glitched, requiring a reboot to resolve; the tools started getting stuck in D state when trying to investigate the fault. And just a few weeks ago it refused to mount all my datasets until I changed the mountpoint (for no discernible reason). FreeBSD and illumos provide a more mature, well-integrated implementation.
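For comparison, the delegation that works on FreeBSD and illumos looks like this (user and dataset names are made up):

```shell
# Let an unprivileged user snapshot and replicate her own dataset
zfs allow alice snapshot,send,hold tank/home/alice
# alice can now run these without root:
zfs snapshot tank/home/alice@backup
zfs send tank/home/alice@backup > alice-backup.zfs
```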
GEOM_PART: zvol/tank/vm1 was automatically resized.
Use `gpart commit zvol/tank/vm1` to save changes or `gpart undo zvol/tank/vm1` to revert them.
Every time I'd run the VM, I'd start seeing ZFS corruption. I finally just moved the VM to a normal file. Sigh.
Luckily ZFS was able to recover from the corruption.
slackware64-live-current.iso is the one you want to try. KDE 4.14.10
As to software disappearing: I think what might have been happening is that you'd go install something, but the updater had already run and created a new boot environment based on a snapshot from before you'd installed the new pkg. So as soon as you rebooted into the new BE, you'd "lose" the package. This is one of the reasons they changed how updates work, and only create the new BE when you're rebooting. I personally hate this, as I'd rather re-install something than wait half an hour for the update to re-install pkgs in a single-user-ish environment.
I second the notion that you might want to consider switching to Ubuntu, rather than going back to Windows. I've run ZFS on Linux for years on my old desktop (now my wife's) and never had a problem. The only "issue" I had was the transition from 14.04 to 16.04 was a bit rocky, as 16.04 includes ZFS natively, rather than using the zfs-on-linux ppa.
With computers you can either learn how computers work, or use tools that try to hide how computers work so you don't have to.
For example, you can configure a network interface by hand - it's a few lines of configuration - but, you need to know how networking works at a basic level (e.g. vocabulary like SSID, subnet masks, etc.).
The alternative is what Windows/macOS/Linux do, which is wheelbarrow code into GUI wizards and daemons that try to automate everything. That's either good or terrible, depending on who you are.
It's good in that often you can click a bunch of buttons and it will "just work". It's terrible in that if anything goes wrong, not only are you dealing with the complexities of the actual problem, you also have to guess around what the hand-holding code is doing on your behalf, and half the time it will fight you or not let you do the things you need to.
Worse, bugs are proportional to lines of code, so the likelihood of it mostly working but sometimes failing (seemingly at random) is many times higher, because the amount of code involved (including that of all the different subsystems talking to each other) is many times larger than the code that actually does the job.
I sat down and learned how LAN networking works about 17 years ago. It took a few hours of reading and messing around. The dividends have been huge: I just paste network configuration from one rc.conf to another, and if anything goes wrong, the language the computer reports the error in is the same language I speak, so the solutions tend to be obvious.
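As an illustration of how little there is to it, a typical FreeBSD rc.conf network stanza (interface names and addresses are examples):

```shell
# /etc/rc.conf -- wired, static address
ifconfig_em0="inet 192.168.1.10 netmask 255.255.255.0"
defaultrouter="192.168.1.1"
# wireless, via wpa_supplicant and DHCP
wlans_iwn0="wlan0"
ifconfig_wlan0="WPA SYNCDHCP"
```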
Also, nothing in terms of how I use tcp/ip or ethernet has changed in all that time, and I doubt it will 17 years from now either. Learn it once and you're done. How many different ways to configure networks have Windows/macOS/Linux had in that time?
The reason I left Linux is due to all the code that was supposed to "make things easier", but for someone who knows how things work, actually makes life harder. For a long time I could ignore all of it on Linux - the old low-level stuff still worked - but then they started messing with that (systemd was the last straw) and I had to get out. FreeBSD was a wonderful escape - it looks like complexity to people who don't know how computers work, but to me it's absolute simplicity.
Computers are important - if you're in this field it's worth taking the time to understand the tools that you depend on.
The only desktop BSD I can recommend is OpenBSD.
All you need after a fresh installation is:
echo 'xcompmgr &\nexec cwm' > ~/.xinitrc && xinit
echo 'xcompmgr &\nexec cwm' > ~/.xinitrc && xinit & && exit
xinit & && exit
&& is a binary op meant to execute the following command if the return code of the preceding command is success (0)
& is another shell builtin that sends the command preceding it to the background, and lets the shell continue with whatever comes next...
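A quick sanity check of those two semantics (note that in bash and POSIX sh, writing `& &&` literally is a syntax error, so that combination only parses in some shells):

```shell
#!/bin/sh
# && runs the right-hand side only if the left one exits 0
true && echo "after true"
false && echo "never printed"
# & backgrounds the preceding command; the shell continues immediately
sleep 1 & echo "printed without waiting"
wait   # reap the background sleep
```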
So, at first I read your edit and thought "Oh, this will exit the shell unless X can't be started" ... but exit should be running before X ever gets around to returning, no?
I just read the man page for my favorite shell, and it says the return code for `&` is always 0. So it's clear that `& &&` will work, but I don't see anything suggesting it will do anything that `&` alone won't do.
Just want to know if you know something I don't, and I figure there's no better place to learn a new shell thing than on a HN thread about BSD OS.
FreeBSD drew a lot of power at idle on my Thinkpad X201s, but OpenBSD was much better: longer battery life, and all the hardware worked basically flawlessly (except some X11 tweaks for the middle-mouse-button thing, which is not exclusive to BSD).
Also wifi being handled by "ifconfig" is super nice.
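On OpenBSD the whole wifi setup really is just ifconfig; a sketch (driver name, SSID, and key are examples):

```shell
# Join a WPA2 network by hand
ifconfig iwm0 nwid home-net wpakey secretpassphrase
dhclient iwm0
# To persist it, put the same options in /etc/hostname.iwm0:
#   nwid home-net wpakey secretpassphrase
#   dhcp
```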
It used to be: FreeBSD on the desktop, OpenBSD/NetBSD on devices, Solaris on the servers. These days, though, I think you might be in for a bit of a shock at the differences between the Mach-kernel macOS and current BSD land, especially depending on your use case. Can you elaborate on why you are considering this, and what your normal use is like?
Anyway, I ran it for a few days about two weeks ago. Without Chrome support, I can't have Netflix. Without Netflix, I can't watch cheesy fantasy TV shows while coding. Oh, and there was a bug in libressl that prevented me from using `go get`. Other than that, I thought TrueOS was pretty nifty.
I went back to Arch but I might try BSD as a desktop OS again in a few years.
Shortly after I began (AppleSeed) seed testing of pre-release Yosemite I decided that – after decades of happy use of Macs – the mash-up was not for me. An ill-fitting one-size-fits-all approach to the GUI, which was rushed through.
July 2014: I planned a three-year transition away from Mac OS X.
To the best of my knowledge, Apple's changes to human interface guidelines were not seeded. Instead, the changes were privately preset, published only after Yosemite was released.
October 2014: https://twitter.com/grahamperrin/status/524642026261057537
Two years later, my experience with PC-BSD, expressed when TrueOS Desktop was announced: https://twitter.com/grahamperrin/status/797724086898196481 "smart direction".
Should I recommend TrueOS Desktop to a user of Mac OS X? Not at this time; https://github.com/trueos/trueos-core/issues/created_by/grah... the most troublesome (time-wasting) of which is https://github.com/trueos/trueos-core/issues/287 – tl;dr the Mac metaphor would be an inappropriate switch off of Wi-Fi when simply switching network location. And https://discourse.trueos.org/t/-/378/18 too often I can not get network service without a restart of the operating system.
Generally, the majority of my discussion about the software is at https://discourse.trueos.org/users/grahamperrin/activity … you might need to scroll down a page to reach discussion of the software (to skip stuff arising from changes to community guidelines that were made without consultation with the community).
https://github.com/trueos/trueos-docs/issues/33 time for change :-)