From my experience of "old" computers, workstations, and then "PCs as Workstations", Windows won the desktop because the UNIX camp could never check their egos at the door and get their act together on a windowing system and graphics architecture. And while it was brilliant that you could do the "heavy lifting" on a server and do the display part on a less powerful machine, that architecture should have been taken out behind the barn and shot the moment 2D and 3D acceleration hit the framebuffer cards. Instead we have the crap that I have to put up with, which is a wonderfully compact and powerful system (A NUC5i7YRH) with a graphics architecture that is not supported at all on FreeBSD and has pretty egregious bugs on Linux, and runs beautifully on Windows 10.
For the conversation at hand though, the Linux graphics pipeline teams are much better at getting things working than the FreeBSD team, even though FreeBSD is much more "friendly" to non-open drivers.
I would love to have a nice, integrated, workstation-type experience with a UNIX operating system, but that music died when SGI and Sun threw in the towel and tried to retreat to the machine room.
The funny thing is that there's a reason for that: up until recently, the focus of FreeBSD has been on being a server OS. There's a push recently on improving the graphics situation, but it's only relatively recent, and is mainly focussed on getting integrated graphics on more modern hardware working. There are also some efforts to expand the driver compatibility layer so that Linux drivers can be run as kernel modules with minimal change.
Moreover, FreeBSD types aren't quite so fixated on making a "FreeBSD desktop" as the Linux community is. Sure, there are desktop spins like PC-BSD, but the project as a whole has never really had desktops as a focus.
FreeBSD as a project is pragmatic, and tries to focus effort on where the greatest benefit lies.
Of course, there's the larger issue of Linux being insular, with a tendency to reinvent the wheel (epoll vs. kevent/kqueue being a nice example) rather than looking at what the state of the art is elsewhere and adopting that. You just have to look at containerisation: Solaris and FreeBSD have long had this in the form of Zones and Jails, but containerisation on Linux is largely an independent effort that didn't look to learn lessons from those efforts or adopt them.
And I remember containerisation being mocked when hypervisors and VMs were all the rage. How things change...
I haven't used FreeBSD on a desktop for years, but I've been told it's getting a lot better recently.
Up until 7.0 came out, the desktop experience was pretty decent, and things like audio and video (anecdote ahead) worked out of the box better than Linux. After that, things started to slip in comparison.
The work put into PC-BSD has helped with the out-of-the-box experience, assuming you have the right hardware, but it's good to see more effort being put into improving the general driver situation. I certainly wouldn't be at all against ditching Ubuntu and switching back to FreeBSD as my desktop OS.
(Does kdbus count?)
Meaning that you no longer use kernel syscalls, but instead program with systemd in mind and then systemd talks to the kernel as it sees fit.
Just observe their actions regarding session tracking, where they are basically ignoring all data from the kernel about sessions.
One refuses outright to supply information that would allow a proper open driver to be developed.
Another has only started providing this information in recent years, and there are still delays between the latest hardware hitting the market and the proper information reaching the relevant hands.
The third only recently started producing hardware, and said hardware is only bundled with other products from the same supplier. Never mind that it generally performs worse than the other two offerings.
There used to be another, one that was supposedly well supported by open drivers. But that one long since left the consumer market, focusing instead on multi-display business setups etc.
So effectively you are complaining about how a group of people, forced to reverse engineer and often working in their spare time, can't match the market gorilla that gets direct support from third-party suppliers.
I get that it is impossible for the free software and hardware community to design a GPU. I also get that people who do design them have no interest at all in leaving any money on the table by sharing so much of the design that their "secret sauce" gets stolen (or they get hit with patent abuses).
Because it is genetically impossible for the Linux community to tolerate, much less endorse or encourage, any sort of proprietary advantage, I do not expect that Linux will ever be a functional desktop environment of the level and quality of a Mac or Windows environment. FreeBSD could be that environment, but they have to design for it. So far it hasn't been a priority for them.
I like using Linux, but I still miss the predictability of a BSD system - you know where things are, and where they are supposed to be. When I first started using Linux, I was absolutely flummoxed by the lack of distinction between the base system and add-on utilities.
Linux definitely feels more "organic" and "grown" whereas FreeBSD seems like it was architected and planned out. Not that this is a bad thing for Linux. My FreeBSD heritage still shines through when I use Linux; anything I install from source sits in /usr/local :).
To be fair, this is the norm on Linux too. I have never used BSD as a desktop operating system, but everything I've installed from source also sits in /usr/local. It's the default install directory for most Linux build scripts and I feel dirty if I add anything directly to /usr that the package manager isn't aware of.
This is 'The Cathedral and the Bazaar'. I've often seen the phrase used to contrast Microsoft/Apple with 'open source in general', but it's this right here: a fully integrated and designed system contrasted with an organically created system. Which approach is better is up for discussion.
Does this not depend on the distro you're using?
In FreeBSD anything you find in /etc is part of the system, applications that you install are fully contained in /usr/local including etc and rc.d (init.d). /usr/local is also completely empty when you first install the system.
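A quick sketch of what that separation looks like in practice (nginx here is just an example package):

```sh
# Everything the package installs lands under /usr/local
pkg install nginx

# Its configuration lives under /usr/local/etc, not /etc
ls /usr/local/etc/nginx/

# Enabling it still goes through /etc/rc.conf, via sysrc
sysrc nginx_enable=YES      # appends nginx_enable="YES" to /etc/rc.conf
service nginx start
```

Blow away /usr/local and the package plus its config is gone; /etc stays purely the base system's.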
System packages are packages in the distro repo, so those are the same thing.
> and 3rd party repo all sorting configuration and startup scripts in /etc.
Stuff you compile/install from random sources on the internet will place its stuff wherever the upstream decided to place it. The same will happen if you compile that software on FreeBSD. If you're thinking of ports, those are patched to install to the usual places, exactly the same as the packages in the linux distribution repositories.
FreeBSD's base system is very neat and well-organised, and they have an impressive ports tree with ports that integrate fairly well for the most part. But that doesn't extend to third-party software, and it seems silly to include that in a comparison on the linux side.
I run a small site on a VPS, so:
1. I don't have GBs of free memory for ZFS
2. I don't have GB of RAM, CPU and HD space to build everything from ports, and most importantly, no need.
Except for an application firewall for nginx, what do ports have over deb packages? All in all, how many MB of free RAM or HD space will I win by compiling everything myself, taking the time to do so and pushing off security updates because I don't have the time to sit and watch the compilation, hoping nothing breaks (which _did_ happen once)?
3. License - I don't care. GPL is free enough for me.
4. Base vs. Ports - Why should I care? Debian (testing!) is stable enough for me. Except for dist-upgrades, I never ran into issues, and then it may be faster to nuke the server from orbit. Now had BSD "appified" the /usr/local directory (rather than keeping the nginx binary in /usr/local/bin and conf in /usr/local/conf it would have kept everything related to nginx in a /usr/local/nginx) it would have been interesting, but now?
If anything, I like how Debian maintains both base and ports, so I can get almost any software I need from apt-get, and don't have to worry about conflicts.
5. systemd? The reason Debian went with systemd was (IIRC) because they didn't have the manpower to undo all of RedHat's work in forcing all applications to integrate into systemd (such as GNOME). I don't know how FreeBSD is doing in that regard.
I don't mind learning new systems. (see my username :) ). I actually understand what nixos or coreos, for example, bring to the table. But FreeBSD?
I got sick of the churn in Linux, having to relearn things all the time and then find out when I'm done that it's not any better, just different.
Things I learned how to do in FreeBSD a decade ago still work. For example, to get a NIC configured I can either put one or two lines of text in rc.conf (which I have pretty much memorized at this point), or stop and learn what RedHat thinks is the coolest way to do network config this year.
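For reference, the rc.conf lines in question look roughly like this (the interface name em0 is just an example; yours may differ):

```sh
# /etc/rc.conf -- static address
ifconfig_em0="inet 192.168.1.10 netmask 255.255.255.0"
defaultrouter="192.168.1.1"

# ...or, for DHCP, a single line instead:
# ifconfig_em0="DHCP"
```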
I'm sure there are a great many things that I consider "right" now that will be wrong in 15 years' time, either due to changes in the environment in which they work, or me otherwise learning new information or developing new techniques over time.
> I don't have GBs of free memory for ZFS
In the early days of ZFS it required many GB of memory -- it was developed for Solaris, and designed for servers with lots of memory. But it has improved dramatically since it first came to FreeBSD, and people run it with far less memory these days.
(FWIW, at one point the amount of address space was far more important than the amount of RAM -- amd64 systems with 1 GB of RAM would run better than i386 systems with 2 GB. I'm not sure if this is still the case.)
No first-hand experience with ZFS on BSD, but I believe in the early days of the ZFS port there were issues where if the system came under sudden memory pressure, ZFS might not hand its RAM back to the kernel fast enough, leading to (I guess) a panic. So this is a fit-and-finish issue with a specific port, not an inherent ZFS issue, but it seems to have fed into the whole notion of "ZFS needs bucketloads of RAM".
If you are using ZFS without an L2ARC or dedup enabled, even a measly 1GB of RAM is "sufficient"; you just won't get the best performance, since it's going to have to constantly fetch from disk depending on your active data set (which is literally no different a problem than OS-level file caching in any other operating system).
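If you do want to bound the ARC explicitly on a small machine, there's a loader tunable for that (the 512M figure here is just an illustrative value):

```sh
# /boot/loader.conf -- cap the ARC at 512 MB
vfs.zfs.arc_max="512M"

# After boot, check how big the ARC actually is:
sysctl kstat.zfs.misc.arcstats.size
```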
Atom C2758, 32GB ECC RAM, 4x4TB in RAIDZ2 (losing that much storage was painful, and the $1200AUD upgrade path to 4x8TB even more so) with a 128GB SSD as L2ARC.
I found out some time later that I probably wanted an SLOG device instead but I'm really too afraid to touch my config. FWIW it's pretty stable on FreeBSD 10.3 with around 5GB RAM in use.
RAIDZ is a performance killer right off the bat; I would switch to using mirrored vdevs instead if throughput is an issue for you. Parity calculation kills write speeds, especially on slower CPUs like a C2758, and RAIDZ doesn't give you any extra throughput on reads since there's only one usable copy of a stripe to read from. I have 2x4TB disks in my pool as a mirrored vdev (with an additional 2x3TB coming after I get my 128GB flash drive for XenServer to free up my second 3TB disk again), and I get reads over 500MB/sec using 10Gig-E to my XenServer host, and equally fast writes. Disks are fairly cheap; RAIDZ is not a good solution if you need performance, and if you are using four disks in RAIDZ2 you would lose the same amount of storage using two mirrored vdevs anyway (and avoid the increased risk of a rebuild failing due to multiple disk failures, which is increasingly common on higher-capacity drives).
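A sketch of the mirrored-vdev layout being suggested, assuming four disks named da0-da3:

```sh
# Two mirrored pairs striped together ("RAID10-style"): same usable
# capacity as four disks in RAIDZ2, but no parity math on writes
# and roughly double the read IOPS
zpool create tank mirror da0 da1 mirror da2 da3
zpool status tank
```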
If writes are a bigger problem for you, then yes, a SLOG device will help - but any random SSD is not going to do. If you are using a SLOG, ZFS expects that it will not corrupt in-flight data following a power loss; even a "high-end" SSD like a Samsung 850 Pro or a Crucial MX200 will lose data in the event of a power failure, so you need an SSD with proper power-failure protection like the Intel DC series. You also don't want a SLOG that isn't mirrored: if your SLOG is corrupted, you just lost your entire pool. In addition, large writes (>64KB) skip the SLOG entirely, so if you are bandwidth bound (either from network or to disk) it is not going to help; if you are IOPS bound it can help tremendously.
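Adding a mirrored SLOG along those lines looks something like this (the device names ada4/ada5 are placeholders for the power-loss-protected SSDs):

```sh
# Attach the SSD pair to an existing pool as a mirrored log device
zpool add tank log mirror ada4 ada5

# Watch per-vdev activity to see whether sync writes actually hit it
zpool iostat -v tank 5
```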
Also, you mention 4x4TB and 4x8TB; I assume your issue is the available disk bays in your system? I personally ignore high-capacity drives as they are far too expensive, and my HP ML10 that I run FreeNAS off only has one internal drive bay (with a $50 add-on I can buy to install an extra 3). Instead of dealing with internal drives as well as limited capacity, I bought a cheap DAS array (a Lenovo SA120; here's a picture of mine and my cheap TP-Link layer 3 switch: http://i.imgur.com/eEvtP6Z.jpg) that cost me $200USD and connected it with an equally cheap LSI 9200-8e SAS HBA ($40). I now have 12 hot-swap drive bays and can replace my FreeNAS box without worrying about how many bays it has; if I need more, I just buy an additional enclosure and daisy-chain it off the first. This isn't a dirt-cheap solution by any means (my homelab gear is easily worth over $1500 at this point, including my Lenovo TD340 acting as my XenServer host), but it will save you money in the long run by letting you buy many cheaper drives rather than fewer, more expensive (but high-capacity) ones.
I do have mirrored vdevs but now you've made me doubt what configuration I put them in.
This stuff is so advanced it's not funny.
Edit: thanks for your help here - you don't have an email address listed in your profile but I'd like to discuss this further if you have the time.
This seems like a lot, but you can ignore most of the technical details I just posted. Buy a SAS HBA with external ports, buy a SAS DAS array, plug it in, and you see a bunch of drives - no fuss. Since SAS controllers also support SATA drives, you can save yourself $10-20 a drive and buy normal SATA disks, or you can get some added reliability (multi-pathing and error handling) and buy near-line SAS drives for a small premium (I don't bother personally, but I only have one controller installed in my SA120, so I have no second path for data to travel in the event of a failure anyway).
Feel free to hit me up anytime, I'm /u/snuxoll on reddit (pretty active on /r/homelab) and you can email me at stefan [a] nuxoll.me.
As far as I'm aware, SLOG loss does not jeopardise the pool, only the transactions that are in the SLOG. The pool may violate synchronous write guarantees in that supposedly committed writes that were in the SLOG effectively get rolled back (or rather, never get applied to the underlying pool), but that's about it.
2. pkg install and the related pkg utilities have existed for a while.
5. systemd isn't and shouldn't be a requirement for applications going forward. Any application that requires it is limiting its portability for unknown reasons.
The joy of trying a new system is the little things you learn that you weren't even aware of before.
How should one do it instead? Treat each alternative system specifically? Or is there a way to cover them all at once, including unknown and future ones?
Seriously, who ever thought such a dependency was a good idea? Who even thought it would be a good idea to make an init system that was possible for an application to be dependent on? This is exceedingly poor engineering, and I'm dumbfounded at its acceptance and spread.
So have apt and apt-get. So that takes away one of the advantages of FreeBSD.
> systemd isn't and shouldn't be a requirement for applications going forward. Any application that requires it is limiting its portability for unknown reasons.
True, tell RedHat that :(
Not completely. In Debian and others it's mostly a choice between a stable base and stale software or a moving base and up to date software (though backports improve things quite a bit for Debian Stable).
Since in FreeBSD packages are separated from the base system, it's possible to run a stable base system, while using quarterly updates or rolling release (latest) of packages.
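Switching the binary package branch is a one-file override (this is the standard pkg repository override location):

```sh
# /usr/local/etc/pkg/repos/FreeBSD.conf
# Track the rolling "latest" branch instead of quarterly:
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
}
```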
(Note: I didn't have many problems running Debian Unstable, but it may depend on your requirements.)
pkg provides a way to manage (binary) packages. There are other tools (like poudriere, last time I dabbled in the FreeBSD world) to manage/create binary packages.
You could run FreeBSD on your tiny VPS, but build packages with all the custom ports options you could want in a jail on your local machine.
If you require the integrity guarantees offered by ZFS and still want the efficient resource allocation of containers you will not get around FreeBSD or a Solaris based distribution.
If you need a system that is secure by default you will not get around OpenBSD.
If you need the portability of Linux coupled with a license that lets you freely redistribute in order to sell an appliance you will be hard pressed to get around NetBSD.
By the way, ZFS will run fine on small VPS since the RAM cache can be turned off. You still get instant snapshots and rollbacks, efficient replication and integrity guarantees.
It's perfectly legal to sell an appliance with Linux, people do it all the time.
It also makes it really hard to earn a living with actual software (not services) that depends on GPL.
So, looking at Linux system software as an example, it only works as long as you stay far away from the kernel.
Gnu.org could even demand a percentage of revenues going towards the GNU foundation. IMO that would make Linux even free-er since it would democratise the way kernel development is funded (i.e. redistribute some of the influence away from the big software corporations who currently chip in). Right now kernel development works fine because you have the right gatekeepers at the top (especially LT), but I wonder what happens when he retires - will some committee take over?
Your desires are more inline with Microsoft 'shared source' type of situations.
The GPL and the FSF's world-view isn't particularly aligned with what you want (I believe). Either use the GPL and live with the constraints, or use something else. Something else's that come-up in a GPL but commercial context are:
a. Service or Support. You've said you want to be paid for creating the software and not for providing services, so this one is out. Note that SaaS is now the easiest way to achieve what you want.
b. Dual licensing. You said "you can't depend on GPL software then". Well yeah, because GPL is about a commons of equals 'sharing' and you want to charge for sharing - so you don't get to use other people's stuff for free. Seems fair to me.
But, your actual concern isn't really valid - there are lots of things you can write without depending on GPL software. This is probably more realistic than you think.
c. Open core. Someone described this to you earlier, I think.
d. Validation and/or IPR. Probably outside the boundaries of most individual developers. But think of the way that Java is licensed on the basis of it being a 'valid' implementation and then IPR and trademarks.
> We recommend against using Creative Commons licenses for software. Instead, we strongly encourage you to use one of the very good software licenses which are already available.
If I were you, I would instead copy the license of the Unreal Engine. That is, you don't look at the commercial status but rather at profits from products that include your work. If someone is earning profits, regardless of the nature of the organization that does so, then a cut is required to be given to you. It's simple, it fixes the school problem above, and there is an "industry" example to use if it ever became a court case. The big drawback is that it's not an open-source-compatible license.
Only ZFS dedup uses gobs of RAM, and use of that is discouraged anyway.
> I don't have GB of RAM, CPU and HD space to build everything from ports, and most importantly, no need.
Last I checked, Debian now supports ZFS in form of a source DKMS package. No need to maintain your own setup.
If you go with Ubuntu instead, which provides binary ZFS packages, you can also use it as a rootfs to get full advantage of its capabilities.
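For the record, assuming current package names, getting ZFS on either distribution is just:

```sh
# Debian (contrib repo): the kernel module is built locally via DKMS
apt-get install zfs-dkms zfsutils-linux

# Ubuntu 16.04+: tools only; the module ships with the kernel
apt-get install zfsutils-linux
```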
That said, I've decided to just stick with btrfs, because it's in the mainline Linux kernel, and works well enough for my needs with absolutely zero hassle.
FreeBSD is nice though. No harm in trying it out to widen your horizon :)
Ditto. I definitely like the idea of FreeBSD (and, even more, OpenBSD), but neither really seems worth the pain. The fact that I can't just rely on third-party software working means that I can't use either for my personal desktop (I like to play proprietary games) or even for servers (without extensive testing ahead of time, anyway).
As for the license, I prefer the GPL over the BSDL. I don't see why third parties should be permitted to forbid their users from enjoying the same rights they themselves received. Nuts to that.
And, honestly, as much as BSDs are a nicely-integrated software suite, they are an old nicely-integrated software suite. As much as the GNU userland is … weird & inconsistent, it's also really powerful. I like having an ls which has added options since 1986.
I'd love to love the BSDs, but … I don't.
It's unfortunate, as I'd love to have a versioned FS
Not just like, but the very same one. DPorts is FreeBSD ports with a few patches, and pkg in DragonflyBSD is pkg from FreeBSD.
> Also, the DRM drivers are up to date and support current Intel GPUs.
I believe FreeBSD is integrating that work itself. Intel GPU support on Dragonfly is pretty great.
On a laptop, however (with an Intel GPU) that is not an issue, as there is only one disk, anyway. I might have to take a look over the summer.
I've no idea what compromises that'll bring you, or how it would react to you pointing it at a multi TB space.
btrfs at your service then. Comes with the mainline Linux-kernel. Give it a spin!
Others have already addressed the memory hog myth, so I'll just say that you aren't required to run ZFS - UFS still works just fine. I just sshed into my freebsd AWS instance (UFS) to see 116d uptime with 130MB ram in use and the remaining 110MB being free. I've made no effort to optimize resource usage on the instance, but if I wanted to I could pretty easily cut that by a third without a noticeable impact on performance.
> 2. I don't have GB of RAM, CPU and HD space to build everything from ports, and most importantly, no need.
You might be surprised to learn how many build options there are for the software you use daily. I jumped in with both feet and setup a jailed build server after wanting some uncommon build options for vim, ncurses and sqlite3.
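A jailed build setup like that is only a few poudriere commands (the jail name and release version here are illustrative):

```sh
# One-time setup: create a build jail and a ports tree
poudriere jail -c -j builder -v 11.0-RELEASE -a amd64
poudriere ports -c

# Pick non-default build options for the ports you care about
poudriere options editors/vim

# Build binary packages inside the jail
poudriere bulk -j builder editors/vim databases/sqlite3
```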
> 5. systemd? ... didn't have the manpower ... I don't know how FreeBSD is doing in that regard.
It has made no attempt, which I'm pretty happy about. The only annoying thing from linux that keeps popping up on my freebsd desktop is dbus.
ZFS does not require large amounts of RAM any more, unless you want to benefit from its impressive caching features and have large amounts of disk space.
Since FreeBSD 10, pkg-ng has been the standard. It unifies binary packages, installable much as with apt and other Linux package managers, with your own custom ports-built packages as needed. Now you can mix and match seamlessly, and there is no need to have an entire ports tree unless you have special needs.
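The day-to-day commands map closely onto apt (tmux is just an example package):

```sh
pkg update                        # refresh the repository catalogue
pkg install tmux                  # binary install
pkg upgrade                       # upgrade everything installed
pkg which /usr/local/bin/tmux     # which package owns a file
pkg lock tmux                     # pin a package, e.g. one you built from ports
```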
> 3. License - I don't care. GPL is free enough for me.
Each to their own, but MIT and BSD are, to me, free-er and more friendly to business applications.
> 4. Base vs. Ports - Why should I care? Debian (testing!) is stable enough for me. Except for dist-upgrades, I never ran into issues, and then it may be faster to nuke the server from orbit. Now had BSD "appified" the /usr/local directory (rather than keeping the nginx binary in /usr/local/bin and conf in /usr/local/conf it would have kept everything related to nginx in a /usr/local/nginx) it would have been interesting, but now?
The layout you're discussing is basically what happens by default, most of the time, when things are `make install`ed without any customizations. The layout that you're lamenting makes sense enough and has merits, with no need to be "appified" -- that is what packages are for. Again, I implore you to check out pkg-ng; it is a core feature of the OS now.
> 5. systemd? The reason Debian went with systemd was (IIRC) because they didn't have the manpower to undo all of RedHat's work in forcing all applications to integrate into systemd (such as GNOME). I don't know how FreeBSD is doing in that regard
People are experimenting with alternative init systems in the BSD world. Though we will likely never have systemd itself, there are some efforts to support systemd unit files, as well as new directions taking into account Apple's work and lessons learned from systemd to build a BSD-licensed alternative to all of the above.
> But FreeBSD?
FreeBSD brings to the table a free, cohesive, highly engineered and integrated operating system that is coming into its prime. There are a multitude of advanced features in terms of filesystems, networking, and more. FreeBSD has first class support in AWS and some other major cloud environments. However, FreeBSD has been and continues to be a server focused operating system. Even so, FreeBSD on the desktop is coming along, but it's certainly nowhere near linux in that regard.
In the end, each to their own, if you find a reason to like and use it, then great, but at the least you should look into it and learn what you can from it, like any system.
True, but I'm not going to sell OSs, and plenty of corporations are working on Linux
Obviously BSD has its uses, but if you're looking to develop a new feature and get it out the door (in OS development) Linux is the easiest choice.
I remember reading somewhere that Netflix uses BSD for all of its net-intensive servers as network performance tuning on BSD is, at least by reputation, better, but they use Linux as their workhorse. They employ some FreeBSD committers though (obviously not everyone can do that).
To see why the various BSDs are not an example of the Cathedral process, you only need to look at their source control. In fact, OpenBSD pioneered the idea of anonymous CVS -- before that, you needed an account to check out from the CVS repository:
"When OpenBSD was created, de Raadt decided that the source should be easily available for anyone to read at any time, so, with the assistance of Chuck Cranor, he set up the first public, anonymous CVS server. At the time, the tradition was for only a small team of developers to have access to a project's source repository. Cranor and de Raadt concluded that this practice "runs counter to the open source philosophy" and is inconvenient to contributors. De Raadt's decision allowed "users to take a more active role", and signaled the project's belief in open and public access to source code." 
Furthermore, people without committer access land patches in the BSDs all the time. In fact, that's how you get committer access: you start by contributing patches to the appropriate mailing list, where existing committers can review them, discuss them, and perhaps merge them. It's not terribly different from sending a pull request on GitHub, and AFAIK it's pretty much the same process used by the Linux kernel.
* The BSDs feel more elegant, the Linux distributions feel more complete/up to date (and are probably more performance optimized due to enterprise embrace but that's only speculation on my part).
* I sympathize with the BSD license philosophy and agree that the BSD-license is in theory freer but don't care too much either way
* OpenBSD is awesome (yes even for desktops, with the right hardware) and I install it every now and then but ultimately I'm too familiar with Linux to switch. I do like that they keep fighting the good security fight, don't care about supposed personality quirks of some devs. Keep up the good work
* At the end of the day I use Linux because that's what I grew up with and it tends to "just work" (for the record all BSDs I installed in recent memory pretty much worked out of the box). I am kind of forced to use OS X at work but other than that it's Linux on all machines. The parents also use it by now.
The philosophy laid out in this article seems like a rationalization of historical accidents more than anything else. The Linux file system layout is just as predictable as anything else: configuration goes in /etc, PID files and other such things go in /var, things local to the users go in /usr/local, cron-related things go in /etc/cron.d, and so on and so forth. The FreeBSD file system layout, on the other hand, makes no sense to me, and the rc system is even more bizarre. Do I really need to specify `xyz_enable="YES"` when upstart, systemd, and others have this stuff all sorted out already? Well, not systemd so much, but close enough.
Overall the BSDs are too low level for development and play purposes. For deploying proxies, caches, NATs, and other such single purpose things I'm sure they're great but anything else not so much.
Also, OpenBSD, DragonflyBSD, and FreeBSD can all be used as server or desktops without much fiddling. Their installers are better than anything in the Linux world. You can have one up and running with a nice wm, browser, etc, in 15 minutes.
They are more finicky about hardware, but when you can buy a laptop for $200, that's not so much of an issue.
OpenBSD's is very minimal simple ASCII input and output questions with sane defaults.
FreeBSD offers a console dialog based approach similar in some ways to Debian's installer, but in other ways different (and to my mind, better).
Both of them will get you efficiently setup from nothing to installed within minutes, and since they are very simple and well documented, both of them are easily scriptable for installing servers or laptops quickly with a specific setup or your own prompts.
By contrast, linux installers are generally larger, slower, clunkier processes and I am not sure how easy it would be to integrate any random distribution's installer into a scripted workflow.
It's a small, subtle detail that for many is less important today than it once was; however, if I have two FreeBSD systems of mostly equal versions and I want to migrate applications and/or users from one to the other, I know ALL I have to do is copy from /usr/local -- try that on your Ubuntu machine. Good luck separating the system-specific configurations from your user and application configurations.
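A rough sketch of such a migration, assuming both hosts run pkg and you want the config as well as the package set (paths and the hostname are placeholders):

```sh
# On the old host: list the packages you explicitly installed
pkg query -e '%a = 0' %o > pkgs.txt

# Copy all third-party configuration in one go
rsync -a /usr/local/etc/ newhost:/usr/local/etc/

# On the new host: reinstall the same set by origin
pkg install $(cat pkgs.txt)
```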
No. It's not.
If nginx releases an update, do you also need to update your OS? No, because the user dependency graph isn't shared with the base OS.
If you update your OS, do you also need to update nginx? No, because there's backwards ABI compatibility.
The split is not artificial.
Only for minor versions. How is that different from any other package that promises ABI compatibility?
Why are FreeBSD OS components split up at a low granularity, so you can't install the stuff you want without the stuff you don't?
Why are there two separate sets of tools to update them and other packages?
edit: Why separate non-base packages from base packages on the filesystem, but not from each other? If the advantage of separating /usr/local is the ability to copy the non-base configuration/apps without the base stuff, wouldn't it be more useful to be able to copy any given individual package or set of packages? Or if that is possible already (by asking the package manager for a list of files or whatnot), wouldn't that obviate the need to separate base from non-base?
FreeBSD does not work this way. You are projecting Linux onto it.
Drawing the line at the kernel syscall ABI is arbitrary; no other major operating system other than Linux does so.
Consider how difficult it is to ship 3rd-party applications for Linux and all its myriad distributions; the only thing you can truly rely on is the anemic syscall ABI provided by the kernel (vs. userland).
So I don't really know what you're getting at. If you're not talking about kernel vs userland and you're talking instead about ABI compatibility then the ABI is even more stable and requires even less work to port any piece of software from one distribution to another.
NeXT->Apple also avoided anything like systemd via a configuration system that supported inheritance across search domains (User, Local, Network, System):
If you're having trouble understanding the layout of the filesystem, you only have to man hier: http://www.freebsd.org/cgi/man.cgi?hier(7)
If you blew away /usr/local, you would be left with a pristine (mostly) BSD install.
An analogy is a base Windows install and all the associated tools and drivers. Anything else you install on your own is an add-on.
This distinction is hardly arbitrary.
Seems to me you have a hard time understanding what a "base system" means. This is what the article is trying to explain, and it seems to have gone completely over your head; I can't help with that.
The equivalent of "base system" in Linux land is called a distribution, and rightfully so. There is nothing basic about a distribution. It is an arbitrary set of choices made by the distribution maintainers, and it is sold/advertised as such.
As far as the compiler goes, in my 10.3-RELEASE-p5, I can only see clang 3.4.1. No Python, no Perl, not even Bash. If I happen to need clang 3.8, I can just `pkg install clang38` and it will happily live inside /usr/local, separate from the one in /usr/bin (which is probably there for building the world). This one will be managed by pkg, but the one in /usr/bin will be updated when I upgrade to 11.0-RELEASE (which will ship with 3.8).
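That side-by-side layout looks like this in practice (a sketch; paths are the standard ones on a 10.3 box):

```shell
$ which clang            # base compiler, updated with the OS
/usr/bin/clang
$ pkg install clang38    # newer compiler from packages, managed by pkg
$ which clang38
/usr/local/bin/clang38
```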
In Linux terms, it's probably like installing only the minimum of Debian stable and using pkgsrc to install the rest of the system to /opt.
This is why I typically find Debian to be a more cohesive system than FreeBSD. If I could run only the base system, or only the base system plus maybe one or two packages, FreeBSD would have a very good story about being developed in a unified, coherent way. But in practice I need a bunch of stuff from ports, and on FreeBSD that much more clearly falls into the "everything else is an add-on" category. There's relatively little integration testing or attempts to make the software in ports do things in a coherent "FreeBSD-style" way; it's a bunch of third-party software delivered mostly as-is. Whereas Debian considers anything in the 'main' Debian repository part of the official release, subject to release-management and integration testing, and ensures it works in a more or less "Debian way". Whether that matters depends on your use case, but for me that makes Debian feel more like a unified system.
I'm thinking mostly about servers here fwiw. On desktop the base/applications distinction works better for me, so I could run "FreeBSD" as a coherent base system and then install some separate applications on top of it, which is all fine. But on servers I prefer the coherent base system to be more "batteries included", including integrated release management of all the major libraries and software packages I'm likely to need. If I deploy on "Debian 8" vs. "FreeBSD 10", for example, the former gives me a much larger set of stable components that work together in a reasonable way, while the latter leaves me to more DIY it outside of the relatively small base system. (Whether this matters of course depends on what you're building.)
... except that that was Ubuntu two major versions and some time ago, back when it used upstart. Nowadays, Ubuntu Linux is a systemd operating system.
On the other hand, over the last ~15 years, I have almost never checked if a piece of hardware is supported by Linux (or BSD) before buying it, and I have only run into problems twice, really. One was an Aureal Vortex sound card for which a driver was maintained outside the Linux kernel tree, so I had to download it and build the module manually; the other was a really crappy HP scanner. Ironically, these days it does work on Linux much better (meaning: at all) than on Windows. (One key factor, though, seems to be not using brand-new hardware. After a year or so, chances are much better it is supported properly.)
Apart from that, I have had no problems at all. Maybe I was just really lucky.
It always seems to be one thing or another. Either it's some program/service I use daily that doesn't work (or doesn't quite work as well as I know it can), or it's some strange issue with sound or, most likely, the GPU.
When I wanted to try FreeBSD, the installation failed multiple times, and after the first success I couldn't install a DE for hours. I've also had problems with the package manager because I couldn't install anything before doing something with some configuration files. OpenBSD is in the same category, but at least it didn't fail at installation as much.
It still doesn't? Haha!
I remember ranting about this 10 years ago, with a friend who went to a conference in France… that was dedicated to package management in BSDs.
I asked him how to "just upgrade all the packages" (apt-get upgrade). I'd found two or three ways, but couldn't get them to work. He said that yeah, there are three ways. They don't work.
I thought surely they'd fixed it by now.
They did. pkg has gotten a lot better over the past 3 years.
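The day-to-day workflow is now essentially the apt one:

```shell
# Refresh the repository catalogue, then upgrade everything installed:
pkg update
pkg upgrade
```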
No different from apt on a Debian system. It's come on leaps and bounds since the initial 10.0 release.
Of all things, the lack of a distinction in Fedora/RHEL of "requires vs. recommends" is frustrating, coming from Ubuntu.
* You can use great hardware, but the choice of it is rather limited.
* You're OK with upstream making a lot of choices for you, though the upstream is considered good at making the right choices.
* You don't mind running a lot of proprietary stuff (anything above the Darwin level).
People who choose Linux (over OSX and even BSD) look at these bullet points differently, thus the divide.
* When I bought my laptop 5 years ago, Apple was a candidate. It lost out because I still wanted an optical drive (and was interested in Bluray specifically, at the time), I wanted a 1080p screen, an HDMI output, and at least 3 USB3 ports. Upgradeability is a plus; I've upgraded the RAM, hard drive, and wifi card since I bought my laptop, as my needs have changed.
* I like having choices (in terms of software and configuration), and they often don't match up with the best options for mainstream users.
* I've got a Windows partition, but it's not my first choice for general computing. I don't have any particular interest in Apple's technologies or services, so I feel like a lot of their ecosystem would be lost on me anyhow. I do generally find Apple's software to be well-designed and aesthetically pleasing. I'd be worried about fighting the software to get what I want done, though.
It also mentions that Gentoo is getting popular, which was happening in 2002 and 2003, after which it declined.
Also this article has appeared on Hacker News many times before, with titles of various relevancy: https://news.ycombinator.com/from?site=over-yonder.net
Ok fair enough, but the same can be said about manual v automatic transmissions, static versus interpreted languages, etc.
When something is harder to use, you're forced to think about it more and understand it better.
Obviously I'm an outlier, but if I need some software on FreeBSD, I'll contribute a port. Most of the time, it's almost trivial, and it's only with very obscure (maude is one that I'm currently working on) or newish (RethinkDB, which isn't written in a super portable manner - it doesn't even use pkg-config and has a hand-written configure script!) where there's an issue.
https://www.gnu.org/gnu/linux-and-gnu.html (BSD is discussed here as well)
https://www.gnu.org/gnu/gnu-linux-faq.html (According to this page, the title of the article should be BSD vs. GNU/Linux, though the latter was mentioned once in the article)
I guess I'm still hoping that one of these days I will reach one of those authors and make them understand that the content is of paramount importance and creative coloring can do nothing but detract (except when you're an expert). If you use your own color scheme, please make sure you know what you're doing.
This article is really about the objective differences between Linux and BSD. It is not a rant in the form B > L or B < L; it is much more about the differences in a lot of directions.
I personally find it as much a help for guiding people to come as for steering them away.
I am a former user of Linux (which I used to like) and I am now using FreeBSD. I switched without love or hate. I was on Linux for the video card and latest hardware support, and I am on FreeBSD for the ports, stability and easy upgrading.
I did it after years of experience that let me build up a picture of which use cases each OS fits best.
Having this summed up in 10 pages will spare people a lot of time in deciding.
And I think it is as important to have people come to BSD as to not delude them into coming for the wrong reasons: disgruntled users are as much a pain as rigorous contributors are a gift.
So information that helps people choose, especially when it is not structured as an aggressive comparison, is very welcome.
The author should be praised.
In the meantime FreeBSD has changed the release schedule and there are efforts under way to package the base system separately.
Also Linux now uses git for development.
> And normally, you do the whole rebuild process in four steps. You start with a make buildworld which compiles all of userland, then a make buildkernel which compiles the kernel. Then you take a deep breath and make installkernel to install the new kernel, and make installworld to install the new userland. Each step is automated by a target in the Makefile.
> Of course, I'm leaving out tons of detail here. Things like describing the kernel config, merging system config files, cleaning up includes... all those gritty details.
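(The quoted steps, laid out as they'd actually be typed from /usr/src — a sketch per the handbook's outline; GENERIC is the stock kernel config, and this skips the reboot-to-single-user and config-merge details:)

```shell
cd /usr/src
make buildworld                      # compile all of userland
make buildkernel KERNCONF=GENERIC    # compile the kernel
make installkernel KERNCONF=GENERIC  # install the new kernel
shutdown -r now                      # reboot onto the new kernel first
make installworld                    # then install the new userland
mergemaster                          # merge updated /etc config files
```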
Wow, so painful. For me it's as simple as:
$ pacman -Syu
and watch some movie. I bet the BSD way offers more opportunities to learn but I personally don't like learning for the sake of learning. Learning different ways to do the same thing does not make me a better person. So this is not interesting to me.
> Does Linux support hardware that BSD doesn't? Probably. Does it matter? Only if you have that hardware.
I could obtain that hardware at some point in the future.
> "But Linux has more programs than BSD!"
> How do you figure? Most of these "programs" you're so hot about are things that are open source or source-available anyway.
Whether the provider supports a given application on your OS is an important consideration people weigh before choosing an OS, as they should.
> Linux, by not having that sort of separation, makes it very difficult to have parallel versions, and instead almost requires a single "blessed" one.
Isn't this what NixOS ( https://nixos.org/nixos/about.html ) is supposed to be solving ( among other things ) ?
I might try a BSD for the novelty aspect of it, but so far I have seen no reason why it should be better.
> Wow, so painful.
And also something that hasn't been true for ages. By way of anecdote, I've had to do config merges on major version upgrades more on Debian systems than FreeBSD systems, but even then the difference hasn't been all that great: major version upgrades just work. And the last time I had to build a custom kernel was FreeBSD 6, which was about a decade ago.
Any remaining pain is largely a consequence of how freebsd-update works and should mostly go away once the base system is packaged (which should also make security updates a little faster to install).
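The binary-update path, for comparison, is already short (the release number here is just an example):

```shell
# Patch the running release:
freebsd-update fetch
freebsd-update install

# Or move to a new release entirely:
freebsd-update upgrade -r 11.0-RELEASE
```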
I run Ubuntu now on multiple devices. It often has a feel of flimsy binary patches. I don't know that I care too much though as it works well. I have family members too on it so it is good to remain practiced with it to help them.
I miss playing with FreeBSD though! Perhaps I should run it on one of my RPis? Anyway, this is a great site.
My setup scripts are all in Git:
I've used a few different OSs on a daily basis for work and recreation: Windows (since 3.0), Linux (various distros since 1995), OSX (for a few years between 2009 and 2013), and FreeBSD.
I have found BSD to be the most comprehensible, simplest and the best 'cultural fit' for the way I think and work. I appreciate that the latter part is a bit vague, but that's because my understanding of it is vague :) BSD just feels ... more like home.
Those wanting to give it a go as a desktop OS should check out PC-BSD, which is built to be usable 'out of the box' for that purpose:
systemd has arguably made Linux bootup faster than before but I would expect a default BSD to be quicker than a bloated Ubuntu default install.
FreeBSD is working on it, but it took surprisingly long.
>"But Linux is more popular."
>So? Windows is even more popular. Go use that.
This slope is so slippery I broke my neck when I fell.
Is anyone successfully using Java on FreeBSD in production?
I've been running Gitblit and multiple Minecraft (Spigot) worlds with Openjdk8 on FreeBSD 10 for more than a year and never had any problems (Jenkins worked fine too but I used it only for a short time).
But my userbase is only about 10 people, mostly in minecraft (to get a sense of the main world's scale: ), so I don't know if that qualifies :)
You can even install IntelliJ IDEA with pkg!
When I say "Linux", I mean Red Hat. I mean Slackware. I mean Mandrake. I mean Debian. I mean SuSe. I mean Gentoo. I mean every one of the 2 kadzillion distributions out there, based around a Linux kernel with substantially similar userlands, mostly based on GNU tools, that are floating around the ether.
Edit: I can't find a date, but from other stuff in it I'm guessing the article is from quite a few years ago, before Android was mainstream or even existed.
I hate to break it to you, but...
Looks like it's in 4.1+.
I really haven't been paying attention O_O
But I cannot help and sometimes try the BSD stuff out, as it feels like "my parents home".
That's a sub-$200 Intel Haswell machine. It also means Dragonfly (as well as the other BSDs) work fine on similar hardware.
Similarly, you can put together a modern sub-$300 desktop that will run it just fine.
I find that FreeBSD is generally a lot simpler and more discoverable than GNU + Linux. If you want a bare-bones UNIX experience free from systemd and PulseAudio, I'd recommend giving FreeBSD a try. The FreeBSD handbook is very nicely written, and at the very least using FreeBSD is an educational experience.
I'm confused by this sentence. Did you perhaps mean "people who came from OS X"?
It's well written and informative, though.
I still prefer NT's IOCP, but kqueue isn't bad. epoll… is what happens when you get someone who's never done asynchronous I/O to design a system for doing asynchronous I/O.
NT's IOCP is a good, sensible model overall, having used it some recently. It's mildly marred by some "gotchas" (like async calls occasionally returning synchronously if there's nothing else on the queue, so you have to be careful with send, recv, etc.), but the actual design is good. Thread pools being a convenient, native API in Vista+ is also a bonus.
The need for these cryptographic primitives isn't too onerous on its own, either; FreeBSD already needs them for IPSec (among other things probably), so the vast majority of needed, kernel-level encryption code was already there before this.
> ... which I assume it will do
I hate to lament or anything, but: ... why assume at all - they've written about it? I really wish people would just read the paper about this feature, because this is a large misconception about their work that nobody ever seems to get right, and I say this as someone who doesn't use FreeBSD at all, I just found the paper interesting. Everyone hears "sendfile with TLS support" and immediately jumps without actually reading.
Again, sorry to lament. It's just a personal nitpick or something I guess; the paper is very approachable though, so I encourage you to read it before assuming. I encourage you to do this with most papers - even if you don't understand it, since you might still learn something :)
A google search for "freebsd tls sendfile" brought me this immediately:
The main points about this are on page 2, last paragraph on the right column, and page 4, paragraph after the bullet points on the left column.
That said, you always have to look at the code to determine if it's really something worth going upstream, for all the usual reasons.
Also described as neutral forces of nature, and in other ways. Not always as persons, often as if they were mechanical.
Even if one doesn't believe in the Devil, surely it seems in poor taste to use as a mascot a mythological creature who is reputed to be the source of every single evil, cruel, horrible thing in all of history. It's almost as bad as naming a piece of encryption software 'felony.'
Expressions like “speed demon” don't bother me either. The general concept of a demon is somewhat different from a fallen angel; I don't have trouble with demons being painted in a neutral/jokingly positive light, unless the demon obviously refers to the devil or affiliates (in which case “woo the devil is awesome!” makes me moderately uncomfortable).
¹ Odds are I could even get labelled as a backwards religious fundamentalist.
Well... what about Pan and Dionysus? What about all the other cultures in which there is no Devil representation with a tail and horns? History is long and the world is more diverse than you seem to realize.
> the source of every single evil, cruel, horrible thing
and used Pan and Dionysus as a counter-example.
If you're saying 'well, Pan and Dionysus both have horns like the bsd mascot' I don't find that compelling either because the identifying characteristics don't line up (different types of horns, the tail, the pitchfork, and so on).
Oh, and quite a lot of math and science dates back to Christian and Muslim scholars, so your last sentence is perhaps dismissive of the general intellect of religious folk. As someone rather appreciative of the work of Isaac Newton, Blaise Pascal, Gregor Mendel, a variety of ancient Muslim scholars, and many more, I'm glad they didn't stick to reading fiction.
¹ Who I don't generally group with Christians (the religions, while superficially similar, are pretty different), but I don't have anything against them and I would not consider them “evil”, or at least not more evil than humanity at large.
Also, please don't create many obscure throwaway accounts on HN. This forum is a community. Anonymity is fine, but users should have some consistent identity that other users can relate to. Otherwise we may as well have no usernames and no community at all, and that would be an entirely different forum.
I've never heard anything good about the "MacOS desktop" and I'm pretty sure that the "Windows desktop" is far behind the "Linux desktop" (plugins, performance, menus, etc.). Have you tried something other than Xfce or Unity?
You're missing the point, GP is talking about a level below the DEs, namely the graphics stack, which if unavailable or inefficient makes any discussion about the DEs moot since the GUI may very well be unusable at all in practice.
In that regard the "Windows desktop" has plenty of favorable points and "macOS desktop" is just stellar because it has had a comparatively perfect track record of driver support including a compositor since forever.
With a VESA/OpenGL/Quartz software-render fallback, many people in the Hackintosh crowd have been using OS X on unsupported cards without even noticing they weren't getting the full Quartz QE/CI, which is nothing short of astonishing.
This is the signature attitude that points to the point being missed, what good is conky or awesomewm or ratpoison or xmonad or openbox if I can't make tearing and rendering artifacts disappear nor get proper colour management or sane HiDPI support? (PS: Xinerama, I hate you)
As for the DE experience on other platforms, Windows has had a kinda-tiling WM that regular people do use in the form of AeroSnap, while macOS is now getting tabs-in-the-window-manager, and Exposé+Spaces that evolved into Mission Control has been a positively brilliant experience for years.
Meanwhile on Linux, the very fact that you have to massage it into something useful ever so slightly at every level is, to me, a telltale sign of its deficiencies. Things are getting better (wayland, drm2), but they're getting better on the other platforms too. The best part is that nobody's standing still and things are moving forward (hopefully; from the outside it looks like Linux people are running in circles these days, so I hope the reinvented wheels are getting rounder).
> Meanwhile on Linux, the very fact that you have to massage it into something useful ever so slightly at every level is, to me, a telltale sign of its deficiencies. Things are getting better (wayland, drm2), but they're getting better on the other platforms too. The best part is that nobody's standing still and things are moving forward (hopefully; from the outside it looks like Linux people are running in circles these days, so I hope the reinvented wheels are getting rounder).
And then again, after almost 10 years of using xmonad at work and at home for everything, using Windows or OS X is definitely not a nice experience. I don't really care about GUI tricks or clutter on my desktop. I just want my terminal, my editor and my browser to appear when I press the key combinations, and I want my desktop to manage the windows so that they're always in the right place.
The other thing I don't miss from the Windows and OS X world is the updates. I don't want to spend one work day updating my system to the next version and possibly fixing things because something changed in Xcode and I need to reinstall it to get my toolchain to work, or something changed in the Windows Linux subsystem and again I'm spending time fixing stuff.
I like things to be simple and a very basic Linux installation with a good wm which you know by heart is miles ahead of everything else. And it doesn't change so you can focus on your work.
With my arch/xmonad laptop I can handle everything else, except that damn Witcher 3 :)
Most PC gamers will tell you that the experience usually sucks and often performs poorly.
There are a million and one ways to use your graphics. Be it watching videos or gaming, which while you might have a problem with that, is actually done by people. It can also be used to smoothly decode and render a training video, or to make spreadsheets more pleasant to crawl through without the screen tearing and making everything look like a mess. There are even more minuscule applications you can do with graphics hardware and the software to drive it: transparency to subconsciously still remember what you're returning to when you click on another window, for example.
As nice of you as it is to warn the parent about their information-action ratio, you could use this opportunity to appreciate people's freedom of choice as well as understand there is more to Windows and Mac than Netflix and Call of Duty.
You're not adding anything to the discussion.
Wayland is on the way, which may bring Linux graphics into the 21st century, but I haven't been following that very closely.
Apple's way more on the ball about GUI stuff. While OS X lags behind in OpenGL support, which sucks for games, they nail everything their desktop environment needs to function. But notably, they don't use something like X. They have their own software stack.
DRI2 is an improvement over DRI, of course (which was already about five to eight years behind SOTA for GPUs when it came out), but it's still a rickety, insecure mess--for one, any application can fire messages at any window in the X environment and read input sent to any window, which has some obvious implications for the usefulness of containers on the desktop--that is sorely in need of replacement. This isn't to criticize X when it was developed, mind you--it was fine then. It should have been defenestrated by 2002.
Full disclosure: I get the feeling from your posts that you're looking to win an argument, rather than learn anything, so I'm not likely to respond to you further unless your tone changes. HTH. HAND.
For your disclosure - don't be too lofty with me - I'm trying to understand your problems because I've a very different experience.
This is mostly because X11 sucks. (It's also partly because the proprietary nvidia drivers suck.) Nowadays I reboot into Windows to watch Netflix or play games (even Linux-compatible ones) because the experience is just much better. Linux has been constrained to work stuff.
Now Wayland is coming (apparently; NVidia is still trying to pull some shit) and should fix all that. That's great! I tested Wayland a few months ago, though, and my GPU's proprietary drivers weren't supported. So still no games, still no movies. :(
For example, I tried watching TNG from my external drive the other day (Using the latest VLC for both). On Linux this isn't a problem, the quality is good and there's no stuttering. On Windows it was hellish: The quality was extremely poor and it froze every (literal) 10 seconds to grab more data from the disk.
One from 2005 that I sent Joanna in gripe about Qubes not having a trusted path:
Good to see she added a better graphics system at some point. I suggest looking at prior work, though, as it's by security engineers with knowledge of adding it to every aspect of lifecycle and intent to minimize TCB. For example, GenodeOS is using Nitpicker + a microkernel. The abandoned EROS OS, whose site is still up, combined a different windowing system with a capability kernel and new networking stack. All to minimize risk in system in fine-grained way.
In other words, your browser can snoop on what passwords you type into an X terminal if it wanted to or is compromised.
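Anyone can verify this on a stock X11 session; no privileges are required to watch another window's keystrokes. A quick demonstration with xinput (the device id will differ per machine):

```shell
# Find your keyboard's device id:
xinput list

# Print every key press/release, regardless of which window has focus
# (replace 9 with the id from the listing above):
xinput test 9
```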
(Of course, the Dockerfiles were also running apps--like, say, Chrome--as root, with no user namespaces, so it wasn't exactly great anyway.)
It's only bad advice if you're relying on Docker to provide security against targeted threats. Know your enemy.
But in Linux, I can run an application in a container or VM and then use VNC, RDP, or Spice to connect to it in a secure manner.
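A minimal sketch of that setup, assuming Docker and an image that bundles a VNC server (the image name here is hypothetical):

```shell
# Run the app in a container that exposes only a VNC port,
# instead of handing it the host's X socket:
docker run -d -p 5901:5901 example/firefox-vnc

# Connect from the host; the app never touches the local X server:
vncviewer localhost:5901
```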
No, it's not. OS X has GUI isolation, an application cannot read keystrokes or read other application's contents, unless you explicitly give permission to do so. This is the reason why applications that can do more, such as Alfred, require you to explicitly enable these rights in the Privacy section of the Security & Privacy prefpane.
Windows also has GUI isolation (UIPI), but it's a bit murkier. As far as I understand, lower-privileged applications cannot read events from higher-privileged applications.
So while it's possible for a program to listen to keyboard events for other non-administrative windows (such as the password for a browser), it isn't possible for a non-administrative window to grab keyboard input for stuff like Windows password prompts, or information typed into administrative console windows, etc.
 - http://stackoverflow.com/questions/3169675/how-to-use-setwin...
If you select your hardware, they work pretty flawlessly. Certainly right there with Windows or OSX or anything else. In a lot of cases you don't even need to be very selective, you just need to have a modern machine.
Maybe the fact that I have to give them time is one of the major problems? You know how much time I invested in getting my Windows and OSX desktops to work flawlessly? Zero.
Upgrading these are hit and miss, and it's more common than not that third party drivers do not support newer versions of the operating system.
Linux, meanwhile, integrates all of these components. As long as you stay away from non-supported third party drivers it just works, and upgrades are painless. (Until some desktop developers change the GUI again, but that's another story.)
I think a modern Ubuntu 16.04 Unity desktop, for instance, is actually a bit of a revelation for long-time Linux users because it just works out of the box. I didn't have to install a single package or fiddle with anything for a change. It's fast, smooth, robust, works exceedingly well and, wait for it, is even a delight to use.
Another thing: I am a long-time Gnome user and wouldn't even have tried Unity because of the (now, I think, inexplicable) bad press Unity has got, but it's been quite a revelation. I urge those who have not tried it yet due to the bad press to give it a shot and prepare to be surprised.
I compare it to my Windows machines and OSX laptops and do not feel a particular difference apart from preferences; of course, once you get into specific use cases like Adobe apps or gaming there is still ground to cover. But for a general productivity desktop with full acceleration I think it's there in many ways. If you need a rich ecosystem of dev tools it goes to the top. There is of course always scope for improvement, architecturally and otherwise, and I think that is happening with Mir, Wayland and Vulkan.
Network drivers are almost always missing. Looking at the device manager, about 5-10 different devices fail to auto-install, all part of the motherboard. The default graphics drivers tend to "start", but are limited in refresh rate and resolution, and moving windows around shows a noticeable stutter until I install the official drivers from the graphics card manufacturer. Sound normally works without issues.
On Linux, the problems are almost the reverse. I have yet to have network problems on a fresh Debian installation. Graphics are an all-or-nothing deal, which means either the X server starts up normally or it refuses to start completely. Sound is normally a pain, but seems to have better default behavior in the last 3 years or so.
What are you reinstalling, Windows XP every time?
Newer versions of Windows are getting better at finding and installing sane drivers for the hardware. It’s still not perfect, Windows Update doesn’t have all the latest drivers and OEMs ruin everything especially on laptops, but these days I install the latest Windows and everything usually just works.
But if you don't believe me, and choose to believe those reporting issues with Linux, just do a Google search on Windows install and network issues. There are plenty of people reporting the same issue. To cite Windows' own support page:
"If Windows can’t find a new driver for your network adapter, visit the PC manufacturer’s website and download the latest network adapter driver from there. If your PC can't connect to the Internet, you'll need to download a driver on a different PC and save it to a USB flash drive so you can install the driver on your PC"
A few months ago I bought a gaming laptop on release date. Guess what, network worked without issue. X-server did not start until proprietary drivers were installed. Sound worked.
Windows' release schedule does not match Debian's. Each year after a release, the default drivers get worse, but the general experience still holds for anyone who goes through the install process. What I have endured with Windows is issues with new motherboards and especially network drivers (and built-in RAID cards for the installer... good grief, that was a wasted afternoon trying to get the installer to accept the RAID drivers). Second to that, the fallback graphics drivers look crappy and are bad in every way, except that they're slightly better than a basic command prompt.
Intel chipsets and their graphics drivers have excellent support out of the box. That goes for everything from refurbished Thinkpads to Chromebooks to used Macbook Pros to new desktops.
"If you select your hardware," you can definitely have a decent time. But I don't know many people for whom the operating system is more important than what you can do with it, and part of "what you can do with it" is "use your hardware."
Having had an issue with Ubuntu and Nvidia in the past: you might want to look up the `nomodeset` kernel parameter and set it at boot, which should let you boot into X/Unity and install the latest drivers.
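For the record, the way I've done this (a sketch of the usual GRUB dance; the `quiet splash` value is the Ubuntu default and may differ on your box):

```shell
# One-off: at the GRUB menu press 'e', append nomodeset to the line
# starting with "linux", then boot with Ctrl+x. To keep it until the
# proprietary driver is installed, edit /etc/default/grub instead:
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"/' /etc/default/grub
sudo update-grub   # regenerates /boot/grub/grub.cfg with the new parameter
```

Remember to take `nomodeset` back out once the driver is in, or you'll stay on unaccelerated graphics.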
> But I don't know many people for whom the operating system is more important than what you can do with it, and part of "what you can do with it" is "use your hardware."
Absolutely. But if OSes aren't directly equivalent (and the hackability of a nix gives it more power than Windows can ever have), then it's worth sorting out those hardware issues (as frustrating as they are).
Thanks; I later ran into something that hinted at that. Frankly, though, at this point I don't care. I have something that works and will be patched until 2019. I don't care about my desktops. Every second I spend debugging something stupid on a desktop is a wasted second. This is bad for me and I resent it.
> and the hackability of a nix gives it more power than Windows can ever have
Ehh--if you have to use that "hackability" to get something that is minimally usable, that's kind of a push. (Or, when you consider OS X, a serious negative, because the only thing I have to do to get OS X to where I want is install Homebrew and a short list of packages, none of which require configuration.) I don't care about desktop environments or tiled window managers, the extent of my interaction with my WM, which I could not name, is a sort-of reimplementation of Aero Snap. Again, if I wanted to throw a bunch of time away on an operating system, I would set up a Hackintosh and actually be able to use Photoshop. Which, in keeping with the theme of "Linux desktops are a fractal of unusability", would be a significant improvement. I tried to avoid a reboot and use the GIMP yesterday for something. I think I need counseling now. I ended up using Pinta, whose layered image format ("OpenRaster", which wants to be a standard but it seems like nobody uses it?) is so bonkers and edge-case that ImageMagick doesn't even support it, to say nothing of Photoshop or even Paint.NET.
It turned out, kind of to my surprise, that Linux on the desktop offers very little to me as a Linux developer, sysadmin, and long-time entrails-reading "power user". That's pretty damning, in my book.
Another ocean boiled, courtesy of the Free Desktop Project.
And this isn't a "rant". I promise you, when I'm ranting, you will know.
As I understand it, installing macOS on non-Apple hardware is a license violation. Your employer is okay with that?
How are you supposed to run Adobe inDesign on a Linux desktop?
Or any other useful application, really.
Linux should just kill off its desktop. Mac OS X won the Unix desktop wars, and has the expected use models of a desktop, with a proper modern GUI API as well, instead of the ancient X hacks.
Mac OS X won the "desktop wars"? That's not even funny - it's one of the most awful DEs. You people clearly have no experience with any other OS besides the one from your favorite brand. "Hacker news"... more like "Noob Army News".
> Or any other useful application, really.
You search in the software center or in your menu and start the app. Simple. FUD elsewhere, apple-noob.
* Finder is garbage (alt-tab is broken and non-configurable; I can't figure out how to convince it that the built-in display is always the primary display; it seems to struggle with NFS mounts, which Gnome doesn't at all; I've had just as many, or more, issues with OS X and projectors as I have with Gnome). Where is the centralised location for Finder extensions to tweak it to my preferences?
* The default command line tools that come with OS X are old and basic (reminds me of Sun's tools) and suffer a death by a thousand paper cuts (e.g. top defaults to sorting by pid, lol; they still ship bash 3.2.57, locales are all fucked up).
* brew, compared to dnf or apt, is not good. But that's Open Source people trying to shore up OS X deficiencies so I won't call it "garbage". (F)OSS people put in a lot of effort to keep it running and they deserve praise.
* The programs from Apple are all terrible. I literally don't use any except Safari sometimes since it allegedly doesn't use as much battery as Chrome or Firefox. I guess I sometimes copy paste stuff into Textedit and I sometimes use Calculator.
It's all certainly usable but there is no consensus that OS X is the best. YMMV. My feeling is that each time there is an OS X release, it jumps ahead of Gnome; but Gnome usually trundles along and surpasses it and maintains a lead most of the time.
Try PathFinder: http://www.cocoatech.com/pathfinder/
It can be used for free until you decide to buy it.
> alt tab is broken and non configurable
Cmd+Tab? What exactly do you find annoying?
Try Mission Control (F3, or set a mouse hot corner in System Preferences -> Mission Control) to quickly see all windows (for all processes, or Ctrl+F3 for the current app, Cmd+F3 for the desktop) and just raise the one you want.
Also try TinkerTool: https://www.bresink.com/osx/TinkerTool.html
> brew, compared to dnf or apt, is not good.
Try nix: https://nixos.org/nix/
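One nice property of Nix compared to brew: packages land in per-user profiles under /nix/store, so it coexists with whatever the OS ships. A typical session looks roughly like this (assuming Nix is already installed; `hello` is just the stock demo package):

```shell
# Install a package into the current user's profile
nix-env -iA nixpkgs.hello
# List what's installed in the profile
nix-env -q
# Roll the profile back to its previous generation if the install broke something
nix-env --rollback
```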
> The programs from Apple are all terrible.
Which ones, and how, exactly?
Cmd-` to change between windows within an Application is a behaviour I don't like and prefer Cmd-tab to cycle through all windows. Gnome defaults to the same thing as Finder but it's possible to change the behaviour.
>Try nix: https://nixos.org/nix/
I've taken a look at guix and it's promising.
> Which [of Apple's programs are terrible], and how, exactly?
Well since I stopped using a lot of these programs where possible, my opinions may be out of date. But Quicktime player doesn't play most files. It's slow. It hangs. It's all around worse than VLC and MPV. And you used to have to pay to upgrade it to do things that other players do for free. And why can't it play DVDs? Or can it? If so, why is there a separate DVD player? (Rhetorical questions; I don't care since I don't use either program).
App Store is outrageously slow to do anything. 14 seconds to check for updates? wtf? (server backend and not a GUI issue, but part of the UX). I had an Xcode update fail at downloading the 3 or 5 GB ball of mud. Instead of resuming the download or checking the hash of the file to make sure it wasn't corrupt, it just tried to download the 5GB again.
Xcode takes up 5GB and seems to be required for some particular development work. It's not really appropriate for MacBook Airs with their small SSDs. So I just don't do that work, or try to use a Linux VM.
Facetime just doesn't work 3/5 times.
Preview is alright for reading PDFs, but you can't look at an image in a directory and press a hotkey to see the next image in the directory (at least, I haven't seen how); eog does this.
Mail is an absolute pile of garbage. The threading is confusing as hell. It's soooo slooooooow. Then it hangs when pulling in sysadmin-style alert folders (with thousands of mails). Deleting mails is also very, very slow (i.e. cleaning out said mailbox takes days). Instead of pushing operations to a background thread where you can see the progress, like in Thunderbird or whatever, it just beachballs. And for some reason, if you want to attach Calendar to an online account you have to do it through Mail (wtf), and if you don't want Mail to be used for the mail itself then you have to configure the account accordingly. Some people may be tricked into subjecting themselves to the pain of using Mail. :(
With Mail being so terrible, one ends up using Thunderbird or Outlook which have Calendars integrated. So Calendar becomes superfluous. Which is a shame because it works alright with online calendars, but it doesn't seem to have the integration to help plan anything with people.
I don't understand why Notes, Reminders, and Stickies all exist. Reminders should be rolled into Calendar. Notes allegedly integrates with Google but I don't see anything from my Google Keep account in my local Notes hierarchy. And is Notes supposed to compete with OneNote? They've a lot of catching up to do.
Then there's a lot of programs that I haven't opened in a decade, since they used to be terrible (iTunes can't even play ogg out of the box, wtf) or I just don't have a use for them (Photo Booth, Game Center, iMessages). I used to have Pages and Numbers but I don't remember being impressed with them. From what I read, I haven't missed anything by not using them. But if they're free now, why aren't they installed by default for people who might want to write a document?
And if I've signed on to App Store with my icloud id (which needs a credit card to get the free updates, wtf!?), why is there no single sign on? Why do Game Center and Facetime and iMessages prompt me for iCloud credentials? I guess so I can sign in with different accounts (corp vs. personal?) But keychain access doesn't prompt me. So maybe keychain isn't backed up automatically to the cloud... :(
Aside from a lot of other very good wms, there's always mate if you really like GNOME 2.
Even better, none of those wms will go away just because Apple decided to change things.
In retrospect, a Mac Pro would probably not be that much more expensive than the time I spent getting this fairly minimally demanding environment set up, though on the other hand I did re-learn a decent bit of stuff about Linux in the process. On the gripping hand, things related to X aren't particularly important to my life, so that's kind of a push.
Meanwhile everything went down the path of compositing and otherwise burning way too much CPU to do not a whole lot extra.
I switched over to a tiling WM years ago, and have been extremely happy since. It does exactly what I want; I can connect to a running instance with my editor and reconfigure it, as well rewrite it while it's running. It's pretty great.
I don't know what's going on with the Gnome guys, but KDE is flat-out the best desktop manager I've used, and I'm including Windows and OS X in the comparison. It's not without glitches, but the glitches are mostly of the "Google can't be bothered to make Chrome cooperate, and no-one has updated Emacs' DM support in the last ten years" sort.
Gnome 2.x is a dead-end - it was buggy and ugly. Gnome 3.x is far better. Unity is just another canonical-outrage.
> For productivity, Unity beats any Gnome 3 setup I've tried so far.
Unity has problems with multiple monitors, consumes more RAM and CPU and is also hard to customize.
> And we're still not talking about the maintenance/security nightmare that is the Gnome 3 plugin system.
But we should talk about it if we're here...
After watching the "we're the only ones who know best, so shut up"-antics of the Gnome developers, I can understand that canonical got cold feet.
> Unity has problems with multiple monitors, consumes more RAM and CPU and is also hard to customize.
In my experience, setting up multiple monitors was much less of a hassle under Unity than under Gnome. I don't have any recent data on resource use, but I wouldn't run either Unity or Gnome on a low-memory setup. Firefox and Chrome dwarf any RAM used for compositing anyway.
>But we should talk about it if we're here...
Ok, gladly. In Gnome 3, a lot of functionality comes from extensions. Even changing the theme (from a black top panel) needs an extension. Installing extensions is done over the web using your browser (this works to a varying degree out of the box). I don't know of any recent changes, but about a year ago I had a look under the hood, because I wanted to make my own extensions, and was shocked by how they work.
First of all, there seems to be pretty much no integrity checks, signing, hashing to prevent malicious extensions. That's pretty much a no-go if you want to use Gnome in any kind of industrial setting (you will have to maintain them offline and manually on the file system level, sidestepping the supported way).
My second gripe is the stability of extensions. Since you're downloading from a website, the version currently offered may not fully support your slightly outdated Gnome install. But if you keep your Gnome up to date instead, expect random failures of your extensions.
From http://lwn.net/Articles/460037/ about the reasoning behind sidestepping distro packaging (emphasis mine):
"The second reason is that extensions are not working with a stable API, so have very specific version requirements that don't really fit into the normal assumptions packaging systems make. If a packaged extension says it works with GNOME Shell 3.2 only and not with GNOME Shell 3.4, then that will likely break the user's attempt to upgrade to the next version of their distribution. We'd rather just disable the extension and let the user find a new version when it becomes available."
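The version pinning the LWN quote describes lives in each extension's metadata.json: the `shell-version` array lists exactly which Shell releases the extension claims to support, and the Shell disables anything whose list doesn't include the running version. A minimal sketch (the uuid and name here are made up for illustration):

```json
{
  "uuid": "example-extension@example.org",
  "name": "Example Extension",
  "description": "Illustration only; disabled automatically on any Shell not listed below.",
  "shell-version": ["3.2"]
}
```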
So, you just updated your Gnome and your productivity extensions fail. Now you have to search replacements and, should you find them, configure them anew. Sometimes extensions just randomly fail. This was the main reason I finally gave up on using Gnome for work.
My final conclusion regarding Gnome 3 was that it is a wobbly work in progress, less configurable than Compiz (there just aren't that many extensions to choose from in the end), based on questionable design principles and taste. It's okay for hobby use, I guess. But I have yet to find a distro that provides as polished a DE setup with Gnome as Ubuntu does with Unity.
Don't get me wrong, I want Gnome to be great, since many distros use it as the default DE and I want an alternative if Canonical mucks up Unity with their Mir transition and their focus on smart phones. There's also Cinnamon, which I kind of like, but it has the same problems regarding extensions as Gnome. I will give KDE a closer look in the future.
If only we could exile the GNOME devs, Canonical & the systemd devs to a desert island, the state of the Linux desktop would be … well, probably not as good but at least it'd be a much more collegial community.
To get back to the original topic: the attitudes of the GNOME devs, Canonical & the systemd devs reminds me of that of the OpenBSD devs, with the exception that the OpenBSD guys are generally right, just socially inept in how they convey their message.
I wouldn't judge them that harshly, but the kerfuffles had the effect of splitting the Linux community into those who embrace progress and those who shun it. Sadly, most of the experienced users went the latter way, because the progressive path was full of cranks.
> To get back to the original topic: the attitudes of the GNOME devs, Canonical & the systemd devs reminds me of that of the OpenBSD devs, with the exception that the OpenBSD guys are generally right, just socially inept in how they convey their message.
Most of this, in my experience, comes from sticking to a very strong opinion that is heavily based on ideals. The further apart from the real and existing world these ideals are, the more caution should be taken when implementing them. Otherwise you will be placing a huge turd on someone's desk. During work hours. On a deadline.
I think this is the main problem with the attitude that the Gnome and systemd devs have. The Canonical devs took their ideals at least closer from a working model (OS X) and they were (I assume) motivated by pragmatism.
The OpenBSD devs base their ideals probably closer to their own experience. That makes it more likely to be right.
- random failures to mount network shares,
- drag-and-dropping files between Finder windows suddenly stops working,
- the file rename edit widget appearing in random part of the screen
Those are bugs that should not appear in the supposedly best desktop OS.
I recently set up Ubuntu and Mint computers for some folks, and can report that Mint is not bad, but Ubuntu is dreck (seriously, the change-user-password dialogue hangs: do Ubuntu users never change their passwords‽).
I've been running Debian with stumpwm for years now, and am convinced that this is the way of the future: a tiling window manager, extensible in a real language, capable of performing literally any task I ask of it. Most of the time, my main window is full-screen — that's odd to someone used to a classic desktop interface (as I used to be), but it's actually very much like a modern tablet or phone.
I love Linux as a developer platform and that's why I'm staying here. But if I could give up the shell, package management, understandable system architecture and the like, I'd move to Windows in a jiffy. Its desktop works flawlessly on my PC.
The solution that worked for me was getting an SSD.
Now, I always have a debian VM working in seamless mode.
Ctrl+Alt+T opens a terminal window; all Linux dev tools work without a flaw.
With an SSD there is absolutely no lag.
Virtual desktops in Win10 and snapping with win+arrow keys, eliminate the need for a linux DE.
Plus you can use Adobe products, Visual Studio etc. at full speed, without hassle.
Tell me more. Which one?
Usually that's not a problem, because I only have Emacs and/or a terminal session running. With the right Control key, you switch back to your host OS.
To switch to a windows program from linux you simply press RCtrl+Alt+Tab.
Virtual desktops in Win 10 are very handy while using a VM. Ctrl+Win+d opens a new virtual desktop. Ctrl+Win+Arrow Keys switches between them.
Maybe you should buy a dev computer instead. Gnome consumes far less memory than the average DE. I always try every DE at every release and as far as I've experienced gnome-shell and cinnamon are the best at customization/plugins/performance.
> I'd move to Windows in a jiffy. Its desktop works flawlessly on my PC.
I'm currently at work and I have Windows and an Ubuntu VM in VirtualBox - gnome-shell is pretty smooth, while Windows appends the "Not responding" text to every window's title, the search in the menu is much slower than it should be, and the apps are often frozen - it is far from "flawless" for me.
For example, PulseAudio has worked fine for me for at least 6 years on 6 notebooks and 2 workstations from various vendors (HP, Dell, Acer, Medion). But if it works badly under heavy load, then an obvious command will fix that: `sudo renice -n -10 $(pgrep pulseaudio)`.
I couldn't care less about games, MS Office, and a lot of other things that are deal breakers for some, but I have tried Linux on the desktop and it is still lacking.
You can change or install fonts on Linux.
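For anyone wondering how: on any fontconfig-based distro it's just a copy plus a cache refresh (the font filename below is a placeholder, not a real font):

```shell
# Per-user font install: drop the file in ~/.local/share/fonts
# (~/.fonts on older systems) and rebuild the font cache.
mkdir -p ~/.local/share/fonts
cp SomeFont.ttf ~/.local/share/fonts/
fc-cache -f ~/.local/share/fonts
fc-list | grep -i somefont   # verify fontconfig picked it up
```

No logout needed; most apps see the new font the next time they start.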
> and even with patches there are still problems with Java, old GTK libs, etc
What kind of problems?
> but I have tried Linux on the desktop and it is still lacking.
Can you tell us what you miss?
My latest try with running Fedora 23 on my laptop outside of the VM seemed okay... until a recent Kernel / X update turned it into a hot mess - fan constantly running, processor overheating warnings in the logs, etc. Maybe it was my NVidia card or something. The problem is that I have a Dell Precision which is one of THE few models (along with Thinkpads) that are well supported on Linux.
But I just don't have time to deal with these things anymore. If Windows 10 doesn't clean up its act, I'll probably be moving to a MBP, even though I haven't had good luck with them in the past.
Honestly, I'm at the point where I'd pay serious money to RedHat or some other company (maybe Dell or Lenovo) to put out a well supported laptop / Linux combo: nice fonts, supported discrete video, working ACPI, upgradable parts, and no spying or trying to monetize the OS user nor dumbing down the interface a la Apple and MS.