There simply isn't any operating system as conducive to creative tinkering and progressive enhancement of key software as Ubuntu Studio. It's way, way ahead of the pack in this regard, and I think anyone who scoffs at the idea needs to be taught the lesson that Ubuntu - and of course, Linux and the ecosystem it promotes - is really worth the effort to know, learn and understand.
Rock solid. Better than Pro Tools.
I chose my hardware wisely. In my case, I've got two Presonus Firepod FP-10 interfaces, which work out of the box with JACK. Truly superlative Firewire audio interfaces, and I chalk that up to the fact that the Bridgeco chipset used in these devices was developed in a pro-Linux environment.
Of course, YMMV .. especially if you fall into the PulseAudio trap, which is designed - imho - to make audio on Linux suck as much as possible. Not so with JACK and Firewire audio - in this case, it's the best possible combination of audio hardware and software components, giving superlative latency and multi-channel I/O capabilities ..
It would also be great to see some audio latency numbers for Ubuntu vs the rest on common hardware.
JACK has a reputation for being fiddly to set up, but puts great emphasis on low-latency/real-time and other professional use-cases. I'd expect it to be competitive, but don't really have the knowledge to make the comparisons myself.
I did find this unfinished review of latency in JACK from a few years ago: http://apps.linuxaudio.org/wiki/jack_latency_tests#does_late...
I was looking into whether I should purchase Windows or buy a Mac to run a DAW, but if you're saying that Ardour is good enough, then I can just install that this weekend!
One of the simpler things is to customize the look and feel. E.g. you could try out different desktop experiences and WMs (window managers). Most of them are just an `apt-get` away, and when you log out and log back in you can choose and try out a different desktop.
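For example, trying out Xfce is a minimal sketch of the idea (package names vary by release; xubuntu-desktop, kubuntu-desktop, etc. work the same way):

    sudo apt-get update
    sudo apt-get install xfce4
    # log out, then pick the new session from the session menu on the login screen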
This is really the great thing about Linux: we've got tons of options when it comes to audio I/O routing, but it's a liability too. My advice: commit to setting up JACK and learning it - definitely the best thing about Linux audio.
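For reference, bringing JACK up from a terminal is roughly this - the backend, sample rate and buffer sizes below are just examples, and qjackctl exposes the same knobs in a GUI:

    # FFADO/Firewire backend for Firewire interfaces; for plain ALSA cards use "-d alsa -d hw:0" instead
    jackd -d firewire -r 48000 -p 64 -n 3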
I have a new workstation PC I just recently put together. It's currently running Ubuntu 16.04 LTS but I plan to switch over to Arch Linux in the near future. Because I want to use LUKS (a.k.a. full disk encryption) and ZFS (and multiple ZFS pools) on it, I decided to work through the installation procedure on a laptop first (just to make sure I've got the steps down right -- an installation w/ the root filesystem on ZFS is "non-standard", and LUKS throws additional issues into the mix) before I wipe out my primary work machine.
First, by default, the LUKS setup only supports "unlocking" a single encrypted drive at boot. Since the laptop has a pair of SSDs (in a ZFS mirror), I need to unlock both of them (and my workstation has an NVMe, two SSDs, and two HDDs, so that'll be an even bigger problem!). I had to "hack on" the initramfs stuff so that I could unlock both drives when the system boots up.
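Stripped of the initramfs plumbing, the general idea is something like the sketch below - device paths and mapper names are made up, and this is not the author's actual hook:

    # read the passphrase once, then unlock every LUKS container with it
    read -rs -p "Passphrase: " pass; echo
    for dev in /dev/sda2 /dev/sdb2; do
        printf '%s' "$pass" | cryptsetup open "$dev" "crypt_${dev##*/}" --key-file=-
    done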
In addition, I have several Yubikeys and wanted to use them with LUKS. I set them up in Challenge-Response mode, programmed them with a "secret key" that I generated, and then came up with a (relatively) short(er) passphrase that I can use to unlock the disks. I also had to create a custom (initramfs) "hook" to prompt for my passphrase, detect if a Yubikey is attached and, if so, send the "challenge" to the Yubikey, receive the response from it, and use that to unlock the disks. If a Yubikey isn't attached, then I have to provide a (much longer, more secure) "backup" passphrase to unlock (each of) the disks.
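The challenge-response part might look roughly like this - again only a sketch, not the author's hook; the slot number, device path and mapper name are assumptions:

    read -rs -p "Passphrase: " pass; echo
    if lsusb | grep -qi yubico; then
        # Yubikey slot 2 assumed to be configured for HMAC-SHA1 challenge-response
        key=$(ykchalresp -2 "$pass")
    else
        read -rs -p "Backup passphrase: " key; echo
    fi
    printf '%s' "$key" | cryptsetup open /dev/sda2 cryptroot --key-file=-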
On the laptop, this means I can enter in my (relatively) short(er) passphrase once and unlock both disks. On my workstation, it means I'll enter it once to unlock all five disks. The alternative would be to type in my much longer, more secure passphrase FIVE separate times every time I boot the system up.
None of this was very hard or time consuming, but it was necessary in order to meet my exact goals and I'm glad that Linux allows me the opportunity to do exactly this type of non-standard stuff.
And have you tried Bitwig yet? With its similarities to Ableton Live, I'm sure it has the potential to push audio production on Linux to some degree.
You'd be surprised just how many VSTs work smoothly and reliably under Linux.
Tried it, but not a fan - I have too many external hardware sequencers to be bothered with setting it up with the rest of my system. (Hardware sequencers run forever!) I do, however, have Ardour set up for perfect multi-channel recording, so every instrument has its own digital tracking, and this has been wonderful ..
As plugins go, many good ones are available in LV2 or similar formats so no problem on that front.
Well, it's not that impressive compared to something like Live or Cubase.
Your point being? "A subpar DAW should be enough for everybody in 2017 as long as it's better than 70s era studio technology"?
Not to mention that you're comparing apples and oranges. These people did rock, punk and/or jazz, not modern R&B, pop or electronic music, which are usually more demanding of a DAW, for one.
Second they had tens of thousands and sometimes millions of dollars as a budget for a single record, great facilities with top-notch acoustics, multiple producers, access to orchestras and top-notch musicians, and crazily expensive consoles with people to operate them for them.
Third, one doesn't compete against Louis Armstrong-era sound but against one's own generation's sound and production techniques.
Heck, the more advanced users end up going with a lot of hardware-based solutions and analog equipment.
There's also another meaning of freedom: freedom from having to care or worry about the platform, and being free to create, because you have a thing that "just works".
For musicians that's more important than "route everything to everything" (for which there are solutions even for commercial DAWs, e.g. Rewire for those that do care for it).
It's all about tradeoffs, but JACK and "freedom to modify DAW code" are pretty far down the list of priorities for 99% of musicians.
Also, a lot of the time, using FOSS is a choice we make not because of the price or principles but for convenience. I simply don't want to juggle licenses and dongles between home and studio computers, or for that matter be forced to install licensing software that is more or less a rootkit...
But the more essential question is: does it matter?
If one is an experimental electronic musician / sound artist, it might be worth it.
But the average musician wants a good environment with the features they need built-in (and easy access to quality third party plugins etc for FX etc), not to tinker (and even less so, in C/C++) with their DAW.
If Pure Data were as mature and polished as Max (which it is not), we might just as well use it.
If not, those things matter more than openness when you have music to record and work to do.
The problem for me is not superior VS inferior or impressive VS unimpressive software but free software VS closed software.
The problem for most people is good vs inferior, or "getting my work done with this" vs "this is lacking".
Free vs closed doesn't even come second to most people's concerns, and I'm not even sure it should.
My main problem has recently been "why can't I open these songs anymore" vs "can still edit old productions anywhere, any OS, no need for intrusive DRM that takes over my entire laptop just because Steinberg"
Back when Ubuntu started, it was explicitly a variation of Debian with a GNOME desktop, plus some custom parts to make a "Linux for Human Beings", such as an easy-to-use installation process. Mark Shuttleworth had been a Debian developer, Canonical hired its technical people from Debian contributors, and everybody was sensitive to the need for Ubuntu to work with the upstream projects (and vice versa).
Linux is a complicated system of components, and desktops are far more complex than servers, so it's essential for developers with different employers to cooperate to get things done. Important decisions require developers from multiple organizations to reach a consensus. It can be a slow and frustrating process, and it's easy for awkward people to cause a lot of hassle for everyone else.
In practice, Canonical always struggled to work well with others, and eventually they switched to developing their own convergence stack (to span desktops, smart TVs and mobile) that happens to use Linux components but shares increasingly less common ground with the rest of the community: their own graphics system, desktop environment, and software packaging system. In other words, Ubuntu has been morphing from a community-friendly Debian variant into an Android-style single-vendor system.
Google can do this with Android and not take the same level of flak because Android has always been a commercial product that happens to have FOSS components, and they seem to cooperate reasonably well with the rest of the community in areas of shared interest.
I think a big part of the problem is Red Hat. Not that they're malicious or doing something wrong or anything of the sort; just that they're overrepresented in the key projects that determine the direction of Linux. With enough people in key positions, it is much easier for others to write off working upstream because they have different business interests and feel like contributing is working for you rather than with you. Here LWN, with their stats on contributor affiliation for kernel releases, can help (since they show that it's not even mostly one company), as does having people employed by the Linux Foundation. This doesn't occur as much for non-kernel software, though.
Sure, everyone loves it if stuff just works - but looking back at my first years of Linux (1998-2007) and the last (nearly) decade, I haven't actively cared what kind of kernel, what version of kernel or whatever I am running on my laptops or servers. It just works[tm]. I'm really not trying to downplay the kernel developers' work - but if you're not using the latest hardware or chasing the last bit of performance, many people could live with security fixes alone (hey, surprise, the kernels in LTS distros only get security backports anyway).
(For me at least) in the last 5+ years the real development was in userland (for better or worse) - but I've stopped caring for anything besides "works fine" on a kernel level. Sad? Maybe.
The desktop devs, on the other hand, seem all too willing to break existing interfaces, treating them more like internal code than something exposed to third parties.
I don't think Ubuntu deserves flak just for forging their own path at times, even if it creates a bit of a fork in some stacks.
People bitch constantly about evil Red Hat. Any article about systemd, flatpak and so on.
It's Ok because systemd is actually pretty great.
Out of the people who complain about systemd, very few of the ones I've heard are actually managing dozens of servers with it daily. I have, however, heard tons of praise from sysadmins, because writing and managing systemd unit files is a lot saner than a bunch of hacky shell scripts - so much so that FreeBSD, the "*nix way or the highway" OS, wants a clone of systemd for itself.
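For comparison, a complete unit file is usually just a handful of declarative lines - the service name and paths here are made up:

    # /etc/systemd/system/myapp.service (illustrative)
    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
    Restart=on-failure
    User=myapp

    [Install]
    WantedBy=multi-user.target

After `systemctl daemon-reload`, the usual `systemctl start/stop/status myapp` and `journalctl -u myapp` apply.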
Also, there are non-systemd options, look at Gentoo or Void Linux.
Right now the Freedesktop approved way of handling suspend and hibernate on a Linux laptop is via systemd-logind, the systemd session and seat manager!
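Concretely, that behaviour is driven by logind's own config these days - the option names below are the stock ones, the values illustrative - and `systemctl suspend` / `systemctl hibernate` are the direct entry points:

    # /etc/systemd/logind.conf
    [Login]
    HandleLidSwitch=suspend
    HandleSuspendKey=suspend
    HandlePowerKey=poweroff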
What used to handle it, powerkit/powerd, is now pretty much just a wrapper around the power management parts of logind.
Never mind that logind itself up and replaced ConsoleKit, which could be used independently.
Or that udev, a project for managing the contents of /dev on Linux installs that existed on its own for almost a decade, is these days part of the systemd source tarball. On paper it can be used independently, but in practice the procedure for extracting udev from systemd changes at random intervals.
And where did you get the idea that FreeBSD wants to clone systemd?! Best I recall, the developer of launchd (the OS X/macOS inspiration for parts of systemd) was lobbying for FreeBSD to adopt launchd. But he was largely rebuffed and has since opted to develop his own FreeBSD fork instead.
There is some effort underway to clone the external systemd APIs, but last time I read anything about it they had gotten hung up on the ever-morphing nature of logind.
And more recently there has been effort spent towards developing a BSD DE that does not depend on anything Freedesktop-derived, systemd included, because the major reason for having anything systemd-related on the BSDs was to support the major DEs, GNOME in particular.
So no, the BSDs are in no way "envious" of the systemd shoggoth. And why should they be? Their own init scripts are a haven of sanity compared to the sysv derivative that RH and Debian/Ubuntu clung to for so long. Heck, even venerable Slackware adopted a BSD-style variant, and they seem uninterested in replacing it any time soon.
When I started with Linux I used a distribution like Mandrake with a KDE desktop, and by default you got several text editors, browsers and other tools (the drop-down menu was huge). As a starter (I did not know any programming then and just wanted to play with a different OS, coming from a Microsoft stack) I was overwhelmed, and although I could appreciate the customization, it was hard to get started because there were too many options.
Ubuntu changed that and they really thought a lot about how to improve the Linux Desktop experience and IMHO they did very, very well. E.g. I use Ubuntu as you can find lots of documentation, it is supported well in cloud infrastructure and they (re)distribute a lot of packages.
In particular, I do not get the criticism about the desktop environment. Ubuntu in no way made your desktop less customizable; I use e.g. xmonad as my WM, which is totally straightforward and just works. My mom uses the default desktop and is pretty happy with it.
That may be controversial, but I think there are often just two camps of users: those that want it to just work without any configuration and those who want to customize everything. I think Ubuntu is doing pretty well (currently, I hope that never changes) in both camps.
Yes, and it seems like a number of other distribution developers have come to the same conclusion as well. We really need some thoughtful analysis about this trend.
> AFAIK their home-grown desktop is FOSS, so it's not like you are buying into a proprietary system and this is tremendous.
Source availability is part of enabling a broader developer community, but there's a huge amount of other work that is needed, as well. I haven't looked at the current state of Unity development, but the reputation of the project is that it is built for Ubuntu.
> Ubuntu changed that and they really thought a lot about how to improve the Linux Desktop experience and IMHO they did very, very well.
I totally agree: Ubuntu really was revolutionary when it started. It's kind of amazing how many innovative things Mark Shuttleworth and his team did right at the start.
> That may be controversial, but I think there are often just two camps of users: those that want it to just work without any configuration and those who want to customize everything. I think Ubuntu is doing pretty well (currently, I hope that never changes) in both camps.
There is also a third audience for any piece of software that is large enough to be programmable: developers. For desktops, you have third-party theme authors as well as application developers, people that want to work on the desktop software itself, maintainers of other Linux OS components, and folks that want to use the source code to build their own custom projects.
I think they are part of the "customize everything" crowd. The most important point for me in a distribution is its security record and a good package system that supports all common use cases. The default desktop should totally be targeted at casual users, as experienced users will not agree with your base configuration anyway (I remember YaST from SUSE), so I appreciate strong opinions there - in particular, less is more (the original Ubuntu approach compared to other distributions at the time).
Sometimes, yes: I know developers that do customize heavily, but personally the furthest that I go is changing a desktop wallpaper :) (DevOps: I switch between systems all of the time).
The sentence wasn't very clear, but I was really talking more about APIs and developer experience: one of the reasons why projects switch from GTK/GNOME tech to Qt is that it's apparently much easier for them to work with.
That's a complex topic, but over the years a couple of reasons come to mind: outdated packages and broken PPAs giving a bad impression to new users; slow and bloated in many respects; aggressive community behavior; CLAs; passive-aggressive blog posts and stances by the project leads towards alternative distros like Mint, or towards any criticism whatsoever; Mir when everyone is standardizing on Wayland; Unity being very hard to get working properly on non-Ubuntu distros; the aggressive push for Snaps; reluctance to adopt systemd; not really being part of the community unless they want to push their own tech (a bit like Apple); lack of kernel contributions compared to e.g. Red Hat; distancing themselves from the term Linux, using only "Ubuntu" as much as possible.
Ubuntu weren't alone in that.
Who were those people taking issue with the lack of eagerness on adopting systemd and were they in Raleigh? Systemd is a rather radical shift from the "*nix philosophy".
I don't have patience for RH "UX contributions don't count" saltiness when they intentionally abandoned desktop Linux in favor of chasing the enterprise market (successfully!). No, I'm not bitter about RHEL at all.
1. Tongue firmly in cheek
2. I do know Fedora exists, I read the very 1st announcement. It's not the same
Ha! Obviously RH and Ubuntu have different points of emphasis, so any particular person might prefer one over the other. ISTM most of the complaints about Ubuntu, however, aren't about their focus on the GUI user, but rather on their occasionally doing their own thing. As if RH don't do the same, except more!
1. systemd got adopted by Arch way before Fedora.
2. Most people who actually use it daily think that it's a good change. Yes, it may not be 100% "do one thing", but it "does do it well", and the fact is, booting up a modern computer involves more than one thing anyway. I think it's better to have it this way; writing systemd unit files is much saner than crazy shell scripts. Opinions may differ, of course.
The 50+ year old design needs upgrading from time to time.
C is also not perfect etc.
3. systemd is easy to get going on any distro, not just RH, unlike many of Canonical's creations.
4. Unlike Canonical, RH employs many people whose work benefits the wider community* as a whole, e.g. GNOME and kernel devs.
* E.g. Arch Linux is independent of Red Hat, yet we (its users) greatly benefit from their work, and not so much from Canonical's.
I don't know where you're going with this, but didn't "Canonical's creation" upstart ship with RHEL6?
> 4. Unlike Canonical, RH employs many people whose work benefits the wider community* as a whole, e.g. GNOME and kernel devs.
No shit. RH has 14 times as many employees as Canonical (Google suggests 9,870 vs 700). I didn't bother to check revenue figures, but it's obvious that Canonical has to choose its battles.
In some cases, I find the nebulous benefits to the "community" dubious: GNOME 3 and systemd seem to have been conceived fully formed in the minds of their RH-employed leads, who are pretty headstrong and won't easily accept criticism of their vision by said community. I'm sure there are enough rants online about features dropped from GNOME 3 for no real reason.
The late Pieter Hintjens had something to say about who benefited as Red Hat interacted with the AMQP community.
As someone who likes his free time for doing things other than fiddling with configuration files, Ubuntu is quite nice. It's not perfect, it's strayed from its original vision; but it's still my Linux desktop distro of choice.
First, Ubuntu was initially seen as "noob's Linux", with Debian users especially not taking the fork well, nor the fact that the number of Linux users was rising in some kind of OS version of Eternal September.
Then there was the fact that Ubuntu was OK with mixing proprietary code, like proprietary drivers, into their repos. For Debian it was (and is) a big fight, sacrificing ease of use to enforce non-proprietary software.
Third, there was the massive success of Ubuntu, making all other distros the challengers. This always tends to attract criticism.
And finally, there was the perception of Canonical pushing their agenda on their users, leaving them no choice, like when Ubuntu migrated to Unity or with the whole Amazon lens debate.
The mix of all of this means the Linux distro with the easiest setup and the most compatibility/support is looked at with disdain, which is a shame, really.
And it's not like they became Apple or Microsoft. Their mistakes are minor at worst compared to other competitors. What they brought to Linux, however, is huge.
This never sits well with me, especially when OSS groups are involved. Is it not fundamentally the whole point of the software freedom movement?
*no Linux distribution is popular*
All Linux users: "everyone should be on Linux! Linux is amazing! Yay Linux!"
*Ubuntu becomes very popular*
All Linux users: "everyone should be on Linux! Ehhh... but maybe not that Linux..."
I regard Stallman the way Randall Munroe regards Ayn Rand: "I found myself enthusiastically agreeing with the first 90% of every sentence, but getting lost at 'therefore, be a huge asshole to everyone.'"
As a professional, I love it.
But without Canonical and its compromises, my mother wouldn't be able to use it today.
I'm glad I have a solid base, but creating something less radical on top of it will not destroy it.
I've also never tried to discourage anyone from using Ubuntu - but I noticed it's not for me. Apart from a tiny fraction of "this is a little nicer for the desktop user", I am losing a lot versus plain Debian - and I wouldn't call that idealistic.
On the other hand... maybe there would've been a year of Linux on the desktop if a good part of the distros would've folded and people would've joined forces. Who knows?
Then again the "majority" way is largely dictated by a few big projects in and around Fedora, with developers largely on Red Hat payroll.
And the shit slinging didn't really take off until they up and created Unity after a spat with GNOME over the latter's future course (afaik), closely followed by Canonical starting Mir after misrepresenting/misunderstanding where Wayland was going.
So who really knows what's going on...
Pretty much all of the conversations and disputes have happened on the Internet, so if you are interested you can read the mailing lists, Google+, blog posts etc. in each case. There's no reason why anyone should unless they are interested, but everybody really is free to do so, and then draw their own conclusions.
I'm actually considering switching my HTPC back to Windows, or trying Debian proper. I don't spend much time in Unity there, mostly Kodi and sometimes Chrome, and the DE matters little to me. I run Windows, Mac, and Linux (Ubuntu Unity) regularly.
Sorry for veering off into a rant... All around I actually do like Unity though.
That being said, I think they somewhat deserve the bad opinion the Linux community has of them: Canonical decided to play solo on several critical subjects instead of cooperating with others. In particular I'm thinking about upstart (a competitor to systemd) and Mir (a competitor to Wayland).
Their marketing is also a pain point, because they brand everything as Ubuntu and don't refer to Linux at all in many of their statements (for instance, you cannot find a single occurrence of the word «Linux» on their landing page).
As far as the marketing goes, it is a larger trend. See Fedora Workstation/Server, elementaryOS. In fact, on the CentOS homepage there is nary a mention of Linux.
Maybe that's unfair.
If you look at the whole product, there's much more to it than Linux. Linux is 'just' the kernel, I doubt the average Joe cares what kernel his device runs. Most probably don't know that their Android phone is powered by Linux, too. Also, this leads to the old "GNU/Linux" discussion. Where would Canonical stop acknowledging important parts of the OS? Ubuntu GNU/Linux/systemd/libinput/Mesa/Qt...
I'd also love to see Canonical mention that Ubuntu uses Linux, but I understand that for their product and their target group, it doesn't really matter (and it may seem more important to push forward the brand "Ubuntu").
We know that Ubuntu is "Ubuntu Linux" and Linux is "GNU/Linux".
I'm not trying to make a point here, I just wanted to give some context to someone asking a question. I'm not an Ubuntu user, but I'm not an Ubuntu hater either: the distribution I use (Linux Mint) wouldn't even work without Ubuntu.
They get compared to Red Hat, which from a hacker or open source point of view virtually always takes the high road and does the 'right thing'. They open source everything, they track down licenses, they sponsor the community, they insist on the purity of their own products, they're seen as being very co-operative when joining a project, et cetera. It's a high bar and Ubuntu doesn't quite reach it.
They're angels compared to pretty much any corporation but Red Hat, but it's Red Hat we compare them to.
Worst things Apple does? Delegating slavery, blatant monopoly, consumer lock-in, patent trolling, killing small businesses for their profits, etc.
Worst things Canonical does? Making some technical mistakes, making controversial design choices, spending fewer resources than some people would like on helping FOSS in specific ways. Yeah, they totally deserve the shit storm.
Some other things are made much harder than they need to be. Back in the old days, making a network-bootable image involved compiling a custom kernel and setting up a DHCP server and NFS. These days it seems to require sacrificing a flock of chickens. Hint: you need to pass a boot option that is entirely undocumented, except for down on page 15 of a discussion topic somewhere on the internet. The README is wrong/obsolete.
Other annoyances come from the system trying to be "smart", like when you try to dump a bootable image onto a USB stick with dd, only to have the operation killed shortly after it starts because the OS detected a new bootable image on the stick and tried to mount it partway through the write, changing out the file descriptors from under dd.
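The mundane workaround is to unmount anything already auto-mounted and tell the desktop to stop auto-mounting before writing - the device name below is an example, and the gsettings key applies to GNOME/Unity-style desktops:

    sudo umount /dev/sdX1                                           # repeat for each mounted partition
    gsettings set org.gnome.desktop.media-handling automount false  # stop the desktop mounting mid-write
    sudo dd if=ubuntu.iso of=/dev/sdX bs=4M status=progress conv=fsync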
Or when you're trying to diagnose a network problem by upping an interface and putting an IP on it, only to have NetworkManager go LOLNOPE and kick you straight in the balls.
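The usual escape hatch is to tell NetworkManager to leave the interface alone before configuring it by hand - the interface name and address below are examples, and `nmcli device set` needs a reasonably recent NetworkManager:

    sudo nmcli device set eth0 managed no
    sudo ip link set eth0 up
    sudo ip addr add 192.0.2.10/24 dev eth0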
Or when the system fails to boot because some message wasn't passed from some startup script somewhere and good luck tracking that down. That's nearly impossible to debug.
Heaven forbid you select the nVidia binary blob driver for your video card and then let Ubuntu install a new kernel. Ironically the only time the kernel upgrade goes smoothly is when I tell Ubuntu to leave it alone and install the driver directly from nVidia. This is extra fun when Ubuntu is deciding to upgrade the kernel twice a week. Even more fun when you've let it partition the disk for you and it creates a 256MB boot partition that fills up after 3 kernels.
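The /boot-filling-up part at least has a one-liner mitigation (it does nothing for the nVidia driver dance itself):

    sudo apt-get autoremove --purge    # drop old kernel packages that are no longer needed
    df -h /boot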
Overall Ubuntu is easier to use than the old systems, but when it breaks it takes 10 times longer to fix it.
> In the server distribution patch instructions, it often says "reboot your computer", whereas one can just restart the services like in the latest openssl security update.
It seems like a bad idea to criticize them for taking the safest tack in generic documentation intended for a wide audience. Some Ubuntu servers are run by veteran sysadmins but others run by people who are learning, primarily working on other things, etc.
Restarting processes requires a decision per-patch to understand all of the affected components and safe restart strategies for all of them – e.g. in the case of OpenSSL, the library will be loaded not only by services but also other long-running jobs – cron tasks, anything a user has been running, etc. Yes, you can script looking for open file-handles and try to restart everything but if that goes wrong in any way, you're running with a known security hole which people will incorrectly think has been patched and may even claim that a scanner must be reporting a false-positive (I've personally seen that).
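For the curious, the manual check is roughly the sketch below - list processes still mapping a deleted (pre-upgrade) copy of libssl; tools like needrestart, or checkrestart from debian-goodies, automate the same idea:

    sudo grep -l 'libssl.*(deleted)' /proc/[0-9]*/maps
    sudo needrestart    # if installed: reports/restarts services using outdated libraries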
If I was writing documentation to give to non-experts for operations, I'd make the same choice every time because it's simple and fails safely. Experts probably aren't going to read that documentation anyway and have enough knowledge to understand when they can make optimizations based on local knowledge.
There were other features in Unix for "decades", such as the Bash Shellshock bug, for example.
10M files isn't a huge amount by post-90s standards, and talking about disk access destroying hardware is pure hyperbole. If you have disks failing that frequently, something is wrong with your server room, cooling, etc. A daily cron task is not a make-or-break problem there: 15 years ago I had that many small files backing a departmental email setup (maildir, so one file per message). If you think locate is a problem, consider the I/O characteristics of hundreds of IMAP sessions, and remember that RAM was a lot more expensive back then and many individual users had mailboxes larger than the aggregate server memory. Drives failed, yes, but usually after years of heavy service.
I'm a dev, guys.
My first UNIX was Xenix and I got introduced to GNU/Linux with Slackware 2.0.
Ubuntu is the only reason I still have a netbook with GNU/Linux. All my other computers at home run Windows nowadays.
At the office our computers are a mix of Windows and Mac OS X, GNU/Linux installations only exist as VM instances.
I'd love to try it again, I really would, as my host OS. But I just can't bring myself to do it again... yet.
12.04: System V
16.04: System D
Gahh...stick with something. I'm tired of learning an entirely new init system every couple years.
To be fair, this is a problem with Linux stuff in general; I just wish Ubuntu could lead the pack in picking something and sticking to it.
4.10: System V (in 2004), first release, adopted from Debian
6.10: Upstart (in 2006), see http://upstart.ubuntu.com/index.html
16.04: systemd (in 2016) and it was gradual.
This is part of the natural evolution of software.
There are new requirements and new software is needed to implement them.
I had this problem too - I gave up and moved to runit. It's basically an improved version of DJB's daemontools: you write very simple scripts, and it monitors their output, handles logging and service management, and that's it. Very minimalist, but quite capable.
And, quite importantly, it's capable of running on top of another init system, it doesn't need to be PID 1, so I'm able to run it on FreeBSD with rc, Linux with sysvinit and Linux with systemd with the same script.
It makes it so easy to write service scripts I went from running my personal stuff inside a tmux script to runit in about an hour.
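To give a flavour of what "very simple scripts" means, a typical run script is only a few lines - the service name and binary path are made up:

    #!/bin/sh
    # ~/service/myapp/run -- runit supervises whatever this exec's
    exec 2>&1
    exec /usr/local/bin/myapp --config /etc/myapp.conf

Point runsvdir at the directory (e.g. `runsvdir ~/service`) and `sv status ~/service/myapp`, `sv restart ~/service/myapp` and friends work from there.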
I develop lots of side projects that I always host on Ubuntu. They are web projects that require configuring, stopping, and starting services on the box with ease. I am fully comfortable with upstart - but learning systemd adds a huge amount of overhead for someone like me with little time who just wants to enjoy coding and shipping fun projects. I don't want to be a full-time sysadmin just so I can launch a demo or game that nobody will ever use!
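For what it's worth, once a unit file exists the day-to-day surface for that kind of workflow is fairly small - the service name here is assumed:

    sudo systemctl start mygame        # stop / restart / status work the same way
    sudo systemctl enable mygame       # start at boot
    journalctl -u mygame -f            # follow the service's logs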
It's the use of the CADT development model: https://www.jwz.org/doc/cadt.html
I'm getting lots of downvotes from people talking about off-the-shelf laptops and other generic x86-based projects. Let's be clear that the following post is in the context of other CPU architectures and other platforms a little more exotic than your typical PC or laptop.
I'm not here to bash Ubuntu, as I couldn't care less what platform people choose to run - even if that's Windows - just so long as I can run whatever I choose to run. However, with that said, I still have to disagree with your statement above (re: other distros only dream of supporting a massive number of devices). Ubuntu supports less hardware than its originating platform, Debian. Less than SUSE, Red Hat, and derivatives. Even Slackware and Arch support a considerable number of alternative architectures through third-party ports. And stepping away from GNU/Linux for a moment, FreeBSD, OpenBSD and NetBSD all officially support more platforms than Ubuntu.
Support for multiple devices and architectures isn't something unique to Ubuntu - it's pretty typical in the FOSS community. In fact, back in the 90s and early 00s there used to be a running joke about people installing Linux on a whole plethora of odd devices just for kicks: talking kettles, toasters, stuffed animals(!!!), all sorts of things (bearing in mind this was before the IoT revolution).
How is it everyone is overlooking that I'm talking about CPU architectures?
Seriously, what's with people these days just playing on a few x86 PCs and maybe a Raspberry Pi and then making bold claims that their distro x runs on everything? Let's talk about some real niche platforms that are non-trivial to port software to, please.
We're talking about a system primarily targeted toward x86 machines (esp. desktops or laptops) common for consumers. So everyone assumes you mean x86 machines with various configurations of hardware. That was my assumption, too. Assessing its hardware support by focusing on architectures virtually no Ubuntu or Linux user is using or wants to use is strange. I'll go further and say it's immaterial to an assessment of Ubuntu's hardware support given its target market. There are other distros that focus on MIPS, SPARC, and "exotic" hardware.
I wasn't doing that though. I was literally _only_ stating that the GP's comment, "Canonical is doing great job of putting, fairly reliable system on massive number of devices, something that other distributions can just dream about.", is factually incorrect since "other distributions" can and do support not only many of the same platforms that Ubuntu supports but also a whole plethora of platforms that Ubuntu does not. I didn't disagree with the GP's other compliments towards Ubuntu and I even stated that I wasn't here to criticise Ubuntu. I even went on to say that multi-platform support in Linux isn't a new thing to reinforce the point that I'm grouping Ubuntu and Debian and all other Linux distros together with regards to their ability to run on an interesting array of hardware. I just disagree that Ubuntu is the only Linux distro that runs on multiple platforms - because, simply, it's not the only distro that does that.
However, what _is_ strange about my post is how a great many people have taken my comments as a personal attack against Ubuntu, or said "those platforms don't matter" just because they personally don't use them. It's pretty nuts, because the fact that I'm aware of unsupported platforms means that I've run into situations where Ubuntu didn't run, which means those platforms do matter to some people - like myself. The fact we are having this discussion is proof that your argument of irrelevance is itself irrelevant.
I'll probably get downvoted for saying this, but reading the rest of this thread and the replies in my own branch, it's sounding a bit like a pro-Ubuntu echo chamber where like-minded enthusiasts are all reconfirming their pre-existing biases and branding those with opposing experiences as liars (just as I was by another HNer). This is a shame, because there is a lot to praise Ubuntu for without needing to invent additional achievements. What's worse is downvoting members for the serious crime of actually managing to run non-Ubuntu distros on new hardware when other enthusiasts could not. It's all a little absurd, and it ruins the discussion for everyone, as people like myself won't bother contributing to another Ubuntu-themed thread on HN again.
Kernel ABIs are pretty static so drivers can be backported for earlier kernels more easily than recompiling the entirety of Ubuntu Server for an unsupported CPU architecture - such as SPARC.
That's what I meant when I said Debian supports more architectures.
However, going back to your previous point about the age of Debian, you don't have to run the default repositories. If you run "testing" or "sid" then you can be just as up-to-date as Ubuntu, or even more bleeding edge. In fact, it wasn't that many years ago that Ubuntu was effectively just a reskinned Debian + testing repositories (I'm talking pre-Unity, pre-upstart, etc). But at the end of the day it's all FOSS, so anything Ubuntu runs can also be run on Debian, Arch, etc. It's just that Debian already ships compiled binaries for more alternative CPU architectures than Canonical does with Ubuntu. Which is why I said supporting different platforms isn't anything new to Linux nor unique to Ubuntu.
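Switching a Debian install to track testing is basically a one-line sources.list change followed by a full upgrade (the mirror is just an example):

    # /etc/apt/sources.list
    deb http://deb.debian.org/debian testing main contrib non-free
    # then: sudo apt-get update && sudo apt-get dist-upgrade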
> good luck Debian / BSD like on modern laptops.*
I have done. They worked fine. In fact FreeBSD was my primary OS for a period of time and Debian has always been my primary "debian-like" platform for all bar media centres (which do run vanilla Ubuntu). I have also ran officially supported variations of Ubuntu as my primary OS for short periods of time as well. I've tried them all before I finally found what OS I felt right for me.
So I do have considerable experience backing up my claims :)
> If you run "testing" or "sid"
So you drop your stability to use beta/alpha packages that are unstable, whereas in Ubuntu LTS you have all the recent packages/drivers that have been tested. I mean, with that statement, you can run anything if you can compile it.
One can complain about many things for Ubuntu but not hardware support.
I've had to build stuff for SPARC recently. I definitely care. And frankly your opinion of SPARC does not alter the point one bit.
> I'm glad the focus is on platforms that people use like x64_86 and ARM
So am I. Reread my post: I wasn't criticising Ubuntu one bit. I was disagreeing with the GP's comment that only Ubuntu supports other platforms.
> secondly BSD like and Debian don't run well on recent hardware
"Some recent hardware". Which is a tiny subset of the amount of hardware out there that all the aforementioned platforms do support. BSD and Debian aren't even remotely as bad at supporting hardware as people make out
> it's a lie to say otherwise
That's not a valid counterargument. That's just offensive to anyone who has achieved what you claimed wasn't possible.
> at best you will have terrible battery life, wifi / bluetooth that doesn't work properly, touchpad half broken ect...
Ubuntu still runs the same kernel ABIs as Debian, Arch Linux, CentOS, Slackware, etc. It's all just x86 Linux. Veteran users like myself can get Linux to run on most platforms without too much difficulty. So what you're really trying to say is "Ubuntu is easier to get running on newer hardware", which is the crux of Ubuntu - it's designed to make Linux easier. However, to say "non-Ubuntu distros cannot run on stuff Ubuntu can and anyone who says otherwise is a liar" is insulting to all those who do successfully run non-Ubuntu platforms, and insulting to those distro maintainers.
> So you drop your stability to use beta / alpha packages that are unstable where in Ubuntu LTS you have all the recent packages / drivers that have been tested
Debian Testing isn't unstable. As already stated Ubuntu used to be based on Debian Testing.
> One can complain about many things for Ubuntu but not hardware support.
Again, I wasn't making a complaint. I was making an observation that Ubuntu wasn't the only distro that supports multiple platforms (which the GP claimed) and used CPU architecture as an example to my point. I'm sorry you've had bad experiences with other Linux distros and with BSD but my point still stands.
For what it's worth, on my last laptop I got better battery life out of Arch Linux than I did out of Ubuntu on the same machine. All the other hardware was supported the same. And to follow on from my previous point: if I could have been bothered, I know I could have improved the battery life in Ubuntu to match Arch. However, at the time I needed the convenience more than the battery life (which was why I installed Ubuntu in the first place), so I left the system unaltered until I had the time to distro-hop again.
1. If you are happy with how Ubuntu 16.04 works for you, you get to keep it and you receive support until 2021.
2. With Ubuntu 16.04.2, you get the option to switch to a new path of updated Linux kernels. If you do so, your Linux kernel will get updated every six months, until 2021.
For the first update, with Ubuntu 16.04.2 you can opt in to the 4.8 kernel that was used/tested in the development version of Ubuntu 16.10.
In the subsequent update, with Ubuntu 16.04.3 (around July 2017), you will be updated to the Linux kernel that was used/tested in Ubuntu 17.04 (to be released in April 2017).
And so on.
The command to switch to the new path of updated kernels (updated every six months) is:
sudo apt-get install --install-recommends xserver-xorg-hwe-16.04
I was considering switching, then I saw this caveat...
"For clarity, the Canonical Livepatch Service is only available and supported against the generic and lowlatency GA kernel flavours for 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) release. HWE kernels are not supported at this time."
Also, it's not clear if there is a different kernel/command for upgraded kernels on a server.
EDIT: looks like it's going to be "linux-generic-hwe-16.04"
For me, canonical-livepatch status --verbose has never shown me any fixes, running linux-image-generic on 16.04.
It's not clear to me if upgrading to HWE would correctly disable the livepatching.
I won't say that the End of Life illustration is easy to interpret, but it shows how Ubuntu releases work:
According to https://wiki.ubuntu.com/Kernel/LTSEnablementStack#Kernel.2FS...
you are now on Linux kernel 4.4 and it will remain the same until the EOL of Ubuntu 14.04 (in 2019).
It's a little bit surprising coming from Debian stable releases, but makes sense.
In a nutshell,
1. if you are happy with the current kernel in Ubuntu 16.04, then you can stay with this kernel (it's version 4.4) and it gets supported until 2021.
2. if you want to jump to the newly supported and tested (tested in 16.10) 4.8 Linux kernel, then there is a command described in https://wiki.ubuntu.com/Kernel/LTSEnablementStack that helps you upgrade.
However, when you upgrade the kernel (and the X server stack, which are linked together), your Linux kernel will be upgraded every six months from now on, until 2021. The next kernel version update will be in July, and it will be whatever Linux kernel was released in Ubuntu 17.04.
Python 3: Python 2 is no longer installed by default on the server, cloud and touch images - long live Python 3! Python 3 itself has been upgraded to the 3.5 series.
Python 2 will be with us for a long long time.
These minor releases are just new installer images so that new users don't immediately have to download a huge chunk of data.
The important change with 16.04.2 is that you can now easily decide to upgrade the Linux kernel from the original 4.4 version to the new 4.8 version. This 4.8 Linux kernel was released in Ubuntu 16.10 (Oct 2016) and has been promoted to the new kernel for 16.04.2.
May sound complex :-). There is a nice graphic in https://wiki.ubuntu.com/Kernel/LTSEnablementStack that explains it well.
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-62-generic x86_64)
Edit: This is new. https://btrfs.wiki.kernel.org/index.php/Status I was hoping that it would grow into a more stable, user-friendly project. But RAID1 was broken for months, it would let you create a RAID5/6 volume even though the feature wasn't finished yet, and I personally ended up with a filesystem that will crash the kernel when I try to read certain files. I recovered most of the data using a virtual machine that I could reboot quickly. Maybe I'll look at it again in a couple of years and see if someone is taking the project seriously.
Including ZFS root filesystem, and swap on a zvol.
I have to say, I really like it, and I'll use it again if I have to redo a machine.
Installing Ubuntu on a ZFS root filesystem is much more involved than merely running the installer. If you have never done it before, and are appropriately cautious, it'll take you half a day to follow the (very detailed and helpful) Wiki page. I can do it in less than an hour now.
So far, only one problem (and not a bug, more of a misfeature): when one of the disks in the NAS died, I couldn't replace it with a new one, because the ZFS mirroring was using its default 2kb blocks (I forget the correct term). It can only do this on 2kb/sector HDDs. My new HDD had 4kb/sector.
I was forced to recreate the entire filesystem using a larger blocksize (ashift=12).
Luckily this worked without a hitch, thanks also to zfs send | zfs receive, but it still pissed me off.
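For anyone hitting the same wall, the fix amounts to recreating the pool with the larger sector size baked in and replicating the data across - a rough sketch with made-up pool and device names, not the exact commands used here:

    sudo zpool create -o ashift=12 newtank mirror /dev/sdc /dev/sdd   # force 4 KiB sectors
    sudo zfs snapshot -r tank@migrate
    sudo zfs send -R tank@migrate | sudo zfs receive -F newtank       # replicate datasets and snapshots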
I have been doing that, and did not notice any problems.
About LXD: https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012...
At this point I'm sticking with btrfs instead of going with zfs because of the flexibility for growing/shrinking volumes and adding/removing devices in a non-destructive manner.
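That flexibility boils down to a few commands, all of which work online - the mount point and device names are examples:

    sudo btrfs device add /dev/sdd /mnt/data       # grow onto a new disk
    sudo btrfs device remove /dev/sdb /mnt/data    # shrink off a disk, migrating its data first
    sudo btrfs filesystem resize -10g /mnt/data    # shrink the filesystem itself
    sudo btrfs balance start /mnt/data             # rebalance data across devices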
No major issues to report and no minor issues that I can recall.