To get rid of the entire "dynamic" MOTD, disable the timer unit:
$ sudo systemctl disable motd-news.timer
(I'm on an iPad and going from memory, but I think this is correct -- someone will correct me if it isn't!)
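For completeness, a hedged sketch of turning the dynamic MOTD off entirely. The unit and config-file names below match recent Ubuntu releases as I remember them; verify they exist on your system before copying:

```shell
# Stop and disable the timer so it never fires again
sudo systemctl disable --now motd-news.timer

# Belt and braces: the motd-news script also honors this config file
sudo sed -i 's/^ENABLED=.*/ENABLED=0/' /etc/default/motd-news

# Verify nothing motd-related is still scheduled
systemctl list-timers | grep -i motd || echo "no motd timer scheduled"
```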
Almost every service that phones home tells you it is going to do so in an EULA it makes you agree to. That still isn't OK as default behavior. It's especially gross when your whole business is built on top of the open source community.
There are differences between them but I am 100% confident that I can do everything that an Ubuntu system can, especially if we're talking about infrastructure.
It was always a big one for me. I ran with ZFS root for 2+ years, and every kernel/ZFS update was a bit of a Russian roulette (I did have to perform elaborate recovery once or twice). Even if it were 100% guaranteed to compile and work, there was (1) a very scary window during which, if the system hung, you were semi-bricked, and (2) the long build itself.
Running a desktop/laptop without a time-traveling file system is quite simply not an option (especially once you get used to it).
At that point, you might even be able to optionally background the compile process and have it run asynchronously and update your kernel only once it builds successfully.
Alternatively, teach GRUB to be able to boot "the most recent kernel with all of this set of modules available".
Made backing out of patches easy mode: snapshot, patch, reboot; if it's effed up, go back to the snapshot with a reboot. The only thing I've seen that comes close is NixOS, but it takes a completely different approach to those kinds of things. Better in some ways. But ZFS root is great: it means you can just turn compression on for log dirs and don't need to worry about compressing logs. The FS takes care of that for you, so plain grep works fine; no zgrep etc. needed.
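The snapshot/patch/rollback workflow above, sketched with hypothetical pool and dataset names (rpool/ROOT/default is an assumption; adjust to your own layout):

```shell
# 1. Snapshot the root dataset before touching anything
sudo zfs snapshot rpool/ROOT/default@pre-patch

# 2. Patch and reboot
sudo apt upgrade && sudo reboot

# 3. If it's effed up, roll back and reboot again
sudo zfs rollback -r rpool/ROOT/default@pre-patch
sudo reboot

# The compression trick for logs: grep keeps working, no zgrep needed
sudo zfs create -o compression=lz4 -o mountpoint=/var/log rpool/var-log
```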
I honestly can't see why you wouldn't want to boot off it. Hell, I just upgraded my FreeBSD NAS box at home and took a snapshot of the entire pool before I did, just in case things went south. It may not be as performant as some filesystems, but to be honest, with NVMe SSDs, it's a non-issue.
So explain how it isn't worth it, in your opinion? My own experience has proven otherwise, and has made me loathe BTRFS for being so far behind even the ZFS of 10 years ago. (I've had BTRFS zero out a Makefile I had open while running make; if it hadn't been in a git repo I would have been super pissed off.)
It certainly shouldn't be in the same pool as your actual data, the stuff you back up. /home can live in that pool though.
I have to squint really hard to see the upside of snapshotting anything other than things like /etc - I have in the past put /etc in git just to track ad-hoc changes.
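The /etc-in-git trick is only a few commands; etckeeper automates the same idea by hooking into apt so every package operation gets an automatic commit:

```shell
# Manual approach: track /etc directly in git
cd /etc
sudo git init
sudo git add -A
sudo git commit -m "baseline /etc"

# After any ad-hoc change, see exactly what moved:
sudo git diff

# Or let etckeeper do it for you
sudo apt install etckeeper
```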
Overall, for the base OS, a minimal set of defaults + a provisioning script does me.
I essentially boot Docker images bare-metal, directly from Grub.
My recipes: https://github.com/pauldotknopf/darch-recipes
0.8.2 is compatible with Linux kernels 2.6.32 through 5.4.
You have to combine out-of-date ZFS with bleeding-edge kernels to run into difficulty.
Basically, you just check the ZFS release page before upgrading from x.y to x.y+1.
This could be trivially avoided with packaging.
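A hedged sketch of that pre-upgrade check. The supported kernel range is published in the META file of each OpenZFS release (as `Linux-Minimum` / `Linux-Maximum` lines); on a DKMS install you can also confirm the module actually built for the new kernel before rebooting into it:

```shell
# Did the zfs module build against the newly installed kernel?
dkms status zfs

# If the release page says your kernel is too new, hold it back for now
# (package name assumes a stock Ubuntu/Debian generic kernel)
sudo apt-mark hold linux-image-generic
```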
On servers, not so much difference. Well, unless you are using k8s, etcd, Juju, and all that jazz.
I'm typing this from an LG Gram 17 laptop running current Debian. Before this laptop, I used a Dell Inspiron N7110 for many years, also running Debian stable. I basically never need to use anything else to get work done.
I will grant that a few things are a bit fiddly -- touchpads are iffy sometimes (and infuriating on the LG Gram because it's got a new one) and function keys sometimes don't work. The Gram had a booting problem that has since been resolved (https://bugzilla.kernel.org/show_bug.cgi?id=203617) but also had workarounds pretty quickly. So for some hardware, yeah, it's not a good experience for people used to just running Windows.
But there's also lots of hardware for which it works just fine out of the box, and for experienced Linux people, it's not usually that hard to deal with the sharp edges.
And modern KDE is brilliant.
I'm typing this on my "workstation" but, sitting here on my desk, there's a laptop running Debian on either side of it (and there's another one across the room).
I can't read your mind, so I'm not sure what issues you've encountered attempting to run Debian on a laptop. Apparently I have already figured out how to work around them (but I've also been using Debian for ~23 years, so perhaps that has something to do with it).
You _can_ do it but on Ubuntu it just works.
"non-free" is an official Debian repo. It's not enabled by default, but that's a single line edit in sources.list. And it has all those non-free video drivers etc.
There's also an unofficial Debian CD image that includes non-free packages, for those cases where you need that stuff during installation time.
Ah, you mean the ones in the Debian non-free apt repo?
i.e., enabling non-free and installing sudo gets you most of the way there for the majority of users.
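Concretely, that looks something like the following (bullseye shown as an example release; note that since Debian 12, proprietary firmware lives in a separate non-free-firmware component, and "yourusername" is a placeholder):

```shell
# As root (a default Debian install has no sudo if you set a root password).
# 1. Add contrib and non-free to the existing line in /etc/apt/sources.list:
#    deb http://deb.debian.org/debian bullseye main contrib non-free
apt update

# 2. Install sudo plus whatever non-free bits you need, e.g. firmware blobs
apt install sudo firmware-misc-nonfree

# 3. Let your user use sudo (takes effect on next login)
usermod -aG sudo yourusername
```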
The Debian PHP maintainer, Ondřej Surý, maintains his own repo:
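Adding it is a few commands. This is a sketch from memory of the repo commonly known as deb.sury.org; the key location and URLs change occasionally, so check the repo's own instructions before copying:

```shell
sudo apt install apt-transport-https lsb-release ca-certificates curl

# Fetch the signing key and register the repo
sudo curl -sSLo /usr/share/keyrings/deb.sury.org-php.gpg \
    https://packages.sury.org/php/apt.gpg
echo "deb [signed-by=/usr/share/keyrings/deb.sury.org-php.gpg] \
https://packages.sury.org/php/ $(lsb_release -sc) main" \
    | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update
sudo apt install php8.2   # example version; pick whichever you need
```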
Of course, in the years since the PPA system was introduced we've seen a lot of projects push into reproducible builds, which somewhat negates that concern, but there are still a lot of us who would rather not go through that process for every random binary we want to run. Having a third party we inherently trust, because they built the rest of the operating system, building the random packages we want has an appeal. Free hosting by the OS vendor is nice for the devs/packagers, too.
And what's even worse, if you install Docker containers you don't build and manage yourself, you're pretty much right there again with "I don't know or trust" as your means of security.
This makes it easy to take the full Debian build of a package but add a patch or step the version. That's useful when you need to patch a package or can't wait for an upstream security fix.
Too often I see people building upstream packages "by hand" in those cases. The packaging tools are great and any Linux user is greatly helped by taking a few minutes and learning the basics of apt preference files, package selection and source packages.
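A minimal sketch of that basic workflow, using nginx purely as an example package and "+mypatch" as a made-up local version suffix (requires a deb-src line in sources.list and the devscripts package for dch):

```shell
# Grab the full Debian source package and its build dependencies
apt source nginx
sudo apt build-dep nginx

cd nginx-*/
# ...apply your patch here, or bump the version...

# Record a local version so apt can tell your build from Debian's
dch --local +mypatch "Rebuilt with local patch"

# Build binary packages without signing
dpkg-buildpackage -us -uc -b

# Install the result
sudo apt install ../nginx_*mypatch*.deb
```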
It's a very famous computer science paper, pretty easy to read. Nothing niche or controversial. I'm sure you'll find it interesting.
The fact that you can't achieve the ideal does not mean we should claim defeat.
Never heard that term before, but it does, in fact, seem to describe a lot of Canonical's issues in the past decade.
I, personally, don't have any examples ready to provide you but I no longer subscribe to any "general discussion" lists.