
Debian 10 “Buster” Released - thekyle
https://www.debian.org/News/2019/20190706
======
yaantc
As in all threads on Debian stable, I see criticisms that stable software is
stale / too old. That really misses the point: this is not a bug, it's a
feature of stable!

Stability doesn't only mean "lack of (big) bugs", it also means that it's
dependable: it won't change under you, and what is inside is well understood
(including limitations). This requires using mature enough software. In any
non-trivial piece of software, novelty brings exciting new features but
also, in some cases, regressions. There's an inherent trade-off between
freshness and stability in this sense.

Debian stable is made for stability in the full sense, like Red Hat. If one
prefers something more recent, with more frequent changes, that's fine.
Either use a distro with more frequent releases (e.g. regular Ubuntu or
Fedora), or even a rolling one like Arch, or Debian testing (sort of) or
unstable. There's room for all of this; it's just different trade-offs.

Lastly, as I've said elsewhere, it's possible to mix and match. One rarely
needs the latest of everything, and it's possible to use stable as a base to
avoid problems, adding fresh versions on top for only the software one cares
about. With the various language package managers and Flatpak/snapd this is
becoming very easy. For some it may be a better trade-off than moving to a
rolling distro.
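As a concrete sketch of that mix-and-match approach, Debian's own backports archive lets you keep a stable base and opt in to newer versions of just a few packages (the package name below is only a placeholder, and the commands need root and network access):

```shell
# Hypothetical sketch: enable buster-backports on a Debian stable system.
echo "deb http://deb.debian.org/debian buster-backports main" | \
    sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
# Backports are never pulled in automatically; you opt in per package:
sudo apt install -t buster-backports some-package   # example package name
```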

~~~
alxlaz
This is _particularly_ useful in the context of today's release policies.
Between the ease of updating, the (perfectly justified!) "release early,
release often" policy and the occasional rewrite frenzy, many open source
projects _never_ have a truly stable release. On every release, at least some
of the components or new features are, at best, beta-quality. The next release
(typically) solves some of the bugs, but introduces new ones -- because it
includes a new (and pretty much unfinished) feature, or something got
rewritten and so on.

So if you're running an "unstable" system, you get a new set of bugs each time
you upgrade, and a new set of workarounds, and so on. Debian's stability means
that, at the very least, you get to work around one set of bugs every two
years or so.

It's not ideal, but a lot of people prefer it that way, and frankly, I
understand where they're coming from. It is a trade-off, just like it's a
trade-off to run distros with more frequent releases or a rolling-release
distro.

~~~
dmix
While this is true, it widens the gap between stable and unstable. So your
only choice is either really old [1] or really new.

This is why I prefer Arch Linux's package selection, which sits nicely in
between new and stable. You get the latest packages but not the alpha/beta or
broken RCs.

(Not to start a distro flame war, but a reality of using Debian unstable is
that it's going to break often.)

[1] A while back, the 2yr+ old GNOME 3 package in Debian was way uglier than
the newly redesigned, HiDPI/retina-friendly version (around 3.10, I believe),
which took 2 years to show up. That was around the time GNOME was finally
catching up to Apple in basic attention to detail at the pixel level. Which
makes Debian a non-starter for the Linux desktop to me, but fine for
lower-security servers.

~~~
alxlaz
> While this is true it widens the gap between stable -> unstable. So your
> only choice is either have really old [1] or really new.

It does. It's not necessarily something that I consider a _good_ approach in
every way, I just understand it. I myself took it for a few years, during the
Great Rush to Rewrite Everything between circa 2012 and 2017. I used to run
Gentoo (at home) and Arch (at work) up to that point, but I temporarily moved
over to Debian (which I hadn't used since 2007 or so) until the storm settled
for a bit.

Yeah, it does mean that things that are broken tend to remain broken for two
years, but you also don't need a new set of workarounds for two years. I very
un-fondly remember that back in 2013 or so, when I finally said no more, my
machine was perpetually broken in some way.

(FWIW, I've been back on Arch on my main station for more than two years now,
it's gonna be hard to start a flamewar with me by arguing about Arch's package
selection :-) ).

I guess it depends a lot on everyone's usage patterns, too, there's a lot of
diversity in this space, more than many younger FOSS developers (and UX
designers!) realize. Some of the nastier things I do to my station are deeply
rooted in my first days of running Unices almost twenty years ago, and while I
do my best to make sure that not all old habits die hard, it can be hard to
convince people to let go of the good ones.

(Of course, "good ones" is subjective, hence the flamewars. I guess 50% of my
habits are good, I just don't know _which_ 50%).

------
charlesdaniels
Congratulations to the Debian team on their new release! I will certainly be
upgrading to it soon enough. Just for fun, I downloaded the ISO shortly after
it went up and recorded my first impressions of using it on the happy path
with the default GNOME environment. I would consider myself a technical user,
but I use GNOME rarely and am not familiar with it personally.

* I didn't notice anything out of place with the Wayland switch. I haven't tested suspend/resume/hibernate/external monitors yet, though. Personally, I won't be using Wayland on my main machine once I update it, as I'm a CWM user, and Wayland support probably isn't ever going to happen for that.

* My user account wasn't added to the sudo group by default, and using `su` broke something with PATH (/sbin wasn't in PATH after running `su`).

* GNOME apparently does not support configuring multiple wifi adapters separately (there was no menu for it), which was a problem since I had intended to use a USB adapter to download the nonfree drivers for my internal adapter.

* The problems that GNOME 3.X has always had continue to be present -- chiefly the UI scale is too large on low-res screens (the machine I was testing with has a 1366x768 display), and graphics performance continues to disappoint (animations stutter very noticeably on the "search" screen for example, this was on a 2nd gen mobile i7).

* The much-touted software center seemed to have several polish issues. The first time I launched it, it threw a bunch of inscrutable errors and didn't work until I rebooted the machine. After that it still threw inscrutable errors, but worked. Lists of software would take 60s+ to download (render? not sure which part was slowing it down).
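On the `su` point above: plain `su` keeps the invoking user's environment instead of building root's login environment, which is why /sbin never appears in PATH. A small root-free sketch of the inheritance behaviour (the actual fix is `su -`):

```shell
# A child shell simply inherits whatever PATH it is handed; nothing adds
# /sbin for you. Plain `su` (without `-`) behaves the same way for root.
inherited=$(env PATH=/usr/bin /bin/sh -c 'echo $PATH')
echo "inherited PATH: $inherited"
# Fix: `su -` starts a login shell that rebuilds PATH from root's profile,
# or prepend the sbin directories by hand:
#   export PATH=/usr/local/sbin:/usr/sbin:/sbin:$PATH
```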

From the perspective of a technical user, I don't care about any of these.
They either affect software I don't use, or could be fixed easily enough.
However, if the objective is to support non-technical users who don't have the
knowledge/time/interest to troubleshoot, I feel that this release falls short.

For reference, the hardware used to test was a ThinkPad X220 with a 2nd gen
i7, 12GB RAM, and an SSD.

~~~
alxlaz
> The problems that GNOME 3.X has always had continue to be present -- chiefly
> the UI scale is too large on low-res screens (the machine I was testing with
> has a 1366x768 display)

It's not just Gnome that does that. There's an entire generation of designers
that's cargo-culting mobile designs at the moment. Breeze, KDE's default
theme, is equally space-wasting. They're great if you have a touch screen, but
it would be really cool if you could turn them off on systems without touch
screens, i.e. about 99% of current Linux installations, I'm guessing :-).

If you're willing to fiddle around, there are a few good themes that don't
waste so much space.

~~~
frutiger
I totally agree with your overall point, but:

> about 99% of the current Linux installations

Not if you include Android!

~~~
ufo
In these contexts people usually mean "GNU/Linux desktop" when they say
"Linux". It is not as if we have any meaningful choice of desktop environment
on Android-land anyway.

------
_delirium
Some interesting bits in the What's New [1] part of the release notes, besides
just package updates. Pulling out a few:

* UEFI Secure Boot support

* AppArmor now installed by default, with profiles for various programs

* APT can optionally be sandboxed with seccomp-bpf (but not yet enabled by default)

* nftables replaces iptables as the network filter

* GNOME/Wayland by default

[1] [https://www.debian.org/releases/buster/amd64/release-
notes/c...](https://www.debian.org/releases/buster/amd64/release-notes/ch-
whats-new.en.html)
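For anyone who hasn't met nftables yet, a minimal ruleset gives the flavour of the new syntax. This is a hypothetical /etc/nftables.conf sketch, not taken from the release notes:

```
#!/usr/sbin/nft -f
# Hypothetical minimal ruleset: drop inbound traffic except loopback,
# established connections, and ssh.
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
        tcp dport 22 accept
    }
}
```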

------
simonw
PostgreSQL 11 and Python 3.7.2 in Debian Stable supported for the next 5
years? Nice!

~~~
juststeve
Rust and Go as well.

------
tannhaeuser
From my anecdata, I'm seeing Debian growing into the role that Red Hat used to
have in the enterprise, on VPSs, and as a stable build server (requiring older
gcc, backports, etc.), with Ubuntu now the most used OS by developers (more
than macOS, let alone Windows, at my recent customers). I'm quite OK with it,
as it keeps the door open for Devuan to get rid of systemd as RH loses
influence. With their focus on stable, I wonder if Debian could collect
maintenance contributions (code or cash) from enterprises.

~~~
RidingPegasus
Always interested in why exactly systemd is bad?

There's a lot of negative PR and not much evidence of why people shouldn't use
it. It comes across like the usual recommendation of "don't use Chrome, use
this fork instead", where the fork is run by a tiny team of people who face
very little consequence for messing up.

~~~
CameronNemo
My reasons for avoiding it:

* build system actively makes it difficult to build one component without the others, seemingly for political rather than technical reasons

* build system relies on Python, making bootstrapping more difficult (not quite desirable for "building blocks to build an OS from")

* needless overlap with countless other pre-existing projects including binfmt-support, lxc/runc, and libdbus

* cgroups2 hierarchy model which is unworkable for rootless containers unless they register with systemd via dbus or are running directly as systemd services

* The dependency model has lots of corner cases and gotchas. OpenRC and the LSB headers are much simpler and just as effective.

* Events are done in a hacky way: device events are performed by tucking "SYSTEMD_WANTS=..." away in a udev rule _somewhere_ , IPC and timer events have their own unit types. This model is not extensible, and it obscures the policy rules that can be used to start a particular service.

* All the positive PR and marketing has left a bad taste in my mouth. Why did RedHat fly Lennart across the world dozens of times just to sell this hunk of C to literally anybody that would listen? It was a waste of petrol and shows that systemd did not win on merits alone.

* does not make any attempts at portability, and does not follow any pre-existing standardized interfaces that can be easily emulated on other OSs.

* Reliant on dbus as its primary IPC mechanism, when it does not even need a bus in the first place. JSON over a socket could have been a simpler and cleaner choice.
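To illustrate the udev point above, the pattern being criticised looks roughly like this (a hypothetical rule file; the unit name and match keys are made up for the example):

```
# /etc/udev/rules.d/99-example.rules (hypothetical)
# When a matching block device appears, udev tags it for systemd, and the
# SYSTEMD_WANTS property pulls in a unit -- so the start policy lives in
# this rule file, not in the unit file itself.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="backup", \
    TAG+="systemd", ENV{SYSTEMD_WANTS}="backup.service"
```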

------
gjsman-1000
...and Wayland is the DEFAULT? Things must have gotten better quite quickly.

~~~
nextos
The problem with Wayland right now is that most (tiling) window managers need
a very significant effort to be ported from X11.

~~~
snazz
Maybe sircmpwn could comment on this: could wlroots (originally for Sway) make
the transition process easier for other tiling window managers wanting to port
to Wayland?

~~~
new_realist
Personally I’m waiting for a tiling Wayland WM that supports NVIDIA.

~~~
iforgotpassword
Wait. I haven't used Wayland at all yet, but why would the graphics driver
need to have support for a tiling WM? Isn't that in a whole different layer,
where it shouldn't matter at all?

Does NVIDIA's driver have any special code to make i3 work on X11, but
missing code to make sway work on Wayland?

~~~
secure
It’s the other way around: on X11, nVidia implements all the standard APIs, so
window managers like i3 don’t need driver-specific code.

For Wayland, nVidia only implements the EGLStreams buffer management API,
whereas other drivers implement the GBM API. Individual compositors (like
sway) need to add support for EGLStreams.

See also
[https://wiki.archlinux.org/index.php/Wayland#Requirements](https://wiki.archlinux.org/index.php/Wayland#Requirements)

~~~
makomk
Note that Wayland doesn't have a full driver abstraction layer in userland
like X does. Every Wayland compositor is expected to call into the Linux
kernel drivers for the graphics card directly. GBM is just a thin wrapper that
hides the fact that Linux never standardised the buffer creation interface
across different hardware: it assumes that all those driver-specific
interfaces wrap essentially the same shared Linux buffer management code, and
it doesn't abstract stuff like modesetting at all. NVidia managed to
reimplement enough of the Linux modesetting interface that existing code
mostly works, but their own buffer management infrastructure is apparently
different enough that they couldn't make it look like the one existing Wayland
compositors expect to use. So they ended up creating their own slightly
higher-level API instead.

------
lossolo
I run Debian everywhere; it's super stable. I had a machine that ran
production jobs for almost 3 years without downtime (around 1000 days of
uptime), and it only went down because of a server power supply failure. I
have plenty of machines with 1+ year of uptime.

~~~
ohazi
I also run Debian (almost) everywhere. For those who want a rolling release,
Debian testing is also surprisingly stable. One of my laptops has been on
testing getting continuous updates for about 7 years now and still runs great.

~~~
_delirium
Even Debian unstable is reasonably stable for me (despite the name),
comparable to other rolling-release distros I think. Really not-ready-to-ship
stuff is uploaded to 'experimental' instead. The main precaution I take is
having 'apt-listbugs' installed. It checks for any new high-severity bugs in
the packages you're upgrading. You can take a look at them before deciding
whether to upgrade, which occasionally helps avoid installing something
broken.
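A sketch of that precaution (the package is real; the exact prompt behaviour described in the comments is from memory):

```shell
# Hypothetical session: install the hook, then upgrade as usual.
sudo apt install apt-listbugs
# From then on, apt upgrades pause to list grave/serious bugs filed against
# the versions about to be installed, so you can pin or skip them.
# You can also query a package explicitly before touching it:
apt-listbugs list some-package   # placeholder package name
```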

On servers I prefer Debian stable though. I like my mail server running
unattended with only security upgrades, and rolling-release distros tend to
introduce config-breaking changes exactly when I have the least time to deal
with them.

~~~
ohazi
You're describing my setup exactly. Stable for servers, testing for personal
machines, with apt-listbugs installed (the install guide recommended this at
one point, not sure if it still does). I also ran unstable for about two years
and, like you said, it was fine.

Testing is usually only a few weeks behind unstable though, so if your primary
motivation is to have relatively recent software and a rolling release so that
you never have to stop everything and deal with full system upgrade breakages,
then testing is great.

------
ohiovr
I upgraded a couple of days ago from stretch and was happy with the no-hassle
experience. I got newer NVIDIA drivers and CUDA 10.1, and finally managed to
figure out how to get NVENC encoding to work. I installed it fresh on a
virtual machine for a server project I am working on and was glad to find all
my scripts work as expected (about 100 of them).

------
nathancahill
From Node 4 to Node 10.15. Huge upgrade in JS land.

~~~
M2Ys4U
Were there any people _not_ using alternative repositories for Node?

If I'm deploying node to a server the first thing I do is set the system up to
use nodesource's repos.

------
xmichael999
Great job Debian team, personally I can't wait for the Devuan fork based on
this release.

------
awill
I wonder if the timing of this is significant. For people wanting a
rock-solid, free, stable distro, Debian and CentOS are two of the top
choices. Even though RHEL 8 has been released, CentOS 8 still isn't out. I'm
running CentOS 7 on my server, and it's getting long in the tooth. Debian 10
releasing before CentOS 8 will likely result in me giving it a spin.

~~~
petre
openSUSE is great as well. Stable, the Leap release is reasonably fresh with
an 18-month support window, and they have a rolling release too.

------
efiecho
I'm personally looking more forward to the Devuan "Beowulf" release. Maybe
this will happen soon now that Debian "Buster" is released.

------
perlgeek
Hah, we just got rid of our last Jessie box in our $work setup (with the
exception of some database boxes that aren't part of the regular upgrade
cycle).

I hope this next dist upgrade goes much quicker, due to the automations we've
built (config management with Ansible, Continuous Delivery).

------
shmerl
Congratulations! Hopefully libdrm will start moving soon, so Mesa master will
be buildable again.

------
paulcarroty
I don't like the claim that "old software is stable"; it sounds like cheap
marketing. If some software was released X years ago, nothing makes that old
release "better" or more "stable": bugs happen everywhere.

~~~
ris
That's not the sort of "stable" that is meant by "debian stable". It's
"stable" as in "doesn't change". Things shouldn't break randomly every couple
of months. You shouldn't wake up one morning and realize that one of your
applications needs to be ported to a new API to keep working. If something
works today, it should still work in 6 months, while still receiving security
fixes. You should be able to reliably narrow down weird bugs without it
becoming a heisenbug due to the shifting sands of your underlying software
distribution.

------
pleasant_truthz
So can anybody explain why they decided to split Python into python3 and
python3-pip packages?

Pip is not some random third-party library, it's part of the official
installation that you can get from python.org.

------
diehunde
People who have used both Ubuntu and Debian: what are the cons of using
Debian over Ubuntu? Also, which is the best-supported desktop environment
for Debian these days?

------
aorth
Great! Now just waiting for official tarsnap and nginx.org builds so I can
upgrade my servers. MariaDB already has Buster builds in their deb
repositories.

~~~
cperciva
Doesn't the existing Tarsnap package work?

~~~
chrismsnz
Not for me - seems it depends on libssl1.0 and buster ships 1.1

~~~
cperciva
Oh, right. Try the experimental packages: [http://mail.tarsnap.com/tarsnap-
users/msg01557.html](http://mail.tarsnap.com/tarsnap-users/msg01557.html)

------
Tepix
Does Debian 10 work on the Asus Tinker Board as-is? Or do I need a special
version? I look forward to running it!

~~~
Havoc
It'll probably run, but hw accel will be a pain. I'd rather go for Armbian.

~~~
Tepix
I don't need HDMI, I just want to run some network services on it.

Looks like Armbian Buster is already out:
[https://www.armbian.com/tinkerboard/](https://www.armbian.com/tinkerboard/)

~~~
Havoc
Lucky for you. The Tinker Board is actually pretty nice gear, aside from the
god-forsaken hw accel.

Keep an eye on the temp though; it seems to run hot. I managed to get mine to
behave without a fan by adding significant thermal mass (about a pound of
copper).

------
camdenlock
I do hope Debian will run well on emerging privsec-focused hardware like
Librem...

~~~
_delirium
It looks like everything except bluetooth works out of the box:
[https://wiki.debian.org/InstallingDebianOn/Purism/Librem%201...](https://wiki.debian.org/InstallingDebianOn/Purism/Librem%2013)

~~~
als0
And worth noting that enabling Bluetooth is as simple as installing the
firmware-atheros non-free package.

------
turrini
Is there wireguard support?

------
jimmyhoughjr
Yeah, but npm is broken. Software update is broken because something was
changed from testing to staging. It seems they didn't even test their
first-time setup.

------
snvzz
I notice m68k is not part of the release.

~~~
duskwuff
Hasn't been for a while. The last release to include official m68k support was
4.0 (etch).

From a practical standpoint, m68k is so old that it no longer seems feasible
to run any kind of modern Linux system on it. The last new processor in the
family (the 68060) was released in 1994 and ran at 75 MHz.

~~~
snvzz
Even if not tier 1 or whatever, I still sort of expected it to be there, as
the port certainly exists and is still alive.

------
ebg13
I love how it's 2019 and installing Debian still requires a PhD in forensic
psychology to find the download link.

~~~
mmphosis
2.4. Installation Media

    2.4.1. CD-ROM/DVD-ROM/BD-ROM
    2.4.2. USB Memory Stick
    2.4.3. Network
    2.4.4. Hard Disk
    2.4.5. Un*x or GNU system
    2.4.6. Supported Storage Systems

[https://www.debian.org/releases/buster/amd64/ch02s04.en.html](https://www.debian.org/releases/buster/amd64/ch02s04.en.html)

I am going to burn a CD-ROM and/or DVD-ROM. It's been a while and feels very
retro; I guess I could do this. What is a BD-ROM?

I am going to use dd to copy the image onto a USB Memory Stick. I've done
that.
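For completeness, the usual dd invocation looks something like this. The device name is a placeholder; double-check it with `lsblk` first, because dd will happily overwrite the wrong disk:

```shell
# Write the installer image to the whole stick (the device, not a partition).
# /dev/sdX is a placeholder -- verify with `lsblk` before running.
sudo dd if=debian-10.0.0-amd64-netinst.iso of=/dev/sdX bs=4M \
    status=progress conv=fsync
```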

I am going to do a Network install, no I am not going to do this.

Hard Disk install, this is how I installed it previously. Let me find my
notes.

Whether it's a CD-ROM, DVD-ROM, USB Memory Stick, or Hard Disk install, the
tricky bit is getting the computer to boot the image from that media. And the
tricky part about this is knowing the incantation for GRUB to do it. And the
additional tricky part is the mangled PC UEFI/Secure Boot settings. If it's
the Raspberry Pi, it's easy: I stick the USB Memory Stick in the Pi and go.
If it's the PC, it's, well, as you said: forensics.

I wish that I could download a little install program, and install a fresh OS
into one of the many free partitions on the PC, and that the installer would
automatically configure GRUB to recognize that new OS without the
incantations, and without stomping on other distros.

~~~
asveikau
I'm not sure what you mean, because I've never had to type anything manually
at the GRUB prompt in an installation use case -- the boot images seem to be
configured right, giving you a ramdisk and going into an installer.

But I will say:

> I am going to do a Network install, no I am not going to do this.

For Debian, netinst really is the best. Debian has _a lot_ of packages. I
haven't kept up with all the full media over the years [the first time I did
a Debian install was '99 or so], but it's always seemed pretty "completist",
and contains more than I'll ever use. With netinst you get a base system; it
gets you to a shell quickly, and you can apt-get exactly what you need
without a lot else.

------
epynonymous
How long before Ubuntu switches over to this? I don't even know if Ubuntu has
forked from Debian, or if it's pretty much the same but with different
package management?

~~~
thekyle
I believe that Ubuntu gets their packages from Debian unstable, not stable.
So the latest Ubuntu most likely already has packages very similar to (if not
slightly newer than) these.

------
phosphophyllite
Will Debian authors and die-hard fans someday understand that an old libc and
an old Firefox or GIMP are different things, and that one can really mean
stable while the other can be just outdated?

Is it possible that the desktop should be developed in a different manner
than the server?

~~~
unwrap
They could implement something like Fedora modules: you have a stable base
system, and then you have separate module streams for some software packages,
which are updated separately, with multiple versions supported concurrently.
For example, you can install the latest Fedora, and then pull the
postgresql-9.6 module (or 10, or 11, or all three in parallel), despite 11
being the "official" version.

[https://docs.fedoraproject.org/en-US/modularity/using-
module...](https://docs.fedoraproject.org/en-US/modularity/using-modules/)
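The workflow, as I understand the Fedora docs, is roughly this (the stream name is an example):

```shell
# Hypothetical session: pick a non-default PostgreSQL stream on Fedora.
sudo dnf module list postgresql        # show available streams
sudo dnf module enable postgresql:9.6  # switch this module to the 9.6 stream
sudo dnf install postgresql-server     # installs from the enabled stream
```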

~~~
kasabali
This is tangential but widely misunderstood, so I cannot resist the urge to
explain.

First of all, modules cannot be installed in parallel [1].

Secondly, it is not a new technological development at all. Modules are in
essence just a fancy version of maintaining multiple overlay repositories on
top of your base distribution. The only difference is that it adds a more
streamlined usage to dnf. You could do the same thing 15 years ago with yum
or apt by manually enabling/disabling repositories.

This is the easy part.

The real hard work that goes into modules is actually creating and
maintaining those separate modules with separate versions of the same
software stack. And that is the thing none of the distributions were willing
to do until recently. Heck, maintaining multiple versions of the same library
or the same program in a distribution release was a _taboo_. Distribution
developers dreaded the idea and avoided it as much as they could.

The jury is still out on how well it'll work in practice and how useful it'll
be (that is: will maintainers actually be eager to do the hard work of
maintaining multiple stacks in parallel, how long will they support each
version, how many versions will they maintain in parallel, etc.).

I wish the Fedora developers the best with the modules idea, and I hope it'll
prove successful, so that maybe other distributions will be more open to the
idea in the future.

[1] > Modularity brings parallel availability, not parallel installability.
Only one stream of a given module can be enabled on a system
[https://docs.fedoraproject.org/en-
US/modularity/architecture...](https://docs.fedoraproject.org/en-
US/modularity/architecture/consuming/)

------
bitmadness
I used to run Debian, but I switched to Fedora a few years ago and haven't
looked back. Fedora is always sleek and up-to-date, unlike Debian, which due
to its outdated packages always ends up looking like the plain step-sister. I
also strongly prefer DNF to apt. With that being said, they both play crucial
roles in the Linux ecosystem - Debian being the base for Ubuntu and Mint, and
Fedora being the distribution which upstreams the most code.

~~~
hlieberman
Running Debian testing is generally the practice I recommend on systems which
are single-user, like desktops and laptops. It's quite stable and very up to
date, except for a couple of months in the run-up to freeze.

Disclaimer: I'm a Debian developer.

~~~
blihp
As a long time Debian user, I think you're doing Debian a disservice giving
that advice. Debian testing is stable... except when it's not. Tell people
that it's _relatively_ stable, not that it's stable. And please don't tell
them to run their daily driver on it unless they know what they're potentially
getting into.

My own experience is that things can be broken for months in testing, and of
course good luck finding any help when that happens. For example, since the
freeze for Buster I had broken sound drivers for a couple of months, until
someone else found that it wasn't actually the drivers but another broken
package, which as of a month or so ago still wasn't fixed (though the problem
had been reported).

That's been my experience with testing for the last 3-4 Debian releases: it
works great... except for a handful of annoying bugs that seem to linger
almost until release. And _assuming_ you know exactly which package is broken
(which can be difficult to narrow down for driver/system level issues), you
may or may not get the problem addressed in a timely manner. Then there's also
the case of the occasional package that you depend on that disappears from
testing for a month or two for any number of reasons.

~~~
denormalfloat
> I had broken sound drivers for a couple of months until someone else found
> that it wasn't actually the drivers but another broken package that as of a
> month or so ago still wasn't fixed (but the problem had been reported)

I had the _exact_ same problem (the issue ended up being a MIDI program
called timidity). The main problem for me was that Debian doesn't give any
advice on how to report such issues (the website is firmly planted in the
early 90s), and the mailing lists I found all appear to be dead. I wish
Debian had a slightly more modern website.

~~~
blihp
That would be the one! I posted a Stack Overflow question along with my
workaround, not knowing the cause. Fortunately, someone else identified
timidity as the root cause. Definitely one of the stranger issues I've run
into using testing in recent years, but far from the only one.

