
The end of the Iceweasel Age (2016) - beefhash
https://lwn.net/Articles/676799/
======
ivanbakel
It will be interesting to see how this hits everyone in the Debian libre
downstream - Parabola currently provides only Ice*. Could we even expect users
to request that the Iceweasel look stick around, for consistency?

------
williamstein
RStudio would presumably present similar issues today?

~~~
baldfat
RStudio isn't in the official Debian repos for this very reason. It's also why
RStudio's downloads are hosted on their own site.

~~~
nerdponx
What is that reason? They enforce strict guidelines about using their logo and
name that are incompatible with Debian policy?

~~~
subway
The gist of it is that Debian patches everything to bring software into
alignment with Debian philosophy. They backport bugfixes, correct weird path
usage, and generally make the software play nicely with the distro.

Some developers are bothered by this patching and insist that any patched
version of the software is no longer the original software, so they make life
difficult for the distro by going after trademarks and artwork.

~~~
jancsika
Is there a _requirement_ for the Debian developer doing the patching to get it
accepted upstream or at least get upstream review before patching?

~~~
subway
When possible, DDs do submit patches upstream, but this isn't always possible
or appropriate.

Examples of patches unlikely to be upstreamed: sometimes upstream software was
explicitly developed as a tool for another distro, so Debian patches it so
that it is usable on Debian.

Sometimes upstream software will vendor libraries otherwise shipped with
Debian. Debian will often remove the vendored copy and patch the build to link
against the system package (sketched below, after these examples).

Sometimes upstream software has tests that fail under Debian's build system,
so Debian will patch the software so it builds and passes tests there.

Sometimes Debian disagrees with the default preferences set by upstream
software, and will patch in new default preferences.
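
To make the unvendoring case concrete, here's a hypothetical debian/rules
override; the --with-system-zlib flag is borrowed from packages like GCC that
offer one, and the details vary per upstream build system:

    #!/usr/bin/make -f
    %:
        dh $@

    # Link against Debian's zlib instead of the copy vendored in the
    # upstream tarball (assumes upstream's configure supports this flag).
    override_dh_auto_configure:
        dh_auto_configure -- --with-system-zlib

The vendored copy itself is typically dropped via Files-Excluded in
debian/copyright (or a repacked +dfsg tarball), so it never ships in the
source package at all.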

In a surprising number of these cases, upstream developers can become _super_
hostile. And it's kinda-sorta understandable.

When Firefox or RStudio breaks, users don't think to file a bug with their
distro or reach out to the distro maintainers for support (they should!).
Instead they reach out directly to the upstream developers, forcing the
upstream devs to field all sorts of support requests and causing them to grow
to hate the downstream distro maintainers.

The depressing thing is, we _really really really_ need both of these roles --
upstream developers, and downstream maintainers -- a separation of concerns
between "does the latest version of this code work correctly" and "is this
code reliably built/installed/configured in a target environment".
Unfortunately we seem to be moving away from that by letting upstream shoddily
craft and ship an entire rootfs in the form of a "container" and declare it
deployment-ready.

~~~
klodolph
Is containerization common outside of things like services? To me it makes a
lot of sense e.g. "I want to run Jenkins, just spin up a container" because it
reduces the number of configurations that the developers have to consider. But
it seems like the majority of packages will be individual libraries and tools.

~~~
subway
Containers are frequently used to bundle up an artifact for end users -- this
means they're almost always services/applications, as opposed to libraries
(which get crammed inside the container). Those libraries still get developed,
and still get installed inside the container even if they aren't the "direct"
product. Only now, as a sysadmin, I can't easily audit the versions of those
packages inside the container, because I have no guarantees about how they
were installed. One of the 18 Dockerfiles between `scratch` and
`some_application` might install a few bits of software from distro packages
in whatever rootfs they decided to base off of. The next layer may download a
tarball and extract it over the rootfs to throw down a few more of the app's
dependencies.
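
A hypothetical Dockerfile to illustrate (the package name and URL are
invented):

    FROM debian:stable
    # These deps come from the distro, so dpkg knows their versions...
    RUN apt-get update && apt-get install -y curl libfoo1
    # ...and this one is untarred straight over the rootfs, invisible
    # to the package manager.
    RUN curl -sL https://example.com/libbar-1.2.tar.gz | tar -xz -C /usr/local
    COPY some_application /usr/local/bin/

Nothing downstream of that second RUN can tell you, from package metadata
alone, that libbar 1.2 is on the box.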

When using software packaged by a distribution like Debian, I'm given pretty
strong guarantees that I can always audit the version and integrity of
installed software, and that when a bug is fixed in a common library _cough_
openssl _cough_, I don't have to spend as much time fretting about the places
a vendored vulnerable version might be hiding in my infrastructure.
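
For comparison, auditing a plain Debian host is a couple of commands (debsums
is in the archive; the package name here is just an example):

    # Exactly which OpenSSL build is installed?
    dpkg-query -W libssl1.0.0
    # Do the installed files still match what the package shipped?
    debsums libssl1.0.0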

That said, I don't mean to disparage the use of containers in general: I'm a
_huge_ fan of them. I just feel pretty strongly that they should be a software
deployment tool as opposed to a distribution tool. I get a bit uncomfortable
when I can't consistently generate an image "FROM scratch".
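
As a sketch of what generating that image with distro tooling can look like
(the paths and tag are illustrative):

    # Build a minimal Debian rootfs with debootstrap, then import it,
    # so every file in the image is accounted for by dpkg.
    sudo debootstrap --variant=minbase stable ./rootfs http://deb.debian.org/debian
    sudo tar -C ./rootfs -c . | docker import - mybase:stable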

