Thank you Chris for all your work on Debian, and congratulations on the release! I have been following your progress from back when you were the maintainer of the SWI-Prolog package for Debian, and it is awesome how much you and all other contributors have achieved within the project, to the benefit of all people who use Debian!
- old Arch's single config file, with its '@' syntax for parallelism, got me hooked; sure, it's different in the systemd days
- early, problem-free systemd adoption
- simplified distribution, close to upstream (I don't want to dig for separate src, dev, docs, etc. packages)
- their wiki is the one that speaks the most to my mind; I try to be gentle and objective, but every time I find solutions in a short amount of time, and even more ideas. They hit a very, very sweet spot for me. (Gentoo was like that before the data loss.)
- no installer; might seem stupid, but it's a bit easier to reason about. I don't have to learn an install framework; it's very bare and unixy.
- very thin tooling from Arch; Debian does a lot, but it's too heavy for my mind. Things might have changed since I last lived in Debian, but I run a few Debian live ISOs and derivatives and it always feels like "too much", administrative (like the Debian documentation)
- rolling by default; Debian has testing, but it feels riskier
- the AUR felt simpler (again) than custom apt repos
Also I might add that I distanced myself from the OS quest (or if I could I'd run a lisp or smalltalk fork or something similar). I'd be happy to hear your suggestions about my points if you have time for that.
> their wiki is the one that speaks the most to my mind
I'm a Debian user, and I love the Arch wiki. The Debian wiki is often stale and/or incomplete, and usually you're told to go to the mailing lists (which are an awkward way to get info). The Arch wiki is great at being clear and concise, and I find Arch's MediaWiki to be more easily legible than Debian's bare HTML.
Something I'd add (as an Arch user, but also a Debian user for projects).
I found Arch has quicker updates to the kernel and overall is super quick to adopt the latest tech. What made me completely switch to Arch for everyday use was the day the GTX 1080 came out: I could set it up on Arch with the beta drivers. If I wanted to do the same on Debian, I had to figure out how to update the kernel to a version not typically used, then find or figure out how to install the beta drivers, etc.
Debian is more stable, so when I need to do something where I don't need the latest drivers, I like to use it.
The AUR community package repo (with the yaourt tool) is definitely the biggest draw for me. It has basically everything you'd ever want as a Linux desktop user; whenever you come across a program online, it's almost always already available in the AUR if it's not in the main package repo (which often mirrors Debian's package availability).
A common issue with Debian-style distros is that you can install packages easily, but it often comes with lots of configuration afterwards. It doesn't "just work".
AUR community scripts are like apt, but for less popular programs that aren't supported in the main repos and, more importantly, for packages that come with more complete installation scripts, with the necessary configs and supporting tooling.
Like any OSS project, it's benefited most from a lively, active community.
ArchLinux is easily the best experience you will find for desktop Linux usage. Plus the Wiki is the best around for Linux.
It's the only distro where using GRSecurity is as simple as installing a package. That's just not possible on Debian.
I'd still use Debian for servers as most VPS companies don't support ArchLinux natively, the set-up process for Arch is a bit intense, and if anyone else needs to get access it's better for portability.
I'm 100% with you. Been getting into Smalltalk over the past year. Bought all the books, played with Pharo/Squeak/Cuis, listened to a ton of Kay lectures. Let me know if you come across anything especially interesting.
Not the OP, but I'm in the same situation: Debian was my first distro; these days I align more with Arch.
The ArchWiki is a gift to the whole community. Rolling releases, not testing. With Arch there's a feeling of owning your OS without going full LFS. I am currently planning a summer reformat of a laptop and I can't imagine not using Arch.
But those are concerns only for my personal computers. On my servers or quick VMs I still prefer Debian.
I've used Debian privately and professionally on servers and desktops since the '90s, altogether on several hundred systems. I've also used Slackware and LFS years ago. My personal laptop has run Arch for the last few years (with Debian stable as a backup partition); my work computer runs Debian unstable or testing (depending on the release cycle).
The reason I like Arch is that it's close to the source. Debian often adds more patches and layers, policies and complexity to the build process. In Arch I can get an updated package in hours, straight from the source origin, either built by the maintainer or built and packaged myself. That's very helpful if you are tracking, fixing, or reporting bugs to upstream developers. A true rolling-release model is handy if you'd like to be part of open-source development or need to get something working at the bleeding edge.
Note that Debian's added complexity and stability are much appreciated on all other systems; it's just a bit more difficult to pull straight from the source for many packages. I do not need, and would not want to run, bleeding edge on most systems.
I thought I'd answer despite not being the person you asked.
I run Arch because it's so bare by default, without an installer. As a power user, the fact that I know exactly how my whole system is wired together and what is there means when things go wrong or need to change, I know where to go.
That, and I often want to be on the bleeding edge, and the rolling release and the AUR make that super easy.
Not OP, but mostly in the same boat, except I still use Debian for my servers.
My biggest gripe with Debian is apt. Compared to pacman, it is just so much worse at just about everything. I realize that it has a slightly more difficult job, but still.
I have found myself too often in situations where I simply couldn't fix whatever apt was complaining about. With pacman you just specify -d and you're fine (or --force if there's a file conflict).
Also, I've never managed to successfully create my own Debian package. With Arch's PKGBUILD system, it's a breeze.
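For comparison, a complete PKGBUILD can be this short. This is a sketch for a hypothetical "hello-example" package; the name, URL, and checksum are placeholders, not a real package:

```shell
# Maintainer: you <you@example.com>
pkgname=hello-example            # hypothetical package name
pkgver=1.0
pkgrel=1
pkgdesc="Example package (sketch)"
arch=('x86_64')
url="https://example.com"
license=('GPL')
source=("https://example.com/hello-$pkgver.tar.gz")
sha256sums=('SKIP')              # fill in the real checksum

build() {
  cd "hello-$pkgver"
  make
}

package() {
  cd "hello-$pkgver"
  make DESTDIR="$pkgdir" PREFIX=/usr install
}
```

Then `makepkg -si` in the same directory builds and installs it; the file is just a shell fragment that pacman's tooling sources.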
It's hard to be very specific here - since I obviously don't remember the exact problems I have encountered.
However here are two examples[0][1] of the sort of problems I mean. Both are non-issues with pacman, I can simply choose to ignore dependencies entirely and fix the problem.
Also, I just realized I wrote "apt" before, but I really meant the packaging system itself, not just apt. Which also raises the question: why are there at least 4 separate programs for package management (apt, apt-get, aptitude, dpkg)?
Thank you for your hard work. I've just upgraded from Jessie to Stretch on my main laptop and the process is silky smooth.
Even though I only started using Debian with Jessie (was using Ubuntu and Arch before that), I've come to love and depend on Debian's quality, stability, and reliability.
Don't forget to verify the install medium, which is a little more involved with Debian.
If you're already running a trusted Debian system, then install the debian-keyring package. Packages are signed and verified, so those keys don't need further verification.
Otherwise, fetch the keys in [0] with gpg:
$ gpg --keyserver keyring.debian.org --recv-keys <...> # e.g. 0x6294BE9B
Then, verify the key's fingerprint with [0]:
$ gpg --fingerprint
Unless you don't trust your CA, this is good enough.
Finally, download the checksum and signature files, and verify the signatures:
$ gpg --verify <...> # e.g. SHA512SUMS.sign
$ gpg --no-default-keyring --keyring /usr/share/keyrings/debian-role-keys.gpg --verify <...> # if using debian-keyring package
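Put together, the whole dance is roughly this. A sketch only: the key ID is the example from above, and it assumes the ISO, SHA512SUMS, and SHA512SUMS.sign sit in the current directory:

```shell
# 1. Fetch the signing key (example ID from above)
gpg --keyserver keyring.debian.org --recv-keys 0x6294BE9B
# 2. Compare the fingerprint against the one published on debian.org
gpg --fingerprint 0x6294BE9B
# 3. Verify the signature on the checksum file
gpg --verify SHA512SUMS.sign SHA512SUMS
# 4. Verify the downloaded image itself against the checksums
sha512sum --check --ignore-missing SHA512SUMS
```

The last step passes only for files actually present, so you don't need the checksum lines for images you didn't download.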
With the files SHA512SUMS and SHA512SUMS.sign in the current directory, verification can be as simple as
$ gpg --auto-key-retrieve SHA512SUMS.sign
The key is retrieved from the user's default keyring or from keyservers. The usual keyserver pool (pool.sks-keyservers.net) has the Debian CD signing key. Whether we can trust that the key is the right one is another matter. It is signed by many Debian developers.
Right, if you're already in the WOT then there are better ways, but then you're probably familiar enough with GPG that you don't need any help. :-)
Most distributions have signed checksum files, but also post those checksums at an HTTPS location. I, and I suspect most people, just check against that and call it good. AFAIK Debian doesn't have that, and between using GPG or thinking "F* it, I'll take my chances", I suspect many would choose the latter. I was trying to give people who are security-conscious but not paranoid^W^Wlazy an option.
I have been running a little "Single Server LAMP Lifestyle Business" for 15 years now and it has been happily crunching away on rock solid Debian all the time :) All in all I spend a few hours per week on it and it pays all my bills. Thank you, Debian team!
From what I read [1] Debian 8 will be supported until April 2020 and Debian 9 until June 2022.
So in 2020 I will have to decide to either switch to Debian 9 or to Debian 10 which probably will be out by then. Is that correct? My feeling is that it might make things easier for me to skip Debian 9 and go directly with Debian 10.
I did the same with 7. My server used Debian 6 until I switched to Debian 8.
> If you use debhelper/9.20151219 or newer in Debian, it will generate debug symbol packages (as <package>-dbgsym) for you with no additional changes to your source package. These packages are not available from the main archive. Please fetch these from debian-debug or snapshot.debian.org.
No more shipping -dbg packages with full binaries. And less storage space is always a win.
-dbg packages never shipped full binaries (with a few exceptions for unusual libraries); they always shipped detached debug symbols. This change just makes them automatic and puts them in a separate archive.
Did not know that, thank you. Do you happen to know how it is done? When I pull the Debian tarball for nginx (which has a -dbg package with symbol files) I see:
> dh_strip --dbg-package=nginx-$(*)-dbg
Which is the exact same command I use in my rules file. But instead of giving me a -dbg package with symbol files debuild gives me a -dbg package with the unstripped binary. Not sure what I am missing. I am following the DebugPackage guide on the Debian Wiki[1].
With current Debian, you don't need to do anything at all, and in particular you should not pass --dbg-package. Instead, dh_strip will automatically create a -dbgsym package containing detached debug symbols, if your package contains a library or binary.
Also, make sure that you build with debug symbols enabled in the first place; the default CFLAGS should do that.
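To make that concrete, with current debhelper the entire debian/rules can be a minimal sketch like the following; dh runs dh_strip for you, which splits the detached symbols into a -dbgsym package automatically (no --dbg-package flag anywhere):

```makefile
#!/usr/bin/make -f
# Minimal debian/rules: the dh sequencer drives every build step,
# including dh_strip, which emits <package>-dbgsym on its own.
%:
	dh $@
```

This assumes a recent debhelper compat level; the point is only that nothing dbg-related needs to be spelled out by hand.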
Sorry, I'm not asking about current Debian. In your previous comment you mention that the symbol packages are not new. I'm wondering how people have been creating them on Debian 8.
Base docker image? Sounds like you are using Docker to run a complete OS... not sure if that's the best way to do either thing. (Containers: run the app, not the OS. Need a virtual OS? Run a VM.)
No, I use Docker to run a single process. You still need a base image. Some people use Alpine, some Ubuntu, some Debian and some roll their own. Here are the Debian base images:
My favourite change is the transition to GnuPG 2.1 as the default /usr/bin/gpg. Particularly the "trust on first use" (TOFU) trust model is a really good improvement.
Including Python 3.6, even as a non-default Python, would have required rebuilding all Python packages and handling the non-working ones (either fixing the problem or explicitly excluding Python 3.6 support for that package). There was not enough time to do all that.
I'm in the process of moving my company's main web app to Python 3, and standardized on 3.5 to match Debian 9.
But Python 3.6 has so many CPU & memory improvements (not to mention things like f'' strings) that I'm seriously considering installing a custom copy of 3.6... though I'm not sure if I want the burden of maintaining my own copy of everything that it will affect.
Then again... "Debian stable" being rock solid stable is why I stick with it for production; if their caution in this is the price I pay, it's worth it.
Basically, yes. While for a lot of software you can use backports to get a more recent version, this doesn't apply to software that is part of the toolchain (gcc, python, ...) because it would affect the rest of the archive.
You are left with third-party repositories until we come up with another solution, like Ubuntu's PPAs.
This also applies to the Perl version they shipped: Perl releases are supported for 3 years, Debian's for 5.
It's inevitable that you're going to get mismatches between OS support and "official" support for the specific packages that go into that release. Not all projects provide supported releases over a period of years, so distros just do their best to patch any issues that upstream has stopped caring about past that point.
In practice the only thing that's going to be a big worry is new security issues, which upstream is usually willing to go out of its way to fix for versions still used in the wild, even for technically "unsupported" releases; at least the Perl project is, I don't know about Python.
I'd expect most packages to be EOL from the perspective of upstream by then. It's a normal thing for distributions to backport fixes, sometimes even develop them themselves when required.
A lot of people porting older applications that depend upon base OS assumptions into containers will probably be using more full-featured containers. With greenfield applications I would expect more use of Alpine or even the scratch base image for people trying to deploy truly minimal containers.
As I mentioned yesterday on the "Upcoming" comment thread (https://news.ycombinator.com/item?id=14574287), if you're looking to start using Stretch in your Vagrant dev environment, we're uploading AMD64 & i386 boxes for both VirtualBox and Parallels providers to Atlas as I type this. (If you're reading this soon, make sure it's v1.2.0, v1.1.0 is based on RC5 from a few days ago)
Edit: the uploads are complete, v1.2.0 of debian9-amd64 and debian9-i386 are released.
If there is user demand for it, we can look into vmware boxes, and possibly hyper-v too.
Apologies if anyone feels this is off-topic/opportunistic - AFAIK all other Debian 9 boxes on Atlas target Virtualbox only, and while projects like Boxcutter (which we forked from) do support Parallels/etc, they aren't always the quickest to produce new boxes.
And experimental is called "rc-buggy". (Debian has the notion of "Release Critical" or "RC" bugs, which affect migration from unstable to testing, so I find the nickname "rc-buggy" for experimental hilarious.)
Actually a 400 MHz one. Through the help of eBay I upgraded it to 1GB RAM and 160GB HDD. It's currently built into a desktop arcade machine running MAME.
I have an awful habit of re-using old tech instead of throwing it out. Hopefully I can eventually get rid of the stuff that still works at the MIT FLEA or something.
I use Debian mainly for servers. I tried Debian 9 this morning for desktop; I don't like the hidden-by-default-activities UI, and D9 does not recognize my dual LCDs (Ubuntu has no issues with that), so it will be the same for me: all servers will be upgraded to Debian 9, Ubuntu LTS for the desktop, and the ArchWiki for documentation. Good for now, and thanks for the new release.
I've been running stretch on a Lenovo Carbon X1 for the last 2 years and it's been one of the first problem-free experiences running Linux on the desktop (Linux user for... 18 years). Really awesome. Thanks!
Some years back I was using testing. It was the year they changed over from GNOME 2 to GNOME 3. I couldn't use the stable release because my mainboard and CPU weren't supported.
Boy, was this a hard process. It took some months until my system worked without GDM crashes. After that I changed over to Arch and have been a happy Arch-er since then. I wouldn't recommend running testing.
I've heard that unstable is more like Arch linux and the rolling release model of Arch Linux works very well for me.
I've been running debian on an x1 carbon as well (gen 3). I was impressed that it 'just worked' to the point that I even got an inbuilt wireless modem, popped the SIM card in, and the only bit of configuration I had to do was, literally, choosing which of my ISP's APNs to connect to.
The problem I see with unstable is not the lack of stability, it's the time needed to check the changes when upgrading. Stable is much easier to track because it changes less (wait, that was the point, no? :-))
Can you please elaborate? I was always under the impression that testing is better for regular desktop usage. I've been running it for a long time now.
I like it as a secondary browser for its excellent support of multiple profiles but I run Ubuntu and had to switch to Chrome because Chromium doesn't seem to be updated promptly.
I don't think so, but some addons support per container settings. For example Cookie AutoDelete where each defined container can have its own whitelist:
I'm running the 4.6 kernel ("testing") on a MacBook Air 7,2 as my daily driver. It is rock solid. Wifi, backlight brightness control, and the FaceTime camera will not work out of the box, but they can be made to work.
I expected to spend a weekend getting it all working. It's not for everyone, but if you want to better understand Linux, running it every day is a great way to be fully immersed. It's also nice to have a desktop environment (Openbox/LXDE) that only uses 263MB of RAM on startup instead of a massive OS full of features I do not need or use. So yeah, let's say the machine rarely swaps.
This is by no means exhaustive, but some things that I'm running and work well for me:
I've been running Debian on a MacBook Pro 11,1 for a couple of years and everything works great. The 7,2 Air has similar hardware and I would expect that you'll have a good Debian experience. Battery life is slightly worse than running macOS but performance is slightly better.
I don't understand what you're trying to say. I want to use Debian as my daily OS because I don't like how macOS does things, and I got a MacBook because they are good quality and have a good warranty and I can run all 3 platforms (Win+Lin+Mac) on it if I so desire.
I don't wanna argue, but keep in mind that I'm in Serbia and that I bought the MacBook with a "special financing plan" which the Apple reseller offered but not any of the generic stores that carry such ultrabooks as the XPS 13 or the Spectre...
Nice to see this release; I'd already started upgrading some of my lightly-loaded servers over the past few weeks but the "real" ones will wait a little longer.
One thing that is new in this release is the availability of mod_http2 for Apache. I'm looking forward to seeing whether it improves the response times of my various websites.
At last the freeze is over. It started to be a bit annoying to build Mesa from source when stuff like newer llvm and libdrm are hard to squeeze into frozen Debian testing.
I suppose the idea of reducing freeze time with "always releasable testing" didn't really work out (lack of resources?).
Testing is usually a bad distribution to use for anything in production. If something breaks, it will stay broken for quite a long time until the fix makes it out of Unstable.
Unstable is almost always a better choice. Things may occasionally break, but fixes will arrive very quickly.
I'm talking about regular desktop usage, not about servers. And things break in testing only if maintainers messed up transition. Related packages should come in consistently otherwise.
Perhaps they are trying to improve the situation, but I read Debian security advisories regularly, and they are almost always published without a fix on the testing branch.
Unstable is the Debian branch to use if stable is too slow for one's tastes. The testing branch is just for testing and it's really not safe to use on the internet.
Stable usually doesn't get such issues. They are related to transitions. It sometimes happens that some packages get stuck in unstable, for example because they don't build on some architecture, while related packages go through. As a result, testing gets an inconsistent combination while unstable is OK. It happens when someone didn't take care to specify that those packages should only move to testing together.
This won't happen with stable, since it will get the consistent result in the end.
Stable moves much slower (if at all), so breakages are much less likely, and packages are battle-tested. A cutting-edge or rolling distro, like Unstable, gets a lot more changes, and you get to do the battle-testing yourself.
Basically you install the Stable system to get a stable system, and then 'bring forward' just those packages you need.
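Concretely, "bringing forward" a package usually means backports. A sketch, with the package name as a placeholder:

```shell
# Add the backports repo for your release (here: stretch)
echo 'deb http://deb.debian.org/debian stretch-backports main' \
  | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt-get update
# Pull just the one package from backports; everything else stays stable
sudo apt-get -t stretch-backports install some-package
```

Because backports aren't installed by default, only the packages you explicitly request with -t come from there; regular upgrades still track stable.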
I needed gcc 6 for testing and decided to upgrade the whole system about three weeks ago. It was very easy and it's still pretty stable. I might do a fresh reinstall now...
If you go from stable to testing (or unstable), it should work relatively well closer to release time when bugs in upgrade path are mostly ironed out.
I was talking about the freeze itself though. It affects testing too which is normally rolling. So if the freeze is too long, things start becoming annoying.
Debian Stable is just that: stable. The default browser is an Extended Support Release (marked as such by the vendor, not Debian), so it'll stick around longer.
An ESR is more useful for use cases like education or companies that roll their own SOEs and like to document things for users. Browsers love randomly changing the UI or other behaviour on a whim (and on a 6-week cycle). So, it's a browser's ESR by default, and you can always install another one.
> The default browser is an Extended Support Release (marked as such by the vendor, not Debian), so it'll stick around longer.
But Debian Jessie was the "Stable" version for roughly two years. Mozilla's end-of-life for ESR 52 is on June 26, 2018. If Stretch has the same lifetime as Jessie, that leaves roughly one year during which Firefox ESR 52 will be end-of-life.
So how will Debian Stretch remain stable during a period when it is shipping an end-of-life Firefox for which-- as Mozilla states-- "no further updates will be offered for that version?"
This is the case for a lot of software. Usually, distribution support is longer than upstream's, so distributors have to backport the patches. That's what is done for Debian.
There are a few exceptions: any Oracle product (Oracle doesn't provide security patches and discourages people from making them), Chromium (patches are too big), and Firefox (likewise). For Chromium, the exception is to use the latest version. For Firefox, the exception is to switch to the next ESR once the current one becomes unmaintained.
>Unfortunately, this means that libv8-3.14, nodejs, and the associated node-* package ecosystem should not currently be used with untrusted content, such as unsanitized data from the Internet.
Jeez. I guess this means most people will be using other node binaries in production.
Really looking forward to this release. We run Kubernetes with Debian 8. One of the big pain points has been needing to enable Docker memory accounting. I read that memory accounting will be enabled by default in Debian 9. Is that still the case?
Debian stable releases have a 2+ year lifespan so even 1.9 would be out of date for most of the distro's life.
Luckily Go is quite easy to install and use from an isolated directory, and the majority of Debian usage here would be as a target OS where the Go compiler version doesn't matter.
Also jessie shipped Go 1.3 but it was updated to Go 1.7 in jessie-backports, so you can probably expect further Go updates in stretch-backports when it's released.
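The isolated-directory install really is just a tarball unpack. A sketch, where the version is an example and the tarball is whatever the Go project publishes on its download page:

```shell
# Unpack the official tarball under /usr/local (assumes it's already downloaded)
sudo tar -C /usr/local -xzf go1.8.3.linux-amd64.tar.gz
# Put the toolchain on PATH (add this to ~/.profile to make it stick)
export PATH="$PATH:/usr/local/go/bin"
go version
```

Because everything lives under /usr/local/go, removing or swapping Go versions never conflicts with apt-managed files.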
Unless you need Debian for some reason, why not use one of the Arch-based distributions, or Fedora or some other Red Hat-based one? Those always have the latest packages.
There are a lot of ways to handle golang for Debian.
Here's a quick command to build a golang-1.8.3 package with fpm (download and extract go1.8.3.linux-amd64.tar.gz first; get fpm from https://github.com/jordansissel/fpm):
#!/bin/bash
DEBIAN_REVISION=1
fpm -s dir -t deb -n golang-go -v 1.8.3-$DEBIAN_REVISION \
    go1.8.3.linux-amd64/bin/go=/usr/local/bin/go \
    go1.8.3.linux-amd64/bin/gofmt=/usr/local/bin/gofmt \
    go1.8.3.linux-amd64/bin/godoc=/usr/local/bin/godoc \
    go1.8.3.linux-amd64/=/usr/local/go
Congratulations. The return of Firefox branding makes me feel nostalgic. I remember using Firefox 1.04 on Debian in the early 00's. This was in the golden age of Firefox, when every new release was an improvement and it was a lean non-bloated alternative to other browsers.
In the past, Debian was considered one of the most stable Linux distributions available. Stability and quality were priorities above anything else. However, around 2014 something changed, when systemd was forced into Debian in a way that would never have happened before the new generation of developers took over the project.
Maybe this is just something we have to get used to; young developers seem to value ease above quality and stability, which also explains the current flood of Electron apps.
Half of the technical committee chose systemd. None of them are newcomers. The casting vote in favor of systemd was cast by Bdale, who has been a Debian developer since the very beginning.
systemd was just a symptom. Multiple developers who had been working on Debian for many years left the project in that period, for various reasons.
https://news.ycombinator.com/item?id=14579080