A proposal for an always-releasable Debian (lwn.net)
102 points by cpeterso on May 10, 2013 | 33 comments



I have to wonder why Debian needs 10 months of developing/bugfixing software in the first place. I mean, most of the software is already supposedly "stable" releases from upstream, is it not? Maybe Debian should focus on working more closely with upstream developers so that the software would be more directly usable out of the box.


Two headlines I've read in the last 24 hours:

  "International Space Station Goes Open Source, Dumps Windows XP for Debian"
  "Google's cloud dumps custom Linux, switches to Debian"


It's not so much the individual packages as the interactions between them, and the dependencies. In this release we got multiarch, which is quite an overhaul of how cross-compilation works (especially relevant for running 32-bit, often binary-only, non-free software under the amd64 architecture.

There are such issues as which version of libxml, libopenssl and even glibc certain packages work with (or especially, do not work with).

So, e.g., nginx upstream might test (mostly) against an upstream release of openssl and some libc, while the Apache web server might be more conservative (this is a made-up example; tomcat, especially legacy versions such as 6, might be a better example than Apache httpd).
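The multiarch overhaul mentioned above is easy to see in practice. A minimal sketch (the package choice is just an example; the commands are printed here rather than executed, and need root on a real system):

```shell
#!/bin/sh
# Sketch: day-to-day multiarch on wheezy. Enable a second architecture,
# then install 32-bit libraries alongside their 64-bit counterparts.
# Printed rather than executed; run the steps as root on a real system.
steps=$(cat <<'EOF'
dpkg --add-architecture i386   # allow i386 packages on an amd64 box
apt-get update
apt-get install libc6:i386     # the :arch suffix selects the 32-bit build
EOF
)
echo "$steps"
```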


Multiarch took so long to ship that most of the proprietary binaries are now available in 64 bit. It is still marginally useful.

The testing issue is a big problem, as upstreams often have poor tests, but that is really an upstream issue. Maybe the best solution is to provide CI frameworks for upstreams to use that easily support current and future Debian versions (something like Travis, but with more OS versions).
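Something along those lines can already be approximated today with pbuilder chroots. A rough sketch of a multi-release build matrix (the helper function is hypothetical, and this is a dry run: the commands are only printed; pbuilder, root, and an unpacked source tree are assumed on a real system):

```shell
#!/bin/sh
# Sketch: a poor man's multi-release CI using pbuilder chroots.
# Hypothetical helper: for each release, ensure a base chroot exists,
# then build the current source package inside it. Dry run only --
# the commands are printed; drop the echos to execute them for real.
build_matrix() {
  for dist in "$@"; do
    base="/var/cache/pbuilder/$dist-base.tgz"
    echo "pbuilder --create --distribution $dist --basetgz $base"
    echo "pdebuild -- --basetgz $base"
  done
}
build_matrix stable testing unstable
```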


) ftfy.

Even so, 10 months is a bit much. If the freeze were shorter, then Debian would have newer software. As it is, Debian releases are made roughly every 2 years, and even when Debian ships, it's already several months behind.

This isn't a problem for most servers, but it's a definite pain for developers like myself who use newer versions of software (extreme example: I use Arch) and then try to backport to whatever's in the Debian repository. Sure, newer versions of software can be installed, but that kind of defeats the purpose of running Debian in the first place.

A shorter freeze means newer software in a release, which may reduce the need/temptation to use a package not in the stable repository, which is better for everyone.


But, for example, the interactions between the pieces of software, as well as the Debian-maintained software itself, might not be stable.


In short form: I think the turnaround time is increasing because Debian has to wait for conflicting interests to settle. Solution: stop waiting for everything including the kitchen sink, and only worry about including the highest-yield and most necessary software in "stable". Let the newest desktops, for example, stay in unstable and layer them on stable (Ubuntu could be a themed, faster-moving layer that uses as much of the stable core as possible). With the decreased turnaround time for stable iterations (the Debian ecosystem's "core" platform), less-stable layers (derived distributions) would actually become less of a burden to develop, since they would have a base they can depend on - one that isn't too far from the bleeding edge, but is still solid as a rock.

In long form: I think Debian should focus on getting the slowest-moving targets and major package-management design and minimally necessary policies well before, and above, faster-moving targets like UI and experimental features, etc... This means rethinking repos with an eye towards community division of labour along lines like turnover time (some tools simply haven't changed much in 20 years), popularity/necessity (kernel support, bootloaders, libraries), and bleeding-edge version expectations (desktop). In practice, Debian is pretty monolithic. People mostly install what's in their primary repo (the walled garden problem).

In practice, IMHO, this would mostly boil down to making it easier to pull in separate repositories under one install, layering them on those provided by stable and its official installer, and other special-purpose repositories (for example, with a distro-specific virtual package to articulate the dependencies and make upgrades/downgrades/sidegrades clear and easy). I recognize a lot of motion in related areas (blends, debian live, stuff outlined in the article, etc - and a technically savvy user can modify their sources), but I think that more than necessary is being attempted under the name "Debian Stable", and this has been distracting from "stability" (and costing effectiveness). Specifically, the fastest-moving targets should not be considered part of Debian at all, IMHO, but separate distributions along lines that maximize stakeholder ownership and improve turnover time. It then becomes necessary to allow seamless distribution (repository) layering and interdependence (something the package manager can already do, but which is difficult in practice for more than a few sources, even for experienced users). In essence, the package+repo system does not currently help much with forks and merges at that level - even though that is one of the most common problems in open software development. (Git has spoiled me.)
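For what it's worth, apt can already do a crude version of this layering with pinning. A minimal sketch, assuming a testing line has been added next to the stable one (see apt_preferences(5) for the priority semantics):

```
# /etc/apt/sources.list -- a stable base plus a faster-moving layer
deb http://ftp.debian.org/debian stable main
deb http://ftp.debian.org/debian testing main

# /etc/apt/preferences -- prefer stable; only take a package from
# testing when asked for explicitly (apt-get -t testing install foo)
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 200
```

With testing pinned below 500, its packages never upgrade over stable ones unless you request them, which is roughly the "layering" being described, just awkward to manage by hand.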

Ideally, IMHO, all Debian-based distributions should be installable from the same installer, even if they may want to provide their own (like Ubuntu), something that would be made possible because the packaging standards would have solidified to the point that other distro-builders (repo, and special-purpose package maintainers) can rely on them (my hope for "stable").

That said, I am just another "user" who prefers things not break over them looking brand new.


I dislike how "LAMP" is considered a first class citizen, still. Why is "LAMP" preferable to things like nginx, PostgreSQL, Python, Ruby, or other now extremely popular alternatives?


Those may be extremely popular alternatives. But they are alternatives to something, and that something is LAMP. LAMP is a standard one can count on being available on linux-distros, and it is a standard which makes it very easy to develop something for the internet on. As much as I like Ruby, I would not dare to assume that Apache+PHP alone isn't way more popular in terms of general use.

The one thing though that I count on getting replaced on that stack is MySQL, though the M might stay. If Debian hasn't done that already.


But they are alternatives to something, and that something is LAMP.

This simply isn't true. Web development is done in many languages, and while the P in there stands for several of them, it's entirely inaccurate to suggest that any or all of them are somehow a default value deployment or development.

Web development deployment environments are not the sole purpose of Debian, and the purpose of non-'P' development languages is not to serve as potential but perpetually-sidelined alternatives to this arbitrary default.


It's not just for devs, but loads of people just want their Wordpress/Drupal/MediaWiki/Gallery/phpBB/Joomla website to work, or they want to sell shared hosting to customers who want that.

The fact that Debian has a "task" for LAMP is really a reflection of how simple the architecture is: you can press a button to install LAMP and then you can install any one of the applications I mentioned on top of it. LAMP is not a "default" by any means, it's just something easy to stick in the package manager, something that a lot of people use.
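The "button" in question amounts to roughly a one-liner. A sketch with wheezy-era package names (printed rather than run, and root is needed on a real system; tasksel's web server task, or Ubuntu's lamp-server^ task, do much the same):

```shell
#!/bin/sh
# Sketch: the one-command LAMP install, spelled out with wheezy-era
# package names. Printed rather than executed; needs root for real use.
lamp="apache2 mysql-server php5 libapache2-mod-php5 php5-mysql"
echo "apt-get install $lamp"
```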

Sure, you can also press a button to install Rails or Django, but those are more self-contained and don't need to be packaged separately as tasks.

The reason LAMP is mentioned as important is because a lot of people do use it and do depend on it. In an ideal world, every application would work on Debian. But with finite resources, you make a list of the things you want to test the most.


Honestly I'd settle for an up-to-date version of nginx.


PHP accounts for the majority of lower-end web dev, and even some percentage of the high end.

Having a LAMP stack up and running in one command is great for people who just want to FTP up a bunch of folders and be done with it.

People developing on Rails etc are probably developing against some specific version rather than "whatever happens to be in the repo" and are more likely to be either using a specialist environment like heroku or bring their own puppet setup to the party.


And this is why sites get hacked so much.


How so? Other stacks are no strangers to vulnerabilities.


Even still, people are using the mysql_* functions, because mainstream PHP culture is one of ignorance.


And while you are sitting around ycombinator trashing php and talking about which stack is superior, I've gotten loads done with it.

You can write awful code with php or you can write great code with php or something in-between, it's up to the developer.

The fact that you go so far out of your way to trash php, going to the lengths of questioning "why lamp is treated as a first class citizen", actually just goes to show your own utter dependence on your development environment and existing code base.

So if there is crappy code out there, suddenly you become incapable of writing good code? No. So why sit around on sites and trash a programming language, when the final product depends entirely on the skill level of the person writing it?

You touch php and all you write is shit code? Maybe you are just shitty at programming? You don't "have" to use ruby or python to write clean, well thought out code. Please get off your high horse and understand that people get shit done using the lamp stack, and give it a rest.

Do you berate any other programmer anytime you see them writing perl as well? You are not superior to anyone because of the technology stack you use.

edit: how about you downvoters actually come up with a coherent response and tell me why I'm wrong, instead of treating this site like reddit and downvoting what you don't understand.

edit2: I've noticed you have a nice github repo, which includes https://github.com/radiosilence/Ham . All of your code looks clean and very decent; I'm just pointing out that php is obviously a good enough language for you to waste your own time developing in it!


True, but LAMP and vulnerabilities are BFFs.


Actually, the situation is worse than you make it look. Most sites are running WordPress, which isn't particularly secure out of the box, and only gets worse as poorly written plugins are added.


Debian is a distribution for bureaucrats. For installations that are just being maintained. If you are anything close to a hacker, do yourself a favor and use something else.


Hackers also appreciate having a stable, even conservative, foundation on which to build, especially when it's time to put a service in production, scale it to multiple servers, and keep up-to-date with security patches.

There is nothing to stop Debian users from using Rails, Django, Node.js, or any of the other alternatives to LAMP. Some of them are even packaged in Debian.


I'm running LMDE and I've got all the goodies by default. If there is something else that the package manager doesn't cover, I do this amazing thing... I download the source and make! Crazy, I know.


Save the lecturing, you can do this in virtually any OS.


To be accurate, it was sarcasm, not lecturing.

This is lecturing: "If you are anything close to a hacker, do yourself a favor and use something else."


Edessa?


The concept of reference installations reminds me of Ubuntu main versus universe, or the main RHEL repository versus Fedora's EPEL repo. Makes sense to me. Perhaps Debian should use separate repositories too, so the difference is highly visible when we browse packages.


Well, technically, there is already "contrib". I guess it is a little crazy to have as many packages in "main" as there are. Impressive, yes, but also a little crazy :-)


Now that backports is part of Debian, I think most of the issues with "out of date" packages are much less painful than they used to be. I suppose factoring main into "core" and "non-core" would make maintaining backports easier too - you could then test building a package against a smaller set of core packages and libs.
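Enabling backports on wheezy is a two-line affair now that it lives on the official mirrors. A sketch (package name chosen just for illustration; nothing is executed here):

```
# /etc/apt/sources.list -- backports is now on the official mirrors
deb http://ftp.debian.org/debian wheezy-backports main

# then pull individual newer packages explicitly:
#   apt-get update
#   apt-get -t wheezy-backports install nginx
```

Nothing comes from backports unless requested with -t, so the stable base stays untouched.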

Packages needing newer versions of glibc, automake/autoconf, gcc/clang/ruby/python/perl etc are still tricky though.


Currently contrib is used for packages that have compile-time or runtime dependencies on packages outside of the distribution (e.g. with dependencies in non-free).


Just two things.

Debian guys should really do something like OpenBSD package flavors. Sometimes you just don't need X, Y, or Z support, and it's a pain to do this kind of stuff with Debian. I'm not sure that is even possible in such a rigid package management system. The other thing would be to allow files/libs as dependencies. Those two things would give people more freedom and flexibility, but they have from the start tried really hard to be anal about packages (even given the fact that the .deb system is, at the same time, the best and worst thing about Debian). So, I guess, just forget about it.
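Short of real flavors, the usual workaround is rebuilding the package with features switched off. A hypothetical sketch (the package choice is arbitrary and the rules edit is hand-waved; the commands are printed rather than run, and assume the deb-src lines are enabled):

```shell
#!/bin/sh
# Sketch: approximating an OpenBSD-style "flavor" on Debian by
# rebuilding a package without a feature. The package choice is
# hypothetical; commands are printed rather than executed.
pkg=nginx
steps=$(cat <<EOF
apt-get source $pkg             # fetch packaging + upstream source
apt-get build-dep $pkg          # install the build dependencies
# edit $pkg-*/debian/rules and drop the unwanted ./configure flag
dpkg-buildpackage -us -uc       # rebuild an unsigned .deb
EOF
)
echo "$steps"
```

It works, but you own the result: apt will happily replace your custom build on the next upgrade unless you pin or hold it.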


First thing I do whenever I install a fresh Debian copy is swap out stable for testing so I can get Python 2.7, then add the official repos for nginx.
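The swap itself is a one-line sed over sources.list. A sketch, demonstrated on a sample line (on a real box you would run the same expression against /etc/apt/sources.list, then apt-get update && apt-get dist-upgrade):

```shell
#!/bin/sh
# Sketch: move apt from the stable codename to "testing". Demonstrated
# on a sample sources.list line; apply the same sed to the real file.
echo 'deb http://ftp.debian.org/debian wheezy main' |
  sed 's/\bwheezy\b/testing/g'
# -> deb http://ftp.debian.org/debian testing main
```

Tracking "testing" by name (rather than the next codename, jessie) means you ride the rolling release permanently instead of freezing with it.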


Now that Wheezy is Stable, you shouldn't have to do the first step.


The Debian FAQ recommends using unstable because it can take some time for bug fixes to go into testing. I'm on stable right now but I'd been using unstable for about 6 months and it worked great.




