I didn't see the words "bug report" anywhere there. Debian is a volunteer organization, open to anyone. If you want to improve it, sign up and help instead of just ranting about crazy paranoid theories like:
> It's simply a tactic to make sure that you are stuck on Debian.
Of course I'm biased, as I'm a former Debian maintainer, but I think that Zed's connection with the reality of the situation is tenuous, at best, in this case.
Making a distribution is difficult, and doing so with thousands of volunteers who are not working on it full time (and since you're not paying for it, they don't really owe you anything) and who are distributed throughout the world adds to the difficulty; so yes, there are bugs and problems and challenges to overcome. That said, if each author of each package in Debian got their way about the exact location of files and so forth, the system would be utter chaos. If you think your package isn't being treated right in Debian, get on the mailing list, file a bug report, make your case, and get things fixed, rather than treating Debian as "the enemy"... Sheez.
There is a reason that Debian is being singled out, which you ignore - Debian likes to add lots of distribution-specific patches to everything they package. Other OSes/distributions are not as problematic, because they are much more likely to ship software as the author wrote it (unless it's ridiculously broken and unmaintained and important.)
Also: why should software authors have to write bug reports to Debian?
With thousands of packages and thousands of maintainers, saying that "Debian" likes to patch "everything" is a pretty big claim.
A lot of the patches that do go in make whatever package integrate better with the rest of the system.
> Also: why should software authors have to write bug reports to Debian?
Because there's a bug?
> "Even though the piece of software I asked aptitude to install requires these libraries to run, they refuse to install it."
At first glance, that sounds like a packaging bug to me, with Rubygems, not with one of Zed's packages. The best thing to do in those cases is report the bug, rather than attempt to "make Debian pay" (for the thousands and thousands of hours of work to give you a free operating system, presumably?)
I don't use rubygems myself so I'm just commenting as a spectator. But I think it's very reasonable to ask that if someone (anyone) discovers a bug, then the right thing to do is to report it. They don't have to fix it themselves; maybe that responsibility falls on the rubygems package maintainer. But the maintainer will never know there's a bug until someone reports it.
Yes, it does seem to be a critical and obvious-to-catch bug. But as davidw said, maybe the maintainer already had the prereqs installed, maybe they were having a bad day, and they just ran a quick test and saw that it worked and released it. Mistakes happen, and I'm sure the maintainer would be happy to fix those mistakes, but they can't be fixed if the maintainer doesn't even know about them.
On a side note, I agree with Zed's rant on Debian being unique with its configuration and packaging layout. I use Ubuntu on my work laptop and so I've gotten used to the different filesystem layout for config files, etc. When it comes time to set up a server, am I going to go with a different distro and remember which distro has which config file in which location, or am I just going to stick with Ubuntu server because I already know it? I don't like having to make that choice.
> why should upstream authors have to pay the support cost (in e.g. time and negative impressions of them) for bugs introduced by Debian's packagers?
They are just as capable of saying "that's a Debian bug, go file a bug report there". That's a small price to pay for being open source, in any case. Debian certainly picks up bugs that are intended for upstream, and there are certainly instances when Debian maintainers have been quite helpful with fixing them. And others where they haven't been. With that many people there are bound to be some really good ones, and some that are below average...
"That's a Debian bug" is not useful - you can only determine that after you have analyzed the bug thoroughly, and gone to the length of getting out a Debian system instead of whatever you're usually developing on.
But yes, some maintainers are very good and some are quite bad; however, Debian's policies and patch-happiness mean that bad maintainers can (and tend to) do much more damage than in other distributions/OSes.
Well if you really want to play tough about it you just say that all bug reports for your software running on Debian must go through the package maintainer. You could even ignore them if you get a lot of bogus ones.
The author put mongrel out there under a free license, as well as some unnamed gem, and various other things. The gem is not included in Debian. As to the "benefits" of Debian including anything, they certainly aren't pecuniary, but accrue mostly to the end users (or not, in this case). So, if you help Debian fix something, the beneficiaries are ultimately all the users of Debian, not really 'Debian' itself.
> Mac OS X - Fine. FreeBSD - Fine. RedHat/CentOS - Fine. SuSE - Fine.
Any sufficiently complex system, including every one of the above, has problems. They may differ from Debian's in type, nature and even quantity, but they exist. Including missing commands, disabled functionality, and misconfiguration.
This. The point of Gems and Maven and the CPAN client and all this other crap is to smuggle code into a production box while carelessly failing to get it into the platform's database of what's installed where and what depends on it. They replace one tool that knows everything that's going on, with many tools each of which can't.
Somehow we got stuck with a critical mass of small-scale developers who never faced the nightmare of somehow keeping the right tarballs on hundreds of machines in different roles without good tools, so now we all get to relive it.
My response remains the same: Debian (or what-have-you) is but one distro among many, and those of us that treat such things as important but fundamentally commodity deployment platforms have no wish to spend our lives babysitting whatever package manager happens to be on the operating system we're using that month/year/whatever. Further, plenty of people have plenty of good reasons for deploying on Windows and OS X, so using tools that work well there makes sense.
The priorities of the various linux distros aren't everyone else's priorities – which somehow seems to come as a galloping shock to some.
That is why I use Slackware. Installing new software by hand from tarballs is a bit tedious compared to package managers, but it is transparent, and when I am done I can be pretty sure it is going to work. I have tried other distros, even Debian once (for some reason Debian wouldn't let me sign in as root; I never did figure out why), but I keep coming back to Slackware - for its transparency.
And Slackware's packaging is light enough that it is fairly simple to write wrappers around gems/CPAN modules/Python eggs. The latter two have prewritten SlackBuilds that you can use, assuming that the setup.py is not doing anything too shady.
With CPAN (and local-lib and/or perlbrew) I can have per-user libraries and per-application libraries, and it doesn't interfere at all with the dependencies of the system. If a library that both my application and the system use fixes a bug in a way that introduces an incompatibility, I can upgrade my applications without fiddling with the system, and without having to take care not to break APT in the process (fun times). I'm not even talking about possibilities like copying someone else's local-lib into my own environment to help them debug, while still having my own setup stay untouched.
So, I guess I disagree. These tools provide a lot more than the usual package management.
To me it seems backwards to try to manage libraries globally when their use-cases are often already localized.
This has been done, by several people. The problem is that the resulting packages don't fit with Debian policy, and in a few cases result in file collisions. If we want to mend the situation, we need both sides to agree that the solution is suitable.
The Right Way to do it, in my mind, is to take a snapshot of the available gems, and convert only those versions of the gems to debs such that rubygems and the Gem constant aren't involved at all. This means that "gem 'foo'" gets thrown away (or replaced with a check that the correct distro package is installed), using the Gem constant is a blocking bug, and "require 'rubygems'" is at least a no-op. Files from lib/ should be installed to /usr/lib/ruby (or at a pinch site-ruby), so that "require 'foo'" just works. Anything that assumes the directory given by File.expand_path("..", __FILE__) is under that library's control is again a blocking bug (I'm looking at you, haml).
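The no-op part of that scheme is easy to picture. Here is a minimal, purely hypothetical sketch of the kind of shim such a conversion could install in place of the real rubygems; everything here (names, paths, behavior) is my own illustration, not an actual Debian file:

```ruby
# Hypothetical shim standing in for rubygems under a gem-to-deb scheme.
# "require 'rubygems'" would load this harmless file, and "gem 'foo'"
# degrades to a plain require: the deb dependency graph, not rubygems,
# is what guarantees a compatible version is installed under
# /usr/lib/ruby, so version requirements are simply ignored here.
module Kernel
  def gem(name, *_version_requirements)
    # Version checking is delegated to dpkg/apt; all that's left is loading.
    require name
  end
end
```

With that in place, code written against rubygems keeps running unmodified, it just stops doing its own version resolution.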
I know there are people working on this, so it's not entirely theoretical, but besides, these are all good ideas anyway.
> The Right Way to do it, in my mind, is to take a snapshot of the available gems, and convert only those versions of the gems to debs such that rubygems and the Gem constant aren't involved at all.
If you think that's going to work, you don't know enough about the Ruby library ecosystem. You can't take a snapshot of the available versions and think that'll satisfy users. Users need to be able to install different versions of the same package in parallel, tons of things rely on this. Take a look at the rationale behind Bundler: http://gembundler.com/rationale.html
Sadly dpkg doesn't support multiple package versions.
tl;dr dpkg models this just fine, thank you very much ;P
I don't feel it is accurate to say "dpkg doesn't support multiple package versions"; let's say it let you install multiple versions of the same package at once: now you are just going to have file-on-file conflicts. From dpkg's perspective, different versions of a library that place their files in different locations, and so can coexist on the same system, are just different packages.
The real question I feel the Ruby community should be asking, and thankfully the very question that I have personally heard from Yehuda, is "why doesn't C feel the need to do this?". The answer is "well, they do, but".
For the "well, they do" part, we can easily call into evidence that on my system I have both libreadline 5.x and libreadline 6.x: there is a (hilarious) fundamental break between these two versions of the library, leading to them deciding to increase the major version number, which then causes different applications to need to continue linking to one or the other.
Therefore, Debian models this: these two major versions of readline are each a separate package, libreadline5 vs libreadline6. You can install them both at the same time: they do not conflict.
Now for the "but", where the question becomes "what if I want to install 6.0 and 6.1" and the answer is "you don't want to do that, no one wants to do that, that is not something you should want to do", and this, I feel, is where the conversation starts breaking down with the Ruby community.
The reason no one wants to install 6.0 and 6.1 is that there is no reason to: 6.1 is newer and better than 6.0. Meanwhile, software that currently works with 6.0 /will/ work with 6.1. If software that worked with 6.0 didn't work with 6.1 it would not be called 6.1, it would be called 7.0.
Now, exactly which part of the version is used to determine this (the major version, the second-most-significant number as libxml2 uses, or whatever other scheme someone comes up with) isn't important. The way it works for C is that the version you are using gets fixed at compile time, which I think people should actually start thinking of as analogous to creating their Gemfile.
Here's how that works (for Linux gcc with typical paths): the developer of a project passes -lreadline to the compiler. This looks for /usr/lib/libreadline.so, which in turn is a symlink to a file which is named with the "compatibility version" (in this case, 5 or 6), which in turn is a symlink to the specific version of the library in question (although there might be some more levels if someone is being silly).
Then, every binary has in it an "installed name" or "soname" which is the path to the library that will be encoded in any compiled binaries that have linked against it. This name is based on the compatibility version, and not the specific version, as there is an assumption that versions of the library that have the same compatibility version are, by definition, compatible.
So, this means that even though this symlink scheme theoretically supports having 6.0 and 6.1 installed at the same time, it would have no effect: all software wanting 6.x will be linked against libreadline.so.6, which will in turn only be able to point to either 6.0 or 6.1.
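That symlink chain can be reconstructed in a scratch directory to make the point concrete. This is a toy sketch with made-up library names, not the real linker; it just shows that the dev symlink resolves through the compatibility version to exactly one real file:

```ruby
require "tmpdir"
require "fileutils"

# Follow a chain of symlinks to the file it ultimately points at.
def resolve_link(path)
  while File.symlink?(path)
    path = File.expand_path(File.readlink(path), File.dirname(path))
  end
  path
end

RESOLVED = Dir.mktmpdir do |lib|
  FileUtils.touch(File.join(lib, "libfoo.so.6.1"))              # the one real library file
  File.symlink("libfoo.so.6.1", File.join(lib, "libfoo.so.6"))  # compatibility version (soname)
  File.symlink("libfoo.so.6", File.join(lib, "libfoo.so"))      # dev symlink, what -lfoo finds
  resolve_link(File.join(lib, "libfoo.so"))
end
# RESOLVED ends in "libfoo.so.6.1": a 6.0 sitting in the same directory
# would never be reached, because libfoo.so.6 can only point one way.
```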
This assumption is a good thing: it means that we can upgrade libraries. People should not be wondering whether upgrading to 6.1 will break their program: if it does then the person who released 6.1 is not doing their job right, and there is a bug in that library that needs to be considered critical.
Luckily, system integrators such as Debian go to great lengths to test and verify that these libraries really are as compatible as they claim. This sometimes gets them into really hot water, however, as if upstream goes nuts and starts breaking their ABI, even slightly, Debian either needs to fix that bug or not take the update: it is not acceptable to Debian to allow an incompatible release of readline into the ecosystem, as it will have unpredictable consequences on the software that is using it.
That assumption is really important: there is no "we came really close, but some random things have changed that you now need to go fix". The idea in this world is that there is no good reason for that: there could be an arbitrarily large number of people who are relying on that feature, so it is irresponsible to unilaterally decide that you can just change it, even if it makes more sense.
This belief that "incompatibilities are serious bugs" leads to a number of common C idioms. To draw a few examples:

* rather than change old APIs, add new ones;
* rather than changing structures every now and then, use version numbers to tell different users apart;
* use C for public external interfaces over C++ (which tends to have ABI fragility problems on most platforms) whenever possible.
Given this belief, if libreadline6 comes out, you would never even consider using it in place of libreadline5 without going back to the source code to verify that it works (at least compiling against it, though there may also be semantic issues at work in the API). With this belief in mind, the idea of using ~> (which Yehuda has to spend a lot of time convincing people of) becomes "obvious": if a major version number changes (which, unfortunately, is not quite what ~> models, but it is at least better than nothing for getting the idea out there), your package should not assume it will work.
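For the record, this is how ~> behaves in rubygems terms; the version numbers just reuse the readline example, and Gem::Requirement / Gem::Version ship with rubygems itself:

```ruby
require "rubygems"  # provides Gem::Requirement and Gem::Version

# The pessimistic operator pins everything but the last given digit:
# "~> 6.0" means ">= 6.0, < 7.0", which matches the C world's
# compatibility-version assumption that any 6.x is a drop-in
# replacement but 7.0 is not.
same_major = Gem::Requirement.new("~> 6.0")
same_major.satisfied_by?(Gem::Version.new("6.1"))  # => true
same_major.satisfied_by?(Gem::Version.new("7.0"))  # => false
```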
You also find yourself being really happy that there is someone in the ecosystem--Debian in this case--who is doing all of that really hard, incredibly grueling, and (apparently) often thankless job of standing at the gates making certain software doesn't just willy-nilly enter the ecosystem until it has been reasonably regression tested. The developer's goal is, sadly, not always aligned with the grand unified vision, and that's what the users of these systems are buying in to (and I'll even go so far as to say: and that is where most of the value is, not the individual software projects).
Ok, </rant>. ;P (If anyone is curious: the reason libreadline5 and libreadline6 are incompatible with each other is actually that 5.x is GPLv2 and 6.x is GPLv3; afaik, and I'll admit I might be wrong, not only is the API between these two major versions identical, but so is their ABI: only in the world of politics and legalities was there an interface break. It still makes a simple example that a lot of people have run in to, though. ;P)
Why do people keep representing this as bizarre when C libraries parallel install just fine and you probably all have multiple versions of the same library installed right now?
Please accept that it's a legitimate situation. What it needs is a radical change in languages like Ruby/Python/etc. to resolve dependencies explicitly at runtime. Currently they hide the problem with import path priorities.
Multiple versions of a library are certainly possible with C, but are generally avoided on Debian where possible. Where that isn't possible -- for instance, for libraries which completely changed their API like gtk1.4 -> gtk2, they simply create completely separate packages for the two versions.
With gems, this isn't quite so easy, as there's no way to distinguish between a minor bugfix and a huge API-breaking change without either guessing based on the version number or having a human read the changelog. Even then, there's no guarantee that dependent packages will depend on distinct sets of versions.
The naive way will just load the highest-numbered version you have installed:

require 'rack'
When you have rubygems loaded, this is equivalent to:
gem 'rack', '>=0'
You can specify exact and relative dependencies easily. There are a fuckton of frameworks to handle this for you, many of them oriented around copying the depended-upon gems into your app when it's deployed, as is common in the Java world.
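For reference, the constraint forms look like this; the gem names are illustrative, and the little helper is mine, just to show where a failed constraint surfaces:

```ruby
require "rubygems"

# Pinning at activation time. Each of these raises Gem::LoadError right
# here, rather than at some random point later, if no installed version
# satisfies the constraint:
#
#   gem "rack", "1.2.1"           # exact version
#   gem "rack", ">= 1.0", "< 2"   # explicit range
#   gem "rack", "~> 1.2"          # pessimistic: >= 1.2, < 2.0
#
# Helper (not part of rubygems) that reports whether activation works:
def activates?(name, *constraints)
  gem(name, *constraints)
  true
rescue Gem::LoadError
  false
end
```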
OpenSSL is an interesting case, because it's part of the standard Ruby library, and it should be available in every single copy of Ruby. But Debian faces export restrictions on OpenSSL, so their copy of Ruby is missing a number of important standard libraries by default.
Rubygems can't really clean this problem up, because Rubygems assumes it's running on an actual copy of Ruby, and not a half-broken Debian Ruby.
To get a working Rubygems installation on Debian, I usually 'apt-get install ruby-full', which installs a non-broken Ruby interpreter, and then install Rubygems itself from a tarball so that apt-get stays _far_ away from my Ruby setup.
I don't blame Zed for being angry about this. Debian's Ruby has often been pretty broken even by vendor standards.
That's a pretty absurd claim. Is there anything to back it up?
I mean, releasing an open source operating system doesn't seem to be the epitome of "control and lock in" to me. Ubuntu probably wouldn't exist if Debian were any good at "control and lock in", for one.
Also, your "bug report" is quite different than the one Zed points out. You're suggesting a fundamental change to the way rubygems works on Debian. That's something that merits further discussion, rather than a simple "bug report", obviously.
Excuse me, but I tried to direct you back to the original premise. Now it's an absurd claim? Did you read the article? It's not even my claim. And yes, if you read http://twitter.com/zedshaw you will see that the bug I posted is one that Zed references.
By the way, your "post" is quite the "epitome" of total "air quotery".
I don't think it's anything to do with control and lockin, it's to do with the fact that Debian feel they have a far superior package management system and allowing other, inferior (from their perspective) systems is a Bad Idea which they'll fight tooth and nail every step of the way.
It does seem silly from a rubyist's perspective but their approach does make a heck of a lot of complicated software work together in a way that makes administering complex server setups a doddle compared to how it could be.
There's just a yawning philosophical gap between how the Debian and Ruby communities look at the world and approach software management.
Fair enough re: #2. Given that, it's not at all surprising that the ruby and python folk have done rvm and pip (which, in a faux-spectrum, is a big shift towards what maven et al. provide in the java world).
Of course, it's the debian maintainers' prerogative to structure things such that others generally pick up their toys and go play in their own sandboxes. I just get irritated when I hear people griping about it when it happens, as if system-level package managers have some kind of inherent priority.
I'd say that it's more that if there's no (current) way to track what the other package management system is doing on the distro (any distro, not just Debian) then they can't tell what packages are installed and hence whether the distro-supplied packages can work with them...
> Also: why should software authors have to write bug reports to Debian?
What is the alternative? Should software authors be given carte blanche control of their own packages, regardless of their knowledge and commitment to the Debian platform and ideals? The goal is to make a working system, not to indulge every author that thinks their package is the exception to the guidelines laid out by the Debian members.
I have to say, I just installed Debian's postgresql-9.0 package and I was pleasantly surprised. They've packaged things in such a way that you can install 9.0 and 8.4 in parallel. Yes, it is different than the default postgres install. Instead of everything being in one directory they put config files in /etc, library files in /usr/lib and your db in /var/lib--just like every other package on the system. This seems like the right kind of packaging to me--it makes the system consistent.
Of course, for every good package like postgres there's a package with some epically stupid decision (netcat-traditional's "-q" patch).
> If Debian made it a policy to change packages as little as possible (which is not "not at all"), these problems wouldn't show up.
No, but others would show up. Come on, these things are engineering tradeoffs, with both positive and negative aspects. Even where we disagree about the benefits and disadvantages, we should agree to disagree in a civil way, rather than calling for "in-person confrontations". There are certainly technical complaints to be made about Debian (I could make a long list myself), but Zed's attitude is way out of line.
I don't think you can consider that locked-in in the slightest--it's not like they changed the DB format or removed the ability to dump a db. Do you really think anyone is going to load up postgres on Fedora, find that it's installed in /var/postgres and not /var/lib/postgres, throw their hands in the air, scream like a girl and then go reinstall Debian? Having stuff moved around can be coped with. I'll be able to transfer the data itself to another distro just fine, thank you very much.
This is because upstream projects usually can't be bothered to do the integration work; it's up to the distributions to do it, and that means patches (which are normally separated from the upstream tarball in the source packages).
Debian adds patches, but so does Red Hat/Fedora. Since these are the two main distributions (maybe not by themselves, but because they form the base for most others), your criticism applies to 99% of the distributions out there.
And bug reports about integration work should go to the distribution, and not upstream developers.
Look, I understand that combining gcc/binutils/glibc is a complete clusterfuck that needs some patches (each of these projects is "individually" at least somewhat sane, but they don't like working together at all), and I understand that there is a lot of shitty code in the average desktop environment. Really, some patches probably are necessary. (Although there's a whole debate about feeding them upstream that you could have at this point.)
But why does Debian split up Postfix (well-written, actively maintained) into a zillion different packages stitched together by dynamic loading, instead of just configuring it properly? Why does Debian fuck up OpenSSL?
No, it's not hard at all - I don't think it took me more than one minute to figure out. It is, however, an example of a gratuitous change.
With respect to the OpenSSL stuff, I'm not talking about the GPL license - I'm talking about http://www.debian.org/security/2008/dsa-1571, one of the most painful security issues in recent memory and caused by a meddling maintainer. (Certainly, the upstream team could have more clearly warned him, but without the maintainers changes everything would have worked just fine.)
I understand you spend a lot of your free time working for free, and I appreciate your good intentions and the good that Debian and Debian's developers have done (e.g. writing lots of man pages for programs that have none) - but I do think Debian has some really questionable sides as well.
Okay, I can absolutely appreciate how hard making a distro is. Point taken. However, your comment also indicates exactly what Zed was furious about.
Debian is a piece of software that people depend on. They have a right to be upset when there are bugs. The proper response is to apologize and fix the bug. Period. Instead, your response is to blame the (already angry) user on the basis that they didn't submit a bug report. I would be livid if you responded that way. Yes, you do voluntary work. You get credit when you do good work. It's only fair that you take on the responsibility when your work isn't so good.
The Debian community can consider Zed's post a bug report. Now, go fix it please.
Sorry, Debian is not telepathic, so while yes, they should fix the bug, if it hadn't been reported, they couldn't have fixed it. It's absolutely the proper thing to do to file a bug if you find one. That's not "blaming the user".
Also, "I'm imagining a mass petition, maybe some funny ad campaigns, in-person confrontations meant to embarrass package maintainers, SEO tricks to get people off Debian, promotion of any alternative to Debian, anything to make them pay, apologize, and listen." is not something likely to make most people apologize, fix the problem and move on. It's rude and uncalled for.
Actually, what he says is "Gems can't even run because the gems needs openssl and net/scp, but the Debian (not Ubuntu) package doesn't install it along with rubygems.", which means that there is likely a missing dependency: rubygems should require ruby-ssl.
Of course, the more general point stands that the current state of the Debian/rubygems integration is far from ideal. However, fixing it takes people thinking about the problem and working on it, not just moaning and calling for boycotts. What a pissy attitude: it's easy to say "this sucks!", but much harder to create something better, or fix the broken thing.
I would have been quite comprehending of a "I'm very frustrated by X, Y, and Z about Debian". I mean, it certainly has its flaws (I ended up moving to Ubuntu myself). Another thing is this entitled attitude of "let's hurt them!"
All I'm saying is that the non-functioning state of ruby/rubygems on debian and/or ubuntu is not news. This is not something that Zed just uncovered that was previously unknown. Everybody in the Ruby community knows this, and has known it, for many years.
Thus, Zed offering a polite "bug report" rather than bitch and moan about it probably wouldn't have been all that much more effective -- seeing as how it has remained in the current non-functioning state for many years.
> Thus, Zed offering a polite "bug report" rather than bitch and moan about it probably wouldn't have been all that much more effective -- seeing as how it has remained in the current non-functioning state for many years.
And trying to round up a lynch mob is more effective how?
The proposed solution (an environment variable) is crap. That any of the Debian maintainers think that it's a good idea is an indication of how badly broken the Debian process is, and I know that Lucas was trying to do a good thing there. I don't blame him, it's at least an attempt at a fix.
There's already a working solution, provided by Apple for Mac OS X and improved upon by the RubyGems team. There are concepts of vendor gems, system gems, user gems, and I think one other level. The Debian team has never considered it despite being told that it's there. The Debian team has also never offered patches that would help RubyGems come more in line with what Debian thinks it should be doing.
Why should the RubyGems guys bother working with uncooperative Debian maintainers?
Debian doesn't really work too well when you need multiple or new versions of software, or need something custom.
It's either spending a week wrapping your head around all the different ways of getting something to spit out a conforming .deb, or falling back to something like RVM + RubyGems / PIP.
The maintainer community is also reflexively defensive when they've broken something. The problem is never Debian breaking things into little unusable pieces, it's always upstream for not foreseeing how their software would be bastardized on Debian.
> Debian doesn't really work too well when you need multiple or new versions of software, or need something custom.
That's a fair complaint, and is also something of a complex issue, as there are tradeoffs between high quality integration, rapidly assimilating new stuff, the total number of packages in the repository, and so on.
However, Zed's untruths about the nature of Debian (which is a volunteer-driven non-profit organization, not a business) are not:
"It is sadly pure business and has nothing to do with open source, quality, or culture."
"I did not write it to boost the Debian business plan"
It's like the mainstream news media — anytime they report on a subject you know anything about, it's riddled with obvious errors. After this happens enough times, you know enough not to trust them about other subjects…
You're right that a bunch of what's broken in Debian stems from how naïve APT is — especially the lack of a slots mechanism (they just mangle the package names when there's no way to excuse it).
But the fundamental problem is purely social, not technical. Debian is always ready to introduce a new policy that would require them to screw with all the upstream packages (breaking things up, license wankery, dash, silence in valgrind, etc.)
Depressingly I find myself liking the way packaging is often done with RPMs, especially from vendors — they just make a directory in /opt that bundles any contentious dependencies, and just fucking works. Functionally there's almost no difference between RPM and DEB — it's purely social expectations.
Because they have a systemwide mechanism for doing precisely the same thing rubygems is trying to do with Ruby, that system has been in place longer than Ruby itself and has proven to be one of the major strengths not only of Debian, but, in similar implementations, of all Linux distros.
We have explored this before - if Zed wants a Ruby that works his way, he should install a separate one, like Python developers do with virtualenv or a source tarball. Then his gems will, hopefully, be installed somewhere sane, out of the way of system updates.
If gem installs its libraries in the wrong place, then it's a bug in gem, not Debian.
You are under the impression that Debian doesn't know about this problem or that they consider the current behavior to be a bug. Both are untrue. The current behavior is very much an explicit choice by Debian and no amount of bug reporting or patch submitting will change the situation.
I can only agree. If something doesn't work, and you think Debian has screwed it up, file a bug report and write up what's wrong. It often takes quite some time to fix stuff, but often it is fixed eventually.
(Note that by Debian policy, dependencies that are only needed for some uses of the module might not be hard dependencies but "recommends" dependencies; you can configure apt-get/aptitude to install those for you too.)
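For reference, that knob in apt.conf syntax looks like the following; APT::Install-Recommends is the standard option name, though whether it defaults to on varies by apt version, so check your apt.conf(5) man page:

```
// /etc/apt/apt.conf.d/99recommends (path illustrative)
APT::Install-Recommends "true";
```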
Fixing the "bug" involves breaking either the FHS or rubygems. My belief is that rubygems is broken by design, but actually going from there to having a replacement that works with apt is a lot of work.
Fortunately, this is being worked on. I won't say by whom, because I don't know if they want it public yet.
The reality is that rubygems works terrifically, and is depended upon by the Ruby community — you can trivially install as many versions of the same gem as you want to satisfy the dependency requirements of other gems and apps. Library authors can fix mistakes and take advantage of new functionality in other libraries without worry.
Everything about Debian's approach is in rabid opposition to this. They want to scatter the files all over their precious FHS. They want to install as few versions of libraries as possible, and where multiple versions must be installed, in the best case the names are mangled because apt does not have a 'slots' mechanism (average case they use the mediocre 'alternatives' system, worst case there are multiple independent depgraphs!). On top of all of that there's their release cycle where they freeze all the packages at a particular version number for several years for normal users (and regular 3-6 month periods on the 'bleeding edge'), after which they start mutilating random patches from upstream as 'support' instead of just shipping the new versions.
You're right that there's no way to reconcile this fundamental schism. You're wrong about which community is breaking everything.
> Everything about Debian's approach is in rabid opposition to this.
This is not necessarily true. There are many Debian packages that can be installed in parallel. I still have gcc-2.95 installed and that hasn't been available in the Debian archive for years. It is named properly and doesn't conflict with any of the newer gccs. So it is possible to make parallel installs.
> They want to scatter the files all over their precious FHS.
This is true. And as a Debian user, I support them fully. I don't want to learn some stupid Ruby package directory convention. I already know if I install a Debian package that the configuration files go in /etc/packagename and the docs are in /usr/share/doc/packagename. Every package is like that and consistency is a good thing.
I think the Debian Perl maintainers have gotten this right. They have a program "dh-make-perl" that will download a CPAN package and make it into a Debian package. This really gets you the best of both worlds. Hopefully something like that will get written for Python and Ruby (their CPAN equivalents haven't been around for as long as CPAN has, so give it time).
> On top of all of that there's their release cycle where they freeze all the packages at a particular version number for several years for normal users.
That's not just a Debian thing. %$#@! RHEL still ships a 2.6.18 kernel in their latest release! There is a certain class of people who are very, very conservative with their software upgrades. Debian Stable and RHEL appeal to those types. If you aren't one of those types, don't run them--instead run Debian Unstable or Fedora or Ubuntu. Those stay on the bleeding edge most of the time.
> I still have gcc-2.95 installed and that hasn't been available in the Debian archive for years. It is named properly and doesn't conflict with any of the newer gccs. So it is possible to make parallel installs.
It is NOT named properly — they mangle the name of the package to be 'gcc-2.95' to make this work. They only do this when they really can't get away with it otherwise because other base packages have hard low-level dependencies.
> RHEL still ships a 2.6.18 kernel in their latest release!
The ancient base kernel is mostly just so that commercial drivers never get ABI breakage. Most other things at least have newer versions available at more recent patchlevels, community repositories are maintained, and shit can be installed directly from upstream without the sanctimony.
Really RHEL just leads you directly towards the correct conclusion (that Debian tries to talk you out of) — treat the distro system as a general starting point, and install everything directly relevant to your usage from upstream. Use the distro packages for native dependencies, not applications you really care about.
I have yet to see a distro that could meet all the requirements for a non-trivial production deployment out-of-the-box. And I have seen all of the big names in production.
When it comes down to the runtime environment then you're always rolling your own packages in one way or another, regardless of the underlying linux distro.
The value of a linux distribution is not related to how it bundles the language runtime de jour. Those are inherently moving targets and no matter what distro you look at, they're always outdated and always wrongly packaged for your particular use-case.
The value of a distro mostly stems from how it keeps all the other mundane maintenance and infrastructure headaches out of your face while you're busy keeping your runtime in shape.
Praising RHEL/yum over apt in this context seems mildly hilarious.
> It is NOT named properly — they mangle the name of the package to be 'gcc-2.95' to make this work.
So what? If it installs in parallel and still works after umpteen years, how is it not "proper"?
It seems like you want to have it both ways--on the one hand you bash Debian for hacking versions into the package names for practical reasons, then praise the practicality of RHEL because you can install crap from Joe Shmoe's public repository (btw, would you really do that on your company's server??). All the while ignoring the fact that you can use "alien" to install rpms on a Debian machine (it's not perfect, but I've had good luck and have been able to experience the wonderful world of random junk in /opt) and that there are a bunch of Debian repos out there to get stuff that isn't in the official repository...
It's not "so what"; what RubyGems does really matters for version management within an application.
If I have one application that depends on version 1.3 of a library, it can do:
gem 'library', '~> 1.3.0' # 1.3.0 and higher, but less than 1.4
The next application that requires 1.5 or later can do:
gem 'library', '~> 1.5' # 1.5 and higher, but less than 2.0
There are two things here: Debian typically won't include two versions like that; they might include library1 and library2, but not 1.3 and 1.5. Second, I only have to have the 'gem' method in one location; everywhere else I can just do "require 'library/file'". I don't have to go through my source to change "require 'library-1.3/file'" to "require 'library-1.5/file'" when I want to upgrade.
The Debian approach works...fairly well for compiled applications, but it's utter crap for dynamic languages like Ruby. The RubyGems approach is superior for Ruby, and it fills a very important need for Ruby: it works on platforms other than Debian.
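To illustrate the "gem method in one location" point: version pins can live in a single prelude file while the rest of the code uses plain, version-free requires. Sketched here with the stdlib json gem (the prelude/app split and the '>= 1.0' pin are illustrative, not from any real project):

```ruby
# --- prelude.rb (hypothetical): the ONLY place a version appears ---
require 'rubygems'
gem 'json', '>= 1.0'   # pin or constrain the version here, once

# --- everywhere else in the app: no versions in the requires ---
require 'json'
puts JSON.generate("upgraded" => true)
```

Upgrading the dependency then means editing one line in the prelude; none of the require statements elsewhere change. This is essentially the pattern Gem bundler formalizes with a Gemfile.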
Debian packagers assume that they're the only packager in the world that matters. They're not, and other vendors have done infinitely better jobs of handling packaging Ruby than Debian, mostly because other vendors actually write patches for the upstream providers instead of bitching that the system isn't compatible. (See what Apple did for RubyGems. It still required work on the RubyGems side, but it was at least usable and didn't break the fundamental behaviour of RubyGems.)
If Debian packagers want more respect from upstream developers, they need to stop treating us like we're the ones doing something wrong. If we've developed something, it's because there's something missing.
You know what this sounds like to me? It's a classic case of DLL hell. The shared lib people solved it by giving their libraries ".so numbers"--versions increment when incompatible APIs are introduced.
> Debian typically won't include two versions like that; they might include library1 and library2, but not 1.3 and 1.5.
That is because shared libs actually increment their .so number when they are incompatible. Note that library1 and library2 are not necessarily versions 1.x and 2.x--the 1 and 2 are the .so number. In the past Debian has had to increment the number themselves because the upstream introduced an incompatible change and then didn't increment it.
It sounds like the Ruby people need to pay attention to history and add an "so number" like field to their programs to signal compatibility changes. Or perhaps just adopt the fairly widely accepted "major number increases when compatibility is affected" technique.
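For the record, RubyGems already treats versions as structured numbers rather than strings, which is the groundwork any such compatibility scheme needs. A quick sketch of the built-in comparison semantics:

```ruby
require 'rubygems'

# Gem::Version compares segment-by-segment numerically,
# so "1.10" is correctly newer than "1.9"...
a = Gem::Version.new("1.9")
b = Gem::Version.new("1.10")
puts b > a           # true

# ...whereas a naive string comparison gets it backwards:
puts "1.10" > "1.9"  # false

# "bump" drops the last segment and increments the next one,
# i.e. the next potentially-incompatible release:
puts Gem::Version.new("1.3.2").bump  # 1.4
```

So the field exists; what the grandparent is really asking for is the social convention that the major (or minor) segment reliably signals breakage.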
> I don't have to go through my source to change "require 'library-1.3/file'" to "require 'library-1.5/file'" when I want to upgrade.
That is a red herring. In the ideal packaged world library-1.5 would install into the proper gem location such that "require 'library/file'" would work the same way it does if you had installed it by hand, regardless of its package name.
> You know what this sounds like to me? It's a classic case of DLL hell. The shared lib people solved it by giving their libraries ".so numbers" -- versions increment when incompatible APIs are introduced.
It's not at all related to DLL hell. It's called "freezing your dependencies." It's not simply a matter of compatibility; if version 1.3 has support for feature A and version 1.5 adds support for feature B while not changing its support for feature A, is that worth a complete .so number bump? In the Ruby world, no (and it shouldn't be, IMO, either -- especially with how most version numbers are determined in the real world). Not when we have a much smarter system that actually works with a dynamic language that doesn't depend on build-time link specification.
>> Debian typically won't include two versions like that; they might include library1 and library2, but not 1.3 and 1.5.
> That is because shared libs actually increment their .so number when they are incompatible. Note that library1 and library2 are not necessarily versions 1.x and 2.x--the 1 and 2 are the .so number. In the past Debian has had to increment the number themselves because the upstream introduced an incompatible change and then didn't increment it.
In other words, the .so number has no relationship to the version in any way that a user can meaningfully understand it. (I knew this, it's just nice to get confirmation of it.)
> It sounds like the Ruby people need to pay attention to history and add an "so number" like field to their programs to signal compatibility changes. Or perhaps just adopt the fairly widely accepted "major number increases when compatibility is affected" technique.
In reality, the Ruby people have paid attention to history and noted that .so numbers don't work as well as some people think they do. In my Linux work, I make one binary that's supposed to work on multiple distributions. In at least one case (several years back in the end of the kernel 2.4 era), we found a problem where a common library (libc, maybe) would work if the underlying symlink was to library.so.x.y.6, but not if it was anything less than library.so.x.y.6. Yeah, that was fun to debug, especially since our link specifier was against library.so (-llibrary instead of -l/lib/library.so.x.y.6). The latter, by the way, would have prevented me from using library.so.x.y.7 if that came out and was still unbroken, whereas linking against library.so or library.so.x.y would have given me that; I just couldn't trust anything under .x.y.6.
SO versions also don't really work if you're not, you know, using an SO. Ruby has different needs than a C or C++ compiler, and with RubyGems it's trivial to install multiple versions in parallel in a way that they don't conflict.
>> I don't have to go through my source to change "require 'library-1.3/file'" to "require 'library-1.5/file'" when I want to upgrade.
> That is a red herring. In the ideal packaged world library-1.5 would install into the proper gem location such that "require 'library/file'" would work the same way it does if you had installed it by hand, regardless of its package name.
Except it's not a red herring. Like I said (and you may have missed), I can do this:
gem 'library', '=1.3'
I will get version 1.3 that's installed. Then, I can do this:
gem 'library', '=1.5'
I will get version 1.5 that's installed. The "gem" method doesn't have to be in the file where I require "library/file", either; it can be in a prelude that identifies the versions this application is meant to work with (this is sort of the point behind Gem bundler and other techniques). This gives me something that's at least as flexible as .so numbers, and considerably more so: =1.5 means "this version and this version only"; ~>1.5 means ">= 1.5 && < 2.0", and ~>1.5.0 means ">= 1.5.0 && < 1.6".
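These operators can be checked directly against RubyGems' own Gem::Requirement class. Note that the two-segment form '~> 1.5' allows anything below 2.0; the three-segment form '~> 1.5.0' is the strict "patch releases only" constraint:

```ruby
require 'rubygems'

req = Gem::Requirement.new("~> 1.5")
puts req.satisfied_by?(Gem::Version.new("1.5.3"))  # true
puts req.satisfied_by?(Gem::Version.new("1.9.0"))  # true  (still < 2.0)
puts req.satisfied_by?(Gem::Version.new("2.0.0"))  # false

strict = Gem::Requirement.new("~> 1.5.0")
puts strict.satisfied_by?(Gem::Version.new("1.5.9"))  # true
puts strict.satisfied_by?(Gem::Version.new("1.6.0"))  # false
```

The same class backs what `gem 'library', '~> 1.5'` does at activation time: it picks the newest installed version satisfying the requirement.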
Debian's techniques wouldn't be able to work with the ~>, >=, or < operators for version specifications if you actually have 1.5, 1.5.1, and 1.5.2 installed (and there are reasons for this). Even if you ignore that capability, Debian can't solve the 1.3/1.5 version problem without affecting the folder name somehow (library-1.3/ versus library-1.5/, say), which means you have to change your Ruby's LOAD_PATH to choose your version. By happy coincidence, this is exactly what RubyGems does, and it has mechanisms within Ruby itself so that you don't have to do "RUBYLIB=/var/gems/1.8/library-1.3/lib my_app".
What Debian maintainers seem not to like about RubyGems is that it acts a lot like "stow" in that everything related to a gem is stored together, including the extensions and the binaries (which breaks FHS). What they don't realize (either by choice or ignorance) is that the RubyGems developers thought about wanting to have multiple versions of Rake installed and making it so that the command-line could select which gem version:
rake _0.7_ task # runs version 0.7
rake _0.8_ task # runs version 0.8
rake task # runs the latest version
The file in /usr/bin/ (or wherever) is a stub that takes that optional version argument and "fixes" the appropriate gem version before handing off execution to the primary binary in the gem.
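A sketch of the argument-peeling logic such a stub uses — the pick_version helper here is my own framing, not RubyGems' literal generated code, which inlines the same steps:

```ruby
require 'rubygems'

# Peel an optional leading "_VERSION_" argument off ARGV, as the
# RubyGems-generated /usr/bin stubs do, and return the requirement
# to activate ("> = 0", i.e. newest installed, when none is given).
def pick_version(argv)
  if argv.first =~ /\A_(.*)_\z/ && Gem::Version.correct?($1)
    argv.shift[1..-2]  # consume "_0.8_" and strip the underscores
  else
    ">= 0"
  end
end

argv = ["_0.8_", "task"]
version = pick_version(argv)
puts version        # "0.8"
puts argv.inspect   # ["task"] -- the version arg was consumed

# The real stub then does roughly:
#   gem 'rake', version
#   load Gem.bin_path('rake', 'rake', version)
```

That final pair of calls activates the requested gem version and hands execution to the executable shipped inside that gem, which is how `rake _0.7_ task` and `rake _0.8_ task` coexist.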
That, and the fact that "gem update --system" updates the functionality of RubyGems outside the control of the dpkg system, makes them unhappy and leads them to break fundamental behaviours that Rubyists expect and then loudly complain about losing. Zed is right about this breakage; it's a problem and it isn't Ruby's (or Python's or Perl's or Java's) fault. Debian maintainers are making choices that aren't in line with what the programming-language folks need when they're working cross-platform. Debian's policies and procedures are stuck in 1997 with statically typed languages and fully compiled software systems. It'd be nice for them to join the 21st century and be friendlier toward development and deployment environments that need greater flexibility.
The funny thing is that the functionality update is something the user has to request, and it is itself installed as a gem (e.g., rubygems-update-1.3.6). RubyGems is smart enough to know that if it's version 1.3.5 and there's a rubygems-update-1.3.6 gem in the installed cache, it switches to that for loading. If, however, Debian were to install 1.3.7, rubygems-update-1.3.6 would be ignored. If Debian actually just respected RubyGems instead of trying to make it 100% fit with its policies, things would work a lot better. The authors of RubyGems have to make this work on a lot of platforms, and Debian is the one that gets the least love because of the lack of respect from the maintainers.
> I don't want to learn some stupid Ruby package directory convention. I already know if I install a Debian package that the configuration files go in /etc/packagename and the docs are in /usr/share/doc/packagename. Every package is like that and consistency is a good thing.
Here you are basically saying that you want Debian to be easy for those who make packages, not for those who are going to actually use them. That explains a lot.
No, I'm not saying that at all! I am not a Debian developer. I have made packages so I understand what goes into them, but that statement was driven by me as a Debian user--when things are consistent it makes life so much easier.
> you can trivially install as many versions of the same gem as you want to satisfy the dependency requirements of other gems and apps.
You say that like it's a good thing. The ability to do that is a large part of why it's broken. If you're a developer, yes, it's extremely convenient, and it means you never have to worry about API stability, back-ported security fixes, or even semantic versioning. You just bump the version number and push out a new release. It's a great developer tool that happens to also work for deployment if you're deploying in a language monoculture. If you're a general sysadmin, or targeting a system where rubygems isn't suitable (embedding ruby, for example), it's a bloody nightmare.
> They want to scatter the files all over their precious FHS
The "precious FHS" is part of what keeps sysadmins sane, and splitting packages isn't a problem unless filename conflicts happen. I know that rspec and cucumber used to have a problem here, I don't know if it's been fixed.
> They want to install as few versions of libraries as possible
Yes, and that is the correct approach if everything is supposed to be under control of the OS.
> after which they start mutilating random patches from upstream as 'support' instead of just shipping the new versions.
Again, this is the right thing to do, if upstream won't provide fixes, or keep to any kind of API or dependency stability. In the Ruby community, new versions tend to mean new features, which inevitably means new bugs.
> You're right that there's no way to reconcile this fundamental schism.
I didn't say that. I believe it's firmly reconcilable, with a mechanism not unlike backports. However, it does need gem authors to lay off some particularly inconvenient practices for it to be manageable. Just adhering to the RPG would be a good start.
> You're wrong about which community is breaking everything.
Rubygems is the latecomer here - standards like the FHS evolved over a generation before this little head-to-head happened. Besides, I don't think the "community" is breaking things. I think the "community" is behaving in a way promoted by a broken tool.
> But here's the problem, Debian package maintainers don't want to give up control to the responsible parties. I would more than gladly make my own .deb packages, but they refuse to let me. In fact, I plan on making packages for the major Unices in order to head them off. That's what everyone ends up doing.
As a user, and amateur administrator of my home machines, I've learned the hard way that third-party packages supplied by the original software vendor are, in general, utter crap. They're built against a distro two or three versions old; or they're built for Mandrake or Ubuntu instead of Red Hat or Debian; they "install" a tarball to /tmp which the post-installation script untars in the root directory; they don't have any dependencies declared at all but crash at startup if you don't have a particular version of libjpeg... if you're relying on the packaging system to detect file conflicts or outdated dependencies, third-party packages can be very, very scary.
The single biggest reason I choose Debian (and sometimes Ubuntu) is the uniformly high-quality packaging. Zed's found one problem package, and a trawl through Debian's bug tracker will no doubt find others, but the fact of the matter is that with a single command I can install almost any Free software in the world, and have it installed and set up within seconds - and Debian has years of experience figuring out how to make that work. I don't think that's a thing to be dismissed lightly.
I used Debian on the desktop for 2 years (2008-2009), and the whole time the Iceweasel package (their repackaged Firefox) had its renderer broken, making some web pages unusable for me. This wasn't a problem if I manually downloaded and installed Firefox, and it finally got fixed just before I switched to Ubuntu.
It does have the best software repository ever, but boy how much it sucks when something's broken.
This is in no way specific to Ruby — if you look at the way any sufficiently advanced software you're familiar with is packaged in Debian, you'll find it to be completely fucked. Ancient versions, nonstandard configuration files, random things disabled at compile-time (often for ideological reasons), files scattered everywhere with the new locations hardcoded, and basic features broken out into separate packages. My favorite is the random patches, which, when they aren't in the service of the aforementioned ridiculousness, are mostly cherry-picked from current upstream versions to 'fix bugs' without accidentally introducing features, because they're afraid of new version numbers. When a patch doesn't fit those categories you really have to worry, because now they're helping (see OpenSSL).
The result is that any program or library that you use directly must be sourced from upstream, especially if it's less than 15 years old or written in a language other than C or C++. Luckily pretty much all of the modern programming language environments have evolved to cope with this onanistic clusterfuck.
Haskell has been more fucked by Debian than any other language I know of — when I last had to deal with it a year ago there were two broken+old builds of GHC with different package names and mutually-exclusive sets of packages that depended on them. On top of that the version of cabal (Haskell's packaging tool) in the repository was so far out of date that you couldn't use it to build anything remotely recent (including useful versions of itself), nor could you use it with anything in Hackage (the central repo).
My old roommate had listened to me bitch about this stuff for years, and always dismissed me as crazy for thinking that the packaging was fucked (though he did share my hate of debian-legal). Last week he called me out of the blue and apologized — he'd installed Wordpress through Debian and they'd broken it up into a bunch of library packages, but still left a base skeleton of random php files and symlinks, accomplishing nothing but breakage and unportability.
A lot of the software they package comes with unit tests. Those unit tests have a purpose. They are meant to see whether or not the software as configured and installed, works.
Debian systematically strips those unit tests out, and never runs them to see how much stuff they are shipping already broken. Why? Why not package the unit tests as an optional package, and make sure they have a wide variety of systems in different configurations running them to notice problems?
I can't count how many times I've tried to install a Perl module from CPAN, found it was failing its unit tests, installed the Debian package with it, ran the unit tests, and found that the unit tests said the package, as installed, was broken. It's not as if the package developer didn't try to tell you you were missing something. Why throw that information away?
If they did this then they'd automatically catch a lot of problems that currently get missed. Heck, insert a test for ruby gems that says, "Does this software start?" They'd have caught this bug automatically.
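A "does this software start?" check really can be that dumb. Here's a hypothetical smoke test (the loads_cleanly? helper is my own invention, not any distro's tooling) that just tries the require in a fresh interpreter:

```ruby
require 'rbconfig'

# Smoke test: can this library be loaded at all on the installed system?
# Runs the require in a clean child interpreter and reports its exit status.
def loads_cleanly?(lib)
  system(RbConfig.ruby, "-e", "require '#{lib}'",
         out: File::NULL, err: File::NULL)
end

puts loads_cleanly?("json")               # true: ships with Ruby
puts loads_cleanly?("no_such_gem_xyzzy")  # false: missing dependency caught
```

Run against every packaged library on a minimal install, a one-liner like this would have flagged Zed's missing-library bug the moment the package was built.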
Until Debian catches up with standard best practices, I can't agree with the meme that they run software better than everyone else. It isn't as if unit testing is a new idea. It has been pretty mainstream for most of the last decade. Some were doing it earlier. Perl has been doing it since the 1980s.
I used to maintain Debian packages. As far as I know (or could find looking through the Policy manual and a few language-specific packaging guidelines), there is no policy against running tests after builds. I spent a few minutes looking through several packages and didn't see any that ran tests, but I have seen bug reports about test failures. If there are tests and they run using a reasonable amount of resources, I can't imagine anyone would argue against running them. The package maintainer has to do it, and I assume that some mess up.
I don't think it's so cut and dry. In the ideal case, the packages don't need tests because they are going into the system already tested. If the package is correct then running the unit tests is just redundant work because you are getting an identical copy of a package that already tested good with the appropriate dependencies. I think that's the theory anyway. On the other hand, theory does not always match reality.
Debian CPAN packages do run the unit tests--they just run them when you create the package and not when you install the package. In fact, the package will not build unless the tests pass.
Sorry, but it really is that cut and dry. In the ideal case you don't need unit tests. In the real world, they are useful.
One of the big things that unit tests are supposed to catch are hidden dependencies. By nature, hidden dependencies are likely to be installed on the maintainer's machine, so unit tests only help if they are run somewhere else.
As a prime example, the bug that Zed was complaining about was a dependency problem of EXACTLY this sort. It worked fine on the maintainer's machine. It didn't work on Zed's machine because he didn't have a key library installed. No unit test the maintainer can run would catch this. Even a stupid unit test Zed can run would have flagged this immediately.
Incidentally about the Debian CPAN packages, my experience with them a few years back was that they were so consistently broken (usually because of locale issues) that I have a hard time believing that unit tests had actually been run by anyone. Or if they were, they were ignored.
Well, then maybe what they really need is a single server/farm out there that just goes through all the perl/ruby/whatever modules and installs them and runs their unit tests. That way you get the same coverage but you don't waste my time. I, personally, don't want a bunch of unit tests to run when I install stuff--it's slow enough as it is and it's (usually) completely redundant.
It seems the same to me as slackware or one of the other "compile everything yourself" distros--they force you (the installer) to do a bunch of redundant calculations. If someone can compile on my architecture then why should I ever have to compile it? The same with unit tests--if someone can run the unit tests on my exact configuration then why should I ever have to run them?
If you're installing item after item, then the odds are that at some point you're going to install the dependencies, and after that the tests will pass and mean nothing. So you keep on wiping, installing one thing, and testing. Fine. Now you have cases where package A conflicts with package B and you never find it. And now what? The number of combinations to test is incredibly large, and nobody is likely to have tested your exact system.
As for the unit tests, make them available, make them optional. That's OK by me. I'd opt to use them. Particularly since I learned the hard way that most of what I want in CPAN either isn't in Debian, or isn't up to date there. So I install things directly from CPAN. That means that when I install from Debian, there is a possibility of library conflict that I want to notice.
I'm not sure what the -dbg packages in Debian/Ubuntu are strictly for, but wouldn't this be a case for them? If one could run the tests after the fact by saying "apttest Foo" or something along those lines, integration into utilities that automatically install test packages and run them shouldn't be a huge hurdle.
> A lot of the software they package comes with unit tests.
I don't know about 'a lot', but in any case, your idea is potentially a good one, although probably not 'low hanging fruit' in terms of the amount of work it would take to get all the various systems integrated and talking together.
And in terms of 'beef with Debian', it seems to be a general beef with modern OS's, because none of the others do it either, do they?
Others don't tend to share unit tests with end users. But I'm quite sure many proprietary OS vendors run regression tests on a variety of variations on their platform. So while not perfect, on this particular item I think they do better than Debian does.
> So while not perfect, on this particular item I think they do better than Debian does.
Could well be, but at this point, we're moving pretty far away from Zed's spittle-flecked exhortation to pursue "in-person confrontations meant to embarrass package maintainers, SEO tricks to get people off Debian, promotion of any alternative to Debian, anything to make them pay" and merely talking about what could be made better (and there's always something).
...and people ask me why I write tons of freeware, but make so little of it open source.
The reason is simple: control. If I can't control every stage of the development, deployment, and distribution process, I don't want in (yes, I'm a control freak. No, I don't think it's necessarily a blemish on my personality, it's just who I am). If there's something wrong with how my users perceive my software, it's because of something _I_ did wrong, not because someone took my hard work and toil and perverted it with their own changes, be it making the code ugly with nasty function names or dirty hacks (in my opinion, of course) or distributing it in a way that makes users cringe. It's my hard work, and I deserve to be (a) in control of the user experience, and (b) attributed.
If you're willing to make your awesome utility/code that you've spent 5 years developing and maintaining fully available to the public, giving up all control of the end-users' perception of the package, you have a bigger heart than me. But me, I'm a selfish guy when it comes to my users, and I don't want anyone to even have the possibility of hurting them. I have _my_ users' best interest at heart, you probably don't. At least, not to the same extent that I would.
There are certain clauses that don't affect the definition of a program as open source, but would prevent it from being included by, say, Debian. I'm thinking of the old BSD license or the branding problem of Firefox.
Right, though I didn't mean the capitalized Open Source but "you can still share the source code" and thereby still accept patches, for example. You could also allow forks as long as they use a different name than your project.
I sympathize with frustration at poor software packaging, but the proposed solution seems completely disproportionate to the offense. I also don't see any real evidence presented for the assertion that Debian's policies are based on a corrupt motive. What OS is actually pure and innocent of technical flaws and bad policies? If I recall, openSuse is in bed with Microsoft, Ubuntu fails to contribute code to important projects, Fedora is really just Red Hat's beta testing, etc. One of the pains of open source software is that things you write will probably be messed up in lots of ways by other people. If seeing software packaged or modified in ways you don't like makes you angry and advocate retaliation, is that really consistent with the philosophy of a free software license?
The efforts of both Zed and Debian are based on the same corrupt motive: self-satisfaction
As the upstream developer, Zed wants to make sure that his software is distributed in a manner that is useful to people and actually works and shit. This gives him jollies.
As the distributors, the Debian contributors want to fulfill ideological goals, fit everything into their neat hierarchy, assuage neckbeard sysadmins that the version number is sufficiently ancient (not that the software is, given the distro patches), pretend to care about people running it on the PA-RISC port of Debian GNU/kFreeBSD, wank about licensing, and reach consensus. This gives self-important teenagers and manchildren all kinds of jollies.
I've always thought that it was a bad idea for Linux distros to package any language libraries at all. It seems like a lot of repeated work, and your users will probably end up needing to install things manually in the end. Things will require the latest versions of gems, new and essential features will be added, etc. Just give your users a manual on how to install gems (or pip, or whatever) and let it be. The users of language libraries are exclusively programmers after all; I think it's safe to assume they can handle library installation.
That said, Zed is blowing this way out of proportion.
And you're forgetting that there are "end-user" programs that depend on those libraries as well, so they really do need to be packaged up.
I'm not forgetting this. I don't think the user's programs should use the same libraries (or even the same language install) that the distro's programs depend on. This way, if the user messes something up (like trying to upgrade a library), the entire distro won't be hosed. It's a really bad idea from a stability, security, and design standpoint if you don't keep these two things separate.
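A minimal sketch of that separation, assuming a Rubygems setup: point user-installed gems at a private tree via `GEM_HOME`, so the distro's own tree is never touched. (Paths here are illustrative, not any distro's actual layout.)

```shell
# Keep user gems in a private directory so distro-managed files
# under /usr/lib/ruby are never modified by `gem install`.
export GEM_HOME="$HOME/.gem/ruby"
export PATH="$GEM_HOME/bin:$PATH"

# This now lands entirely under ~/.gem/ruby:
gem install rake
```

Upgrading or deleting anything under `$GEM_HOME` then can't break programs that depend on the distro's copies.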
[Disclaimer: I'm not a Ruby developer, but have developed in plenty of languages, as well as being a sysadmin for a fair while]
IME, separate package-management for each and every language is more painful than the problem he is describing.
Rubygems, CPAN, whatever other languages use.
As soon as you install things outside of the distro package-manager, it has no idea what is going on with those packages and so continues to think that they aren't installed. If there was a way to get the package-managers for language libraries and the distro to play together nicely and work for dependencies etc then this would be OK, but as it is it's a bit of a sysadmin nightmare.
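The mismatch described above is easy to see on a Debian-ish system. A hedged transcript (the package name is a placeholder, and exact output varies):

```shell
# Install a library behind the distro's back:
gem install rake

# dpkg's database has no record of it:
dpkg -s rake >/dev/null 2>&1 || echo "dpkg thinks rake is not installed"

# So apt will happily install the packaged copy again as a
# dependency of something else, giving you two parallel installs:
sudo apt-get install some-package-depending-on-rake
```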
Edit: I thought that installing distro-supplied packages into /usr/local was against the FHS?
> way to get the package-managers for language libraries and the distro to play together nicely and work for dependencies etc
Yes, hacking the language's package manager to talk to the distribution package manager is probably the only way to really go about solving the problem in an elegant way, and even that is not without its pain points.
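One concrete direction here is converting the language package into a native distro package, so the distro's database tracks its files and dependencies. Debian's `gem2deb` tool works roughly this way; the transcript below is illustrative (gem name is a placeholder, and exact invocation may differ by version):

```shell
# Fetch a gem and turn it into a Debian package so dpkg/apt
# know about it like any other package.
gem fetch somegem                         # downloads somegem-1.2.3.gem
gem2deb somegem-1.2.3.gem                 # builds ruby-somegem_*.deb
sudo dpkg -i ruby-somegem_1.2.3-1_all.deb # now tracked by dpkg
```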
Absolutely, but this problem has been around for a long time, and fighting with distributions on how to get it to work won't help anyone.
Of course, it also means that if you install a buggy version of a library from, say, CPAN then you could break distro-supplied packages... I guess it depends on which you value more: distributor-provided testing or being able to install out-of-band packages onto your distribution more easily and with proper dependencies.
Actually, the Ruby and Rubygems packages in Squeeze have been in quite a flux lately. One of the more recent Ruby 1.9.1 packages broke Rubygems completely due to some changes in the Ruby 1.9.2 source and the way it handles gems. The latest version of Ruby has really integrated the gem package management system into its core. It has therefore been decided to stop building a separate Rubygems package for the 1.9.x series and let the Ruby 1.9.x package deal with gems.
Have there been clashes between the Ruby and Debian communities? Yes! Are we all working toward a solution? Yes! Will it happen instantly? Unfortunately no, but I think the two groups are now talking and are making some good progress. Keep an eye out for some good stuff in the future.
This is far from being a rant. It's a concise exposition on the Debian/Ruby issues.
Thanks for posting the link, but being labelled a rant, I nearly didn't go there. Just want to encourage others to read it for an excellent explanation of the issues at hand from the guy who is doing the work.
I'm pretty sure that "this rant" in the parent refers to Zed's post not Lucas's. The parent's point being "we should look at Zed's rant again after reading this discussion by a Debian Ruby maintainer."
Hypothetical question: Now that rvm has been created, would anyone still use apt-get to install ruby even if the latest version was supported?
Sure you could argue that yes, ruby should be installed with apt-get and alternative versions should be handled with Debian's alternatives infrastructure...
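For reference, Debian's alternatives mechanism looks roughly like this (version numbers and priorities are illustrative, and the commands need root):

```shell
# Register two Ruby interpreters as alternatives for /usr/bin/ruby;
# the higher priority wins in automatic mode.
sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.8 10
sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 20

# Interactively pick which one /usr/bin/ruby points at:
sudo update-alternatives --config ruby
```

This works system-wide, which is exactly why rvm's per-user, per-shell switching feels more natural for developers juggling versions.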
I think this is an interesting case in which version numbers are more than just version numbers, they are more like sister projects, and they don't fall neatly into the "conservative and stable" or "bleeding edge and risky" camps the way maintainers typically view different versions.
From the perspective of a package maintainer, if we had to include an alternatives list for every dot version of every package, then the distro would explode in complexity.
Ruby and Python just happened to grow so quickly that their growth didn't immediately trigger the appropriate response from distro maintainers, and very quickly the community worked around the problem.
> would anyone still use apt-get to install ruby even if the latest version was supported?
Not everyone is a heavy Ruby user - someone might just want to install a package that depends on Ruby and be sure that things work. Just as they may want to install a package that depends on Java, Ada, Haskell, Erlang, Perl, Python, PHP, Ocaml, Tcl or any of the many other languages present in Debian, and not have to deal with a different package management system for each language on the computer.
> Ruby and Python just happened to grow so quickly that their growth didn't immediately trigger the appropriate response from distro maintainers
It's actually a fairly old problem. Perl and CPAN have many of the same issues. I'm not sure there's a perfect solution.
apt-get or yum for ruby? Hell yes. After wasting countless hours on random versions of crap scattered around some servers' filesystems but not others, now nothing hits production without being in a package that installs and uninstalls cleanly and declares everything it depends on. That includes my own code, which we package ourselves.
Hear hear. Nothing strikes fear in my heart like a /usr/local hierarchy filled with a million random versions of random programs. BTW, if you are going to install something into /usr/local, at least use a cheap and easy packager like GNU "stow".
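For the curious, stow's trick is just a per-package directory tree symlinked into the shared hierarchy, so removal means deleting links. A self-contained simulation with plain coreutils (no stow required; everything happens in a scratch `demo/` directory, not /usr/local):

```shell
# Simulate what `cd /usr/local/stow && stow hello-1.0` does:
# the package keeps its files in its own tree...
mkdir -p demo/stow/hello-1.0/bin demo/bin
printf '#!/bin/sh\necho hello\n' > demo/stow/hello-1.0/bin/hello
chmod +x demo/stow/hello-1.0/bin/hello

# ...and stow symlinks them into the shared bin directory:
ln -s ../stow/hello-1.0/bin/hello demo/bin/hello

demo/bin/hello   # runs through the symlink, prints "hello"
```

Uninstalling cleanly is then just removing `demo/bin/hello` and `demo/stow/hello-1.0`, with no stray files left behind.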
> Ruby and Python just happened to grow so quickly that their growth didn't immediately trigger the appropriate response from distro maintainers, and very quickly the community worked around the problem.
This is true, but it's more than that. Ruby and Python are both committed to working on a wide variety of platforms, including platforms that don't have native package management systems (Windows, Mac OS X, Solaris, etc.). For Ruby programs and libraries, RubyGems is a more elegant solution than telling people 10-20 different ways to work with their project, depending on the OS.
Obviously, he should spend just a few hours in the shoes of a package maintainer. Honestly. What would it be without packages/ports? A mess.
The only people trying to make it bearable here are the package/ports maintainers, and yet they don't get any kind of reward for their job. They have to come up with crazy tricks to make things just work, because people who write software are unable to write proper install notes, list dependencies correctly, etc.
This process is heavy, slow, and doesn't always produce the expected results (there's no doubt about it). So people thought it would be great to have language-specific package management systems, and made it unbearable again. Alright. I personally never use them and install stuff by hand, unobtrusively.
Now, do your job. Go file a bug report. Or better yet: fork. This is not discussing things, this is not helping. Tears don't help.
Gentoo avoids making distribution-specific patches wherever possible; when it does patch, it's generally just to make things compile and work, the changes are straightforward, and they are applied live on your machine in a self-documenting way.
The build process is also sophisticated enough to be able to 'slot' potentially-incompatible versions of the same package so that they can be installed simultaneously. From my recent snapshot of the package repo I can install 5 different simultaneous major versions of postgres, with a variety of independent build options.
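Slotting in practice looks something like this (package atoms and slot names are illustrative and vary by repo snapshot):

```shell
# Portage "slots" let incompatible major versions coexist:
# each slot installs into its own version-specific paths.
emerge dev-db/postgresql-server:9.0

# Choose which installed slot provides the default binaries:
eselect postgresql list
eselect postgresql set 9.0
```

Because each slot carries its own build options, you can tune one major version independently of the others installed alongside it.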