> It's simply a tactic to make sure that you are stuck on Debian.
Of course I'm biased, as I'm a former Debian maintainer, but I think that Zed's connection with the reality of the situation is tenuous, at best, in this case.
Making a distribution is difficult, and doing so with thousands of volunteers who are not working on it full time (and since you're not paying for it, they don't really owe you anything) and who are scattered throughout the world adds to the difficulty. So yes, there are bugs and problems and challenges to overcome. That said, if each author of each package in Debian got their way about the exact location of files and so forth, the system would be utter chaos. If you think your package isn't being treated right in Debian, get on the mailing list, file a bug report, make your case, and get things fixed, rather than treating Debian as "the enemy"... Sheez.
Also: why should software authors have to write bug reports to Debian?
A lot of the patches that do go in make whatever package integrate better with the rest of the system.
> Also: why should software authors have to write bug reports to Debian?
Because there's a bug?
> "Even though the piece of software I asked aptitude to install requires these libraries to run, they refuse to install it."
At first glance, that sounds like a packaging bug to me, with Rubygems, not with one of Zed's packages. The best thing to do in those cases is report the bug, rather than attempt to "make Debian pay" (for the thousands and thousands of hours of work to give you a free operating system, presumably?)
Yes, it does seem to be a critical and obvious-to-catch bug. But as davidw said, maybe the maintainer already had the prereqs installed, maybe they were having a bad day, and they just ran a quick test and saw that it worked and released it. Mistakes happen, and I'm sure the maintainer would be happy to fix those mistakes, but they can't be fixed if the maintainer doesn't even know about them.
On a side note, I agree with Zed's rant on Debian being unique with its configuration and packaging layout. I use Ubuntu on my work laptop and so I've gotten used to the different filesystem layout for config files, etc. When it comes time to set up a server, am I going to go with a different distro and remember which distro has which config file in which location, or am I just going to stick with Ubuntu server because I already know it? I don't like having to make that choice.
As to the bug report thing, let me clarify: why should upstream authors have to pay the support cost (in e.g. time and negative impressions of them) for bugs introduced by Debian's packagers?
They are just as capable of saying "that's a Debian bug, go file a bug report there". That's a small price to pay for being open source, in any case. Debian certainly picks up bugs that are intended for upstream, and there are certainly instances when Debian maintainers have been quite helpful with fixing them. And others where they haven't been. With that many people there are bound to be some really good ones, and some that are below average...
But yes, some maintainers are very good and some are quite bad; however, Debian's policies and patch-happiness means that bad maintainers (can/tend to) do much more damage than in other distributions/OSes.
The 'gem' command not working after installing the 'rubygems' package does seem like a bit of a bug, to put it mildly; you'd think a cursory test by the maintainer would catch that.
Because he cares about the bug being fixed, rather than just whining? The point is to focus on fixing the system, not on pointing fingers or telling each other to fuck off.
The maintainer most likely already had the ruby-ssl package, and so the 'cursory test' worked for him.
Look, I don't think the rubygems package on Debian is a fantastic bit of work, but ultimately:
It's free software. If you don't like it, either help fix it or don't use it. Point it out if you like, but don't be such a dick about it.
The author put something out there under a free license, Debian benefits by including it in the distribution.
There is no onus on the author whatsoever.
People bitch because these problems always seem to originate with Debian.
Mac OS X - Fine.
FreeBSD - Fine.
RedHat/CentOS - Fine.
SuSE - Fine.
Debian - Commands missing, functionality disabled, misconfigured.
> Mac OS X - Fine. FreeBSD - Fine. RedHat/CentOS - Fine. SuSE - Fine.
Any sufficiently complex system, including every one of the above, has problems. They may differ from Debian's in type, nature and even quantity, but they exist. Including missing commands, disabled functionality, and misconfiguration.
1. It's the polite thing to do.
2. The bug can be tracked and commented on.
3. The bug is more likely to be fixed.
4. That's how most open source projects work.
Somehow we got stuck with a critical mass of small-scale developers who never faced the nightmare of somehow keeping the right tarballs on hundreds of machines in different roles without good tools, so now we all get to relive it.
A similar idea was voiced a few days ago, specifically regarding Java libraries (http://fnords.wordpress.com/2010/09/24/the-real-problem-with...).
My response remains the same: Debian (or what-have-you) is but one distro among many, and those of us who treat such things as important but fundamentally commodity deployment platforms have no wish to spend our lives babysitting whatever package manager happens to be on the operating system we're using that month/year/whatever. Further, plenty of people have plenty of good reasons for deploying on Windows and OS X, so using tools that work well there makes sense.
The priorities of the various linux distros aren't everyone else's priorities – which somehow seems to come as a galloping shock to some.
So, I guess I disagree. These tools provide a lot more than the usual package management.
To me it seems backwards to try to manage libraries globally when their use-cases are often already localized.
Why can't Zed do it and fix the problem instead of ranting about it?
I know I won't. It's not my problem. This is a WONTFIX bug for me.
The Right Way to do it, in my mind, is to take a snapshot of the available gems, and convert only those versions of the gems to debs such that rubygems and the Gem constant aren't involved at all. This means that "gem 'foo'" gets thrown away (or replaced with a check that the correct distro package is installed), using the Gem constant is a blocking bug, and "require 'rubygems'" is at least a no-op. Files from lib/ should be installed to /usr/lib/ruby (or at a pinch site-ruby), so that "require 'foo'" just works. Anything that assumes the directory given by File.expand_path("..", __FILE__) is under that library's control is again a blocking bug (I'm looking at you, haml).
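For concreteness, here's a minimal sketch of the kind of shim that plan implies. The path in the comment and the error message are invented; whatever the people actually working on this are doing surely differs:

```ruby
# Hypothetical shim, e.g. a /usr/lib/ruby/vendor_ruby/rubygems.rb, so that
# "require 'rubygems'" becomes a harmless no-op on a deb-only system.

# "gem 'foo'" no longer activates anything; it just checks that the distro
# package has already put foo's lib/ files on the load path.
def gem(name, *_version_requirements)
  found = $LOAD_PATH.any? { |dir| File.exist?(File.join(dir, "#{name}.rb")) }
  raise LoadError, "#{name} not found; install the matching Debian package" unless found
  true
end
```

With something like this installed, "require 'rubygems'" and "gem 'foo'" both keep working in unmodified upstream code, while all actual installation goes through dpkg.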
I know there are people working on this, so it's not entirely theoretical, but besides, these are all good ideas anyway.
EDIT: Oh yeah, dependencies. That too.
If you think that's going to work, you don't know enough about the Ruby library ecosystem. You can't take a snapshot of the available versions and think that'll satisfy users. Users need to be able to install different versions of the same package in parallel, tons of things rely on this. Take a look at the rationale behind Bundler: http://gembundler.com/rationale.html
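To illustrate the parallel-versions point, RubyGems' own requirement objects show how two apps on one machine legitimately need two different versions of the same gem at once (version numbers invented for illustration):

```ruby
require 'rubygems'  # part of the standard library since Ruby 1.9

# Several versions of one gem installed side by side:
installed = %w[0.9.8 1.0.1 1.1.6 1.2.1].map { |v| Gem::Version.new(v) }

# One app is pinned to the 1.0.x series, another to 1.1.x:
picks = ['~> 1.0.0', '~> 1.1.0'].map do |constraint|
  req = Gem::Requirement.new(constraint)
  installed.select { |v| req.satisfied_by?(v) }.max.to_s
end

p picks  # two different versions of the same gem, both needed on the same box
```

A single-version-per-package scheme can't satisfy both apps without splitting the gem into separately named packages.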
Sadly dpkg doesn't support multiple package versions.
Unless you are - for some reason - relying on a bug, there's no reason not to have the current stable release of any library.
I don't feel it is accurate to say "dpkg doesn't support multiple package versions"; suppose it did let you install multiple versions of the same package at once: you would just get file-on-file conflicts. From dpkg's perspective, different versions of a library that put their files in different places on the filesystem, and can therefore coexist on the same system, are simply different packages.
The real question I feel the Ruby community should be asking, and thankfully the very question that I have personally heard from Yehuda, is "why doesn't C feel the need to do this?". The answer is "well, they do, but".
For the "well, they do" part, we can easily call into evidence that on my system I have both libreadline 5.x and libreadline 6.x: there is a (hilarious) fundamental break between these two versions of the library, leading to them deciding to increase the major version number, which then causes different applications to need to continue linking to one or the other.
Therefore, Debian models this: these two major versions of readline are each a separate package, libreadline5 vs libreadline6. You can install them both at the same time: they do not conflict.
Now for the "but", where the question becomes "what if I want to install 6.0 and 6.1" and the answer is "you don't want to do that, no one wants to do that, that is not something you should want to do", and this, I feel, is where the conversation starts breaking down with the Ruby community.
The reason no one wants to install 6.0 and 6.1 is that there is no reason to: 6.1 is newer and better than 6.0. Meanwhile, software that currently works with 6.0 /will/ work with 6.1. If software that worked with 6.0 didn't work with 6.1 it would not be called 6.1, it would be called 7.0.
Now, exactly whether the major version or the second-most major version (libxml2 uses this) or whatever other scheme someone comes up with is used to determine this isn't important. The way it works for C is that the version you are using gets fixed at compile time, which I think people should actually start thinking as analogous to creating their Gemfile.
Here's how that works (for Linux gcc with typical paths): the developer of a project passes -lreadline to the compiler. This looks for /usr/lib/libreadline.so, which in turn is a symlink to a file which is named with the "compatibility version" (in this case, 5 or 6), which in turn is a symlink to the specific version of the library in question (although there might be some more levels if someone is being silly).
/usr/lib/libreadline.so -> /lib/libreadline.so.6
/lib/libreadline.so.5 -> libreadline.so.5.2
/lib/libreadline.so.6 -> libreadline.so.6.1
Then, every binary has in it an "installed name" or "soname" which is the path to the library that will be encoded in any compiled binaries that have linked against it. This name is based on the compatibility version, and not the specific version, as there is an assumption that versions of the library that have the same compatibility version are, by definition, compatible.
So, this means that even though this symlink scheme theoretically supports having 6.0 and 6.1 installed at the same time, it would have no effect: all software wanting 6.x will be linked against libreadline.so.6, which will in turn only be able to point to either 6.0 or 6.1.
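The symlink scheme is easy to model outside of /lib; here's a sketch in a scratch directory (the file names mimic the readline example, and nothing here touches a real linker):

```ruby
require 'tmpdir'
require 'fileutils'

resolved = Dir.mktmpdir do |lib|
  # The fully-versioned files can coexist on disk...
  %w[libreadline.so.5.2 libreadline.so.6.0 libreadline.so.6.1].each do |f|
    FileUtils.touch(File.join(lib, f))
  end

  # ...but each compatibility version gets exactly one symlink, so every
  # binary whose soname is libreadline.so.6 resolves to the same file.
  File.symlink('libreadline.so.5.2', File.join(lib, 'libreadline.so.5'))
  File.symlink('libreadline.so.6.1', File.join(lib, 'libreadline.so.6'))

  File.basename(File.realpath(File.join(lib, 'libreadline.so.6')))
end

puts resolved  # the 6.x symlink can point at 6.0 or 6.1, never both
```

Having 6.0 and 6.1 on disk simultaneously is harmless but pointless: the compatibility symlink is the only name anything loads through.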
This assumption is a good thing: it means that we can upgrade libraries. People should not be wondering whether upgrading to 6.1 will break their program: if it does then the person who released 6.1 is not doing their job right, and there is a bug in that library that needs to be considered critical.
Luckily, system integrators such as Debian go to great lengths to test and verify that these libraries really are as compatible as they claim. This sometimes gets them into really hot water, however, as if upstream goes nuts and starts breaking their ABI, even slightly, Debian either needs to fix that bug or not take the update: it is not acceptable to Debian to allow an incompatible release of readline into the ecosystem, as it will have unpredictable consequences on the software that is using it.
That assumption is really important: there is no "we came really close, but some random things have changed that you now need to go fix". The idea in this world is that there is no good reason for that: there could be an arbitrarily large number of people who are relying on that feature, so it is irresponsible to unilaterally decide that you can just change it, even if it makes more sense.
This belief that "incompatibilities are serious bugs" leads to a number of common C idioms. To draw a few examples:
- rather than change old APIs, add new ones;
- rather than changing structures every now and then, use version numbers to tell different users apart;
- use C for public external interfaces over C++ (which tends to have ABI fragility problems on most platforms) whenever possible.
Given this belief, if libreadline6 comes out, you would never even consider using it in place of libreadline5 without going back to the source code to verify that it works (at least compiling against it, though there may also be semantic issues at work in the API). With this belief in mind, the idea of using ~> (which Yehuda has to spend a lot of time convincing people of) becomes "obvious": if the major version number changes (which unfortunately is not quite what ~> models, but it is better than nothing for getting the idea out there), your package should not assume it will still work.
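For anyone who hasn't internalized ~>, a quick sketch of the two flavors being contrasted (the version numbers are just examples):

```ruby
require 'rubygems'

pessimistic = Gem::Requirement.new('~> 6.0')    # >= 6.0 and < 7.0: trust the major version
patch_only  = Gem::Requirement.new('~> 6.0.0')  # >= 6.0.0 and < 6.1: trust only patch releases

p pessimistic.satisfied_by?(Gem::Version.new('6.1'))  # the 6.1 upgrade is assumed compatible
p pessimistic.satisfied_by?(Gem::Version.new('7.0'))  # a new major version is not
p patch_only.satisfied_by?(Gem::Version.new('6.1'))   # the strict form rejects even 6.1
```

The first form is the one that matches the C soname convention: anything within a major version is assumed compatible, and crossing a major version is assumed breaking.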
You also find yourself being really happy that there is someone in the ecosystem--Debian in this case--who is doing all of that really hard, incredibly grueling, and (apparently) often thankless job of standing at the gates making certain software doesn't just willy-nilly enter the ecosystem until it has been reasonably regression tested. The developer's goal is, sadly, not always aligned with the grand unified vision, and that's what the users of these systems are buying in to (and I'll even go so far as to say: and that is where most of the value is, not the individual software projects).
Ok, </rant>. ;P (If anyone is curious: the reason libreadline5 and libreadline6 are incompatible with each other is actually that 5.x is GPLv2 and 6.x is GPLv3; afaik, and I'll admit I might be wrong, not only is the API between these two major versions identical, but so is their ABI: only in the world of politics and legalities was there an interface break. It still makes a simple example that a lot of people have run in to, though. ;P)
Then, by definition, it has not been done - it has been attempted and the problems proved hard.
I think the best way to go about it is leave the Debian Ruby subsystem alone and either go the virtualenv route or install it from the source tarball and place it in /usr/local or /opt
Multiple versions of a library are certainly possible with C, but are generally avoided on Debian where possible. Where that isn't possible -- for instance, for libraries which completely changed their API like gtk1.4 -> gtk2, they simply create completely separate packages for the two versions.
With gems, this isn't quite so easy, as there's no way to distinguish between a minor bugfix and a huge API-breaking change without either guessing based on the version number or having a human read the changelog. Even then, there's no guarantee that dependent packages will depend on distinct sets of versions.
gem 'rack', '>=0'
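And a constraint like that is information-free; a quick check (illustrative version numbers):

```ruby
require 'rubygems'

anything = Gem::Requirement.new('>= 0')

# Every version ever released satisfies it, including future API-breaking
# ones, so a packager can't infer anything about which versions the
# dependent package actually works with.
p anything.satisfied_by?(Gem::Version.new('0.0.1'))
p anything.satisfied_by?(Gem::Version.new('999.0'))
```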
Rubygems can't really clean this problem up, because Rubygems assumes it's running on an actual copy of Ruby, and not a half-broken Debian Ruby.
To get a working Rubygems installation on Debian, I usually 'apt-get install ruby-full', which installs a non-broken Ruby interpreter, and then install Rubygems itself from a tarball so that apt-get stays _far_ away from my Ruby setup.
I don't blame Zed for being angry about this. Debian's Ruby has often been pretty broken even by vendor standards.
I think you really should go to debian.org and demand they return your money.
These are the likely prerequisites to a slick experience for their users on any distro.
You'd think, but apparently that's not the case, as Debian is the only distributor who breaks rubygems when packaging it.
Will you do me a favor and file this bug report for me? I'm allergic to sanctimony which I'm sure is all you'll receive from the package maintainer:
"gem update --system is disabled on Debian. RubyGems can be updated using the official Debian repositories by aptitude or apt-get."
That's a pretty absurd claim. Is there anything to back it up?
I mean, releasing an open source operating system doesn't seem to be the epitome of "control and lock in" to me. Ubuntu probably wouldn't exist if Debian were any good at "control and lock in", for one.
Also, your "bug report" is quite different than the one Zed points out. You're suggesting a fundamental change to the way rubygems works on Debian. That's something that merits further discussion, rather than a simple "bug report", obviously.
By the way, your "post" is quite the "epitome" of total "air quotery".
He also says that "it is sadly pure business and has nothing to do with open source, quality, or culture." which is pretty odd, given that Debian is a non-profit, voluntary organization.
It does seem silly from a rubyist's perspective but their approach does make a heck of a lot of complicated software work together in a way that makes administering complex server setups a doddle compared to how it could be.
There's just a yawning philosophical gap between how the Debian and Ruby communities look at the world and approach software management.
If everyone thinks you're off in the wilderness, you might want to check yourself, maybe just a bit.
2. These packaging systems were all designed by and for developers. Debian was designed by and for system administrators. The fact that there are common problems is not surprising.
Of course, it's the debian maintainers' prerogative to structure things such that others generally pick up their toys and go play in their own sandboxes. I just get irritated when I hear people griping about it when it happens, as if system-level package managers have some kind of inherent priority.
I'd say that it's more that if there's no (current) way to track what the other package management system is doing on the distro (any distro, not just Debian) then they can't tell what packages are installed and hence whether the distro-supplied packages can work with them...
What is the alternative? Should software authors be given carte blanche control of their own packages, regardless of their knowledge and commitment to the Debian platform and ideals? The goal is to make a working system, not to indulge every author that thinks their package is the exception to the guidelines laid out by the Debian members.
I have to say, I just installed Debian's postgresql-9.0 package and I was pleasantly surprised. They've packaged things in such a way that you can install 9.0 and 8.4 in parallel. Yes, it is different than the default postgres install. Instead of everything being in one directory they put config files in /etc, library files in /usr/lib and your db in /var/lib--just like every other package on the system. This seems like the right kind of packaging to me--it makes the system consistent.
Of course, for every good package like postgres there's a package with some epically stupid decision (netcat-traditional's "-q" patch).
If Debian made it a policy to change packages as little as possible (which is not "not at all"), these problems wouldn't show up.
Also, yes, your shiny PostgreSQL package now matches every other Debian package, but it is different from every non-Debian PostgreSQL install out there. Zed wasn't talking about lock-in for nothing.
No, but others would show up. Come on, these things are engineering tradeoffs, with both positive and negative aspects. Even where we disagree about the benefits and disadvantages, we should agree to disagree in a civil way, rather than calling for "in-person confrontations". There are certainly technical complaints to be made about Debian (I could make a long list myself), but Zed's attitude is way out of line.
- config files would be shoved all over the place, without being able to back up /etc as a config location or map frequently read data to var
- You couldn't partition your disks reasonably beyond a single filesystem, as there's no way to predict where apps will shove their junk
- Uninstalling apps would leave junk cron jobs and other files made manually rather than put into include dirs
- Your PATH would be exceedingly long
- You'd end up using 'find' a lot more to know where a particular library is.
I don't think you can consider that locked-in in the slightest--it's not like they changed the DB format or removed the ability to dump a db. Do you really think anyone is going to load up postgres on Fedora, find that it's installed in /var/postgres and not /var/lib/postgres, throw their hands in the air, scream like a girl and then go reinstall Debian? Having stuff moved around can be coped with. I'll be able to transfer the data itself to another distro just fine, thank you very much.
Debian adds patches, but so does Red Hat/Fedora. Since these are the two main distributions (maybe not by themselves, but because they form the base for most others), your criticism can be applied to 99% of the distributions out there.
And bug reports about integration work should go to the distribution, and not upstream developers.
But why does Debian split up Postfix (well-written, actively maintained) into a zillion different packages stitched together by dynamic loading, instead of just configuring it properly? Why does Debian fuck up OpenSSL?
If you need plain old Postfix, you just aptitude install postfix.
If you need LDAP support in your Postfix instance, you just aptitude install postfix-ldap.
Is it really that hard?
And regarding OpenSSL, the problem is not in Debian.
Just check http://people.gnome.org/~markmc/openssl-and-the-gpl.html
OpenSSL always had a problematic license.
With respect to the OpenSSL stuff, I'm not talking about the GPL license - I'm talking about http://www.debian.org/security/2008/dsa-1571, one of the most painful security issues in recent memory and caused by a meddling maintainer. (Certainly, the upstream team could have more clearly warned him, but without the maintainer's changes everything would have worked just fine.)
I understand you spend a lot of your free time working for free, and I appreciate your good intentions and the good that Debian and Debian's developers have done (e.g. writing lots of man pages for programs that have none) - but I do think Debian has some really questionable sides as well.
Debian is a piece of software that people depend on. They have a right to be upset when there are bugs. The proper response is to apologize and fix the bug. Period. Instead, your response is to blame the (already angry) user on the basis that they didn't submit a bug report. I would be livid if you responded that way. Yes, you do voluntary work. You get credit when you do good work. It's only fair that you take on the responsibility when your work isn't so good.
The Debian community can consider Zed's post a bug report. Now, go fix it please.
Also, "I'm imagining a mass petition, maybe some funny ad campaigns, in-person confrontations meant to embarrass package maintainers, SEO tricks to get people off Debian, promotion of any alternative to Debian, anything to make them pay, apologize, and listen." is not something likely to make most people apologize, fix the problem and move on. It's rude and uncalled for.
The most common answer in #rubyonrails is probably this: "you've installed ruby/rubygems via apt haven't you? uninstall it and reinstall it manually."
Of course, the more general point stands that the current state of the Debian/rubygems integration is far from ideal. However, fixing it takes people thinking about the problem and working on it, not just moaning and calling for boycotts. What a pissy attitude: it's easy to say "this sucks!", but much harder to create something better, or fix the broken thing.
I would have been quite comprehending of a "I'm very frustrated by X, Y, and Z about Debian". I mean, it certainly has its flaws (I ended up moving to Ubuntu myself). Another thing is this entitled attitude of "let's hurt them!"
Thus, Zed offering a polite "bug report" rather than bitch and moan about it probably wouldn't have been all that much more effective -- seeing as how it has remained in the current non-functioning state for many years.
And trying to round up a lynch mob is more effective how?
Compared to the previous total lack of a discussion over the last several years, it seems Zed's approach has been quite efficient.
Also, if you read the Debian guy's opinion, it is clear that, behind the scenes, and away from all the kerfuffle, people are doing work to improve the situation.
Also, you people seem awfully sure of yourself. If you peruse the bug reports, you might find some interesting nuggets:
> - Do we want to disable gem update --system? I think that we should allow a way for the user to do it anyway.
But don't let me disabuse you of your preconceived notions.
There's already a working solution provided by Apple for Mac OS X and improved upon by the RubyGems team. There are concepts of vendor gems, system gems, user gems, and I think one other level. The Debian team has never considered it despite being told that it's there. The Debian team has also never offered patches that would help RubyGems come more in line with what Debian thinks it should be doing.
Why should the RubyGems guys bother working with uncooperative Debian maintainers?
It's either spend a week wrapping your head around all the different ways of getting something to spit out a conforming .deb, or fall back to something like RVM + RubyGems / PIP.
The maintainer community is also reflexively defensive when they've broken something. The problem is never Debian breaking things into little unusable pieces, it's always upstream for not foreseeing how their software would be bastardized on Debian.
That's a fair complaint, and is also something of a complex issue, as there are tradeoffs between high quality integration, rapidly assimilating new stuff, the total number of packages in the repository, and so on.
However, Zed's mistruths about the nature of Debian (which is a volunteer-driven non-profit organization, not a business), are not:
"It is sadly pure business and has nothing to do with open source, quality, or culture."
"I did not write it to boost the Debian business plan"
And for all Debian's adherence to standards, it's unfortunate that one couldn't be agreed upon for source packages; 'apt-get source' is a mixed bag.
This makes it a royal pain to maintain forks, because now you have to be up on three or four source packaging formats, in addition to getting the final binary format to comply with the standards.
For the parts of the system where you don't care, using the distributor provided packages is fine.
But when you do care, it is often simpler to just use pristine upstream rolled into your own deb as a starting point for managing package versions at your site.
But the fundamental problem is purely social, not technical. Debian is always ready to introduce a new policy that would require them to screw with all the upstream packages (breaking things up, license wankery, dash, silence in valgrind, etc.).
Depressingly I find myself liking the way packaging is often done with RPMs, especially from vendors — they just make a directory in /opt that bundles any contentious dependencies, and just fucking works. Functionally there's almost no difference between RPM and DEB — it's purely social expectations.
The way Debian attempts to micromanage rubygems is insane and ends up breaking under real world environments. Why do they persist?
Does he want to be 'fun to read' or fix a problem? Because pissing all over other people's hard work and attributing nefarious motives to them to boot is not a good way of fixing the problem.
> The way Debian attempts to micromanage rubygems is insane and ends up breaking under real world environments. Why do they persist?
Good question - why not do some research and find out?
(BTW, I'll add that there are good reasons not to use Debian's gem packages, and that I disagree with the 'pile-on' downvoting of the parent post, even if I disagree with what he said)
Because they have a systemwide mechanism for doing precisely the same thing rubygems is trying to do with Ruby, that system has been in place longer than Ruby itself and has proven to be one of the major strengths not only of Debian, but, in similar implementations, of all Linux distros.
We have explored this before - if Zed wants a Ruby that works his way, he should install a separate one, like Python developers do with virtualenv or a source tarball. Then his gems will be, hopefully, installed somewhere sane, out of the way of system updates.
If gem installs its libraries in the wrong place, then it's a bug in gem, not Debian.
(Note that by Debian policy, dependencies that are only for some actions of the module might not be hard dependencies, but "recommends" dependencies; but you can configure apt-get/aptitude to install those for you too).
Fortunately, this is being worked on. I won't say by whom, because I don't know if they want it public yet.
Everything about Debian's approach is in rabid opposition to this. They want to scatter the files all over their precious FHS. They want to install as few versions of libraries as possible, and where multiple versions must be installed, in the best case the names are mangled because apt does not have a 'slots' mechanism (average case they use the mediocre 'alternatives' system, worst case there are multiple independent depgraphs!). On top of all of that there's their release cycle where they freeze all the packages at a particular version number for several years for normal users (and regular 3-6 month periods on the 'bleeding edge'), after which they start mutilating random patches from upstream as 'support' instead of just shipping the new versions.
You're right that there's no way to reconcile this fundamental schism. You're wrong about which community is breaking everything.
This is not necessarily true. There are many Debian packages that can be installed in parallel. I still have gcc-2.95 installed and that hasn't been available in the Debian archive for years. It is named properly and doesn't conflict with any of the newer gccs. So it is possible to make parallel installs.
> They want to scatter the files all over their precious FHS.
This is true. And as a Debian user, I support them fully. I don't want to learn some stupid Ruby package directory convention. I already know if I install a Debian package that the configuration files go in /etc/packagename and the docs are in /usr/share/doc/packagename. Every package is like that and consistency is a good thing.
I think the Debian Perl maintainers have gotten this right. They have a program "dh-make-perl" that will download a CPAN package and make it into a Debian package. This really gets you the best of both worlds. Hopefully something like that will get written for Python and Ruby (they haven't had their CPAN clones out for as long as CPAN, so give it time).
> On top of all of that there's their release cycle where they freeze all the packages at a particular version number for several years for normal users.
That's not just a Debian thing. %$#@! RHEL still ships a 2.6.18 kernel in their latest release! There is a certain class of people who are very, very conservative with their software upgrades. Debian Stable and RHEL appeal to those types. If you aren't one of those types, don't run them--instead run Debian Unstable or Fedora or Ubuntu. Those stay on the bleeding edge most of the time.
It is NOT named properly — they mangle the name of the package to be 'gcc-2.95' to make this work. They only do this when they really can't avoid it, because other base packages have hard low-level dependencies.
> RHEL still ships a 2.6.18 kernel in their latest release!
The ancient base kernel is mostly just so that commercial drivers never get ABI breakage. Most other things at least have newer versions available at more recent patchlevels, community repositories are maintained, and shit can be installed directly from upstream without the sanctimony.
Really RHEL just leads you directly towards the correct conclusion (that Debian tries to talk you out of) — treat the distro system as a general starting point, and install everything directly relevant to your usage from upstream. Use the distro packages for native dependencies, not applications you really care about.
I have yet to see a distro that could meet all the requirements for a non-trivial production deployment out-of-the-box. And I have seen all of the big names in production.
When it comes down to the runtime environment, you're always rolling your own packages in one way or another, regardless of the underlying linux distro.
The value of a linux distribution is not related to how it bundles the language runtime du jour. Those are inherently moving targets, and no matter what distro you look at, they're always outdated and always wrongly packaged for your particular use-case.
The value of a distro mostly stems from how it keeps all the other mundane maintenance and infrastructure headaches out of your face while you're busy keeping your runtime in shape.
Praising RHEL/yum over apt in this context seems mildly hilarious.
So what? If it installs in parallel and still works after umpteen years, how is it not "proper"?
It seems like you want to have it both ways--on the one hand you bash Debian for hacking versions into the package names for practical reasons, on the other you praise the practicality of RHEL because you can install crap from Joe Shmoe's public repository (btw, would you really do that on your company's server??). All the while you ignore the fact that you can use "alien" to install rpms on a Debian machine (it's not perfect, but I've had good luck and have been able to experience the wonderful world of random junk in /opt) and that there are a bunch of Debian repos out there to get stuff that isn't in the official repository...
If I have one application that depends on version 1.3 of a library, it can do:
gem 'library', '~> 1.3.0' # >= 1.3.0 and < 1.4.0
gem 'library', '~> 1.5.0' # >= 1.5.0 and < 1.6.0
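How the pessimistic operator resolves can be checked against RubyGems' own `Gem::Requirement` class (note that `~> 1.3` alone allows anything below 2.0; pinning the minor version takes a third segment, as in `~> 1.3.0`). A quick sketch:

```ruby
require 'rubygems' # a no-op on modern Rubies, where RubyGems is always loaded

# '~> 1.3.0' means ">= 1.3.0 and < 1.4.0"
req = Gem::Requirement.new('~> 1.3.0')

puts req.satisfied_by?(Gem::Version.new('1.3.5')) # true  -- patch upgrades are fine
puts req.satisfied_by?(Gem::Version.new('1.4.0')) # false -- next minor is excluded
puts req.satisfied_by?(Gem::Version.new('1.2.9')) # false -- older versions excluded
```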
The Debian approach works...fairly well for compiled applications, but it's utter crap for dynamic languages like Ruby. The RubyGems approach is superior for Ruby, and it fills a very important need for Ruby: it works on platforms other than Debian.
Debian packagers assume that they're the only packager in the world that matters. They're not, and other vendors have done infinitely better jobs of handling packaging Ruby than Debian, mostly because other vendors actually write patches for the upstream providers instead of bitching that the system isn't compatible. (See what Apple did for RubyGems. It still required work on the RubyGems side, but it was at least usable and didn't break the fundamental behaviour of RubyGems.)
If Debian packagers want more respect from upstream developers, they need to stop treating us like we're the ones doing something wrong. If we've developed something, it's because there's something missing.
> Debian typically won't include two versions like that; they might include library1 and library2, but not 1.3 and 1.5.
That is because shared libs actually increment their .so number when they are incompatible. Note that library1 and library2 are not necessarily versions 1.x and 2.x--the 1 and 2 are the .so number. In the past Debian has had to increment the number themselves because the upstream introduced an incompatible change and then didn't increment it.
It sounds like the Ruby people need to pay attention to history and add an "so number" like field to their programs to signal compatibility changes. Or perhaps just adopt the fairly widely accepted "major number increases when compatibility is affected" technique.
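The "major number increases when compatibility is affected" convention is easy to encode. A hypothetical helper (not part of any real tool) using RubyGems' version class might look like:

```ruby
require 'rubygems'

# Hypothetical helper: treat two versions as compatible when their major
# numbers match -- the same signal an .so number is meant to carry.
def compatible?(installed, wanted)
  Gem::Version.new(installed).segments.first == Gem::Version.new(wanted).segments.first
end

puts compatible?('1.3', '1.5') # true  -- same major, assumed compatible
puts compatible?('1.9', '2.0') # false -- major bump signals breakage
```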
> I don't have to go through my source to change "require 'library-1.3/file'" to "require 'library-1.5/file'" when I want to upgrade.
That is a red herring. In the ideal packaged world library-1.5 would install into the proper gem location such that "require 'library/file'" would work the same way it does if you had installed it by hand, regardless of its package name.
It's not at all related to DLL hell. It's called "freezing your dependencies." It's not simply a matter of compatibility; if version 1.3 has support for feature A and version 1.5 adds support for feature B while not changing its support for feature A, is that worth a complete .so number bump? In the Ruby world, no (and it shouldn't be, IMO, either -- especially with how most version numbers are determined in the real world). Not when we have a much smarter system that actually works with a dynamic language that doesn't depend on build-time link specification.
>> Debian typically won't include two versions like that; they might include library1 and library2, but not 1.3 and 1.5.
> That is because shared libs actually increment their .so number when they are incompatible. Note that library1 and library2 are not necessarily versions 1.x and 2.x--the 1 and 2 are the .so number. In the past Debian has had to increment the number themselves because the upstream introduced an incompatible change and then didn't increment it.
In other words, the .so number has no relationship to the version in any way that a user can meaningfully understand it. (I knew this, it's just nice to get confirmation of it.)
> It sounds like the Ruby people need to pay attention to history and add an "so number" like field to their programs to signal compatibility changes. Or perhaps just adopt the fairly widely accepted "major number increases when compatibility is affected" technique.
In reality, the Ruby people have paid attention to history and noted that .so numbers don't work as well as some people think they do. In my Linux work, I make one binary that's supposed to work on multiple distributions. In at least one case (several years back in the end of the kernel 2.4 era), we found a problem where a common library (libc, maybe) would work if the underlying symlink was to library.so.x.y.6, but not if it was anything less than library.so.x.y.6. Yeah, that was fun to debug, especially since our link specifier was against library.so (-llibrary instead of -l/lib/library.so.x.y.6). The latter, by the way, would have prevented me from using library.so.x.y.7 if that came out and was still unbroken, whereas linking against library.so or library.so.x.y would have given me that; I just couldn't trust anything under .x.y.6.
SO versions also don't really work if you're not, you know, using an SO. Ruby has different needs than a C or C++ compiler, and with RubyGems it's trivial to install multiple versions in parallel in a way that they don't conflict.
>> I don't have to go through my source to change "require 'library-1.3/file'" to "require 'library-1.5/file'" when I want to upgrade.
> That is a red herring. In the ideal packaged world library-1.5 would install into the proper gem location such that "require 'library/file'" would work the same way it does if you had installed it by hand, regardless of its package name.
Except it's not a red herring. Like I said (and you may have missed), I can do this:
gem 'library', '=1.3'
gem 'library', '=1.5'
Debian's techniques wouldn't be able to work with the ~>, >=, or < operators for version specifications if you actually have 1.5, 1.5.1, and 1.5.2 installed (and there are reasons for this). Even if you ignore that capability, Debian can't solve the 1.3/1.5 version problem without affecting the folder name somehow: the version ends up baked into either the package name or the install path.
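RubyGems resolves a requirement against whatever versions happen to be installed side by side, picking the newest match. A rough sketch of that selection (the installed list here is hypothetical, and the real resolver does more):

```ruby
require 'rubygems'

# Hypothetical versions of one library installed side by side on disk
installed = %w[1.3 1.5 1.5.1 1.5.2].map { |v| Gem::Version.new(v) }

req = Gem::Requirement.new('~> 1.5.0')

# Activate the newest installed version that satisfies the requirement
chosen = installed.select { |v| req.satisfied_by?(v) }.max
puts chosen # 1.5.2
```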
What Debian maintainers seem not to like about RubyGems is that it acts a lot like "stow" in that everything related to a gem is stored together, including the extensions and the binaries (which breaks FHS). What they don't realize (either by choice or ignorance) is that the RubyGems developers thought about wanting multiple versions of Rake installed, and made it so the command line can select which gem version to run:
rake _0.7_ task # runs version 0.7
rake _0.8_ task # runs version 0.8
rake task # runs the latest version
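The underscore trick works because the wrapper script RubyGems installs peeks at the first argument. A stripped-down sketch of that dispatch (not the real binstub code, which goes on to activate the gem and load its executable):

```ruby
# Sketch of the version-selection dispatch in a RubyGems wrapper script.
def requested_version(argv)
  if argv.first =~ /\A_(.*)_\z/
    argv.shift # consume the version argument
    $1         # e.g. "0.7" from "_0.7_"
  else
    '>= 0'     # no pin: the latest installed version wins
  end
end

args = ['_0.7_', 'task']
puts requested_version(args) # 0.7
puts args.inspect            # ["task"]
```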
That, and the fact that "gem update --system" updates the functionality of RubyGems itself outside the control of the dpkg system, makes them unhappy and causes them to break fundamental behaviours that Rubyists expect and complain loudly about. Zed is right about this breakage; it's a problem and it isn't Ruby's (or Python's or Perl's or Java's) fault. Debian maintainers are making choices that aren't in line with what the programming language folks need when they're working cross-platform. Debian's policies and procedures are stuck in 1997 with statically typed languages and fully compiled software systems. It'd be nice for them to join the 21st century and be friendlier toward development and deployment environments that need greater flexibility.
The funny thing is that the functionality update is something the user has to request, and it is itself installed as a gem (e.g., rubygems-update-1.3.6). RubyGems is smart enough to know that if it's version 1.3.5 and there's a rubygems-update-1.3.6 gem in the installed cache, it switches to that for loading. If, however, Debian were to install 1.3.7, rubygems-update-1.3.6 would be ignored. If Debian actually just respected RubyGems instead of trying to make it 100% fit with its policies, things would work a lot better. The authors of RubyGems have to make this work on a lot of platforms, and Debian is the one that gets the least love because of the lack of respect from the maintainers.
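The switch-over logic described there boils down to a version comparison; a sketch using the versions from the example (this is not the real RubyGems implementation):

```ruby
require 'rubygems'

# Sketch of the self-update preference: an installed update gem's version
# is used only when it is newer than the built-in one.
builtin  = Gem::Version.new('1.3.5') # version shipped with the install
update   = Gem::Version.new('1.3.6') # from an installed update gem
packaged = Gem::Version.new('1.3.7') # what Debian might install instead

puts [builtin, update].max  # 1.3.6 -- the update gem wins
puts [packaged, update].max # 1.3.7 -- the update gem is ignored
```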
> I don't want to learn some stupid Ruby package directory convention. I already know if I install a Debian package that the configuration files go in /etc/packagename and the docs are in /usr/share/doc/packagename. Every package is like that and consistency is a good thing.
You say that like it's a good thing. The ability to do that is a large part of why it's broken. If you're a developer, yes, it's extremely convenient, and it means you never have to worry about API stability, back-ported security fixes, or even semantic versioning. You just bump the version number and push out a new release. It's a great developer tool that happens to also work for deployment if you're deploying in a language monoculture. If you're a general sysadmin, or targeting a system where rubygems isn't suitable (embedding ruby, for example), it's a bloody nightmare.
> They want to scatter the files all over their precious FHS
The "precious FHS" is part of what keeps sysadmins sane, and splitting packages isn't a problem unless filename conflicts happen. I know that rspec and cucumber used to have a problem here; I don't know if it's been fixed.
> They want to install as few versions of libraries as possible
Yes, and that is the correct approach if everything is supposed to be under control of the OS.
> after which they start mutilating random patches from upstream as 'support' instead of just shipping the new versions.
Again, this is the right thing to do, if upstream won't provide fixes, or keep to any kind of API or dependency stability. In the Ruby community, new versions tend to mean new features, which inevitably means new bugs.
> You're right that there's no way to reconcile this fundamental schism.
I didn't say that. I believe it's firmly reconcilable, with a mechanism not unlike backports. However, it does need gem authors to lay off some particularly inconvenient practices for it to be manageable. Just adhering to the RPG would be a good start.
> You're wrong about which community is breaking everything.
Rubygems is the latecomer here - standards like the FHS have been evolved over a generation before this little head-to-head happened. Besides, I don't think the "community" is breaking things. I think the "community" is behaving in a way promoted by a broken tool.
As a user, and amateur administrator of my home machines, I've learned the hard way that third-party packages supplied by the original software vendor are, in general, utter crap. They're built against a distro two or three versions old; or they're built for Mandrake or Ubuntu instead of Red Hat or Debian; they "install" a tarball to /tmp which the post-installation script untars in the root directory; they don't have any dependencies declared at all but crash at startup if you don't have a particular version of libjpeg... if you're relying on the packaging system to detect file conflicts or outdated dependencies, third-party packages can be very, very scary.
The single biggest reason I choose Debian (and sometimes Ubuntu) is the uniformly high-quality packaging. Zed's found one problem package, and a trawl through Debian's bug tracker will no doubt find others, but the fact of the matter is that with a single command I can install almost any Free software in the world, and have it installed and set up within seconds - and Debian has years of experience figuring out how to make that work. I don't think that's a thing to be dismissed lightly.
It does have the best software repository ever, but boy how much it sucks when something's broken.
The result is that any program or library that you use directly must be sourced from upstream, especially if it's less than 15 years old or written in a language other than C or C++. Luckily pretty much all of the modern programming language environments have evolved to cope with this onanistic clusterfuck.
Haskell has been more fucked by Debian than any other language I know of — when I last had to deal with it a year ago there were two broken+old builds of GHC with different package names and mutually-exclusive sets of packages that depended on them. On top of that, the version of cabal (Haskell's packaging tool) in the repository was so far out of date that you couldn't use it to build anything remotely recent (including useful versions of itself), nor could you use it with anything in Hackage (the central repo).
My old roommate had listened to me bitch about this stuff for years, and always dismissed me as crazy for thinking that the packaging was fucked (though he did share my hate of debian-legal). Last week he called me out of the blue and apologized — he'd installed Wordpress through Debian and they'd broken it up into a bunch of library packages, but still left a base skeleton of random php files and symlinks, accomplishing nothing but breakage and unportability.
A lot of the software they package comes with unit tests. Those unit tests have a purpose. They are meant to see whether or not the software as configured and installed, works.
Debian systematically strips those unit tests out, and never runs them to see how much stuff they are shipping already broken. Why? Why not package the unit tests as an optional package, and make sure they have a wide variety of systems in different configurations running them to notice problems?
I can't count how many times I've tried to install a Perl module from CPAN, found it was failing its unit tests, installed the Debian package with it, ran the unit tests, and found that the unit tests said the package, as installed, was broken. It's not as if the package developer didn't try to tell you you were missing something. Why throw that information away?
If they did this then they'd automatically catch a lot of problems that currently get missed. Heck, insert a test for ruby gems that says, "Does this software start?" They'd have caught this bug automatically.
Until Debian catches up with standard best practices, I completely can't agree with the meme that they run software better than everyone else. It isn't as if unit testing is a new idea. It has been pretty mainstream for most of the last decade. Some were doing it earlier. Perl has been doing it since the 1980s.
Debian CPAN packages do run the unit tests--they just run them when you create the package and not when you install the package. In fact, the package will not build unless the tests pass.
One of the big things that unit tests are supposed to catch are hidden dependencies. By nature, hidden dependencies are likely to be installed on the maintainer's machine, so unit tests only help if they are run somewhere else.
As a prime example, the bug that Zed was complaining about was a dependency problem of EXACTLY this sort. It worked fine on the maintainer's machine. It didn't work on Zed's machine because he didn't have a key library installed. No unit test the maintainer can run would catch this. Even a stupid unit test Zed can run would have flagged this immediately.
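A post-install smoke test of the kind being suggested can be tiny — something like this hypothetical check, which just proves the library loads at all:

```ruby
# Hypothetical post-install smoke test: does the packaged library even load?
# A missing dependency surfaces here as a LoadError instead of at first use.
def loads?(feature)
  require feature
  true
rescue LoadError
  false
end

puts loads?('json')                 # true  -- ships with the standard library
puts loads?('no_such_library_here') # false -- the kind of failure Zed hit
```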
Incidentally about the Debian CPAN packages, my experience with them a few years back was that they were so consistently broken (usually because of locale issues) that I have a hard time believing that unit tests had actually been run by anyone. Or if they were, they were ignored.
It seems the same to me as Slackware or one of the other "compile everything yourself" distros--they force you (the installer) to do a bunch of redundant work. If someone can compile it on my architecture, then why should I ever have to compile it? The same with unit tests--if someone can run the unit tests on my exact configuration, then why should I ever have to run them?
If you're installing item after item, then the odds are that at some point you're going to install the dependencies, and after that the tests will pass and mean nothing. So you keep on wiping, installing one thing, and testing. Fine. Now you have cases where package A conflicts with package B and you never find it. And now what? The number of combinations to test is incredibly large, and nobody is likely to have tested your exact system.
As for the unit tests, make them available, make them optional. That's OK by me. I'd opt to use them. Particularly since I learned the hard way that most of what I want in CPAN either isn't in Debian, or isn't up to date there. So I install things directly from CPAN. That means that when I install from Debian, there is a possibility of library conflict that I want to notice.
I don't know about 'a lot', but in any case, your idea is potentially a good one, although probably not 'low hanging fruit' in terms of the amount of work it would take to get all the various systems integrated and talking together.
And in terms of 'beef with Debian', it seems to be a general beef with modern OSes, because none of the others do it either, do they?
Could well be, but at this point, we're moving pretty far away from Zed's spittle-flecked exhortation to pursue "in-person confrontations meant to embarrass package maintainers, SEO tricks to get people off Debian, promotion of any alternative to Debian, anything to make them pay" and merely talking about what could be made better (and there's always something).
I'd be embarrassed by that!
> This package contains the tests of ipython.
> You can check this way, you can test, if ipython works on
The reason is simple: control. If I can't control every stage of the development, deployment, and distribution process, I don't want in (yes, I'm a control freak. No, I don't think it's necessarily a blemish on my personality; it's just who I am). If there's something wrong with how my users perceive my software, it's because of something _I_ did wrong, not because someone took my hard work and toil and perverted it with their own changes, be it making the code ugly with nasty function names or dirty hacks (in my opinion, of course), or distributing it in a way that makes users cringe. It's my hard work, and I deserve to be (a) in control of the user experience, and (b) attributed.
If you're willing to make your awesome utility/code that you've spent 5 years developing and maintaining fully available to the public, giving up all control of the end-users' perception of the package, you have a bigger heart than me. But me, I'm a selfish guy when it comes to my users, and I don't want anyone to even have the possibility of hurting them. I have _my_ users' best interest at heart, you probably don't. At least, not to the same extent that I would.
Then it would not be open source.
Some people, first and foremost rms, dismiss the term "open source" completely for missing the point.
As the upstream developer, Zed wants to make sure that his software is distributed in a manner that is useful to people and actually works and shit. This gives him jollies.
As the distributors, the Debian contributors want to fulfil ideological goals, fit everything into their neat hierarchy, assuage neckbeard sysadmins that the version number is sufficiently ancient (not that the software is, given the distro patches), pretend to care about people running it on the PA-RISC port of Debian GNU/kFreeBSD, wank about licensing, and reach consensus. This gives self-important teenagers and manchildren all kinds of jollies.
That said, Zed is blowing this way out of proportion.
Uh... that might work for one language, but for 10? 20? And some poor sysadmin is supposed to use all these disparate tools to install security fixes too?
And you're forgetting that there are "end-user" programs that depend on those libraries as well, so they really do need to be packaged up.
> That said, Zed is blowing this way out of proportion.
That goes without saying.
I'm not forgetting this. I don't think the user's programs should use the same libraries (or even the same language install) that the distro's programs depend on. This way, if the user messes something up (like a botched library upgrade), the entire distro won't be hosed. It's a really bad idea from a stability, security, and design standpoint if you don't keep these two things separate.
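One way to get that separation with RubyGems is to point user-level installs at a private GEM_HOME, leaving the distro-managed tree untouched. A sketch (the path here is arbitrary):

```ruby
require 'rubygems'

# Direct user-level gem installs into a private tree so the distro-managed
# gem directory is never written to.
private_home = File.join(Dir.home, '.gem', 'user')
Gem.paths = { 'GEM_HOME' => private_home, 'GEM_PATH' => private_home }

puts Gem.dir # now the private tree, not the system one
```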
Have there been clashes between the Ruby and Debian communities? Yes! Are we all working toward a solution? Yes! Will it happen instantly? Unfortunately no, but I think the two groups are now talking and are making some good progress. Keep an eye out for some good stuff in the future.
IME, separate package-management for each and every language is more painful than the problem he is describing.
Rubygems, CPAN, whatever other languages use.
As soon as you install things outside of the distro package-manager, it has no idea what is going on with those packages and so continues to think that they aren't installed. If there was a way to get the package-managers for language libraries and the distro to play together nicely and work for dependencies etc then this would be OK, but as it is it's a bit of a sysadmin nightmare.
Edit: I thought that installing distro-supplied packages into /usr/local was against the FHS?
Yes, hacking the language's package manager to talk to the distribution package manager is probably the only way to really go about solving the problem in an elegant way, and even that is not without its pain points.
Absolutely, but this problem has been around for a long time, and fighting with distributions on how to get it to work won't help anyone.
Of course, it also means that if you install a buggy version of a library from, say, CPAN then you could break distro-supplied packages... I guess it depends on which you value more: distributor-provided testing or being able to install out-of-band packages onto your distribution more easily and with proper dependencies.
Thanks for posting the link, but being labelled a rant, I nearly didn't go there. Just want to encourage others to read it for an excellent explanation of the issues at hand from the guy who is doing the work.
Sure you could argue that yes, ruby should be installed with apt-get and alternative versions should be handled with Debian's alternatives infrastructure...
I think this is an interesting case in which version numbers are more than just version numbers, they are more like sister projects, and they don't fall neatly into the "conservative and stable" or "bleeding edge and risky" camps the way maintainers typically view different versions.
From the perspective of a package maintainer, if we had to include an alternatives list for every dot version of every package, then the distro would explode in complexity.
Ruby and Python just happened to grow so quickly that their growth didn't immediately trigger the appropriate response from distro maintainers, and very quickly the community worked around the problem.
Not everyone is a heavy Ruby user - someone might just want to install a package that depends on Ruby and be sure that things work. Just as they may want to install a package that depends on Java, Ada, Haskell, Erlang, Perl, Python, PHP, Ocaml, Tcl or any of the many other languages present in Debian, and not have to deal with a different package management system for each language on the computer.
> Ruby and Python just happened to grow so quickly that their growth didn't immediately trigger the appropriate response from distro maintainers
It's actually a fairly old problem. Perl and CPAN have many of the same issues. I'm not sure there's a perfect solution.
A user can manage his/her own stuff in ~/bin and add it to path if he/she wants.
A sysadmin can put things somewhere like /usr/local/experiment33 and make sweeping changes just by changing the default path, which a user could override for customization.
This is true, but it's more than that. Ruby and Python are both committed to working on a wide variety of platforms, including platforms that don't have native package management systems (Windows, Mac OS X, Solaris, etc.). For Ruby programs and libraries, RubyGems are a more elegant solution than telling people 10 - 20 different ways on how to work with their project, depending on the OS.
The only people trying to make it bearable here are the package/ports managers and yet they don't get any kind of reward for their job. They have to come up with crazy tricks to make things just work because people who write software are unable to write proper install notes, list dependencies correctly, etc..
This process is heavy and slow, and doesn't always produce the expected results (there's no doubt about it). So people thought it would be great to have language-specific package management systems instead, and made it unbearable again. Alright. I personally never use them and install stuff by hand, unobtrusively.
Now, do your job. Go file a bug report. Or better yet: fork. This is not discussing things, this is not helping. Tears don't help.
The build process is also sophisticated enough to be able to 'slot' potentially-incompatible versions of the same package so that they can be installed simultaneously. From my recent snapshot of the package repo I can install 5 different simultaneous major versions of postgres, with a variety of independent build options.
Debian is all about stability and volunteer effort.
Whining is not going to fix things.
On Debian stable, "aptitude install rubygems" followed by "gem install rake" just works. This looks like a completely gratuitous rant; I've seen Zed Shaw better inspired.