Debian 7.4 Released (debian.org)
156 points by bbzealot on Feb 9, 2014 | hide | past | favorite | 41 comments



I usually use Arch Linux but I've been playing with Debian lately. I don't like dpkg as much as I like pacman, but it's a really nice distro (assuming the age of some of its software packages doesn't bother you).


Sigh... People always remark on Debian having old packages. Well, the "stable" repository is indeed stable, which in this case means run-in-production-for-5+-years-on-many-archs without problems.

But then there are "testing" and "unstable" too, which get newer packages quite often. I personally run "unstable" on my main development machine and have for many years. Even "unstable" is very stable, but might require some manual work once a year or so, with a solution often easily found by some google-fu.

Then there is also "experimental", which might very well have broken dependencies etc.

The cool thing is you can run stable, but then pick some packages from unstable using what is called apt-pinning. This way you can get a rock solid base, but use newer packages of some software. Best of both worlds!
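A minimal sketch of what that looks like (the priority numbers here are illustrative, not from the comment): with both suites in sources.list, an /etc/apt/preferences along these lines keeps stable as the default while still allowing unstable on request:

  Package: *
  Pin: release a=stable
  Pin-Priority: 700

  Package: *
  Pin: release a=unstable
  Pin-Priority: 200

A priority below 500 means unstable packages are never installed automatically, but `apt-get -t unstable install <package>` will pull just that package (plus whatever dependencies it strictly needs) from unstable.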


Also, besides apt-pinning, I sometimes just do a build-dep for certain software and build it manually, the good old way. ;)

But obviously you try to avoid this in production environments if possible.


Actually, I've become more and more a fan of "manually backporting" -- just add the apt source entry for experimental to your sources, then: apt-get build-dep <package>; apt-get source <package>; cd <package>; dpkg-buildpackage.

It only works for comparatively small packages, and you'll need to keep an eye out yourself for security patches -- but it allows you to install stuff via dpkg. Generally, if the thing is too complex, you'll probably want to run a distribution package anyway. Something simple in this context is nginx, varnish, or nodejs (though I tend to run upstream nodejs -- too much of a moving target...); something complex might be pygame or network-manager -- basically stuff that has deep and wide dependencies.
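Using nginx as the example package, the whole dance is roughly this (a sketch; it assumes deb-src lines for the newer suite are already in sources.list):

  # apt-get update
  # apt-get build-dep nginx
  # apt-get source nginx
  # cd nginx-*
  # dpkg-buildpackage -us -uc
  # dpkg -i ../nginx_*.deb

(The -us -uc flags just skip signing the resulting packages.)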

For stuff that I need to get from upstream, I try to use xstow -- it's a great way to keep either /usr/local or ~/opt/ manageable. Note: for nodejs, it doesn't read the prefix variable for a plain make install; use DESTDIR (and do some manual cleanup):

http://www.debian-administration.org/articles/682#comment_26


I should add that yet another option is to use chroots, via schroot. It can get really hairy, but it can be the "easiest" way to get stuff like 32bit upstream wine running on a 64bit Debian box (without resorting to downright virtualization of some kind).

https://wiki.debian.org/Schroot

http://www.debian-administration.org/article/schroot_-_chroo...

Note the warnings about going willy-nilly with "xhost +". Much better to bind-mount home under the schroot(s) -- partly because you'll probably want access to your home folder anyway (say, your Firefox or Chrome profile?), and partly because it gives access to the X11 auth cookie, keeping everything as (in)secure as it usually is.
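For reference, a chroot definition in /etc/schroot/schroot.conf might look something like this (the name, path, and user are made up for illustration):

  [wine32]
  description=32-bit wheezy for wine
  type=directory
  directory=/srv/chroot/wine32
  personality=linux32
  users=myuser

The bind-mount of home is then configured in the profile's fstab file (e.g. /etc/schroot/default/fstab), rather than in schroot.conf itself.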


Docker seems like another good way to do this. Maybe I just fail at using schroot but I usually end up with a huge mess in /opt when I use it.


It does have a (too) steep learning curve. I second docker as an option.


I gave up on stow in favor of simply --prefix=/local/package-1.0.0 and adding that bin/ to $PATH in user-specific .zshenv. But this may only make sense as the packages I compile from source usually aren't dependencies for other packages. (Although if they were, I envision myself simply adding to LD_LIBRARY_PATH etc).
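In other words (paths illustrative), the .zshenv bits amount to something like this, looping over every package prefix instead of listing them one by one:

  for dir in /local/*/bin; do
    PATH="$dir:$PATH"
  done
  export PATH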


It's nice to have one (or a few) directory-hierarchies, (such as /usr/local) -- as you then can set LD_LIBRARY_PATH, MAN_PATH and PATH once, and have (most) everything just work. I find that the stuff that "only" needs to be added to path might as well just be symlinked to a "bin"-directory somewhere already in path.

That said, I don't advocate doing things that are more complex than what is actually needed. For me xstow strikes a good balance.


Hey thanks for "apt pinning". Never knew about that. One more question - how do you get the Ubuntu PPAs into Debian? It seems a lot of new software is exclusively published on Ubuntu.... at least that's what it seems from Webupd8 and omgbuntu


An answer to your question, and a method for figuring out other "how to do something ubuntu in debian" questions. A great way to answer a lot of these is simply to use `apt-cache search`; however, this is not always enough. The next thing to try is apt-file. Install apt-file:

  # apt-get -y install apt-file
Update the apt-file db (it might do this on install, I forget):

  # apt-file update

Now the bit you are interested in: How do you add PPAs in ubuntu? Answer: `add-apt-repository`

Does debian have something named add-apt-repository?

  # apt-file search add-apt-repository
  software-properties-common: /usr/bin/add-apt-repository
  software-properties-common: /usr/share/man/man1/add-apt-repository.1.gz
That looks familiar.

  # apt-get install -y software-properties-common 
  # add-apt-repository ppa:libreoffice/ppa
As a long-time debian fanboy I have to say "new software is exclusively published on Ubuntu" was kind of funny. Unless you were talking about `upstart` ;) A great resource for debian is the Debian Administrator's Handbook. The section on pinning is:

http://debian-handbook.info/browse/stable/sect.apt-get.html#...


No I'm not referring to upstart. Incidentally, I'm pretty familiar with apt in the Ubuntu context, but this is not what I was referring to.

For example, take a look at this - http://www.omgubuntu.co.uk/2014/01/turpial-3-available-ubunt.... How do you get this into Debian? This is what I mean by software that is published for ubuntu. I do not mean that it cannot be built for debian, but rather that it is not PUBLISHED for debian.

Or is the only option to do a dpkg-build?


I don't know what you are missing here. Did you think I spent all that time typing about add-apt-repository because a-a-r was a complete red herring? Did you notice that the page you link to mentioned a-a-r?

  $ sudo add-apt-repository ppa:effie-jayx/turpial


Apart from people on various #debian IRC channels getting grumpy if you mix package sources, sometimes Ubuntu has a newer libc than Debian so it can possibly cause problems. I wouldn't pull in an Ubuntu package for something in production - try backports or building from dsc first.


It is true that "sometimes it has a newer libc", and you can check this. If the libc version is important, there will be a versioned Depends in the control file. If the libc is too new in saucy I can change:

  deb http://ppa.launchpad.net/.../ubuntu saucy main 
  deb-src http://ppa.launchpad.net/.../ubuntu saucy main 
to:

  deb http://ppa.launchpad.net/.../ubuntu precise main 
  deb-src http://ppa.launchpad.net/.../ubuntu precise main 
 
OP was asking about software "exclusively" for ubuntu, so I have no idea why you are mentioning backports.

When I need to get something to work (especially when it is an alternative twitter client, as in the instant case), I couldn't care less whether people on #debian would get grumpy if I told them about my solution.


This is awesome, thanks! I've always assumed that PPAs were Ubuntu-only magic for some reason.


Please look at backports before thinking about apt pinning. Mixing stable with newer releases can sometimes get tricky, while backports are actually designed to work with the stable release.

I've been using Debian stable on desktops for years now with few problems, and backports have only improved in the past couple of years.
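For wheezy that boils down to one sources.list line plus an explicit -t flag, since backports are never installed by default (a sketch):

  deb http://http.debian.net/debian wheezy-backports main

  # apt-get update
  # apt-get -t wheezy-backports install <package>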


What's the difference between me grabbing a package from backports and me doing a build-dep/dpkg-buildpackage from a testing/unstable dsc?


You'd have to diff the package (sources) -- generally backports will try to be "conservative" in their dependencies (not pull in a new libc or kernel unless actually needed). I'm not sure (never checked) how far backports go in actual backporting -- but in general they'll be less invasive. I expect there are probably a couple of small patches when needed (say, to patch around a newer version of a library not being available, if the new functionality isn't strictly needed).


That's exactly how the backports are built: fetch from testing, build on stable. Dependencies are not backported unless they are absolutely necessary -- only the package itself.


I used to be of the opinion that Debian stable was too old, that its packages were outdated. Then I realized that I only need Emacs 24 (compiled from source) and a recent browser (both Chrome and Firefox have up-to-date repositories). I've been using Debian stable for over a year now, and I don't regret it. As a student, it is more important that my machine remain break-free than it is that I have the latest version of my bittorrent client.


For firefox on debian use http://mozilla.debian.net/


Out of curiosity, do you have issues when running Iceweasel? I had some issues with my IDE (PyCharm) and a few random web applications that just hated Iceweasel. I just installed Firefox myself.


I'm not the parent but I've never had issues with Iceweasel. There was a time when the user agent string did not contain "Firefox", but it does now [1] (that might have caused you problems?). And if you're using wheezy, there are backports to the latest Firefox (iceweasel) packages. I usually see the update on the same day or the day after the official Mozilla release.

[1] since 2009 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=399633


No problem with Iceweasel once I changed the user agent string.


I've discovered, between using Arch, Debian, and OpenBSD, that I rarely need the newest version of anything. The only time it has been an issue for me is really at the kernel level, when a newer kernel supported something that an older kernel didn't.


I've used Arch linux on my main computer for the past ~4 years, but I'm considering using Debian on my next computer, just to give it a shot as my daily driver.

Debian packages aren't necessarily much (any?) more stale than they are in Arch. It depends on whether you're using stable, unstable, or testing. (If I remember correctly, "unstable" is actually fairly stable, whereas "stable" is more like an LTS. "testing" is closer to Arch, though less "bleeding edge" than Arch is.)


I think it's stable, testing, and unstable, in order from stability to instability. Stable is guaranteed long-term support. Testing has generally had a week or so of testing to ensure no obvious catastrophic errors, and unstable is basically as-is, straight from initial packaging.

I run testing on my non-prod (experimental) server instances and would probably go with stable for anything production quality, although I've never experienced anything going wrong with Testing.

I have my server tied to "Testing" rather than the current name, which turns it into a rolling distribution (like Arch).


I'm a Slackware man in everyday life, but I do use Debian to run our NAS at my workplace. Stable, easy to use, and dependable; I have no complaints about its performance.


dpkg? Surely you mean apt-get / aptitude, unless you want to play with some non-straightforward way of installing packages.


I'm sorry, you're right, I meant apt-get. Since I know it is a front end for dpkg, I sometimes mix them up.


I can't find any official announcement of it, but it looks like there are now official HVM AMIs available with the release of 7.4: https://wiki.debian.org/Cloud/AmazonEC2Image/Wheezy


If you're running Debian on EC2, do you have the problem where restart from within the instance doesn't complete? (Have you found a way around it?)

Edit: Not sure whether there was a real problem before (vs PEBKAC), but the new images _do_ reboot successfully (takes about 2 minutes). Now to get systemd running...!


No, I haven't noticed this. That said, in the past (2+ years ago) I contacted AWS support about an instance that wouldn't start and was told that shutting down from within an instance would normally work, but was not officially supported.

I think they were just looking for a scapegoat in that particular case.


Thanks.

For anyone reading along, reboot does now work, and I was just able to install systemd (apt-get update, apt-get install systemd, edit /etc/default/grub, edit /etc/fstab to remove the systemd-unsupported nobootwait line, update-grub, init 6). Thank you Debian cloud team :-)
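Spelled out as a sketch (the init= kernel parameter is the usual way to do this on wheezy, not something stated in the comment above):

  # apt-get update
  # apt-get install systemd
  ... edit /etc/default/grub: add init=/lib/systemd/systemd to GRUB_CMDLINE_LINUX ...
  ... edit /etc/fstab: drop the nobootwait mount option ...
  # update-grub
  # init 6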


Ah. The debian-cloud mailing list announcement for the new images just landed in my inbox. The archive[1] has yet to update.

[1]: https://lists.debian.org/debian-cloud/2014/02/threads.html


Nice to see openssl enabling assembler implementations on arm. Seems to improve scp performance a lot!


There's a typo in the HN title.


What is the preferred way of upgrading from 7.3 with aptitude?

On the wiki https://wiki.debian.org/Aptitude they say dist-upgrade is no longer recommended, so I am a bit confused (I don't keep up with the linux world very much; I am just a linux hobbyist).

Thanks.


Does anyone have a link for a netinstall image hosted via BT? The one on the debian page is broken.

https://www.debian.org/CD/netinst/


I don't think the 7.4 images have been pushed out yet. I always download the newest version in case I need it, and it's not available for me yet either.




