Hacker News
Ask HN: What is the best Linux distro for a development laptop?
177 points by selmat on Jan 7, 2017 | 318 comments
Recently I upgraded my 2-year-old laptop (it now has 16GB RAM, a 275GB SSD, and a backlit keyboard), and now I'm considering which Linux distro to use as my main OS.

What are your experiences with various Linux distros as a main OS? Which one is the best? What does your dev environment look like?

These points are important to me:

- Ability to run virtualized systems (Windows, other Linux distros)

- Good battery management

All necessary dev tasks and experiments will be performed in virtualized systems.

Thanks




Debian, and I'm surprised more people aren't saying so. Ubuntu and Mint, which are built on Debian, don't add much value these days IMO, and Ubuntu's six-month release cycle is intolerable. If you get behind on upgrades you are screwed.

Even after all these years it's hard to beat apt-get. I think a lot of people dismiss Debian as unmodern or somehow lesser than the distros built on it, but that's hardly the case. Debian "testing" has all the latest software and yet is still very stable. Many people are scared away by the "testing" label, but this is what Ubuntu and Mint are based on.

I find Arch, Gentoo and many others to be a lot of work. I don't have time to spend fiddling around with the OS.

Don't get me started on Fedora/CentOS/RedHat. yum/rpm is just horrible if you've used nearly anything else. I think the only reason this OS family is still alive is because RedHat managed to sell so many copies to big corps and universities. I think people still use it mainly because they were exposed to it at school or work and don't know any better.


> If you get behind with upgrades you are screwed.

Why? You shouldn't rely on system files for development... you probably want a proper dev environment that can later be replicated into production or staging (eg. containerization, VMs, virtualenv, etc). Just stick with Ubuntu LTS releases, and enjoy the consistent updates (eg. security) and the massive support benefits (eg. forums) associated with running one of the most popular distros.

Personally, I love the consistent Ubuntu updates. Granted... I always switch to xfce for the desktop environment.


> Why?

Because if you miss three releases, then you have to do three upgrades. If you set up a system and come back to it a couple of years later to find you need to upgrade, you can plan on fighting through multiple system-breaking upgrades. Yes, I know you can mitigate this with LTS, but that's a big compromise.


But it's a dev machine, right...? So you just nuke the install, `git pull` all your key configuration files and dev environment, and you're right back up and running. Same as if you switched to a new machine or one whose hard drive died. I still don't see the problem.


Reinstalling is a sorry solution. I haven't had to reinstall my dev machine in about a decade.


Where do you buy your HDDs? You're sitting a couple standard deviations beyond MTBF.


I have upgraded to an SSD but just copied the OS and boot sectors to the new device.


One of my friends asked me how often I reinstall my operating system.

I told him I haven't installed a new OS on my primary PC in 10 years. I just dd it from one drive to another.

dd is by far the easiest way of making bootable media on a flash drive as well. I use that method to create install media for my other computers.
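For anyone unfamiliar with the trick: dd is just a raw byte copy, so pointing it at whole block devices clones a disk, boot sector and all. A minimal sketch, demonstrated on ordinary files here since targeting real devices (e.g. if=/dev/sda of=/dev/sdb) needs root and overwrites the destination:

```shell
# dd copies raw bytes; with block devices as if=/of= this clones an
# entire disk, partition table and boot sector included. The file
# names below are placeholders so the demo is safe to run anywhere.
printf 'mbr+partitions+filesystem' > old-disk.img
dd if=old-disk.img of=new-disk.img bs=4M conv=fsync 2>/dev/null
cmp -s old-disk.img new-disk.img && echo "clone is identical"
```

The same pattern writes an installer ISO to a USB stick (of=/dev/sdX for the stick, after triple-checking the device name with lsblk).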


> Yes I know you can mitigate this with LTS but that's a big compromise.

Genuine question as an Ubuntu user: where do you win by using Debian on a laptop? I totally understand using it on servers, as it is the definition of 'rock solid' and you can ensure that it will work 100% of the time.

Admittedly I don't pay a huge amount of attention to the differences, but isn't using Ubuntu LTS a compromise in the same way as Debian stable? Specifically, packages are oriented toward stability. Consequently, you don't get improvements to the kernel, or to other packages you may use frequently. On my laptop, I don't want to run kernel 3.16, which lacks the latest improvements for Intel processors, the direct rendering manager, and power saving. I don't particularly want to use quite outdated versions of GNOME either. Six months, to me, is a solid amount of time to sit on a release.

Some people suggest (misguidedly, according to Debian volunteers) using Unstable on a workstation/laptop. That caused my system to break outright because I was using an Nvidia graphics card, so I just installed Ubuntu and it worked from first boot.

Using testing is sworn against by almost everybody, as it's the worst of both worlds - it's not stable and it's not fixed quickly.

Stable plus backports, suggested elsewhere in these comments, sounds nice only if you consider ensuring that water drains equally out of each hole of a colander an adequate use of your time. You then have what I'm fairly sure is explicitly listed as "don't do this" on the DontBreakDebian[0] page.

[0] https://wiki.debian.org/DontBreakDebian


If you run Debian on your servers, then having the same software, in the same versions, on your workstation is good for consistency.

There's something to be said for running multiple distributions so you spot binaries in different places, etc, but really running the same system "everywhere" has more value to me.

My laptop/desktop run Debian stable, my servers run Debian stable, and so I know what to expect.

When I need things that aren't available, or have to be backported, I know how to do that. For extreme changes I can use containers or virtual machines. But for the past few years I've not been convinced by the feature churn or value added by Ubuntu. (Especially when you see that their "universe" isn't really supported by anybody. Hell, even looking at bug reports for their supported packages is often depressing: bugs open for a very long time with no updates for lack of people willing/able to fix them, combined with forum advice which is often the blind leading the blind.)


> Using testing is sworn against by almost everybody, as it's the worst of both worlds - it's not stable and it's not fixed quickly.

Nonsense. Testing is stable and up to date. Ubuntu is based on testing. Debian's rolling release system makes upgrades easy.


The issues I had whenever I strayed from Debian weren't technical deficiencies of yum, etc, but rather the choices made by other distros. Installing "vim" and unexpectedly getting some sort of "minimal" version of vim. Packages which symlink or install large binary files or log files into /etc (suddenly grepping /etc becomes problematic). In general, lots of little decisions which make me exclaim the distro equivalent of "Did anyone even play-test this?"


> Packages which symlink or install large binary files or log files into /etc

wtf? who is actually incompetent enough to do something like this?


This was something like ~8 years ago, but here's an excerpt from my notes when using Redhat:

    # ls -l /etc/httpd/
    conf
    conf.d
    logs -> ../../var/log/httpd
    modules -> ../../usr/lib/httpd/modules
    run -> ../../var/run
Reactions:

    * Why is 'grep -r /etc' taking so long?
    * Why am I seeing log entries in the output?
    * Why am I seeing 'permission denied' on a bunch of socket files?
Debian seems to have the most respect for the FHS. http://www.pathname.com/fhs/


A few more, from my notes:

These were the default permissions of the apache log dir:

    # ls -ld /var/log/httpd/
    drwx------ 2 root root 4096 Jan 18 12:09 /var/log/httpd/
I guess their expectation is that only root should ever need to read a log file? Contrast this with Debian, which gives read access to admins:

    $ ls -ld /var/log/apache2
    drwxr-x--- 2 root adm 4096 Jan 16 06:25 /var/log/apache2
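Replicating Debian's convention by hand on a Red Hat box is a two-liner. A sketch, run against a scratch directory here since touching /var/log/httpd needs root ('adm' is Debian's choice of group name):

```shell
# Reproduce the 750 root:adm layout Debian uses for /var/log/apache2.
# On a real system the target would be /var/log/httpd, and you'd also
# run (as root): chgrp adm /var/log/httpd && usermod -aG adm youruser
mkdir -p ./httpd-logs
chmod 750 ./httpd-logs
stat -c '%a' ./httpd-logs    # prints 750
```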
Redhat also decided to stick wsgi sockets into the log dir. If you try to run a wsgi process as anyone other than root, you get a permissions error:

    (13)Permission denied: mod_wsgi (pid=31431): Unable to connect to WSGI daemon process 'bob' on '/etc/httpd/logs/wsgi.1965.3.1.sock' after multiple attempts.
To fix this problem, you have to tell wsgi to stick its socket files somewhere more reasonable, like /var/run (which is what Debian does by default).

When you look up the modwsgi "Common Problems" page, the very first entry describes exactly this issue... https://code.google.com/archive/p/modwsgi/wikis/Configuratio...
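For reference, the fix from that page boils down to one directive (the path is an example; the directive name is from the mod_wsgi documentation):

```apache
# Put mod_wsgi's daemon sockets under /var/run instead of the
# ServerRoot, which Red Hat aliases to the root-only log dir:
WSGISocketPrefix /var/run/wsgi
```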

'cfdisk' was removed from the distro because it was "a pile of junk". Really? That's funny -- I've used it to create dozens of Debian partitions and never had a problem. https://partner-bugzilla.redhat.com/show_bug.cgi?id=53061


> These were the default permissions of the apache log dir

Since systemd came along, you're supposed to use journalctl though.

That applies to Debian and Ubuntu too.


Side note: What do you use to keep detailed notes, to enable you to quickly refer back to stuff 8 years later?


My problems on rpm systems are always about some packager using different versions of dependencies than what ships with my OS.

Nobody has the QA force Debian has.


I agree that Debian is a great developer platform, I use it myself. But as a note on the Ubuntu's 6 month cycle, it is easy to just ignore it and only use the LTS releases and upgrade every 2 years. This makes it very similar to Debian stable in release cadence and the LTS releases tend to be of better quality.


I use debian but I'm not sure I'd recommend it for devs who don't want to fiddle with their desktop. Stable is great for servers and simple desktop computing, but is too stale for developers. Testing and Sid are both rolling updates, so if you don't keep updated, things can break (or more usually, change in workflow) sometimes when you do update. Ubuntu and Mint are snapshots of these rolling setups, so they stay more consistent over time.

If you don't like fiddling with your OS once you get set up, then you should be using some sort of LTS distro like Ubuntu or perhaps a BSD. Things like debian, arch, and fedora require a modicum of maintenance (not much, but it's there). If you don't mind (or like!) fiddling, then a rolling release distro is best.


Having spent years on debian, and having run stable and sid, the only really sustainable arrangement is to use stable+backports. Don't run sid! Gradually your system will become a mess and you'll end up doing a reinstall.

If you need a library that isn't in backports, just compile from source and install to /usr/local, it's not hard!


Datapoint to support you: this conversation reminded me to run an update on this Debian sid laptop, which hadn't been updated in a couple of months. About a gig of downloaded updates later, I came back to a lock screen with a whole new look, and my desktop wallpaper had changed (I think I was just using the default before, so that's understandable).

It's fine because I understand what Sid is about and this is to be expected, but I wouldn't suggest this to a naive user. And since Testing gets these changes in short order, I'd stay away from that as well if the user isn't comfortable fiddling.

So yes, stable + backports, unless you're comfortable with change/fiddling (same with any rolling distro).


> Even after all these years it's hard to beat apt-get.

I would say Fedora's dnf has it beat quite objectively, but to each their own.


After almost 20 years Fedora is only just catching up with where apt-get was long ago.


DNF uses delta updates, apt-get still doesn't AFAIK.


See debdelta


> dnf has it beat quite objectively

You mean even though not the slightest care was given to reliability while coding it?

https://lists.fedoraproject.org/archives/list/devel@lists.fe...


Depends on your definition of reliability I guess.

If I have two scripts running in parallel which both want to install packages, using apt will cause one of the scripts to fail, because apt is already locked.

dnf however will be just fine, because instead of failing it will just queue and wait for the lock to be released.

There's small things like that all over which to me, as an end user, makes relying on dnf feel much safer.


> Depends on your definition of reliability I guess

No, not at all. Not doing the elementary work of designing the package manager to keep the system in as consistent a state as reasonably possible is unreliable. On top of that, when the developers of said software ignore the issue, deny that it is their responsibility, and tell people to use a different program instead, I will personally not touch it with a ten-foot pole.

Refusing to run when a concurrent copy is already running on the other hand, is just a comfort issue.

> makes relying on dnf feel much safer

So you are OK with ignoring a real reliability issue, and a trivial UX behavior is enough to create an illusion of safety? Check the thread I posted: Fedora developers are more than happy to recommend not using DNF and using the offline-updates thing instead.

Now try running multiple instances of "offline update" thingy and see how far it goes in your pet scenario ;)


> so you are OK with ignoring a real reliability issue and a trivial UX behavior

Reliability has many dimensions.

If I can't trust that the package manager handles all the issues you can encounter when managing packages, I cannot trust any component which interacts with the package manager to be reliable either.

dnf does a lot of right things(tm) which apt doesn't, which means I can trust that scripts I write to work with the package manager will work reliably.

Apt outsources approximately two out of three failure modes to its users, meaning every automated attempt to interact with it is bound to fail on stupid shit it shouldn't.

And then you can't rely on apt. Apt is not reliable.

To me it seems like a misinformed version of "KISS"/"YAGNI", because you do need a package manager to handle and abstract all those things. That's what it's for.


OK, now I understand what you mean by reliability and I appreciate that. Thanks for explaining it.

I'm just not comfortable with you claiming it is more reliable than apt, because then some people will think it is a package manager designed with care to never leave the user's system in an inconsistent state, even in the most easily correctable scenarios, when that's actually not the case.


You cannot safely install two packages concurrently. apt-get's choice to refuse to run is a much simpler solution that better adheres to the KISS principle and is therefore less prone to error. You could queue apt installs if you really wanted to, with a simple for loop on the command line.
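That queuing can also be done with flock(1), which serializes callers on a lock file, roughly the wait-for-the-lock behavior being credited to dnf. A sketch, using echo as a stand-in since real apt-get runs need root:

```shell
# Serialize would-be-concurrent package installs on a lock file, so a
# second caller waits for the lock instead of dying on apt's dpkg lock.
# In real use, replace the echo with: apt-get install -y <pkg>
lock=./apt-queue.lock
flock "$lock" echo "install job 1" >> install.log
flock "$lock" echo "install job 2" >> install.log
cat install.log
```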


Which is why dnf doesn't.

But dnf doesn't break your script or force every script in the universe to reimplement the same polling loop, because the waiting is implemented in one place, centrally, so everyone else can forget this problem even exists.

dnf clearly does the right thing here, while apt causes breakage. I don't see how anyone can argue anything else.

It's not KISS if everyone else has to work around your shortcomings. Then it's just inadequacy.


100% agree with this. Debian is stable and highly supported.


> Don't get me started on Fedora/CentOS/RedHat. yum/rpm is just horrible if you've used nearly anything else.

Your experience may support such a statement, but without any context it doesn't really help the question at hand. Sharing specific criticisms of rpm-based distributions or rpm package managers would go a lot further towards arguing why they would be a poor choice for a development laptop.

> I think the only reason this OS family is still alive is because RedHat managed to sell so many copies to big corps and universities. I think people still use it mainly because they were exposed to it at school or work and don't know any better.

Do you really think there are no situations where running Red Hat/CentOS or Fedora would be a suitable choice?


> Do you really think there are no situations where running Red Hat/CentOS or Fedora would be a suitable choice?

Fedora is IMO a great desktop distro, but for some reason consistently underrated in the wider community.


Yum has been replaced by dnf in Fedora.


Is it any better?


I never had problems using yum. dnf has the same interface but works faster and does more things (e.g. Copr repository management and `whatprovides`), and does them better.


I've found yum to be slow and prone to breaking. Part of the problem is that CentOS packages are often far behind. EPEL is a symptom of the problem more than it is a solution. Invariably, when building software which builds easily on Debian based systems, on CentOS, I end up compiling many packages from source because either they are not available or are horribly out of date.



> I find Arch, Gentoo and many others to be a lot of work. I don't have time to spend fiddling around with the OS.

I agree: the first install was boring and confusing. But I installed it once and haven't had to touch it since. Six years on, it's still running. In fact, that's the last time I installed an OS.

edit: I'm running Arch.


> Debian and I'm surprised more people aren't saying so. Ubuntu or Mint, which are built on Debian do not add much value these days, IMO and Ubuntu's 6 month release cycle is intolerable. If you get behind with upgrades you are screwed.

Ever tried to install Debian on a MacBook without a USB-to-Ethernet adapter? I take it not, because then you would know how absurd it is that you need a second USB stick with the broadcom-wl driver on it to do a proper install. Debian's 'free software' zealotry makes for a horrible UX.

Edit: and here come the 'free software' zealots. Keep downvoting, but everyone out of your echo chamber can see how rabidly sticking to ideology completely destroys usability.


Actually, one of my main machines is a Macbook running Debian. Works great. Admittedly, I had a few driver issues in the beginning but that is common with cutting edge hardware with any OS that was not shipped with the hardware. Now I'm running stock Debian with no issues.



As a side note, I love how this question would be begging for a distro war anywhere but HN. Here everyone says things like "there's no best distro" or really humbly explains what they are using and why. This, IMHO, is the best example of how HN is the opposite of any random internet website: on HN we can have civilized conversations about topics that are impossible to discuss almost anywhere else on the internet, and that is awesome.


This is so true.

I was really tired of not being able to find insightful content and discussions on so many other social networks and forums. Most of them are full of frivolousness, hatred, squabbling, swearing, all those memes and gifs, etc. Then I came across HN, ~3 years ago. I am not a coder (trying to learn the fundamentals on my own, though) and not so fluent in English either. Even so, it was (and is) a brilliant content filter, in terms of both news and comments. You cannot get a grip here if you are not civil enough or if you are looking for visual entertainment. You know, I guess it's all about text. You read. You get valuable information and knowledge from other readers' and writers' experiences. Text is not dead here on HN.

All in all, HN is like "The best content aggregator for humans, by humans".


Completely agree. This place is now my "front page of the internet" even more so than Reddit. There is a lot of diversity, and little that is frivolous. Also, it's obvious to me that people here will upvote a comment even when they disagree with it, simply because it adds to the discussion. And THAT is what makes it so great.

This place does, however, have its limitations too. For reasons I cannot fathom, the community here absolutely _loves_ to nitpick. People will pick on the absolute nitty-gritty specifics of something some OP said and spend an inordinate amount of time splitting hairs.


Nit-picking is what I love most here on HN. You never know what interesting, beautiful comments (stories) will come up, even if they are only tangentially related to the OP. Btw, we are (kind of) nit-picking in this very thread. :-)


True


Arch Linux is a very decent distro. If you need something that you can tweak to be a continuation of your mind, an environment designed exactly for you, it is Arch. If you spend some time in the beginning reading the Arch wiki, you will avoid a lot of problems in the future. Once you understand how it works, it is easy to fix whatever breaks.

For me Arch breaks extremely rarely. I have used it for 7 years and remember no more than a couple of times it left me confused after an update. Most issues are covered in the wiki or the news feed (like breaking changes).

If you need something Windows-like, you can try Ubuntu, sure. But once you need fresh drivers or recent libraries for development, you add third-party PPAs and the shitshow begins. The very idea of keeping old stable versions with just a few new ones that you really need leads to problems surprisingly hard to fix.


Yup. I switched from Ubuntu to Arch after I realised I was trying to make Ubuntu work like a rolling release distro. This led to frequent breakage. Arch definitely has a learning curve, but it also helps you realise that some things really are as simple as they appear (if that makes sense).

That said, if you're like me and prone to fits of 'mad science', you will find yourself breaking your OS from time to time. But I've yet to get myself into a situation I couldn't recover from, largely thanks to the excellent Arch wiki. The 'big jumps' (i.e. a staged distro) leading to 'big breaks' can happen to Arch too, mind you. It's not a distro you want to 'leave alone' for too long.

So I guess if you're going to use the machine daily, Arch should be fine. If we were talking about a server or something, or maybe a work laptop that would go unused for a few weeks from time to time, I'd probably suggest sticking with Ubuntu or the like. Trust me, I've attempted running Arch on my NAS and it was not very fun. Ubuntu/Debian is much better suited there.

I guess the other downside of Arch is that there's less official packaging for it (whereas there's an rpm and deb for everything, it seems). But I think the AUR more than makes up for that, and simplifies quite a lot of things that are complex to do on Ubuntu (e.g. compile FFmpeg with HW accel support, and with all the other fancy encoder dependencies etc.).


This is a really good summary. I use and love Arch as my desktop. I've also used it as a server and for non-techie family members and in those situations you end up creating a bit of a rod for your own back.

For those cases I use Kubuntu for desktops and Ubuntu for servers, pick the current LTS release, and install unattended-upgrades to ensure security updates keep getting applied.


Yeah, as a development distro Arch is fantastic. Nobody's mentioned the fact you get bleeding edge compilers, etc. as well.

But I could not imagine running Arch on a server. Every week I forget to update Arch, there's about 1GB of updates, a few of which are kernel-related and need a reboot.


The nice thing about Debian is that they have all your cases covered. Long-term release for my server (Stable), rolling release for my laptop (Unstable). And the latter serves as a learning stage for the former, especially during transition times (e.g. introduction of systemd).


You forgot the porridge that is just right, "testing".


I actually found Testing to have the drawbacks of both: it's neither well tested and rock-solid, nor up to date with bugfixes. I used Testing on my laptop for a year, and it was the only time I had serious bugs that lasted for more than a day.


Maybe you were unlucky. I haven't had that problem.


If you want an Arch-based system with less worry about updates, have a look at Manjaro. The team do a pretty decent job of making sure each batch of updates work together - though if you use the 'stable' branch you'll be a week or two behind Arch stable (but not for security updates). If you want it more frequently, Manjaro's 'unstable' is synced to Arch stable.

Yes, Manjaro has a small team and occasionally mess up the website, but don't let that stop you from trying the distro.

(Retired Core team member)


Another nice thing about Manjaro is that there's an OpenRC spin if you don't want systemd. Although it's possible to replace systemd with OpenRC in Arch, it's not super straightforward and can be a bit risky if you don't know what you're doing.


Antergos is another option, I like it a lot better than Manjaro personally.


I am running Arch Linux right now on my machine (webapp dev and network technician) and I don't recommend it. I don't blame Arch Linux per se, but rolling updates are no good for someone like me.

Updates break things. Often. Almost every time. And you don't notice until you need to use that broken thing. To install new things you may need to update whole system. And you break something completely different.

-PHP7 just released? Yes, we will replace your PHP5 and leave you in agony (compile from source, or wait for AUR and compile it too).

- NetworkManager GUI works with VLANs? Pfff, who uses VLANs; some bug will remove them for you. Use nmcli. Or roll back to an older version. Oops, that depends on an old library that can no longer be installed, because everything now depends on a newer version of that library.

- Wine drag & drop support works flawlessly in one version? Guess what won't work in the next? And in the one after that, it's the clipboard.

-Prepare for the new world order, Chromium will now train you reading right to left by writing to input boxes backwards randomly.


Your experience mirrors mine: Arch just breaks way too many things all the time. I feel like I'm a sysadmin. Yesterday, Python 3.6 broke so many things (including my DE). I fixed few things I needed but who knows what else is broken on my system... it's hard to tell since I don't test every single thing that depends on a specific package.

I still use Arch daily because I've set it up the way I like it and don't want to waste more time trying to setup Debian/Fedora/Ubuntu...

I just wish there's some more testing before something so critical, like Python, is unleashed.


Switch to Manjaro. It's to Arch Linux what Ubuntu is to Debian. It puts stability and ease of use before ideology.


The problem is you can't test against that. I mean, how should they detect, ahead of an update, that it breaks things? Add a unit-test function to every program on every system out there, and do a test update with an opt-out?


Seconded. Arch has some nice properties from the developer's POV. For example, every library package comes with its headers by default, so source code often compiles without having to install additional -devel packages.

Also, you can try out the hot new stuff (e.g. Vulkan, or the recent stable Rust) without having to enable backports repos.

Edit: And you don't have to sacrifice stability. Arch is surprisingly stable. I run three servers on Arch without any issues.


Hard to beat Ubuntu LTS. It has plenty of documentation and community support, has mostly up-to-date drivers and runs VirtualBox. LTS offers stability over several years. I use Xubuntu variant with i3. Moreover, I prefer to develop on the same OS I deploy to.

ArchLinux: Great community but too many surprises when updating that sometimes break things or in one case left the OS unbootable. Fun to tinker with but if your livelihood depends on it, choose something more stable.

Fedora: good option but no LTS version

Debian: slow to upgrade and doesn't support newer hardware


Fedora has really cleaned up the release upgrade process over the last couple versions. Transitioning from Fedora 24 to 25 was surprisingly painless!


The fact that you are celebrating one recent successful upgrade is telling.


I used Ubuntu prior to Fedora and I never experienced a single clean upgrade. I can't speak for recent versions of Ubuntu though.


Debian's rolling release system is the solution for this. Ubuntu's 6 month release cycle is a nightmare.


Painless for you, maybe. GNOME Software wouldn't let me upgrade, so I had to do it through a terminal. Afterwards, I found that the graphics in certain video games were broken. I tried fucking around with drivers, but ended up bricking my OS. In my effort to recover, I destroyed my boot sector, thereby making my Windows partition inaccessible (destroying another avenue to playing games). I tried downgrading Fedora to 24, but now it won't let me play the video game either. Absolutely painful.


>Debian: slow to upgrade and doesn't support newer hardware

zzzzzzzz

This is said so often it's not even fun anymore. Debian stable is made for servers, for stuff that's not going to move for ten years and is supposed to be up 99.9999% of the time. Run testing and your problems are solved. Recent hardware is supported, relatively recent versions of all software are in the repositories, depending on the maintainer. Or run unstable if you're feeling adventurous


Unstable is hardly adventurous. I have run unstable for 15+ years on my desktop/laptop. I have never had more than a couple of hours of downtime and no data loss. Things rarely break in unstable anymore; that's what experimental is for. Moreover, apt-listbugs prompts me about bugs in any package I am about to upgrade.

Seriously, install apt-listbugs and apt-listchanges. Unstable is not that adventurous.


Agree it's made for servers. Great for older machines. Too often I've run into old libraries when I want to use a recent desktop app. I'm not knocking Debian but its kernel is older, dependencies are older. Great for stability. Ubuntu (based on Debian as you know) IMHO is a better laptop distro.


That's what Debian backports (https://backports.debian.org/) are for. Best of both worlds - run stable as your base distribution, use backports whenever you need a more recent version of something.

E.g. Debian stable comes with kernel 3.16, but 4.8 is available as an official backport, should you need the (almost) latest and greatest. Just add backports to sources.list && apt update && apt -t jessie-backports install linux-image-amd64.
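Spelled out, the whole procedure is two steps (jessie-backports is the real suite name for stable at the time; the mirror URL is an example):

```shell
# 1. Add the backports suite to apt's sources:
echo 'deb http://ftp.debian.org/debian jessie-backports main' \
  | sudo tee /etc/apt/sources.list.d/backports.list
# 2. Refresh indexes and pull the newer kernel from backports:
sudo apt update
sudo apt -t jessie-backports install linux-image-amd64
```

Backported packages are held back by default, so only what you explicitly install with `-t jessie-backports` comes from there.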


Running Ubuntu LTS on personal machine + cloud servers.

Pros:

- It's stable

- Easy to replicate same environment on home/cloud machines

- Handles millions of requests daily, just fine

- Almost all popular s/w packages are easily available

Cons:

- Some advanced manufacturer specific laptop features/drivers don't work out of the box

- No real complaints. I'm getting great value for no money.


Yeah, IMO that's best. I do donate $10 at every yearly upgrade.


Ubuntu LTS follows almost exactly the same release cadence as Debian stable, so if Debian is slow to upgrade then so is Ubuntu. If you ignore LTS, then you would have to compare against Debian unstable to compare apples to apples, in which case Debian would usually be faster to get updates.


Fedora's LTS is CentOS/RHEL.


> Debian:

> slow to upgrade

More or less the same as Ubuntu LTS, ~2 years.

> and doesn't support newer hardware

Enable non-free, install the firmware packages, and now they're almost equal in terms of hardware support (which one has the newer kernel depends on the year). For support comparable to Ubuntu's HWE stack kernels, use the kernel from the backports repository; then Debian probably has even better hardware support, because the backports kernel is updated every ~3 months following mainline, while Ubuntu's HWE follows the latest non-LTS Ubuntu kernel, thus every 6 months.

It's been more than a decade these claims aren't true anymore. Please stop repeating the same old myths.


Ubuntu is a good option. It is easy and things work. I've run virtualization of other Linux distros on it fine. The documentation and message boards are robust. The desktop bug reporting system has a lot of traffic (although it is not necessarily better).

If you need the very latest packages (although Ubuntu has PPA), or are developing for Linux itself, or want to know whether system components are all FLOSS, another distro might be better. For ease of setup and maintenance and stability, Ubuntu is good.


Solid points.

> Moreover, I prefer to develop on the same OS I deploy to.

Me too, but I use containers (Docker) or VMs to achieve this.


+1 for the latest LTS Xubuntu


I use Ubuntu because there is a StackExchange site, askubuntu.com, and because it's easy to set up and fairly reliable and I can always go to Archwiki if I want to dig deeper. And anyway, I run Xmonad so a lot of the gripes about Ubuntu's interface, Unity, don't really affect me. [1]

In the end it comes down to Ubuntu provides me with a better abstraction layer for support and problem solving over the top of Linux than the alternatives I've seen.

[1]: Edit. As that sort of interface goes, I think Unity is better than many, but it took some time fooling with it to become familiar enough to reach that opinion.


Yep. I tend to use Ubuntu Server and then dump i3 on it as my WM. The support from the community is second to none, so when I need answers they're always a Google search away.

Mind you, if I don't get answers I normally end up looking at the Arch documentation, which is phenomenal.


I used to do that with my XMonad setup, but at least with 16.04 and 16.10, I've found it more effective to go ahead and install Ubuntu normally, install Xmonad, and then use the session switcher to switch to XMonad. It seems to set up the NetworkManager and other similar such things more correctly, more easily.

I once went into an epic fight trying to make it so my normal user was able to configure the network through NetworkManager. Presumably someone out there must understand all that PolicyKit and related project stuff really well, so well that nobody seems to feel the need to even remotely document it. (At least at the time.) I never did win; I just ran the nm-applet as root via sudo. With the way I'm doing it now, it just works correctly, which seems to be the only way to get that stuff working at all.


I recommend Fedora; it has more up-to-date stuff and is very developer-friendly. I'm probably biased though, because the high-energy-physics stuff I develop for usually runs on EL variants, so Fedora is naturally a good platform for it. I've always had more trouble compiling random things on Debian-based stuff though (probably something to do with default linker flags and poorly-written packages).

RH has a big virtualization focus, and Fedora is their development playground, so if you're planning on using KVM for virtualization, it's very easy to get going.
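If you want to verify up front that KVM will work on a given laptop, a quick sanity check (the same on any distro):

```shell
# A nonzero count means the CPU advertises hardware virtualization
# (vmx = Intel VT-x, svm = AMD-V)
grep -Ec '(vmx|svm)' /proc/cpuinfo

# After installing the qemu-kvm/libvirt packages, the kvm modules
# should show up as loaded:
lsmod | grep kvm
```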

However, if you use VMWare, sometimes VMware won't support the newest kernels immediately which can cause problems since Fedora upgrades quite frequently.

Others have mentioned trouble updating with Fedora, but I upgraded from F16-F24 on my desktop fine. I finally decided to do a clean install to F25 on my desktop just to switch to a UEFI boot. F24->F25 worked perfectly fine on my laptop (and the upgrade experience has gotten better and better). Either way, as long as you put /home (and, maybe /opt, /usr/local and /var) on their own partitions, doing a clean install is no big deal.

It seems many people had bad experiences with yum/rpm in the past, but dnf is much faster and I think people usually run into problems when mixing non-compatible repositories.


> I'm probably biased though because the high-energy-physics stuff I develop for usually runs on EL-variants

This is due to university lock in. RedHat convinced a lot of institutions that they were better off paying for Linux. I think a lot of the establishment couldn't use free software because they had no procedure in place for doing so. Often buying a RedHat support agreement was the most palatable or even only way to get on the Linux bandwagon.

Around the time RedHat stopped supporting desktop RedHat and spawned Fedora they turned their focus towards selling contracts and creating so called enterprise features such as remote management which appealed to institutional IT teams, to the detriment of the desktop user.

RedHat has made a lot of money this way, but they lost their position as the dominant Linux distro because the hackers whom RedHat originally catered to moved on to distros which paid them more attention.


Let me put this to you in another light... Would you rather simply update the definitions on your satellite server and then check the status of your upgrades the next day, or would you like to upgrade 1500 assorted servers and VMs manually?


If the user just wants to get work done and doesn't care if they have the latest thing, why not CentOS?


I wonder this too, because our servers, which are running WHM/cPanel, are all on CentOS and it seems to work very well. I'm on macOS privately.

But yeah, for some reason it seems to work solidly on our servers, yet here it's not well liked as a dev distro.


If you want recent kernels, a non LTS distro has advantages. The kernel is the main thing that matters if you mainly use containers for development.


CentOS is meant to be a server OS and misses a lot of the niceties that Desktop Linux variants have.


I use CentOS on my workstation at work, but I wouldn't recommend it for a laptop with new hardware.


Thanks. What kinds of niceties? I spent a while looking at it and the other RH distros, and didn't come across anything saying CentOS was a server OS (not that I doubt you, I just would be interested in more details if you happen to know them).


Packages are often kept at stable (but old) versions for a very long time. This is good for servers, but bad for desktop users. Same goes for the kernel - again, good for servers, but not so much for new-ish hardware. In general it's a very conservative distro.

As far as it being a server OS, well..

'In July 2010, CentOS overtook Debian to become the most popular Linux distribution for web servers, with almost 30% of all Linux web servers using it.[15] (Debian retook the lead in January 2012.)'


I'm using Elementary, and I really like it so far. It's based on Ubuntu, and the standout things for me were:

1. Gets out of your way really fast - very little config and setup to do.

2. Clean UI - very macOS inspired, so they try and get rid of everything unnecessary. To the point where it doesn't feel like a normal Linux distro with knobs all over the place.

3. Excellent built in terminal - can't stress this enough - the built in Terminal on Ubuntu is very barebones - the Elementary terminal feels really polished and had everything I wanted coming over from macOS.

4. HiDPI support - you haven't mentioned what your monitor is, but Elementary handles Retina and 4K displays well straight out of the box.

5. Ubuntu based, so pretty much everything that works there works here, apt is available for everything you need to install.


Seconded. As someone who's hopped from distro to distro (Arch, Fedora, Ubuntu) on my ThinkPad x220, I'm really happy with elementaryOS for all the reasons you mentioned except HiDPI support (which I don't need).


I can only recommend it on older hardware, because Elementary is actually very, very fast. I used it on a 32-bit/2GB laptop and it was the only usable distro there.

But not having window menus in any application, or contextual menus in the default video player, is way too minimalistic for me.

Also, installing and using Chrome instead of the default browser results in three icons in the dock.

Now my laptop can run 64-bit and has 8GB of RAM, so standard Ubuntu (and having menus) is a much better choice.


Thirded? I've run several Linux distros and this is the one I've settled on. It's clean and pretty, it just works, and it stays out of your way, like a good wife. :)


Regarding your important points:

- All modern distros can run virtualized systems. You just need to install your favorite virtualization software (VirtualBox, qemu, VMWare, etc.).

- No distros have good battery management, at least not really on par with Windows, and certainly not on par with macOS. This is just the unfortunate state of affairs with the Linux kernel. Some of it is due to lack of focus on improving that aspect, and much of it is due to the difficulty of programming power management modes for various bits of hardware (a decent amount of this stuff isn't well-documented, or documented at all, depending on manufacturer). Using "powertop" (on any distro) can help you figure out the things that are eating into your battery and can help you reduce usage. TLP, a set of tools for configuring your system better for longer battery life, can also help, and should be available on most if not all distros.

Personally I run Debian (stretch) on my main dev laptop, and it's been working well for me for years (well, jessie before stretch existed). I used to run Gentoo years ago, but got tired of long compile times. The flexibility of compiling or not compiling certain features into software just turned out not to be all that useful for me.
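For anyone wanting to try the battery tooling mentioned above, a minimal starting point (package names are the common ones and may differ per distro):

```shell
# One-off: apply powertop's suggested tunables (lost on reboot)
sudo powertop --auto-tune

# Persistent: install TLP and let its defaults handle power management
sudo apt install tlp        # or the dnf/pacman equivalent
sudo tlp start
sudo tlp-stat -s            # show current status
```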


Regarding battery life:

I've owned laptops in the past that actually get better battery life under Linux than Windows. My current laptop is a $189 Acer Cloudbook. It gets ~10 hours with the backlight set low. I accidentally left it on overnight (backlight off, no web browser) and it was at 20% more than 10 hours later. The only power tuning is "sudo powertop --auto-tune". I run evilwm (so, not some heavy desktop environment like Unity).

(Web browsing is painfully slow without noscript; I use it for light C++ and VNC)


Gentoo's "compile everything" approach doesn't seem like such a bad proposition up front, but it really turns out to be a PITA. Since it's a rolling distro, packages get updated often, and if you haven't updated your system in a couple of weeks and you happen to have a couple of heavy packages installed (Chromium, Firefox, Octave, OpenOffice, the kernel), upgrading your system may take up to 10 hours on an average 2015 machine.

Similarly, compiling flexibility didn't prove to be an important feature for me either. It lets you slim down package installations or compile them with different optimization flags, but in most cases you don't gain much:

- smaller package installation (by turning off features not commonly used), in a time when disk is cheap and the OS is not the biggest consumer of disk space

- marginally improved performance if you know when it's safe to crank up compiler flags (better to leave this decision to package maintainers)

- debug symbols built in - if you ever happen to need this, rebuilding a single binary package with them is not that difficult; you shouldn't run a source distro just for this use case

Gentoo definitely excels in providing you with concrete reasons to deepen your Linux knowledge if you have it as the distro on your main machine, which can be both a pro and a con.


You've missed out another benefit (of Gentoo specifically): the ability to conveniently apply your own patches – just copy them to /etc/portage/patches/category/package-version and then re-emerge the offending package.

Most users will probably never need to do this, but I find it invaluable.

I maintain my own patches for dozens of packages in order to fix quirks/WONTFIX bugs, add features that upstream refuse to touch, remove intrusive/unwanted features that piss me off, etc, and being able to do this without also having to maintain my own deb/rpm/<some other package format> makes it much less painful.

It's also nice for applying emergency security fixes without waiting for your distro to pick them up.
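The workflow for user patches is short; the category, package name and patch file below are hypothetical:

```shell
# Drop the patch where portage will find it...
mkdir -p /etc/portage/patches/net-misc/example-pkg-1.2.3
cp my-fix.patch /etc/portage/patches/net-misc/example-pkg-1.2.3/
# ...then rebuild just that package; the patch is applied
# automatically (for most packages/EAPIs)
emerge --oneshot net-misc/example-pkg
```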


Is there anything that watches powertop and throttles processes accordingly?


TLP is the closest there is to this AFAIK. They have a FAQ section about powertop.

http://linrunner.de/en/tlp/docs/tlp-faq.html#powertop


I have been using Linux since the late 90s and tried most of the bigger ones for development over the years. In my practical experience distros which use aptitude are far easier to work with than distros with yum (awful, how can people use that?) or other package managers I tried. Stuff just works. I myself use Ubuntu; I have not reinstalled my laptop for years, I just run dist-upgrade and it works well on both client and server (I like to have things set up the same way on both).

For battery life, I get Windows-like battery life out of my laptops using powertop/tlp, but mostly by swapping out the window manager for i3. i3 is very efficient - not only to work with, but also for the battery; it literally makes hours of difference on my Thinkpad X2x0 systems.

I read about Archlinux here a lot and I will try it some time in the future, but if you don't need the latest, cutting edge linux related software, you are fine with Ubuntu (which, compared to Debian, is already quite cutting edge). I say Linux related, because many tools web devs etc use have no apt-get package or have a package you don't want anyway. So that's not related to the OS installation then anyway.


Arch (or any rolling release) isn't just about the cutting edge. OS upgrades tend to break stuff depending on your usage (e.g. something compiled against a specific kernel version). Updating your computer every few days means you can avoid that entirely.


Arch is a joke and should be considered only if you LIKE tinkering with your OS all the time.

The proposition of always being up to date would be great if it didn't mean that your PC doesn't work properly most of the time due to some new bug or incompatibility.

We've run Arch on embedded PCs (ARM) due to out-of-the-box Docker support, but after updating a year later we were literally unable to deploy Docker containers. There was a bug in the kernel, and no amount of hacking, upgrading or downgrading made it work.

Luckily we had a year-old SD card snapshot.

I suggest Ubuntu or Debian, they got good docs, good support and generally a positive community.

As far as window managers go - pick what you like. GNOME looks slick, Xubuntu is my go-to for Chromebooks, and the default isn't too bad.

A lot of devs in our company go for tiling desktop managers - i3, awm....


> We've run Arch on embedded PCs (ARM) due to out-of-the-box Docker support, but after updating a year later we were literally unable to deploy Docker containers. There was a bug in the kernel, and no amount of hacking, upgrading or downgrading made it work.

You are talking about an unofficial Arch project (Arch Linux ARM) as if it were an official derivative of Arch. It is not.


For others: that would be Arch Linux ARM [0], a derivative of Arch Linux (which itself runs only on x86-64). I am surprised to find that it doesn't have a linux-lts package. What made you use Arch in the first place?

[0] https://archlinuxarm.org/


The Arch Linux ARM distro is not related to Arch.

> The proposition of always being up to date would be great if it didn't mean that your PC doesn't work properly most of the time due to some new bug or incompatibility.

I haven't found this to be the case at all, quite the opposite, I've run stable Arch systems for years, even done large upgrades like the switch to systemd without an issue.


Arch Linux

If you want a developer laptop you want modern hardware. If you want to run modern hardware you want a modern kernel.

In your case (since your hardware is a little old), you might not care about support from recent kernels but still want modern compilers, toolchains, etc available to you.

You also probably don't want it to break every 6 months when you have to system upgrade. Rolling release is the best.


I agree Arch and rolling release is great to avoid breaks. Paradoxically, people are afraid of it for the opposite reason.

In 9 years running Arch, I have never ever experienced the need to downgrade my kernel after an upgrade, and only once had to downgrade a package (ghc). A key point was always to run hardware that needs no drivers aside from those coming with vanilla kernels.

Quite on the contrary, I love getting new versions of things quickly as this means new features and bugfixes come to me ASAP.

With that said, I am considering migrating to NixOS or GuixSD (currently toying with two virtual machines). I find Arch to be a great imperative distro, but functional ones are simply superior. Declarative system configuration, declarative package recipes, and the ability to install multiple versions of the same package are key if you have a complex setup (e.g. doing deep learning).


> I agree Arch and rolling release is great to avoid breaks. Paradoxically, people are afraid of it for the opposite reason.

> I am considering to migrate to NixOS or GuixSD (currently toying with 2 virtual machines). I find Arch to be a great imperative distro, but functional ones are simply superior. Declarative system configuration, declarative package recipes, and the ability to install multiple versions of the same package is key if you have a complex setup (e.g. doing deep learning).

I've been eying NixOS, but haven't heard of GuixSD. I'll look into it, would be interested to hear your comparison.

The main downside to NixOS seems to be package availability. I get the impression that you spend a lot of time repackaging things for it because the coverage isn't there yet.


That was my impression about Nix, however mind that things are moving very very fast. NixPkgs is one of the top GitHub projects now by number of contributors. Loads of things are packaged. I think it's getting comparable to Arch plus AUR.

GuixSD is a GNU version of NixOS, running on Guix package manager. Extremely neat, as it's written in Guile Scheme. They have most basic things packaged. Also, derivations are compatible with Nix ones, so you can use any Nix channel.


Thanks for the reply, I'll give NixOS another look.


>I agree Arch and rolling release is great to avoid breaks. Paradoxically, people are afraid of it for the opposite reason.

That made me laugh. Yesterday, when I updated Arch, it included new python 3.6 which not only broke half-a-dozen packages but it also disabled my DE.

Arch breaks way more often than any other distro I've ever used.


I run arch on one of my laptops because I like to tinker (and have the latest of everything), but I certainly wouldn't recommend it to someone who wants to get work done. I run Ubuntu LTS on the laptop I rely on.

Just last month a kernel update rendered my arch laptop unbootable. I had to boot from a live USB drive, revert the kernel, and temporarily blacklist the package from upgrading. What I love about arch is the community, and that awesome wiki.


> I find Arch to be a great imperative distro, but functional ones are simply superior.

Can you tell me more about what an imperative and functional distro is?


Basically with most package managers you start from some base system and then execute commands which change the state of the system.

With a functional system you describe what kind of system you would like and it gets built for you. See https://nixos.org/nixos/about.html
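To give a feel for the declarative style, here's a trimmed NixOS configuration sketch (the option names are real NixOS options; the user name is made up):

```nix
# /etc/nixos/configuration.nix -- the whole system is rebuilt from this
{ config, pkgs, ... }:
{
  environment.systemPackages = with pkgs; [ git vim ];
  services.openssh.enable = true;
  users.users.dev = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };
}
```

Applying it is one command, `nixos-rebuild switch`, and old system generations stay selectable at boot, which is what makes rollbacks trivial.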


I love my Arch laptop and desktop, but I would caution that Arch is a very Linux way of doing things. You're in full control and you take full responsibility. This is fantastic if you want to get an understanding of what your laptop is doing.

If you're looking for a MacOS replacement, Ubuntu might be a quicker path to just get things going. It mostly just works, and when it doesn't, there are lots of good resources to get you going again. There are a lot of developers on Ubuntu so a lot of package development happens there first.

In my experience, Arch has been more solid than Ubuntu, but I like to heavily customize my laptop. If a default Ubuntu (or Xubuntu, Lubuntu, etc) desktop environment works for you, it might be more stable.


I've been using Arch in my laptops and workstations almost 10 years already. I can't think of using any other distro, it's just something I need to install once and after that it just works.

For the first installation, there are way more things to do than with your typical Ubuntu installation, though. People complaining about fonts should know that in the AUR there are font setups with the Ubuntu patches, and the results are awesome. Why these are not in the mainline I don't know - I guess it has something to do with patents...


Arch is worth it for the AUR for sure. If you don't want to deal with setting up Linux from scratch you can go with Antergos. It's basically just an installer for Arch.


Antergos and friends are fine, but they're not the whole Arch experience. (Some say they're not Arch, full stop.)

Installers are brittle. They don't always fit every case you give to them. That's why vanilla Arch lacks an installer, because chances are you're smarter than a shell script. No matter what crazy, inane setup you might be facing, Arch can tolerate it. Specifically because it doesn't have an installer.

(I once dual-booted Mac OS and Arch, with a shared /home partition. It totally worked and was a lot of fun.)

With vanilla Arch, since you're the one who brought up the system, you know exactly what you put in and why. If something breaks, you are better able to fix it. Because it's no longer magic. You did it yourself, after all. You can at least identify the parts of the system that are wrong, if not being able to outright fix them.

And since setting up the system means learning how to use the wiki and IRC channel, for most people, you already know the support network to fall back on.

Arch's installation procedure doesn't just prepare the computer for booting Arch. It also prepares you for using Arch.

If you're using Antergos et al. there's no shame in it, but you're seriously missing out on a very interesting experience.


> (I once dual-booted Mac OS and Arch, with a shared /home partition. It totally worked and was a lot of fun.)

Interesting! What caveats did you experience? Did you end up symlinking stuff like ~/.firefox to ~/Library/Application Support/Firefox?


Remember that if you do it that way you can't have your /home encrypted by either Linux or macOS. Which is a very, very bad thing, for obvious reasons.


I was really only interested in keeping my personal files (like pictures or my development directory) synced. However, this still opened up a great big can of worms.

* Linux HFS drivers don't support journaling. Solution: turn journaling off, and accept that if the computer powers off uncleanly, Mac OS can't mark the disk clean again without booting into Recovery or Linux first.

* Mac OS user/group numbering differs from that of Linux by default. Solution: modify my user/group numbers, attempt to fix permissions, apply "Fix Disk Permissions" from Disk Utility judiciously.

* HFS is not case sensitive by default. I like case sensitivity, and so does Linux. Solution: make a case-sensitive HFS volume and ignore that it leaves Unreal Engine's storefront unable to launch.

* On Mac OS, /home is no longer /home. Solution: change the advanced options of my user to change the user home folder to /Volumes/UserData/rob, ignoring that this is a pretty awful hack and that if the disk is dirty or unmounted, login will simply spin forever.

* Bash scripts (and I think node-gyp?) running under Mac OS either 1) don't expect your home folder to have spaces, or 2) don't expect it to not be at /home/username. Solution: 1) name the /home volume "UserData" and not "User Data", 2) cry.

It was an ordeal, but Arch was totally normal and never broken by it. Everything in Arch worked perfectly well. And a couple of things were not so okay in Mac OS, but never any deal-killers.

All my application settings were either just synced automatically or nicely contained in .files ignoring the Mac OS standards, so I didn't bother doing too much symlinking.

However, I didn't have the foresight to shell out some extra cash for a 256GB SSD, so I had to deal with space limitations caused by the 128GB SSD. Larger apps like Xcode or GarageBand had to be either shuffled around or diligently symlinked to an external drive that would later be mounted for use of those utilities. Typically apps on Mac OS are nice enough to not spray their contents everywhere, but the Mac App Store insists on downloading things to /Applications and not where you want them (so you shuffle things around), and still some apps are not just contained in .app files but instead download things to some arcane directories. Judicious use of `ncdu` was applied to clear out some disk space, as well as just a big ol' external drive.

At least it wasn't Windows. Which would likely die horribly if you tried something like this.

I wonder if maybe I should get a blog.


Did you consider or try ExtFS via FUSE? Did you consider upgrading your FS to APFS (Apple File System, the successor to HFS+)? What about disk encryption? Did you use e.g. VeraCrypt? Did you consider using an SD card or USB stick on your Mac?


I've found https://arch-anywhere.org to be perfect for that.


I hadn't seen this before, thanks!


I can't comment on Arch, but I've found the same with my own software. Many small, incremental releases cause fewer breakages than big code drops every 6 months. Even if a bug is introduced, it's usually much quicker to find, debug and fix.


Best to give them all a whirl and make up your own mind. A good starting point is the Live CD List [1] website. If you're switching from Windows to Linux for the first time, Linux Mint will certainly smooth the transition for you. Ubuntu is also a great first option for newbies.

I usually test distros in a VM instead of installing them on bare metal. So far I have not found a distro specifically tailored to development though, and the question really should be what tools are best for development?

In that case, Emacs/Vim[2] would be a good start, and being able to develop without an Internet connection helps harden your coding ability too as you're not so reliant on the solutions of others. Go for one day of coding without Stack Overflow/Google and see how you fare.

[1]: http://livecdlist.com/

[2]: https://stackoverflow.com/questions/1430164/differences-betw...


I will second Linux Mint as a very nice well done distro, with sane defaults, and that has been very robust for me.


I also recommend Linux Mint. It is friendly to proprietary software (e.g. drivers), comes with a very nice UI (Cinnamon), and is based on Ubuntu 16.04 LTS.


A lot of the things on the Live CD List seem to be outdated; to pick an example, it lists the last release of DragonflyBSD as being over two years ago [1], when they released 4.6.1 less than three months ago [2].

Another place to check out different OS's is DistroWatch [3]. Its interface is definitely a bit too busy, which can make navigation a pain, but it's constantly kept up to date and has fairly detailed information about each OS.

[1]: http://livecdlist.com/dragonfly-bsd/

[2]: https://www.dragonflybsd.org/release46/

[3]: https://distrowatch.org/


I'm also a fan of Linux Mint, but prefer the KDE edition - I used to dislike KDE, but with Plasma 5.6+ it's become quite polished and nice to use. I've also found LM to be much more stable than other KDE-based desktop distros, specifically Kubuntu and Neon (both of which were quite crashy).


Fedora 25, without question. On newer laptops (including the XPS) I have suffered through Ubuntu's inconsistencies in setting up the bootloader with UEFI, firmware issues, NVMe SSDs, etc.

It also helps that Fedora is pushing the edge with newer tech like Wayland, etc.

It's polished; you don't have to muck about with complex howtos. Just pop in the bootable USB drive and you're done in 5 minutes.

Brilliant experience.


+1 - As a long time Ubuntu user I've recently switched to Fedora +i3wm and love it.


+1 I started with F22 and have never looked back since!

I also find dnf/yum a lot simpler to use than the apt-* ecosystem.


It's a matter of tools. All distros are basically very similar but come with different desktops and package managers.

I spent many years jumping around Debian and its derivatives, but then I found Arch and it just felt right. I love the package manager and the fact that I can control what I have installed. With a 9-cell ThinkPad battery and some clever settings I can last all day.

I have to admit that these days I tend to run Antergos, which is a great Arch derivative, but that is mainly because of a lack of time and a need to get stuff done. Also, from years of installing and running standard Arch, I can tweak things very quickly with pacman.


When it comes to tooling, I found the packaging strategy of Arch to be pretty invaluable, too:

- Pacman is a great package manager (esp. when compared to e.g. yum).

- The Arch User Repository (https://aur.archlinux.org/) has a large list of additional packages maintained by the community.

- The PKGBUILD format used by the AUR is easy to read and essentially just generates Pacman packages you can install alongside packages from the Arch repo.

- PKGBUILD also makes it really easy to create a package for some obscure software yourself (or e.g. a specific font), if necessary.

- There are tools that create PKGBUILD packages from language repos like gem, npm or pypi (via pip). Those are really useful since they prevent language based packages from clashing with pacman.

In addition, I really like that arch is built with choice in mind: There's no default "way to go" - you can e.g. choose from several desktop environments without having to install a specific one first, meaning you can configure your system from the ground up.

I think Slackware and Gentoo work similarly? (I've personally never tried them)
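For reference, a stripped-down PKGBUILD looks roughly like this (the name, version and URL are invented):

```shell
# PKGBUILD -- built with `makepkg`, installed with `pacman -U`
pkgname=example-tool
pkgver=1.0
pkgrel=1
pkgdesc="An example package"
arch=('x86_64')
url="https://example.com/example-tool"
license=('MIT')
source=("$url/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```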


Another aspect of the AUR that I skipped: The way Pacman packages are set up, it's possible for multiple packages to deliver a certain functionality.

This allows for a great number of AUR packages (the ones ending with "-git", "-hg" or the like) that essentially download/compile a bleeding edge version from the according source versioning repo (e.g. on Github). If they're correctly set up, the system will recognize them as a replacement for another package, that might even be in the main system repos.


Slackware does indeed work similarly with its SlackBuilds system, albeit with much more minimalism (no dependency resolution is probably the most significant difference). SlackBuilds are also standalone shell scripts, unlike (AFAICT) PKGBUILD scripts (which seem to rely on being invoked by a separate command).

Gentoo is a whole other ballgame, and its packaging system is much closer to a modern-day BSD-style ports tree.

As for the starting environment, Slackware actually takes the opposite approach from Arch: the installer defaults to installing all package sets available, which in the case of the install DVD includes KDE, Xfce, a bunch of other window managers, and a whole lot more. You're of course free to deselect the package sets you don't want (or even individually deselect packages), but Slackware's approach is definitely to start the user off with a fully functional system rather than expect the user to set things up from the bottom up.

Gentoo's somewhere in between those two extremes, at least as far as I remember from last time I installed Gentoo.


Can you honestly say you remember the exact pacman syntax of how to perform a system upgrade or how to install a package? :-) For me the UX was a problem, even if technically the package management is good.


I just remember the commands I use most, namely:

"sudo pacman -Syy" to update the package db

"sudo pacman -Suy" to update all packages

"sudo pacman -S package" to install/update a specific package

"sudo pacman -U package.pkg.tar.xz" to install a package from a local file (useful if you're using makepkg)

Those are usually enough to maintain my system. I'll have to hit the wiki for the various query options though. But if the CLI side of pacman bothers you, I'd suggest defining aliases in your shell's RC. I track my dotfiles using git and share them on all my machines, and this method works quite well for me...


And -Ss to search, -Rns for purge/uninstall.


I too would recommend defining aliases, if not for UX reasons, then for brevity and consistency. I use the same (short) commands for pacman as for apt or yaourt.


Pacman's UX may be difficult when getting started, but once you learn a bit it is really easy. For example, you know that "-S something" installs a package named "something" and "-Ss" searches for a package in the repositories. So, how do you search installed packages? Well, there is a "-Q" operation that queries locally installed packages, so maybe:

$ pacman -Qs package

And yeah, this is exactly how it works. Once you learn a bit about how pacman works, it all connects, same thing as vim.
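To spell out that pattern (standard pacman usage, though double-check against `man pacman`): the capital "operation" letter (-S for sync/repos, -Q for the local database, -R for remove) composes with the same lowercase modifiers, like -s for search and -i for info. A guarded sketch:

```shell
#!/bin/sh
# -Si / -Qi take the same "i" (info) modifier; "vim" is just an example
# package. Guarded so this is a no-op on systems without pacman.
if command -v pacman >/dev/null 2>&1; then
  pacman -Si vim || true   # info on a repo package
  pacman -Qi vim || true   # info on an installed package
  pacman -Ql vim || true   # files owned by an installed package
fi
echo "pacman modifier demo done"
```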


You do not have standard package manager aliases in your dot files? Why would I want to remember sudo pacman -Syu when I can just enter pkg-update?

Useful if you have to jump distros, since you'll surely have your commands aliased.
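For what it's worth, a sketch of what such a dotfile entry could look like. The function name is my own invention, and a shell function works where an alias doesn't (e.g. inside scripts); it detects the package manager at call time so the same dotfiles work across distros:

```shell
#!/bin/sh
# Hypothetical distro-agnostic wrapper for a shell rc file.
pkg_update() {
  if command -v pacman >/dev/null 2>&1; then
    sudo pacman -Syu
  elif command -v apt-get >/dev/null 2>&1; then
    sudo apt-get update && sudo apt-get dist-upgrade
  elif command -v dnf >/dev/null 2>&1; then
    sudo dnf upgrade
  elif command -v zypper >/dev/null 2>&1; then
    sudo zypper update
  else
    echo "no supported package manager found" >&2
    return 1
  fi
}
```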


Another upvote for Arch. I've been Mac only for 5 years, but my linux of choice for all personal projects and my Linux machines (when I use them) has been arch for 7 or 8 years already. Rolling updates are something I really, really appreciate in Arch.


Having recently rendered my Arch Linux install unbootable (a USB-stick rescue was needed to fix it) by not upgrading quite everything (some new packages needed newer libs), I think it's a bit fragile.

I would recommend having root FS on ZFS or other filesystem with snapshots. Take a snapshot before or after running a full system update with pacman.

Another aspect I dislike about Arch is that package binaries get removed very soon - it's not an option to not keep it constantly up to date.

That said, it's still my favorite distro.
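To make the snapshot idea concrete, a sketch assuming a ZFS root dataset named zroot/ROOT/default (an assumption — substitute your own); it's guarded so it only takes a real snapshot where zfs exists:

```shell
#!/bin/sh
# Snapshot-before-update sketch. DATASET is an assumed default.
DATASET="${DATASET:-zroot/ROOT/default}"
SNAP="$DATASET@pre-update-$(date +%Y%m%d-%H%M%S)"

if command -v zfs >/dev/null 2>&1; then
  zfs snapshot "$SNAP" && pacman -Syu
  # if the update breaks boot: boot rescue media, then `zfs rollback <snap>`
else
  echo "no zfs here; would have created $SNAP"
fi
```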


I don't know if it's just me, but I had this situation with pretty much any distro I've ever encountered (it also happened on Arch, of course): I'd try to update/install/configure something and after the next reboot I'm on the TTY or I need to fetch a USB stick with a live system.

From that perspective, what works in Arch's favor is that fixing it is usually more straightforward than on other distros. I remember spending a Saturday afternoon trying to uninstall the proprietary ATI graphics driver from an Ubuntu machine - only to find out (after much googling) that you need to set a very obscure, barely documented environment variable before the attempt.

With Arch, the benefit of installing it manually is that fixing the system works pretty much the same - you just skip a couple of steps (partitioning etc).

Also, I found the Arch and Gentoo Wikis to be very useful for those attempts, regardless of the distro I actually tried to repair.

On a related note: On Arch, I stopped breaking things through updates after I subscribed to their main news feed (https://www.archlinux.org/feeds/news/) - pretty much every breaking change is announced and explained properly there.


My last time was an issue (or oversight) with pacman, which allowed me to update libicu-x.y without updating all its reverse dependencies. Some important part of the system depended on libicu-x.z (earlier version) which was no longer present so I had to get to the rescue console.

So this was an Arch-specific issue that could have been mitigated with ZFS snapshots. E.g. Gentoo does consistency checks of dynamic libs, and other systems don't allow you to make such upgrades.

I had something critical happen to me twice in one year and both times it would have been avoidable with an earlier snapshot of the rootfs. I will definitely do that next time I re-format a disk.


Your post seems to suggest that you won't be developing for Linux and just need something that's a good virtualization host and all-round desktop. In this case, any distribution that you can get along with will do. Especially if you don't have much experience with Linux, Ubuntu and Fedora are both excellent choices. I tend to recommend the latter over the former.

If you do actually need to develop for Linux, I would suggest something with a rolling release model, otherwise it won't be long before you'll need to start compiling things from source because you need a more recent version of <something> than your distro is packaging.

"Rolling release" means that there is no Arch Linux 1, 2, 3 and so on, as in Fedora's case. Arch periodically releases an install image, which you use to bootstrap your system, but the latest tested version of every package is what's available in the package repository, for everyone, and as soon as a newer version is packaged, you can install it. This seems to be the best way to get the latest packages along with the most stable system you can have with them (spoiler alert: it's not that stable, but not disastrously unstable either; I've run rolling release distros on my laptop for years).

Arch is the usual recommendation in this case. Red Hat Enterprise Beta, uh, I mean, Fedora, is also a good choice -- it's not a rolling release, but it regularly ships very recent packages. It's also a testbed for new technologies, and does have the advantage that you get a fully set up (and generally mostly working) system from the beginning.

If you're a more experienced user, you might want to have a look at Gentoo and Void Linux.


> If you do actually need to develop for Linux, I would suggest something with a rolling release model, otherwise it won't be long before you'll need to start compiling things from source because you need a more recent version of <something> than your distro is packaging.

It depends on your requirements. If you constantly find yourself needing the latest and greatest upstream software releases, then yes, use a rolling release. But if not, rolling releases can make your life much more painful. If you're not paying attention when you update your system, then each update is a roll of the dice as to whether your software will still build or run afterwards, as any update could introduce incompatible changes to packages your software depends on.

A distribution with a release model (usually) tries to maintain API and ABI compatibility for the duration of a release, so you can update with more confidence that you won't have to re-build or port your code as a result.

There are trade-offs between stability and shininess across the spectrum of distributions with rolling releases, with frequent releases, and with long release cycles. As long as you're aware of that, you can decide for yourself how frequently you want updates, and therefore the type of distribution you should run.

> Red Hat Enterprise Beta, uh, I mean, Fedora

I really wish people would stop making comments like this. I volunteer a significant amount of my free time and effort to improve Fedora, and I see a lot of others in the Fedora community doing the same. I can't speak for anyone else, but I'm doing it to make Fedora better, not to crowd-source Red Hat development for free. Fedora is a first-class distribution in its own right, with an open, inclusive, and independent community. Reducing it to a beta distribution for Red Hat glosses over that fact, which I find very unfortunate.


> I really wish people would stop making comments like this.

I didn't mean to imply that Fedora isn't a first-class distribution in its own right. In this post's context, I see why it would look that way, and I apologize for it. I don't mean to belittle the work you folks are doing, and I know that the Fedora project is more than just the distro, and that the distro itself is more than just Red Hat's testbed for new features.

I don't run Fedora on any of my home computers anymore. I ran it up to Core 3, I think, having run Red Hat Linux before. But I've always "run into" Fedora computers as part of my work and occasionally ran it on my company-issued laptop at $work. While Red Hat's presence can account for some of your stranger choices, it cannot be the only reason behind your success (and I honestly don't think it is).

The origin of my snarky comment is that to many of us outside the Fedora community, it often feels like a lot of things are finding their way into Fedora largely because they need more ample testing. The fact that they're so active and so pushy means that most of them are from Red Hat. I have very few fond memories about dealing with very early breakage from NetworkManager, PulseAudio, systemd and so on. All of them are now successful technologies, but they were not "ready" by any responsible use of the word when they were first included or enabled by default in Fedora. Wayland is, to some degree, an exception, but only because it has already seen wide enough deployment in the embedded/infotainment area that, for once, the desktop is not the earliest adopter.

This is one of the reasons why I recommended Fedora as a development distribution (my wording probably didn't really look like a recommendation, sorry!). At $work, I already work on Linux software; I don't want to deal with more Linux craziness at home, so I tend to stay away from bleeding edge stuff. But if you do need to keep in touch with what's happening in the Linux world, Fedora is the most stable way of doing it that's also reasonably low-maintenance (the second best option, IMHO, is Gentoo).

Staying up-to-date with all these changes is very important, IMHO. For better or for worse, very important pieces of a modern Linux system, like systemd and GTK, are making a lot of breaking changes in-between releases. Running a bleeding-edge system is about the only reasonable way of becoming "passively" acquainted with them, and is a great way of weeding out the subtle bugs that they introduce.


Thanks for taking the time to write that out. My intention wasn't to call you out specifically, it's just that I see a similar sentiment expressed almost every time Fedora is mentioned, and I finally decided to say something about it.

Part of why you see things working their way into the distribution quickly is because of the "first" foundation[1]. Fedora purposely does not wait for other distributions to do the hard work of integrating new technologies, which means they're often among the first to discover bugs in new technologies. That said, I think that these large integrations are getting smoother over time, as the community is putting a lot more effort into coordinating these large changes and instituting QA that has the teeth to block releases on bugs with flaky integration.

[1] https://fedoraproject.org/wiki/Foundations


I've been using Gentoo since 2002 I think.

It's a great distribution for developers, especially if you're developing Linux packages. It's very easy to create your own local overlay and test your package changes against a system, without needing your own custom repo or VM.

I'm glad you mentioned Void too. I'm currently using that on my router.

I like how Void is systemd free and Gentoo makes it optional.

If you try building a Linux From Scratch (LFS), think of Gentoo as LFS with package management.
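For the curious, the skeleton of such a local overlay is tiny. A sketch that builds it in a temp dir so it's safe to run anywhere — on a real system you'd use something like /var/db/repos/localrepo and point a repos.conf entry at it (check the Gentoo wiki for the exact current conventions):

```shell
#!/bin/sh
# Minimal overlay layout: a repo_name, a layout.conf naming ::gentoo as
# master, and category/package dirs where your ebuilds go.
ROOT="$(mktemp -d)"
REPO="$ROOT/localrepo"
mkdir -p "$REPO/metadata" "$REPO/profiles" "$REPO/app-misc/mypkg"
echo "localrepo" > "$REPO/profiles/repo_name"
echo "masters = gentoo" > "$REPO/metadata/layout.conf"
echo "overlay skeleton created at $REPO"
```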


I took Void for a spin on my desktop because, as much as I like OpenBSD, I do occasionally need to touch stuff that's Linux-only. It's like a breath of fresh air, really. I can go about and mind my own business, as if all the mind-boggling complexity that's been steadily poured into the Linux cup during the last 3-5 years or so has been nicely tucked away under the rug or just thrown out.

Not having systemd isn't such a big deal to me, but it helps. I had to learn it at $work a while ago so it doesn't baffle me anymore. I get why it's so appreciated by DevOps and software outsourcing companies, but it doesn't do anything I need. I can live with it (and I have), but it helps if I don't need to.

Gentoo is great, but with Linux land being the way it is lately, I wouldn't recommend it to anyone who's not very familiar with it. Asking someone to get a working system with Grub 2, xdg-* and the like using nothing but the Gentoo handbook and the Gentoo forum is more or less the equivalent of sending someone to fight World War II with a fork.


Likewise, Void takes some getting used to, though - its package manager in particular (I was used to apt-get).

The manual partitioning also was quite cumbersome for me, even though I've done a lot of fdisk fiddling when I was younger. The main issue for me was the lack of clear documentation regarding this phase [1]. However, I took it as a "forced learning" opportunity and spent a day fiddling with it until I got it working, from which point onward it really is a fast and lean system to work with, very BSD-ish in style, that doesn't get in your way. I love it. My only alternative nowadays would be Manjaro-OpenRC.

[1]: If I remember correctly, you can screw up the rest of the partition management step if you select the incorrect "label type" (gpt or dos), with no clear way to revert your selection.
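Since the xbps learning curve came up: my rough mental mapping from apt, guarded so it only actually runs on a Void box (consult xbps-install(1) and friends to be sure):

```shell
#!/bin/sh
# Assumed apt -> xbps mapping; "editor" and "somepkg" are placeholders.
if command -v xbps-install >/dev/null 2>&1; then
  sudo xbps-install -S          # sync repo indexes   (~ apt-get update)
  sudo xbps-install -Su         # upgrade everything  (~ apt-get dist-upgrade)
  xbps-query -Rs editor         # search remote repos (~ apt-cache search)
  sudo xbps-remove -R somepkg   # remove pkg plus now-unneeded deps
else
  echo "xbps not present; commands listed for reference only"
fi
```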


> The manual partitioning also was quite cumbersome for me, even though I've done a lot of fdisk fiddling when I was younger. The main issue for me was the lack of clear documentation regarding this phase [1].

The way I do it, lately, and for pretty much any Linux system I use, is to just use LVM for everything except the /boot partition which I leave unencrypted. I think you can have an encrypted /boot but with Grub 2 being the way it is, I don't want to bother with it, my threat model is pretty much thieves stealing my computer, I don't need much plausible deniability.

The only install-time inconvenience with this is that the manpages for the LVM-family commands (lv*, pv*, vg*) are nothing short of terrible. They're pretty much the equivalent of the // add 1 to i comments next to i++.

You don't necessarily "screw up" if you pick the "wrong" label type. You can actually convert between the two (e.g. https://wiki.archlinux.org/index.php/Fdisk#Convert_between_M... ). Also, I don't know if the functionality was eventually merged, but a while ago, you had to be careful to use the "correct" set of utils for the partition table type you used (e.g. fdisk for MBR, gdisk for GPT, cfdisk for MBR, cgdisk for GPT and so on).
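For the record, the LVM-on-LUKS layout I'm describing looks roughly like this. Device names and sizes are examples, and these commands destroy data, so treat it as illustration only, not a script to paste:

```shell
# /dev/sda1 stays as a plain, unencrypted /boot; the rest goes into LUKS+LVM.
cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 cryptlvm
pvcreate /dev/mapper/cryptlvm
vgcreate vg0 /dev/mapper/cryptlvm
lvcreate -L 40G -n root vg0
lvcreate -L 8G  -n swap vg0
lvcreate -l 100%FREE -n home vg0
```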


It's probably better to look at the Arch/Gentoo documentation for partitioning. The void wiki should probably be updated to point to some of those wikis. There is a lot of crossover in the Gentoo/Arch dev communities.


Like many in this thread, I'm personally a proponent of Arch Linux, but I'm objective enough to admit that there is no such thing as a "best" Linux distro for general development work. The best choice for your needs will be dictated by a balance of priorities and it'll necessarily be a compromise. I'll try to give a short outline of the current state of affairs as I see it.

If you like or at least don't mind regularly tinkering with your system and have a preference for running the newest software, Arch is definitely your best bet. Depending on UX and ecosystem preferences, you could go with Antergos with a DE of your choice (personally, I prefer KDE, it's gotten much better over the last year or two), but there's a lot to be said for setting the whole thing up from scratch at least once. It really teaches you a lot, and will invariably be useful when something breaks. Battery management will be as good as you bother to make it, with some effort it's possible to come close to Win10 levels of battery life.

On the other hand, if you want something that just works out of the box and the last paragraph sounds annoying to you, don't get on the Arch bandwagon. Fedora (my preference) or Ubuntu are your best choices here, and I suggest trying both for a week.

And, of course, if all you care about is a thin base for VMs of your choice, all of this is completely off topic. If you want the newest kernels, Arch will be a bit more convenient, but ultimately distro choices don't matter much for this use case.


Not a distro recommendation, but one thing I would highly recommend when setting up your Linux box is putting your /home directory in its own partition. I'm not sure if this is the default in most Linux distros, but you have to specifically configure it in Ubuntu.

The benefit is that you can easily switch to a new distro or reinstall a completely hosed one with much less migration effort. That way, if you don't like the first thing you tried, there's no harm done.
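Concretely, with a separate /home the relevant /etc/fstab entries end up looking something like this (the UUIDs are placeholders — `blkid` shows your real ones). On a reinstall you re-format / but simply re-add the /home line untouched:

```
UUID=<root-uuid>  /      ext4  defaults,noatime  0 1
UUID=<home-uuid>  /home  ext4  defaults,noatime  0 2
```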


Antergos. It's Arch without the initial pain of setting everything up. The "arch way" is that you set everything up yourself so that you know how to fix it if it ever breaks, but I find the Arch wiki to be so good, that you can usually figure it out. So if you have time and want to build a custom system, Arch. But if you just want a base image to start with, Antergos.


The Arch wiki is so good it is my goto resource for Linux in general.


Also leave yourself a 10-20GB partition sitting empty, so you can install that OS without having to choose which existing-and-working thing to nuke.

There can still be issues here or there when Package 3.0 upgrades Package 2.0's dot directory irreversibly, but I haven't had much trouble; it happens that most of what I personally deeply care about are things like .emacs or .vimrc type files that don't work that way.


The best development OS is the production OS, so just use whatever you're going to deploy on. Ubuntu (Server) and RHEL and Centos are ubiquitous. I think the easiest to manage in prod is Ubuntu.


TBH I much prefer to keep that kind of stuff in a VM so I can tear down or rebuild them from scratch.

Of course, I still have Windows as my primary OS, if for no other reason than I'm too lazy to deal with the inevitable driver wrangling. (Caveat: PuTTY into an Arch VM is my dev environment.)


Driver wrangling, seriously? I haven't had driver issues for ages, save for a few bits of exotic and obsolete hardware (an ancient Creative webcam and an ancient PCMCIA Wi-Fi card).


I use Ubuntu as my primary OS because things like Bluetooth and ebook-reader detection are more reliable than in Windows. Particularly Bluetooth headphones have been a PITA under Windows 8 and 10. They work better with Ubuntu.

However, my laptop has an Intel video card. I'm sure with an AMD or nVidia video card, using Linux can be more problematic.


> The best development OS is the production OS [...] Ubuntu (Server) and RHEL and Centos are ubiquitous.

As is SUSE Linux Enterprise and openSUSE. People often forget about us.


What is the value add for SUSE over Ubuntu? Sell me on it. I'm open minded. I use Ubuntu Server because I know it well, but honestly I don't have any other preference. I don't generally rely on package managers for prod deploys (I like source and it works well with my security restrictions).


> What is the value add for SUSE over Ubuntu?

To start, I'm not a sales person. I'm a developer. If you want a proper sales pitch, you can request one from www.suse.com. Also, if Ubuntu works better for you then that's what works for you -- I work with people from Canonical and RedHat on a daily basis, and I know they make good stuff just like us.

Longer history in enterprise deployments. PTFs and patches for packages are supported in zypper (not true for dpkg). Kernel live patches (no reboot kernel fixes) are supplied as packages. Very good integration with a wide variety of tools such as the Open Build Service (making your own appliances and packages), YaST (our manager for the system which is integrated into everything), snapper (snapshot before and after updates/upgrades/changes), SUSE Manager (allows management of many machines and configuration), and a bunch of other cool stuff.
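A guarded sketch of the day-to-day zypper/snapper workflow mentioned above (standard commands, but verify with `zypper help` and `snapper --help`; "somepkg" is a placeholder):

```shell
#!/bin/sh
# No-op on non-SUSE systems.
if command -v zypper >/dev/null 2>&1; then
  sudo zypper refresh           # sync repo metadata
  sudo zypper update            # install pending updates
  sudo zypper install somepkg   # install a package
  snapper list                  # snapshots taken around changes
else
  echo "not a SUSE/openSUSE box; commands shown for reference"
fi
```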

Also (and this is honestly my fault for being bad at using computers), I've personally never had an Ubuntu install that has worked for more than 4 months.


Container Linux. Oh wait, that won't work so well...


Why not?


It's not designed for that. It doesn't even have a package manager, or a number of other things that are needed for a dev machine.


Arch Linux (https://archlinux.org) is a great rolling release distribution, and the Arch Linux wiki (https://wiki.archlinux.org/) is an invaluable resource.


Surprised at the amount of love for Arch (incl derivatives) on this thread. Let me add my vote to it as well. I converted to Arch more than a year ago from Ubuntu for the same reason as many others here - the need for PPAs to get up to date versions of software and the dist-upgrade pains that come with PPAs.

One caveat when using Arch: you won't get an out-of-the-box experience, and you will need to tweak almost every aspect of the system manually.

- Missing even a trivial step during installation (such as setting the locale) can lead to quirky repercussions. The plus side is that getting your hands dirty during setup really familiarizes you with the system.

- Pacman (and Yaourt) is simply great and makes updating, removing and filtering packages a breeze

- The AUR (Arch User Repository) contains nearly every popular piece of FOSS and non-FOSS software not provided officially. Installation and removal is painless with Yaourt. No more PPAs.

- Packages occasionally break, e.g. when libx265 is upgraded but VLC is not yet compiled against the latest version, leaving the latter unable to play any media. These are usually fixed pretty quickly in the repos, or you can work around them with a bit of searching.

- The Arch wiki and forums are some of the best resources to turn to when you eventually run into problems - and trust me, you will


I second that. Unlike some other comments here, I personally have been driven to Arch for its rolling-release nature and not some problems over a window manager or ACPI:

I started with Ubuntu, which technically seems to do fine; however I always ended up wanting/needing to use some current version of some dev software (up to date compiler or a newer Qt Creator for cross compilation for example).

To get those newer versions, I found myself having to rely on PPAs providing them as well as the necessary libraries, which ended up destroying a dist upgrade half a year later. That happened to me at least 2 times.

After that, I tried Fedora and had various problems from the start (maybe that's not the case anymore). Then I ended up with Arch, and have never looked back.

So, if you want to use up-to-date software, I'd strongly recommend considering a rolling release distro. The occasional fixing of a single package beats losing multiple days (and much hair) reconstructing your old dev environment on the next distro version, any day of the week.


There is nothing better than Arch. I have been using Linux daily since 2005 and have tried multiple distros; now all of my machines (laptops, desktops, Pis, remote servers) run Arch. Hardly ever a problem, and making packages is so damn easy compared to any other distro.


How many servers? How are you managing the updates? Are you using it for any 'production' code?


Definitely interested in an answer here, too. If the rolling release ain't bad enough, the focus on bleeding-edge would ordinarily make Arch the absolute least desirable production server OS imaginable. I certainly don't have that kind of courage :)


Got burned so bad with Arch in production.

You want your servers running old and verified software.


That makes sense, yes. However, since we're talking about a dev-machine here, that doesn't necessarily apply.

Of course, this is much a matter of taste, but "stable" environments can work to the detriment of a developer's productivity (especially since many developer tools improve quite fast these days). It might be a mistake to discard a tool/distro just because it didn't do what you expected for an unrelated use case.

Edit: Also note that "server" != "production code". I run a couple of servers (OpenVPN, Backups, mopidy) using Arch (on Raspberry Pis) at home, that doesn't mean I'd use it for a public facing web server.


I prefer Arch or its derivatives on laptops because of the better default battery management. I easily get 2-3 hours more on a default Arch installation vs. Ubuntu.

Arch however can be a pain if something does not work; if you want it easy, just use an Ubuntu derivative (or Ubuntu itself if you like that fugly desktop)


People are surprised when I recommend openSUSE. I personally use its rolling version Tumbleweed.

- It's stable while carrying the very latest packages. So I am done with distro upgrades that might break something between version changes.

- It has largest number of packages in repository https://en.wikipedia.org/wiki/Comparison_of_Linux_distributi... .

- YaST is great for administering everything - NetworkManager, static network config, printers, kernel parameters, sysconfig, docker and what not. So I do not have to hop through different GUIs

- Hardware works out of box mostly (because I am on MacBook Pro)

- The community is vibrant, receptive and responsive. Till now I haven't seen anybody pushing an agenda of preferring one way over another (systemd would be another discussion). Most packages are as good as upstream with a little branding change, which also can be changed with a package change from YaST (or zypper).

- Server, desktop, RaspberryPi (and variants) are supported from same base.

- OpenQA, OBS, Suse Studio and packman!

- Defaults set are good to go, but you can easily change them, for most packages and configurations in general. The distro doesn't stand in your way that you have to change something very basic for the distro itself.

IMHO, on the inside all distros are the same, since you still end up using bash, KDE/GNOME, systemd, NetworkManager, FFmpeg etc. unless you are rolling your own solution. They differ in what they pack as defaults, what update and administration tools they provide, and what they consider the "best config" for your use case. openSUSE seems to strike a fair balance on these.


I've just uninstalled openSUSE Tumbleweed after using it as my primary dev machine OS for about three months.

One good thing that can be said about it is that it has the most stable KDE packages of all KDE distros that are still left.

Other than that, it was the most unstable non-Microsoft OS I've ever used. It kept crashing on me all the time, and updates broke stuff frequently. After the last update, it refused to mount my encrypted home partition for whatever reason.

From my experience, I'd recommend steering way clear of openSUSE if getting actual work done is a priority.


I'm going on 9 successful years of Fedora. I spent 4 years switching distributions every couple of months before that. I think it's probably different for everyone.


I'd recommend fedora as well. It's cutting edge yet stable.

The only caveat is, use it if all your hardware works right out of the box (e.g. you don't need proprietary drivers for your display, and suspend/resume works without twiddling etc...). If that's not the case, don't bother with Fedora, because you'll just end up fighting the hardware with every update.

fwiw, fedora has been my default OS for 10+ years now. My dev usually happens on the system (since I don't want to keep replicating my dev environment) and testing happens in containers and/or VMs. It's easier to spin up docker/lxc instances than VMs imho.
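The kind of throwaway container environment I mean, sketched with docker (the image is just an example; guarded so it's a no-op without docker):

```shell
#!/bin/sh
# Run a one-off command in a disposable CentOS 7 container, then discard it.
if command -v docker >/dev/null 2>&1; then
  docker run --rm centos:7 cat /etc/centos-release \
    || echo "docker present but run failed (daemon or image unavailable)"
else
  echo "docker not installed; skipping demo"
fi
```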


I'd also like to suggest the Fedora route.

I've tried most (and then some) popular distros, and for the last 6 years I've settled on it.


What are your thoughts on Korora? It looks like Mint for Fedora.


Since I use KDE as my desktop, I don't think it would do me any good. In general I've found that Fedora's spins (https://spins.fedoraproject.org/) are actually pretty good. So I am not exactly sure how someone would benefit from moving to a considerably more obscure distro.

It may be excellent as a distro, I really don't know. I am just thinking out loud here.


I've been running Fedora Rawhide on my two systems for the past two years. It's surprisingly stable.


+1 for fedora here. Really very stable, "just works" experience.

I have had nothing but trouble with Debian and its derivatives on my machine. In a VM they're fine, but on bare hardware they are a pure nightmare of dysfunction. Fedora was the first linux OS I was able to get going on my PC out of the box.

Arch is good too - I hear a lot of praise for it. I might look into that for a VM, but I'd lean more towards something like Fedora for the hardware, so you don't need to muck around to get the basics going.

YMMV though, hardware can make/break a distro.


+1 for Fedora (KDE spin) as well, only distro I've ever found to reliably work on laptop hardware consistently.

It's very actively developed though, so sometimes kernel upgrades break proprietary drivers, but that is extremely infrequent and easy to fix.


Ubuntu would be the safe bet. Just pick a distro that appeals to you, since you will be the one working in the environment. Also any modern distro should have built in KVM support to run your virtual systems or you can always go the VMware Workstation route.


If you use your laptop for personal relaxation as well and like to game, then Ubuntu has the added benefit of being very well supported by game developers selling (Linux supported) games on Steam.


Your requirements are met by any distro so you'll get people recommending their distro of choice here. Mine is Fedora. Arguments for Fedora include:

* Best SElinux implementation if that sort of thing turns your clock.

* Latest gnome 3 and yearly new releases with even more recent desktop software.

* dnf, or yum as it was called, which from personal experience I think is very good. On par or better than apt/dpkg.

* Backing of RedHat, a huge Linux company that have proven their dedication to open source countless times and are one of the major contributors to the Linux kernel.


> Your requirements are met by any distro so you'll get people recommending their distro of choice here.

Exactly. You'll basically just need to experiment to find what suits you for the little things.

For me, it's Elementary OS, and before that, Slackware. When I finally started getting over my aversion to systemd I started using Elementary full time, and it's a great fit for me. The interface is superficially a near-clone of macOS, but in reality, only where it makes sense to do so. It's Ubuntu LTS based so it's got a solid foundation, and it has some great features for developers. The Terminal app is great, the built in visual text editor (Scratch) is good but not superb (it meets my needs but programmers will probably stick to what they're used to), and hardware support in my experience is, again, as good as Ubuntu. It also happens to have excellent laptop support, with ACPI and sleep/hibernate/resume usually working out of the box unless you have esoteric hardware.

Two downsides I've found: One, it can be a bit slow on underpowered hardware (but that's probably not an issue in your case), and two, it's not a rolling release which is a dealbreaker for some. If those are issues, Arch may be a better fit. I've found Antergos to be a beautiful, painless introduction to that distro.

An honorable mention is BunsenLabs, which was started by former users of Crunchbang Linux. It's almost pure Debian with a well-configured Openbox+Tint2+Compton theme and some really great defaults. It's as minimalist as you can get while still running a stacking WM and visual effects. Hardware support is slightly less robust than Ubuntu thanks to being Debian based, but I never had any issues with it except on one old Pentium M laptop that didn't support PAE kernels. They do offer a non-PAE installer for just that scenario (again, not a concern you'll have).


I tried several and these were my experiences:

* Ubuntu. Works fine (I used kubuntu and xubuntu).

* Mint. Tried it and it seemed to work okay until I upgraded it one day and ran into issues that I think were related to the system having an identity crisis over who it really was (ubuntu or mint?).

* Fedora: had a few more driver issues than ubuntu but still worked okay for the most part.

* Debian: pain to set up and a lot of issues - there wasn't a bootable ISO I could find that would let me boot into a USB and test run the latest version with all of the (nonfree) drivers I needed.

* Arch: ran into more bugs than on ubuntu/fedora: the project maintainers don't do QA as effectively as ubuntu, debian and fedora. There seems to be a pervasive attitude that since its the distro where you get your hands dirty, that you should hand-fix a lot of bugs too. I tried manjaro as well because I wanted to try Arch and didn't want to configure every little thing until I saw this horrifying post and then ran far, far away: https://www.reddit.com/r/linux/comments/31yayt/manjaro_forgo...

* Gentoo: tried and failed to install firefox due to dependency issues and then gave up. Waste of time.

* Opensuse: tried and again got bogged down by the package manager crashing when trying to upgrade some pretty basic part of the system.

For me the decision comes down to comprehensive driver support out of the box and QA. Ubuntu still seems to be the best at those although Fedora isn't far behind. Arch has the most up to date packages which is nice but IMHO its instability caused me way too many headaches.


> Debian: pain to set up and a lot of issues - there wasn't a bootable ISO I could find that would let me boot into a USB and test run the latest version with all of the (nonfree) drivers I needed.

Does this help? https://fiendish.github.io/The-Debian-Gotham-Needs/


Definitely, thanks. I will try that next time I'm distro-shopping.


What were your experiences Kubuntu and Xubuntu?


Good. Both worked out of the box, upgrades (seemingly the most fragile part of every distro) pretty much never broke anything in ~5 years. Clean UIs.

I'm not 100% satisfied with it, but the only problem I have that is actually fixed by another distro (Arch has more up to date packages) went hand in hand with more instability than I was willing to tolerate.


"What's the best X?" is why most programmers waste their time chasing a myth. There is no best thing for all; it all depends on the individual. A better question would be "what's a good enough X?" and then go from there to best suit your needs. A generic best X simply does not exist out of context, and the context is usually personal.


As a conventional conversation starter, maybe. But I found questions like these to spawn interesting discussions on platforms like HN - which is in the end, what we're here for, right? :)

Also, anyone commenting here about their personal favorite will most likely comment on why that distro is their favorite, which should at least be enough for OP to inform himself further.


I would go with Gentoo. I reckon that compiling might be a PITA, but I'm rather accustomed to the workflow, using flags to handle specific requirements, OpenRC, and other Gentoo-specific intricacies that might scare off newcomers.

On the other hand, gentoo makes a great linux experience for those who would like to get more intimate knowledge of Linux internals, networking, etc. It's a great experience if you have time and will to get your hands dirty. The documentation is excellent IMHO and the community (forums) very helpful.


> […] gentoo makes a great linux experience for those who would like to get more intimate knowledge of Linux internals […] It's a great experience if you have time and will to get your hands dirty.

Very true; keep the time investment aspect in mind though. Gentoo is probably not what you want if things should just work out of the box right away with minimal tweaking and a minimal learning curve. Ubuntu (or perhaps Arch) is more friendly in that respect, and unless you have really outlandish hardware, it will have you sitting in a working desktop five minutes after you've plugged in a USB-drive with the installer.


I've recently moved to Manjaro (Arch) after trying to upgrade from Kubuntu 14.04 to Kubuntu 16.04 and finding out (you don't wanna know how much time I wasted) that Kubuntu is practically unsupported atm. So I tried Neon (the "new" Kubuntu) ... but they don't even have a working installer. I didn't want to go to Ubuntu directly for the obvious reasons.

Now to me stability is everything. I have shit to do and don't have time to fuck around with the system every two days (that's why I was on 14.04 in late 2016). But a friend of mine convinced me to invest some time, install Manjaro and learn the slightly different ecosystem. And I can say, it's been worth it. Fantastically stable, things work, and in the 3 months of usage I have yet to find a bug.


> So I tried Neon (the "new" Kubuntu) ... but they don't even have a working installer.

It worked flawlessly for me.

Now I wonder if I was lucky or you were unlucky.

Oh, and I love it: clean fresh KDE 5 built on an Ubuntu/Debian base.


Try using any non-US settings and it will break :). Unless they fixed it, but I know it went unfixed for more than a month because a friend of mine tried as well and got a similar error.


Thanks for clarifying.

Can't remember installation settings but I know I have successfully used the Norwegian locale after installation.


I occasionally boot into Manjaro/XFCE; it's nice. But I find it a bit odd that you use the term "stability" in conjunction with any rolling distro.


I also vote for Arch Linux. I have been using it for the last few years and I like it. I started with pure Arch, but switched to Manjaro about 6 months ago since I needed the system to be more stable. I believe Manjaro provides a little more stability than pure Arch since Manjaro has a kind of release cycle (every week or two): they do an additional check that all core system components work well together before releasing updates to users. I prefer the Xfce desktop over Gnome/KDE since it's simpler and therefore can potentially cause fewer issues (I used Gnome/KDE/Lxde before, as well as the Xubuntu/Lubuntu distros).

PS: Manjaro in my case was installed without the GUI installer, since I needed a somewhat complex LUKS/dm-crypt encryption scenario and the GUI installer doesn't support such complex configurations.


I thought so too. I fought hard against going to a rolling release. But so far, in THIS SPECIFIC CASE, it seems the superior option.


After trying everything with rolling release in it, I settled on openSUSE Tumbleweed. It runs here on a 12-year-old Lenovo/IBM X61, on a Samsung with an Atom 150 processor, on a Lenovo T460s, and on 2 tower boxes. There are no complaints once you have all the toolchains installed for your development effort. The other rolling distro to recommend is Arch, but it takes some time to get it where you want it to be.


Xubuntu is my daily driver.

Fast and light, rock solid, many packages available, a lot of tutorials and documents on the 'net, and the LTS is guaranteed to receive updates for three years on the desktop and five years on servers (well, on the server it's Ubuntu).

Just works and doesn't get in the way.

If I need something else, qemu-kvm is awesome or docker.


I'm having a lot of issues with Xubuntu 16.04 to the point where I consider moving to another distro.

* Chromium tends to be unresponsive after waking up from hibernation.

* VLC tends to hang and freeze.

That's not the stability and simplicity that I originally moved to Xubuntu for.


I'd go for Mint. The only system that never disappointed me and always just worked. Second place would go to Debian "stable", old (== problems with Wifi) but rock solid.


I loved Mint until I switched to Mint 18; now I have weird problems with the wifi stopping randomly, the webcam, and the audio. Besides, the new desktop theme Mint-Y doesn't work well with some software I use, like Repetier and Slic3r.

I'm considering changing to Ubuntu :(


I love cinnamon so that's a big plus on Mint but I didn't love being stuck on an old version of Ubuntu and all the associated core packages/drivers. I had some hardware issues that were fixed by using a more recent version (now on Fedora).

I'm by no means a seasoned Linux veteran though so just explaining that YMMV.


Doesn't Mint stay in-sync with the latest Ubuntu versions? It's literally Ubuntu with Cinnamon/MATE and different branding. Everything version-wise should be identical with Ubuntu, since it uses the same packages.


Mint used to stick to the latest Ubuntu till Linux mint 16. Starting from version 17 they only track the long term support versions of Ubuntu. Mint 17 - 17.3 were based on Ubuntu 14.04. Mint 18-18.3 will be based on Ubuntu 16.04.

They do have another version that is based on Debian directly, but I have never used it so can't say much about its upgrade paths.


I think it's pegged at least to the LTS ones; when I was on Rosa it was still 14.04 when 15.10(?)/16.xx were already out.


That is correct.


If you want that, there is an official mate version of ubuntu now which works extremely well. Not sure about cinnamon. Definitely much less update-related headaches than mint.


Another vote for Ubuntu MATE! (with redmond panel theme) Has just enough polish without trying to redesign a work flow that I'm perfectly happy with. Glad it's an official Ubuntu dist now.

I have moved fully to having my panel on the left side which used to be one of the things that kept me off Unity though. Resisted for a long time, but it turns out that this is the correct placement, ah well.


I use NixOS for my full-time development laptop and it's great. Declaratively specified OS environments are the bees knees.


I haven't found a "best" distro yet.

Before I discuss specific distributions:

1) powertop is your friend for battery management

2) Pretty much all distros will let you run virtualbox and qemu-kvm so there's no real wrong choice for your use case.

A lot of people like Ubuntu, but I've had terrible luck with it. It somehow seems to be in the (for me) uncomfortable spot where it adopts certain things before they are ready for prime time while also having other things feel old compared to a rolling-release distro.

Debian "testing" tends to be more stable than Ubuntu and "unstable" more bleeding edge. If you want a middle ground between those two, give Ubuntu a try.

That being said, nearly 20 years of running Linux has given me a strong bias for source-based distributions. Rolling binary distributions (e.g. Arch) tend to have issues with shared libraries, and release-based binary distributions are often a pain when switching releases.

For non-laptops I use Gentoo, but you don't want to have to install or upgrade packages when not plugged in as you will be able to watch your battery usage go down in realtime.

My current laptop uses Nix, which is a source-based distribution with a binary caching system. I love Nix, but it's a small enough community that you are likely to find software for which no package exists; as a source based distribution, you can often write a package expression in a few minutes, and I've done that about 6 times in the past 6 months. It is a very non-FHS layout so a ./configure && make && sudo make install like you can do on Gentoo is unlikely to work.
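For a sense of scale, a minimal Nix package expression really is only a handful of lines. A hypothetical sketch for an autotools-style project (the name, URL, and hash are placeholders, not a real package):

```nix
# default.nix -- hypothetical package expression; name, url, and sha256 are placeholders
{ stdenv, fetchurl }:

stdenv.mkDerivation rec {
  name = "mytool-${version}";
  version = "1.0";
  src = fetchurl {
    url = "https://example.com/mytool-${version}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
  # stdenv's generic builder runs ./configure, make, and make install by default
}
```

Dropped into nixpkgs (or wired up with callPackage), that's roughly all that's needed when the upstream build is well-behaved.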

Another upside of Nix is that it also has a lot of great tools for developers, letting you easily make isolated environments with different libraries and utilities in each one. All this being said, I usually don't recommend it to someone who is asking "what distro should I use?" since the small community and sheer difference of the approach makes it much harder to do google-based troubleshooting.


I've recently made the switch across to Arch. If you need to spin up something quickly, it's not going to be for you. But, if you have a little time and want to build it into something that is exactly what you want and need, it's perfect.


Well, there are Antergos and Manjaro, but they take away a bit of the fun and bring some new, unique bugs.


Frankly, this question is a very personal choice. Any distribution you choose will really just be the closest base point to get you to your "perfect" Linux installation.

After just a few weeks, even using the same distro, everyone's installation is customized to its user like a glove. Some start with more comfortable distros than others. Something like Ubuntu will give you a very stock, GUI-heavy, "beginner/I don't want to think about my OS" operating system. From there, you get into Debian, a much more "pick and choose" distro than Ubuntu, but still with great support and stable packages. After that, maybe Arch, a rolling-release distro with a great, always-up-to-date system and a very minimalist, well-designed base install. After that, Gentoo, a hand-crafted, labor-intensive distro that will be completely bespoke to you.

Some never move on from Ubuntu, some never make it to Gentoo. AND THAT'S _OKAY_. It's about what YOU are comfortable with.

As you grow and become more familiar with the CLI and Linux, you'll want to take more control of what you install, and I think you'll start to appreciate simplicity and purity over "initial ease of use". You might even consider trying out different desktop environments and window managers like KDE, i3 (my current love affair), or awesome. At that point in your Linux journey, you'll actually have some intuition about how a window manager is different from your distro and how most window managers are built on top of the X server. You'll have your own hand-rolled .bashrc. You'll know what you _want_ from your OS.

Personally, that's where I am now. I've been running Arch for 2 years now and made the switch from GNOME to i3 about 6 months ago.

My advice is not to try the most highly recommended distro, or the most "barebones" one, but simply to begin with something very easy like Ubuntu. You might quickly hit some frustrations and identify things that you would like your distro of choice to do better. At that point, look elsewhere. Migrate and give that some time.

Don't just jump onto the "perfect developer's distro", ease into the easiest and most comforting distro for you "where you are now".


Would you be open to trying non-Linux distros such as BSD distros?

There are various flavors of BSD that have the toolsets to do what you requested. Not very sure of laptop battery management though.

FreeBSD (Very stable)

OpenBSD (Stability + Security focused)

PCBSD (Linux like ease of setup for Desktop environments)

They all have access to large/comprehensive application/system/development software.


I can confirm FreeBSD is very stable on my 2008 UniBody MacBook. Even suspend/resume works.


PC-BSD has changed name to TrueOS.


I personally use ArchLinux on my development laptop.

Three major reasons:

A) AUR

B) custom installs are default

C) rolling release + up-to-date

Of course, Arch is not that easy to set up compared to other distros (Antergos might help); you're gonna have to manage the system yourself to some extent.

If you don't like ArchLinux you could go for Fedora, NixOS or Ubuntu variants.


Would you explicitly put KDE last, or did you just forget it? As a Mac user and former Ubuntu user, I wonder whether KDE is losing the race.


I haven't used KDE a lot, tho with the newer release I'm definitely willing to try.

Personally, I think KDE's integration is a bit less polished; KIO and the like don't work as well as GVFS. (GVFS uses FUSE while KIO (IIRC) uses sockets or something.)

Gnome 3, for the most part, works like an oiled machine.


You may want to consider how well each distro supports different profiling / tracing tools, especially in userspace.

For example, I've recently started to play with Systemtap, and found it to be the perfect tool for some of my work.

But after some frustrating attempts to use it on Ubuntu 14.04 LTS, I discovered that it's broken and Canonical won't fix it. I tried to build my own, nonbroken version of Systemtap on this distro, but ran into nontrivial problems with library versions.

Even on Ubuntu 16.04 / 16.10, Ubuntu seems to build Perf Tools (which is part of the kernel's code base) without support for Python scripting. It was pretty easy for me to fix, but it did require downloading the kernel source, and rebuilding "perf". I don't know if they have a good reason for doing things this way, but it's irritating.

In contrast, Oracle Enterprise Linux has DTrace, which is arguably the (aging) gold standard for dynamic tracing on Linux. But OEL is a distro I never seriously considered using before needing this kind of tool.

* NOTE: I'm not trying to state which tracing / profiling tools work well on specific distros. I haven't done enough research for that. My point was only to bring attention to this category of feature.


Right now the situation with macOS and Windows 10 has spawned a bit of a migration. There is a growing trend of people buying PC hardware and running Linux on it. I think you will find that the most common choice of OS among these people is Ubuntu. This is because it's designed with user-friendliness in mind and has a lot of support for new users. It wouldn't be a bad choice.

I personally am using elementary right now. It seems to take advantage of the macOS/Win10 exodus by offering the next closest experience. It takes the principles of user-friendliness from Ubuntu and goes a step further. As others have said here, it sets up quickly, has an awesome terminal, has a great default text editor, and looks sublime. It's great. It has the occasional bug but I still find myself using it. I really believe that elementary is the closest thing there is to a universal, open source OS that makes computing accessible to _everyone_. I have been doing a lot of Rust stuff in Visual Studio Code and everything works very well.


My daily driver is FreeBSD (not a linux distro, I know). It has VirtualBox and bhyve (its homegrown virtualisation system), and runs most if not all software that runs on Linux. Plus the benefits of a very coherent system that's easy to grok. Also, very stable and dependable.


Pretty much any reasonably-mainstream distro will be about equivalent for the features you're prioritizing. Usually, when someone asks about which distro to use, my go-to answer is openSUSE. It's easy enough to give Ubuntu a run for its money, and I've found it to be much less prone to breakage.

----

I personally use Slackware. Some developer-friendly features I've found to be useful:

* Ships with all sorts of editors, including Emacs (which is what I use)

* Ships with the full GCC suite among other compilers (including LLVM)

* Convention is for all packages to include development headers; no more "foobar" and "foobar-dev" like in most other distros

It ain't for everyone, though. Like Arch and Gentoo, Slackware carries an expectation of being very comfortable with low-level Linux use.


> Arch and Gentoo, Slackware

Over years, I have tried all three and found Slackware to be the easiest one to install and get up and running as a development environment or as a server. It worked perfectly on a Dell laptop (including wifi) in 2003, and it is working perfectly now on an Asus UX305 (the best light-weight non-Apple laptop I know of).


It's definitely the easiest to install of the three. There are some tradeoffs, though:

* No dependency resolution. I consider it a feature, but quite a few folks will understandably consider it a huge drawback.

* No PAM. Again, whether this is a feature or a drawback depends on the user.

* No systemd. Same story.

Basically, the aim of Slackware is full customization and transparency, which means that users are not required to worry about dependencies or broken authentication systems or binary log files. Slackware also has a pretty strong "if it ain't broke, don't fix it" mentality, which probably explains why the installer doesn't look all that much different from the one in Softlanding Linux System ;)

I'm sure you already know all this; just clarifying for other readers on this thread. It's definitely the easiest of the SAG Trifecta to install, but it deviates pretty strongly from the Linux distro norm due to its history and philosophy.


If you are a professional developer and value the security of customer data, Qubes OS is probably the best Linux system you can use today, but check the HCL:

https://www.qubes-os.org/hcl/


I've recently started experimenting with NixOS on my secondary (testbed) laptop at home. (My main home laptop has Windows, while at work we use Ubuntu.) Based on that and posts in this thread, I'd say that if you're willing to consider Arch, it might be worth adding NixOS to your list too. In my experience, at some cost, it gives you one particular super-power that I've never seen anywhere else. Specifically, breaking down cons vs. pros:

- ~CON: It's not as polished an experience as Ubuntu on "first install", i.e. "end user first" or "Windows-like". But based on other comments here, I assume that if you're willing to try Arch, you've agreed to some tweaking. Please note I've never used Arch, so I can't precisely compare the level of tweaking required; what I can say for sure is that it's certainly easier than what was required in the '90s (at least because we have teh internets now, esp. the ArchWiki + askubuntu...). But again, I started my post with "if you're willing to consider Arch", so in that case I assume this point is a not-really-CON, as you're already agreeing to take this cost.

- CON: you have to learn a new language (Nix). To sweeten the deal, IIUC it's one of the few truly purely functional languages around. No IO monads or whatsit. As a result, you may (um; will have to...) learn some interesting functional tricks you never imagined may exist. Note also, that some advanced usage ideas are spread over a few blogs (see esp. the "Nix pills" series), and also in inline comments in the Nix standard library source code.

- SUPER-PRO: I found that NixOS is a hacker's dream OS. Nix's core idea is that removing a package from your OS nukes it clean, leaving absolutely no trace it ever existed. As a result, you practically don't fear changes to even the most sophisticated internals of the OS. Want to change a kernel compilation flag? Meh, let's just try this one, bam, compile, reboot! Oh, it hung during boot? Pfff, reboot again, select the previous version in GRUB, and we're back! [Um, I mean, as long as you haven't burnt your hardware ;) you know, Linux is powerful :)] Also, part of this is the fact that all of your OS config is described in a single file, so you can control everything from a single central position, and VCS it trivially.
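That "single file" is /etc/nixos/configuration.nix. A hand-wavy illustrative fragment (the hostname, device, and package list are examples, not a recommendation):

```nix
# /etc/nixos/configuration.nix -- illustrative fragment; values are examples
{ config, pkgs, ... }:

{
  boot.loader.grub.device = "/dev/sda";
  networking.hostName = "devbox";
  environment.systemPackages = with pkgs; [ git vim firefox ];
  services.openssh.enable = true;
}
```

`nixos-rebuild switch` applies it, and every rebuild shows up as a new "generation" in the GRUB menu that you can boot back into if the latest one misbehaves.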

I've already sent a couple PRs based on this ease of hacking and tweaking, namely to neovim and systemd-bootchart. I'm also trying to write my own series of blogposts documenting this (while it's still fresh in memory); but, eh, you know, a bit too many side projects... not to mention even some of this weird "real life" thing people are talking about so often...


NixOS gets my vote too. I started on Gentoo back in 04, played with Arch for a year or so, then went to Nix a few years ago. I have never had so much love for an operating system.

The Nix language is cute, very powerful, and as you say, is capable of cool tricks. I personally find it very easy to read too.

For every project I work on, I define a .nix file describing its build environment. It is then just a matter of cloning the repo, typing nix-shell, and then all build dependencies will be downloaded and I'm ready to go. These nix files also specify which emacs I want to use and with which modes: I don't use Emacs' package manager.
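Such a build-environment file is typically just a derivation listing buildInputs. A hypothetical sketch (the package names are examples only):

```nix
# shell.nix -- hypothetical per-project environment; buildInputs are examples
with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "project-env";
  buildInputs = [ gcc gnumake emacs ];
}
```

Running nix-shell in the repo then drops you into a shell with exactly those tools on the PATH, without installing anything system-wide.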

I use nix-shell for pretty much everything. My basic environment contains only core utilities. I launch firefox with "nix-shell -p firefox --run firefox" (bound to a shortcut defined in my xmonad config).

Nix is easy to customize. The override infrastructure makes it easy to modify package definitions from your user config file, and I find I am much happier doing this in a programming language rather than a bespoke config file format.

Other pros: you don't need to be root to install stuff. And there isn't much more satisfying than getting a new laptop, git cloning the repo containing your intended system configuration, and being back up with your old OS with one or two commands.

As you say, if Arch turns you off, Nix may horrify. Despite the vote, I'm pretty discriminating about who I recommend it to.


Hm, I didn't intend to convey the message that "Nix is more horrifying than Arch". For me, it's actually the other way round. I've never tried Arch, but at some point in the past I tried Gentoo for a few seconds, as well as Slackware; compared to those, in NixOS I feel this huge safety net, with which, however* deep I dig, I won't be punished for it. And truth be told, even in Ubuntu I have to hack the OS occasionally, so in some ways NixOS feels safer than Ubuntu.

* -- again, standard disclaimer, any Linux distro can kill your puppy, eat your homework, and burn your laptop; but that's true even of Ubuntu :)

Not sure how to rephrase the original post to better express what I mean. That said, after reading your reply, I agree that I find it hard to push on NixOS to even fellow "regular" devs; I don't have balls yet to install it as my main OS either. Maybe it's that NixOS has somewhat harder initial curve than "your common Linux"; the install phase reminded me somewhat of Slackware of '90s for a sec (though it's still better); again, not sure how it compares to Arch; plus there's the new language (Nix). That said, personally I wouldn't even think about using Arch, knowing how even tweaking Ubuntu is risky. And exactly as you said, I found Nix (the language) truly cute quite quickly. Totally much simpler than Haskell (which I still can't break into, even after some initial successes with SML, OCaml, and growing interest in Idris, which is reportedly Haskell-inspired).


> It's one of the few truly purely functional languages around. No IO monads or whatsit.

The IO monad, as in Haskell, can be viewed as an entirely pure construct. Its implementation is impure only for performance reasons. You can write your own completely pure IO type if you want, with unchanged semantics.

A basic example of this would be

    {-# LANGUAGE DeriveFunctor #-}
    import Prelude hiding (IO, putStrLn, getLine)
    import qualified Prelude
    import Control.Monad.Free (Free (..), liftF)

    data IOF r = PutStrLn String r
               | GetLine (String -> r)
               deriving Functor

    type IO = Free IOF

    putStrLn s = liftF (PutStrLn s ())
    getLine = liftF (GetLine id)

    main :: IO ()
    main = do
      putStrLn "Enter your age"
      age <- read <$> getLine
      if age >= 18
      then putStrLn "You can vote!"
      else putStrLn $ unwords ["You can vote in", show (18-age), "years"]
Of course, now you need a way to interpret your IO type in order to make the machine actually do stuff. This can be achieved in two ways: you can modify the GHC RTS to allow it to interpret your IO type, or you can define an interpreter in Haskell that converts it to the IO type the GHC RTS already knows how to interpret:

    interpret :: IO a -> Prelude.IO a
    interpret (Pure a) = return a
    interpret (Free (PutStrLn s r)) = Prelude.putStrLn s >> interpret r
    interpret (Free (GetLine s)) = Prelude.getLine >>= interpret . s


Even though it’s very much out of context for this thread, it’s a little pearl that brought me again a little bit closer to understanding. Thanks!


I believe the parent meant to say "no IO", although my knowledge of Nix is limited.


NixOS has my vote.

On the first ~CON: If the "end user" is a developer that is new to nix and nix expressions, the transparency of nixpkgs and its nix expressions may turn tweaking from a problem into an opportunity. Seeing the details of builds, configs, and installs clearly laid out in a nix expression taught me more about the package that I was blindly depending on. Having the same tools (nix, nix-repl, nix-shell, *.nix) available across projects and vertically within developments has reduced the "special knowledge" that "end users" need to know for both development and operations.

Extra PRO: Have the expression for your configuration just perfect? Need to build on multiple VMs, machines, EC2? Simply add an expression for networking and deploy with Nixops https://nixos.org/nixops/. Now logical and physical configuration is a declarative data file. If a build fails, I check the source. If my cloud fails, I check the source.
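For readers who haven't seen NixOps, a deployment is just another Nix expression: per-machine NixOS configs plus a deployment target. A minimal sketch (the machine name and enabled service are invented for illustration):

```nix
# network.nix -- hypothetical NixOps deployment; machine name and services are examples
{
  network.description = "demo network";

  webserver = { config, pkgs, ... }: {
    deployment.targetEnv = "virtualbox";
    services.nginx.enable = true;
  };
}
```

Something like `nixops create ./network.nix -d demo` followed by `nixops deploy -d demo` then builds and provisions the machine from that one file.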

For me, the reproducibility and variable independence in NixOS puts the SCIENCE in Computer Science.


> Have the expression for your configuration just perfect? Need to build on multiple VMs, machines, EC2?

+1. Dependency management and deployment is the biggest problem Nix solves and it is the only system that actually does so instead of putting the problem somewhere else and presenting that as a "solution." A lot of people are using Docker for this scenario, which gives the illusion of a solution for new projects because you have convenient pre-built images. The images do not come from thin air, and people will run into exactly the same problems as they did with VM images once their Docker projects age and dependencies will need to be updated. Adding virtualization layers cannot solve the problem of updating software dependencies.

The only solution is a package manager that tracks the complete dependency tree, all the linked libraries down to libc, and prevents different versions of packages from interfering with each other. Nix is the only package manager that does this.
