Cellular data networks in many parts of the world deliver incredibly slow speeds. In those places, the hassle of finding and connecting to a WiFi hotspot is often worth it. I'm currently traveling in the Philippines, where malls, restaurants, and cafes commonly offer WiFi to attract customers - and it works.
Even when the local network is fast, buying a SIM card may not be an option if your phone is locked and under contract, which is very common for US users.
> As another example, rkt intends to work better [than Docker] with systemd/kubernetes, but AIUI that’s still on the roadmap and not actually implemented.
Could you elaborate on this? Do you mean working better with systemd as a process supervisor or as a runtime?
Many people don't know this, but it's already possible to use systemd as a runtime for Docker containers, which is about as integrated as I can imagine. Though admittedly, the Docker daemon and runtime do not play well with process supervision (of any kind, including systemd).
Last I checked, CoreOS had posted to the systemd mailing list announcing their plans to integrate with nspawn, though I don't think that's been released yet.
This is the best-kept secret of both Docker and systemd. I recently conducted a workshop on "Docker Without Docker" - in other words, how to run Docker containers without even having the Docker runtime installed (using pure systemd).
And, depending on your use case, I'd recommend giving it a shot - there are a number of things that systemd provides that Docker still does not. On the other hand, Docker has a large ecosystem, and the tools for building initial container images are very accessible.
As of fairly recently, you can use 'exec mode' to specify the initial process (PID 1) inside a container running under Docker, but systemd still does not have access to the actual process on the host, which makes it cumbersome to monitor - the CoreOS documentation tells you to do something like this for Docker + systemd: https://github.com/ChimeraCoder/znc-kibana-playbooks/blob/ma...
>  This is the best-kept secret of both Docker and systemd. I recently conducted a workshop on "Docker Without Docker" - in other words, how to run Docker containers without even having the Docker runtime installed (using pure systemd).
Could you expand on this? I'm curious as to what you mean/how you did this.
Consider Git. Git exists solely on the filesystem. If you want, you can read git repos by inflating the ZLIB-compressed objects yourself, and create git repos by compressing objects, hashing them, and storing them in the right locations, exactly the same way that Git does.
It's a lot of work, and the Git toolchain exists so you don't have to type insanely long bash one-liners just to read your commit history. But it's kind of cool to know that >95% of Git is really just 'syntactic sugar' around functionality that's also provided by other command-line tools.
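As a small, concrete taste of that: Git's object IDs can be reproduced with nothing but `printf` and `sha1sum`. A blob is just the header `blob <size>\0` prepended to the file's contents, then SHA-1 hashed - which is exactly what `git hash-object` computes:

```shell
# A file containing "hello\n" (6 bytes) becomes the byte string "blob 6\0hello\n".
# Hashing it reproduces Git's object ID for that file:
printf 'blob 6\0hello\n' | sha1sum
# -> ce013625030ba8dba906f756967f9e9ca394464a  -

# The loose object on disk is that same byte string, zlib-compressed, stored at
# .git/objects/ce/013625030ba8dba906f756967f9e9ca394464a
```

Run `echo hello | git hash-object --stdin` and you'll get the same hash back.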
I'll wave my hands a bit, but in short: containerization uses features implemented at the kernel level, and in fact, until recently, Docker and systemd both built on top of LXC (Docker has since switched to their own libcontainer).
If you take a running Docker container and dump it, you'll get a root filesystem. You could chroot(8) into this root filesystem, but as we know, containerization is more powerful than chroot. Once you've dumped the container, systemd doesn't need to know that it was once a Docker container - it'll just look for whatever binary is located at /sbin/init and run that (or run whatever command you tell it to run instead) - just like your actual OS, which is not a coincidence!
One advantage to using systemd instead of the Docker daemon/runtime is that systemd is capable of running itself inside a container, whereas running init systems inside Docker containers is tricky and not recommended. Furthermore, systemd is smart enough to know whether it's running inside a container, so the container init system plays nicely with the host init system - you get things like integrated system logs and networking.
Newer versions of systemd actually allow you to pull Docker images from the Docker hub directly, so you can even use systemd to replace `docker pull` as well as `docker run`.
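For the curious, here's a hedged sketch of the "dump and run" workflow - the image name `myapp` is a placeholder, the commands need root, and the exact `machinectl` verbs for importing images vary by systemd version:

```shell
# Flatten a Docker image into a plain root filesystem. Docker is only needed
# for this one-time export, not at runtime ('myapp' is a placeholder image):
mkdir -p /var/lib/machines/myapp
docker export "$(docker create myapp)" | tar -x -C /var/lib/machines/myapp

# Boot it as a container under systemd; with --boot, nspawn runs the
# container's init, or you can name any command to run instead:
systemd-nspawn -M myapp --boot
# or: systemd-nspawn -M myapp /usr/bin/my-binary
```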
There's a tiny, tiny portion of Git which is home-grown, but most of the features it builds on (SHA, zlib, diff) are easily replaced by other command-line tools.
> I guess you lose all of the Docker metadata, links and volumes though?
You only need links at runtime, so they're not part of the frozen image per se; they exist as part of a running container. Put another way, systemd handles container networking, so you don't need the environment variables that Docker injects when making links, because the containers can talk to each other already.
Volumes - if you mount external volumes with -v /foo:/bar, you can do the same with systemd. I'm actually not too sure about named volumes in Docker, since I almost never use them (it's way easier to reason about volumes when I control where they are located on the host).
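For completeness, systemd-nspawn has direct equivalents for bind-mounted volumes, plus a virtual Ethernet link for container networking - the machine name and paths below are placeholders:

```shell
# Docker's -v /foo:/bar maps to nspawn's --bind (use --bind-ro for read-only);
# --network-veth gives the container its own network namespace and veth pair:
systemd-nspawn -M myapp --bind=/foo:/bar --network-veth --boot
```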
> You should turn this into blog post if you have time - I'd upvote it anyway!
Thanks - I'm actually working on that! Consider these slides a preview. :)
Downloading from one-click hosters such as uploaded or keep2share easily reaches speeds exceeding 50 MB/s. With a download manager (i.e. multiple concurrent downloads - say, 6 connections), I can max out a Gigabit line (i.e. 117 MB/s) from these hosters.
The same holds for Debian and Fedora mirror servers.
For torrents, the ramp-up typically takes too long - by the time you have enough peers to max out your line, the file has already finished downloading :).
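The 117 MB/s figure isn't arbitrary - it's roughly what remains of a Gigabit link after Ethernet framing and TCP/IP headers. A back-of-the-envelope check, assuming a 1500-byte MTU and TCP timestamps enabled:

```shell
# 1 Gbit/s = 125,000,000 bytes/s raw. Each full-size frame occupies 1538 bytes
# on the wire (1500 MTU + 38 bytes of preamble/header/FCS/inter-frame gap)
# but carries only 1448 bytes of TCP payload (20 IP + 20 TCP + 12 option bytes):
echo $(( 125000000 * 1448 / 1538 / 1000000 ))  # MB/s of usable payload
# -> 117
```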
Another interesting angle is that it’s not only application software we need to change, but also the hardware drivers are not quite there yet:
I have a Dell UP2414Q (3840x2160 resolution, driven via DisplayPort 1.2) connected to a nVidia GTX 660 card, which was one of the cheapest ones that support DP 1.2.
With the proprietary nvidia driver, I need to manually edit the xorg configuration file to set the correct modes and, most importantly, to disable XRandR in favor of Xinerama.
This in turn breaks e.g. GNOME shell on Fedora 20 (without RandR, you’ll just get an exception in your syslog), and in general prevents plenty of use-cases (e.g. redshift for controlling display brightness, or changing rotation settings without restarting X11).
The reason for having to disable RandR is that there is currently no standard way to represent multi-stream transport (MST) connections, and 4K displays require 2 simultaneous streams (one 1920x2160 tile each). With RandR enabled, what you’ll see is 2 connected outputs, and all applications will treat them as such, even though you have only one monitor connected.
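To make the workaround concrete, here's roughly what the manual xorg.conf edit looks like - the connector names and the MetaModes line are illustrative and depend on your GPU and driver version:

```
Section "ServerFlags"
    # Disable RandR's per-output handling; treat the two MST streams
    # as one logical screen via Xinerama instead.
    Option "Xinerama" "1"
EndSection

Section "Screen"
    Identifier "Screen0"
    # Two 1920x2160 halves side by side = one 3840x2160 desktop.
    # "DP-2" / "DP-2.8" are placeholder MST connector names.
    Option "MetaModes" "DP-2: 1920x2160 +0+0, DP-2.8: 1920x2160 +1920+0"
EndSection
```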
Fixing this requires changes in RandR (i.e. the X server) and each driver. AFAIK this already works with the intel driver, work is under way on nouveau, and I have no clue about the proprietary nvidia driver.