secure's comments

Out of curiosity, what is your reason for using wifi hotspots in today’s world?

Personally, I don’t bother connecting to a wifi network anymore (except when I’m at home or at work) because the mobile network is just so fast and convenient.

Even when abroad, I just buy a local SIM card and use that.


Cost. Monthly data plans are a waste (I spend weeks, sometimes months, without needing mobile access) and pre-paid packages are limited and expensive.

I could certainly afford them, but considering that Wifi is plentiful where I live, it's not worth it.


Cellular data networks in many parts of the world deliver incredibly slow speeds. In those places, the hassle of finding and connecting to a WiFi hotspot is often worth it. I'm currently traveling in the Philippines and malls, restaurants and cafes commonly offer WiFi to attract customers, and it works.

Even when the local network is fast, buying a SIM card may not be an option if your phone is locked and under contract, which is very common for US users.


Buy a (used) SIM-free phone that's 2-3 years old; it's dirt cheap and will save you a lot of trouble.

You know how I know you live in a major metropolitan area and have a good deal of disposable income?

In the user interviews we carried out, some of the key use cases we found for WiFi use are:

- People without unlimited data plans who would prefer to conserve the data allowance on their mobile plan

- People travelling abroad who want to avoid roaming fees (I'm partial to a local SIM too, but for many that's quite technical)

- People who want to find a nice cafe where they can use their laptop

- Faster internet (much of the world can't rely on fast LTE networks, for example)

- Connectivity in places where there's no mobile signal (e.g. underground restaurants, bars, etc.)


This would be useful for a not-so-monetizable crowd - the modern "hobo" population: http://www.express.co.uk/news/world/572691/Homeless-hobo-cod... - and anyone else who cannot afford, or chooses not to pay for, extensive data plans.

Seems like you don't travel to countries with poor/oversubscribed mobile coverage (e.g. Argentina).

+1. Funnily enough, I'm in Argentina right now, working on the Android version of WifiMapper as we speak.

Great! We can have lunch next week if you're available.

Sure, I'm on a super hectic schedule, so it may just be a drink and some empanadas, but let's set something up.

Or to those countries where getting a local SIM can be a real hassle (e.g. India).

I like rkt’s focus on the deployment issues that Docker still has — as an example, rkt verifies signatures by default.

As another example, rkt intends to work better with systemd/kubernetes, but AIUI that’s still on the roadmap and not actually implemented.

Looking forward to when CoreOS actually recommends running rkt in production :).

-----


> As another example, rkt intends to work better [than Docker] with systemd/kubernetes, but AIUI that’s still on the roadmap and not actually implemented.

Could you elaborate on this? Do you mean working better with systemd as a process supervisor or as a runtime?

Many people don't know this, but it's already possible to use systemd as a runtime for Docker containers[0], which is about as integrated as I can imagine[1]. Though admittedly, the Docker daemon and runtime do not play well with process supervision (of any kind, including systemd)[2].

Last I checked, CoreOS had posted to the systemd mailing list announcing their plans to integrate with nspawn, though I don't think that's been released yet.

[0] This is the best-kept secret of both Docker and systemd. I recently conducted a workshop on "Docker Without Docker" - in other words, how to run Docker containers without even having the Docker runtime installed (using pure systemd).

[1] And, depending on your use case, I'd recommend giving it a shot - there are a number of things that systemd provides that Docker still does not. On the other hand, Docker has a large ecosystem, and the tools for building initial container images are very accessible.

[2] As of recently, you can use 'exec mode' to specify the initial process (PID 1) inside a container running under Docker, but systemd still does not have access to the actual process on the host, which makes it cumbersome to monitor - the CoreOS documentation tells you to do something like this for Docker + systemd: https://github.com/ChimeraCoder/znc-kibana-playbooks/blob/ma...
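
For reference, the workaround in those docs boils down to a unit file shaped roughly like this (sketched from memory; service and image names are made up for illustration):

    [Unit]
    Description=Example app wrapped in a Docker container
    After=docker.service
    Requires=docker.service

    [Service]
    # systemd supervises the docker *client* here, not the container's
    # actual PID 1 on the host - which is exactly what makes monitoring
    # cumbersome.
    ExecStartPre=-/usr/bin/docker rm -f myapp
    ExecStart=/usr/bin/docker run --name myapp myimage
    ExecStop=/usr/bin/docker stop myapp

    [Install]
    WantedBy=multi-user.target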

-----


> [0] This is the best-kept secret of both Docker and systemd. I recently conducted a workshop on "Docker Without Docker" - in other words, how to run Docker containers without even having the Docker runtime installed (using pure systemd).

Could you expand on this? I'm curious as to what you mean/how you did this.

-----


Sure. These are the slides for my talk, which includes some of the code examples that we walked through: https://chimeracoder.github.io/docker-without-docker/#1

Consider Git. Git exists solely on the filesystem. If you want, you can read Git repos by inflating the zlib-compressed objects yourself, and create Git repos by compressing objects, hashing them, and storing them in the right locations, the exact same way that Git does.

It's a lot of work, and the Git toolchain exists so you don't have to type insanely long bash one-liners just to read your commit history. But it's kind of cool to know that >95% of Git is really just 'syntactic sugar' around functionality that's also provided by other command-line tools[0].
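
To make that concrete, here's a sketch of reading one loose object with nothing but zlib (the object path is a placeholder; the output has the form "<type> <size>\0<content>"):

    # Inflate a loose Git object by hand - equivalent to what
    # `git cat-file` does for you, minus the header parsing.
    python -c 'import sys, zlib, os; os.write(1, zlib.decompress(open(sys.argv[1], "rb").read()))' \
        .git/objects/ab/cdef0123456789abcdef0123456789abcdef01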

I'll wave my hands a bit, but in short: containerization uses features implemented at the kernel level, and in fact, until recently, Docker and systemd both built on top of LXC (Docker has since switched to their own library, libcontainer).

If you take a running Docker container and dump it, you'll get a root filesystem. You could chroot(8) into this root filesystem, but as we know, containerization is more powerful than chroot. Once you've dumped the container, systemd doesn't need to know that it was once a Docker container - it'll just look for whatever binary is located at /sbin/init and run that (or whatever command you tell it to run instead - just like your actual OS, which is not a coincidence!).
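
Concretely, the dump-and-boot step can be as simple as this (paths and the image name are placeholders):

    # Dump the container's root filesystem into a directory...
    mkdir -p /var/lib/machines/myapp
    docker export $(docker create myimage) | tar -x -C /var/lib/machines/myapp

    # ...and hand it to systemd - no Docker runtime required.
    sudo systemd-nspawn -D /var/lib/machines/myapp /bin/sh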

One advantage to using systemd instead of the Docker daemon/runtime is that systemd is capable of running itself inside a container, whereas running init systems inside Docker containers is tricky and not recommended[1]. Furthermore, systemd is smart enough to know when it's running inside a container and when it's not, so the container init system plays nicely with the host init system - you get things like integrated system logs and networking.

Newer versions of systemd actually allow you to pull Docker images from the Docker hub directly, so you can even use systemd to replace `docker pull` as well as `docker run`.
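
If I remember the verbs correctly, that looks something like this (machinectl as of systemd 219; treat it as a sketch rather than gospel, and the image name is just an example):

    # Fetch an image from the Docker hub and boot it as a machine
    # (depending on your build you may need --dkr-index-url= and
    # --verify= options).
    machinectl pull-dkr library/redis
    machinectl start redis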

[0] There's a tiny, tiny portion of Git which is home-grown, but most of the features it builds on (SHA, zlib, diff) are easily replaced by other command-line tools.

[1] While it's supported, the primary use case of Docker is for running application processes: https://github.com/docker/docker/issues/2170

-----


Ah, that's a really great explanation - thanks.

I guess you lose all of the Docker metadata, links and volumes though?

You should turn this into a blog post if you have time - I'd upvote it anyway!

-----


> I guess you lose all of the Docker metadata, links and volumes though?

You only need links at runtime, so they're not part of the frozen image per se; they exist as part of a running container. Put another way, systemd handles container networking, so you don't need the environment variables that Docker injects when making links, because the containers can talk to each other already[0].

Volumes - if you mount external volumes with -v /foo:/bar, you can do the same with systemd. I'm actually not too sure about named volumes in Docker, since I almost never use them (it's way easier to reason about volumes when I control where they are located on the host).
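
For the -v style, a minimal sketch (paths are examples):

    # Equivalent of `docker run -v /foo:/bar`: bind-mount /foo from
    # the host to /bar inside the container.
    systemd-nspawn -D /var/lib/machines/myapp --bind=/foo:/bar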

> You should turn this into a blog post if you have time - I'd upvote it anyway!

Thanks - I'm actually working on that! Consider these slides a preview. :)

-----


Downloading from one-click hosters such as Uploaded or Keep2Share is easily possible with speeds exceeding 50 MB/s. When using a download manager (i.e. multiple concurrent downloads with, say, 6 connections), I can max out a gigabit line (i.e. 117 MB/s) with these hosters.

The same holds for Debian and Fedora mirror servers.

For torrents, the ramp-up typically takes too long, i.e. by the time you’ve got enough peers to max out your line, the file is already downloaded :).

-----


Not “the server”: the attacker logged into a honeypot, see http://en.wikipedia.org/wiki/Honeypot_%28computing%29

-----


Track https://bugzilla.mindrot.org/show_bug.cgi?id=2319

I worked on this for a while, but lost motivation because of the slow development pace. If you’re more motivated, you’re very welcome to pick up where I left off and bring this to thousands of users :).

-----


Author here. For the impatient: this is an IRC network implemented as a distributed system, written in Go on top of https://raftconsensus.github.io/

If you have any questions/comments, I’m happy to answer them.

-----


But ~ is not a valid character in nicknames in the first place, according to RFC 1459, see https://tools.ietf.org/html/rfc1459#section-2.3.1:

   <nick>       ::= <letter> { <letter> | <number> | <special> }
   <letter>     ::= 'a' ... 'z' | 'A' ... 'Z'
   <number>     ::= '0' ... '9'
   <special>    ::= '-' | '[' | ']' | '\' | '`' | '^' | '{' | '}'
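
If you want to check a nickname against that grammar from a shell, here's a quick sketch (note the POSIX bracket-expression quirks: ']' goes first, '-' goes last, and backslash/backtick are literal members):

    # Prints only the valid nickname; "bad~nick" fails because of the ~.
    printf '%s\n' 'good[nick]' 'bad~nick' | grep -E '^[A-Za-z][][A-Za-z0-9\`^{}-]*$'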

-----


I used to use InfluxDB plus a custom program that scraped HTTP endpoints and inserted the results into InfluxDB.

After playing around with Prometheus for a day or so, I’m convinced I need to switch to Prometheus :). The query language is so much better than what InfluxDB and others provide.
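
To give one concrete example of the kind of expression I mean (the metric name is hypothetical), computing the per-instance rate of server errors over the last 5 minutes is a one-liner:

    sum(rate(http_requests_total{status=~"5.."}[5m])) by (instance)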

-----


(Prometheus author here)

Thanks, that's awesome to hear! Feel free to also join us on #prometheus on freenode or our mailing list: https://groups.google.com/forum/#!forum/prometheus-developer...

-----


Does this work outside the US?

-----


You can only collect money with a US or Canadian bank account, but you can donate/send money with any bank card:

> Currently, if you want to collect funds from a tilt, you need to have a valid US or Canadian bank account.

-----


Another interesting angle is that it’s not only application software we need to change - the hardware drivers are not quite there yet either:

I have a Dell UP2414Q (3840x2160 resolution, driven via DisplayPort 1.2) connected to an NVIDIA GTX 660 card, which was one of the cheapest ones that supports DP 1.2.

With the proprietary nvidia driver, I need to manually edit the xorg configuration file to get the correct modes and, most importantly, disable XRandR in favor of Xinerama.
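
For the curious, the stanza in question is tiny (from memory, so double-check the xorg.conf man page):

    Section "ServerFlags"
        # Enabling Xinerama implicitly disables RandR; the X server
        # does not offer both extensions at the same time.
        Option "Xinerama" "1"
    EndSection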

This in turn breaks e.g. GNOME shell on Fedora 20 (without RandR, you’ll just get an exception in your syslog), and in general prevents plenty of use-cases (e.g. redshift for controlling display brightness, or changing rotation settings without restarting X11).

The reason for having to disable RandR is that there is currently no standard way to represent multi-stream transport (MST) connections, and 4K displays require 2 streams (1920x2160 each) at the same time. With RandR enabled, what you’ll see is 2 connected outputs, and all applications will treat them as such, even though you have only one monitor connected.

Fixing this requires changes in RandR (i.e. the X server) and in each driver. AFAIK, with the intel driver this should work; on nouveau there’s work under way; no clue about the proprietary nvidia driver.

-----


I'm running 3840x2160 on a Samsung U28D590 with a GeForce GTX 780 6GB card over DisplayPort 1.2 with 4 lanes @ 5.4 Gbit/s.

Driver version: 346.16, X.Org server version: 1.16.1 (11601000)

The only issue I've run into is that GNOME Shell won't respond to clicks when I run in 30 bits/pixel mode.

-----


