inopinatus's comments | Hacker News

Isn't this the construct that exhibits blurring problems due to subpixel alignment in Gecko / WebKit?

Docker solves a problem that most people don't have. It's not a PaaS, rather, it's (some of) the building blocks to create your own PaaS. Most folks don't need that. Most folks want to put files on a server and start a process. For those folks, Docker in the raw ends up being a whole lot of confusing & unnecessary scaffolding.

Moreover, Docker / Kubernetes aims to solve the problem of building services that can easily scale to hundreds of machines and hundreds of millions of users, "Google-style".

That's great, if that's what you need. But most people aren't building a service like that. HN, I believe, runs on one machine, with a second for failover purposes. And HN still has many, many more users than typical company-internal services, community services, or at the extreme end personal services.

When you aren't operating at absurd scale, "Google-style" infrastructure doesn't do you any favors. But the industry sure wants to convince us that scalability is the most important property of infrastructure, because then they can sell us complicated tech we don't need and support contracts to help us use it.

(Disclosure: I'm the lead developer of https://sandstorm.io, which is explicitly designed for small-scale.)


"But the industry sure wants to convince us that scalability is the most important property of infrastructure, because then they can sell us complicated tech we don't need and support contracts to help us use it."

And let's not forget: replace any and all efforts at code optimization with "just throw another rack of blades at it".


http://www.commitstrip.com/en/2015/07/08/true-story-fixing-a...

I just use Google Cloud's HTTP Load Balancer and autoscaler. It automatically spawns/removes VMs based on load.

There was also a time when most people thought they didn't need version control. Back in the 80s and 90s it was a justifiable viewpoint because existing version control systems sucked.

The problem with Docker is not that it doesn't solve (or attempt to solve) widespread problems. At its best, Docker gives you dev/production parity, and dependency isolation which is useful even for solo developers working part-time. The problem is that it's not a well-defined problem that can be solved by thinking really hard and coming up with an elegant model—like, for example, version control—it's messy and the effort to make it work isn't worth it most of the time right now.

That's no reason to write off Docker though. Pushing files to manually configured servers or VPSes is messy and leads to all kinds of long-term pain. You can add Chef / Puppet, but it turns into its own hairy mess. There's no easy solution, but from where I stand, the abstraction that Docker/LXC provide is one that has the most unfulfilled promise in front of it.


> At its best, Docker gives you dev/production parity

I get that when I use the same OS and built-in package manager?

I would virtualize the environment using something like VirtualBox for my dev and EC2/DigitalOcean/etc on prod.

> and dependency isolation

If you're going to scale something, you're going to split everything out on different virtualized servers anyway, so you'll get your isolation that way.

Basically, current mainstream practice is to virtualize at the OS level, whereas Docker is pushing to have things virtualized at the process level.

I personally don't see the advantage ... just more complexity in your stack. I never have to mess with the current virtualization structure, I don't even see it. It looks just like a "server", even though it's not. Isn't that better?


To be fair, I've worked in places where all the devs were on the same OS, same version, and we still had problems.

But I agree, just use VirtualBox. I know IDEA already supports deploying to VMs, and they just look like another machine, so there's no learning curve. All the benefits with none of the hassle.


Yeah, but then there's still the issue of secrets: you need testing PayPal credentials, testing mailing service credentials, etc. There's the issue of deploying changes fast without leaving files in an inconsistent state (you don't want half of some file to run). How about installing the required dependencies?

I don't use Docker, but those are problems I can think of off the top of my head.


Docker doesn't credibly solve the credentials problem and the other problems you outline (which do exist) are as practically solved with something like Packer. And I mean, I'm not a Packer fan--oh look, VirtualBox failed to remove a port mapping for the VM that just shut down, throw away the whole build--but it's built on much, much more battle-tested technology with a much wider base of understanding.

(And, later, if you want to play with Docker, Packer lets you do that too. But you should use the Racker DSL in any case, because life is too short to deal with Packer's weird JSON by hand.)


Thanks for pointing me to Racker (https://github.com/aspring/racker). I'm currently building Packer and Terraform images with chunked together Python scripts that work, but I wouldn't call them a great solution. I'm actually using Packer specifically so I can start with regular EC2, and then move to a more Docker-based infrastructure.

Packer severely frustrates me, with the maddening regularity with which it fails just for funsies. Or the consistent but completely inane ways that it fails, like refusing to proceed because it can't find a builder for an 'only' or 'except' clause (making it nearly impossible to re-use provisioners and post-processors across multiple projects). Racker does help--my shared Racker scripts are in a Ruby gem--though I think it pretty much reduces Packer to a dummy solution into which you dump directives on a per-builder basis. As a tool that you carefully feed the bare minimum of information to do its job in any specific situation, though, it works okay.

Terraform, on the other hand, I think is a huge, huge mess, and I don't think they're going to fix it. I wrote a Ruby DSL for it the last time I tried to use it in anger, only to find that Terraform didn't honor its own promises around the config language it insisted on instead of YAML or a full-featured DSL of its own. My current client uses it, and every point release adds new and exciting bugs and regressions in stuff that should be caught by the most trivial QA. For AWS, I strongly recommend my friend Sean's Cfer[1] as a better solution; CloudFormation's kind of gross, but Cfer helps.

[1] - https://github.com/seanedwards/cfer


Credentials have to be managed separately from Docker anyway.

> There's the issue of deploying changes fast without leaving files in an inconsistent state (you don't want half of some file to run). How about installing the required dependencies?

rpm / dpkg also install dependencies, are quite fast and well tested. They have the advantage of working in a standard environment which most sysadmins know but the disadvantage that you need to configure your apps to follow something like LSB (e.g. install to standard extension locations rather than overwriting system files, etc.).

The one issue everything has is handling replacement of a running service and that's not something which Docker itself solves – either way you need some higher level orchestration system, request routers, etc. Some of those systems assume Docker but that's not really the value for this issue.


> the disadvantage that you need to configure your apps to follow something like LSB (e.g. install to standard extension locations rather than overwriting system files, etc.).

Common misconception. You only need to do this if you're going to try to push the packages upstream. If they're for your own consumption, you can do what you like. Slap a bunch of files in /opt, and be done with it - let apt manage versions for you and be happy.

As with many things, this is one area where you've just got to know what to ignore. It's simpler than it looks.


I think we're actually talking about the same thing – I said “like LSB” simply to denote following some sort of consistent pattern, which will vary depending on how widely things are shared.

/opt is defined in FHS for local system administrator use, so installing your company's packages there is actually the recommended way to avoid conflict with any LSB-compliant distribution as long as you use /opt/<appname> instead of installing directly into the top-level /opt:

http://www.pathname.com/fhs/pub/fhs-2.3.html#OPTADDONAPPLICA...


rpm and fast don't really go together. dpkg is much better. dnf will be interesting.

> Yeah, but then there's still the issue of secrets

How would Docker help with this? Genuinely curious.

I store them in bash scripts outside the repo that populate the relevant data into environment variables and execute the code. The code then references the environment variables.
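To make that concrete, here's a minimal sketch of the pattern (the variable names and the wrapper command are hypothetical): a wrapper script kept outside the repo exports the credentials, and the application only ever reads them from the environment.

    # app_config.py -- minimal sketch; variable names are hypothetical.
    # A wrapper script outside the repo would do something like:
    #   export PAYPAL_CLIENT_ID=... PAYPAL_SECRET=... && exec python app.py
    import os

    PAYPAL_CLIENT_ID = os.environ["PAYPAL_CLIENT_ID"]    # fail loudly if missing
    PAYPAL_SECRET = os.environ["PAYPAL_SECRET"]
    SMTP_PASSWORD = os.environ.get("SMTP_PASSWORD", "")  # optional, with a default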

> How about installing the required dependencies?

There are two kinds: OS-level and platform-level.

On the OS level, you can have a simple bash script. If you need something more complex, there are things like Chef/Puppet/etc.

On the platform level, you have NPM/Composer/PIP/etc which you can trigger with a simple cron script or with a git hook.

> There's the issue of deploying changes fast without leaving files in an inconsistent state

So the argument here is that you're replacing one file in one go vs possibly thousands? That in the latter scenario the user might hit code while it's in the process of being updated?

Ok. With docker, you would shut it down to update. You would have to.

Same goes for the traditional deployment? Shut it down, update, start it back up?

You can, of course, automate all of this with web hooks on Github/Bitbucket, for both docker and the traditional deployment.

The traditional deployment should also be faster, since it's an incremental compressed update being done through git.


Kubernetes secrets are a really great solution to this problem. [1] They are stored at the cluster level and injected into the pod (a group of containers deployed together) via a file system mount. This means that each pod only has access to its own secrets, which is enforced by the file system namespace. If an entire machine is compromised, only the secrets of pods currently scheduled onto that machine can be stolen. That's the high-level view, but it's worth taking a look at the design doc.

Edit: forgot to mention, the file system mount means that they don't need to be in env vars, which are fairly easy to dump if you have access to the box, or which end up shipped around in plain text inside containers.

1. https://github.com/GoogleCloudPlatform/kubernetes/blob/maste...
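For a sense of what consuming such a secret looks like from inside the pod, here's a minimal sketch; the mount path and key name are hypothetical and depend on how the pod spec declares the secret volume.

    # Read a Kubernetes secret that was mounted into the pod as files.
    # /etc/secrets and the "api-key" key are assumptions for illustration.
    from pathlib import Path

    SECRETS_DIR = Path("/etc/secrets")                        # secret volume mount point
    api_key = (SECRETS_DIR / "api-key").read_text().strip()   # one file per secret key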


I don't know if Docker helps with this, I don't use Docker. But some kind of solution has to exist.

The way AWS does updates is to first download the new code into a separate folder and then switch the link to point to the new folder instead.

But the AWS approach feels unsatisfactory because it downloads the entire codebase instead of doing an incremental git update. These are all issues that could be fixed, and someone has to fix them. I have no idea if Docker helps with any of them, but the opportunity is still there.
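For what it's worth, the folder-plus-link trick is easy to reproduce yourself; here's a minimal sketch with hypothetical paths. The final rename is atomic on POSIX, so a request never sees a half-updated tree.

    # deploy.py -- minimal sketch of a symlink-swap deploy; paths are hypothetical.
    import os

    release = "/srv/app/releases/20150720T1030"   # freshly downloaded/unpacked code
    current = "/srv/app/current"                  # the path the server actually serves
    tmp = current + ".tmp"

    if os.path.lexists(tmp):    # clean up any leftover link from a failed deploy
        os.remove(tmp)
    os.symlink(release, tmp)    # build the new link next to the old one
    os.replace(tmp, current)    # atomically point "current" at the new release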


Ansible, systemd and Go are stealing my heart at the moment. Basically, pick the tech that doesn't cause the problems to start with.

I still reckon that the main reason VMware ESX is as successful as it is comes down to the lack of isolation and the sheer deployment hell that Windows has been for years. The same can be said for Python or Ruby on a Linux machine, for example. Docker removes some of that pain like ESX does.


That's a bit of my point... If you're building relatively small independent services with Docker, you can deploy service A with Node 0.10 as its tested environment and service B with io.js 2.4 on the same server, without them conflicting... when you need to update/enhance/upgrade service A, you can then update the runtime.

The same can be said for ruby, python and any number of other language environments where you have multiple services that were written at different times with differing base targets. I've seen plenty of instances where updating a host server to a new runtime breaks some service that also runs on a given server.

With Docker, you can run them all... granted, you can do the same with virtualization, but that has a lot more overhead. It's about maximum utilization with minimal overhead... For many systems, you only need 2-3 servers for redundancy, but a lot can run on a single server (or a very small cluster/set).

I have to agree on Ansible, systemd and Go... I haven't done much with Go, but the single executable is a really nice artifact that's very portable... and Ansible is just nice. I haven't had the chance to work with systemd, but it's at least interesting.


> The same can be said for ruby, python and any number of other language environments where you have multiple services that were written at different times with differing base targets. I've seen plenty of instances where updating a host server to a new runtime breaks some service that also runs on a given server.

This is a solved problem in Python and Ruby. In Python, use virtual environments. In Ruby, use RVM. You won't have the issue of one tenant breaking another.
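As a rough illustration of the Python side (a sketch only; the service names and paths are made up), each service gets its own interpreter environment and its own set of installed packages:

    # Create one isolated environment per service using the stdlib venv module.
    import subprocess
    import venv

    for app in ("service_a", "service_b"):        # hypothetical services
        env_dir = "/srv/%s/venv" % app
        venv.create(env_dir, with_pip=True)       # isolated site-packages + pip
        subprocess.check_call([env_dir + "/bin/pip", "install",
                               "-r", "/srv/%s/requirements.txt" % app])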


And with Node, you can use nvm... however, there are libraries and references at a broader scope than just Python, Ruby or Node... Say you need an updated version of the system's lib-foo.

A runtime environment for a given service/application can vary a lot, and can break under the most unusual of circumstances. An upgrade of a server for one application can break another. Then you're stuck trying to rollback, and then spend days fixing the other service. With docker (or virtualization) you can segregate them from each other.


You're correct, but I can see that there's certainly a place for having a single solution that works across all ecosystems.

Also, RVM in production? Sledgehammer to crack a nut :-)


On a local system, yes, but in production it's painful to work with. With RVM for isolation you would create gemsets for each app with a specific Ruby version. That's OK for 2-3 applications, but anything more than that would be a pain to work with. And then if you plan to put everything behind Passenger, it would just be too messy. Think of automating this? It would be a nightmare to maintain. Here, containerization does help.

Node has this too: nave, npm and n. But using these tools means that you are no longer using the system's package manager, and this can be a problem sometimes, e.g. you need to open your firewall to something other than the standard package manager.

I see Docker as a valid attempt to fix the limitations of existing, broken package systems (e.g. apt), at a price that I am not yet willing to pay.


Virtual environments don't work for the interpreter itself. Not only that, some packages need to build C extensions, and if they use different versions of the same library, things might break.

It partially is, until you need native dependencies.

Which version of RVM are we running again?

If you want to put files on a server and start a process, you are probably looking for something like Apache, not a "PaaS" necessarily.

If my dry read of the code is correct, that line is for the "bocker ps" routine to be able to print the command later.

The run command itself is executed next, inside of ip netns exec.


Ahhh, no... I was looking at line 57 and somehow missed the actual call to $2 on 59. Clear as a bell now.

Two points; firstly, SRV records offer other advantages over A records beyond simply port diversity.

Most notably: the ability to exist at the apex of a domain, resolving to both IPv4 and IPv6 addresses simultaneously, and weighted-round-robin and fallback support.

The overload of the address record to discover service endpoints is ultimately a murky throwback to when servers were named like pets, not disposable commodities.

Secondly, in this age of containerized deployment it is already common to have many HTTP-substrate services bound to a single address, disambiguated by port. Leveraging the SRV record in DNS means not having to invent yet another endpoint discovery mechanism just to know what port number to connect to.

The reference to "typing an extra 5 characters" invokes a world of manual, static configuration that many of us are happy to have replaced with services that find each other through discovery protocols.
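By way of illustration, here's a minimal sketch of SRV-based discovery of an HTTPS endpoint using the dnspython library (an assumption, as is the service name); the answer carries the port as well as the target host:

    # Resolve the SRV record and pick an endpoint to connect to.
    # Sorting by (priority, -weight) is a simplification of RFC 2782 selection.
    import dns.resolver

    answers = dns.resolver.query("_https._tcp.example.com", "SRV")
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        host = rr.target.to_text().rstrip(".")
        print("connect to %s:%d" % (host, rr.port))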

-----


You can skip the web tier entirely and have Pg return JSON documents via an HTTP endpoint to a rich MVC JS browser application that syncs to a local cache.

Thus re-inventing Lotus Notes.

-----


I like OpenResty for such endpoints; YMMV.

But I refuse. REFUSE. to re-invent Notes.

-----


I haven't used it yet but postgrest looks awesome for this.

https://github.com/begriffs/postgrest

-----


That looks like a good tool, but have you met nginx before? I sorta view OpenResty as a reason to sneak nginx into my stack. :)

-----


>Thus re-inventing Lotus Notes.

I would like to veto this with deadly weapons - up to and including nuclear.

-----


I can only agree, and further voice my strong disapproval at the continuing, damaging and absurd lack of DNS and IPv6 considerations, most notably the omission of any discussion of endpoint resolution.

Literally so: this protocol document does not specify how you determine which server to connect to. HTTP2 is, in its definition, only very loosely coupled to IP despite making significant optimisations for TCP. Thus in implementation we simply get the same old mistakes and undefined behaviours. Issues with floating apex records, hacks based on IPv4/6 race conditions, unnecessary address wastage and so forth will continue; all derived from the colossal architectural wart of overloading the DNS host (A/AAAA) record as a service endpoint discovery mechanism.

Once again, I say unto the peanut gallery: shoulda used SRV. The benefits are many and the downsides greatly overstated. I bemoan the missed opportunity.

-----


I can't agree enough about the missed opportunity to use SRV records. This would have been such a monumental step forward.

Edit: It makes me a bit giddy (which makes sense if you factor in my being a sysadmin) to think about what SRV records would've done for load-balancing, running servers on non-standard ports, IP address exhaustion, and server migrations. Anybody who doesn't appreciate proper service-location hasn't ever done serious sysadmin work and, IMO, has no business designing protocols.

-----


You ever tested them on a large scale?

I think agl did; the state of DNS resolution is, I'm afraid, really not pretty.

-----


Have you got a reference?

-----


Grandparent says HTTP2 is a monstrosity, because it "includes a whole slew of features that we really don't need."

You say you agree, because "the protocol does not specify how you determine what server to connect to."

In other words, HTTP2 sucks because it simultaneously includes too many features, and not enough features.

At least everyone can agree that they don't like it for some reason, even if the reasons themselves contradict each other.

-----


> At least everyone can agree that they don't like it for some reason, even if the reasons themselves contradict each other.

How does too many features and missing features contradict each other?

It's entirely possible for something to both have too many, unneeded features, while at the same time missing other important features.

-----


> In other words, HTTP2 sucks because it simultaneously includes too many features, and not enough features.

No, they are agreeing that it's a heap of extraneous features.

-----


What is a "floating apex record"? I googled it in quotations, and you appear to be the only person who has ever said that combination of words in Google's index.

-----


An apex record is one at the root of a DNS zone. Sometimes called "naked domains".

For example, in "https://github.com/" they are the records particularly for "github.com", rather than for subdomains that might exist such as "www.github.com" or "gist.github.com".

Apex records have a particular restriction: they cannot be aliases, because the apex includes DNS metadata that is not allowed to be aliased[3]. Read on for how this becomes a problem.

I've used the term "floating" as a visual metaphor, because what I'm about to describe lacks a universal standard name, because it is an ugly hack:

HTTP resolves endpoints using host records, so a URL of "https://github.com" means looking up A and AAAA records for "github.com". Yes, the protocol is arrogant enough[1] to assume that your host address for the whole domain is that of the web server. (This is why we ended up prepending "www" to domain names, as a service selector). In response to the query you get an IP address.

Unfortunately, IP addresses sometimes change without warning. The most common example today is the load balancer offered by Amazon Web Services. The solution to this is to use an alias record in your human-friendly domain, pointing at a hidden technical domain that the infrastructure provider keeps up-to-date (e.g. "my-elb-name-1-1160186271.ap-southeast-1.elb.amazonaws.com")

This is fine for "www.example.com" but not the naked "example.com", because aliases are prohibited at the apex.

As a result, DNS providers such as Route 53 have ended up with a hack: a spoofed record at the apex, one that tracks an external resource and synthesizes a fake A/AAAA response. Now you have a naked domain that tracks, or rather hopes to track, the correct endpoint. But it changes with the wind. Hence my description of it as "floating".

There is no consistent name for this kludge. AWS calls it an alias and, for reliability concerns, restricts it to their own infrastructure only; DME call it an "ANAME" record [2]. The model can even be readily implemented as a shell script run out of cron on your nameserver. It is fragile, it is often unreliable, it is not at all standardised, and it doesn't scale beyond one service.

One better solution would be to require use of SRV records, which allow one to declare instead, for example, an "https" service for "example.com". Alongside, let's say, the xmpp service, sip service, or any other service you care to announce. SRV records can exist at the apex. They can also bundle the A and AAAA (IPv6) addresses for the resulting endpoints in the answer, and select alternative port numbers without bothering the user about it.

Not quite a universal panacea: there is a minor hazard of zone cuts that could increase the number of client lookups, but that's an edge case, not one you can easily blunder into and also easy to fix.

[1] HTTP/1.0 and earlier are forgiven, because they hail from a time when you just had a web server in a rack and called it "www". But HTTP/2 is supposed to respond to modern architectures.

[2] http://www.dnsmadeeasy.com/services/aname-records/

[3] none of you comedians are allowed to mention DNAME records as the exotic counterexample.

-----


Thanks for your deep explanation! That's very refreshing!

Do you have any clue why SRV is not more widely used?

-----


They are moderately popular outside of HTTP for new protocols (e.g. Minecraft can use them).

I suspect they aren't more popular because it requires some DNS knowledge before you think of them. It's a pity because they are very useful.

Aside: Cloudflare's free DNS hosting service supports them, with low TTL.

-----


floating -> pointing to an old resource

apex record -> alias for 'A' record (DNS parlance)

So a 'floating apex record' is an A-record pointing to an old IP.

-----


I think "apex record" means the root domain name in a zone, and is unrelated to A records (except for the fact that you would usually make an A record for your apex so web browsers can reach your site even without a "www." subdomain/prefix)

-----


That makes sense. Thanks!

-----


Have you ever considered just saying 'A record' instead of 'Apex record' so the majority of people know what you're talking about? Not all of us are DNS wonks.

-----


I was just trying to figure out what the GGGP meant, that wasn't my choice of words.

I'm not a DNS wonk either.

-----


Sorry about that. I see now that I responded to the wrong person.

-----


Variant on #3: `git diff master origin/master` just before you `git push`.

-----


Or, if your current branch has an upstream, `git diff @{u}`.

-----


The vernacular continues to hoover up brand names.

-----


He's just trying to put a band-aid on the problem.

-----


More like brand-aid?

-----


When a door is marked "Private", then the room beyond is generally a shared space for all those authorized to access.

-----


Try it on Wolfram Alpha.

Actually, they all seem to me pretty limited compared to the answers you get from Wolfram Alpha.

Also cf. the responses to one of their test questions, "how much is a quarter cup of butter?". Google makes fun of the inquiry. Wolfram Alpha gives you a thorough nutritional profile, and links to variations based on international cup sizing and different types of butter.

-----


Wolfram Alpha is amazing at discerning the intention of the question.

In the article, the question "How old is the Lincoln Tunnel" struck me as incorrectly formatted for the parser (I know, that's the point), so I asked Siri, "When was the Lincoln Tunnel built." The Wikipedia article on the Lincoln Tunnel was returned. Wolfram Alpha was listed under other sources, so I chose that. The response? "1937"

-----


I've noticed lately that many times things I know Alpha will slam-dunk don't get routed to it by Siri. I'm not aware of the details of the deal we have with them but from my observations of Siri it looks like Apple might be looking for certain keywords (such as how, what, why, etc) before it tries routing anything to Alpha. I hope they can relax that in future.

Luckily if you say "Wolfram XXX" instead of just "XXX" Siri will route your question straight to Alpha no-questions-asked.

-----


Google isn't making fun of the inquiry, it simply brings up the relevant snippet from the provided web page.

The result Google gives you is also wrong (since it's also just a snippet).

-----


I'm not sure the query is really easily understandable. How much it costs? I assume it's to figure out which marking to cut on the stick wrapper?

-----


I just learned that Canadian cups != American cups (227g vs 218g). Who would have thought...

-----


Wait a second. Cups are a measure of volume and not weight, though. Grams is not the right unit to compare here.

-----


There are approximate conversions for recipes, since many American recipes use volume measures and expect you to have measuring cups, while European recipes expect you to have a kitchen scale. But yes, there isn't any single conversion, since density varies: there's one cups/grams ratio for granulated sugar, one for powdered sugar, one for sifted flour, one for water, etc.

-----


Which, as a geek, I find superconfusing and generally insane. I wish cooking was treated as chemistry (which it de facto is) and at least used precise units and proper measuring tools.

-----


Outside of baking, you really don't need to be that precise. Bakers typically weigh their ingredients to get the correct measurements.

-----


Siri is based on Wolfram Alpha

-----
