
For those looking into this topic only now: we interviewed 6 people who have been doing this for some time, collected and published their answers, and did a hangout with them here: https://highops.com/insights/immutable-infrastructure-6-ques... (pure content, no sales pitch anywhere)

can people who should know better please stop calling 'open source' anything and everything? this is not open source

-----


> On a related tangent, apparently modern karate originated in Okinawa

it's well documented actually: before 1900 or so, no one in mainland Japan knew anything about what would later be called karate.

Gichin Funakoshi (father of shotokan karate, even though he never called it that himself) moved to mainland Japan in 1922 to popularize it. Before then only a few people in the Japanese navy and the prince/emperor-to-be had seen it while visiting Okinawa.

Yes, there are styles other than Funakoshi's, but he is indisputably the one who broke into mainland Japan first

-----


> show improvement within 90 days

any (senior) executive who changes things in the first 90 days has got it completely wrong. And if whoever hired them expects sweeping changes in 90 days, they have got it even worse.

Anything that a new hire can do in their first 90 days can be done, and better, by whoever that new executive will report to.

E.g.: if you hire me to fire the people you want fired, then there is no reason to wait for me; you should do it yourself. The moment you hire me to get help, you have to give me the time to decide for myself who to fire and who not to, and it will take more than a few weeks to make sensible decisions.

And yes, that's the exact conversation I had when I found myself in that situation, and it's not valid just for SMEs; see 'Who Says Elephants Can't Dance?' for basically the same idea in a place as big as IBM (when he talks about everyone expecting him to come up with a brilliant strategy fast)

-----


I've been a VPE coming in from the outside. One thing to note right away is that the OP is talking about growing startups. That means that there are usually plenty of things wrong in the startup that everyone agrees are wrong but haven't made a decision on how to fix yet. So, absolutely, the new executive should be facilitating solutions well before 90 days.

As I recall, in my first 90 days:

1. On day 1, several people hinted to me about work from an underperforming contractor. On day 2, I met with this contractor and looked at his code. On day 3, I put him on a performance improvement plan. On day 8, I decided that he was the wrong fit for the role and ended his contract. Great guy, but he was just hired for one thing and then asked to do something else. And there wasn't anyone else who was really qualified to track down whether the problem was direction or fit.

2. There was a general consensus that we weren't shipping code fast enough. On week 3 we implemented Scrum (it was 2005). The engineers kind of hated it. But Scrum achieved what was needed. The rest of the company had enough visibility into development speed to realize that the problem with the company was bad product/market fit, not bad developers. We pivoted a bunch of times. Twitter was one of those pivots.

3. We were running our own servers in a cage at 360 Main. Those servers had been set up originally by one of the founders. There was definitely a desktop computer in there serving production code. The servers were administered by various engineers on the team (administered poorly, because it's not what they wanted to be doing). So I started recruiting for a sysadmin; I had definitely hired one within the first 60 days.

There were plenty of things in there that I tried to fix as well and failed at. I can't imagine an exec coming into a startup and not trying to fix things. 90 days is an eternity. Normally there's so much wrong that the exec actually has a lot of latitude from day 1.

-----


I respectfully disagree. Any significant org is going to have some process that sounded good at the time, but is now ridiculous.

If you're not finding and correcting those things in 90 days as an executive, you're warming a chair.

-----


> Any significant org is going to have some process that sounded good at the time, but is now ridiculous.

That most people will be inexplicably attached to emotionally.

Try this experiment if you're in a Unix shop. Get people to sign up to maintain the unpackaged programs in /usr/local, and then delete the ones that nobody signs up for. Bring popcorn and an asbestos suit. Make sure you take a backup before you do it.

Congratulations. You now understand about removing stupid processes from an organization.
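
If you want to see the scope of the problem before touching anything, here is a rough, read-only Python sketch of the sign-up step (nothing gets deleted; the maintainers.txt sign-up sheet is hypothetical):

    #!/usr/bin/env python3
    # List programs in /usr/local/bin that nobody signed up to maintain.
    # 'maintainers.txt' is a hypothetical sign-up sheet: one line per entry,
    # first column = program name, rest = whoever claimed it.
    from pathlib import Path

    LOCAL_BIN = Path("/usr/local/bin")
    SIGNUP_FILE = Path("maintainers.txt")

    claimed = set()
    if SIGNUP_FILE.exists():
        for line in SIGNUP_FILE.read_text().splitlines():
            parts = line.split()
            if parts:
                claimed.add(parts[0])

    unclaimed = sorted(p.name for p in LOCAL_BIN.iterdir() if p.name not in claimed)
    print("Nobody signed up for:")
    for name in unclaimed:
        print("  " + name)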

-----


sure, but there is a difference between finding them, knowing how to change them and executing that change. You need to know way more than which process is not working before knowing how to change it, and you will need people on board to achieve a working, lasting change. If you go in shooting from the hip you are very unlikely to achieve what you want, even if the problem is clear to everyone. If it were that straightforward they would do it without needing someone new

-----


"Any significant org is going to have have some process that sounded good at the time, but is now ridiculous."

And yet, it's still there. Understanding why that's the case is a pretty important part of ensuring that it actually changes. In other words, correctly identifying problems is the easy half of the problem. Dealing with the problems behind the problems is where things get tricky.

Fools rush in where angels fear to tread, etc.

-----


It's often still there, because there are politics, and people cover their ass/don't want to take the risk of breaking things.

-----


Re-evaluating process isn't necessarily a priority. One place I worked at had a paper handling process involving a specific method of stapling and folding a form.

Why? Nobody knew, but it was something that was audited by the QC people. The reason turned out to be a special accommodation for a one-armed man (literally) who was a clerk at some point in the past.

-----


You're right. My heuristic is not without exceptions.

-----


Here's the video from his talk at EuroClojure 2012

https://vimeo.com/45130708

-----


little bit of snark: wasn't Docker "fundamentally flawed"? If that was really the premise for launching Rocket, why bother with this humongous PR?

Don't get me wrong, I totally see how this is good for Rocket, just be honest and admit the "fundamentally flawed" argument was mainly smoke and mirrors to justify a defensive-offensive move by a VC-backed, for-profit company launched against another VC-backed, for-profit company.

Again, nothing wrong with that, it's business and in fact a good move, but in my eyes CoreOS lost quite some trust when they tried to portray Rocket as a selfless act of kindness towards the community that needed to be saved.

-----


All of us want containers to be successful; they solve a ton of problems. But part of that success is getting the format and the security correct. And we want to have that technical discussion and settle on those best practices for all implementations.

There are things in the App Container spec that we would like to see in Docker; this is why we put in the work to make a spec, write the code to make it work and start a technical discussion. This has been the goal since the beginning. The problems that exist in the current Docker Engine that we would like to address are technical and real:

1) We believe in having a decentralized and user controlled signing mechanism. In the appc specification and rocket we use the DNS federated namespace. See the `rkt trust` subcommand and the signing section of the ACI spec.

2) We believe that image IDs should be backed by a cryptographic identity. It should be possible for a user to say: "run container `sha512-abed`" and have the result be identical on every machine because it is backed by a cryptographic hash.

Another thing we wanted to do in rocket was enable running the download/import steps without being root. For example, today you can download and import an image from disk in the native ACI format with rkt. And in the next release `rkt fetch` will be runnable as a user in the same unix group as `/var/lib/rkt/`.
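
To make point 2 concrete, here is a minimal sketch of the content-addressing idea (not rkt's actual implementation; the file name and ID below are placeholders): the image ID is simply the hash of the image bytes, so any machine can check that it is running exactly what was asked for.

    #!/usr/bin/env python3
    # Toy illustration of content-addressed images: the ID *is* the sha512
    # of the image file, so "run sha512-abed..." can be verified anywhere.
    # Not rkt's real code; file name and expected ID are placeholders.
    import hashlib
    import sys

    def image_id(path):
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return "sha512-" + h.hexdigest()

    if __name__ == "__main__":
        path, expected = sys.argv[1], sys.argv[2]   # e.g. app.aci sha512-abed
        actual = image_id(path)
        if not actual.startswith(expected):         # short ID prefixes are fine
            sys.exit("image does not match the requested ID, refusing to run")
        print("verified:", actual)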

-----


I agree with everything you say from a technical point of view, it just doesn't change the political/business/marketing consideration IMHO

-----


> All of us want containers to be successful, they solve a ton of problems.

Not sure I want containers to be successful (unless of course the main business is building and marketing containers). I want my problems solved, but whether they are solved with containers, mocks, jails, VMs, and so on doesn't matter as much.

-----


"they solve a ton of problems"

Name two?

-----


Deploying to a cleanly defined fresh state without paying any performance penalty. Documenting your dependencies by writing the deployment script (=Dockerfile) and not having to reinvent the wheel every time (image inheritance). Sandboxing Linux applications without paying any performance penalty. Creating a PaaS where your services internally always see the same standard port, while externally they're linked together through Docker, thus separating the routing concerns from your application logic.

-----


These are all great, but I get most of the same benefits from VMs, and many more:

- fresh state / no performance penalty (AMI + autoscaling)
- document dependencies, don't reinvent (Packer file)
- sandboxing (same)
- always use the same standard port (easier with VMs as a 1:1 map)

I know most people think that containers/docker/whatever new stack does these things better, and they may be right. The benefits, however, don't outweigh the costs of a weaker toolset and a less mature stack.

For my use cases, the biggest problem is that containers don't solve the "where does this run" question. Whenever I ask this, people loudly exclaim "anywhere!" which is the same as "I don't know" to me.

AWS AMIs run in 11 regions x N AZs around the world. This solves a much bigger technical problem for me than "it's lighter weight and easier to do incremental releases on top of" which seem to be the only things in favor of containers.

Many people, including Amazon, say "run containers on VMs!" This seems unnecessarily complex for little additional gain.

I'm really curious if the containerization folks are using Packer and if not why not.

-----


I run containers in Amazon. Not using their service, because their service is silly, but on Mesos.

I am not locked into a 1:1 tenancy between applications and instances (though I could have it if I wanted). Multitenancy is trivial. I have the ability to spin up new instances of my applications to combat spike loads or instance failures in single-digit seconds rather than in minutes. My developers can run every container within a single boot2docker VM instead of incurring the overhead of running six virtual machines. It's easier to integration-test because my test harness doesn't have to fight with Amazon, but can rather use a portable, cross-service system in Mesos. In addition, I don't have to autoscale with the crude primitive of an instance in production. Multitenancy means that I can scale individual applications within the cluster to its headroom, and only when the entire cluster nears capacity must I autoscale. I can better leverage economies of scale, while bringing more vCPU power to bear for the applications that need it (running two dozen applications on a c3.8xlarge is very unlikely, at any given time, to give an individual application less computational performance than running each application on its own m3.medium).

I could do this without containers and with only Mesos. It would be worse, but I could do it. I could not do this at all with baked AMIs and instances without spending more money, doing more work, and being frustrated by my environment. I know this because I've built the same system you describe (I preferred vagrant-aws because when something broke it was easier to debug, but we moved to Packer before I left), and I would never go back to it. It was more fragile and harder to componentize than a containerized architecture with a smart resource allocation layer. The running context of a container should be "anywhere", and it should be "I don't know", and you caring about that is a defect in your mental model.

-----


Thanks for the reply.

-----


The one thing I don't believe is that you have no performance penalty. If you compare a bare metal machine running VMs with a bare metal machine running containers, there will always be a performance penalty for the VMs: they are more heavyweight by definition, since each VM runs an additional kernel. Even assuming CPU doesn't take a hit at all, they still incur memory and disk-space penalties. As an effect, from an IaaS POV, a container can be made available cheaper than a VM, and you can think much less about using one from a performance point of view: does it make sense logically or from a security standpoint? -> use it.

-----


Why do you mention Linux specifically? Containers shouldn't be specific to Linux.

-----


Well.. in Docker's case they are? It's based on lxc, a Linux kernel technology. Of course BSD had it before that, although with a different name.

-----


Docker hasn't been based on LXC since 0.9. By now it uses its own libcontainer.

-----


Which to my knowledge is still Linux-only, so while it's good to be technically correct, it doesn't change my point.

-----


That much is correct, it's still tied to namespaces, cgroups and the standard mechanisms for implementing jailing on Linux. The point is it stands at a layer of abstraction designed for easier portability than outright depending on LXC.

-----


Ya, the messaging is starting to get really confusing. If the container formats really are that similar, then there is no point in two parallel implementations: augment either Docker containers or app containers. Doing both at the same time is just silly since, from the looks of it, they are going to converge on the same format anyway.

-----


This initial discussion is just about the container image format and we would really like to see convergence on that front.

As container runtimes Rocket and Docker have different design goals though. As one example, Rocket is designed to be a standalone tool that can run without a daemon and works with existing init systems like upstart, systemd and sysv. We needed this standalone property because on CoreOS containers are our package manager and the daemon can get in the way of doing that correctly.

It is ok that Docker and Rocket have different design goals and both can exist for different purposes. But, I think we can share and converge on an image format that can be used by multiple implementations that includes cryptographically verifiable images, simple hosting on object stores and the use of a federated DNS based namespace for container names.

-----


Brandon, let me respectfully ask you 3 questions:

1) As you very well know, Docker is already working on cryptographic signature, federated DNS based namespace and simple hosting on object stores. If you "would like to see convergence", why didn't you join the effort to implement this along with the rest of the Docker community? The design discussion has been going on for a long time, the oldest trace I can find is at https://github.com/docker/docker/issues/2700 , and the first tech previews started appearing in 1.3. Yet I can't find a single trace of your participation, even to say that you disagree. If you would like to see convergence, why is that?

2) You decided to launch a competing format and implementation. That is your prerogative. But if you "would like to see convergence", why did you never inform me, or any other Docker maintainer, that you were working on this? It seems to me that, if your goal is convergence, it would be worth at least bringing it up and test the waters, ask us how we felt about joining the effort. But I learned about your project in the news, like everybody else - in spite of having spent the day with you, in person, literally the day before.

3) Specifically on the topic of your pull request (which we also received without any prior warning, conveniently on the same day as your blog post). So now we have 2 incompatible formats and implementations, which do essentially the same thing. Once we finish our work on cryptographic signature, federated dns based naming etc, they will be functionally impossible to distinguish. How will it benefit Docker users to have to memorize a new command-line option, to choose between 2 incompatible formats which do exactly the same thing? I understand that this creates a narrative which benefits your company, CoreOS. But can you point to a concrete situation where a user's life will be made better by this? I can't. I think it's 100% vendor posturing. Maybe it's bad PR for me to say this. But it's the truth. Give me a concrete user story and I will reconsider.

-----


> How will it benefit Docker users to have to memorize a new command-line option

User here. I couldn't care less about a new command-line option, but it would be worth a lot if I could run any image on any platform.

If you claim this is "all about the user" then talk more about what the user gains or loses.

Is the biggest downside really just another command-line option? Docker already has a metric fuckton[1] of command-line options, what's one more?

Impugning the motives of your competitor is at best an irrelevant distraction, and at worst an indictment of your own motives.

[1] https://docs.docker.com/reference/commandline/cli/

-----


> but it would be worth a lot if I could run any image on any platform.

That technology exists, it is called a VM. Any platform that supports x86, for example, will run any x86-compatible image. You can use wrappers and scripts like Vagrant on top of it.

Or if you want all hosting managed as a pool of resources (storage, CPU) try something like oVirt.

http://www.ovirt.org/About_oVirt

-----


And VMs are far more heavyweight and don't address any of the reasons why people prefer containers to VMs for some types of workloads

-----


> and doesn't address any of the reasons why people prefer containers to VMs for some types of workloads

I was responding to one reason -- which is "running any image on any platform".

> why people prefer containers to VMs for some types of workloads

Sure, but there are no magic unicorns underneath; knowing what you get from a technology requires some understanding of how it works. Saying things like "I want it very lightweight but I also want it to run any image" is asking for a trade-off, or for a complicated multi-host capability-based platform.

-----


I've got a use case. ACI support would let me use containers without being coupled to Docker's registry. I really don't want to run that software, and I really, really don't want to rely on Docker Hub. ACI's use of existing standards for their "registry" implementation is a major draw for me.

-----


Actually, I think you don't have to rely on Docker's registry:

- you can simply use Dockerfiles and build your own images,

- apparently you can host your own registry [1]

- you can even use a service run by CoreOS, ie Quay, to host your Docker images [2]

I'm not sure I understand what you mean by "I really don't want to run that software". Does it mean you don't want to use Docker?

[1] https://blog.docker.com/2013/07/how-to-use-your-own-registry...

[2] https://quay.io/

-----


To clarify, I don't want to run my own registry and I don't want to rely on any third party for image hosting. I just want to pull tarballs from a dumb file server. No need to run a registry for that, and no one company has a privileged position in the namespace.

It's maddening, because I love Docker-the-concept but not Docker-the-implementation nor Docker-the-ecosystem. I honestly do understand how many would find the UX of "Docker, Inc. at the center of things" to be a refreshing convenience, but to me that notion is frustrating and repellent, as much so as if Git remotes defaulted to GitHub.

-----


> I just want to pull tarballs from a dumb file server.

Is there something I'm missing that you couldn't just use wget? If you have the URIs, I can't imagine how pulling down an image by name would be more than a quarter-page Python script, even if you include the untarring and such.
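
For what it's worth, roughly what I had in mind (a sketch only, with a made-up base URL and naming scheme, and no layer handling or hash verification):

    #!/usr/bin/env python3
    # Pull an image tarball by name from a dumb file server and unpack it.
    # The base URL, naming scheme and local store path are made up.
    import sys
    import tarfile
    import urllib.request
    from pathlib import Path

    BASE_URL = "https://images.example.com"   # any static file server
    STORE = Path("/var/lib/myimages")          # local unpack location

    def fetch(name, tag="latest"):
        url = "%s/%s-%s.tar.gz" % (BASE_URL, name, tag)
        dest = STORE / name / tag
        dest.mkdir(parents=True, exist_ok=True)
        tarball, _ = urllib.request.urlretrieve(url)
        with tarfile.open(tarball) as t:
            t.extractall(dest)                 # ideally verify a checksum first
        return dest

    if __name__ == "__main__":
        print("unpacked to", fetch(*sys.argv[1:]))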

-----


Yeah, that's about what I've been doing, but AFAICT I lose the benefits of layering when I refuse to speak the registry protocol. Docker's export command dumps the entire image tree, so I'm stuck transferring GB-sized payloads to deploy the tiniest change to my app. appc manages to do layers without a coordinating registry. (Kind of funny that CoreOS bought Quay, on that note.)

-----


Yes, you can run your own registry, but doing so without ever pulling anything from Docker Hub means rebuilding all images yourself, tagging all of them for your own registry and pushing to that, or resorting to DNS / firewall hacks to redirect requests for index.docker.io (or forking Docker).

They've made it much harder than necessary.

-----


There is nothing "respectful" about anyone's behavior on either side of this trainwreck, and calling each other out on a web forum isn't going to help anything.

-----


shykes, I will follow up to all of this on the proposal on GitHub.

-----


In that case I don't see how the image format ties into any of what you just said. Seems to me the image format is completely irrelevant. Docker's format could be augmented to include all the security features you want and rkt could just use docker containers. That's where the confusion is. It is clear that the image format is orthogonal to all the other issues you mentioned.

By the way, I don't have a dog in this race and am not rooting for either side. From a purely technical and resource-use perspective, the fragmentation is now starting to feel like something that is mostly driven by public relations and marketing. As someone who tries to use the best tool for the job, I now have no compelling reason to choose either format and runtime, which means I'm just going to wait it out, and both sides are going to lose contributions from independent open source developers because their effort is going to be wasted.

-----


It's not just the image format, it's about getting the DNS-based federation and content-addressable images, which effectively take away "index.docker.io"'s special status.

And that's where the problem is. I can very much understand why Docker sees holding onto that as a great advantage to them, but it's not an advantage to me as a user.
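
To make the federation point concrete, here is a toy sketch (explicitly not the real appc or Docker discovery protocol): when the image name itself carries a DNS name, any domain can host images and no single index gets a privileged position.

    # Toy illustration of a DNS-federated image namespace; NOT the actual
    # appc/Docker discovery mechanism. The domain embedded in the name
    # decides where to fetch from, so no central index is special.
    def image_url(name, default_host="index.example.com"):
        first, _, rest = name.partition("/")
        if "." in first and rest:      # first segment looks like a DNS name
            host, path = first, rest
        else:                          # bare names fall back to your own default
            host, path = default_host, name
        return "https://%s/%s.aci" % (host, path)

    print(image_url("quay.io/coreos/etcd"))  # https://quay.io/coreos/etcd.aci
    print(image_url("myapp"))                # https://index.example.com/myapp.aci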

-----


Basically App Container is about throwing down the gauntlet for Docker, because the changes they are asking for are/were unlikely to be accepted without the pressure of a competing project backing them up.

The federated nature of image identity that CoreOS is pushing for is a direct challenge to the special status that Docker has given index.docker.io, a status they have strongly resisted attempts to change.

I don't care much if Rocket or Docker "wins", but I really hope the App Container federated approach does.

-----


Right, in which case this should be the messaging instead of "Look guys docker can run ACI images". Why waste effort on interoperability if the end game is federated image identities? Pour all engineering resources into making that happen instead of silly patches for interop since that can always happen later.

-----


fixed, thanks

-----


Phew -- thank you! (And it's "Keith Wesolowski", for whatever it's worth.) Personally, I would title this "A young software engineer retires", but I'm 0-for-1 on HN titles today.[1]

[1] https://news.ycombinator.com/item?id=8816101

-----


I, for one, like your proposed title. It captures a salient fact about the author which makes a big difference in how one views the post.

-----


and fixed again, looks like I'm unable to write in an input box today...

regarding the title: what you said :-)

-----


pretty clear in the words of Docker Inc's CEO here: https://gigaom.com/2014/12/20/on-docker-coreos-open-source-a...

“The closest analogy I guess I can give you is, for people who think of Docker and containers as a new form of virtualization, so [with] open source we gave away ESX and what we are selling is something akin to vCenter or vSphere.”

-----


Which, if you put the pieces together, clarifies that the purpose of new Docker features is to lasso in fragmentation and make sure you're going to buy their vCenter/vSphere rather than any alternative.

This also clarifies why things like Rocket are the biggest existential threat facing Docker Inc., and why the mud is being slung.

-----


well, those are two different things, aren't they? the direction is pretty clear.

On everything else I'm on the fence but so far the 'batteries included, but swappable' motto has been respected: everything new they have done/announced lately is not in Docker core but rather an external project (think: Swarm, Machine, Compose) that uses the public APIs

-----


I guess I'm unlucky: have tried them multiple times over the years and always had issues, in particular connectivity has always been pretty bad :-(

-----


Which data center?

-----


GRA-1 for sure; we then tried at least one other data center in France (I need to dig into emails to find out which) and it was much the same

-----


I've only tried the Canadian one and it's been rock solid for me.

-----


My standard reply to these posts:

I highly recommend just getting the book [1]; it's written very well and in layman's terms, but here's an extract taken from a review [2] of the same:

"Stout traces the birth of this “fable” to the “oversized effects of a single outdated and widely misunderstood judicial opinion.” Dodge v. Ford Motor Company was a 1919 decision of the Michigan Supreme Court. The opinion’s status as a meaningful legal precedent on the issue of corporate purpose is tenuous at best. Yet, its facts “are familiar to virtually every student who has taken a course in corporate law.” As Stout has observed in the past, “[t]he case is old, it hails from a state court that plays only a marginal role in the corporate law arena, and it involves a conflict between controlling and minority shareholders” more than an issue of corporate purpose generally. The chapter explains quite well that any idea that corporate law, as a positive matter, affirmatively requires companies to maximize shareholder wealth turns out to be spurious. In fact, none of the three sources of corporate law (internal corporate law, state statutes and judicial opinions) expressly require shareholder primacy as most typically describe it. To the contrary, through the routine application of the business judgment rule, courts regularly provide prophylactic protection for the informed and non-conflicted decisions of corporate boards"

[1] "The Shareholder Value Myth: How Putting Shareholders First Harms Investors, Corporations, and the Public" by Lynn Stout http://www.amazon.co.uk/The-Shareholder-Value-Myth-Sharehold... [2] http://arizonastatelawjournal.org/book-review-the-shareholde...

-----


I also recommend Professor Stout's book, and especially to fellow specialists and governance wonks. Not because I agree with her main thrust, her characterization of the orthodox view, or her conclusions about it, but because resisting the book in good conscience requires summoning foundational primary sources that practitioners don't have to handle very often. If you lead right, it's good to fight a southpaw from time to time.

When recommending it to those who aren't corporate attorneys or otherwise involved in the subject matter, I include a caveat: If this is the only book you read on corporate governance, be aware that it's a controversial and contrarian book packaged for non-lawyers. If its subject were political, I could find and recommend a book arguing the opposite view in a similar style. I'm not aware of any legal rebuttal that isn't presented in a more traditionally legal, less approachable form. It's been reviewed in legal journals, but even most lawyers consider those long-winded and dense.

-----
