It really is striking how products like Docker, even while delivering incredible value, continuously fail to message themselves in an intelligible way. If you go to https://www.docker.com, you see:
"Build, Ship and Run. Any App, Anywhere."
Jesus Christ. I get that you're The Future, but make the value prop for me here, at least. Why should I use Docker? What parts of my stack does it replace? When does the cost-benefit make sense? What new things can I do that I couldn't do before?
Thanks for the frank feedback. I agree that we have a lot of "wiggle room", as you so diplomatically put it. Out of curiosity how do you feel about the current README on Docker's repo? https://github.com/docker/docker/blob/master/README.md
One problem we've encountered is that the audience for Docker is incredibly broad - much broader than you might imagine when reading Hacker News, for example. It is extremely common for CIOs, IT directors, and various business managers to just pick up the phone and ask us (or our partners) what Docker is all about. So it's difficult to tell a story which satisfies all audiences.
But I don't think that excuses everything. We are definitely better at building our product than at explaining it.
One interesting side-effect is that, if you've been exposed to Docker-related marketing (in a broad sense), most of it probably didn't originate from us, the creators of Docker. This is sometimes problematic because Docker is so polarizing: depending on who tells you about it, you will get a very different, often distorted picture of reality. Either Docker is a miracle cure for every disease on Earth (indicating an over-enthusiastic Docker fan, or a vendor trying to sell something to Docker fans), or it's a scourge sent by the gods of the Unix Valhalla to punish mankind for techno-hipster false idols such as JavaScript and PHP (indicating a jaded Docker skeptic, or someone trying to sell something to Docker skeptics). Either way, it creates a lot of noise. And as you point out, there might be less noise if we ourselves did a better job at explaining when using Docker is, or is not, a good solution.
Big platforms tend to be different things to different people. Might help to have a different "what is docker?" page for the various roles you encounter.
For CIOs: click here
For developers: click here
For sysadmins/devops: click here
For platform providers (e.g. heroku): click here.
Whenever I see a product segmented that way, I'm immediately suspicious.
That much work means that a) they're trying to sell something because it isn't obviously better and b) they're more worried about messaging than being simple and useful.
> they're more worried about messaging than being simple and useful
But useful means different things to different people. To a dev, Docker might mean keeping your environment clean, scripting automated testing, and having platform-agnostic deployments. To the CIO, Docker means your devs spend less time futzing around with their stuff and more time working on the product. Or that he can upgrade his infrastructure gradually and not have to worry about compatibility. Yes, it's the same thing, but different audiences need different messaging.
> To the CIO, Docker means your devs spend less time futzing around with their stuff and more time working on the product
The question is whether or not the CIO knows that the devs are in fact futzing around with ad hoc solutions to problems that Docker solves.
I think the parent is arguing that in the right flow of things, that awareness is going to flow up to them from the people closest to the problem (the dev/ops folks working with it) rather than from a vendor with a vested interest in adoption to an exec/manager whose understanding of the problems their staff face may well be a high-level view at best. And who are prone to make decisions off of social proof plus that good messaging rather than knowing how well the solution fits their problems.
(Not that having the engineering staff involved is any guarantee that decision won't be mostly made off of social proof plus messaging. It just decreases the chances. :/ )
From my experience, it's not either-or. First you need "bottom-up" adoption by actual practitioners (in the case of Docker, developers and sysadmins). Then you also need to understand the constraints and requirements of people who are in a position to say "no". Those include managers, but also procurement, architects, network engineers, security teams, etc. Those people aren't the ones championing your product (most of the time they don't have very strong opinions either way), but they have a job to do, and if a new dev tool affects their ability to do their job, they're going to say "no".
It's very common for developers or sysadmins to contact us and ask for a powerpoint deck, so that they can give a convincing presentation to their management about the virtues of using Docker in their new project. We even have specialized teams that go in and do everything they can to help Docker-based projects succeed.
But as you point out, it all starts with someone inside the organization who really, really wants to use your product. Otherwise no amount of messaging is going to save you.
That's funny, I get the exact opposite feeling. They understand their audience and they know how to communicate with them. But without being secretive; any curious cat can see what the others are being told.
In fact, imprecise or unclear messaging is usually a red flag.
I'll tell ya, the biggest conceptual problem for me is "where does the data go". This subject isn't touched in the Docker material until very late; and even then, it's brushed over.
To me, Docker can be thought of as a process wrapper. The executable is called an image, and the running process is called a container. The benefit of Docker is three-fold: 1) each process thinks it has an OS to itself, which is a killer feature for native binaries that have weird dependencies; 2) network (port) indirection; and 3) filesystem indirection (mounting an arbitrary host dir into an arbitrary container dir).
Against all of this is the whole question of how to really use it to develop and deploy custom software! You could, for example, develop and deploy Java without ever installing Java on the host (not even the development host). But when you are finally ready to deploy, how is that supposed to work? Which pieces are static, which dynamic? Do you bake your binary into the image, or do you mount the binary from the (remote) host filesystem?
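Concretely, the two options I mean look something like this (a rough sketch using the official Tomcat image; the tag, artifact name and paths are made up for illustration):

    # (a) bake the artifact into the image at build time, with a two-line Dockerfile:
    #       FROM tomcat:8-jre8
    #       COPY target/myapp.war /usr/local/tomcat/webapps/myapp.war
    docker build -t myorg/myapp:1.0 .

    # (b) keep a generic image and mount the artifact from the host at run time
    docker run -d -p 8080:8080 \
      -v /srv/builds/myapp.war:/usr/local/tomcat/webapps/myapp.war \
      tomcat:8-jre8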
The Docker docs don't answer any of these questions, and they really should.
Solomon, I gave a talk at lunch at your company about this very topic, but I think you weren't there, so you missed it. Ask avinson for a link to it. I think it was saved on your BlueJeans system.
> Thanks for the frank feedback. I agree that we have a lot of "wiggle room", as you so diplomatically put it. Out of curiosity how do you feel about the current README on Docker's repo? https://github.com/docker/docker/blob/master/README.md
I have two thoughts on this.
First, if CIOs and IT directors are calling, then it's possible they're confused by the website, too.
Second, the README is definitely better, but would be even better by being more specific and exaggerating less. If it's targeted to web apps and back end services, say that in the first place, instead of "any application." Can I run iPhone apps in Docker? Are there Docker packaged apps in the iTunes store? If my application runs on an Arduino, can it also use Docker?
> If it's targeted to web apps and back end services, say that in the first place, instead of "any application."
That was deliberate. Although obviously most people use Docker for web apps today, there is no reason they can't try it in other contexts too. For example, there is a very vibrant sub-community of people dockerizing desktop apps, and running Docker on embedded devices like the Raspberry Pi. There are also (very) experimental ports to Android, and I heard of at least one person who, after learning that the Tesla runs a modified Ubuntu under the hood, set out to try and run Docker on it (I don't know if they succeeded).
I read over and over about Docker, did some tutorials. But when my boss asked me: should we use it? And can you please explain one more time what it is, anyway? I simply say, well, it IS like a VM but without the VM. No other explanation stuck with me, and I'm sure it's the wrong description of what Docker is, but that's all the messaging so far has given me for the quintessential "what's that Docker thing again?"
It's a VM that doesn't need a full guest OS. It's a sandbox. I'm sure I'm wrong but I can't think of another ELI5 for this.
The problem is, it's not a full sandbox, because all of the containers are running on the same kernel, and are capable of influencing the others, at least indirectly.
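An easy way to see this (assuming the stock alpine and ubuntu images are available): every container reports the host's kernel.

    uname -r                          # on the host
    docker run --rm alpine uname -r   # same kernel version
    docker run --rm ubuntu uname -r   # same again: different userspace, shared kernel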
I have trouble with the idea of calling them 'the future'. So far, I haven't seen a REAL reason to use docker in a production environment.
Anecdotally, something I've seen work in the B2B space is framing the message `n` different ways, one for each of the `n` types of stakeholder. So you'd have a quick view of value props and stories for the CIO, the IT director, etc. Sometimes it's hard to find the general value proposition, and it's much easier to have the different cohorts self-select into the message that means the most to them. And sure, sometimes this doesn't cut it and you have to get on the phone.
The general message could still be there (kitschy and sexy), the focus would still be on the end user (developer), but others wouldn't have a hard time finding the message that resonates with them.
Has anyone else seen this strategy succeed (or fail) for their business?
From the perspective of this longtime software developer, this isn't that great either. I'll try to give some specifics.
* The terms "container" and "containerization" seem to mean a lot to the author of the document, but they're never defined well and they're used an awful lot despite this. That kind of thing isn't half-bad marketspeak if the point is to get people thinking about this in the vague "It's the future!" manner that The Fine Article is satirizing (and/or accurately reporting as part of the social dynamics of the industry). The invocation of an unfamiliar term over and over can serve as a form of social proof and generates curiosity. But it might well be what's triggering suspicion on the part of some engineers.
* Positioning containers as an alternative to VMs is somewhat helpful in giving at least some idea of what kind of rough problem space we're working in -- someone familiar with a VM knows that they're often used to reproduce the specific runtime environment a given application needs. But by the end of the section I still have no real idea how "containers" are different other than that they're "lighter." Except for one clue: I know what FreeBSD jails are. So I might guess that something like them is involved -- but you're describing them as "primitives", so there's something else involved and I don't see an explanation of what it is. Is Docker a glorified chroot jail? If it's something more, what's the additional value?
* And... two sections down "escape dependency hell" -- that might be the additional value prop! But again, this section is really confusing. Dependency management means package management, right? But it's being done with Yet Another New Undefined Entity called "Layers" without replacing any other package manager so... we have two package managers? Or Layers aren't package management? What the hell are they? I could guess they're something like an image but I have no idea.
* "Plays well with others" Gives strong hints this is mostly a Unix thing (which, if this is some kind of enhanced chroot jail makes sense). Somewhat in conflict with the hints of platform agnosticism earlier in the document. Is there a story here for Windows, either as a host or for windows apps?
* "Real World Examples" These tell me how to "Dockerize" different server apps... but there's no context about why I might want to do this. What problem am I trying to solve?
And that's the thing -- at the end of this, I don't know what problem I might be trying to solve with Docker. I might guess Docker helps me deploy an application along with a specific normalized runtime environment, but that's from a lot of guessing and reading between the lines rather than from an upfront communication from the text itself.
If my description is accurate, a clearer version of it should be your first sentence. Follow it with a second paragraph telling me enough about some specific frictions you've done away with, compared to other solutions, that I want to learn more. Then tell me some specific stories about situations where someone might have a problem that Docker is a good fit for, and explain the rough usage that would be applied to address it.
I'm not ignorant. I understand VM and virtualization in general. I understand chroot. I understand how The ANSI-Standard Multitasking Multiuser OS works. It still took me a few attempts to understand what Docker even is because, frankly, it isn't quite any of those things.
It's a lot more akin to what Plan 9 was doing with namespaces, but I think they take it a bit further. It finally clicked when someone described it in terms of "multiple distros on the same kernel at the same time" and then defined a distro in terms of being an init process and a userspace. That makes sense. Reading up on the clone(2) system call, which is where the 'magic' is, made it even clearer.
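If you want to poke at those primitives without Docker, util-linux's unshare gives a rough feel for it (a sketch; needs root):

    # new PID and mount namespaces; /proc is remounted so ps only sees this subtree
    sudo unshare --pid --fork --mount-proc /bin/bash
    ps aux   # inside: only this shell and ps, even though the host kernel is shared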
But that's impenetrably technical unless you already have a pretty good background in operating systems. As any marketer will tell you, being technical is poison. Ideally, you should be able to sell the product in terms of what it will allow, not how it works. Except with Docker, that's hard: "Oh, it will allow me to run multiple applications at once. It's... an OS kernel? Nope, it says it's Linux. So it's a distro? Nope... uh... is it a new VM? Nope, not that, they all have to share the same kernel... what?"
I guess my point is Docker is hard to market. The website is either going to be vague or rather dense, and vague seems to win.
Pretty much this. I was working on a simple Rails app to get familiar with Rails and someone mentioned Docker to me. So I went to their website, did the tutorial, didn't understand why this is essential or better and just pushed to Heroku instead. Half of the time, if not more, I have no idea why this or that technology is being pushed so much or mentioned or whatever. And the landing page is not making it easier. Sometimes I feel like web development is more complicated than it should be.
The problem is not that they "message themselves" (gross) wrong, but that they feel like they have to "message" at all. Sales & marketing mumbo jumbo has no place in devtools -- it is at best, obfuscation -- at worst, misrepresentation.
I find that position to be very hard to understand - devtools live or die by their adoption. A clear understanding of what a tool does is critical to its adoption.
Look at, say, the homepage of Ruby: https://www.ruby-lang.org/en/. There's a clear, two sentence explanation of what it is:
    A dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write.
There's an example embedded on the page. There's also recent news that intermediate users might be interested in.
On Docker's website, there's a huge amount of confusion about what Docker even is. A platform? A runtime? Both? Which is the one I should care about?
The nerd-centric viewpoint that tools should succeed entirely on their own merits, with no affordances for the user, is crazy. It's that attitude towards UX that has led to the following one-liner being the only way to do something so mundane as "removing all your untagged images":
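One commonly cited form of that one-liner, for reference (exact flags vary by Docker version):

    docker rmi $(docker images -q --filter "dangling=true")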
"I find that position to be very hard to understand - devtools live or die by their adoption"
Some of the time, that is true, but not all the time, maybe not most of the time -- you are putting the cart before the horse. In fact, I would argue that this is an anti-pattern. Yes, you might use (e.g. to pick 2 unrelated domains) Hadoop or Python because they are popular, but consider how they got popular in the first place.
Devtools exist to solve a problem. You should not evaluate devtools based on the webpage or how many people are using it. That way lies Oracle enterprise. :-)
The problem with Docker's website is not that it exists. It is that it substitutes sales & marketing for simply explaining what it is to a developer. While one could classify this under the category of "marketing", it would be a mistake -- kind of like classifying man pages as sales pitches. Just tell me what the fuck it does, for god's sake, and I'll decide! I could give a rat's ass whether Facebook uses it, etc...
To be clear, I find Docker, the tool, useful, I just think it doesn't need "Marketing", it needs a useful webpage.[1]
Ruby was not adopted because of its webpage or its user base, it was adopted because one person bothered to look into it, liked it and decided to build a very popular web framework around it. Others saw the value in that domain and it exploded. Similar situation for Linux, which started from an FTP site and usenet posting. :-)
"The nerd-centric viewpoint that tool should succeed entirely on their own merits, with no affordances for the user, is crazy."
Unrelated to what I was talking about entirely. Affordances to the user are a merit of the tool itself. Docker could be considerably easier to use in some regards, and that would improve its usefulness as a dev tool. However, this has nothing to do with attempting to gain marketshare with no direct relation to merit.
[1] But then Docker, the organization, is selling something, aren't they?
> To be clear, I find Docker, the tool, useful, I just think it doesn't need "Marketing", it needs a useful webpage
What exactly do you think Marketing is? It's not all about BS, it's about communicating a message. If that's a simple webpage, then so be it. Often an idea or product is far too complicated to explain through 2-3 lines of text and needs more.
> Devtools exist to solve a problem. You should not evaluate devtools based on the webpage or how many people are using it.
That sounds rational and it's what I used to think, but I think this talk (https://www.youtube.com/watch?v=FzzL_QDKv0c) makes a good case that fuzzy human factors always play a role in technical decisions, and that's not necessarily bad.
Eg, which is better, Angular or Ember? Ruby or Python? Go or C++? Haskell or Common Lisp? You can accomplish the same things in either tool. Which you like better has a lot to do with what you already know. And popularity may seem like a shallow measure, but it affects whether you can get questions answered, find blog posts and books, locate a library to do something for you, and hire developers who already know the technology.
Popularity is probably also weakly correlated with stability. If my custom jQuery code doesn't work, there's a 99.999% chance that it's the fault of the code I wrote (used by 1 person) rather than of jQuery (used by thousands). When jQuery was new and used by tens of people, there was a higher chance that it was jQuery's fault.
"Ruby was not adopted because of its webpage or its user base, it was adopted because one person bothered to look into it, liked it and decided to build a very popular web framework around it."
I agree with everything else you're saying completely. Although messaging a value prop is hard when you have so many use cases. We faced the same issue and ended up trying to segment users as quickly and high up in the funnel as possible so we could speak directly to their needs.
A clear understanding of what a tool does is critical to its adoption.
The definitions of "clear" differ depending on who the target audience is. If you're assuming a heterogeneous group of unknown faces, then you aim for colloquial and simplified language.
When marketing to programmers, however, the use of technical jargon and specific concepts is an absolute necessity for something to attain clarity. It's the avoidance of such that obfuscates meaning.
The nerd-centric viewpoint that tools should succeed entirely on their own merits, with no affordances for the user, is crazy
This is a straw man. Introductions can be concise or detailed, but they must convey some of the technical intricacies and underpinnings regarding the software. Using marketing language, clouds of buzzwords and too many dumb copy-paste examples leads to cargo cult development and people who jump on bandwagons as opposed to surveying for what is technically superior.
Furthermore, there's nothing wrong with your one-liner.
Getting people to use things requires communicating to them what the thing is for, how it is better than other alternatives, and how to use it to realize that benefit.
Therefore, devtool adoption requires messaging related to what the devtool is for, how the devtool is superior to other alternatives in the same space, and how to use the devtool to realize that superiority.
Actually having the tool is a start, but it's not the whole ballgame if no one can understand what it's for, why they should use it over other things that serve the same purpose, and how to use it.
I get that you want to make people aware of the tool and its advantages because you think others might find it useful. But what part of that requires any attention to the condescending idea of "messaging" vs just straight up telling people (a) what it does and (b) why you created it.
Anyway, perhaps I'm being a grumpy old man late in the workday, I'll leave it be :-)
> But what part of that requires any attention to the condescending idea of "messaging" vs just straight up telling people (a) what it does and (b) why you created it.
"messaging" isn't a condescending idea, its simply having clear, coherent means of communicating some message, with awareness of the audience that message is directed to. Like, what your product is for and why people should (and how they can) use it.
You are correct, and the people downvoting you are wrong. Technical engineering decisions should be made on the basis of concrete analysis and not popularity contests. The noisy kids will come around sooner or later.
A few days ago, the CTO of Soylent, the food drink, was describing their elaborate computer infrastructure. They have one (1) product and a simple web site. Based on their sales volume, they do about two sales a minute. That could run on a HostGator "Hatchling" account for $4/month, using one of the seven off the shelf shopping cart/payment programs HostGator offers.
There's an "off-topic/meta" flag sometimes applied to posts that are more about HN than about the topic, which causes them to float to the bottom (above the stuff greyed-out into oblivion, but below the top comments). Seems reasonable that people could apply that to their own posts.
Soylent had exceptionally poor order handling prior to their latest round of funding. The way they've run their business and development seems like a total joke. 6-10 month order delays over a year since they started selling? (Not to mention charging orders before shipping) "Most scientific food" - zero clinical trials? Are you shitting me?
Figures. They overbuilt their public-facing web system, and underbuilt their fulfillment system. Reading that, you can see the problem - their order system, fulfillment system, and order tracking system didn't talk to each other properly. You don't just ship stuff blind; you read back the carrier's online shipping data and match it with the orders, so you know you didn't miss anything. Otherwise, you don't know you screwed up until the angry customer calls you.
Can you guarantee success at development? And even if you can, is it a good idea?
The coolest software project I've ever been on was also the dumbest. We were trying to build a BIOS in-house - something that can be bought off the shelf for a buck. In theory, it could have saved us millions in the long run. In the short run, we were wasting man-months of engineering on something that was not our business, when the clock was ticking loudly. Fun as heck, but if I caught my own employees doing such a thing today, I'd at least threaten to fire them if they even thought about it again.
Likewise, Soylent is not in the inventory management business. Why should they be writing inventory management software? Buy that stuff. Focus development money on the core business.
Funny you say that. I just had a conversation about a meeting I was involved in a few years back, when our EEs didn't understand why we thought it was a spectacularly bad idea to design our own Pentium-class CPU boards vs. just buying them off the shelf.
Cost advantage? What's that? We just want a cool project!
It's not crappy management for engineers to have technically cool ideas that are bad business, and then rationalize their value. It's bad management to let them execute on them (except if you have some sort of do-what-you-want time for developers, which is a good way to compensate and balance for this).
On the other side of the coin, you have the semi-technical manager who has ideas that are unassailable, because they are able to make "do this that way or you're fired" decisions.
These kinds of managers are utterly toxic. At the slightest hint of this kind of behavior, I'd be reaching out to my contacts to find new employment, where I would certainly be paid better, but hopefully don't have a feudal work environment.
Well, these technologies are there to help bring Google-type infrastructure to businesses that need it. If you're running a CRUD app just fine on Heroku, you don't need to do any of this and you shouldn't.
When your availability starts having problems or your data is getting too much for one machine, you start having problems of scale. Where you are in terms of scaling issues should lead you to the next iteration of technology required to keep your service up.
Starting out of the gates with that one idea you're not sure anyone will be interested in? Just stick to deploying something fast and not worry about Google style scaling solutions.
Seems to be the problem with a lot of technologies. If you're a HN reader, you can start to get the feeling that you're falling behind if you're not doing these things. Even though they are totally inappropriate for the majority of us who are working on smaller systems.
Often there's some tension between individual career development and what makes sense for the project and company.
Ideally, managers create opportunities for people to exercise the former in a sandbox (e.g. some variation of "20% time"), without YAGNI-ing up the project.
I like the fact that I know about a lot of these things, so that if a need arises I'm aware of what's out there and current. I figure when/if the time comes then I'll actually learn how to use them. I think it's just more of a perception that everybody else is moving forward and I'm stuck here in pragmatic land for my work.
As an engineer I feel an obligation to know about these things and how they work. It might be appropriate for my work. For my own projects I am just going to start out with a Django or Meteor app and if I'm lucky enough to have to worry about scale I'll have some idea of what to do next.
So might AS/400, but few are trying to learn that, I guess.
Let's be fair: if anyone is looking into this stuff, it's more because it's the technology du jour than on the off chance that they end up with 100 million users.
It could be useful, and learning stuff is fun, but that's not the real drive.
As a sysadmin, I feel an obligation to know about these things so I have sufficient warning of the next trendy maintenance burden that's coming down the track. This is how I credit my HN reading as real work ;-)
Sadly I often feel this is the state of most of the web-dev stack due to high churn. There are extremely smart people working on tools who keep forking out new frameworks/tools/components/scripts and a bigger pool of smart people who keep consuming it.
Having seen an incredible amount of man-hours in the enterprise wasted on migrating, re-writing and creating POCs of the latest fad, I sometimes wonder whether this cycle is productive, or whether we keep at it just to keep our brains working and not go insane thinking about geo-political/financial/meta-physical questions. It's perhaps crack for the tech crowd.
> There are extremely smart people working on tools who keep forking out new frameworks/tools/components/scripts and a bigger pool of smart people who keep consuming it.
Well, at least they get the privilege of being regarded as very smart. Why be a lowly smart web developer when you can be elevated to the status of extremely smart framework author/designer? If they care about that kind of status.
If you want them to be a bit less trigger-happy, perhaps don't praise them so much for creating their own small contributions to the fragmented landscape.
This is exactly how I feel whenever people start talking about this stuff. Is that bad? I know this is a satirical post but it seems like a pretty faithful depiction of reality.
If people are talking about breaking up a simple web application and saddling it with all sorts of crazy complexity, then this is the correct reaction and you should feel bad.
If people actually have a legitimate need for really high availability and really high scalability and are willing to pay through the nose for the software development necessary to make a straightforward system into a distributed system... well, you should still feel bad about all the complexity, but you're basically stuck with it :)
Even Napster was used for non-music stuff towards the end, by renaming archives as mp3.
And the basic protocol was later adopted for more generic sharing systems, never mind the number of clones that came about after Napster was lawyer bombed.
BitTorrent is just the latest in a long string of P2P systems, with the biggest difference being the lack of a central search server.
Yeah, I remember we had to put an MP3 frame checker in the app to help prevent them from being shared. We had enough legal trouble with just the music industry suing us.
I get the feeling that there is a second aspect to it, netsec fundamentalism.
People so deep into CVEs and such that they see every computer as requiring the digital equivalent of Fort Knox-level security, or civilization will fall.
I may be an old guy, but I never get it. Why do I ever need all this virtualization stuff? I just buy a server, install CentOS, install PostgreSQL, create another user, install Java, download & unarchive Tomcat and write some simple bash scripts. Voila. Everything works, everything is protected, performance is superb and I can do all that within one day. Good enough to serve a few thousand requests per second. Maybe not enough for Facebook, of course.
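Concretely, that one day looks something like this (a sketch; package names and paths are illustrative, CentOS 7 assumed):

    yum install -y postgresql-server java-1.8.0-openjdk
    postgresql-setup initdb && systemctl enable postgresql && systemctl start postgresql
    useradd tomcat
    tar xzf apache-tomcat-8.*.tar.gz -C /opt && chown -R tomcat /opt/apache-tomcat-8.*
    # ...plus a few simple bash scripts (or an init script) to start and stop the app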
And then that data center has a fire, your server is gone, your data is gone, your business loses X thousands of dollars per day, all because everything was on one server.
Maybe they got their IT-timescale mixed up with our normal timescale. Things that happened in IT in the 70s on the normal timescale was like 200 IT years ago on the tech timescale, man.
I've always thought of the whole containerization and orchestration "hotness" (I guess we're calling it "gifee" now?) to be great for people who want to write their own PaaS. And not really anybody else.
I mean, it's great that the pieces are all there and open source now. You can get to a really great place with automation by gluing together Docker/Rocket, Mesos/Kubernetes, etcd/zookeeper, etc. But for now, a complete solution still requires you to bring a lot of your own glue.
I mean, by the time you've actually covered enough edge cases to make all these ops components work together in a meaningful way, in a way that doesn't fail randomly or trip over itself when some critical component breaks, you may as well just start charging other people money to use it.
I say this from the point of view of someone who's done it all (implemented a private Heroku on the stack described above [1]), and although it works amazingly well (for literally hundreds of internal apps), it was not trivial... not even close. We're talking probably 1-2 man-years of effort to get it to the point where it's usable, and that's with leveraging as much existing tech as we can.
To anybody else, as in, any company who actually wants to ship a product (where the product isn't just a PaaS), I just don't see how it's worth it. Just use heroku (or elastic beanstalk, or appengine, or whatever.)
[1] I'd love to make all the glue open source but I'm not really in a position to do so. But I suspect I'm not the only person who's done this... I really think anybody who's gotten this whole "the future" stack working solidly is in a similar position as me: if you really did get the job done for a single app, chances are you've invented an internal Heroku of your own.
I just want to give my project to a PaaS and let them figure out everything.
I was looking into Google App Engine, but they didn't support some language features I wanted (e.g. Java 1.8, Servlet 3.X["I know, programming in Java? You're stuck in the past"]). So I looked into their new Container Engine. But like this article points out, it makes deployment 500000000 times more complicated than it should be.
Meanwhile, the last two web applications I've consulted on are now more than 12 years old, use SOAP and were written in Delphi.
They work surprisingly well to this day and despite being horrified upon hearing that this was what they were built on, I found them surprisingly well done.
I wonder what will still be running 12 years from now and what it will look like.
Now do this for the Hadoop ecosystem. I'm ripping my hair out because of the complexity of it all. I get that distributed is difficult, but this just feels like too much. Abstraction upon abstraction upon abstraction. Hadoop is this thing over here, but now we have this other thing built on top that is way better. But actually that thing also sucks, so we built a whole layer on top to abstract away the badness of that other thing plus two other alternatives to it. Oh, but that wasn't good enough, so now you can write queries in this other language. And on and on and on it goes.
This hits home. While everyone is talking containers, we're running simple processes with a Linux user per instance, and I feel no need to add more complexity to our system, except I'm really struggling to automate stuff.
It seems that if you aren't running a full dockerized cluster of services or outsourcing everything to a PaaS, you're left with building all the infrastructure yourself. What did people use before this great new wave?
I think there is a mix of things being bandied about under the "container" banner.
On the one hand it is about getting more bang for your hardware buck.
On the other it is about someone getting so deep into netsec that they have developed gills.
In the bang-for-the-buck category you have a chain: first one box per database, etc.
Then, noticing that the hardware sits idle most of the time, virtualization is deployed to pack more servers onto a single box and keep it in use.
Then, noticing that virtualization comes with a performance overhead, it gets replaced with chroot/containerization to give the impression of an unshared box.
In the netsec category it is really about namespaces: limiting the view of the world that processes get.
This has a superficial similarity to chroot, but can go much, much deeper.
And if one goes deep enough, every server ends up looking like a digital Fort Knox...
Yeah, but none of that actually needs Docker and images and such. If you want to take advantage of the whole server, you can simply run more regular processes on it, and you can launch them in different namespaces using systemd or another process manager. You don't need the whole workflow that comes with these new tools.
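For example (a sketch; the unit name, property choices and binary path are made up, and a reasonably recent systemd is assumed), a transient unit already gets you per-process isolation without any image or registry:

    # private /tmp (its own mount namespace) and a dedicated user, no container image involved
    sudo systemd-run --unit=myapp \
      --property=PrivateTmp=yes \
      --property=User=myapp \
      /usr/local/bin/myapp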
Yeah, we're using configuration management, but that still seems too low level. I haven't tried CFEngine, but in the ones I have, there's no concept of an instance (essentially some configuration files and a few databases) that you should be able to treat as a single unit (e.g. delete instance or move it to another server).
I wonder what's the architecture behind WPEngine and similar services. It must provide some isolation since clients can install their own plugins, but on the other hand I don't see them creating a new Docker image for each client, especially since they're self-managed.
To the position that containerization is needless complexity for simple or non-scaling apps: one of the benefits of containers is that they can create development environments identical to your production environment(s); no matter the platform, you're always running the same code, same artifacts, same images.
Virtualization does this too, but at great cost. I wish the kinks were better worked out at this point as well, and hope we start to converge around a few well-working patterns and toolsets. I expect it to happen. In the meantime it is chaos and easy to laugh at.
But (and this is a thing I've been working on) do you really want your dev environment to be identical to your production environment? I think you don't.
As an example, suppose you use Go. Your dev environment is 500mb of compilers and toolchain. Your production environment is (hopefully) a container with a single static binary on it.
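For instance (a rough sketch; the names and paths are made up), the toolchain stays on the dev box and the production image ships nothing but the binary:

    # on the dev box: build a statically linked Linux binary
    CGO_ENABLED=0 GOOS=linux go build -o myapp .

    # production image: just the binary, via a three-line Dockerfile
    #   FROM scratch
    #   COPY myapp /myapp
    #   ENTRYPOINT ["/myapp"]
    docker build -t myorg/myapp:1.0 .
    docker run -d -p 8080:8080 myorg/myapp:1.0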
The point is not that dev and production environments are identical, but that staging and production environments are identical. Your dev box can have whatever you like on it, but if it can't fake production behaviours, it's not going to help you catch and debug bugs in production. Of course there are limits. You could (it's perfectly valid) buy twice as much capacity in production and run it twice there, and use remote debugging tools. The point, especially with ops involved, is that your dev box is a snowflake and you want the least amount of manual configuration possible. It's not a requirement: your app will run fine in production without it, simply by testing in production. But few people willingly recommend that approach, even as pretty much everyone does it at some point or another. Even the best emulation of production won't prevent the need for debugging in production when a bug isn't caught before it gets deployed ;-)
Yeah, but folks are talking about using containers for actual dev environments. Because you want to make sure everyone is using the same version of the go compilers, etc.
You're absolutely right: one needs to build images sans all of the dev toolchain and with staging/production flags. But the deps and parts they share in common should be identical. This is a hairy problem to solve, but containers are the solution, mixed with the right pattern, whatever it is. I might not want my dev containers to run on prod, but I want my staging/prod containers to be able to run on my local machine or whatever environment has the container/orchestration tool.
A seeming eternity ago I let Maven convert apps into war files that supposedly could be deployed into any web application server.
What do the new containers add on top of that (or other than that)? Only the option for more services (not just web app, but database, different languages, whatever)?
It seems odd having to worry about that kind of thing.
Maybe it is either a ploy of admins to make programmers do their work for them, or a ploy of programmers to put admins out of their jobs?
I don't really know docker yet. So if I need a database, rather than instantiating a server with a database, I would create a docker image that runs a database? Then deploy it to some server that can digest docker images (is there a docker image for that)?
The point of war files was not having to worry about the server.
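From what I gather, the answer to the database question is roughly yes, except that in practice you usually pull an existing image rather than build your own, and "a server that can digest Docker images" is just any host with the Docker daemon installed. Something like this, with the official postgres image (tag, password and host path made up for illustration):

    docker run -d --name db \
      -p 5432:5432 \
      -v /srv/pgdata:/var/lib/postgresql/data \
      -e POSTGRES_PASSWORD=secret \
      postgres:9.4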
In the immortal words of David Wheeler and Kevlin Henney: "All problems in computer science can be solved by another level of indirection, except for the problem of too many levels of indirection." Feel free to substitute abstraction for indirection.
"Build, Ship and Run. Any App, Anywhere."
Jesus Christ. I get that you're The Future, but make the value prop for me here, at least. Why should I use Docker? What parts of my stack does it replace? When does the cost-benefit make sense? What new things can I do that I couldn't do before?
They made a separate page just to address this giant "Huh?": https://www.docker.com/whatisdocker, which I feel is equally obtuse.
Luckily, the product itself is fantastic, so that gives you a lot of wiggle room on your website and documentation. Like, a lot.