If you're going to run these containers in production (on more than a single host), throw out the volumes and Docker Compose. Mock up your dev SDLC to work like production (e.g. you can't use Docker Compose to start Fargate tasks).
In fact, I'm going to make a very heretical suggestion and say: don't even start writing app code until you know exactly how your whole SDLC, deployment workflow, architecture, etc. will work in production. Figure out all that crap right at the start. You'll have a lot of extra considerations you didn't think of before, like container and app security scanning, artifact repository, source of truth for deployment versions, quality gates, different pipelines for dev and prod, orchestration system, deployment strategy, release process, secrets management, backup, access control, network requirements, service accounts, monitoring, etc.
The reason to map all that out up front is to "shift left". If you do these things one at a time, you lose more time later as you slowly implement each piece, refactoring as you go. Whereas if you know everything you're going to have to do, you have much better estimates of work. It's like doing sprint grooming but much farther ahead. Figure out potential problems sooner and it saves your butt down the road. (You can still change everything as you go, but your estimates will be wayyyy closer to reality, and you'll need less rework)
A weird comparison would be trying to build wooden furniture without planning out how you were gonna build it. You can get it done, but you have no idea if it'll take a weekend or two months. Plan it out and you can get more done in one shot, and the quality even improves. This is also the principle behind mise en place.
I don't think you're worrying about the right things here if you're about to start writing app code. Infrastructure can be changed easily - poorly architected code cannot.
What I'm talking about isn't infrastructure, it's the entire system architecture and workflow. Code architecture is a part of that. If you design your code architecture, and then look at system architecture, your code architecture may have to change. I'm suggesting to do them at the same time.
Say you did your code architecture, and you've been writing code for three months. The security architect comes by, takes a look at your work, and announces that your design is inherently flawed: you need to rework some token-passing mechanism that's tied deeply into your app, to support a system they use to audit company apps. You end up doing rework for a sprint or two to fix it. This particular example may not apply to you, but there are hundreds like it.
And even if you are planning to write desktop-only software or a mobile app, think in advance about how you want to package and release it, sign the code, provide help, handle branding customisation, etc.
Agile is an SDLC anti-pattern, since the "improve as you go" mantra is a lie: it doesn't apply to release planning.
I will make a heretical suggestion on the other side and say that unless you're pretty certain up front that your app will succeed, you need to get it in front of users ASAP, and if to you that means cutting corners on the SDLC and infra, so be it. If the app falls flat in the market, you'll never get a chance to amortize all that work.
Those lessons are highly subjective. You can use Docker in production knowing 5% of this info. For example, I'm personally not a fan of using Docker locally for development: I sometimes use it to boot local dependencies, but never for the project I'm currently working on.
I test and deploy in Docker containers because the artifact that's produced is simpler to deal with. I don't have to specify packages in a Chef cookbook or Ansible playbook, then figure out how best to automate running those, then figure out how to do it fast enough to support rapid deploys. And while I run Fedora locally, Docker presents an abstraction layer that's sufficient, in the 99% case, for developers on a Mac to test predictably; as a trivial example, the Dockerfile specifies an informal but strong interface with regard to environment variables, configuration files, and network ports. (That goes down to like 95% with Windows, but I haven't worked on it with WSL2 yet.)
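To make that concrete, here's a minimal sketch of the kind of interface I mean; the service, port, and paths are all made up for illustration:

```dockerfile
# Hypothetical Node service; the point is the declared interface, not the app.
FROM node:10-alpine

WORKDIR /app
COPY . .
RUN npm ci --production

# The informal but strong interface: the port the app listens on, the
# environment variables it reads, and where it expects its config to live.
EXPOSE 8080
ENV APP_PORT=8080 LOG_LEVEL=info
VOLUME /etc/myapp

CMD ["node", "server.js"]
```

Anyone on any host OS can read those few lines and know how to wire the container up.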
Treating Docker containers as artifacts (as Configurable, Better Tarballs) is by itself a significant improvement for much of the non-JVM world, and it even has some benefits for the JVM world as well.
Trying to do local dev in it is silly, IMO, but there's real value to shipping Docker containers to wherever you want to actually run the thing.
Because it's easy to manage from a DevOps perspective. Your Docker image is built by CI during automated tests, and that image is versioned and immutable from that point on. It'll appear the same on staging environments and production environments. It's easy to deploy it on multiple hosts/clusters, manage upgrades/downgrades, etc. Involving Docker during development in most cases just adds friction without any benefit. There are cases where you need to work with Docker in development, but those are very rare.
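As a sketch of what that CI step might look like (the registry and image names are placeholders):

```sh
# In CI, after tests pass: build once, tag immutably, push.
VERSION="$(git rev-parse --short HEAD)"
docker build -t registry.example.com/myapp:"$VERSION" .
docker push registry.example.com/myapp:"$VERSION"

# Staging and production both deploy this exact tag; nothing is
# rebuilt between environments.
```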
It is not necessary. You could probably achieve similar results with Ansible + SELinux + a VPN, I suppose.
But it has its uses. For instance, putting all your services in a private network and only exposing ports 80 and 443. Images give you reproducibility even when your build system isn't reproducible. The image you validated is the one deployed. It disincentivizes hand-editing in prod ...
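A rough illustration of the private-network setup with the plain Docker CLI (the app image name is a placeholder):

```sh
# Services share a private bridge network; none of them publish ports.
docker network create backend
docker run -d --name db    --network backend postgres:11
docker run -d --name app   --network backend myapp:1.0

# Only the reverse proxy publishes 80/443 on the host.
docker run -d --name proxy --network backend -p 80:80 -p 443:443 nginx:stable
```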
Basically, there's nothing here you can't do yourself. It just simplifies (and potentially accelerates) some deployment processes.
Is there a benefit to running `npm install` in Docker? I would likely have already done that in the checkout and test part of my workflow, and can just copy everything over from that?
> I would likely have already done that in the checkout and test part of my workflow, and can just copy everything over from that?
No, you cannot, at least not for packages that ship native Node.js extensions to be compiled (e.g. by node-gyp). So, for example, if you're working on OS X and then run the result in a Docker container, you may hit errors. Additionally, if you compile on, say, Ubuntu 18.04 and then run npm in a Docker container based on Ubuntu 16.04, you may hit library mismatches.
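The usual way around this is to run the install inside the image itself, e.g. with a multi-stage build. A sketch, assuming a Node service with a server.js entrypoint:

```dockerfile
FROM node:10 AS build
WORKDIR /app
COPY package.json package-lock.json ./
# node-gyp compiles native addons here, against this image's toolchain
# and libraries, not against whatever the host machine happens to run.
RUN npm ci
COPY . .

# The runtime stage shares the same Debian base, so the compiled addons match.
FROM node:10-slim
WORKDIR /app
COPY --from=build /app /app
CMD ["node", "server.js"]
```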
Thanks, @JohnHammersley! Your 2016 writeup of the same name was well-received; given how fast things change in this space, I'd bet this "updated for 2019" version is worth bookmarking.
I'm no fan of Kubernetes in the general case, but by no metric is Docker "dead". And the article isn't spam; it's insightful, though not an approach I'd use.
It seems like your account exists just to scream about Kubernetes and Docker. If it's so bad, why waste the time?
I think the hype might have diminished a little bit, but I'm curious what would make you think any major containerization technology is dying? Both Docker & k8s certainly continue to show up in enough job postings...
(Also, I will outright object to the claim that this is off topic; it's about technology and someone found it interesting, so it belongs.)
The performance hit is absolutely tiny, and it doesn't matter for the overwhelming majority of users. It might be that, e.g., high-frequency trading avoids it, but anything like that is already running an incredibly precisely engineered stack; for 99% of use cases, the performance hit is insignificant.
Are you referring to https://stackoverflow.com/questions/21889053/what-is-the-run... ? If so: I'm hesitant to put too much weight on an answer from 2015, based on running different builds in/out of Docker, sitting right under another answer that shows basically no performance difference on anything except network (and with the ability to fix that by using host networking if you care). And even that post doesn't show a "100% performance decrease"; it shows a 41% drop (184857.06 / 312836.64 = 0.59).
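For reference, the host-networking fix mentioned above is a one-flag change (the image name here is a placeholder):

```sh
# Bypasses Docker's bridge/NAT layer entirely; Linux only.
docker run --network host myapp:latest
```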
If that's not the SO post you mean, then please point me to the right one.
The thing is: I'm not saying that Docker has a big performance penalty. I'm saying that I don't know, and that it's sad there's no real evidence on Stack Overflow.