The real problem is that people are deploying dozens of different programming languages, adopting whatever technology happens to pass by, and replacing simple, streamlined monoliths with 100 microservices.
All of this is endlessly pushed by AWS, Google, Docker, and anyone else with a foot in the "Snag as much cash from DEVs" crowd.
Other old-timers will explain how, 15 years ago, they served thousands of hits per second on 6 or 7 bare-metal servers, without CDNs, using LAMP, Ajax/JS frontends, and more. The key was code optimization, SQL written by hand instead of the inane, poorly written queries churned out by MVC frameworks, and the list goes on.
I am relentlessly gobsmacked at how people are spending quite literally 100 to 1000 times the cash to host on AWS, using microservices. And I'm often amused at how new devs just can't get it through their heads that yes, this is 100% factually true.
Docker replaced something that was already solved: identical PROD and DEV environments. Every dev working with me was issued a VM that replicated PROD exactly, auto-built using debootstrap + an SVN checkout.
Rollbacks in prod? Handled by SVN rollbacks.
When I look at the insane complexity of containers on top of VMs on top of bare metal, and the MASSIVE loss in performance (yup, it's there... especially for I/O), I just don't get it.
Many devs have been sold a pack of complete and total lies.
I get called in again and again by clients to reduce cost and optimize resource usage. I literally bring 1000x performance boosts to the SQL layer with minimal changes.
Anyhow. Yes, this was a rant. Sorry it's a reply to you specifically.
Microservices is just a pragmatic, reasonable solution to the changing "economics" of software:
- There is a lot more software being built nowadays
- There are a lot more engineers working
- The average age keeps decreasing to offset rising labor costs
- The average skill level keeps decreasing because of increased abstraction and managed solutions (despite more information / history to learn from)
- There is more B2C software running in Prod now that is "on the hook" for millions of $ of revenue
So you have more, less-skilled people working on things with higher economic value. The "microservices" solution is to limit the mess and destruction they can cause by giving them their own little sandbox to build in.
Provided the "plumbing" is well enough engineered, it nets out to a better outcome than letting hundreds of 23-year-old junior engineers loose on a monolith.
It's a totally valid rant, and one that I enthusiastically agree with.
I work in adtech (apologies). We have maybe 10 - 15 instances (16 vCPUs, 31GB RAM) that each handle 10k+ HTTP requests per second. There's a push to dockerise all this. I don't see the point.
I've often wondered about the potential performance loss of Dockerising all this; do you have public numbers available? We recently hired an ex-Googler into a management position who claims that on GCP, running Docker may actually perform better than VMs. If true, that's really interesting, but I can't find anything to back it up.
Code optimisation (or even profiling) seems to be a dying art since people think that it's easier to just throw CPUs at the problem.
Well, it should be faster if you're replacing the VM layer and running Docker on top of bare-metal kernels. In theory you could use a cheap dedicated hosting provider to host a K8s cluster and probably outperform a managed cloud offering on both price and performance - but in practice this shit is so unreliable that I would never want to be the guy on call maintaining it; they would need to pay me a lot more than the delta between a managed cluster and bare metal. For non-critical, computationally expensive stuff like analytics and BI it could make sense.
Docker images are just lighter weight VMs. Or to be more accurate, they accomplish the same goals as full VMs in a different way.
If you're running Docker in a VM on a bare metal server you're doing it wrong. You should be running Docker on a bare metal server.
You're also conflating different problems here. If someone is writing poor SQL, it doesn't matter whether they're deploying with a VM, Docker, or onto a bare-metal server.
"they accomplish the same goals as full VMs in a different way."
They are explicitly not that. Docker containers do not provide you any real isolation guarantees from a security POV and make no attempts at such. This is extensively documented. [1]
"If you're running Docker in a VM on a bare metal server you're doing it wrong. "
Ummm... Running Docker inside a VM is by far the most common type of Docker deployment there is. What do you think an EC2/ECS/GKE deployment is? Hint: there's a VM running your containers in all of them. This is also what Docker the company recommends - https://www.docker.com/blog/containers-and-vms-together/
> If you're running Docker in a VM on a bare metal server you're doing it wrong. You should be running Docker on a bare metal server.
Until a bug in Docker, or the CNI abstraction, or some resource hangs/panics the kernel on the bare metal, and then you have to reboot the whole thing taking out all the containers.
This gets rarer and rarer as the bugs get ironed out, of course, but in my 20+ years of anecdotal experience, a kernel running just a bunch of VMs crashes far less frequently than a kernel running containers.
> You're also conflating different problems here. If someone is writing poor SQL, it doesn't matter whether they're deploying with a VM, Docker, or onto a bare-metal server.
They are related, as devs sometimes think of microservices as a way to speed things up and/or process more requests per second, under the assumption that a server with fewer responsibilities is a server with faster turnaround time.
Not if you're doing things that require certain kernel features. For example, if I have an application that uses io_uring, it's _very_ pertinent as to which kernel it runs on. A VM has that in scope, a Docker container does not.
I'm working on a command-line deployment tool that deploys to DigitalOcean and AWS LightSail (to start with). This is based on my experience deploying apps.
I expect to finish the remaining work in the next few weeks. Can I contact you to try it out? (My email is on my profile)
Terraform is so heavy handed - I wouldn't call it a deployment tool. It's more of a means to build out infrastructure in AWS.
Once you get your AWS account set up, there's still virtually no tooling to actually manage deploys of new code into that infrastructure. We're most likely going to hand-roll some tooling on top of the aws-cli.
I agree Terraform isn't great for installing software. Cloud-init works but it can be cumbersome. If that is your major pain point, though, then I'd look into EKS and using Kubernetes to manage and install software. There is a CLI tool called eksctl that makes setting up an EKS cluster a breeze. I don't know all the intricacies with AWS, but with Google Cloud you can set up a single-node "cluster".
If you're already planning on standing up at least 3 compute instances though, might as well run EKS in a cluster.
TBH because Ansible is what I’ve been using for a long time.
And based on what FunnyLookinHat mentions in another comment, Terraform seems to offer a lot less. I have no first-hand experience with Terraform to confirm that.
Terraform goes quite a lot further than FunnyLookinHat mentions. I actually don't advocate for using it further than infrastructure myself (I like using specific tooling for specific jobs), but that doesn't mean it can't go a lot further (https://www.terraform.io/docs/providers/index.html for a list of the providers, and it's possible to use the null provider to write some more custom things and hack arounds).
I've had a lot of success coupling Terraform with provisioners like Ansible or Saltstack.
Of course, if Ansible is what you're used to and it works for you, there's no real benefit to using something else right now :) I'm a big fan of Terraform, so I hope you also have a play around with it to see if it can help with what you're doing in the future.
It has a separate state mechanism to keep things in sync that Ansible didn’t have the last time I tried only using Ansible. They’re a match made in heaven when put together IMO.
Ansible uses Jinja templating. Might as well say Jinja + Jinja + Helm.
Helm uses Go templates, which are awesome; they're what Hugo is built on. The main issue is that you still have to manage indentation with Helm. YAML is easier to read than JSON or TOML. If you don't like it, then what do you suggest is better?
Someone has probably already done this, but you could write a dashboard generator for all the services that can be configured with just YAML files and another generator to glue them all together into one page.
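Something like this sketch, say, assuming one YAML file per service listing a name and a few panel title/URL pairs (all names here are hypothetical):

```python
import pathlib

import yaml  # pip install pyyaml


def build_dashboard(config_dir="services", out_file="dashboard.html"):
    """Glue every service's YAML config into a single static dashboard page."""
    sections = []
    for path in sorted(pathlib.Path(config_dir).glob("*.yaml")):
        svc = yaml.safe_load(path.read_text())
        links = "".join(
            f'<li><a href="{panel["url"]}">{panel["title"]}</a></li>'
            for panel in svc.get("panels", [])
        )
        sections.append(f"<h2>{svc['name']}</h2><ul>{links}</ul>")
    html = "<html><body><h1>Services</h1>" + "".join(sections) + "</body></html>"
    pathlib.Path(out_file).write_text(html)


if __name__ == "__main__":
    build_dashboard()
```

Point a cron job or CI step at it and you get a regenerated page whenever a service's YAML changes.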
I am in the process of migrating my stack from Elastic Beanstalk multi-container to Fargate so this looked like an interesting thing I could 'pick up'.
This does potentially unify the container definitions between compose and the ContainerDefinitions in the task definitions, but for myself, that's not a super helpful tool.
Much of the complexity of running Fargate is outside of Fargate: wiring everything up so that Route 53 => CloudFront + WAFv2 => ALB => TargetGroup => Fargate, with security groups and subnets underpinning it all.
I can't recommend looking at this seriously as something to run in production. CloudFormation/Terraform is still the best place to sink your time.
I just spent this weekend setting this up to learn a bit of AWS for a toy project. I thought I would "just" quickly drop a Rust API server image in an ECS cluster.
By the end of the weekend, I had the architecture you describe.
- Route53 alias A record -> ALB DNS name
- LetsEncrypt cert in IAM
- ALB listener doing SSL termination using the cert -> forwarding to target group
- ALB listener doing 80->443 redirect
- Security group on ALB listener allowing only approved IP ranges in (not ready for this thing to be public yet)
- Security group on ECS service only allowing ALB to connect
- ECS cluster using Fargate
- RDS instance only allowing ECS service to connect
- CloudWatch log group for the container logs
- Subnets
- Secrets Manager for pulling Docker images from private GitHub Packages repo
Did it all in Terraform, and then added GitHub Actions to the Terraform repository to do terraform validate on PR and terraform plan && terraform apply -auto-approve on merge.
Then, yesterday, hooked up GitHub Actions on the Rust API server repository to build a version-tagged image and publish it to GitHub Packages, create a PR in the Terraform repository to update the ECS task definition for the new image, and, if it passes the PR checks, automatically merge it (which triggers the Terraform plan/apply run).
It did seem complex the first time I did it, but looking back over both the AWS and GitHub Actions configuration, I wouldn't change too much. I feel fairly confident this is secure, and I understand most of the configuration options and why they are there. Something that "simplified" it for me would just become a straitjacket as I get more proficient with AWS.
IaC 101 I guess, but I was chuffed when the Rube Goldberg machine whirred away after making a code commit to the Rust repo, and two minutes later my new code was running on ECS :)
Considering writing up a blog post about it just to firm up my own understanding as well...
What you're describing is what I sold the company on last year (save Letsencrypt, that's weird but whatever).
We only use terraform for the initial burn in (VPC, 2x public/2x private subnets, empty lb, bastion, and some subnet groups) but the rest is one to one.
Fargate isn't the cheapest platform out there but it's great for "I don't have any ops people" or "I have a fraction of several ops people not dedicated to my product." It takes a lot of patching and maintenance out of the equation.
If you want to give yourself a huge resume item, hook AWS WAF into the load balancer and play with it (you can alternatively hook it into CloudFront if you elect to implement that in front of your LB, though then you have to make sure you protect what CloudFront is talking to).
An easy task would be to geo-IP limit your application to the US, Canada, and Mexico. You can verify this by running your site through Uptrends and looking at which cities get 403ed: https://www.uptrends.com/tools/uptime
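For what it's worth, here's a rough boto3 sketch of that geo restriction; the ACL name and ALB ARN are placeholders, and Scope would be CLOUDFRONT if you attach it to a distribution instead of an ALB:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Block everything by default, then allow requests originating from US/CA/MX.
acl = wafv2.create_web_acl(
    Name="geo-limit-acl",            # hypothetical name
    Scope="REGIONAL",                # REGIONAL when attaching to an ALB
    DefaultAction={"Block": {}},
    Rules=[{
        "Name": "allow-us-ca-mx",
        "Priority": 0,
        "Statement": {"GeoMatchStatement": {"CountryCodes": ["US", "CA", "MX"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "allow-us-ca-mx",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "geo-limit-acl",
    },
)

# Attach the ACL to the load balancer (the ARN below is a placeholder).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
)
```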
I just want to mention that you can output the CloudFormation template that it generates by saying `docker ecs compose convert`, which is always handy. You can also use other flags to help with joining clusters, services, etc. Not saying it's a complete, production-ready tool, but it is a nice tool to help speed up the process.
Could you comment on why you are moving away from Elastic Beanstalk? We manage a fairly simple EB deployment and it seems perfect for our infra needs; I'm not able to imagine why we might need to scale out of it. If any other services are required, I'm tempted to just launch multiple EB deployments instead of going the Kubernetes/Fargate route.
Largely I don't think it's worth moving off EB except in specific scenarios.
I run a Rails monolith, and DB migrations are not well supported in Elastic Beanstalk when running Docker containers. There's no way to run a single container that 'completes' and then reports whether it was successful or not.
Currently I have a separate environment that holds a single EC2 instance (lots of idle time) to run migrations. It fires and the deployment process moves on. I have to hope the migration finishes before the next set of containers gets rotated out, 2-3 minutes later.
That's not a good policy to run a prod env by, so I'm switching to Fargate, where I can run a one-off task and poll for the result.
Other benefits include not paying for that dedicated EC2 instance's idle time, not worrying about EC2 management, and direct access to Parameter Store and secrets (EB can't do secrets from Dockerrun.aws.json yet =( ).
Drawbacks - I lose 'Rolling Updates/Deployments based on Health' and HealthD. Another big one: I lose access to the container unless I run sshd in it myself (don't do it).
I am running both environments side by side to evaluate, but realistically the migration requirement is going to push me to full Fargate.
I could run only the migration step in Fargate and keep the rest in EB - best of both worlds? But somehow that feels dirty... we'll see what I end up doing.
We have a similar setup — containers managed by Fargate — and solved the issue of migrations by:
1) having our app containers include the DB migration logic
2) on container startup, “check and run migrations” before app startup,
3) the trick: acquire a lock in postgres as part of step 2, so that only one node at a time can run migrations.
Migrations are run inside of a begin/commit, so with the lock we have reliable guarantees that 1) exactly one container at a time can try to run the migration; 2) the migration either completes or fails.
This setup has the benefit too that, if the migration fails for whatever reason, no new app containers will start in production. That is: we can basically trigger a production deploy and if it fails, the deploy halts and the previous app version remains up and serving traffic.
There are better ways of handling this, but for us at our scale, this has been both very simple and very reliable, which makes me happy. :)
Ah, very nice! Our setup is in Python, and because of some other dev tooling we have, it's very easy for us to have a simple decorator function that grabs the advisory lock. We can apply that decorator to any Python function, including the one that triggers our migrations. If on Rails, it seems like using the above link would be better.
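Not our actual code, but a minimal sketch of that kind of decorator using psycopg2 and a Postgres advisory lock (the lock key and DSN are made up):

```python
import functools

import psycopg2

MIGRATION_LOCK_KEY = 724001  # arbitrary application-wide lock id (hypothetical)


def with_migration_lock(dsn):
    """Run the wrapped function only while holding a Postgres advisory lock."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            conn = psycopg2.connect(dsn)
            try:
                with conn.cursor() as cur:
                    # Blocks until this session owns the lock, so only one
                    # container at a time gets past this line.
                    cur.execute("SELECT pg_advisory_lock(%s)", (MIGRATION_LOCK_KEY,))
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        cur.execute("SELECT pg_advisory_unlock(%s)", (MIGRATION_LOCK_KEY,))
            finally:
                conn.close()
        return wrapper
    return decorator


@with_migration_lock("postgresql://app@db.internal/prod")  # hypothetical DSN
def run_migrations():
    ...  # call your migration tool here; raise on failure so startup aborts
```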
I might not know what I'm talking about (our deployments don't do automatic migrations, for example), but if I understand correctly, your problem is synchronizing migrations with deployments, is that accurate? Couldn't you use something like GitHub Actions to run migrations and then deploy to EB?
Yeah the goal is to get the steps to be: Build, Deploy & Run Migrations, Deploy & Rotate containers.
GitHub Actions is CI/CD, and it can be used to kick off any deploy step and manage it, but it's not the thing that actually performs the migration. That would be like GitHub Actions running the webserver directly.
In this example, GitHub Actions needs to kick off an `eb deploy`, which it can totally do (I personally use CircleCI), but `eb deploy` (Elastic Beanstalk in general) is not designed to run a short-lived script on a single container AND wait for it to return.
You can run a short-lived script no problem, but you have no idea whether it was successful, because once the container becomes 'healthy' the command completes.
What I'm really waiting for is the container.lastStatus:'STOPPED' and exitCode: 0. Can't do that on EB.
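Roughly what I'm after, as a boto3 sketch (cluster/task names and the network config are placeholders): launch the one-off Fargate task, then poll describe_tasks until lastStatus is STOPPED and check the exit code.

```python
import time

import boto3

ecs = boto3.client("ecs")


def run_migration(cluster, task_definition, subnets, security_groups):
    """Launch a one-off Fargate task and return True only if it exits 0."""
    task_arn = ecs.run_task(
        cluster=cluster,
        taskDefinition=task_definition,
        launchType="FARGATE",
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": subnets,
            "securityGroups": security_groups,
            "assignPublicIp": "DISABLED",
        }},
    )["tasks"][0]["taskArn"]

    while True:
        task = ecs.describe_tasks(cluster=cluster, tasks=[task_arn])["tasks"][0]
        if task["lastStatus"] == "STOPPED":
            # exitCode only appears once the container has actually stopped.
            return task["containers"][0].get("exitCode") == 0
        time.sleep(10)
```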
EB is very limiting. If you want to deviate from its prescribed path at all, things become a giant headache.
Then there's issues with when it goes wrong. Troubleshooting is really hard, and knowing what actually went wrong requires a ton of digging around, and god forbid the environment gets stuck in a warning state. You can only download logs when it's "OK", so you're pretty much SOL if this happens. There's also the issue where you can't be sure what's running anymore. If a rollout didn't finish, what actually got updated? There's just no good way to find out, which is absolutely terrible for a deployment system.
All in all, the drawbacks + lack of options/control make EB useless for a lot of companies. Personally, I find it best to avoid EB and just go with tried and tested methods without it. That doesn't mean rolling out K8S or using fargate either. You can if you want, but there are a whole lot of other options.
This, absolutely this. If you don't have anyone on your team with sysadmin / DevOps experience then EB is an option. However when it breaks, and it does, it's a nightmare to get working again.
Terraform is a nightmare to learn for the first time but once you've wrapped your head around it, it's a thousand times better than EB. If Terraform is too complex for your needs then use CloudFormation directly instead of EB.
* It's more cost effective to place all the services on one cluster. As containers can share the instance resources it's easier to increase resource utilisation (not true w/ Fargate though).
* Support for new features on EB can be quite delayed. Eg. we need support for ALPN policies, it's a recent feature and it's not even in CloudFormation yet. With ECS we just manage the LB directly and we can do everything the API allows us to do.
* More granular control. Eg. during rolling updates we can decide a floor to how many containers are active at a given time (many others things like this).
* Integration w/ SSM and Secrets Manager.
* Better IaC support (with EB everything happens in configuration files outside of Terraform or Cfn resources).
As an alternative to Terraform, we've had great experience using Pulumi for exactly this stack. Writing Pulumi config in Python like the rest of our code has been great.
We’re thinking about doing a similar migration. One of my concerns is that the current Beanstalk infra isn’t described in code, and when I’ve experimented a little with Beanstalk in CloudFormation it’s been horrible to write - for something a lot simpler. I obviously want us to move to CloudFormation or something like it if we migrate. How good or bad would you say using it (or Terraform, if that’s what you settled on) for Fargate is?
If you're looking for confidence in infrastructure changes then I 100% recommend fighting through the 'horrible to write' CloudFormation/Terraform.
CloudFormation is infra as YAML and Terraform is infra as code. CloudFormation also has the CDK, which lets you use TypeScript and write infra as code that 'transpiles' down to YAML.
All the options feel terrible. It's like learning a whole new programming language where it takes 10 minutes to find out whether your code runs or which part got stuck.
But once you've got your stack working, changes are a breeze and can be done with way way more confidence. Spinning up clone environments is barely any work.
Is it terrible? Absolutely. Is it worth it? Again - absolutely.
If I had to do it again, I would pick the CDK (more powerful), but the fight would've taken longer.
Yep I definitely agree with the importance, it’s probably the most impactful technical debt we have. Clone environments is a big driving force for it too.
I guess what I was hoping was that Beanstalk wasn’t designed with infra as code in mind but things like Fargate were, and it had somehow become less horrible. Maybe that’s just never going to happen, though, and it’s the nature of the problem.
Appreciate the pointer toward CDK too, I have used it a tiny bit and it definitely seemed worth exploring further.
The code differences between ElasticBeanstalk and Fargate are not very far, but JUST far enough to be annoying.
Because you've worked with Beanstalk in the past, I would personally stick with it unless I'm looking to fight 2 issues at the same time - Learning Fargate & Learning CDK/CloudF/Terraform.
Fargate is not a panacea - Elastic Beanstalk does have very cool options/features.
I have done a tiny bit, probably worth trying to move some smaller CFN templates over to it in earnest to try it out a bit more. I can definitely see it being nicer for things like Beanstalk/Fargate.
I picked up CDK & built out the same essential setup + DB, an EC2-backed ECS instance for ad-hoc stuff, and a bunch of things to appease Config & Security Hub in ~420 lines of quite sparse Python, which builds out to ~3000 lines of CloudFormation YAML.
I feel CDK gets the mix between optional granularity & high-level constructs just right. While this plugin looks nice for a quick MVP stand-up, I'd be surprised to see it in use for production workloads.
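To give a feel for it, here's a heavily trimmed sketch of the kind of Python the CDK has you write (v1-style imports; the high-level constructs are real, everything else here is made up):

```python
from aws_cdk import core
from aws_cdk import aws_ec2 as ec2, aws_ecs as ecs, aws_ecs_patterns as ecs_patterns


class ApiStack(core.Stack):
    """VPC + ECS cluster + ALB-fronted Fargate service in a few constructs."""

    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        vpc = ec2.Vpc(self, "Vpc", max_azs=2)           # public + private subnets per AZ
        cluster = ecs.Cluster(self, "Cluster", vpc=vpc)

        # One high-level construct expands into the ALB, target group,
        # security groups, task definition and service.
        ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "Api",
            cluster=cluster,
            cpu=256,
            memory_limit_mib=512,
            desired_count=2,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
            ),
        )


app = core.App()
ApiStack(app, "api-prod")   # hypothetical stack name
app.synth()                 # emits the CloudFormation template
```

A few dozen lines like this really do expand into thousands of lines of CloudFormation, which is both the appeal and the thing to keep an eye on.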
I also often look at these tools and don't see clearly how they are meant to scale beyond small, isolated projects. The simple examples, where you let the tool take your small project and start serving requests from AWS, don't show me how I would use it in the real world.
This is far from the first time, but I get a weird, dirty feeling when I see infrastructure management tools directly reference a specific service provider's IaaS/PaaS services.
It's the most direct route versus formalizing a standard, but... ugh.
In my opinion, being cloud-agnostic is today’s being database-agnostic.
In more than two decades of professional software development, I have only once encountered a situation where an existing application was migrated to a different RDBMS instead of just being rewritten. Using an ORM framework for the sake of database independence seems hardly justified in that case.
I suppose that things will turn out much the same with cloud agnosticism.
Similar to the RDBMS case, being agnostic of the underlying technology prevents you from using that technology’s more interesting features, to the extent that you’re treating a database as a dumb data store or cloud hosting as a mere space for hosting your files.
I mostly agree with you but we definitely reduced reliance on AWS specific functionality after starting to offer on-premise software installations. Most of them don’t gain you THAT much and then the software really isn’t very portable.
I question this logic - there seem to be fundamental differences even in the basic services between the various cloud providers. For example, you can resize a compute-attached disk live in GCP, while you can't in AWS. Many intricacies like this are annoying, but at the same time advantageous if you know and use them. If you are primarily trying to offer another layer on top of these cloud services (like Snowflake), where cross-cloud compatibility is part of your selling point, then it makes sense to abstract the cloud layer. Otherwise, I feel it's better to choose a reliable vendor and stick with them, optimizing according to their strengths/weaknesses. Or go cross-cloud, but only for very specific technologies (IIRC BuzzFeed did this).
Not the person you are replying to, but some industries have regulatory mandates for vendor diversity in cases like this.
In addition, when your bills get to millions per month, the provider assigns quite a few TAMs and technical resources to your account. This can be helpful, but they also get a good understanding of where you are and are not in a position to walk away if things go sideways on billing. (Also, other providers will throw 6-7 figure credits your way to earn your business; being in a position to leverage them is a good thing.)
>Many intricacies like this are annoying, but at the same time advantageous if you know and use them.
This is very true. Building services or features that depend upon these is a good idea. Enshrining them deep within your assumptions and requirements about how you operate cloud-based workloads can work against you.
When your bill gets to be “millions per month”, you are already locked in. Any migration is going to be a painful multi-year process. “Infrastructure has weight”.
Have you been part of an integration with a health care system? They are so tightly locked in to their existing EMR/EHR system it would make you cry. Every third party vendor that comes along has to integrate with it.
Part of the work I did at the company I mentioned, where I was a dev lead, involved migrating us off Workday. Their entire process was integrated into it.
You're definitely locked in at some level, but if you are able and even demonstrate the ability to swing 30% of your workload over to another provider in a quarter, you're going to maintain some leverage.
Yes, have worked in healthcare on and off over the past 20 years and know precisely of what you speak. That's a different situation IMHO, same with ERP/HR/etc.
You can create/resize/delete non-root EBS volumes without any downtime of the attached host(s), as far as I'm aware? Pretty sure I've done so in the past. You'll have to resize2fs or whatever but it shouldn't take any downtime.
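A minimal boto3 sketch of the live resize, with a made-up volume id; the filesystem grow still happens inside the instance:

```python
import boto3

ec2 = boto3.client("ec2")

# Grow the volume while it stays attached; no detach or instance stop required.
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)  # new size in GiB

# Optionally poll describe_volumes_modifications until the change leaves
# the "modifying" state, then from inside the instance extend the partition
# and filesystem, e.g.:
#   sudo growpart /dev/nvme1n1 1 && sudo resize2fs /dev/nvme1n1p1
```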
EBS “appears to be local” but it is actually network-attached storage. Instance storage (is that the correct name?) is where the VM and the storage are on the same physical server, and it should be faster.
I was so disappointed when I learned that I'd been duped with regard to Terraform's abilities. My colleagues had explained that Terraform would allow us to avoid being locked in to one cloud provider, because it's "cloud agnostic", so we could easily move between Azure, GCP, and AWS.
Imagine my disappointment when I learned that no, actually, Terraform needs to know which type of EC2 instance you want; you need to configure an AWS load balancer, not just "a load balancer".
Terraform does NOTHING to help you move between cloud platforms aside from providing a language that works across providers. You still need to understand how everything ties together and is configured in each platform; now you just can't use the examples in the provider's documentation or the newest features.
Either your colleagues were very misinformed, or this was a slight misunderstanding on your part: Terraform IS cloud agnostic, but not in the way you understood.
Instead, it allows you to use the same tool and management model for resources on any of the three big cloud providers and about a hundred assorted SaaS providers, and most importantly, to wire them together (e.g. create a Mailgun configuration and set up its verification DNS records on AWS), all in code, and in the same workflow.
I had a (friendly) back and forth with someone from Hashicorp on HN a while back. They are careful not to call Terraform “cloud agnostic” for that reason.
Disclaimer: I'm entirely biased. I am the CEO of Qovery.
I created [Qovery](https://www.qovery.com) to address this problem of simplifying the Cloud and to streamline its use for any developer. Today, traditional players stack layers to simplify what they have created, but without stepping outside their own way of seeing things.
Qovery makes application deployment very pragmatic for developers because we put ourselves in their shoes. How does a developer think? They simply think about their code and want to focus on their mission, which is to address business issues, not all the plumbing around it.
I tried, but when I scroll down slowly I have to wait for the content to slide in from the left or the right, or to assemble at random. Still no good.
At my current startup, we're building something like this.
A "catalog" of architectures that you could use to create a complete cloud architecture on your AWS, GCP or Azure account in less than one minute.
For example, you could create a docker-based architecture with CI/CD, auto-scaling, zero downtime deployment, SSL, load-balancing, high availability and MongoDB in less than one minute in your own AWS account.
Like Terraform with the user-friendliness of Heroku.
AWS in theory has a product called Service Catalog that fills this role.
It's not super fun. Or at least my interactions with it haven't been so great.
Good luck.
I think your product would work great as an "a la carte" consultancy; it seems well aligned with startups, which may have some incentive to pick particular pieces but need help wiring them together - and would otherwise need to allocate a developer's salary to do it.
When you say "'a la carte' consultancy", do you mean with a one-shot payment?
Do you think that the "user-friendly" features (deployment monitoring/rollback, CI on all branches, health monitoring...), which you can't have with Terraform alone, are useful enough to ask for a recurring one?
What I mean is that early stage startups will have to bootstrap some cloud infra, but their engineering teams are laser focused on creating value for their users and validating product market fit. Every day that a technical founder/early hire spends writing their cloud story is a day taken away from their runway without (directly) creating value that could lead to revenue or funding.
People will pay you to build out their cloud infrastructure one feature at a time, or pick features off the cart as they want them (so maybe like an all you can eat cloud buffet?). There's a bonus to that too, iterative consulting is a great way to validate a tech stack for a problem domain before you figure out how to turn it into a recurring revenue source.
AWS for all its flaws is actually pretty good for this already, which I think is why it gets so popular with seed stage startups before they get locked in (plus the credits that they give out like candy...). I don't have to think about architecture, I google what aws cli command I need to set stuff up and move on with my life.
I have the feeling that docker compose only supports a small subset of ECS. ECS does things like load balancers, schedulers, and a parameter store in very specific ways. I don't think you'd want to use docker compose for that on AWS.
This unnecessary condescension is made all the funnier by the fact that the quote the expression alludes to wasn't actually written by Mark Twain, but in all likelihood by a French writer named Nicolas Chamfort.
(Attempting to) be sarcastic isn't necessarily condescending. My take from the unedited parent comment was that it was patronizing towards the grandparent ("in English it is" so-and-so), but anyway, thanks for the clarification.
I am pretty new to this space, but isn't one of the main advantages of WASI the fact that we can skip a lot of this containerization and simplify deployment?
Mirantis purchased Docker Enterprise at a firesale price last year. They've pledged support for two years, and they say they will be "continuing to invest in active Swarm development," but the writing is probably on the wall. Their release was more excited about K8s.
Whether it was luck or hard facts, Docker became the standard and the default, nearly to the point where juniors who only know how to program a "Hello World!" webservice will dockerize it. It's a ubiquitous technical skill, like basic SQL.
As someone who has very low interest in container technology itself, I have zero reason to inform myself about QEMU.
https://qemu.org is VM host software, like VirtualBox, but FOSS. Both serve a similar purpose to Docker: isolating guest userspace from the host and providing reproducible environments. Instead of a Dockerfile, I have a shell script that spins up a VM and installs the dependencies for me.
Exciting news for the Docker community. In our organisation we use the ECS service to orchestrate our services within the AWS ecosystem.
On a different note, I was recently looking to learn AWS concepts through online courses. After a lot of research I finally found an e-book on Gumroad written by Daniel Vassallo, who worked on the AWS team for 10+ years. I found this e-book very helpful as a beginner.
The book covers most of the topics that you need to learn to get started.
There must be a better way of doing all this.