That's not really correct. EBITDA is not "earnings before bad stuff" - that stuff is all a percentage of your income and/or write-downs that have no real impact on the business, so financial folks like to exclude it as noise (since all it really does is reduce your taxes). Examples:
- You bought a company and now it's worthless
- You made money and now you have to pay taxes on it
- You bought a bunch of computers and now they're three years old and worth less than when you bought them
- You bought something 10 years ago, and, instead of paying for it all up front, you paid over time
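The add-backs above are just arithmetic on the income statement. A minimal sketch, with entirely made-up figures, mapping each bullet to a line item:

```python
# Hypothetical income statement figures (all made up, for illustration only).
net_income = 100
interest = 10       # you paid over time instead of all up front
taxes = 30          # you made money and now you have to pay taxes on it
depreciation = 25   # the three-year-old computers worth less than you paid
amortization = 5    # write-down of intangibles, e.g. the acquired company

# EBITDA adds all of those "noise" items back on top of net income.
ebitda = net_income + interest + taxes + depreciation + amortization
print(ebitda)  # 170
```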
Networking on AWS works fine with no additional networking as well.
Overlay networking is not required if you're running within a bunch of nodes that can see each other. Only if you get more complex will you require something, and there are quite a few solutions (Flannel, Weave, Calico, etc)
But most of them suck, and they suck even more when you've configured them badly (badly meaning you used an option that comes by default). However, with the wider VXLAN adoption most performance issues are fixed; still, there could be some improvements. I also think that IPv6 could fix a lot of these things...
Yeah, sorry if that came off as snarky; I appreciate the suggestion.
I guess I can understand the cost-cutting mentality that drives Google, AWS, etc. to limit these kinds of offers to "new customers" only. Just remember to consider what kind of incentives you're creating. By effectively punishing developers for being early adopters/experimenters, you're making them wary of signing up early for whatever new and interesting stuff you announce in the future.
I actually find this a common issue with the presumed sales pipeline I keep encountering:
1. The prospect finds us.
2. He's interested and signs up for a trial.
3. We hopefully convert before the trial is over
What actually tends to happen:
1. I find something that looks interesting
2. I sign up
3. Real work intervenes
4. Several months later I have some time to look again but my trial has expired.
To be fair, most companies respond to a quick email, but they could be proactive and do the following:
1. If no activity is detected after the first day, pause the trial.
2. Some time later, send an email saying "We've paused your trial. Please choose either: 1. to reactivate it, 2. to be reminded in another x weeks, or 3. to never hear from us again."
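A minimal sketch of that pause-instead-of-expire logic, assuming a hypothetical 14-day trial and a one-day inactivity grace window (all names and thresholds are made up):

```python
from datetime import datetime, timedelta

TRIAL_LENGTH = timedelta(days=14)      # assumed trial length
INACTIVITY_GRACE = timedelta(days=1)   # pause if no activity after the first day

def trial_state(signed_up, last_activity, now):
    """Return 'active', 'paused', or 'expired' for a trial.

    With no recent activity the trial pauses rather than silently
    expiring, so the user who comes back months later isn't punished.
    """
    if last_activity is None or now - last_activity > INACTIVITY_GRACE:
        return "paused"
    if now - signed_up > TRIAL_LENGTH:
        return "expired"
    return "active"

now = datetime(2015, 6, 1)
print(trial_state(now - timedelta(days=3), None, now))                      # paused: never logged in
print(trial_state(now - timedelta(days=3), now - timedelta(hours=2), now))  # active
```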
"Today, each Kubernetes cluster is a relatively self-contained unit, which typically runs in a single "on-premise" data centre or single availability zone of a cloud provider (Google's GCE, Amazon's AWS, etc)."
Agreed. That's the only reason we're not on Kubernetes right now. It dramatically increases the amount of infrastructure we need when we're forced to run three Kubernetes clusters just to run a single MongoDB replica set. But I love everything else Kubernetes is doing, so I'm very eager to see that addressed.
I'm not sure that's what I would call an integration... AWS provides for easy host management and elastic scaling, traditionally through the integration of the ELB with autoscaling groups, and now with lifecycle hooks. I'm not aware that Kubernetes integrates with this stuff in any way or provides a sufficient alternative. Reading through the documentation, I was not able to find information about connection draining on rolling updates, taking hosts out of service for maintenance/scaling/replacement, and so on. I am aware that Kubernetes will run on AWS now and that there is a guide for setting it up.
However, this really wasn't the point of my comment, which is that security for application secrets (and AWS API access) is currently a sore spot. It would be nice if Kubernetes would adopt some of HashiCorp's stuff like Consul, Consul Template, and Vault. Maybe that's too far up the container stack, though, and a popular bundling of technologies will appear.
Docker and Kubernetes work hand in hand. That is to say, if you choose Docker as your container format, Kubernetes runs Docker on every node to run your containers. Kubernetes focuses on _what_ Docker should run, and how to move those container workloads around.
Docker also has Docker Swarm, which can be thought of as a competitor in some ways. But Google will be a heavy supporter of their container format for a long time to come.
So Kubernetes complements Docker, and that's how: Docker runs the containers; Kubernetes decides what runs where.
I had tested Docker just for fun, thinking that maybe I could work it into the way I work, and sure, it is a super tool for development (far better than virtual machines), but deploying was kind of nightmarish; from what I understood, Docker wasn't at the time ready to be a deployment tool.
Does Kubernetes fix or extend Docker in this way?
Think of them as different layers. If you're a front end web dev, it's sort of like SASS vs CSS: the former is a layer on top of the latter that makes it more powerful/convenient/easier to use.
At the bottom of the stack (most low level) is the Docker runtime. It knows how to run containers on the local machine. It can link them together, manage volumes, etc., but at the core it is a single-machine system. (That's probably why Docker, Inc. has developed their proprietary orchestration tools like Swarm.)
Layered on top of that are container-native OSes like CoreOS. CoreOS provides facilities for running distributed containers on multiple physical nodes. It handles things like replication and restarting failed containers (actually fleet "units"). This is a huge improvement over vanilla Docker, but it's still pretty low level. If you want to run a real production application with dependencies it can be tedious. For example, linking together containers that run on different nodes. How does container A find container B (which may be running on any one of N nodes)? To solve this you have to do things like the Ambassador Pattern. Any complex application deployment involves essentially building discovery and dependency management from scratch.
Layered on top of this is Kubernetes (it runs on CoreOS but also Ubuntu and others). As said elsewhere in this post, k8s provides an opinionated workflow that allows you to build distributed application deployments without the pain of implementing things like low-level discovery. You describe your application in terms of containers, ports, and services and k8s takes care of spawning them, managing replica count (restarting/migrating if necessary) and discovery (via DNS or environment variables).
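For instance, discovery via environment variables looks roughly like this from inside a container. Kubernetes injects `<SERVICE>_SERVICE_HOST`/`<SERVICE>_SERVICE_PORT` variables for each Service; the service name `redis-master` and the fallback values here are made up so the snippet also runs outside a cluster:

```python
import os

# Kubernetes would inject REDIS_MASTER_SERVICE_HOST / _PORT for a
# Service named "redis-master" (hypothetical name). Fall back to
# localhost defaults when running outside a cluster.
host = os.environ.get("REDIS_MASTER_SERVICE_HOST", "127.0.0.1")
port = int(os.environ.get("REDIS_MASTER_SERVICE_PORT", "6379"))
print("would connect to %s:%d" % (host, port))
```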
One of the very convenient things about k8s (unlike vanilla Docker) is that all containers within a pod can find each other via localhost, so you don't have to maintain tedious webs of container links. In general it takes the world of containerization from "Cool technology, but good luck migrating production over" to "I think we could do this!".
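A hedged sketch of what that looks like in a pod manifest: two containers in one pod, so "web" can reach the cache at localhost:6379 with no container links at all (the names and image tags below are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache     # hypothetical pod name
spec:
  containers:
  - name: web
    image: example/web:1.0 # hypothetical image
    ports:
    - containerPort: 8080
  - name: cache
    image: redis:2.8       # reachable from "web" at localhost:6379
    ports:
    - containerPort: 6379
```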