
Azure Container Instances - bdburns
https://azure.microsoft.com/en-us/blog/announcing-azure-container-instances/
======
Companies like Hyper [1] should be put on notice.

This is a surprisingly unique product. AWS ECS and GKE both require some form
of management of the underlying VM. A lot of that management is abstracted
away, but not in the same way this is.

That being said, pricing [2] seems odd. At $0.0000125/GB_Second and
$0.001/Core_Second, let's say you want to replicate an Azure A3 instance
(4 cores / 7 GB / $130). That would cost... over $10,000/mo. Is my math right
on this? It can't be.

For one-off jobs, maybe this makes sense, but as the backbone of a Kubernetes
pool or something I'm not so sure.

[1] [https://hyper.sh/](https://hyper.sh/)

[2] [https://azure.microsoft.com/en-us/pricing/details/container-instances/](https://azure.microsoft.com/en-us/pricing/details/container-instances/)

~~~
seanmck
The core second price at the top is incorrect. The per-second prices for cores
and GBs are the same: $0.0000125. We are getting that fixed now.

~~~
caleblloyd
So what you are saying is the calculation for 1 core, 1GB of memory for 1
month would be:

    
    
      0.0025 create + (0.0000125 * 1 core/second + 0.0000125 * 1 GB memory/second) * 86400 seconds/day * 30 days/mo = $64.8025/mo
    

Is that correct? $64.8025 per month is pretty steep for 1 core and 1GB of
memory. I guess this is targeted at short-lived jobs.
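That arithmetic can be sanity-checked with a quick throwaway script (the prices are the ones quoted in this sub-thread; `monthly_cost` is just an illustrative helper, not anything from an Azure SDK):

```python
# Per-second ACI prices as quoted in this sub-thread, plus the create fee.
CREATE_FEE = 0.0025          # $ per container create request
CORE_SECOND = 0.0000125      # $ per core per second
GB_SECOND = 0.0000125        # $ per GB of memory per second

def monthly_cost(cores, memory_gb, seconds=86400 * 30):
    """Cost of one container running continuously for a 30-day month."""
    per_second = cores * CORE_SECOND + memory_gb * GB_SECOND
    return CREATE_FEE + per_second * seconds

print(round(monthly_cost(1, 1), 4))  # 64.8025
```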

~~~
bacongobbler
Yes, the billing is specifically targeted around per-second execution. The
containers can start within a few seconds, they allow for customization of CPU
cores and memory, and they allow you to focus entirely on your container
without having to worry about any VM management. Traditional VM-based
infrastructure is still the way to go for long-running applications. This just
opens a new avenue into using containers in the cloud.

~~~
mistermann
What would be some example use cases for this?

~~~
bacongobbler
Off the top of my head, this could be used for backups, database migrations,
auditing software, report generation, email blasts, CPU/memory-intensive
operations... The possibilities are endless because the execution layer is
just a container, so you're not limited in what exactly is being run inside
it.

------
rattray
Tldr:

> An Azure Container Instance is a single container that starts in seconds and
> is billed by the second. ACI offer highly versatile sizing, allowing you to
> select the exact amount of memory separate from the exact count of vCPUs...
    
    
        az container create -g aci_grp --name nginx --image library/nginx --ip-address public --cpu 2 --memory 10
    

There's also a k8s connector, promising faster spinup times:

[https://github.com/azure/aci-connector-k8s](https://github.com/azure/aci-connector-k8s)

------
dragonwriter
So, are we now using “serverless” to just mean “dynamically scalable” the
same way that “cloud” used to?

Because, previously, “serverless” seemed to mean _not_ needing to deal with
any infrastructure lower-level than function calls (that is, a higher level
of abstraction than even a classical PaaS like GAE managed runtimes), while
container hosting, dynamically scalable or not, sits somewhere between
classic IaaS and classic application-language PaaS.

~~~
owebmaster
And both seem wrong to me. Serverless would be without a server, not with an
"intermittent" server. Serverless should describe architectures without
server-side processing per request (like Jekyll compared to WordPress).

~~~
resouer
Agreed. Please don't mix this up with serverless; ACI is definitely your
server. Hyper Func or AWS Lambda is much closer.

------
slap_shot
I'm a co-founder of a stealth-stage company that helps data analysts/data
engineers build data pipelines. Every "task" that can be done in our framework
is essentially just an image that can be reused over and over with different
settings.

We deploy these tasks across Kubernetes clusters on AWS, GCP, and Azure.

Since these tasks are scheduled irregularly and are short-lived, we had to do
a lot of work to dynamically scale the nodes up ahead of demand and down
after, and we typically have to pay for at least 10 minutes of usage no
matter how quickly the job finishes.

This "pay-by-the-second" will be a huge win for us. Most of our tasks deal
with S3/Redshift or GCS/BigQuery, so we can't immediately use this. But as we
onboard more clients working with Azure Storage/Data Lake/Data Warehouse, I
see some big operational gains for us.

Here's hoping we see similar developments across the other major cloud
providers. Very impressed with Azure's development in the last 3 years!

~~~
gnarmis
Check out Hyper.sh too. It abstracts away the whole datacenter -- you can use
`hyper` instead of `docker`, basically. You don't need to think about VMs ever
as a concept, containers run directly on the hypervisor. And they have Hyper
Func, an AWS Lambda-like alternative that uses images. And per second billing.

On the downside, they're small, they have only one data center, and they're
not Microsoft. But their tech is open source.

~~~
gnepzhao
We don't want to compete with the big providers, instead we open source the
tech to enable more container-native clouds, where the world will become a
seamless (portable) network for containers (different clouds are different
ports with the same image spec and API).

------
hardwaresofton
What's really got me excited lately is the combination of Ansible (for dirty
work) and container orchestration systems like Kubernetes/Rancher/etc (also,
tools that go from one orchestrated host to many like dokku and flynn).

While I appreciate the competition from GCE and Azure, what I really want is a
tool that will run in any one of their clouds, but offers the same ease-of-
management, and lets me go from one cloud to another or to a private cloud
without breaking a sweat. I want the competition to be 70% on price and 20% on
added-management-value and 10% on bundled services.

Terraform is basically this tool, but I want an even easier interface,
terraform still feels somewhat too specific to me -- I don't want to even have
to write config or specify some "aws" adapter that will make my config work on
some provider. I want instant, multi-cloud (possibly) heroku, using only the
network, hard-drive-space, ram, and lxc "primitives".

Someone (maybe me if I ever find time) just needs to get to work making F/OSS
versions of all the bundled tech (ex. blob storage, cloud function runners,
dynamically configurable DNS resolvers, simple alerting, etc) that runs in a
container, and then the question just becomes "where can I get the cheapest
most performant VPS that will host my containers".

~~~
convolvatron
the clear end goal here is that you don't have to deal with things like
'provisioning virtual disks' and 'doing ubuntu updates'. this whole
virtualization thing relieved you of the burden of buying pci ethernet nics
and rack mount brackets and provisioning cooling.

but really this whole business of writing chef recipes and provisioning
harnesses is the same kind of stuff. it seems important because you can't run
without it, and that's what your whole day is...but really it's pretty
secondary to what you're actually trying to accomplish (run a service).

it's interesting to think about what that world might look like...someone is
going to make something like that stick at some point. so...why are people
provisioning their own containers/vms instead of using the higher level
services right now?

~~~
hardwaresofton
For me at the very least it's cost -- I can purchase a cheap VPS and run a
bunch of stuff on it, with full control for much cheaper than what an AWS EC2
instance costs monthly (with better specs).

Compare this:
[http://www.ec2instances.info/?cost_duration=monthly](http://www.ec2instances.info/?cost_duration=monthly)

To VPS hosting from providers like:
[https://www.packet.net/bare-metal/](https://www.packet.net/bare-metal/)
[https://cc.delimiter.com/cart/dedicatedcore-vps/](https://cc.delimiter.com/cart/dedicatedcore-vps/)

The value provided by managed services is large, but honestly, a lot of
well-built infrastructure pieces have already earned enough trust not to go
down. Most startups/lifestyle companies/small businesses/whatever couldn't
bring down a Postgres instance on a reasonably-provisioned machine if they
tried (assuming the app is written with at least a smidgen of thought towards
performance).

~~~
cjsuk
This. Also, the cost advantage of your typical t2.micro instance disappears
when you instantly run out of CPU credits just running updates...

------
mankash666
If this works as advertised, it's awesome. It's like AWS lambda, without
language restrictions, CPU or RAM throttling, etc. Truly serverless

~~~
bpicolo
The main limitation with Lambda is really still code size: 50 MB is tiny for
all but trivial apps, thanks to libraries.

~~~
lindydonna
FYI, Azure Functions doesn't have this limitation. Code is stored in your own
storage account, and you can go as high as you want.

(Disclosure: I'm a product manager on Azure Functions.)

------
djhworld
What's nice about this is the directness of it.

As far as I understand it, with services like AWS ECS you need to provision
the infrastructure first and pay for its uptime, whereas this allows more
ephemeral containers to be run with minimal setup, and you only pay for the
compute time used.

It would only be useful for short-lived jobs, but it's a really nice idea
nonetheless.

~~~
anonacct37
I don't want to spread FUD, but my understanding is that the container
security model is not 100% and that's why people like AWS force your
containers to run on EC2 instances.

The container security model will almost certainly improve in the future, but
for now I'm only ok with other people in my same company sharing the kernel,
not incentivised attackers.

[edit]

I'm going to unfud my comment. Some further reading makes me think maybe they
spin up something like kvm containers and use a minimal distribution such that
they can get to "seconds". If it were me, I'd have pre-running instances of
the base image that were ready for a customer to attach and own.

~~~
bdburns
Azure Container Instances developer here...

Each container has hypervisor-level isolation. We are not relying on
kernel-level isolation for security isolation between different users'
containers.

------
Ghostium
"Each container deployed is securely isolated from other customers using
proven virtualization technology."

Does anybody know if they mentioned anywhere what they use? LXC or Jails? Or
some homegrown stuff?

~~~
benaadams
Just as a guess, Hyper-V containers? [https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)

~~~
gnepzhao
github.com/hyperhq/runv

------
andrewstuart
I would like it if the major cloud providers implemented microsecond boot and
teardown times for instances along with suitable pricing.

Unikernels and a whole zoo of other types of tiny operating systems would be
enabled by this.

I'm not a fan of containers - I feel they are reimplementing much of the
operating system infrastructure within the OS at the price of high and
unnecessary complexity.

It's frustrating that cloud computing has so many benefits, but at another
level we must wait and hope that Amazon Google and Microsoft are willing to
implement new architectures such as microsecond level boot and teardown.

------
sammorrowdrums
Does anyone know what this offers in terms of service discovery, network
security policy, and the ability to run multiple redundant copies of the same
service?

I'm guessing it's powered heavily by kubernetes, so maybe that answers the
question, but I'd be interested to know more about the details.

~~~
gabrtv
If you need service discovery, replicas, rolling deploys, etc. ACI probably
isn't for you. Check some of the experimental work we released today
connecting ACI with Kubernetes:
[https://github.com/Azure/aci-connector-k8s](https://github.com/Azure/aci-connector-k8s)

~~~
yebyen
Was looking to see if one of you guys turned up in this thread. "And how was
the Deis team involved in this?" Didn't see any mention of you all in the
article.

Thanks for showing up and weighing in on this! The k8s connector looks really
cool; is this a totally unique thing, or is there anything comparable for ECS?
I've never heard of a Kubernetes cluster with virtual nodes! Sounds like you
could use this connector and potentially save yourself from ever needing to
configure autoscaling in the Kube cluster.

I'm really curious how things are going at Microsoft for this incredibly
productive team of people, from Deis, who have put out so much great software
that has kept my attention. Hope that everything is great!

------
jbb67
This doesn't seem cheap: at £0.001 per core per second, that works out at
over £2,500 for a _single_ core running for a month. And that's without
memory costs, etc...

~~~
jo909
That is very likely a display issue; they round up way too much in that
table. In the pricing example they use $0.0000125 per core-second, which
would make it ~$33 for a month for a single core, and double that if you
include 1 GB of memory.
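For what it's worth, the arithmetic behind that estimate (a quick throwaway check, assuming the $0.0000125/second figure from the pricing example rather than the rounded table):

```python
PRICE_PER_CORE_SECOND = 0.0000125  # from the pricing example, not the rounded table
SECONDS_PER_MONTH = 86400 * 30     # a 30-day month

core_only = PRICE_PER_CORE_SECOND * SECONDS_PER_MONTH
with_1gb_memory = 2 * core_only    # memory is billed at the same per-second rate
print(round(core_only, 2), round(with_1gb_memory, 2))  # 32.4 64.8
```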

------
edpichler
I prefer my Docker containers inside Digital Ocean servers, a lot cheaper than
Azure, and with automatic backups.

------
jzs
While it sounds cool, I'm a bit dismayed by the naming choice, as ACI in my
head is short for the Application Container Image format used by CoreOS and
appc.

------
garganzol
I find it funny to see containers landing in Windows. While I fully approve
the containers on Linux, to me it looks like Windows does not really need
them: it already has a stable notion of executable files with full binary
compatibility. An old but gold EXE format is your container. Please excuse my
probable naivety, but am I missing something?

~~~
pm90
Containers are not just about providing stable executable files, but
consistent and reproducible _environments_. Besides, having a unified way of
deploying to Linux, Windows, or any other OS seems like a win-win, since your
backend (Docker, or whatever container runtime you use) would remain the
same.

------
eoinmurray92
Pretty awesome that this can be done in one command. At Kyso we deploy a lot
of data-science images to GCP, and it can be tricky.

Is the API support for this coming?

~~~
bdburns
API support is already there. The docs/SDKs need to be updated, but there are
some examples here:

[https://github.com/Azure/aci-connector-k8s/blob/master/synchronizer.ts#L27](https://github.com/Azure/aci-connector-k8s/blob/master/synchronizer.ts#L27)

[https://github.com/Azure/aci-connector-k8s/blob/master/aci.ts](https://github.com/Azure/aci-connector-k8s/blob/master/aci.ts)

Docs/SDK updates should roll out in the next 1-2 weeks.

~~~
seanmck
The swagger spec for the preview API is here:

[https://github.com/Azure/azure-rest-api-specs/tree/current/specification/compute/resource-manager/Microsoft.ContainerInstance/2017-08-01-preview](https://github.com/Azure/azure-rest-api-specs/tree/current/specification/compute/resource-manager/Microsoft.ContainerInstance/2017-08-01-preview)

Please send us feedback.

~~~
dougfish
Sean - I'd like to see this, but the link you provided seems unavailable to
me. I see only a 404 page. Maybe it's incorrect or access is restricted?

------
holografix
This looks like Heroku with more granular control of the container specs?

------
jwildeboer
Containers is Linux, even on Azure ;-)

------
tschellenbach
I don't see why you need a container for your cloud instances.
Puppet/Chef/Ansible are all much better solutions.

~~~
bacongobbler
The value-add here is not to be running cloud instances in containers. Rather
it allows you to run containers and be billed by the second, which opens the
doors to short-lived jobs running in containers on the cloud. It's closer to
serverless platforms rather than VMs.

------
ybrah
Lots of pains using Microsoft Azure at my workplace. We always use AWS when
we're allowed to, which isn't often enough.

Lots of weird issues.

~~~
Swinx43
It might be worth mentioning what some of those issues are, otherwise it just
sounds like FUD.

