
Functions as a Service: Serverless Framework for Docker and Kubernetes - ingve
https://github.com/alexellis/faas
======
alexellisuk
Hi, OpenFaaS author here - it feels like I'm late to the party. I can see
lots of folks chiming in with their projects, which you are welcome to do. I
agree that this is a confusing landscape with lots of options. Here's
my introduction to OpenFaaS (and Serverless), the top 3 new features and
what's coming next: [https://blog.alexellis.io/introducing-functions-as-a-
service...](https://blog.alexellis.io/introducing-functions-as-a-service/)

If you have 10-15 minutes you can just jump in with your own serverless
function in Python with this guide - [https://blog.alexellis.io/first-faas-
python-function/](https://blog.alexellis.io/first-faas-python-function/)
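For reference, the handler in that style of guide boils down to a single Python function that receives the request body and returns the response (a minimal sketch below; the exact template scaffolding and greeting text are illustrative, not copied from the guide):

```python
# handler.py - the general shape of an OpenFaaS Python function:
# one entry point that takes the request body as a string and
# returns the response as a string.
def handle(req):
    """Echo the request back, upper-cased."""
    return "Hello! You said: " + req.upper()
```

The template tooling wraps a function like this so that each HTTP request arriving at the gateway becomes one call to `handle()`.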

~~~
alexellisuk
I presented OpenFaaS at DockerCon in April, showing an Amazon Echo Dot with
voice skills, integration with 3rd party data, GitHub/S3 and auto-scaling in
response to demand - it's a nice high-level overview -
[https://www.youtube.com/watch?v=-h2VTE9WnZs&t=1058s](https://www.youtube.com/watch?v=-h2VTE9WnZs&t=1058s)

------
_Marak_
If anyone is interested in doing this without the additional requirements of
Docker and Kubernetes check out Microcule.

[https://github.com/stackvana/microcule](https://github.com/stackvana/microcule)

~~~
rcarmo
This is cute. And it looks like it can leverage Docker as well, with a few
tweaks. I especially like the agnosticism of it.

~~~
_Marak_
Thanks! I built this by refactoring out the core technology which hook.io uses
in production and putting it into a separate open-source project.

By design, Microcule operates without needing containers or virtual machines.
It's up to the user how they wish to isolate their services. I have heard from
a few users who have successfully incorporated Microcule with Docker in
production.

See: [https://github.com/stackvana/microcule#no-
containers](https://github.com/stackvana/microcule#no-containers)

------
dundercoder
Is the goal of Lambda/OpenFaaS basically to be cgi-bin in a box (a container)?
Would sandboxed CGI/FastCGI be faster at forking than spinning up a container?

~~~
alexellisuk
That's not far off what's happening, but in containers. For a deeper dive on
the mechanics of OpenFaaS, check out the intro
[https://blog.alexellis.io/introducing-functions-as-a-
service...](https://blog.alexellis.io/introducing-functions-as-a-service/) and
then the watchdog README here -
[https://github.com/alexellis/faas/tree/master/watchdog](https://github.com/alexellis/faas/tree/master/watchdog)
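The watchdog idea is essentially fork-per-request: the HTTP request body is piped to a fresh handler process on stdin, and whatever the process writes to stdout becomes the HTTP response. A minimal sketch of that pattern (the upper-casing "function" here is a hypothetical stand-in, not part of OpenFaaS):

```python
import subprocess
import sys

def invoke(handler_cmd, body):
    """Watchdog-style invocation: fork one process per request,
    write the request body to stdin, read the response from stdout."""
    proc = subprocess.run(handler_cmd, input=body.encode(),
                          capture_output=True, check=True)
    return proc.stdout.decode()

# Hypothetical "function": a one-liner that upper-cases its input.
handler = [sys.executable, "-c",
           "import sys; sys.stdout.write(sys.stdin.read().upper())"]

print(invoke(handler, "hello world"))  # -> HELLO WORLD
```

This is the cgi-bin comparison in miniature: the container holds the runtime and the binary, and the watchdog pays a process fork per request rather than a container start.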

------
rcarmo
In case someone is looking for a way to run and deploy this to a public cloud
(in my case Azure), there is an open pull request here that will set up a
dynamically resizable Swarm cluster:

[https://github.com/get-faas/labs-azure-scaling-
template/pull...](https://github.com/get-faas/labs-azure-scaling-
template/pull/1)

This allows the FaaS runtime to dynamically allocate more Swarm nodes based on
CPU load (indirectly, using Azure VM scale sets).

I've been meaning to see how I could tie Azure Container Instances into FaaS,
but currently lack the time (and to do that it would need to cleanly separate
the admin UI from the function endpoints, something that hadn't been
implemented yet a few weeks ago).

~~~
alexellisuk
Function endpoints take a route of /function or /async-function, all
administration goes via the /system/functions route, and the UI is hosted at
/ui - so you have all the separation in place.
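Those prefixes can be captured in a tiny helper, which also shows why a reverse proxy could split invocation and admin traffic by path alone (a sketch; the gateway address and function name are assumptions for illustration):

```python
def invoke_url(base, name, asynchronous=False):
    # Function invocations live under /function/ (or /async-function/);
    # everything administrative lives under /system/ and the UI under /ui,
    # so the two kinds of traffic never share a path prefix.
    prefix = "/async-function/" if asynchronous else "/function/"
    return base + prefix + name

base = "http://gateway:8080"           # assumed gateway address
print(invoke_url(base, "echo"))        # http://gateway:8080/function/echo
print(invoke_url(base, "echo", True))  # http://gateway:8080/async-function/echo
print(base + "/system/functions")      # admin API
print(base + "/ui")                    # web UI
```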

There's an open issue for ACI - would be great if you wanted to contribute to
the project.
[https://github.com/alexellis/faas/issues/117](https://github.com/alexellis/faas/issues/117)

~~~
rcarmo
I meant separating the admin UI completely, into its own process/container and
IP/port, distinct from the reverse proxy.

Route separation just means I can DoS the admin UI by invoking functions
repeatedly - which will eventually happen even with "production" loads.

------
fohara
Seems similar to OpenWhisk: [https://github.com/apache/incubator-
openwhisk](https://github.com/apache/incubator-openwhisk). The built-in
metrics reporting of OpenFaaS looks useful, though.

~~~
alexellisuk
For a great live demo of the auto-scaling and metrics shown in Grafana, see my
video from DockerCon's Cool Hacks closing ceremony -
[https://www.youtube.com/watch?v=-h2VTE9WnZs&t=1058s](https://www.youtube.com/watch?v=-h2VTE9WnZs&t=1058s)

------
keithwhor
This is great stuff! If anybody's looking further up the stack at executing
functions over HTTP through a gateway, we've developed a formal specification
called FaaSlang [1] that includes type-safety mechanisms and standardized
error reporting.

This can be layered on top of your FaaS provisioning system, whether it sits
in front of cloud-native offerings like Lambda or Azure Functions, or you're
running a system like OpenFaaS yourself. FaaSlang defines _execution
semantics_; it is not opinionated about where functions are stored or how
they're run.

[1]
[https://github.com/faaslang/faaslang](https://github.com/faaslang/faaslang)

~~~
hbt
This is pretty cool. You can also convert it to Swagger and RESTify your
function.

The main benefit is annotations in comments.

------
DJN
Well chosen name.

Two questions:

1. How does this square up with the efforts within the Serverless community?
[https://github.com/serverless/serverless](https://github.com/serverless/serverless)

2. Does OpenFaaS assume that the smallest unit of work for each function or
domain of functions is always a unique container? In other words, does it
support scenarios where one needs to deploy a new function into an
existing/running container?

~~~
alexellisuk
The Serverless Inc project helps you deploy to closed-source cloud vendor
function products using YAML. OpenFaaS is an open-source Serverless framework
that does largely the same thing, but you can use cloud servers or your own
on-prem equipment. Check out the intro for more comparisons to other projects:
[https://blog.alexellis.io/introducing-functions-as-a-
service...](https://blog.alexellis.io/introducing-functions-as-a-service/)

------
amq
Is it realistic to maintain a complex system built around functions and
microservices in a small organization?

~~~
vanviegen
Why would you want to? Honest question.

~~~
hbt
Distributed computing.

By having a function with a REST API in a container, you can scale it on
demand with no infrastructure management.

~~~
spdionis
> with no infrastructure management

I have my doubts about that part. Well, unless you mean physical infrastructure.

------
zokier
Is this just for web stuff? One of the key aspects (imo) about Lambda is how
it integrates to other AWS services, allowing the code to react to all sorts
of events.

------
mobiletelephone
This looks very interesting. Does anyone have any experience with this for C++
functions vs AWS Lambda?

------
edude03
Also checkout Funktion

------
droelf_
Last time I checked (a couple of minutes ago), NONE of those "serverless"
frameworks were truly serverless. Running your stuff or even your "functions"
in the "cloud" is still running it on a server. Just because you do not know
which specific server is running it at a given time does not make it
"serverless". Why is it so hard to use the correct term?

~~~
cwilkes
Do you also rail against "wireless" phones, since there are wires involved
somewhere?

It is called serverless because, in the ideal state, you let the scaling of
the servers be handled by someone else. Just as with cell phones, you only
care about the last hop before it gets to you.

~~~
Can_Not
No, he calls his phone "towerless", "networkless", and "carrierless" because
he doesn't personally own or manage his own towers, networks, or carriers.

