
Serverless now integrates with IBM OpenWhisk - AlaskaCasey
https://serverless.com/blog/openwhisk-integration-with-serverless/
======
dkarapetyan
What I don't get about these things is how they can be performant. If for
every action you are loading an entire runtime, executing something, and then
throwing the runtime away, you are paying the cost of starting the runtime
every time. It gets worse the more actions you execute per second.

So then you build a "serverless" thing, and then it needs to scale, at which
point you have to rewrite the whole thing as a classic "serverful" application
that loads the runtime once, caches stuff, etc. So why not just build it
right, with proper discipline, the first time?

~~~
Matthias247
"Load the whole runtime on each request" is nothing new. That's also how PHP
and other CGI technologies worked, where a new process was spawned for each
request. There are certainly ways to speed it up (maybe by reusing a
pre-spawned process?) - but I guess for a lot of Lambda use cases performance
is simply not an issue, just as it wasn't an issue in old CGI-based web
applications.

~~~
throwaway2016a
Well, one misconception is that the code needs to be downloaded into the
execution context every time. That's different from a PHP file being loaded
off disk (or from opcache, as is more common in PHP). It's more like if you
zipped up your PHP program and, every time someone hit a page, the zip file
was downloaded and extracted onto the server.

However, Lambda takes that one step further. The entire app stays running in
memory. No loading or recompiling code. The function is sent instructions via
interprocess communication.

So you can take advantage of things that are difficult (though not impossible)
to take advantage of in PHP such as connection pooling and shared memory.

~~~
Matthias247
If you are reusing the app for multiple function executions, how do you
enforce things like isolation between functions and execution limits
(processor time, etc.)? I think if 2 functions are running in parallel you
can't kill the whole process when the first one exceeds its limits, because
the other one is still running. And in a runtime like node.js it's not
possible to isolate different pieces of code reliably from each other. Or are
a special runtime and things like V8 isolates utilized?

~~~
throwaway2016a
You cannot do that. Each deployed function corresponds to exactly one entry
point in code. You can however pipe multiple HTTP endpoints through the same
entry function like a router.

So let's say you do that.

If two clients request the same function at the same time it will run the
function in two different containers. One for each client. They are isolated
(think multiple instances of Node running in multiple Docker containers). They
might as well be on different machines entirely.

The platform will not reuse the same container unless the previous request to
that container has finished.

------
jtth
These headlines sound like a chatbot having a stroke. I don't know what any of
these words mean apart or together.

------
cphoover
Awesome! Glad they stuck to their word and added platform agnosticism. Excited
to give it a try.

------
gaius
One question: why?

~~~
throwaway2016a
Serverless (in this context) is a framework for running functions on the
cloud. The more platforms the framework supports, the more choices the
developer has. Which is a win-win.

If you're asking why serverless in general... scaling to millions of users
for barely more than the cost of running servers, with little to no devops
cost. With fewer users it is significantly cheaper. A low-traffic serverless
app could cost pennies a month, and you have the full power of your favorite
language at your disposal. (assuming your favorite language is JavaScript,
Java, or Python)

