Back in 2006, if you wanted to run a service, you often configured your own LAMP server.
Once everything was running, you didn't touch it and hoped there wouldn't be a power failure.
Then GAE came along, and for the first time you got security updates, uptime, and automatic scaling without needing a dedicated sysadmin.
Needless to say, I've been a customer and advocate of this service from the beginning.
My oldest service is still running. I just had to update a .yaml file twice... in a decade.
It's kind of sad how many people are wasting their time administering their own servers when it could be done automatically. I suspect the hype around k8s will waste millennia of man-power.
k8s and serverless are different things. They are different layers of the stack. K8s is just an OS spanning multiple machines (more specifically, a process-management layer that makes it look like we're dealing with such an OS when it comes to process management). A scalable serverless implementation needs something like k8s, or some similar process-management layer, under it.
It is, provided your service fits GAE's limitations. If it does, great! Otherwise, you are out of luck.
People are running a lot more workloads than LAMP servers.
Actually, as is well known, nothing is new in computer science. Ubiquitous computing, grid computing, and many other ideas I'm not aware of are all rather similar to serverless, in the sense of giving the application developer the least friction in accessing computing resources.
If I understand correctly, App Engine uses containers that do not re-use processes, so it's not serverless according to the above classification.
I wonder why App Engine didn't take off. The best answer I've heard was that the security sandbox was too small and people wanted more flexibility. What do people think?
BTW, as for Functions-as-a-Service (FaaS), we're thinking about running the Lambda code (which is open source) on Kubernetes: https://gitlab.com/gitlab-org/gitlab-ce/issues/43959#note_74... How hard would that be?
This is why companies like IBM can survive: tech is only one of a large number of factors in building a successful business.
2M annual revenue...
Please review the serverless frameworks that already run well on top of Kubernetes: s.cncf.io (and see the installable platforms section at the bottom).
I'm happy to connect you with the Kubeless, OpenFaaS, and Fission folks, who would all love to see an implementation in GitLab.
Interesting that AWS Lambda also runs containers instead of reusing processes. Their pricing suggests that Lambda is more efficient than containers, but maybe they priced it while anticipating being able to re-use processes later on.
We're already talking to the great people of Kubeless and OpenFaaS. Implementing either of them (or both) would be less work for us than getting Lambda to work on Kubernetes. But all the production usage and real-world examples on the internet seem to be around Lambda, so that gives us pause.
Since we're not trying to monetize a serverless framework we might be the only ones that can afford to make Lambda work on Kubernetes.
In the end we want to make something people want. And for me it is hard to judge what that is at the moment.
Processes on App Engine are reused between requests. We cache some things in instance memory.
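As a minimal sketch of what "cache some things in instance memory" means in practice: because the process is reused between requests, module-level state survives across them. The function names here are hypothetical, not App Engine APIs.

```python
# Instance-memory caching sketch: module-level state outlives a single
# request because the serving process is reused between requests.
# fetch_config / get_config are hypothetical names for illustration.
_config_cache = None  # lives as long as this instance's process does


def fetch_config():
    """Stand-in for an expensive lookup (e.g. a datastore read)."""
    return {"feature_x": True}


def get_config():
    global _config_cache
    if _config_cache is None:  # only the first request pays the cost
        _config_cache = fetch_config()
    return _config_cache
```

The caveat is that each instance has its own copy, and instances can be recycled at any time, so this is a cache, never a source of truth.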
Is that a pragmatic consideration, though, or just a conceptual one?
My main concern is with the word "exactly": cloud providers can charge a remarkable markup, which means that although one pays in proportion to one's use, that's not necessarily desirable if the alternative is, for example, to pay less than proportionally (e.g. via an economy of scale).
Is the FaaS markup significantly lower? Higher? Do the decision makers even care?
I'm somewhat familiar with the possibility of reducing costs at IaaS providers like AWS with things like dedicated instances and the marketplace. Is that available with FaaS? Does that not matter because it, essentially, removes the benefit of minimal support?
Of course self-hosted is more work than SaaS, but I'm not sure that maintenance and scale are much worse. With Kubernetes you can autoscale things easily.
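For instance, a sketch of the kind of autoscaling meant here, assuming a Deployment named `web` whose pods declare CPU requests (the name is a placeholder):

```shell
# Create a HorizontalPodAutoscaler targeting 70% CPU utilization,
# scaling the hypothetical "web" Deployment between 2 and 10 replicas.
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10

# Inspect current vs. target utilization for the autoscaler.
kubectl get hpa web
```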
At least "the cloud" didn't mean anything. This is worse, because it's misleading. There's a server involved and you know it. Anyone who hears it for the first time initially thinks it means "oh, so there's no server involved, it either runs on the user's local machine or uses a peer-to-peer infrastructure".
Then you have to google around and find out that it actually refers to the notion of having someone else maintain the server for you, just as was the case for CGI scripts on shared web hosts people were writing 20 years ago. Only once you realise this does it start to make sense.
There are most definitely wires, they're just not yours, and you don't have to worry about them.
- Serverless you say, hmmm, like pushing code,
and having *a server*, any server, execute it.
- We had it a decade ago, prior to the Cloud,
prior to the VPS, it was called CoLo. You were
pushing PHP code, and it worked on a server
we did not care about.