The Original Serverless Architecture Is Still Here (khanacademy.org)
65 points by dangoor 6 months ago | 42 comments



I still remember the absolute shock GAE was to me.

Back in 2006, when a lot of people wanted to run a service, they configured their own LAMP server. Once everything was running, you didn't touch it and hoped there wasn't going to be a power failure.

Then GAE came along, and for the first time you got security updates, uptime, and automatic scaling without a dedicated sysadmin. Needless to say, I've been a customer and advocate of this service from the beginning. My oldest service is still running; I've only had to update a .yaml file twice... in a decade.
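For anyone who hasn't seen it, the whole config is about this small (a sketch of a Python 2.7-era app.yaml, not my actual file; the module name is made up):

    runtime: python27
    api_version: 1
    threadsafe: true

    handlers:
    - url: /.*
      script: main.app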

It's kind of sad how many people waste their time administering their own servers when it could be done automatically. I suspect the hype around k8s will waste millennia of man-power.


>the hype around k8s will waste millennia of man-power.

k8s and serverless are different things; they are different layers of the stack. K8s is essentially an OS spanning multiple machines (more specifically, a process-management layer that makes it look like we're dealing with such an OS when it comes to process management). A scalable serverless implementation needs something like k8s, or some similar process-management layer, underneath it.
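To make "process management layer" concrete, this is roughly the object that plays the role of a process table in k8s (a minimal sketch; the names and image are hypothetical):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-service
    spec:
      replicas: 3                # "keep 3 copies of this process alive"
      selector:
        matchLabels:
          app: my-service
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
          - name: my-service
            image: example/my-service:1.0   # hypothetical image
            ports:
            - containerPort: 8080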


> It's kind of sad the amount of people that are wasting their time administrating their own servers when it could be done automatically.

It is, provided your service fits GAE's limitations. If it does, great! Otherwise, you are out of luck.

People are running a lot more workloads than LAMP servers.


I guess the same goes for Lambda.


Same, I’ve had a single service run (almost) unchanged since 2009… it just works.


Title should be "GAE is an example of serverless, developed in 2006" (the year might be wrong).

Actually, as is well known, nothing is new in computer science. Ubiquitous computing, grid computing, and many other ideas I'm not aware of are all rather similar to serverless, in the sense of giving application developers the least friction in accessing computing resources.


The trend is for applications to use less and less memory: on bare metal, an application used the whole machine; a virtual machine adds OS overhead per application; in a container, an application uses just one process; and with serverless, even the processes can be reused.

If I understand correctly, App Engine uses containers that do not reuse processes, so it's not serverless according to the above classification.

I wonder why App Engine didn't take off. The best answer I've heard is that the security sandbox was too restrictive and people wanted more flexibility. What do people think?

BTW, as for Functions-as-a-Service (FaaS), we're thinking about running the Lambda code (which is open source) on Kubernetes: https://gitlab.com/gitlab-org/gitlab-ce/issues/43959#note_74... How hard would that be?
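For reference, the contract a Lambda runtime has to satisfy is calling a function of this shape (a minimal Python sketch; the body is a placeholder):

    def handler(event, context):
        # Lambda passes the triggering event plus runtime metadata; a host
        # running this on Kubernetes would have to reproduce this calling
        # convention and the event formats.
        return {'statusCode': 200, 'body': 'hello'}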


- App devs were not ready for GAE's model.
- Google was distracted by social & mobile, and was investing in YouTube as well.
- Google apparently was not convinced that Cloud was the next big thing.
- Many other things ...

This is why companies like IBM can survive: tech is only one of a large number of factors in building a successful business.


Thanks for sharing your perspective. I wonder whether app developers were really not ready, given that they did embrace Heroku.



Note that AWS Lambda under the hood runs containers. I don't believe any serverless implementation is reusing processes.

Please review the serverless frameworks that already run well on top of Kubernetes: s.cncf.io (and see the installable platforms section at the bottom).

I'm happy to connect you with the Kubeless, OpenFaaS, and Fission folks, who would all love to see an implementation in GitLab.


Thanks for commenting Dan!

Interesting that AWS Lambda is also running containers instead of reusing processes. Their pricing suggests that Lambda is more efficient than containers, but maybe they priced it anticipating being able to reuse processes later on.

We're already talking to the great people of Kubeless and OpenFaaS. Implementing either of them (or both) would be less work for us than getting Lambda to work on Kubernetes. But all the production usage and real-world examples on the internet seem to be around Lambda, so that gives us pause.

Since we're not trying to monetize a serverless framework, we might be the only ones who can afford to make Lambda work on Kubernetes.

In the end we want to make something people want. And for me it is hard to judge what that is at the moment.


> The trend is for applications to use less and less memory: on bare metal, an application used the whole machine; a virtual machine adds OS overhead per application; in a container, an application uses just one process; and with serverless, even the processes can be reused.

Processes on App Engine are reused between requests. We cache some things in instance memory.
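The pattern is just module-level state (a simplified Python sketch, not actual App Engine internals):

    import time

    _cache = {}  # module-level: survives across requests on a warm instance

    def get_expensive_value(key):
        # Recomputed only on a cold start (fresh process) or a cache miss;
        # later requests served by this instance just hit the dict.
        if key not in _cache:
            time.sleep(1)              # stand-in for expensive work
            _cache[key] = key.upper()  # stand-in for the real value
        return _cache[key]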


I think what I meant to say is that the processes are reused for different applications. I assume that on App Engine one process is used for only one application (like our application servers on GitLab.com).


Ahh! Yes, each App Engine app lives in its own container & process. App Engine Standard further sandboxes things (you can't use arbitrary C modules from your Python code, for example).


I tried to use GAE back in the day, and it suffered from cold-start issues. Same story as if you try to develop a Lambda-backed website with Java on AWS: your Lambda may take 20 seconds to start from cold.


Lambda also has very long (15+ second) cold-start times if you run Lambda functions attached to a VPC (such as Lambda functions that connect to your MySQL server).


Interesting. Heroku had cold-start issues when you were on the free tier and didn't have any requests for a couple of minutes. Did GAE work the same?


Yes, exactly. They would shut down your container (or whatever it was).


Makes sense. With Heroku I used to run a monitoring service that made a request every minute to prevent this.
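Nothing fancy, something along these lines (a minimal Python sketch; the URL is hypothetical):

    import time
    import urllib.request

    APP_URL = 'https://example-app.herokuapp.com/'  # hypothetical app URL

    while True:
        try:
            urllib.request.urlopen(APP_URL, timeout=10).read()
        except Exception as exc:
            print('ping failed:', exc)
        time.sleep(60)  # one request per minute keeps the free dyno warm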


Yeah, other languages (Go, Python, Node) can cold start much more quickly.


If you want to self-host FaaS, there's OpenWhisk. The problem I have seen with the whole concept of self-hosted FaaS, though, is that you lose some of the key benefits of "serverless": no maintenance of the underlying systems, and paying for exactly what you use. Self-hosted means you have to maintain the underlying systems and pay to keep those servers running 24/7 at sufficient scale to support your usage model. It may make deployment easier once you adapt to it, but it's not really giving you the full benefit.


> pay for exactly what you use

Is that a pragmatic consideration, though, or just a conceptual one?

My main concern is with the word "exactly". Cloud providers can charge a remarkable markup, so although one is paying in proportion to one's use, that's not necessarily desirable if the alternative is, for example, to pay less than proportionally (e.g. via an economy of scale).

Is the FaaS markup significantly lower? Higher? Do the decision makers even care?

I'm somewhat familiar with the possibility of reducing costs at IaaS providers like AWS with things like dedicated instances and the marketplace. Is that available with FaaS? Does that not matter because it, essentially, removes the benefit of minimal support?


So far, everyone I meet who uses FaaS in production is using Lambda. It might be because they want to consume it as SaaS. But maybe a self-managed solution would have to be compatible with Lambda to gain adoption.

Of course self-hosted is more work than SaaS, but I'm not sure that maintenance and scaling are much worse. With Kubernetes you can autoscale things easily.
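"Easily" as in a one-liner (assuming a deployment named my-service; CPU-based scaling is the stock mechanism):

    kubectl autoscale deployment my-service --min=1 --max=10 --cpu-percent=80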


What would the goal of self-hosting a FaaS solution be?


On-premise deployment might be one goal.


Indeed! That's a good one.


I sometimes think people are overly keen to look for a Reason why something did or didn't take off, when attributing it to the whims of fate would be closer to the mark.


Every time you use the term "serverless", a small, innocent puppy dies.

At least "the cloud" didn't mean anything. This is worse, because it's misleading. There's a server involved and you know it. Anyone who hears it for the first time initially thinks it means "oh, so there's no server involved, it either runs on the user's local machine or uses a peer-to-peer infrastructure".

Then you have to google around and find out that it actually refers to the notion of having someone else maintain the server for you, just as was the case for CGI scripts on shared web hosts people were writing 20 years ago. Only once you realise this does it start to make sense.


Well since you want to be pedantic about it, if you serve your app from a local machine or a machine in a peer network, you've still got a server.


I think the horse is out of the barn at this point. It may not be perfect, but we don't have a better term yet so this seems to be it.


It's like wireless.

There are most definitely wires, they're just not yours, and you don't have to worry about them.


That's a weak analogy, since the vast majority of products that are called "wireless" really are substituting, with radio (or optical), for products that would otherwise use wires.


Isn't GAE PaaS? I thought serverless meant programs that served only individual endpoints, like AWS Lambda.


You get a list of requests you need to handle. Nothing says your binary only responds to one, or that there is a 1-1 correspondence.
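For example, a single GAE app can route as many endpoints as you like (a sketch using the Python webapp2 framework; the handlers are made up):

    import webapp2

    class HomePage(webapp2.RequestHandler):
        def get(self):
            self.response.write('home')

    class WidgetList(webapp2.RequestHandler):
        def get(self):
            self.response.write('widgets')

    # One app (one binary) behind many routes -- no 1-1 correspondence
    # between endpoints and deployable units.
    app = webapp2.WSGIApplication([
        ('/', HomePage),
        ('/widgets', WidgetList),
    ])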


This isn't an article on PHP served from shared/managed hosting, is it? ;)


There is an important difference... PHP on shared hosting (often connecting to MySQL) most definitely did not autoscale and self-heal. Even if you had a load balancer connecting to multiple machines, there was no automatic provisioning of more machines. Your database didn't automatically grow to handle more data and traffic.


This is actually a good thing to point out, IMO. That model did sort of resemble a primitive version of serverless; that's why it was so popular. It was quick, easy, and affordable despite its many downsides.


If the downvotes are any indication, apparently pointing this out is less popular than the model itself.


    - Serverless you say, hmmm, like pushing code, 
      and having *a server*, any server, execute it. 

    - Yep.

    - We had it a decade ago, prior to the Cloud, 
      prior to the VPS. It was called CoLo. You were 
      pushing PHP code, and it worked on a server 
      we did not care about...


Most of the cloud providers moved away from metered database pricing, and into provisioned tiers. SimpleDB is an example of that early spirit. FaunaDB (my employer) aims to fit serverless deployments like a glove. You can get started here: https://blog.fauna.com/serverless-cloud-database or go deep here https://serverless.com/blog/faunadb-serverless-authenticatio...


I am very excited that AWS Aurora will 'soon' be available as 'serverless': a full SQL database that comes up almost instantly (https://aws.amazon.com/rds/aurora/serverless/)



