GitLab Serverless (about.gitlab.com)
233 points by sriram_iyengar on Dec 12, 2018 | 122 comments



Can someone explain to me what the hype about serverless is? As I understand it, serverless is just good old webhosting, but in the cloud. By webhosting I mean providers offering a LAMP environment where customers just upload their code and don't manage anything else. Is this correct? Then how is serverless different from this?


To me, it's about the abstraction layer that you interact with when bringing up your stack. That abstraction layer has gradually moved higher over time as providers try to deliver more for their customers:

1) Hardware) 'I want to run Linux + Stack + App'. You get an empty rack, a network port and a power socket. You have to buy iron, install and maintain OS/platform, runtime environment and your service. Scaling requires more work and buying more stuff.

2) VMs) 'I want to run Stack + App'. The machine and OS are provided and maintained for you. You still have to build the runtime environment and your app. Scaling doesn't require capex like (1), but it's still slow - you have to decide how much capacity you want.

3) Containers) Still 'I want to run Stack + App', but with faster scaling, so you can be responsive rather than provisioning in advance. A halfway step to:

4) Serverless) 'I want to run App'. Runtime environment, hosting etc. are all taken care of. You just write the app and everything else works at the capacity you need.

We've been aiming at this last level of abstraction for a while. As you pointed out, the classic VPS with LAMP got you some of the way there, but without the scaling advantages. Google App Engine got closer but was its own special world. Containers are an important technical enabler to doing it more portably.
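
To make level 4 concrete, here's a minimal sketch using AWS Lambda's Python handler convention (the event field is an invented example): the 'app' is just this one function, and runtime, hosting and scaling are the provider's problem.

    # Minimal FaaS sketch: the provider calls handler() once per request.
    # There is no server code here at all.
    def handler(event, context):
        name = event.get("name", "world")  # hypothetical request field
        return {"statusCode": 200, "body": "hello, " + name}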


I'd like to think 4) is actually Containerless or PaaS - "I want to run App". Serverless or FaaS takes this a step further to "I want to run App on demand". What do you think?


I was going to write a point disagreeing with you, but the logic didn't work. Because PaaS requires scaling in advance, while FaaS can scale as requests come in, PaaS exposes you to one more (important) detail - the number of instances. So yes, FaaS has gone up the abstraction stack one more step.


let me guess, you don't have any experience in ops?

1) Scaling doesn't really require more work. It needs a bigger investment, because a hardware hypervisor costs significantly more than a VM... after all, that bare metal can host tens of VMs.

2) You still need to manage your Linux system and OS if you're buying a VM... the only thing you don't need to worry about is hardware. So a faulty hard drive, hypervisor failover and similar stuff is taken care of. That doesn't mean it works, however. It just means that somebody else will take care of it... though it might not work, and you're entirely at their mercy to fix it.

3) Containers don't inherently give you better scaling either. Creating images from VMs has been done for ages before Docker was a thing, and provisioning a new node from a VM image is pretty easy if you're already using Terraform or similar.

You're just able to utilize a higher percentage of your hardware with containers, as you don't need to virtualize the kernel... so it's a cost-cutting issue, not a scaling one.


“let me guess, you don't have any experience in ops?”

This is precisely the benefit. It means you don't need as much ops experience to get the same output that once required a lot of it.

This is pretty neat, because now more teams can do more stuff, since lots of orgs don't have good ops people or can't pay for them.


I would dissent from #3 in that containers also have the inherent advantage of being able to scale and start quickly. A full VM takes time to boot and run through whatever setup/initialization needs to happen at the OS level, where a container just needs to copy/deploy and start.

In many cases it's an additional abstraction, but there are advantages beyond just better hardware utilization.


An optimized vm boots in < 2 seconds...

Heck, a hardware server can boot in seconds if you skip POST entirely


My experience with Azure and AWS is that it takes quite a bit longer than 2 seconds before a new server is up.


I would say something like step 3 is mainly needed to get to step 4. While it doesn't have to be containers most of the implementations, as I understand them, use containers as the underlying mechanism to offer up serverless / FaaS. So you have container runtimes, microVMs (Amazon Firecracker) and... What other components are out there?


Is Heroku considered Serverless? Or is it still more like Digital Ocean (which I consider to be more like #2)?


Heroku is more like containers in this list. Your app is packaged up, runs on as many machines as needed, but you pay while it is running even if idle.

As soon as the idle state is abstracted away, then it's serverless.


There is a step between 1 and 2, where you pay a per-month fee for an actual server that the provider owns. That was pretty standard operating procedure in the mid 2000s.


With serverless you only pay per request and you (almost) don't have to care about scaling, while non-serverless implies that you have to manage how many server instances you want, keep them running, and pay for them even if you have no traffic.


I feel the whole "not paying for an underused server" argument is pointless when a serverless solution is way more expensive than paying for an idle server.


It isn't more expensive!

Our main business path gets about 60,000 reqs/min. For that, sure, AWS Lambda would never compete. Not only would it cost more, but you'd probably hit the concurrency limit in AWS Lambda.

But we also have an admin UI that gets about 2500 visits a day. We're running that for about $4 a month - with virtually no operational burden whatsoever, and with a reassurance of resilience.

We worked out that the inflection point of value is at about 3 million requests a month. (That's a fag-packet estimate with an arbitrarily expensive 'request'; your mileage may vary.)

It's no silver bullet, but for some applications, particularly personal projects, low-traffic sites and startup scenarios, it can be ideal.
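
For a rough sense of where an inflection point like that comes from, a back-of-envelope sketch (the constants are Lambda's published rates as of this writing, $0.20 per million requests plus $0.00001667 per GB-second; the 1 GB / 1 s 'expensive request' is an assumption, and the free tier is ignored):

    # Back-of-envelope Lambda bill; all parameters are assumptions.
    def lambda_monthly_cost(reqs, mem_gb=1.0, dur_s=1.0):
        request_cost = reqs / 1e6 * 0.20                   # $0.20 per 1M requests
        compute_cost = reqs * mem_gb * dur_s * 0.00001667  # per GB-second
        return request_cost + compute_cost

    print(lambda_monthly_cost(3_000_000))  # ~$50.60/month, i.e. mid-size VM money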


I just hate that discussions end up with people talking about keeping the function warm, and quickly go downhill from there.

A managed app runtime can be great though: for those who tried Google AppEngine back in the day, it was amazing to work with.


>> Our main business path gets about 60,000 reqs/min. For that, sure, AWS Lambda would never compete.

I'm curious about this statement, because I'm currently working on a project where the intent is to port a legacy app to AWS Lambda. The legacy app is currently distributed across 96 VMs and handles ~50K reqs/min during normal times, but can run into ~3M reqs/min during high demand.

Are you saying AWS Lambda cannot scale to handle this type of demand, and if so, can you point me to some resources/references that explain this?

For the record, I'm not the one that came up with the architecture.


mcrittenden is right in that it probably will cost more to run than just having the old processes run on ECS/EC2. Lambda charges a premium for maintaining its control plane and developing the product, which only makes sense to pay if your app has 'idle time'.

But there is also a per-account, per-region concurrency limit of 1000 parallel executions. That's shared across all lambdas. If you hit it, requests will not be fulfilled; increasing this limit is entirely at Amazon's discretion, and they won't necessarily do it. I stand corrected, see comment below.

https://docs.aws.amazon.com/lambda/latest/dg/limits.html

In your case, 3M reqs/min will be fine only if each request can be completed in no more than 20 milliseconds, and even then you'll be on the knife edge.
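
That 20 ms figure falls straight out of Little's law (concurrency = arrival rate x duration):

    # Little's law check against the 1000-execution limit
    rate = 3_000_000 / 60   # 3M reqs/min = 50,000 reqs/sec
    duration = 0.020        # 20 ms per request
    print(rate * duration)  # 1000.0 concurrent executions -- right at the cap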

If you wanted to reap the benefits of Lambda on the development side, lose the limits and keep server costs low(er), you could deploy OpenFaaS or similar on ECS. However, you then lose the operational benefits of a managed solution.


The 1000 concurrency is a default limit, and we're very happy to raise it if you need more. The instructions for a limit increase request are here: https://docs.aws.amazon.com/lambda/latest/dg/concurrent-exec...

(Source: I work on Lambda at AWS)


Fair play - added a correction. That statement came from discussions with our DevOps facility rather than first-hand contact with Amazon, so there could be something else going on.


I think he/she meant that Lambda wouldn't compete on pricing (as opposed to on performance).


Both pricing and performance. It will scale, but it’s gonna cost a ton, and won’t be faster than running servers (and could be considerably slower for a percentage of requests).


So the main advantage is that it scales easily, but then it's only cheaper if you actually don't need it to scale?


But it's so much cheaper, bud (it's not). It maybe is, but only when you're doing <= $10/month.


I don't know if it's the HN effect, but I never had a scaling problem. Prior to the acquisition, the story goes that WhatsApp used to run on a single server.


It depends on the type of application. But when at 5pm your traffic grows by 200x (hint: it doesn't) and you petascale your AWS VPS megafleet (hint: you don't, and especially not your database), then you are en route to unicorn-scale hypergrowth cash-money-team-happiness.


My personal take is this...

1. Start with Docker (or Dokku) on a single server; it's easy enough to get going. Automate CI/CD. Make sure your backups are working and relatively frequent.

2. Break your lower environments out onto a separate server.

3. Break out your database, and set up redundancy at that layer. Leverage DBaaS if you can.

4. Grow to multiple app/api instances for redundancy, and setup rolling deployments.

5. Scale Vertically (bigger servers/instances)

6. Migrate to Kubernetes and expand your automation and tooling.

7. Break apart your application into smaller pieces to scale individually. Possibly leveraging platform tools like Lambda/Functions.

8. Work towards redundant datacenters and application data sharding to deliver an experience closer to your users.

By the time you get to 5-6, you should be making money or have a good investor strategy in place for capital. There's very little need to go all out when at concept or earlier release stages.

6-8 may take place in a different order, depending on your needs... but again, you should have money or have raised capital by this point, or otherwise you have a relatively good problem to have.

IIRC Stack Overflow grew vertically pretty big on a single server, then two (db/application split).


Also, you don't need to worry about server security or server setup outside of the code you're running. With a typical VPS, you'd have to install software updates yourself and have a plan for how to set up a server from scratch again if needed (nontrivial if you do a lot of setup manually over SSH).

Essentially, a good way to set something up that requires minimal maintenance.


They were talking about webhosting, not VPSs. With webhosting, the hosting company usually manages the software; you just dump your code (or cgi-bin binary) into a directory. For example https://www.hostgator.com/web-hosting


Just like old shared webhosting :)


I made a quick list[0] of cheap shared hosting providers I found on LEB[1]

[0] https://gist.github.com/shaunpud/35f77b542eaec7c7024bbb15c2e... [1] https://lowendbox.com


Most webhosting plans are fixed monthly fees, AFAIK. The only one I knew that billed for used resources was NearlyFreeSpeech.



Was, at the time I last looked at it :)


The way I see it, it's basically just a cost-saving measure. If you have some very varying workloads, instead of paying for having a bunch of servers running all the time (and idling most of that time), when you actually need the extra compute power you can let AWS Lambda spin up a server, run the code, then spin it down again. That's clever and all, and might save you a pretty penny if you have huge workloads only part of the time - but for most users, it's hardly worthy of the hype.


It does feel like just FTPing a .php file into your LAMP webhost, but with autoscaling, slicker UX and much more vendor lock-in.


> and much more vendor lock-in.

This is a myth that really needs to be busted. There is almost no lock in with serverless. The serverless products from all the major providers are nearly identical.

The lock-in comes from the services you are using within a particular cloud, which you can avoid if you want by running everything else in k8s.


regarding your k8s comment, did you mean kubeless?


k8s == kubernetes, like i18n == internationalization


With the difference that you pay per request instead of per month / year. Mind you, given how cheap LAMP webhosts can be, I'm fairly sure they would be cheaper if you don't need the autoscaling.


Seriously, VPS hosts are so far ahead of the cloud when it comes to cost/resource ratio. I can get a 4-core 4GB 120GB SHDD instance for ~$2/month.

https://lowendbox.com/blog/hostedsimply-4gb-ram-ssd-cached-v...

EDIT: cheaper even, $1.58/month:

https://lowendbox.com/blog/n3servers-vps-hosting-and-hybrid-...


Low-cost VPSs are great, but their uptime is usually garbage. I myself run almost everything on a 9€/mo Contabo box (6c, 16GB) and while it's nice and powerful, it gets randomly rebooted way more often than acceptable (and I've heard similar about other hosts). For the "I'll get an angry phone call if the app is down for half an hour at 4 am" category of projects, the cost of a reliable enough VPS comes very close to the equivalent in the cloud, where you also get the benefit of pointing a finger at your provider if something goes wrong.


Doesn't happen to my VPS. I think my uptime is 100% this year.


Does it really? As far as I understand it, in a traditional webhost you have a certain set of resources allocated to your account, which you pay for even when 0 traffic hits it. Serverless consists of an isolated chunk of logic that runs only when a request comes in. With the traditional model, when traffic is 0, server resources sit blocked on an idle service.


It's kind of similar, but imagine PHP without the ability to break anybody else on the web host, and with resources as infinite as you wish to pay for. So no server migrations. No debugging performance problems.

So it is just the natural progression of a shared LAMP host that’s worked well for 20 years.


To add to the other comments, using serverless with AWS Lambda has been very useful for us in our data pipeline, in large part because you can hook functions up to respond to a number of events like S3 uploads, updates to Dynamo tables, or an SNS or message queue topic. It scales up effortlessly and is very cheap to run. Could we have done this while avoiding serverless? Sure, but we would have had to maintain quite a bit more infrastructure - why do that when someone else can handle it cheaply? I realize my example is provider-specific, but I think having functions act as asynchronous data transforms and workflows is where serverless really shines, especially given the invocation overhead of 20-30ms or so when hot, and several seconds when cold.

We have run into instances where the serverless model doesn't work so well, like when we updated our GraphQL API to query a Postgres store - each Lambda invocation created its own connection pool and would overload the database. Currently there's no way to reliably persist those connections somewhere else and have the lambdas pick up a persistent connection (like how a Redis session store would work) - perhaps that will change. The lambdas work best when they're more-or-less "pure functions", so if you have to keep things like SQL connection pools or a session around, you still need your own persistent server.
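
For what it's worth, the usual partial mitigation is to hoist the connection to module scope so warm invocations of the same container reuse it. A sketch (the DSN is hypothetical, and this doesn't solve the underlying problem - every concurrent container still opens its own connection):

    import psycopg2  # assumes the driver is bundled with the lambda

    conn = None  # module-level: survives invocations while the container is warm

    def handler(event, context):
        global conn
        if conn is None or conn.closed:  # cold start, or connection dropped
            conn = psycopg2.connect(host="db.internal", dbname="app", user="app")
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            return cur.fetchone()[0]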


Serverless is identical in every respect to traditional hosting/managed services with one exception: micro-billing. In web hosting you are often either constrained by the "level" you pay for, or have to configure scale and pay for the size of your domain. True serverless should auto-scale on demand without your intervention, and it should only bill you for what you use. Have a slow month and are idle? Pennies. Have a phenomenal month and huge usage? Dollars. There is less server, not no server, but instead of worrying about the OS or even platform level dependencies, you can focus just on the bit of code you want to run. If your server is a house and a virtual machine is like renting a house, platform-as-a-service is an apartment and serverless is a hotel (thanks to Scott Hanselman for that analogy).


People abuse the term serverless, yes - to mean "cloud", or "somebody else's server", or "just write the function you want, we will find a server to run it on and report the result back to you".

The meaning I like to associate with serverless is fully peer-to-peer, not client-server, architecture - for example any Dat website on the Dat project, or any zite on ZeroNet.

There you don't have a server, really: a zite is a peer in a torrent swarm like any other, can be found like any other torrent by its infohash (SHA-1) in the DHT, and can go offline at any time while the swarm continues as if nothing happened - new people can still access the zite from peers. And despite rendering in a web browser, the zite can't (and shouldn't) really talk to any server, only to a peer-to-peer network. There is no "form submit" allowed on a zite, as that defeats the purpose of ZeroNet and similar techniques.

Another example is gun.js


You are coming up with your own definition of serverless. For most people, serverless means:

1. Per-invocation billing

2. No need to provision resources ahead of time

Taking AWS's example

1. EC2 is not serverless, but Lambda is.

2. RDS is not serverless, but Aurora Serverless is.


It's not their own definition, this was the default definition just a few years ago.


People started to use the term for something else, and now that has become the main meaning.

"Serverless" is nowadays "AWS Lambda" (and similar products).

I wish people called it FaaS (function as a service), but they call it "serverless", and that's fine I guess.


Yeah, now the term is an oxymoron, "serverless" but there is a server and a client.


This is not what "serverless" means nowadays.

What "serverless" means, in the context of the article (and basically how anyone uses it today), is "AWS Lambda" (or FaaS). The name is kinda stupid, since it still uses servers; it's just that you don't need to care about them and scaling is done magically by Amazon.


That is a very interesting topic. If you were able to add secure computation with some kind of homomorphic encryption [0], it would be possible to get rid of centralized compute vendors. The only issue is the performance of such computations, which are still much cheaper to run unencrypted in a trusted environment.

[0] https://en.wikipedia.org/wiki/Homomorphic_encryption


Serverless means "I don't have to maintain or even know anything about a server", not "there is no server".


If so, any memory-managed language should be called memoryless, any non-low-level language should be called OS-less, and so on. It's an absurd naming scheme.


It's the new PaaS, but scoped down to a smaller payload of just a single function, although most platforms now let you upload an entire app or container anyway.

The other big change was removing the concept of servers/instances altogether so there is no step-wise scaling at all that you can see or control.


It just simplifies an entire field of software engineering (devops) into a simple "one-click" setup.

Before, you had to open an account, provision a server, figure out your machine image, install dependencies, figure out error/process handling, deploy...

...figure out optimization, payment, RAM usage, longevity, how many instances do I need? What are peak hours, and should it autoscale?

Now you just run a single command and it's up in the cloud, with all of that figured out for you automatically.



I'm a huge GitLab fan, but this seems a little tangential to their core service, if not orthogonal... I've been pretty pleased with the feature progress of GitLab overall though; for example, support for merge request-only CI steps just landed[0]

I see this as a play to start leveraging all the machines they have hanging around for running jobs and gitlab instances, and I'm not against it as long as GitLab itself doesn't suffer. Are they planning to pivot to becoming a cloud provider like DigitalOcean? It feels like if they can manage orchestrating serverless functions, they can manage orchestrating containers or VMs...

[0]: https://www.reddit.com/r/gitlab/comments/a54mc3/support_for_...


> start leveraging all the machines they have hanging around for running jobs and gitlab instances

If anyone ever figures out the isolation problem, I'd host an instance and/or CI servers in exchange for licence credit. Unfortunately the current gitlab-runner doesn't run on ARM [1], but I wanted CI/CD to build containers for the project; 98% of the time it's idle.

1: https://gitlab.com/gitlab-org/gitlab-runner/merge_requests/7...


I'm actually working on a system to do exactly this. I wanted to sell dedicated 1-core, 2GB RAM runners for $8/month (which is less than a t2.micro); maybe I should architect it so people can contribute runners and make it a marketplace...

Would love to hear some feedback if anyone has some.


Personally I run a gitlab-runner inside an LXC container. It seems to work well.


I think they are managing the continuous deployment part of the devops cycle, which is traditionally done using webhooks.


Any chance you've got a documentation link for this? My Google-fu is failing me and this is a feature I've also wanted for ages!



Also there's the general `only` section of the `gitlab-ci.yml` documentation: https://docs.gitlab.com/ee/ci/yaml/#only-and-except-simplifi...

(I have this page favorited)
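
For reference, a minimal .gitlab-ci.yml sketch of the merge-request-only behaviour (the job name and script are invented; `only: merge_requests` is the newly landed bit):

    test-mr-only:
      script:
        - echo "runs only in merge request pipelines"  # placeholder job
      only:
        - merge_requests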


To clarify, we are not getting into the cloud play at all. We support multi-cloud deployments of any kind now including serverless. So you can use the GitLab UI to plan, build, deploy, manage serverless apps/functions with the rest of your code. Knative helps us deploy to any cloud.


Serverless has been on the roadmap for a little while: https://gitlab.com/groups/gitlab-org/-/epics/155 - I saw a demo of using OpenFaaS (https://www.openfaas.com) as a serverless endpoint for GitLab around 3 months ago. I think that the effort here is to control software, the deployment to Kubernetes is just an additional thing that is tied to the overall software development life cycle. I don't think that GitLab is looking at running against AWS as a cloud provider, but more at being a control plane for managing software. And for that, for me, it is far ahead of the competition.


I've read the announcement now twice but I can't make out what this is actually good for.

I know what they mean with serverless, so that's not it.

- is this tied to CI in any way so stuff can get tested via knative?

- is this CI for functions to be deployed (via OpenFaaS or Lambda)?

- is this just a frontend for serverless backends?

I'm totally lost. I use GitLab for hosting a bunch of git repos, with included CI runners, so maybe that's the wrong angle of approach? Which _problem_ does it solve? Is it to make using serverless easier? If I wanted to use 'functions', would I first have to install GitLab? Why would I use a code hosting platform for that?


Overall, this is trying to give more support for pushing functions to Kubernetes, using Knative as the method. I believe there will be some support built into GitLab for Knative functions. I also know this already works with OpenFaaS and OpenFaaS Cloud, whereas Lambda is normally not deployed to Kubernetes, although you can use GitLab CI to do that.


Thank you, that makes it a bit clearer.


I would appreciate it if they first focused on getting their Kubernetes integration working with anything other than Google cloud. Or maybe add support for RBAC, or a variety of other things that make some of these amazing features like auto devops workable for the common man without having to spend weeks setting it up.

If I try to install gitlab on Kubernetes I have no less than 4 ways to do it, and none of them work perfectly.


> If I try to install gitlab on Kubernetes I have no less than 4 ways to do it, and none of them work perfectly.

Yes. Until a couple months ago the 'official' way was a deprecated (but not yet replaced with anything not marked alpha) chart: https://gitlab.com/charts/gitlab-omnibus

FWIW, I use that to run GitLab on Kubernetes on AWS (not EKS).

Though the new 'container-native' one (more separate containers for each service, instead of one big omnibus container for the web app + git server + everything else) is now out and 'official': https://gitlab.com/charts/gitlab

So I hope to upgrade to that soon.


Support for RBAC for GitLab Managed Apps was added in 11.4 [1]. You can also connect any Kubernetes cluster hosted anywhere - see the docs on that [2].

What problems are you hitting when installing GitLab on Kubernetes? You can always open an issue about it so it can be prioritized for fixing.

[1] - https://gitlab.com/gitlab-org/gitlab-ce/issues/29398

[2] - https://docs.gitlab.com/ee/user/project/clusters/#adding-an-...


Please also note that you don't have to install GitLab itself on Kubernetes in order to add a Kubernetes cluster to your projects or groups. We updated the docs yesterday to prevent confusion about this https://gitlab.com/charts/gitlab/merge_requests/599/diffs

We had multiple ways of installing on Kubernetes before but we reduced it to one canonical helm chart https://docs.gitlab.com/ee/install/kubernetes/gitlab_chart.h...


This seems to be the first feature that runs actual workloads, instead of out-of-band management?

I'm a bit lost when it comes to GitLab because they are trying to do so much, so I'm confused if this is a change in strategy or natural next step for them.

Edit: Clarifying question.


Their CI system runs actual workloads as well.

To be honest, I don't understand why they prioritized this, but it seems to me like GitLab has a path to becoming a lot more than what Atlassian is right now, by owning not just the devops but also the cloud itself.

It sort of makes sense if that's something they aim for, but it'd be an uphill battle, and to be honest I don't see it ever working out. But there are huge rewards if it does. It's kind of the reverse of what Amazon tried to do (AWS is trying to compete with GitHub etc. by having its own code hosting and code deploy services, which doesn't ever lead anywhere except for die-hard AWS fans).

I'm giving GitLab the benefit of the doubt on this for one simple reason: I was very doubtful of their current strategy of building a monsterhouse of features into their product. Then I actually used it and was impressed with how smooth and useful it all is. It also took actually using it to understand what exactly they were doing.

Most people are confused about Gitlab today because they compare it to Github, but Github really only does code hosting and barely even touches development lifecycle; whereas Gitlab is more like Atlassian (and goes beyond that, even). It owns code hosting, product management, development, and deployment. So the "natural next step" in that direction is devops, which they have already been playing with.


> Their CI system runs actual workloads as well

Only if you attach k8s, right? Or can you leave services deployed with normal CI jobs that don't end?


I don't think they are trying to do more than they can handle. Every feature I have used on GitLab has worked amazingly well, so it doesn't seem like they are spread thin.


I don't agree. A lot of features seem half-baked. The public GitLab issues are filled with features that don't seem to work or are poorly understood because of their sparse or incomplete documentation. These issues often just languish until the GitLab bot closes them for inactivity - though for many of them, the only inactivity is GitLab's own failure to respond.


Everything works great on GitLab, until the moment you have to migrate.

The issue lists are a landmine of migration experiences.


Hi, Community Advocate from GitLab here. Could you please reference what issues you are experiencing? We'd love to follow up on it.

Also, if you want to write the details, it would be great to open an issue https://gitlab.com/gitlab-org/gitlab-ce/issues. Thanks in advance.


Hi,

https://gitlab.com/gitlab-org/gitlab-ce/issues/27217

Something like this. When I fixed the VERSION manually, there are other errors, as expected.

However, the peculiar thing is that the errors are not idempotent. Sometimes there's a postgres wrong-column error; other times it says you can't import the project because you can't have hyphens in project names (which I don't).

Migration is hard, considering everything can be linked in Gitlab.


One, having to come from the same version.

Two, having to connect to the same block storage in exactly the same way to make everything work.

I just want to upload a backup that gitlab makes, and upload that somewhere (anywhere) to restore from that one tarball, regardless of where I’ve decided to host my current instance or data (or indeed, if I’ve decided to use block storage again or not).


Thanks for the feedback.

1. Yes, you have to restore a backup on exactly the same version of GitLab and then upgrade from that version. If you restore it to a different version, the database schema isn't correct.

2. You can restore a GitLab backup anywhere https://docs.gitlab.com/ee/raketasks/backup_restore.html as long as it is the same version. I think the output is a single file.


Thanks for the response.

1. GitLab is able to migrate itself up to a new version, so is there anything preventing it from treating the restored backup as a certain (older) version and then running the last few migrations after the restore?

If I set up a new instance it’s generally one or two minor versions higher than the old instance.

2. I swear I restored from my omnibus installation to Helm-based GitLab and lost all my uploads and registry images. I cannot see from the docs why that would happen any more, though.

Maybe it was the opposite, with me restoring the backup to the same block storage devices, and gitlab hiccuping on the fact that all the files it wanted to restore were already there.


We've opened https://gitlab.com/charts/gitlab/issues/1015 to discuss this comment. I don't believe the db migrations are what are holding us back, as much as the other parts of the backup/restore.

For number 2, that sounds unexpected, and we would love for you to give us more details. https://gitlab.com/charts/gitlab/issues


1. I'm not sure and will ask.

2. It might be that this wasn't supported yet in the Helm chart at that time, I'm not sure.


Thanks for the feedback.


Where does Google AppEngine sit in the 'serverless' realm? It started in 2008 and to me defines the concept of serverless: auto-scaling, support for executing multiple languages, a hosted database backend (Datastore), an infinitely scalable transactional task queue, logging, etc...


For some time now, GitLab could be made aware of Kubernetes deployments; that is, you added the integration through your build. This allowed you to see your build through to the "deploy" phase.

All this new serverless functionality does is provide a window onto the serverless workload, rather than just your traditional pods.

All brought to you by knative.


Correct me if I'm wrong, but this is very different from Jenkins-X serverless.

Jenkins-X serverless: Jenkins only runs when it has work to do and shuts down while idle.

Gitlab serverless: tooling to support FaaS creation

Serverless is such a silly word to begin with, now it's even more confusing.


Yes, you've touched on a key difference between Jenkins and GitLab in general:

Jenkins: only does CI/CD, needs to be integrated with a suite of other tools.

GitLab: end-to-end DevOps in one application that has native project planning, source code management, CI/CD, artifact repository, configuration management, and observability built-in.

So Jenkins-X Serverless is about the Jenkins service itself running in a serverless paradigm. Or "using Knative to run Jenkins"

GitLab Serverless is a configuration management feature that allows you to build, deploy, and manage your own serverless functions from the same place where your issues, code, artifacts, etc. are. Or "Using GitLab (which uses Knative) to run your functions."


Jenkins-X adds a bunch of stuff to Jenkins including an artifact repository, a helm chart repository, release management, et cetera. It's pretty cool.


Is the only difference between Heroku and serverless that I pay for Heroku while the app is idle, and that I have great debugging tools on Heroku?

Please help me understand why I would choose serverless over Heroku.


- on-demand pricing

- no capacity planning

- simple programming model

There are monitoring/observability tools for serverless (CloudWatch/Epsagon/Thundra/IOPipe)


Yes. The only difference is the unit economics. You pay per request with serverless, whereas you pay a "constant" amount with Heroku. I don't think it should be too hard for Heroku to introduce price-per-request plans, since they already have the infra for it (a la the sleeping/waking dynos in the free tier).


Is this addressing any of the cold start problems?

One of the things that attracts me to serverless is not paying anything (or very little) until a project is properly off the ground.

But with things like Heroku there's a ridiculous delay on startup if there hasn't been a request for a while.

Does Serverless / this implementation of Serverless address that at all?


> One of the things that attracts me to serverless is not paying anything (or very little) until a project is properly off the ground.

I guess it's not quite nothing, but can't you just buy a cheap $5 (or even $2.50) VM and run all of your small projects off the same one?

The cost is pretty small, and you don't have to worry about how you're going to migrate your code off the serverless platform later.


Yeah that's basically what I do now. Amazon's Lightsail is really good for this.

What I'd really like though is one place I can just put the code, test it and know I can leave it there when it gets bigger. That's the dream anyway, but I guess nothing's perfect!


I can confirm AWS Lambda still has this problem. A dirty solution could be to time-trigger your function every x seconds to keep the container alive. For example, doing that every 10 seconds would produce 259,200 requests per 30-day month, so about 0.05 USD per month in AWS.
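
A minimal sketch of that dirty solution in Python, assuming a scheduled CloudWatch rule triggers the function (do_real_work is a hypothetical stand-in for the actual handler logic): the handler short-circuits the ping events so the keep-warm traffic stays cheap.

    # Keep-warm sketch: scheduled CloudWatch events carry source "aws.events".
    def handler(event, context):
        if event.get("source") == "aws.events":  # the keep-warm ping
            return {"warm": True}                # exit fast and cheap
        return do_real_work(event)               # hypothetical real handler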


Smart!

Kinda silly that it's required, but a good and super-cheap solution. Thanks.


> to zero and backup

I assume this should be "back up." Otherwise I'm missing what this has to do with backups.


Good catch, Michael, that's right. We meant "back up" and we are fixing this typo: https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/....

Thank you for pointing this out and sorry for the confusion.


It's quite fascinating to follow your link and see the amount of "work" required to fix a simple typo. The fix consists of 1 commit with a change of 1 character. It then takes 1 merge request, 1 pipeline, 13 jobs and a grand total of 23 minutes and 41 seconds to process this change and deploy it.

Since one of the big pros of serverless computing is only paying for the resources that you use, I am wondering if we will hit a point where preparing a trivial release is not worth the effort because it's simply too expensive. Paying 23 minutes of cloud computing for a simple typo adds up easily.


No problem. Thanks for clarifying :)


Yes, sorry about that! Thanks for fixing, dsumenkovic :-)


If you're already deploying stuff on Kubernetes, then why this over just packaging up the code into a container and running that? It's also probably using more resources to run the Knative stuff, which would offset any scale-to-zero savings.

Is there really a big demand for this?


I didn't read the article but something tells me that "serverless" involves a server of some kind ...


From tfa: "...serverless computing is an execution model in which the cloud provider acts as the server, dynamically managing the allocation of machine resources."

The name "serverless" is so unfortunate... On-demand would have been better.


Hehe, I agree. Sadly, IMO the term has taken hold and we're stuck with it because of the instant recognition people now have for "serverless"...


Also "wireless" networking also involves wires of some kind. In terms of wireless networking, I don't care that there are actually wires that connect my router to the internet. It's the same with serverless architectures: I don't care that there are actually servers executing the code, I just don't want to have to think about them.


The article does a very good job of burying the lede.

tl;dr: GitLab now has a FaaS offering in alpha.


Serverless functions are a terrible idea. Way too restrictive and proprietary. Kubernetes is the future. Amazon throwing money behind the Serverless trend is only delaying the inevitable universal success of a standardized, open source container orchestration platform.


Your argument against serverless functions holds for AWS Lambda, but this Gitlab feature is based on Knative. That's an open source serverless platform based on Kubernetes.

I think all money Amazon is throwing at serverless is to build a foundation to ultimately transform all their "managed" services into a serverless form. They already released serverless RDS (https://aws.amazon.com/rds/aurora/serverless/).


Good to know that it's open source. I should have looked it up first.

My understanding of serverless was that it was synonymous with 'Functions as a Service'. What Knative seems to be doing is more like 'Backend as a Service'. It's good to know that these two very different approaches are now both labeling themselves 'serverless' - I guess Knative is just hitchhiking on Amazon's marketing success with the term.


I'm fairly certain that under the hood, serverless RDS is just "intelligently and dynamically adjusting the instance type with zero downtime", though - not really technically interesting compared to Lambda-style serverless.


I don't understand, half of this announcement refers to the fact that this new feature is built upon Knative:

> It leverages Knative, which enables autoscaling down to zero and backup to run serverless workloads on Kubernetes.

So what are you referring to?


Hey, Community Advocate from GitLab here. Sorry for the confusion with the wording. We actually meant "back up", which has a somewhat different meaning. Fixing that in https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/...


It’s based on Knative I think, so this is an open source, kubernetes-based alternative tied into GitLab.



