Happy to answer any questions that might arise as well. Answers might be biased and opinions are my own.
See IBM's statement: https://www.ibm.com/blogs/cloud-computing/2018/07/24/ibm-clo...
In technical terms, OpenWhisk's invoker system could be replaced by Knative, keeping the API/Controller bits stable to still support the notion of actions/triggers/rules/sequences/compositions, you name it.
> Knative provides a set of middleware components that are essential to build modern, source-centric, and container-based applications that can run anywhere
Are these middleware components that someone else is supposed to package together into something useful like a platform? The other bits lead me to think it's a platform of sorts in itself. So, why the talk of middleware?
> Knative offers a set of reusable components that focus on solving many mundane but difficult tasks such as orchestrating source-to-container workflows, routing and managing traffic during deployment, auto-scaling your workloads,
PaaS systems already do this. Other serverless things, like kubeless, do this too. Are these reusable components supposed to be packaged in a higher level system? It sounds like that but other parts of the page suggest it's a platform to use now.
> Kubernetes-native APIs
This is found in the docs repo. The examples are long form k8s style objects. Have developers (those being targeted) been interested in these? In my experience they like shorter form ones (there are numerous tools making that possible).
I'm curious about the meat and usefulness behind the hype.
It's intended to be usable as a single installation with the option to install individual parts. The textbook case is Build -- you can get things done with or without it.
> Are these reusable components supposed to be packaged in a higher level system? It sounds like that but other parts of the page suggest it's a platform to use now.
I think a bit of both. The joke I've made is that if Kubernetes is IaaS++, Knative is PaaS--. It's usable as-is, it aims to provide a common set of primitives for shared concerns, but it can also serve as a base for higher-level systems. The Project riff team, for example, have pushed some of their efforts down into Knative.
> In my experience they like shorter form ones (there are numerous tools making that possible).
Mine too. My personal view is that Build (and before long Pipelines) will be the main entryway to Knative.
These are early days of course, but given that the goal is to codify the commonalities (the 80% we all do roughly the same anyway) and to improve customer workload portability overall, I hope to see new products built using Knative, and existing products re-base on Knative as well.
Good questions, btw!
Except it bills itself as serverless. In a PaaS your apps need to have a server (e.g., an HTTP server). Serverless elsewhere (e.g., FaaS or Brigade) doesn't need this; stuff isn't long running.
How is long-running server software like this serverless?
> My personal view is that Build (and before long Pipelines) will be the main entryway to Knative.
Pipelines? That doesn't appear to be in the docs. What is Pipelines?
I think this comes down to the difficulties of terminology. I think of FaaS as a PaaS with some extra features (scale-to-zero being the most-noticeable). "Serverless" is a catch-all term for a variety of workloads, of which FaaS is the most visible.
> Pipelines? That doesn't appear to be in the docs. What is Pipelines?
It's a proposal to evolve Concourse into being a Knative component, picking up from Build for complex workflows. I've been working on parts of this with various folks over the past few months.
I should emphasise that it is a proposal. For what you do with Knative today, use Build. It's a simple abstraction and it works right now.
 https://docs.google.com/document/d/1PicF7UhvSBpZLwichuY5hdhT... (to view, join the knative-dev group: https://groups.google.com/forum/#!forum/knative-dev)
Noticed you talked a bit about kaniko. It has some security issues that have been talked about but, last I checked, not addressed by the kaniko team. How are you dealing with those?
There are a number of other promising tools like img, but they aren't readily usable yet because of dependencies on some upstream PRs.
At GitLab, we're trying to think of ways to help developers understand the challenges, as well as provide easy-to-adopt solutions as these tools become available. Would love your thoughts and feedback: https://gitlab.com/gitlab-org/gitlab-ce/issues/48913
Billing models do in fact affect our design discussions and we're still kicking the questions around.
How though, won't you need a Kubernetes cluster running?
For cloud providers the boundary is likely to become execution-seconds. As Kubernetes worker nodes become abstracted away this will be the remaining way to track utilisation.
Are you using it? Because the docs seem to show that what you describe is not the case. The devs seem to have to write integrations for everything, so almost nothing is done for them. Saying Knative will "do it for you" is like saying your car will "drive itself". You only have to steer it and work the pedals ...
You don't have to explain physics to explain how a car works. You only have to explain it in terms of something your audience is familiar with. Magnetization would be hard to explain to someone without physics, because there's no other parallel to quantum mechanics for the average person.
But a car is based on general principles which most of us understand at a simplistic level. We understand "stickiness", so "stickiness" can stand in for "coefficient of friction". We understand "weight", so we don't have to explain how gravity works. We understand that fuel, oxygen and a spark creates a fire (or explosion), so we don't have to explain chemical reactions. We also understand that explosions exert a force on things around them.
So when I say that an explosion in your car's engine exerts a force pushing against a piston, and that piston is connected to a rod, and that rod pushes on and turns a crank, which (skipping the transmission for brevity) turns an axle, which is connected to a wheel whose tire is stuck to the road, and that the weight of the car on the wheel forces either the road or the tire to move, we understand that even though the car is heavy and the turning force on the tire is pretty strong, the road is probably stronger and isn't going to move, so instead the tire moves, and the car is attached to the tire. So you can understand at a basic level how a car works without having to know physics.
What I was asking for was what combination of components in what order are required to make the thing run, and how these things are accomplished without the developer seeming to have to do anything. If they've built a car for developers, that's great, but it seems more like they've created nuts and bolts.
The short summary if you had a nice client shell would be:
1) Run a command to deploy. That command:
a) Determines what build templates are available on your cluster and what language/tools you're using, and finds a match between the two.
b) Creates a YAML definition of your application on the Knative cluster (and stages your source if needed).
2) On the server side:
a) The build component will (optionally) trigger to take staged source and convert it to a container.
b) The serving component will create Istio routes and various pieces to schedule your app into a k8s Deployment.
i) This Deployment will scale to zero if there's no activity, and scale back up if needed.
ii) Scale-to-zero is accomplished via a (shared) "actuator" which stalls the incoming HTTP request until a Pod is live.
c) Additionally, the serving component loads various observability tools like Prometheus and ELK (by default; the no-mon or lite installs skip this) so that you can see what's happening even as your pods appear and disappear.
Too many descriptions of tools for devs sell benefits instead of functions. This is normally good sales practice, but devs are experts in the mechanisms behind the tools they use, and they generally prefer more functional descriptions so that they can more quickly evaluate whether any given tool is a good fit for them, and what its strengths and weaknesses will be.
This is bar none the best explanation I have ever read of how a car works, without hand waving, to clarify for non-believers in car technology. I want to reward you, but I am kind of an outsider who honestly just strives to use Kubernetes, unsuccessfully, and I haven't got it off the ground at my organization yet (and I may never...)
So let me do my best, as an outsider who you must understand is absolutely hand-waving based on a quick read-over of the high-level documentation, and an understanding of how these systems go wrong; but I have no honest understanding of this particular stack (but then again, I've seen a lot of what some of the contributors are doing, so perhaps your grain of salt need not be too big... but I digress, in under 20 sentences...)
Serving: scale to zero, request-driven compute model
You're aiming to build out your environment inside of a small footprint. If all of your customers go away, and stay gone for a day, you'd really like for your stack to approximately stop the cash bonfire altogether. This is a goal of the stack, too.
Build: cloud-native source-to-container orchestration
So your footprint is a program, and you programmed it in a language... great, an event... to be treated like other events, like a new customer that visits your website, or a new commit from one of your devs... whatever build is necessary for your stack to come into being, it's handled inside of this stack. Not to spoil it, but: events like this are the key driver for the entire system, which the system architecture actually reflects in a way...
Events: universal subscription, delivery and management of events
A minimal gateway serves as a router that intercepts customers and as an infrastructure stander-upper, standing up the infrastructure on demand, while the greater parts of your stack are basically disposable and automatically self-destructive, so that every time a new customer comes along, the request actually starts the whole response stack anew. Then, upon finding no further traffic to answer, the newly provisioned stack rapidly disposes of itself to save on cost.
The response stack tears itself down entirely once the response is served. Unless there's another customer, if the capacity remains un-utilized for long enough... it's gone. But obviously this goes both ways: we don't want anyone kept waiting in periods of increased load, so if there's no capacity available, we want to increase capacity as demand spikes to keep it satisfied. Again, this is baked into the platform.
Serverless add-on on GKE
* The fine print: you must have at least GKE or another Kubernetes cluster or provider at equivalent service levels to enjoy the benefits described above. This runs on GKE, or to be more precise, Kubernetes. That infrastructure stander-upper actually lives in the footprint of a GKE cluster. If you've paid for a GKE cluster before, don't worry: what I just said is still potentially much smaller and cheaper than you think. (GKE can scale to about a $5/mo baseline footprint if you are small-time like me.) If you know how resource scheduling on Kubernetes works and how autoscaling of Kubernetes cluster nodes works, you're about 90% of the way toward knowing how this scaling situation works too.
There is a function gateway, that minimal gateway I spoke of under "Events", and it is a persistent process that can't be stowed away for cheaper when it is not answering requests. But it drives the whole cluster. It spawns Pods when events result in requests on topics, and Kubernetes reacts to Pending Pods with new Nodes to provide the extra capacity to schedule them. tl;dr You need not keep extra capacity around when it's not actually needed. Don't even worry about it. The cluster will autoscale in response to rising and falling demand, and the bill will definitely come at the end of the month.
I've been trying to wrap my head around the whole Serverless Function thing for a couple of weeks now, as a Rails dev who hasn't had very much exposure to it but is a Kubernetes enthusiast, and I think I get it now. (No one serverless stack is going to win, but obviously there will be a winner. Scaling to zero is the big win here. It's not the first platform to purport to scale in response to events, or even to scale down to zero, but not many have done this from what I can tell.)
Riff is one that advertised this "scale to zero" capability in their project before, and they are apparently involved with this project too, so that's neat. But if it's a car, back to where you started... and the commits from your developers are the feet on the pedals... oh hell, we don't really need another car analogy, do we? I seriously can't write any more of this kind of garbage now, at least until I get my keyboard on the terminal for a while and try the thing out.
Supported on Minikube. I can tell you I tried Riff out this weekend (the Riff team is represented here in this thread; they are apparently deeply involved in Knative), and I went through the experience of adding support for a new language runtime for Riff, and it was a lot like "not really having to do anything" other than put my feet on the pedals and keep control of the wheel, in terms of letting me do what I know about and getting out of my way for the most part.
I think I'll learn how to use gRPC now. I think I get the idea of what a "sidecar" container is really meant to be used for, now. I think I should stop writing though, and try compiling the source and see how this new runtime environment on Kubernetes behaves. I hope it's better than Riff (because Riff was impressive from the demo to the trial, but I don't think the Riff devs will be working and focused on this instead, unless it's actually going to be even better than that. They have no lack of vision in this space, in my humble opinion.)
This is about 20 sentences, ...
Knative on the other hand provides building blocks to eventually build a serverless platform. It provides the resources necessary to run and scale your application.
In short: You could use the serverless.com framework to deploy applications/functions on knative. But you still need some layer actually running your workload, like knative.
but it didn't use either, so ...
The samples show that you can make an app, make a container, and make a service config file, and deploy your app to K8s. Yes, we've been able to do that for some time now.
This thing is supposed to provide a bunch of advanced features for devs to not have to think about. However, the build repo says this:
"While Knative builds are optimized for building, testing, and deploying source code, you are still responsible for developing the corresponding components that:
* Retrieve source code from repositories.
* Run multiple sequential jobs against a shared filesystem (for example: install dependencies, run unit and integration tests).
* Build container images.
* Push container images to an image registry, or deploy them to a cluster."
"While today, a Knative build does not provide a complete standalone CI/CD solution, it does however, provide a lower-level building block that was purposefully designed to enable integration and utilization in larger systems."
So as a developer you still have to have all the things you had before, but with extra layers of abstraction now, apparently just to support hybrid cloud installations.
The marketing lingo appeals to developers as if it makes all this simple, when in fact it may be more complicated.
This seems nice though: https://github.com/knative/docs/tree/master/serving#more-sam...
It seems to have OpenTracing (Zipkin) integration: https://github.com/knative/docs/blob/master/serving/debuggin... (you need to install elasticsearch and stuff for it of course: https://github.com/knative/docs/blob/master/serving/installi... )
And assigning a custom domain: https://github.com/knative/docs/blob/master/serving/using-a-... ... okay, I was hoping that I can specify a whole URL where to mount the "app" (something like https://my.fancy.pants.tld/api/app2/
It seems to me that the weakest part of this is build currently. Mostly because that's what's pretty linear and one-off, and well explored by other projects (GitLab CI/CD can easily run on and deploy to k8s), and knative is mostly about serving and eventing, meaning all the interaction between the lifecycles of stuff on k8s.
disclaimer: I was one of the core maintainers of Deis Workflow.
I think looking at build is not interesting, because it seems that knative currently focuses on the serving and events part (some thoughts on this you might find interesting: https://news.ycombinator.com/item?id=17607401 )
The repository is just an example. Using an internal repo seems just as easy.
I had to learn Kubernetes very recently and this seems to simplify a lot of the boilerplate needed to have an app running.
One of the most grating parts for me was getting the ingress to run with a proper SSL certificate and the right handshakes (I had to install an nginx controller just for that).
That’s the kind of thing everyone will go through, and it's solved in one click on Heroku. Yet it seems to be left out of all the samples; if it’s out of scope, that diminishes the appeal a lot.
Knative uses parts of Istio for serving and TLS/SSL setup: https://github.com/knative/docs/blob/master/serving/using-an...
It's a bit funny that Knative serving is so complicated that I still have no idea what they use under the many layers of abstraction (probably something hacked together in Go), and I don't understand why they don't use a generic configurable ingress component.
It's an abstraction layer being developed above both of Kubernetes and Istio.
I think it's an open question whether a PaaS like Google Dataflow is a good match or not. It will certainly require more planning, but I think it's doable.
It's a whole lot of infrastructure to support a PaaS / FaaS workflow. From building and routing/serving/tracing to complex event driven stuff.
It's fully k8s-native (hence the name, I reckon), and it uses CRDs, which are basically k8s's DSL for describing anything: a k8s standard for plugins, with schema, validation, schema change management, kubectl support, etc.
GitHub readme: https://github.com/knative/docs/blob/master/install/getting-...
Example Ruby app: https://github.com/knative/docs/blob/master/serving/samples/...
While I can see how this greatly simplifies deploying an app onto a Kubernetes cluster, I'm failing to see how this helps for serverless workloads.
Knative calls the role of the person providing the servers the "operator". Kubernetes is great for operators because it has a lot of common low-level primitives, and lots of choice if you want to pay someone else to be your operator.
Knative aims to give a similar great experience to developers if you can convince your operator to install it on top of the kubernetes they already have. In particular, if an operator charges you for container runtime minutes on knative, you're getting close to the pay-per use model of lambda or app engine. Also, as you noted, developers should have fewer concepts they need to grok in order to deploy a knative app compared with kubernetes.
Subject to my biases, what would people like to know?
and one by this "Google" thing:
Edit - and Red Hat: https://blog.openshift.com/state-of-serverless-in-kubernetes...
It looks like I have some more catch up to play, I see a rails logo on this page...
I emailed you a couple of days ago or yesterday about the Ruby support in Riff. There's not a PR yet but we got Ruby into a working state again! It looks like not a lot of people are interested in Ruby on serverless platforms.
Can you shed any light on why? And what kind of Ruby support can I expect from knative, given that it's not apparently coming from the Riff project now... as a Rails dev who wants to use this stuff today, how do I get started?
(It has a Rails logo on the page linked by the post, so I assume I can use it now, but I haven't gone deeply enough to see if there are limitations... I'm used to Ruby support being neglected in the serverless areas, I don't know if it's because we Ruby devs are slacking, or there's a fundamental flaw I haven't seen yet, or what...)
So, what can you tell me about that Rails logo on the landing page?
> It looks like not a lot of people are interested in Ruby on serverless platforms. Can you shed any light on why?
I imagine there's some mix of market demand and path dependency. A lot of folks who like experimenting moved onwards to the Node ecosystem and a lot of FaaS work has happened there. Meanwhile enterprises are heavily invested in Java and .NET.
The riff team at the moment is pretty small. One thing coming up is to fold buildpacks into the code->running pathway for riff. This should make it easier for the riff team to get out of the business of supporting particular ecosystems and for some of the engineering work for buildpacks to be shared amongst multiple setups.
In terms of starting with Rails today, it should be possible to use the buildpacks BuildTemplate to run the existing Ruby buildpack. I don't know if it's working; it's early days for heavy automation of development on Knative itself.
I'll check it out in more detail when I'm back in front of a computer! Glad to see the rails logo somewhere new, too.
It's nice because we get to share some of our work with the wider community.
The riff team have a more detailed post: https://projectriff.io/blog/announcing-riff-0-1-0-on-Knative...
Just a comment here about this in general - given all the resources at Google’s disposal, I’m perplexed as to why we don’t see world-class (or even at least halfway decent) documentation being shipped along with the product at first release. As a community I think we need to demand higher standards in this area. If GNU can do it, Google certainly can.
Or a little higher-level:
There are a lot of technical docs in the individual repositories. You can get started, for example, with the serving docs under:
Same for the other repos, like build and eventing.
Thanks for asking!
By "theory of operations," I mean a design document, often but not always created before a line of code is written, that describes in plain English what is to be built (or, what was built). It often discusses things like:
* What problems are being solved?
* What attempts have already been made to solve the problem?
* How does it work? How do the components interrelate? How does one operate it, especially at scale?
* How does this solution solve the problems better than the alternatives?
There are lots of great examples out there. I like to point to Consul as a textbook example of fantastic documentation, and it's been there since day 1. Google would do well to follow Hashicorp's and GNU's lead.
The first chapter, when I picked up the book, was a tour of Perl. I loved it. It showed me all of the highlights with no details at all. I never read another chapter, and instead picked up the second book in the series to use as a reference.
My counter in this argument HATED that first chapter. They almost didn't read another one, because it put all of these examples in front of them with no depth. They thought the book would be much improved by removing that chapter.
I would have been bored to tears by the book this person wanted to read.
I'm not going anywhere in particular with this, except to say that the world takes all kinds. Sometimes docs don't exist simply because nobody realized someone else would find that shape of document useful, so they decided not to write it.
The docs I love may well be the docs you hate :)
We have the high-level overview and deeper dive into the details for each of the components, install instructions and samples.
Obviously this is a conflict of interest for Google, but just curious if you know of any plans in the works for being able to run this on AWS EKS? It's the obvious omission from the list of supported clouds on the installation page.
"If you already have a Kubernetes cluster you're comfortable installing alpha software on, use the following instructions"
This looks cool, and will likely be useful in the future, but this is still alpha-quality software. Don't bet your business on it by installing it to your production clusters. Don't let your development cycle be driven by the latest shiny thing, no matter whose logo is on it.
How does this relate to OpenShift?
I really hope that, for instance, Knative's builder will be merged with Red Hat's source-to-image (S2I) builder.
As and when Builds becomes Pipelines, I anticipate that it will remain relatively trivial to integrate existing build infrastructure without too much fuss.
In order to do this, we've broken the problem down into 3 parts:
Buses provide a k8s-native abstraction over message buses like NATS or Kafka. At this level, the abstraction is basically publish-subscribe; events are published to a Channel, and Subscriptions route that Channel to interested parties.
Sources provide a similar abstraction layer for provisioning data sources from outside Kubernetes and routing them to the cluster, expressed as a Feed (typically, to a Channel on a Bus, but could also be directly to another endpoint). Right now, we only have a few generic Sources, but we plan to add more interesting and specific Sources over time.
Lastly, we have a higher-level abstraction called a Flow which bundles up the specification from the Source to the endpoint, optionally allowing you to choose the Channel and Bus which the event is routed over. (Otherwise, there is a default Bus used to provision a Channel.)
All of this is also very much work-in-progress. You're seeing the workshop as we put down our tools yesterday, not as cleaned up for a tour. :-)
The three pluggable abstractions are your event sources, the bus over which events are stored/sent, and the actions which should be invoked in response to an event.
OpenShift has done a really good job of being there, wherever Kubernetes is lagging behind. They may not provide the new canonical implementation of the solution to the foreseen problem, but they are expert at predicting where the gaps will need to be filled (and preemptively making an effort to fill them.)
I see the Helm project has specific advice for multi-tenant usage now, too, in their best-practices docs! That was one of the criticisms of Helm from OpenShift, and now it's evidently solved(-ish) with the advancements in modern k8s RBAC and some documentation.
knative is a mashup of good ideas and patterns from app engine, openshift, cloud foundry, FaaS, and public cloud providers. I think it will fill the missing space between containerized apps and true FaaS on top of Kube, while still making it easy to break the abstraction glass as necessary.
Everyone builds upon other people's work.
2) If any cloud is supported, does it virtualize services such as storage, queues, API management, and identity?
3) Can Knative seamlessly switch monitoring to what is provided by each cloud?
I expect that some of this will end up upstreamed into Kubernetes proper if it's broadly useful (autoscaler work, to pick an example), but this is still super early so let's see what people want.
And yes, we're documenting the control plane APIs, the data plane requirements, and the contract for serverless container environments, with the hope that they can be reimplemented consistently wherever needed.
One of the explicit goals is workload portability, so separating spec from implementation is critical.
Solving those problems can't be done entirely at the layer in which Knative is developing -- some changes will need to turn up in Kubernetes (node-local caching awareness) and Istio (workload/node-aware routing).
We decided to announce this first thing with the blog posts just to get the word out as soon as possible and give our amazing partners (Pivotal, SAP, RedHat, IBM, and more) a chance to share their news as well.
And frankly, it's just more fun to run an open source project out in the open, so we were pretty keen to flip the git repo bits to public as early as we could. : )
Hopefully the Knative devs (or marketers) will take this as a sign that the homepage needs a major overhaul, as it's not at all clear to the average visitor what the product is.
What about startup times? How do they compare to other solutions?
Anything that meets the Container Runtime Contract should work.
> What about startup times? How do they compare to other solutions?
Startup times need a lot more work. Knative has a number of moving pieces which are contributing delay to startup time and they're being discussed or attacked by various groups. Some of this work will probably require changes to be contributed upstream to Kubernetes and Istio.
Short term we have a few tactical features in the pipeline. In the next few weeks we are rolling out our "parallel build and deploy", which moves the docker build to run in parallel with LB programming. Depending on your build, that saves a few minutes.
When doing development I usually just replace an existing version by deploying with:
gcloud app deploy --version <my-dev-version>
This keeps the same LB and VMs as before and does a gradual container swap. It is not safe for production but definitely helps when iterating.
Please please please at the forefront of all docs, presentations, and blogs put something like this:
Knative’s primary use case is for you to provide your own cloud-neutral, on-prem, or hybrid-cloud serverless platform built on top of kubernetes.
I think there are two parts to the story here. One is what Knative is for, what it can do. That's some version of "source code to event-driven system on any Kubernetes system without the tears". As with previous Big Changes there will be a cottage industry of explanations, and that is fine.
The second part of the story is: who is working on it. And that's the underrated part for me so far. You see Pivotal and Red Hat -- we are fierce competitors -- working on the same project with Googlers, IBMers and SAPers. You find folks who work on riff and OpenWhisk sitting in calls with engineers who've worked on Google Cloud Functions comparing notes on problems and solutions.
I have sat in working groups where experiences have been shared from Cloud Foundry, OpenWhisk and Google App Engine in the space of 5 minutes. I've sat in other calls with teams comparing notes on Buildpacks and S2I, Concourse and OpenShift ImageStreams ... it goes on and on.
The big story here is that Google were able to catalyse a conversation that would be very difficult to start any other way. People from contributing organisations are busily sorting out common ground that will let everyone move past this level of abstraction much quicker than would otherwise be the case.
We are still very early in the process so code (and comms) are a bit rough. Really appreciate your feedback and we will work to clarify things over the next few days.
We were thrilled to see Microsoft contribute instructions for Azure (https://github.com/knative/docs/issues/208) but we have not heard much from Amazon yet.
Optimizing 1-3 is very much in scope for the Knative project. Optimizing (4) is a provider problem and will vary by provider.
The Knative Scaling working group is discussing this every week on Wednesdays and that meeting is open to the public. We also keep good notes linked from our community page:
Native with a silent K? (like "knave" or "knee")
Kops addresses the bits below Kubernetes.
This is an example of nesting reactive control (the autoscaler) with predictive control (the min/max values).
Here's my pain point: I built a serverless REST API with token authentication on Lambda. However, if many people aren't using it all the time, it will sleep, and then the next sucker who calls the endpoint is stuck waiting for the serverless instance to wake up.
In some cases even getting a token from a simple POST request would take an awful long time. This was a few years ago and I stopped using serverless since then.
But now I'm interested in serverless because I've been hearing that the cold startup problem is being reduced.
I wonder if in the future developers will just be taking core logic from a serverless repository and wiring up the components, sort of like how WordPress does it but without the crazy layers of PHP and bloat.
Our engineering view is that we want startup to be as fast as possible, which is a surprisingly nuanced problem with lots of moving parts that need to collectively do something smart, even before you get to the startup time of your own code. This will show a lot of improvement as we go, but right now it's early days.
The economic question is about trading off the risk of hitting a slow start vs the cost of maintaining idle instances. It is impossible for Knative's contributors to solve that problem with a black box solution. What we can do is to provide you with some knobs and dials to express your preferences.
Edit: I didn't answer this question --
> interesting...by floor and ceiling is that like the minimum and maximum threshold for latency?
Not for now, this would be bounds on what scale the autoscaler can choose. Latency is an example of an output setpoint that an autoscaler could attempt to control, as opposed to a process input. We have in mind to make autoscaling somewhat pluggable because different people want to target different signals.
Not downplaying the team's effort and the immense utility of this to many companies.
The tweet resonated with a sizeable part of the developer community hence linked here.