The use case for PCF is weak, especially when Kubernetes does everything it can do. It mostly sells enterprise fluff to "architects" in suits who haven't touched code in years and are far removed from the day-to-day reality of developers.
Migrating a Java Spring Boot service from AWS to GCP took me around a day.
Getting it to work with Pivotal's way of doing things took more like a week.
We are now starting to run some of our infrastructure on Kubernetes; it's much less of a headache.
If anyone wants more info feel free to ask.
My post might seem angry, but as a developer I know better than the execs.
I see this in my corp every day. Very smart people, super interested in actual feedback, and super interested in really providing great software. But in the end they still fail. I'm not 100% sure about the reasons, but I have a guess.
Usually, people who move up the ranks into positions where they can make decisions tend to build relationships with other such people. So more and more they shift from "why the heck is DNS not working on server-13 anymore?" to "this is the big problem the market has, let's attack that". And slowly the thought creeps in that you should only think about big-picture problems and not worry about the nitty-gritty details of getting things to run every day.
And that's probably the problem. While it's clear that you can't solve all the details and work on big-picture problems at the same time, you should not devalue the details. Great solutions solve big-picture problems BY SOLVING THE DETAILS.
That's why K8s, for instance, is a mediocre solution. Sure, it has great big-picture ideas, but in practice people spend their time on DNS not working, servers hanging, etcd nodes not talking to each other without explaining why, and deployments saying "SUCCESS" while not actually running. K8s provides a standardized API for every detail and every use case, cool. But in the end you need to be an expert in everything to make it run around 50% of the time (outside of AWS and GCP). Nobody really wants to develop on k8s when the underlying platform is only available 50% of the time. And that's the current experience.
So please, if you are one of the people who is really interested in creating great software, consider finding solutions to the detailed low-level problems an important part of the goals and tasks you define. There usually aren't that many core low-level problems; with k8s they are typically networking, dynamic storage, or security related. Have one person on your team really become an expert in one of these areas (by studying the problems and solutions that actually exist, not just by making PowerPoint slides with plans that have nothing to do with the real world) and allow them to influence your task planning.
Maybe that's your experience? I haven't had any of those problems in moderately sized on-prem k8s clusters.
It's almost enough to make you wish for a hybrid cloud. If you don't have Kubernetes experts on staff, you shouldn't be trying to manage your own Kubernetes on-prem. It's not surprising that this is the experience 50% of the time; honestly, that has to be why managed Kubernetes services are evidently becoming so popular.
Managed services have these issues some of the time too, and if your managed service has them often enough that you'd want to talk about them, you had better find a different managed service. I think AKS and GKE have had those issues, but they are rare. I don't know personally about EKS.
I think some people interpret the promise of managed K8s services to be that you don't need to have someone anymore on-staff who is the expert in how things might go sideways on Kubernetes.
On the contrary, you still need that person to be able to take advantage of Kubernetes with confidence, but maybe now thanks to (whatever vendor you chose for Kubernetes) you simply don't need to have an actual department of 6-12 of those experts focused on only doing that (managing Kubernetes.)
A result of using a real, stable managed K8s service should be that on a day-to-day basis, those people won't actually need to run around with their hair on fire doing ops things just to keep the business going. Automation.
With on-prem, maybe just don't expect it to come like that out-of-the-box; if your whole team is still new at this they're going to need training and planning and ramp-up to get it to that stage. This is exactly how you get "it might be better for us to start with a managed k8s service."
> usually people spend their time with having DNS not working, having servers hanging, having etcd nodes not talking to each other without explaining why, having deployments say "SUCCESS" while actually not running
I'm not saying k8s is free of problems.
Self-healing is great when it works, and those items you listed are all real problems. But they are not problems that you should expect to encounter on a managed service, at least hopefully not more than once. (YMMV, amiright?)
Just out of curiosity, how are you managing your on-prem k8s clusters? (Is there a toolkit you'd recommend using? Kubespray?)
Pivotal Function Service (PFS) builds directly on Kubernetes via Knative (https://github.com/knative/) and Project riff (https://projectriff.io).
Is the title even accurate? It sounds/looks more like "Pivotal creates branded offering of Knative+Riff and uses one of their main selling points as their headline feature", but maybe I'm just biased from past impressions of Pivotal marketing. Especially since it seems to be in contact-only early access.
I have to login to PCF to check environments or add services to bind to an app.
The CF CLI is not very easy to understand compared to the AWS EB CLI or Heroku's Toolbelt.
The login screen says "email" but you actually have to enter your user ID.
DevOps becomes your enemy, especially if they lock down PCF environments. I don't want to beg for a Redis tile.
PCF eats a lot of RAM on our virtual servers.
PCF support is layered with different urgency levels; most of the time they ignore you or are slow to respond.
Basically I think it's a tool that gets in the way of developers and gives headaches to the platform team. However, my company signed a contract with them, so we're tied in for a year... I might quit before then.
You're doing devops wrong, which may explain why you're having a bad time with the platform.
> I have to login to PCF to check environments or add services to bind to an app.
You should automate this with CI servers and pipelines.
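For example, a minimal Concourse pipeline (Concourse ships alongside PCF) can push on every commit so nobody logs in by hand. This is a sketch, not an official pipeline; the repo URI, API endpoint, org/space, and credential names are all placeholders:

```yaml
resources:
- name: app-source
  type: git
  source:
    uri: https://github.com/example-org/my-app.git   # placeholder repo
- name: cf-dev
  type: cf
  source:
    api: https://api.sys.example.com                 # placeholder PCF endpoint
    organization: my-org
    space: dev
    username: ((cf-username))                        # pulled from a secret store
    password: ((cf-password))

jobs:
- name: deploy-to-dev
  plan:
  - get: app-source
    trigger: true      # every push deploys; no manual cf login needed
  - put: cf-dev
    params:
      manifest: app-source/manifest.yml
```

Binding services can likewise be scripted with `cf create-service` / `cf bind-service` in a task step, so the web console becomes a read-only dashboard rather than a daily tool.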
The complaints about RAM usage and support service I agree with.
It sounds like your organisation is dysfunctional, and its treatment of PCF is the symptom, not the cause.
We have low timeouts on most logins anyway, so having it on PCF isn’t a big deal.
We have a CI/CD workflow, so I don’t really have to login to the CLI that much.
Locked down environments are always a pain - but it would be no different if your org locked down AWS or Heroku, which they could.
You could squeeze them onto one VM, but that would require that CF teams cooperate on manifests. Inconceivable!
The CF team is a bunch of Pivots... they cooperate quite well and probably better than most other teams, especially since they rotate across projects fairly often. Disclosure, I worked on CF in the early days, on a number of teams.
Cloud Foundry (PAS) is not designed to be used this way -- it had multitenancy designed in from the beginning.
For PKS the idea of cluster-per-project makes more sense, given that Kubernetes doesn't reaaaally have multitenancy in the same sense.
PCF itself is also very expensive. I'm not sure if they charge per node or per app; they've changed it over the last few years. I would get the skinny on that before committing to anything and do a TCO/ROI analysis.
If you are going to go this route, I would also consider doing a PoC with OpenShift or Rancher; these are managed Kubernetes offerings that run on-prem. Both have free open-source editions that mostly mirror what you'll get in the Enterprise offering at smaller scale. Rancher is also less "enterprisey" in that they aren't as aggressive about enterprise support or upsells, at least from what I've heard.
I think you'll be locked into big contracts with either option, but Pivotal's lock-in is tighter, at least from what I can tell. This could change after IBM fully moves in, however.
Then we are not doing our job. I am happy to talk to you and/or to put you in touch with any team you want to give feedback to.
What does your build process look like? Can you use CF buildpacks in your Kubernetes cluster? A built-in system for building and deploying your application when you push to a Git repo seems like a real advantage of CF to me.
(OpenShift uses https://github.com/openshift/source-to-image which works like CF buildpacks)
Gitkube thread on HN: https://news.ycombinator.com/item?id=16574893
My opinion of the product is that it is designed for organizations that are heavily invested in the VMware ecosystem that want to maintain that developer-operator separation through the guise of a PaaS with as little code or ramp-up (or investment from their own people) as possible.
As a dev, my gripes with PCF are as follows:
1) PCF is massive. You need at least 20 instances to run it...and that's without any workloads on top of it. (If you are on AWS, the from-scratch installation instructions require that you bump your EC2 instance count minimums to 50 across several different instance sizes. Yikes.) It is very clearly designed for companies that already have a significant VMware footprint in house. Running it on AWS (I don't think Pivotal supports running it on Azure or GCP, but I could be wrong) is a very expensive exercise in futility. You cannot _just_ install PCF on your local machine unless you use PCF Dev or cf-local, both of which remove a lot to provide some semblance of parity to a larger install. (You also can't _just_ install PCF Dev either like you can with minikube or Swarm; you need to have a Pivotal account (free) and then download it from the Pivotal network. This can be automated, but it requires a lot of steps.) This leads me to...
2) There's no real way to test that code on your laptop will work in a production PCF deployment since the infrastructure they run on top of can differ heavily (that's not including the labor involved in getting a local PCF instance running on your machine in the first place). Docker and, by proxy, Kubernetes make it easy to do this since you're deploying images, not code. While PCF uses LXC containers behind the scenes via Diego, they rely on buildpacks as the runtime mechanism. This slows deployment time down since you're effectively installing everything from scratch on every deploy, and it just feels heavier weight than working with Docker images.
3) The PCF community is mostly comprised of large enterprises hidden behind paid Pivotal contracts. Getting help on PCF issues from the Internet is quite hard, and it's pretty clear that Pivotal is driving a lot of the development behind the product despite CF being an open-source project funded by the CF foundation. (Apparently raw CF is quite different from PCF, though I haven't tried it myself and the lack of community excitement around it doesn't make me want to).
4) PCF licensing is expensive. Combined with (1), you basically need to be a multi-$B company to afford it. See also: this (https://news.ycombinator.com/item?id=16663077).
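Regarding point 2, a quick sketch of the contrast: a buildpack push rebuilds the app from source on every deploy, while an image push ships the exact artifact you tested locally. App names, the artifact path, and the image tag below are placeholders, and Docker pushes require the feature to be enabled on the foundation:

```yaml
applications:
# Buildpack push: staging rebuilds the droplet from source on every deploy.
- name: my-app-buildpack        # placeholder name
  path: target/my-app.jar       # placeholder artifact
  buildpacks:
  - java_buildpack

# Image push: the image you ran on your laptop is what runs in production.
- name: my-app-docker
  docker:
    image: registry.example.com/my-app:1.4.2   # placeholder tag
```

With the image style, local testing via `docker run` exercises the same bits that Diego will schedule, which is the parity point made above.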
As a still-armchair tech manager, my main gripe is that managed Kubernetes products might be closer to IaaS than PaaS, but they offer the same level of application abstraction and deployment that PCF does with a SIGNIFICANTLY smaller footprint, while having a significantly larger community driving their success. The gap will only narrow as Knative, Helm, and others mature. I personally like products that are vetted by companies large and small, not just enterprises. PCF is the latter; Kubernetes and others are the former.
The things I like most about Kubernetes that aren't present in PCF are:
1. Using container images to deploy by default instead of code + buildpacks so that I have greater confidence in my code working in Production (yes, PCF supports it, but PCF org admins can and probably will turn that off).
2. More flexibility around networking (PCF basically requires VMware/Cisco networking, whereas Kubernetes does everything through CNI plugins)
3. Being able to configure all of Kubernetes through manifests instead of solely the application
4. Kubernetes is purely open source and has one of the strongest open-source communities behind it. Unlike CF, which really only seems to suit PCF and its enterprise variants (IBM Bluemix comes to mind), the CNCF is a real foundation that supports all sorts of cloud-native applications from all sorts of paths. OpenShift, Red Hat's enterprise take on Kubernetes, contributes heavily to the project while still managing to sell to the same market PCF is after.
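On point 3, everything about the workload lives in one declarative manifest, replicas, image, and resource limits included; a minimal sketch (the name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                   # placeholder name
spec:
  replicas: 2                    # scaling is declared, not clicked
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.4.2   # placeholder image
        ports:
        - containerPort: 8080
        resources:
          requests: {cpu: 100m, memory: 256Mi}
          limits:   {cpu: 500m, memory: 512Mi}
```

The same file applies unchanged whether the cluster is on-prem or managed, which is part of the footprint argument above.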
I bet you do it on AWS or GCP, right? Try on-prem, then check your headaches again. ;-)
Serverless implies that the management of the underlying infrastructure is owned wholly by a third party, and your only exposure is to the higher-order abstractions, leaving you to worry only about the business logic.
This is not that.
You still need to water and feed a Pivotal/K8s platform, and those things are awful to run.
"What’s interesting about Pivotal’s flavor of serverless, besides the fact that it’s based on open source, is that it has been designed to work both on-prem and in the cloud in a cloud native fashion, hence the Kubernetes-based aspect of it. This is unusual to say the least."
This isn't unusual. Every software house seems to be building a dream of hybrid cloud on top of Kubernetes. RightScale, OpenWhisk, Cisco CloudCenter, IBM Cloud Private, ServiceNow, and others build their sales strategies around cloud mobility.
This dream of cross cloud portability needs to die. You either end up:
- Leveraging the cloud-native logging, storage, networking, database, and analytics services, leaving behind only the container layer, which is portable. By this stage, you end up having to rewrite those parts of the JSON/YAML/Terraform to handle deployments into other clouds ANYWAY. So what's the benefit again?
It also encourages you to do stupid things like single-account deployments in AWS, since federation for these abstraction layers is still very much a pipe dream, significantly increasing the blast-radius concern.
- You run your own logging, storage, networking, database, and analytics services inside K8s, which defeats the entire point of being in the cloud in the first place.
Being cloud agnostic is a great pipe dream, but the economics of running an abstraction layer (Pivotal / Openshift / Mesos) vs cost of exit just doesn't add up for 99% of companies. Just use the native FaaS/Data/Analytics.
a) Re FaaS: of course, if you have an on-prem component, someone needs to take care of it. That's why it's on-prem. But you can still separate its administration from the developers, and as an admin you gain the new feature of not needing to care which software runs in those clusters. (In reality it's never that simple, since hardware specific to the task outperforms standard hardware by more than a margin, but at least you worry less, as admin, about getting from cool-product:v1.0 to cool-product:v2.0.)
b) "This dream of cross cloud portability needs to die"
Well, you are either a cloud provider or a software/hardware provider that is not a cloud provider. If you are the latter of course you need a way to attack the cloud providers when everybody moves to the cloud. It's a common and traditional idea.
And at least for me it also makes sense to use such a solution. I would always go for OpenShift (I don't work for Red Hat), since they are much better at getting on-prem to work than vanilla k8s, and you get almost the same setup and interaction experience across all your clouds. I very much hope they can also find a way to work more seamlessly on ARM and PPC now that IBM has bought them. Then it's truly hybrid.
PS: That said, I also wouldn't trust any of these quickly assembled hybrid cloud solutions from companies not well known for their software achievements. It's probably only marketing driving these efforts, and IBM has already shown that they don't even trust their own solution themselves.
Lets say it takes 5 engineers at 100k each to maintain the K8s and FaaS service internally. Are we getting 500k+ _more_ benefit from that internally managed platform than offloading it to a third party? What is the cost of exit from one platform to another if we just used the native services?
As long as the service management of FaaS continues to sit with your organisation, it feels disingenuous to call it serverless. We lose the per-invocation opex costing model, the benefit of scaling offloaded to a third party, and the well-integrated ecosystem of services in which serverless developer paradigms really shine.
B) Full disclosure, I work for a software consulting company. We are partners with AWS, Google and Azure. I'm relatively agnostic, although my preferences would be somewhere along AWS > GCP > Azure.
"I would always go for Openshift (not working for Red Hat) since they are much better at getting on-prem to work than k8s, and you have the almost same setup and interaction experience throughout all your clouds."
That's true if we plan to run _everything inside the OpenShift cluster_: our own database, analytics, logging, IAM, secrets managers, etc.
As soon as we get other cloud-hosted services involved, the integration becomes really clunky and our teams end up split-brained between two orchestration layers that aren't well integrated. An example of this is networking and databases in AWS: you simply can't do microsegmentation across the OpenShift and cloud networks. Assuming you wanted to offload databases to RDS (and you should), your security groups are going to allow traffic from every node in your OpenShift cluster, whether those nodes host the app tied to the database or not. Welcome back, perimeter-based traffic rules!
And since the networking and database part of the deployment scripts are tied to a specific cloud, I need to re-write them to move workloads around anyway... So why not just re-write the whole deployment job to use a native service?
Knative is really an alpha piece of software at version 0.2.2 for the serving component. Riff is also at 0.2.0. What do Pivotal plan to do if both implement breaking changes (extremely likely), maintain a fork?
The other issue is what value is this all really providing? Kubernetes provides a standard API that abstracts infrastructure and deployments.
The benefit that Lambda brings is very simply connecting together cloud services. None of these FaaS on Kubernetes products do that. For anyone interested I looked into the current landscape on Kubernetes and gave up since it's all pretty worthless. https://kubedex.com/serverless/
> The benefit that Lambda brings is very simply connecting together cloud services.
Generally, the benefits of a function service are:
- scale to zero: when a function is not active, it won't use any resources or incur costs.
- higher level of abstraction: if a piece of software fits well into the FaaS abstraction, it should be more productive to implement and operate it at the FaaS level than at lower levels (PaaS, container, IaaS, etc.). K8s in particular is quite a complicated system for an app to target, which is why Knative was started.
If Lambda makes it easy to call other cloud services, I'd say that's a side-effect of a good FaaS implementation. Bringing this benefit to other function services should be a matter of using the right libraries.
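As a sketch of how the scale-to-zero point surfaces in Knative, a Service's autoscaling bounds are set via annotations. Names below are placeholders, and annotation keys and API versions have shifted across Knative releases, so treat this as illustrative:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-fn                                   # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"     # idle revisions scale to zero
        autoscaling.knative.dev/max-scale: "10"    # cap burst scaling
    spec:
      containers:
      - image: registry.example.com/hello-fn:latest   # placeholder image
```

With min-scale at 0, the revision consumes no pod resources when idle and is re-activated on the first incoming request.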
(I work at Pivotal, but not on Riff or Knative)
You are correct in that the calling out to other things is just a concern of the function itself, but the value in the 'connecting' that Lambda does is from being _invoked_ by other Cloud services by way of integration to their event systems. e.g. Object storage file creation event X triggers Lambda function Y to update resource Z (resource Z isn't necessarily a Cloud service, it could be a database).
This is why I'm skeptical of on-prem FaaS. It's an easy value proposition to sell when you can use Lambda as an example. But Enterprises have heterogeneous environments so Lambda-like integration into other services is far from a given, and 'scaling to zero' is a little disingenuous because there always needs to be underlying infrastructure (k8s in the case of PFS) running to handle function invocation.
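To make the invocation-side "connecting" concrete, here is a minimal sketch of a Lambda-style handler for an S3 ObjectCreated notification; `update_resource` is a purely hypothetical stub standing in for "resource Z":

```python
# Sketch of "object storage event X triggers function Y to update resource Z".
# The event shape follows S3's notification format (Records[].s3.bucket/object).

def update_resource(bucket: str, key: str) -> str:
    """Hypothetical stand-in for updating 'resource Z' (e.g. a database row)."""
    return f"updated record for s3://{bucket}/{key}"

def handler(event: dict, context=None) -> list:
    """Lambda entry point: fan out over each S3 record in the notification."""
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]   # bucket that emitted the event
        key = s3["object"]["key"]       # object that was created
        results.append(update_resource(bucket, key))
    return results
```

The point above is that this wiring comes for free in AWS (S3 pushes the event to Lambda), whereas on-prem each event source has to be integrated into the FaaS platform by hand.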
Because it's not a walled garden. As the ecosystem grows that pain (and it's real) will ease.
> 'scaling to zero' is a little disingenuous because there always needs to be underlying infrastructure
The point is to use it more efficiently. Mixed workloads with scale-to-zero help achieve that end.
The main riff team all work for Pivotal and Pivotal was the first external partner brought into Knative by Google. We were the first to submit non-Google PRs and the first to have non-Google approvers.
It would be strange for Pivotal to be blindsided by two projects it is intimately involved in.
Source: as you probably guessed, I work for Pivotal.
Large organizations are a whole other animal. There will be many people entrenched in various ways, each with their own ideas of how things ought to be run. Often guided by technical debt and unique historical needs.
Small companies seem like the primary choice to skip container orchestration layers completely.
This is an AWS talking point, which they push relentlessly because it suits their purposes. That a developer's "I don't care about servers" implies (or doesn't imply) anything about the legal ownership of those servers doesn't, by any necessity, actually follow, unless your name is Bezos.
> You still need to water and feed a Pivotal/K8s platform, and those things are awful to run.
I have run them single-handedly in my day job at Pivotal. We have customers who run platforms for thousands of developers with < 10 operators.
> By this stage, you end up having to re-write those parts of the json/yaml/terraform to handle deployments into other clouds ANYWAY. So what's the benefit again?
Or you could use BOSH today and Kubernetes in ~1-2 years as its ecosystem firms up. Or both, as fits your workloads. Pivotal is ready for both, and ready to help you handle your workloads.
> You run your own logging, storage, networking, database and analytics services inside the K8s which defeats the entire point of being in the cloud in the first place.
> Just use the native FaaS/Data/Analytics.
I actually half agree with you. Cloud portability is to me not the primary selling pitch of the software Pivotal works on. It's a benefit, one which many of our customers see as essential. But to me the point is: can I just write some damn code? That's what I want to achieve. Solve the damn problem, rather than shaving yaks all the damn day. The portability thing is just a means to that end, not an end in itself.
Disclosure: I work for Pivotal.
As cool as "cloud agnostic" sounds, paying any real tax to achieve it probably exceeds the long term price differential between major cloud providers. The major clouds sell commodities, recognize that they sell commodities, and they operate as low margin large scale providers. Over time the pricing differences have to average out to about zero, otherwise the market would abandon the always-more-TCO-costly provider (which would cause them to lower their margins to recapture the higher total earnings).
Disclosure: I work at Pivotal.
The term serverless implies that the main benefit is not having to deal with infrastructure. While that is helpful, setting up a "standard" means of event driven and stateless business logic is useful in its own right.
Can you accomplish that with long running processes? Sure, as long as you are disciplined enough to do so.
PFS might be more fairly compared to the commercial plans announced by other vendors to repackage OpenWhisk.
Disclosure: I work for Pivotal, though not directly on PFS.
These kinds of things face the problem that they pump entropy out of one part of the system but risk creating more overall. In fact, unless you get all the details right, you end up worse than you started. The "minimal viable product" is not minimal at all if it is going to be viable.
For instance, I was using Vagrant years ago and eventually realized that I didn't need 80% of what it did, and that I hated the DSL. (E.g., if it has to support five different ways to build your image, maybe that's a "bad smell" indicating that none of them are good enough.)
Even though I was using it locally to start, I wound up using it only in AWS and I also used it to stack up systems that I put in front of the public.
I wrote my own Java program that would set up a virtual server, networking, all that and write a bash cloud-init script that would run on the machine to set it up, then send a message via SQS when it was ready. If it passed tests, the system could then switch over the elastic IP to the new instance.
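A rough sketch of that approach (the original was a Java program; this is Python with boto3 for brevity, and the queue URL, AMI ID, and script contents are placeholders):

```python
# Sketch: render a cloud-init user-data script, launch an instance with it,
# and have the instance announce readiness via SQS, as described above.

def render_user_data(queue_url: str, app_version: str) -> str:
    """Build the bash cloud-init script the new instance runs on first boot."""
    return "\n".join([
        "#!/bin/bash",
        "set -euo pipefail",
        f"echo 'installing app {app_version}'",
        "# ... install packages, fetch artifacts, start services ...",
        # Final step: tell the deployer this instance is ready.
        f"aws sqs send-message --queue-url {queue_url} --message-body ready",
    ])

def launch_instance(ami_id: str, user_data: str):
    """Launch an EC2 instance with the rendered user-data (needs boto3 + creds)."""
    import boto3  # imported lazily so the sketch is importable without AWS deps
    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId=ami_id,
        MinCount=1,
        MaxCount=1,
        UserData=user_data,  # EC2 hands this to cloud-init on first boot
    )
```

The deployer then polls the SQS queue for the "ready" message, runs its tests, and only after they pass moves the Elastic IP to the new instance.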
I looked at the problem of getting it to run on other clouds and it wasn't that bad but it wasn't that good either. In AWS you have to choose a region, availability zone, VPC, subnet, disk types, and a number of details.
In some cases those choices could be arbitrary but sometimes you have infrastructure that you have to hook into and you have to match that.
In GCP or Azure it's basically the same, but different; if you want to support those platforms, you have to embody both the differences between the platforms and the differences between customers' environments.
For those of us involved with software vendors, "hybrid cloud" has a special meaning. My biz-dev guy and I picked it up as part of our pitch a few years ago because we needed it. We knew we needed to support customers wherever they were, but of all the messages we tried, that was the one that confused customers the most and resonated the least.
"On-prem" is a special case, BTW, because many customers have privacy, security, and similar concerns and don't have (or perceive they don't have) a choice to use public clouds. (E.g., Bridgewater thinks it is safer to desktop-virtualize into AWS than to risk people walking out with laptops, physical destruction of their HQ, etc.)
I really like CDI, which grew from ideas hatched in Spring and Guice. MicroProfile is built on top of it and offers standardized, modular, interoperable specs with multiple competing implementations.
1) Hyperscalers doing hybrid cloud
2) VMware's growingly independent approach to K8S (which I believe will ultimately result in VMware offering "generic" Kubernetes)
3) IBM Redhat becoming more aggressive with their on-prem approach (let's not forget SoftLayer and Redhat OCP can do hybrid K8S on bare metal, without VMware)
4) On-prem-focused vendors like NetApp having announced similar service (NetApp Kubernetes Service, which already works in all hyperscalers and on-Prem NetApp HCI is next) in recent months.
Pivotal's on-prem problem (compared to IBM Red Hat and NetApp) is that they have little on-prem presence, sell no hardware or storage, and need to rely on DellEMC and VMware (which have been great for Pivotal, but that was when Pivotal played nice and wasn't helping VMware and Dell customers escape to non-VMware-based clouds).