Pivotal launches serverless framework that works across clouds and on-prem (techcrunch.com)
117 points by tpurves 3 months ago | 62 comments



Pivotal Cloud Foundry is terrible. I am having a terrible time working with PAS.

The use case for PCF is weak, especially when Kubernetes does everything it can do. It just sells enterprise fluff to "architects" in suits who haven't touched code in years and are far removed from the day-to-day reality of devs.

The time it took me to migrate a Java Spring Boot service from AWS to GCP was around a day. Getting it to work with Pivotal's way of doing things was more like a week.

We are now starting to do some of our infrastructure on Kubernetes, much less headache.

If anyone wants more info feel free to ask. My post might seem angry but as a developer I know better than the execs.


I work on PCF at Pivotal. I'd love to hear how we can improve. Spring Boot should work great with "cf push", and when it doesn't we would love to know why and make it better. While PKS and k8s are able to run stateless workloads, if PAS isn't offering a lot of additional productivity with routing, logging, and multi-tenancy features then we've missed the mark. pivotal-cf-feedback at pivotal dot io will reach the product managers that work on the different parts. I saw your note further below about having to log in, the "email" prompt instead of username, having to ask to get the Redis tile, and finding the cf CLI confusing compared to other platform CLIs you are familiar with. We definitely want to know; let us know if there are other major issues.
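
For anyone reading along, the intended happy path for a Spring Boot jar is a short manifest.yml plus one command; a minimal sketch, with made-up app and jar names:

    # manifest.yml
    ---
    applications:
    - name: my-service
      path: target/my-service-0.1.0.jar
      memory: 1G

    # then, from the project root:
    cf push my-service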


I work in a big corp myself. If you worked on cloud foundry you probably worked with some of my colleagues. ;-) Please don't mention the name though, so I can speak openly.

I see this in my corp every day. Very smart people, super interested in actual feedback, and super interested in really providing great software. But in the end they still fail. I'm not 100% sure about the reasons, but I have a guess.

Usually people who move up the ranks and get into positions where they can make decisions tend to build relationships with other such people. So more and more they shift from "why the heck is DNS not working on server-13 anymore?" to "this is the big problem the market has, let's attack that". And slowly the thought creeps in that you should only think about big-picture problems and not worry about the nitty-gritty details of getting things to run every day.

And that's probably the problem. While it's clear that you can't solve all the details and work on big-picture problems at the same time, you should not devalue the details. Great solutions solve big picture problems BY SOLVING THE DETAILS.

That's why K8s, for instance, is a mediocre solution. Sure, they have great big-picture ideas, but usually people spend their time dealing with DNS not working, servers hanging, etcd nodes not talking to each other without explaining why, and deployments saying "SUCCESS" while actually not running. K8s provides a standardized API for every detail and every use-case, cool. But in the end you need to be an expert in everything to make it run around 50% of the time (outside of AWS and GCP). Nobody really wants to develop on k8s when the underlying platform is only available 50% of the time. And that's the current experience.
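
To make that concrete: the usual first check when in-cluster DNS dies is a throwaway pod running nslookup; if even this fails, you're off debugging kube-dns/CoreDNS and your CNI plugin:

    # classic first check when in-cluster DNS "just stops working"
    kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
      -- nslookup kubernetes.default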

So please, if you are one of these people who is really interested in creating great software, consider finding solutions to the detailed low-level problems an important part of the goals and tasks you define. Usually there aren't that many core low-level problems; with k8s it is usually networking, dynamic storage, or security. Have one person on your team really become an expert in one of these areas (by studying the problems and solutions that actually exist, not just by making up PPT slides with plans that have nothing to do with the real world) and allow them to influence your task planning.


> And that's the current experience.

Maybe that's your experience? I haven't had any of those problems in moderately sized on-prem k8s clusters.


I think I've had that experience. I wouldn't generalize and say it's "the current experience," and certainly not 50% of the time, but I can say I have had this experience at least once on every cloud provider that I've used Kubernetes with to any degree of depth. Something goes wrong, and it's outside of your control, you just have to wait for them to fix it (or sometimes I guess, you can just pick a different AZ and try again.)

It's almost enough to make you wish for a hybrid cloud. If you don't have Kubernetes experts on staff, you shouldn't be trying to manage your own Kubernetes on-prem. It isn't surprising that this is the experience 50% of the time; honestly, that has to be why managed services for Kubernetes are evidently becoming so popular.

Managed services have these issues some of the time too, and if your managed service has those issues often enough that you'd want to talk about them, you better find a different managed service. I think AKS and GKE have had those issues, but they are rare. Don't know personally about EKS.

I think some people interpret the promise of managed K8s services to be that you no longer need someone on staff who is the expert in how things might go sideways on Kubernetes.

On the contrary, you still need that person to be able to take advantage of Kubernetes with confidence, but maybe now, thanks to whatever vendor you chose for Kubernetes, you simply don't need an actual department of 6-12 of those experts focused only on managing Kubernetes.

A result of using a real, stable managed K8s service should be that on a day-to-day basis, those people won't actually need to run around with their hair on fire doing ops things just to keep the business going. Automation.

With on-prem, maybe just don't expect it to come like that out-of-the-box; if your whole team is still new at this they're going to need training and planning and ramp-up to get it to that stage. This is exactly how you get "it might be better for us to start with a managed k8s service."


I was replying to a specific list of problems:

> usually people spend their time dealing with DNS not working, servers hanging, etcd nodes not talking to each other without explaining why, and deployments saying "SUCCESS" while actually not running

I'm not saying k8s is free of problems.


Not disagreeing with you, and those are all "noob problems" from my perspective as I've done all of those things wrong before myself, and I myself am a noob. But they are some actually very complex problems (and you can even have them on your managed platforms once in a while, too.)

Self-healing is great when it works, and those items you listed are all real problems. But they are not problems that you should expect to encounter on a managed service, at least hopefully not more than once. (YMMV, amiright?)


> I haven't had any of those problems in moderately sized on-prem k8s clusters.

Just out of curiosity, how are you managing your on-prem k8s clusters? (Is there a toolkit you'd recommend using? Kubespray?)


[flagged]


Personal attacks will get you banned here. Please review https://news.ycombinator.com/newsguidelines.html and post civilly and substantively, or not at all.


Having worked with James directly many years ago as a developer at/on Pivotal/Cloud Foundry, he is a bit of a suit... but he's also a really great human, very smart and he genuinely cares about the product. Cut him some slack please.


> We are now starting to do some of our infrastructure on Kubernetes, much less headache.

Pivotal Function Service (PFS) builds directly on Kubernetes via Knative (https://github.com/knative/) and Project riff (https://projectriff.io).


What does PFS do that Knative+Riff doesn't get me?

Is the title even accurate? It sounds/looks more like "Pivotal creates branded offering of Knative+Riff and uses one of their main selling points as their headline feature", but maybe I'm just biased from past impressions of Pivotal marketing. Especially since it seems to be in contact-only early access.


Good question. I work at Pivotal. PFS is a commercial, supported packaging of Riff (which adds dev and ops aspects to Knative). As opposed to running and supporting the OSS components on your own.


That's a very interesting point for me. Our company is in discussions with Pivotal about using PAS or PKS on-prem. Do you have a specific example of a problem, or something similar? I've only seen the slides meant to sell the products to the ops guys, and the idea looked promising.


I can only comment on PAS but the main issues I've experienced are:

I have to log in to PCF to check environments or add services to bind to an app.

The cf CLI is not very easy to understand compared to the AWS EB CLI or Heroku's Toolbelt

The login screen says "email" but you actually have to enter your user ID

Devops become your enemies, especially if they lock down PCF environments. I don't want to beg for a redis tile.

PCF eats a lot of RAM on our virtual servers.

PCF support is tiered into different urgency levels; most of the time they ignore you or are slow to respond

Basically I think it's a tool that gets in the way of developers and gives headaches to the platform team. However my company signed a contract with them, so we're tied in for a year.... I might quit before then


> Devops become your enemies

You're doing devops wrong, which may explain why you're having a bad time with the platform.

> I have to log in to PCF to check environments or add services to bind to an app.

You should automate this with CI servers and pipelines.
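
As a sketch, a (hypothetical) CI step can do the login and service binding non-interactively; org, space, and service/plan names below are examples and vary per installation:

    # non-interactive deploy from a CI job
    cf login -a "$CF_API" -u "$CF_USER" -p "$CF_PASSWORD" -o my-org -s my-space
    cf create-service p-redis shared-vm my-redis   # plan names vary per install
    cf push my-app -f manifest.yml
    cf bind-service my-app my-redis
    cf restage my-app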

The complaints about RAM usage and support service I agree with.

It sounds like your organisation is dysfunctional, and its treatment of PCF is the symptom, not the cause.


I use PCF at work. I’ve noticed most of these but they didn’t really bother me...

We have low timeouts on most logins anyway, so having it on PCF isn’t a big deal.

We have a CI/CD workflow, so I don’t really have to login to the CLI that much.

Locked down environments are always a pain - but it would be no different if your org locked down AWS or Heroku, which they could.


I remember being in a meeting with a client, with a Pivotal salesman alongside me, when the question of PCF's overhead came up. We managed to work out that a PCF installation required about 10 separate VMs before you even got to the ones that would run applications.


The point being that those VMs can be distributed across multiple hosts for scalability and reliability. I'm sure that if you really wanted to package it all up into a single VM, you could... but that wouldn't make much sense when you're building an on-premises deployment cloud.


No, the ten VMs are not there for scalability or reliability, they're there because there are ten teams building the essential components of Cloud Foundry, and the BOSH model is that each component gets its own VM.

You could squeeze them onto one VM, but that would require that CF teams cooperate on manifests. Inconceivable!


Maybe you could fathom that the BOSH model is designed the way it is, for scalability and reliability purposes.

The CF team is a bunch of Pivots... they cooperate quite well and probably better than most other teams, especially since they rotate across projects fairly often. Disclosure, I worked on CF in the early days, on a number of teams.


PCF is for those working at scale, and doesn't suit trivial deployments well.


Someone really should have told the salesman that, then - they were trying to sell the client on the idea that they could have a separate PCF foundation for each project.


> they were trying to sell the client on the idea that they could have a separate PCF foundation for each project.

Cloud Foundry (PAS) is not designed to be used this way -- it had multitenancy designed in from the beginning.

For PKS the idea of cluster-per-project makes more sense, given that Kubernetes doesn't reaaaally have multitenancy in the same sense.


Agreed - that is not how the platform should be used.


thanks for the answer


PCF works best with the VMware stack (vSphere + vROPs) hence the sell to Operations. It is an absolute beast to install. You can do a PoC with PCF Dev to see what using it as a developer will be like, but the actual implementation of it will absolutely require Pivotal consultants to get going.

PCF itself is also very expensive. I'm not sure if they charge per node or per app; they've changed it over the last few years. I would get the skinny on that before committing to anything and do a TCO/ROI analysis.

If you are going to go this route, I would also consider doing a PoC with OpenShift or Rancher. These are managed Kubernetes offerings, both on prem. Both have free open-source editions that mostly mirror what you'll get in the Enterprise offering at smaller scale. Rancher is also less "enterprisey" in that they aren't as aggressive about enterprise support or upsells, at least from what I've heard.

I think you'll be locked into big contracts with either option, but Pivotal's lock-in is tighter, at least from what I can tell. This could change after IBM fully moves in, however.


> I am having a terrible time working with PAS.

Then we are not doing our job. I am happy to talk to you and/or to put you in touch with any team you want to give feedback to.


> We are now starting to do some of our infrastructure on Kubernetes, much less headache.

What does your build process look like? Can you use CF buildpacks in your Kubernetes cluster? A built-in system for building and deploying your application when you push to a Git repo seems like a real advantage of CF to me.


OpenShift is great for this. For something like a Ruby application I can just run `oc new-app .` and I get my current application built, deployed and available externally. All with full Kubernetes support.

(OpenShift uses https://github.com/openshift/source-to-image which works like CF buildpacks)
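
For those who haven't seen it, the standalone s2i CLI gives roughly the same flow outside OpenShift; a minimal sketch, with an illustrative builder image and app name:

    # build a runnable image from the current directory with a Ruby builder,
    # then run it locally
    s2i build . centos/ruby-25-centos7 my-ruby-app
    docker run -p 8080:8080 my-ruby-app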


I've never used CF, but you can git-push to build and deploy to k8s with gitkube

Gitkube thread on HN: https://news.ycombinator.com/item?id=16574893
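
The gitkube flow is, roughly (the remote name here is made up, and the actual URL comes from the Remote object gitkube creates in the cluster):

    # once gitkube's Remote exists in the cluster, deploys are a git push
    git remote add k8s ssh://default-myapp@<gitkube-host>/~/git/default-myapp
    git push k8s master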


We are not doing GitOps yet, but that's what I think we should try to achieve.


I've finally gotten the chance to spend time with PCF in an enterprise setting. I've also spent time with Kubernetes (no KNative though, at least not yet) and Docker Datacenter. Unfortunately, I have to agree though I wouldn't call it terrible.

My opinion of the product is that it is designed for organizations that are heavily invested in the VMware ecosystem that want to maintain that developer-operator separation through the guise of a PaaS with as little code or ramp-up (or investment from their own people) as possible.

As a dev, my gripes with PCF are as follows:

1) PCF is massive. You need at least 20 instances to run it...and that's without any workloads on top of it. (If you are on AWS, the from-scratch installation instructions require that you bump your EC2 instance count minimums to 50 across several different instance sizes. Yikes.) It is very clearly designed for companies that already have a significant VMware footprint in house. Running it on AWS (I don't think Pivotal supports running it on Azure or GCP, but I could be wrong) is a very expensive exercise in futility. You cannot _just_ install PCF on your local machine unless you use PCF Dev or cf-local, both of which remove a lot to provide some semblance of parity to a larger install. (You also can't _just_ install PCF Dev either like you can with minikube or Swarm; you need to have a Pivotal account (free) and then download it from the Pivotal network. This can be automated, but it requires a lot of steps.) This leads me to...

2) There's no real way to test that code on your laptop will work in a production PCF deployment since the infrastructure they run on top of can differ heavily (that's not including the labor involved in getting a local PCF instance running on your machine in the first place). Docker and, by proxy, Kubernetes make it easy to do this since you're deploying images, not code. While PCF uses LXC containers behind the scenes via Diego, they rely on buildpacks as the runtime mechanism. This slows deployment time down since you're effectively installing everything from scratch on every deploy, and it just feels heavier weight than working with Docker images.

3) The PCF community is mostly comprised of large enterprises hidden behind paid Pivotal contracts. Getting help on PCF issues from the Internet is quite hard, and it's pretty clear that Pivotal is driving a lot of the development behind the product despite CF being an open-source project funded by the CF foundation. (Apparently raw CF is quite different from PCF, though I haven't tried it myself and the lack of community excitement around it doesn't make me want to).

4) PCF licensing is expensive. Combined with (1), you basically need to be a multi-$B company to afford it. See also: https://news.ycombinator.com/item?id=16663077

As a not-yet-so-still-armchair tech manager, my main gripe with it is that managed Kubernetes products might be closer to IaaS than PaaS, but they offer the same level of application abstraction and deployment that PCF does with a SIGNIFICANTLY smaller footprint, while having a significantly larger community driving their success. This gap will become even tighter as Knative, Helm and others mature. I personally like products that are vetted by companies large and small, not just enterprises. PCF is the latter, Kubernetes and others are the former.

The things I like most about Kubernetes that aren't present in PCF are:

1. Using container images to deploy by default instead of code + buildpacks so that I have greater confidence in my code working in Production (yes, PCF supports it, but PCF org admins can and probably will turn that off).

2. More flexibility around networking (PCF basically requires VMware/Cisco networking, whereas Kubernetes does everything through CNI plugins)

3. Being able to configure all of Kubernetes through manifests instead of solely the application (see the sketch after this list)

4. Kubernetes is purely open-source and has one of the strongest open-source communities behind it. Unlike CF, which really only seems to suit PCF and its enterprise variants (IBM Bluemix comes to mind), the CNCF is a real foundation that supports all sorts of cloud-native applications from all sorts of paths. OpenShift, Red Hat's enterprise take on it, contributes heavily to the project while still managing to sell to the same market that PCF is after.
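
On point 1, pushing an image instead of code is a one-flag change in the cf CLI (`cf push my-app --docker-image myorg/my-app:1.2.3`). On point 3, a minimal Deployment manifest, with made-up names, is enough to describe the workload declaratively:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: myorg/my-app:1.2.3   # the same image I tested locally
            ports:
            - containerPort: 8080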


> We are now starting to do some of our infrastructure on Kubernetes, much less headache.

I bet you do it on AWS or GCP, right? Try on-prem, then check your headaches again. ;-)


Pivotal haven't launched a serverless framework, they've launched a FaaS(ish) framework.

Serverless implies that the management of the underlying infrastructure is owned wholly by a third party, and the only exposure to you is the higher-order abstractions, leaving you to worry only about the business logic.

This is not that.

You still need to water and feed a Pivotal/K8s platform, and those things are awful to run.

"What’s interesting about Pivotal’s flavor of serverless, besides the fact that it’s based on open source, is that it has been designed to work both on-prem and in the cloud in a cloud native fashion, hence the Kubernetes-based aspect of it. This is unusual to say the least."

This isn't unusual. Every software house seems to be building a dream of hybrid cloud on top of Kubernetes. RightScale, OpenWhisk, Cisco Cloud Centre, IBM Cloud Private, ServiceNow and others build their sales strategies around cloud mobility.

This dream of cross cloud portability needs to die. You either end up:

Option A

- Leveraging the cloud native logging, storage, networking, database and analytics services, leaving behind only the container layer which is portable. By this stage, you end up having to re-write those parts of the json/yaml/terraform to handle deployments into other clouds ANYWAY. So what's the benefit again?

It also encourages you to do stupid things like single-account deployments in AWS, since federation for these abstraction layers is still very much a pipe dream, significantly increasing the blast-radius concern.

Option B

- You run your own logging, storage, networking, database and analytics services inside the K8s which defeats the entire point of being in the cloud in the first place.

Being cloud agnostic is a great pipe dream, but the economics of running an abstraction layer (Pivotal / Openshift / Mesos) vs cost of exit just doesn't add up for 99% of companies. Just use the native FaaS/Data/Analytics.


Why is it so hard to understand?

a) Re FaaS: of course, if you have an on-prem component, someone needs to take care of it. That's why it's on-prem. But you can still separate its administration from the developers, and you get the additional new feature that, as an admin, you don't need to care which software runs in these clusters. (In reality it's never that simple, since hardware specific to the task outperforms standard hardware by more than a margin, but at least as an admin you have to worry less about getting from cool-product:v1.0 to cool-product:v2.0.)

b) This dream of cross cloud portability needs to die

Well, you are either a cloud provider or a software/hardware provider that is not a cloud provider. If you are the latter of course you need a way to attack the cloud providers when everybody moves to the cloud. It's a common and traditional idea.

And at least for me it also makes sense to use such a solution. I would always go for OpenShift (I don't work for Red Hat) since they are much better at getting on-prem to work than vanilla k8s, and you have almost the same setup and interaction experience throughout all your clouds. I very much hope they can also find a way to work more seamlessly on ARM and PPC now that IBM has bought them. Then it's truly hybrid.

PS: That said, I also wouldn't trust any of these quickly assembled hybrid cloud solutions from companies who are not well known for their software achievements. It's probably only the marketing aspect that's driving these efforts, and IBM has already shown that they don't even trust their own solution.


I'm all for the separation of concerns in managing underlying services! I just suspect most organisations don't have the scale to warrant running it themselves.

Let's say it takes 5 engineers at 100k each to maintain the K8s and FaaS service internally. Are we getting 500k+ _more_ benefit from that internally managed platform than from offloading it to a third party? What is the cost of exit from one platform to another if we just used the native services?

As long as the service management of FaaS continues to sit with your organisation, it feels disingenuous to call it serverless. We lose out on the per-invocation opex costing model, the benefit of scaling offloaded to a third party, and the well-integrated ecosystem of services in which serverless developer paradigms really shine.

B) Full disclosure, I work for a software consulting company. We are partners with AWS, Google and Azure. I'm relatively agnostic, although my preferences would be somewhere along AWS > GCP > Azure.

"I would always go for Openshift (not working for Red Hat) since they are much better at getting on-prem to work than k8s, and you have the almost same setup and interaction experience throughout all your clouds."

That's true if we plan to run _everything inside the OpenShift cluster_: our own database, analytics, logging, IAM, secrets managers, etc.

As soon as we get other cloud-hosted services involved, the integration becomes really clunky, and our teams end up split-brained between two orchestration layers that aren't well integrated. An example of this is networking and databases in AWS: you simply can't do microsegmentation spanning the inside and outside of the OpenShift and cloud networks. Assuming you want to offload databases to RDS (and you should), all your security groups are going to allow open traffic flow from every node in your OpenShift cluster, whether those nodes are tied to the database app or not. Welcome back, perimeter-based traffic rules!

And since the networking and database part of the deployment scripts are tied to a specific cloud, I need to re-write them to move workloads around anyway... So why not just re-write the whole deployment job to use a native service?


It's even worse than that.

Knative is really an alpha piece of software at version 0.2.2 for the serving component. Riff is also at 0.2.0. What do Pivotal plan to do if both implement breaking changes (extremely likely), maintain a fork?

The other issue is what value is this all really providing? Kubernetes provides a standard API that abstracts infrastructure and deployments.

The benefit that Lambda brings is very simply connecting together cloud services. None of these FaaS on Kubernetes products do that. For anyone interested I looked into the current landscape on Kubernetes and gave up since it's all pretty worthless. https://kubedex.com/serverless/


"The other issue is what value is this all really providing? Kubernetes provides a standard API that abstracts infrastructure and deployments.

The benefit that Lambda brings is very simply connecting together cloud services."

Generally, the benefits of a function service are:

- scale to zero: when a function is not active it won't use any resources and create costs.

- higher level of abstraction: if a piece of software fits well into the FaaS abstraction, it should be more productive to implement and operate it at the FaaS level than at lower levels (PaaS, container, IaaS, etc.). K8s in particular is quite a complicated system for an app to target, which is why Knative was started.

If Lambda makes it easy to call other cloud services, I'd say that's a side-effect of a good FaaS implementation. Bringing this benefit to other function services should be a matter of using the right libraries.
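
To make the scale-to-zero point concrete: with the Knative Serving v1alpha1 API current as of this writing, you declare a Service and the platform scales its pods to zero when no requests arrive (the image is a public Knative sample):

    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    metadata:
      name: helloworld-go
    spec:
      runLatest:
        configuration:
          revisionTemplate:
            spec:
              container:
                image: gcr.io/knative-samples/helloworld-go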

(I work at Pivotal, but not on Riff or Knative)


> If Lambda makes it easy to call other cloud services ... Bringing this benefit to other function services should be a matter of using the right libraries.

You are correct in that calling out to other things is just a concern of the function itself, but the value in the 'connecting' that Lambda does comes from being _invoked_ by other cloud services by way of integration with their event systems, e.g. object storage file-creation event X triggers Lambda function Y to update resource Z (resource Z isn't necessarily a cloud service; it could be a database).
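
For example, the S3-to-Lambda wiring is a bucket-level notification config (the bucket name and function ARN below are hypothetical; the function also needs an invoke permission via `aws lambda add-permission`):

    aws s3api put-bucket-notification-configuration \
      --bucket my-bucket \
      --notification-configuration '{
        "LambdaFunctionConfigurations": [{
          "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:update-z",
          "Events": ["s3:ObjectCreated:*"]
        }]}'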

This is why I'm skeptical of on-prem FaaS. It's an easy value proposition to sell when you can use Lambda as an example. But Enterprises have heterogeneous environments so Lambda-like integration into other services is far from a given, and 'scaling to zero' is a little disingenuous because there always needs to be underlying infrastructure (k8s in the case of PFS) running to handle function invocation.


> But Enterprises have heterogeneous environments so Lambda-like integration into other services is far from a given

Because it's not a walled garden. As the ecosystem grows that pain (and it's real) will ease.

> 'scaling to zero' is a little disingenuous because there always needs to be underlying infrastructure

The point is to use it more efficiently. Mixed workloads with scale-to-zero help achieve that end.


> What do Pivotal plan to do if both implement breaking changes (extremely likely), maintain a fork?

The main riff team all work for Pivotal and Pivotal was the first external partner brought into Knative by Google. We were the first to submit non-Google PRs and the first to have non-Google approvers.

It would be strange for Pivotal to be blindsided by two projects it is intimately involved in.

Source: as you probably guessed, I work for Pivotal.


Great analysis. I think of it as avoiding the DevOps explosion, and it drives me nuts when I see early-stage companies sinking lots of time into getting the perfect setup of what is actually just a cost center. There's likely some strained argument for how this ultimately might benefit the customer, but that ROI is a long way off, and only matters if you're still in business.


I'm not so sure. If I were a typical small (startup) company, I'd run on top of Kubernetes with a Postgres backend, some business logic layer in a container and serve some html/js to the browser. It's fast, inexpensive and goes wherever Kubernetes does.

Large organizations are a whole other animal. There will be many people entrenched in various ways, each with their own ideas of how things ought to be run. Often guided by technical debt and unique historical needs.


Why not start with Fargate/App Engine, or skip containers altogether and go straight to FaaS on your cloud of choice?

Small companies seem like the primary choice to skip container orchestration layers completely.


Yeah, or Heroku which is super simple and reliable to use.


That's funny, I've set up the infrastructure for 3 startups with exactly that stack over the last year and a half, works wonders and is very flexible.


> Serverless implies that the management of the underlying infrastructure is owned wholly by a third party,

This is an AWS talking point, which they push relentlessly because it suits their purposes. The idea that developers saying "I don't care about servers" implies anything about the legal ownership of those servers doesn't, by any necessity, actually follow, unless your name is Bezos.

> You still need to water and feed a Pivotal/K8s platform, and those things are awful to run.

I have run them single-handedly in my day job at Pivotal. We have customers who run platforms for thousands of developers with < 10 operators.

> By this stage, you end up having to re-write those parts of the json/yaml/terraform to handle deployments into other clouds ANYWAY. So what's the benefit again?

Or you could use BOSH today and Kubernetes in ~1-2 years as its ecosystem firms up. Or both, as fits your workloads. Pivotal is ready for both, and ready to help you handle your workloads.

> You run your own logging, storage, networking, database and analytics services inside the K8s which defeats the entire point of being in the cloud in the first place.

Why?

> Just use the native FaaS/Data/Analytics.

I actually half agree with you. Cloud portability is to me not the primary selling pitch of the software Pivotal works on. It's a benefit, one which many of our customers see as essential. But to me the point is: can I just write some damn code? That's what I want to achieve. Solve the damn problem, rather than shaving yaks all the damn day. The portability thing is just a means to that end, not an end in itself.

Disclosure: I work for Pivotal.


It's exciting to see the ecosystem building commercial products on Knative (https://github.com/knative). The developers on those platforms benefit from increased portability. Congratulations to the riff team behind the new Pivotal Function Service offering.


From an economic perspective, using a service like Pivotal's feels like placing a bet that the pricing difference between cloud providers will be higher than the cost of paying for Pivotal's support plan (I know, open source, but if you reach a scale where these things matter enough that you care about the cost of switching clouds, you are probably at the scale where you find yourself paying for Red Hat etc. support plans).

As cool as "cloud agnostic" sounds, paying any real tax to achieve it probably exceeds the long term price differential between major cloud providers. The major clouds sell commodities, recognize that they sell commodities, and they operate as low margin large scale providers. Over time the pricing differences have to average out to about zero, otherwise the market would abandon the always-more-TCO-costly provider (which would cause them to lower their margins to recapture the higher total earnings).


Honestly, this rarely seems to come down to cloud pricing considerations. Rather, the goal for most shops who choose something like PCF is to (a) simplify onboarding and ongoing work of devs since they don't need to learn the nuances of each cloud IaaS, (b) stripe a single ops model across each infrastructure pool, which matters since no enterprise of size is using only one. People buy because they struggle to ship, and PCF helps these big companies put their focus back onto shipping software, not where or how it runs.

Disclosure: I work at Pivotal.


I suspect people aren't paying for Pivotal stuff to save money but because they aren't competent enough to use public cloud directly or they have unlimited risk aversion.


One of the reasons I prefer function as a service "FaaS" over "serverless" is because of how useful on demand ephemeral functions can be on premises.

The term serverless implies that the main benefit is not having to deal with infrastructure. While that is helpful, setting up a "standard" means of expressing event-driven, stateless business logic is useful in its own right.

Can you accomplish that with long running processes? Sure, as long as you are disciplined enough to do so.


Azure functions is also a serverless framework that works across clouds and on-prem.


I seem to recall Microsoft actually open sourced it


On-prem serverless? Stone cold classic.


I wonder how Pivotal's new framework compares to OpenWhisk? https://openwhisk.apache.org/


Project riff most closely compares to OpenWhisk, and in fact riff and OpenWhisk contributors work together on Knative. Of the two, riff has the advantage that it came along at the right time to be based directly on Knative without having to be "ported", so to speak.

PFS might be more fairly compared to the commercial plans announced by other vendors to repackage OpenWhisk.

Disclosure: I work for Pivotal, though not directly on PFS.


I thought it was weird that they were claiming to be the first open cloud agnostic serverless provider.


My take.

These kinds of things face the problem that they pump entropy out of one part of the system but risk creating more overall. In fact, unless you get all the details right, you will end up worse off than you started. The "minimum viable product" is not minimal at all if it is going to be viable.

For instance I was using Vagrant years ago and realized eventually that I didn't need 80% of what it did and that I hated the DSL. (eg. if it has to support five different ways to build your image, maybe that's a "bad smell" that none of them are good enough)

Even though I was using it locally to start, I wound up using it only in AWS and I also used it to stack up systems that I put in front of the public.

I wrote my own Java program that would set up a virtual server, networking, all of that, and write a bash cloud-init script that would run on the machine to set it up, then send a message via SQS when it was ready. If it passed tests, the system could then switch the Elastic IP over to the new instance.
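
A sketch of the kind of user-data script I mean, with hypothetical queue and package names:

    #!/bin/bash
    # cloud-init user-data: configure the box, then signal readiness via SQS
    yum install -y java-1.8.0-openjdk
    # ...fetch the build artifact and start the service here...
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    aws sqs send-message \
      --queue-url "https://sqs.us-east-1.amazonaws.com/123456789012/provisioning" \
      --message-body "$INSTANCE_ID ready"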

I looked at the problem of getting it to run on other clouds and it wasn't that bad but it wasn't that good either. In AWS you have to choose a region, availability zone, VPC, subnet, disk types, and a number of details.

In some cases those choices could be arbitrary but sometimes you have infrastructure that you have to hook into and you have to match that.

In GCP or Azure it's basically the same but different, and if you want to support those platforms you have to embody both the differences between the platforms and the differences between customers' environments.

----

Being involved with software vendors, "hybrid cloud" has a special meaning. My biz dev guy and I picked it up as part of our pitch a few years ago because we needed it. We knew we needed to support customers wherever they were, but of all the messages we tried that was the one that confused customers the most and resonated the least.

"On premise" is a special case BTW, because there are many customers who have issues with privacy, security, etc. and they don't have a choice or perceive they have a choice to use public clouds. (eg. Bridgewater thinks that it is better to safer to desktop virtualize into AWS then risk people walking out with laptops, physical destruction of their HQ, etc.)


My issue with Spring is that it became what it was supposed to replace. Originally, we flocked to Spring because IBM WebSphere and WebLogic were slow and bloated. Now Spring has class names that are 72+ characters long and an inconsistent programming model.

I really like CDI, which grew from ideas hatched in Spring and Guice. MicroProfile is built on top of it and offers standardized, modular, interoperable specs with multiple competing implementations.


It's (the idea to manage on prem K8S from the cloud) a reaction to

1) Hyperscalers doing hybrid cloud

2) VMware's growingly independent approach to K8S (which I believe will ultimately result in VMware offering "generic" Kubernetes)

3) IBM Redhat becoming more aggressive with their on-prem approach (let's not forget SoftLayer and Redhat OCP can do hybrid K8S on bare metal, without VMware)

4) On-prem-focused vendors like NetApp having announced similar service (NetApp Kubernetes Service, which already works in all hyperscalers and on-Prem NetApp HCI is next) in recent months.

Pivotal's on-prem problem (compared to IBM/Red Hat and NetApp) is that they have little on-prem presence, sell no hardware or storage, and need to rely on DellEMC and VMware (which have been great for Pivotal, but that was when Pivotal played nice and wasn't helping VMware and Dell customers escape to non-VMware-based clouds).



