
Why is it so hard to understand?

a) Re FaaS: of course, if you have an on-prem component, someone needs to take care of it; that's why it's on-prem. But you can still separate administration of that component from the developers, and you gain the new feature that, as an admin, you don't need to care which software runs in these clusters. (In reality it's never that simple, since hardware specialized for the task outperforms standard hardware by more than a margin, but at least as an admin you worry less about getting from cool-product:v1.0 to cool-product:v2.0.)

b) This dream of cross-cloud portability needs to die

Well, you are either a cloud provider or a software/hardware provider that is not a cloud provider. If you are the latter, of course you need a way to attack the cloud providers' position once everybody moves to the cloud. It's a common and traditional strategy.

And at least for me it also makes sense to use such a solution. I would always go for OpenShift (I don't work for Red Hat), since they are much better at getting on-prem to work than plain k8s, and you get almost the same setup and interaction experience across all your clouds. I very much hope they also find a way to work more seamlessly on arm and ppc now that IBM has bought them. Then it's truly hybrid.

PS: That said, I also wouldn't trust any of these quickly assembled hybrid cloud solutions from companies not well known for their software achievements. It's probably only marketing that's driving these efforts, and IBM has already shown that it doesn't even trust its own solution.




I'm all for the separation of concerns in managing underlying services! I just suspect most organisations don't have the scale to warrant running it themselves.

Let's say it takes 5 engineers at $100k each to maintain the K8s and FaaS service internally. Are we getting $500k+ _more_ benefit from that internally managed platform than from offloading it to a third party? And what is the cost of exiting from one platform to another if we had just used the native services?
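To make that back-of-the-envelope comparison concrete (all numbers besides the 5 x $100k above, including the managed-service bill and migration cost, are made-up illustrations, not real quotes):

```python
# Rough break-even sketch: self-managed K8s/FaaS platform vs. a
# third-party managed service. Every figure except the platform-team
# cost is a hypothetical placeholder.

ENGINEERS = 5
SALARY = 100_000                          # USD per engineer per year
platform_team_cost = ENGINEERS * SALARY   # the $500k/year from above

managed_service_cost = 300_000   # hypothetical annual managed-service bill
migration_cost = 150_000         # hypothetical one-off exit/migration cost

annual_saving = platform_team_cost - managed_service_cost
years_to_break_even = migration_cost / annual_saving

print(f"Annual saving if we offload: ${annual_saving:,}")
print(f"Migration pays for itself in {years_to_break_even:.1f} years")
```

The point isn't the specific numbers; it's that the platform team is a fixed cost you pay whether or not you have the workload volume to amortize it.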

As long as the service management of FaaS continues to sit with your organisation, it feels disingenuous to call it Serverless. We lose the per-invocation opex costing model, the benefit of scaling offloaded to a third party, and the well-integrated ecosystem of services in which Serverless developer paradigms really shine.

B) Full disclosure: I work for a software consulting company. We are partners with AWS, Google and Azure. I'm relatively agnostic, although my preferences run roughly AWS > GCP > Azure.

"I would always go for Openshift (not working for Red Hat) since they are much better at getting on-prem to work than k8s, and you have the almost same setup and interaction experience throughout all your clouds."

That's true if we plan to run _everything inside the OpenShift cluster_: our own database, analytics, logging, IAM, secrets managers, etc.

As soon as other cloud-hosted services get involved, the integration becomes really clunky, and our teams end up split-brained between two orchestration layers that aren't well integrated. An example would be networking and databases in AWS: you simply can't do microsegmentation across the OpenShift and cloud networks. Assuming you want to offload databases to RDS (and you should), all your security groups end up allowing traffic from every node in your OpenShift cluster, whether those nodes are tied to the database's app or not. Welcome back, perimeter-based traffic rules!
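A minimal sketch of what that coarse rule looks like in practice, using the boto3 payload shape for a security-group ingress rule (the CIDR, group ID, and port are hypothetical; the actual API call is left commented out):

```python
# Sketch of the coarse ingress rule you're forced into when pods aren't
# individually addressable to AWS: the whole OpenShift node CIDR gets
# access to the RDS port, not just the pods of the one app that needs it.
# All IDs and CIDRs below are hypothetical.

OPENSHIFT_NODE_CIDR = "10.0.0.0/16"   # every worker node in the cluster

ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 5432,                 # e.g. Postgres on RDS
    "ToPort": 5432,
    "IpRanges": [{
        "CidrIp": OPENSHIFT_NODE_CIDR,
        "Description": "all OpenShift nodes, not just the DB's app",
    }],
}

# In real code you'd apply it roughly like this:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-0123456789abcdef0",   # hypothetical RDS security group
#       IpPermissions=[ingress_rule],
#   )
# There's no per-pod principal to scope the rule to, hence the perimeter.

print(ingress_rule["IpRanges"][0]["CidrIp"])
```

Inside the cluster you could pair this with a NetworkPolicy, but AWS on the other side of the boundary still only sees node IPs.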

And since the networking and database parts of the deployment scripts are tied to a specific cloud, I need to rewrite them to move workloads around anyway... so why not just rewrite the whole deployment job to use a native service?



