Bringing Serverless to a Web Page Near You with Hugo and Kubernetes (openfaas.com)
67 points by alexellisuk 59 days ago | 48 comments



While the article is somewhat informative, it is hard to take it seriously. It claims to bring "serverless" to a web page near you, yet one of its prerequisites is: "You need to have a Kubernetes cluster up & running.", which implies a server (several, actually), so not so "serverless" in the end.

The entire "serverless" movement seems to preach something that is impossible. You can't have the internet without a server. You can offload server management and maintenance onto a provider, that is true, but you can't remove it entirely from your stack. So preaching "serverless" architecture is disingenuous and IMHO rather ignorant.


I agree the Kubernetes part of the article is overkill, but the serverless movement is here to stay. I used to be a serverless skeptic like you. However, after playing around with "serverless"[1] and AWS CDK[2], I truly believe serverless architecture is not only possible but preferred in many real use cases.

For example, an API powered by a traditional web server + database can be replaced with lambda/function + dynamo/s3/firestore. Cron jobs can be replaced by cloud scheduler + lambda/fargate/cloud run. As a serverless user, you really don't interact with "servers", but instead a highly available and scalable managed task execution environment.
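To make the "web server + database replaced by function + DynamoDB" pattern concrete, here is a minimal sketch of the kind of Lambda-style handler involved. The event shape, the `make_handler` helper, and the table name are all hypothetical illustrations, not from the article; a real deployment would also need IAM, API Gateway wiring, etc.

```python
# Hypothetical sketch of a Lambda-style GET handler backed by a
# DynamoDB-like table. Everything here is illustrative.
import json


def make_handler(table):
    """Build a handler around any object exposing get_item(Key=...)."""
    def handler(event, context=None):
        # API Gateway proxy events carry path parameters like this.
        item_id = event.get("pathParameters", {}).get("id")
        if not item_id:
            return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
        resp = table.get_item(Key={"id": item_id})
        item = resp.get("Item")
        if item is None:
            return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
        return {"statusCode": 200, "body": json.dumps(item)}
    return handler


# In AWS you would wire this to a real table, e.g.:
#   import boto3
#   table = boto3.resource("dynamodb").Table("items")
#   lambda_handler = make_handler(table)
```

Passing the table in as a parameter keeps the routing logic testable without any AWS account, which is one way to soften the lock-in concerns raised elsewhere in this thread.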

The only downside is each layer of compute abstraction comes at the cost of spending more and more money.

[1] https://github.com/serverless/serverless

[2] https://docs.aws.amazon.com/cdk/latest/guide/home.html


The problem I see is that the OPS code has outgrown your actual application code. The application code you own, and it can be open software. But the deeper you go into the cloud/serverless landscape, the less of your actual product you own and the more you're locked into someone else's. I guess I could be considered an old-timer by now, and although I can see benefits in the New Way, better ways of doing some things, it's the conditions as currently imposed by business models I've got a problem with. There's a smell when you think you're writing software but are really assembling an Amazon App (tm).

You're tying your own software into somebody else's all-encompassing and potentially hostile (to your business goals) framework instead of preferring the library way with clearly defined and exchangeable interface boundaries.

Doesn't bode well if your OPS tools have one company's name in them.

Sort of like: unteach a man to fish, and he'll depend on you forever.


> You're tying your own software into somebody else's all-encompassing and potentially hostile (to your business goals) framework instead of preferring the library way with clearly defined and exchangeable interface boundaries.

We experienced that recently, moving our ETL jobs away from a Hadoop cluster managed by a sister company that was having some issues (as in "all our data engineers resigned halfway through a long-overdue cluster upgrade, now nothing works").

Our data science team found Amazon Glue and fell in love with its ease and simplicity...

...and then spent three months trying to figure out why everything kept breaking. TL;DR - Glue's DynamicFrames come with a bunch of caveats and mousetraps, and a severe lack of documentation or source code.

Oh sure, you can read the Python source code for a DynamicFrame... ...source code that promptly delegates computations to a Java class you can't get the source code for.

After a few months of our experienced data engineers (myself included) saying "Just write a Spark job and run it in EMR", they finally did, and now everything's fine.

But the initial simplicity of Glue got them hooked, and they threw good money after bad on it. And as for those goddamned Glue crawlers...

TL;DR - things like Glue are great for early prototyping, but they tie you down to the AWS APIs and infrastructure. At least with EMR the Spark job you're running could be easily switched to another provider's infrastructure or your own infrastructure without any heavy refactoring.


As a small-timer I don't think I'll ever cloud up. Managing a few servers isn't too bad and is easy to maintain (from both a developer and "dev-ops" standpoint). The cloud abstractions would have been great before Ubuntu's LTS 8-10 year schedule, Nginx and systemd. But now it's just so simple using those and my existing architecture to launch new web services.


Yeah, going into the cloud has great benefits that cost a shit ton.

We're migrating from a traditional DC into AWS because a) our current DC provider makes stupid mistakes that impact us severely and b) it's pretty easy to spin up the same K8s cluster in two availability zones to ensure we're (very nearly) always up.

Stupid mistakes list - "Oops, one of the disks in a machine in your Couchbase cluster had to be replaced, and we replaced the wrong one!"

"Oops, we lost your machine. It's responding to pings, but we can't find it, so we're going to have to go through the DC pulling plugs to figure out where it is"

"Oops, our sys-admin accidentally deleted your OpenStack deployment because he was testing something in the wrong window."

And this is the best DC we've managed to find in Europe over 12 years of operating there.

Going into AWS will nearly double our infrastructure costs, but our business is willing to pay for it for the reliability factor. No point having a self-healing k8s cluster if the underlying machines disappear because someone typed a command in the wrong window.


And when you want to change from AWS, you are completely locked in, no?


If you use one of the open serverless frameworks such as OpenFaaS (which is what this article is about), then no, you're not locked in.

BTW, my team has investigated OpenFaaS as a way to share code between the cloud and local installations, and if we go that route, OpenFaaS looks promising. The architecture is well documented and makes sense.


Using OpenFaaS on AWS doesn't exactly alleviate all of the lock-in. It locks you in to a larger spend. This is because OpenFaaS is heavily dependent on, generally, a k8s cluster that is going to be chewing through bill time 24/7. I think that's the trade-off: you pay to not be locked into Lambda. Personally I don't think that's a bad thing, but I've seen plenty of times where customers would balk because their initial serverless trials cost $5, $10, $15 a month but when running something that would be better for them longer term it would initially be 4-8x that in opex. So they go the "easier" (cheaper) path and build everything around that lock-in tooling.


I think the serverless framework and OpenFaaS are trying to make it so it's not so locked in. But yes, with AWS you are locked in.


I suppose, in my experience, the amount of code you need to write to get all that stuff to happen without manual intervention is usually much greater and more complex than just creating a couple instances and ansibling your code onto them. The "you don't have to manage infra!" wave is technically true, but it's not that you have to manage less, just what you manage is different.


Okay so, serverless means that "I don't manage the servers that my stuff runs on."

Why don't they just call it "Sysopless" or something?


Because lots of companies have been hosting infrastructure elsewhere, with fully managed servers or complete racks as the unit. Now you supposedly don't have to know or care anymore that there actually are real servers and real infrastructure beneath your software.


Yeah, it's the supposedly bit that gets me.


Hi born2discover, I'm the founder and lead of OpenFaaS. Over in the community, we and many others such as Google, IBM, Red Hat believe that "serverless" is an architectural pattern and approach rather than a literal term.

I'd encourage you to check out my video on Serverless 2.0, which draws out some of the issues with "Serverless 1.0" (which I think you are describing).

In 2019, serverless is not about SaaS products anymore. https://www.youtube.com/watch?v=JvXm-oHi5Mg&list=PLlIapFDp30...


Thank you so much for your work! OpenFaaS is an awesome framework.


> "serverless" is an architectural pattern

Ah, a marketing term.



I personally prefer the name FaaS. It's a logical naming scheme where each *aaS moves the maintenance of an increasing part of the tech stack to your supplier: IaaS > PaaS > FaaS.


It's not about the actual full stack being without servers, it's about the end user not having to deal with anything related to servers.

It's just a step further than running instances with any cloud provider. There is still physical machinery in the stack, but you don't have to worry about it anymore.

Even if you don't agree with the definition, it is how most people in IT roles currently use the term, so you'll probably have to learn to deal with it.


I believe "serverless", in the way it is used in this context, means there is no top-down full server, from the hardware all the way up to the OS, available to you exclusively. Instead you just use the services you need for the tasks you have. Think storage buckets for your files, lambda functions for your scripts, load balancers for your traffic, etc. They do run on servers, but you never have access to the server itself.


I would echo this. Believe it or not, AWS also runs many, many servers to power Lambda :-)

Not every term in computing is literal. Think about "the cloud" - is it really in the sky? Of course not and Serverless like aivisol is rightly pointing out is an approach and architecture.

To the developer, it's "serverless" when they don't have to care about infrastructure. So if you're using OpenFaaS Cloud or AWS Lambda, as a developer you literally don't have to care.


Isn't that what webhosting providers have offered for at least a decade? No server management, just resources on a server with a web server, data storage (often MySQL) and the ability to execute code (mostly PHP). You run your software on a server without ever accessing the server itself or having to worry about OS updates.


I think the biggest difference is that a webhosting provider can't scale your PHP resources up and down infinitely in real time. Your software is not exposed to the server but constrained by it. The idea of serverless is that your service is automatically distributed and not constrained by the capability of some web server.


To scale shipping static content, I'd rather look into CDN with proper caching, instead of maintaining ten layers of abstraction just to feel good about how modern my stack is.


Yes, but what about non-static?

Also, a CDN would also classify as "serverless", under the definition used by the article.


Infinitely?


I think OP is quite aware of the distinction, yet I would also argue that "serverless" doesn't apply to having to manage a Kubernetes cluster (on servers). If you are only exposed at the services level, whether those FaaS functions and buckets are provided directly by your cloud provider or implemented with k8s hidden from you, then you can use "serverless". But not if part of the requirement is to manage that k8s stack yourself, and that includes provisioning so-called FaaS on the cluster. "Serverless" it decidedly is not.


Its "serverless" with respect to management of servers...

...not actually without server hardware.


What do you think about "fabless" semiconductor companies, then?


"selfhosted serverless" will change everything!


The article might be a good intro to OpenFaaS, but please do not do this in production.

GitLab Pages and GitHub Pages will gladly host a git branch for you as a static site. GitLab has built-in support for building static sites with any command you want. They have more than enough infrastructure to host a static site for you, and it will take <1 hr to set up.


This is a brilliant piece of modern technical humorous literature, a genre that is still rare and should get much more attention!

The author has written it really well, simulating a style that some cloud marketing people will still take seriously. Of course, every developer with some background in webdev will start to chuckle after a few sentences, but he keeps the flow at a very high level. Great!

I wonder if some kind of NLP was involved and a second article with sources attached will follow.

Very good piece of parody!


Or you could just use Netlify with one click ;)


Netlify is one of those services that just makes perfect sense. Perfect vertical integration across the web stack.


Am loving the innovation, and I appreciate that the assumption here is that if you've already got Kubernetes, then this can be useful. Otherwise, a Kubernetes cluster is quite the prerequisite for a static site generator that can run on the command line or on a build server.


So if you're new to OpenFaaS: it is a platform for building and shipping applications on Kubernetes. We're talking about the big picture here; if you have Kubernetes, you can now deploy static sites, blogs, microservices, functions, and lift-and-shift just about any other code.


Yes, it's true, OpenFaaS is something everyone should try, even beginners. Alex and the community have tons of blogs and workshops that can help you learn and implement production-ready solutions. OpenFaaS is making Kubernetes development easy. For static blogs and websites it's really simple with OpenFaaS. https://twitter.com/openfaas/status/1162414762652835841


Looks like a major pain in the ass to manage. Why not use AWS Lambda or another FaaS? And wait, the whole site is hosted on k8s using some Nginx server? Where's the CDN? And frankly, object storage like S3 would seem a much better fit for the static assets.

If the point of using this is to learn how to use k8s for FaaS, great. If the point is to save pennies and create a hassle-free deployment, I would rethink the approach. I've heard good things about Netlify, although as a tinkerer myself I used AWS and CloudFormation with Sceptre to create my site. As an unrelated note, if you are familiar with React you should probably consider using Gatsby. It's much easier to extend and customize, which happens quite often once you start adding your own little cool widgets.


I've used Hugo static pages, and I've used Kubernetes. Replacing the former with the latter is not a win for simplicity.

Static pages can easily be maintained well by a single person; Kubernetes is a nasty king rat of pain and complexity which requires a team of highly-skilled engineers and architects to do well. Its complexity is worth it, in many circumstances, but it is not — repeat, not — the first thing one should reach for. Or the second, even.


There are a lot better options if you like Hugo or other static site generators...

I have GitHub set up to push my merge requests to AWS Lambda. The lambda downloads the specific Hugo release I chose to run on, generates my pages, and pushes to S3 + CloudFront. Takes less than 10 seconds from merge to publish. No servers. No service fees. Costs $13/year (95% domain fee). Truly "serverless".
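A build-and-publish Lambda along these lines can be sketched roughly as below. This is a hypothetical illustration, not the commenter's actual code: the binary path, site directory, and `upload` callback are placeholders, and CloudFront invalidation and error handling are omitted.

```python
# Hypothetical sketch of a "run Hugo, sync output to a bucket" step.
import mimetypes
import os
import subprocess


def guess_content_type(path):
    """Pick a Content-Type for an uploaded file, defaulting to binary."""
    ctype, _ = mimetypes.guess_type(path)
    return ctype or "application/octet-stream"


def build_and_publish(hugo_binary, site_dir, upload):
    """Run Hugo, then hand every generated file to an upload callback."""
    out_dir = os.path.join(site_dir, "public")  # Hugo's default output dir
    subprocess.run([hugo_binary, "--source", site_dir], check=True)
    for root, _dirs, files in os.walk(out_dir):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, out_dir)  # bucket key mirrors layout
            upload(path, key, guess_content_type(path))


# In Lambda, `upload` would wrap boto3, e.g.:
#   s3 = boto3.client("s3")
#   upload = lambda p, k, ct: s3.upload_file(
#       p, "my-bucket", k, ExtraArgs={"ContentType": ct})
```

Setting an explicit Content-Type matters here because S3 does not infer one, and CloudFront will otherwise serve HTML as a download.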

At some point I want to retry this on GitLab; it seems to support the same flow.


On GitLab the flow is simpler: you can just add a configuration file to run Hugo on the integrated CI and pass the generated files to the hosting:

https://gohugo.io/hosting-and-deployment/hosting-on-gitlab/
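For reference, a minimal `.gitlab-ci.yml` for this flow looks roughly like the sketch below, based on the linked Hugo docs; the image tag and branch name are assumptions, so check the current docs before relying on it.

```yaml
# Minimal sketch of a .gitlab-ci.yml for a Hugo site on GitLab Pages.
image: registry.gitlab.com/pages/hugo:latest

pages:                  # the job must be named "pages"
  script:
    - hugo              # build the site into ./public
  artifacts:
    paths:
      - public          # GitLab Pages serves this directory
  only:
    - master            # or whichever branch you publish from
```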


I would honestly be interested in what the author(s) consider to be the advantage of such a setup over, for example, GitHub Pages, Netlify, S3 + CloudFront, or a plain Apache server?


There is a note near the beginning:

"The goal of this tutorial is not to show you that you need a Kubernetes cluster to run a static website, it’s to show what can be done with OpenFaaS and Kubernetes."

I don't think it is being advocated that you should make this switch, just an informative article showing the steps required to do so.


I use nginx + static html. I deliberately kept it as simple as possible. I wouldn't follow this tutorial unless it was for the purpose of learning how to use Kubernetes and even then, there are probably better tutorials.


I’ve seen where someone has “hosted” an entire site on a router. Would that qualify as serverless?


Serverless is purely a cloud marketing term for the next rendition of a CGI-like solution, the kind we all used to build to make any programming language (C, Python, and so on) run behind a web server.

You can trigger it by other means, such as when a file is added to cloud-based file storage (think Azure Blob Storage, not Google Drive, though it wouldn't surprise me if that's a hook too) or when something happens in your cloud-based database. Or old school: someone visits a URL.

The whole web aspect of serverless is just the most popular.


Haven't looked into it, but I feel the headline qualifies for tech-BS bingo...



