I thought VCs were interested in companies with superstar teams, a killer idea, an established and lucrative monetization mechanism, and a defensible, innovative product in a global market with hockey-stick growth. And, for some reason, those companies are still willing to sign ridiculously unfavorable investment terms with no downside or risk for the VC, at a very low price to the VC but a very high price to the next round of investors, driving up Ponzi-scheme valuations. Oh, and a CEO/founder whom they love at first but who can later be shoved out by the board. Or have I been watching too much Silicon Valley?
So if I have a Kubernetes environment hosting all my stuff, with standardized CI/deployment flows, and one site is static, hosting it from a container like everything else would be a cardinal sin? That's a strangely black-and-white way to put it...
I understand the sentiment and agree up to a point. At the same time, I sometimes use Docker for seemingly trivial problems because the most important requirement is build consistency, and I don't know how to get that without containers. I can see a static site generator genuinely needing a reliable build process. What would you suggest instead? Just manually/automatically test builds?
Disagreed, because an LB like Traefik will easily configure itself by watching docker.sock; otherwise, you'd need to change your LB configuration manually every time you add or remove a site.
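For anyone unfamiliar with the pattern, here is a minimal docker-compose sketch of Traefik v2 discovering a site through container labels (the hostname and service names are made up):

```yaml
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      # Traefik watches the Docker socket and reconfigures itself
      # whenever containers carrying traefik.* labels start or stop.
      - /var/run/docker.sock:/var/run/docker.sock:ro

  blog:
    image: nginx:alpine
    labels:
      - traefik.enable=true
      - traefik.http.routers.blog.rule=Host(`blog.example.com`)
      - traefik.http.routers.blog.entrypoints=web
```

Adding or removing a site is then just starting or stopping a labeled container; the LB config never gets edited by hand.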
One of the beautiful things about a static site is its ability to be served from an object store like S3 as your origin, and cached by a CDN.
From an operations standpoint, you are not responsible for maintaining much of anything; it's fast, highly available, and relatively cheap (no dedicated servers, just bandwidth and storage costs).
Contrast that with a Docker container as your origin: it must be running, and keeping it running is your problem.
If you're optimizing for developer convenience, your traffic is low or not mission-critical, or maybe you have a globally distributed, highly available k8s cluster and that is "the way" your company does all the things... sure, why not.
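As a concrete sketch of the setup above, the whole deploy pipeline can be two AWS CLI calls (the bucket name and distribution ID here are hypothetical):

```shell
# Sync the generated site to the S3 origin; --delete removes stale files
aws s3 sync ./public s3://my-site-bucket --delete

# Invalidate the CDN cache so the new pages are served immediately
aws cloudfront create-invalidation \
  --distribution-id E123EXAMPLE \
  --paths "/*"
```

Everything else (TLS, scaling, availability) is the provider's problem rather than yours.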
Docker is nice because you end up with the same configuration in development and production. There are many hidden details that "just let someone else host your site" or "just rsync your files to a server" gloss over. Who is renewing your TLS certificate? Where do you configure headers, redirects, mime type mappings, etc.? Where do access logs go? How do you update the version of the web server? What effects does that update have?
When you manually do these things, you rely on a bunch of implicit defaults. Maybe your production server and your workstation happen to have the same version of nginx, and happen to set the same defaults. So you can test a change on your workstation and the same change works in production. But more likely, that is not the case. So you get weird differences between development and production, and you only notice when you push to production. That is not ideal. Building an image with your webserver and static files ensures that you see the same things in both places. There is no need to test anything in production, as you have a copy of the exact code and data that is going to be running in production, locally. You can tweak and poke to your heart's content, confident that you'll have the same effect when you push to production. There is no need to maintain documentation about how to build your project and what versions of things you use; you specify them in a machine-readable format and the machine dutifully builds the project correctly every single time.
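To make this concrete, a minimal image for a static site pins the web server version alongside the content (the nginx version and file paths here are illustrative):

```dockerfile
# Pin the exact server version so dev and prod behave identically
FROM nginx:1.25-alpine

# Bake configuration (headers, redirects, mime type mappings) into the image
COPY nginx.conf /etc/nginx/nginx.conf

# Ship the built static files together with the server that serves them
COPY dist/ /usr/share/nginx/html/
```

Running `docker run -p 8080:80` on a workstation then exercises the same server, config, and content that production runs.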
(One disadvantage of clean builds, though, is that sometimes you want old artifacts to exist. Consider a case where you use webpack to generate javascript. Typically, you'll output a bundle like "main.abc123.js" which is loaded from "index.html" via a script tag. What happens when the browser loads index.html from your last build, then the next request goes to an updated server, which says to get "main.def456.js" instead? The page silently breaks, because the server doesn't have a file called "main.abc123.js" anymore. "rsync --delete" has the same problem. And if you never delete anything, you eventually use an infinite amount of disk space. So there is definitely room for improvement here, but "it will probably work if I don't think about it and cross my fingers" is not the improvement we're looking for.)
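One common middle ground is an additive deploy: copy the new build over the old one without deleting anything, so previously referenced bundles stay resolvable, and prune them on a delay. A runnable toy sketch (all file names made up):

```shell
set -e
rm -rf demo && mkdir -p demo/site demo/dist

# The currently deployed site still contains the old hashed bundle,
# which cached copies of index.html may keep requesting.
touch demo/site/main.abc123.js

# A fresh build produces a new bundle and a new index.html.
touch demo/dist/main.def456.js demo/dist/index.html

# Additive copy: nothing is deleted, so both bundles coexist.
cp -r demo/dist/. demo/site/

# Much later, prune bundles that have not been touched in 30+ days:
# find demo/site -name 'main.*.js' -mtime +30 -delete

ls demo/site
```

The same idea works with "rsync" (just drop "--delete") or with object stores; the delayed prune is what keeps disk usage bounded.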
I wonder what the shortest path is to deploying a web application that is fully codable (as opposed to no-code) and that includes user database/management/auth.
I suspect the closest is old-school frameworks like Ruby on Rails and their equivalents in other languages.
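For example, with Rails plus the Devise gem, the user management described above comes down to a handful of commands (the app and model names here are placeholders):

```shell
rails new myapp && cd myapp

# Devise supplies the user model, registration, sessions, and password reset
bundle add devise
bin/rails generate devise:install
bin/rails generate devise User
bin/rails db:migrate
```

From there everything (routes, views, policies) is plain code you can edit, which is the "fully codable" property the parent is asking about.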
Edit: Dan has edited his blog post to address this comment. Good luck, Dan, with Nodewood! See comments below. We should support a fellow founder and HN community member who is here, listening, and working.
Original comment:
It's not clear until the end of the long post that this guy is selling an all-in-one development package.
No problem with that, but it feels disingenuous to make building a SaaS sound super hard, THEN say, "Heh heh, I have the solution for you!"
He'd have been better off saying up front that his product solves the development-complexity problem, which looks like this....
Then at the end you'd say, "Gee, this guy is right," instead of, "Oh, I've been played."
Just did. Thanks for the advice! I spent so much time putting together the list of code that when it came time to plug Nodewood, I didn't have much mental oomph left, so I just put it at the end. Your suggestion is much better.
Seems somewhat disingenuous not to mention that up front.