I have a preloaded domain that would no longer be eligible for the HSTS preload list if I were to use Firebase, because there's no way to include the includeSubDomains and preload tokens in the Strict-Transport-Security header. (Chrome would keep my entry around for now, but Firefox and other list consumers would prune it.) :/
Checking my monitoring logs, I have 26 minutes of downtime attributable to Netlify since 03 March 2017. That's about 99.994% uptime.
If I attribute every single outage to them, it's 43 minutes since 03 March 2017. Some of those were not Netlify errors, but even if we're uncharitable/inaccurate and count them all against Netlify, that works out to 99.98% uptime.
Not bad for a free service.
They're also very transparent about service issues.
And support is very responsive. I recently talked to them about deprecating TLS 1.0 and 1.1, for instance, or providing the option to force TLS 1.2 if users desire. They were quick to respond and helpful.
So if you are seeing unusual issues, get in touch with them, even if you're a free user they'll still talk to you.
For a static site, what else is there to break?
I use Hugo and set up my output folder (where the generated output goes) as a git repo. Generate the site -> git commit/push -> Caddy pulls it in. Caddy has a feature to pull content from a git repo, and when you couple that with the built-in Let's Encrypt support it's dead simple.
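As a sketch, the Caddyfile for that setup might look something like this (assuming the http.git plugin for Caddy v1; the domain, paths, and repo URL are placeholders):

```
example.com {
    root /var/www/site
    # pull the generated site from the repo and re-check every 5 minutes
    git {
        repo     https://github.com/user/site-output.git
        interval 300
    }
    # certificates via Let's Encrypt are automatic once an email is set
    tls you@example.com
}
```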
I stopped using Caddy when I discovered that it didn't support one of the unusual TLDs I had. Maybe I should give it another go, though. That was over a year ago.
I assumed it was because of the TLD, because everything else was otherwise identical to some of the other virtualhosts.
It was a bit of work to get everything running, but there's very little to actually maintain afterward. I'm definitely going to start replicating the setup elsewhere (including my own homepage, which is currently down due to its VPS having failed hard and me not having enough time to rebuild it).
I simply have a travis-ci.org setup that runs "rsync" after "jekyll build". And in case I'm not in the mood for waiting on Travis, I simply "rsync" from my localhost after building.
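For reference, a minimal travis-ci.org config along those lines might look like this (the host, path, and branch are placeholders, not the commenter's actual setup):

```yaml
# hypothetical .travis.yml: build with Jekyll, deploy with rsync
language: ruby
install:
  - bundle install
script:
  - bundle exec jekyll build
deploy:
  provider: script
  script: rsync -az --delete _site/ deploy@example.com:/var/www/site/
  skip_cleanup: true
  on:
    branch: master
```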
Having an automated build system has advantages, though: if you get a PR on your website repository fixing typos and so on, you just have to merge it and the content gets published, so you can do it from your phone. And yes, I have had PRs, since I publish two project documentation websites this way.
Also folks, you don't need a CDN or Cloudflare, or any of that — you just need a healthy Nginx setup hosted at a decent VPS provider. I've had my websites withstand Reddit and HN level traffic just fine, paying $5 per month for hosting about 4 static websites, plus other stuff.
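For what it's worth, a "healthy nginx setup" for a static site doesn't need much. A sketch, with placeholder domain, paths, and certificate locations:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/example.com/public;
    index index.html;

    # compress text assets to ride out traffic spikes cheaply
    gzip on;
    gzip_types text/css application/javascript image/svg+xml;

    # static assets can be cached aggressively by browsers
    location ~* \.(css|js|png|jpg|svg|woff2)$ {
        expires 30d;
        add_header Cache-Control "public";
    }
}
```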
I also hate Medium, Blogger, Wordpress and any of that crap, I hate their bloat and trackers and I do think having your own website published in a Git repository is worth it. Yes, there is a cost in maintaining my websites, but I do so willingly, because they are mine.
PS: shameless plug — https://alexn.org
You are also right about not needing a CDN. My site has occasionally become momentarily popular and my $5 hosting VM hasn't even blinked serving my completely static site. A database is a fine thing, but you don't want to be serving web pages out of one. That's why I finally ditched WordPress.
 https://sheep.horse/tagcloud.html#computing - a complete waste of your time.
More specifically, I like the dry humor and scattershot nature of the content. Reminds me of how the web used to be.
That is literally the nicest thing anyone has ever said about my blog.
(Realistically I probably should have started different sites as my circumstances/interests changed, but it's my blog and I'll do what I want)
I do my editing with Hugo in server mode so I can WYSIWYG-edit my pages, then run a bash script:
# build site from markdown + template
hugo -s ~/sitedir
# post the output to the S3 bucket (a file storage service)
aws s3 sync ~/sitedir/public s3://sitedirbucket
# invalidate the CDN distribution so content delivery is nice and fresh!
aws cloudfront create-invalidation --distribution-id XXXXXXXXXX --paths "/*"
echo "All done"
I like the CodeBuild solution for the times when I'm editing on my phone or a shared computer. I push to GitHub, and CodeBuild handles:
* build (as above, plus asset processing and minification)
* deploy (s3 sync, plus some fiddling to add 301 redirects)
* ping search engines
I keep a lot of drafts and temporary notes in my local checkouts and doing build/deploy on a fresh checkout helps to ensure they don't slip onto the public website.
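A CodeBuild pipeline like that is usually driven by a buildspec. A rough sketch, in which the bucket name and distribution ID are placeholders and the redirect and search-engine-ping steps are omitted:

```yaml
# hypothetical buildspec.yml
version: 0.2
phases:
  build:
    commands:
      - hugo --minify            # build plus minification
  post_build:
    commands:
      - aws s3 sync public/ s3://example-bucket --delete
      - aws cloudfront create-invalidation --distribution-id EXAMPLE123 --paths "/*"
```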
Here's the implementation in the sister comment:
I wonder how much cool stuff you could add to this build gem. Minification, staging, SCSS, etc.
The c9 setup is really nice for online page editing and for compiling the static pages. I'm not quite sure if I would use it in my current workflow, though.
Disclaimer: I am the author of github-bucket.
Is this because of people wanting to host on static-asset only servers (GitHub Pages, S3 Website, etc) or is there some other benefit above simply using any standard blogging software? If it's a question of speed, that's what caching does.
It's simpler on the server, as you say; you can serve the files from pretty much anywhere, and you need fewer resources to do so. I appreciate the cache idea; I used to do it myself with WordPress and still do with MyBB, but it's imperfect and there are always misses, especially if someone is actively cache-busting to DoS your site.
Much less hassle to set up. If you're going to do it 'properly', you'll want to run your dynamic site in a chroot jail, run the PHP process under a unique user per site (especially with nginx), set up unique database users and databases per site, secure your credentials, have version control and update functionality in place, and on and on. There's a lot to do. You can automate it (I have), but it's still annoying and requires monitoring.
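To illustrate the unique-user-per-site point, a per-site PHP-FPM pool might look like this (the names and paths are hypothetical):

```ini
; e.g. a pool.d/siteone.conf dropped alongside the default pool
[siteone]
user = siteone
group = siteone
; each site gets its own socket, accessible only to the web server
listen = /run/php/siteone.sock
listen.owner = www-data
listen.group = www-data
pm = ondemand
pm.max_children = 5
```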
You can move almost anywhere almost instantly, with just a git push/rsync and a DNS change.
Hugely reduced attack surface. It's literally a collection of text files.
The benefits are somewhat reduced if you get someone else to manage your hosting for you, but it remains simpler to move and usually cheaper to host as you need no database service.
Anyway, that aside: So it's literally just the desire to have a zero-footprint blog. I can appreciate that notion but I'm surprised this trend came about with brand new tools as opposed to just packaging up the output of existing blog generators.
Sounds like you should build a WP plugin that generates a static/exportable site. You know the space and clearly understand the market dynamics.
As long as you don't have functionality that relies on the dynamic backend, though, like search; I've seen that used to DoS a site before. And pingbacks. Sometimes even comments (though, to be fair, you can turn those off for parity with a static site).
Plus you'd probably want to do the caching up a layer and not rely on a plugin, maybe a Varnish cache or nginx's FastCGI cache, which adds complexity, cache invalidation, and so on. W3TC and its ilk are good, but to get the most from them you need good control over the server environment, especially for the object store, and you probably want to integrate them with a Varnish server or something anyway. And before any of that, just to run the site, you'll need to tune PHP and MySQL. Not to mention that 'fully' warming the cache on a large dynamic site can take quite a while, if it can be done at all.
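As a sketch of the FastCGI cache option (the zone name, socket path, and timings here are made up):

```nginx
# in the http context: where cached responses live and the shared-memory zone
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:100m inactive=60m;

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/siteone.sock;
        # cache successful responses briefly; invalidation is the hard part
        fastcgi_cache wpcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 10m;
    }
}
```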
Something like Hugo can generate pages in around a millisecond each, so even a site of thousands of pages rebuilds in seconds. That kind of performance just can't be found in a dynamic site, so warming the cache will always take longer than regenerating the static site.
I guess there's just more to be aware of.
> So it's literally just the desire to have a zero-footprint blog.
I don't disagree but I'd say it probably goes further than that. It's just so simple to go static. If you really dig into hosting a dynamic site there is a lot to do to make it work well under most conditions.
> Sounds like you should build a WP plugin that generates a static/exportable site.
There are actually a few good ones out there; I just used one to archive a site. Wget was flaking out on converting srcset URLs (even when I compiled the 1.19 branch, which was supposed to fix it), so I used a plugin to export the site.
Overall, I don't really promote one over the other, they're tools at the end of the day and if one works for a workflow then it's the best!
Web sites are software products. If you think like a coder, you want to run them like a coder. If I were just a writer, I'd probably think very differently.
New thing for me was using GitLab CI/CD. I taught the 'customer' how to edit on the GitLab website and do merges. Now changes are deployed automagically without needing me to get involved.
Best part: no WordPress database I need to worry about!
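For anyone curious, a minimal .gitlab-ci.yml for that kind of setup might be (the image and branch name are assumptions):

```yaml
image: registry.gitlab.com/pages/hugo:latest
pages:
  script:
    - hugo
  artifacts:
    paths:
      - public        # GitLab Pages serves this folder
  only:
    - master
```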
Maybe I’m unaware of potential issues for a static site?
* Build systems besides Jekyll
* HTTPS for custom domains
...but the second problem can easily be solved with Cloudflare and the first one can be solved with a git subtree push. Beyond this it's pretty fully featured.
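To sketch the git subtree part (all names and paths here are hypothetical): `git subtree push --prefix public origin gh-pages` is the usual one-liner, and `git subtree split` below shows the same operation locally without needing a remote:

```shell
set -e
# set up a throwaway repo with generated output in public/
workdir="$(mktemp -d)"
cd "$workdir"
git init -q site
cd site
git config user.email "you@example.com"
git config user.name "You"
mkdir public
echo "<h1>hello</h1>" > public/index.html
git add -A
git commit -qm "build output"
# split public/ into its own branch; pushing that branch is exactly what
# `git subtree push --prefix public origin gh-pages` does in one step
git subtree split --prefix public -b gh-pages >/dev/null
git show gh-pages:index.html
```

The split branch's root is the public/ folder itself, which is what GitHub Pages expects to serve.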
So an attacker can still alter/intercept content between GitHub Pages and Cloudflare before it gets to the visitor.
To some, the illusion of security might be considered more harmful than knowing you have none at all.
For an alternative, GitLab Pages offers HTTPS on custom domains, provisioned by Let's Encrypt I believe.
Netlify does something similar.
Both alternatives also allow any build system you configure.
Also, if you're using Cloudflare on a static site that collects no form data and only has links to external websites, would it matter so much, or is it just as important to keep the potential harms in mind?
The answer to the second question is "I guess it depends".
In one way I think it's more about perception: HTTPS should be HTTPS, and HTTPS should be secure. Not HTTPS part of the way along the connection, then cleartext and insecure for the rest of the trip over the public internet.
That's what a lot of people have trouble with regarding Cloudflare's particular popularisation of this 'broken' HTTPS model.
Also, all the data hoovered up by the NSA et al. builds a picture, maybe of a person, their habits, what sites and content they read, and so on. Thanks to SNI, HTTPS will likely leak the domain, but other than that it secures the rest of the info.
And what if (as I do on my site) I share a PGP key fingerprint? What if that's modified over the insecure portion of the connection? Now any communication by that route might be compromised.
I get that it can be seen as pedantic, but all steps in the connection as a whole need to be secure if https is to remain trusted.
I suppose overall the push is (and should be) towards default encryption and privacy for the visitor. That's something I'd support at least.
Terraform config here: https://github.com/charlieegan3/personal-website/tree/master...
Sounds "just fine" for your own personal blog, sure…