At a glance:
- Mine handles domain registration + ACM verification automatically
- This one wisely uses CloudFormation instead of raw API calls
- This one does apex->www redirects, whereas mine uses the apex and has no redirect
Seems pretty cool!
I also started off in the same manner of implementation - bash scripts wrapping AWS CLI calls - then stumbled upon the more straightforward, template based approach.
EDIT: Googling suggests Netlify offers a build/deploy/hosting pipeline all in one box, which is substantially more than any of the projects mentioned here. These serve a single purpose: simple hosting of static websites.
I keep my "uncompiled" site in a private repo that builds and automatically "deploys" by replacing the contents of the public repo and pushing that up. Source is private, final result is public.
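That mirror step can be sketched in a few lines. Note this is my assumption of how such a script might look, not the poster's actual setup; the demo builds its own throwaway dirs so it's runnable as-is.

```shell
# Sketch of the "replace the public repo's contents" step.
# SRC_BUILD / PUBLIC_REPO are placeholder paths, not the poster's real layout.
SRC_BUILD=./demo/private/_site
PUBLIC_REPO=./demo/public-repo
mkdir -p "$SRC_BUILD" "$PUBLIC_REPO/.git"
echo '<h1>hello</h1>' > "$SRC_BUILD/index.html"
echo 'stale page' > "$PUBLIC_REPO/old.html"

# --delete makes the public tree an exact mirror (stale files are removed);
# --exclude='.git' keeps the public repo's history intact.
rsync -a --delete --exclude='.git' "$SRC_BUILD"/ "$PUBLIC_REPO"/

# ...then, in real use: cd "$PUBLIC_REPO" && git add -A && git commit -m deploy && git push
```

The `git` steps stay in a comment because they only make sense against a real remote; the mirror itself is the interesting part.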
Not saying that there's anything wrong with it in particular, but it arrived a bit too late to set the standard for anything. People compared it to Netlify even when it was first announced.
(Note: that info is from anecdotally looking at Netlify site IPs, I could be wrong)
Fronted or frontended?
* provide (something) with a front or facing of a particular type or material.
* act as a front or cover for someone or something acting illegally or wishing to conceal something.
"he fronted for them in illegal property deals"
* stand face to face with; confront.
Perhaps to use GitHub Pages with private repos, one has to pay?
It's pretty simple to configure nginx for static sites, and by doing it yourself you reduce vendor lock-in to just about nil.
Even if S3 is massively cheaper, $5/month for a tiny VM seems like a small price to pay for being vendor-abstract.
I suppose S3 is way less likely to suffer a meaningful outage than my little VM, but how many 9s do my personal websites actually need?
I used to host WordPress sites for myself and family members. I've now moved nearly all of those sites to Netlify (for hosting) and Forestry (for editing/CMS). I no longer have to worry about malicious hacking attempts, WordPress updates, or anything else outside of the site content.
Here is my post on this transition for those interested: https://dev.clintonblackburn.com/2019/03/31/wordpress-to-jek....
cp * /var/www/html
Yearly maintenance required: apt-get update, apt-get upgrade
View traffic stats: goaccess -f /var/log/nginx/access.log
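For completeness, the nginx side of that recipe can be about this small. This is a hedged sketch: `server_name` and `root` are placeholders to adjust, and the heredoc just writes the file locally rather than into /etc/nginx.

```shell
# Minimal nginx vhost for a static site (domain and root are placeholders):
cat > ./static-site.conf <<'EOF'
server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    root /var/www/html;
    index index.html;

    # Serve files directly; return 404 rather than proxying anywhere.
    location / {
        try_files $uri $uri/ =404;
    }
}
EOF
```

In real use this would go in /etc/nginx/sites-enabled/ followed by `nginx -t && systemctl reload nginx`, with certbot layered on top if you want HTTPS.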
I'd say it's just as easy and seamless to do it yourself on a cheap VPS for a static website. HTTPS isn't that much extra work either.
I've seen way too many people get their boxes trashed to leave an internet-accessible one exposed and unsecured.
I'd say continuous maintenance in response to specific issues. Also, Debian updates don't restart services that rely on updated shared libraries, which means you need to restart nginx after OpenSSL updates. And you need reboots when the kernel is updated. Also...
There's really more to it than just an annual upgrade. You're likely not going to be affected if you ignore this, but why risk it?
Adding extra services like your own cloud storage, email, IRC, etc. just spreads your risk across more moving parts (unless you internally separate them into namespaces/VMs, but then we're really far away from "simple static hosting" territory).
You're right that there are fewer wormable issues these days. But the question is: does your usual approach to security allow you to stay safe when (not if) the next one happens? Feel free to continue in a not-super-secure way for personal, fun things. Just keep in mind that there's more to the story, and the more moving parts you have, the more you need to work to keep things reasonably secure.
I guess I feel like the maintenance cost is worth the knowledge I gain from automating my own infrastructure, but I realize not everyone is interested in devops. I'll also note it costs me very little time - I don't remember the last time I had to do anything actively with it.
Elsewhere in the thread I mentioned vendor lock-in, which does concern me. I also worry about vendor monoculture - if everyone just uses AWS, they gain undue influence over the market - so in some ways I guess my stubborn self-hosting is a small gesture against that.
I see a lot of people complain about how the internet has become a drab, uniform machine that treats people as eyeballs or wallets to be sacrificed to Moloch[1], quite unlike the wild, free-spirited collection of small sites it was back in the late 90s.
I think a lot of that is the price paid for centralization and funding, so again, self-hosting is a small way to fight back just a bit against that.
[1] Moloch in this sense: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
I've never used Forestry, but by the looks of it, it's more of an actual CMS and far more sophisticated than Netlify. That said, it looks over-engineered to me for hosting static websites. But if I wanted a CMS for client websites where I have to hand over control, I would definitely give Forestry a try.
And S3 just holds your HTML files, for super cheap. There's no lock-in concern there. You can easily migrate to nginx in the future if you really want, but start with S3.
CDNs may make a site a bit faster, but for a static site it's unlikely to make much difference if you're on a good host in US/EU or central Asia. If you're hosting in Australia or Japan, maybe it might be a little slower than expected, but still totally usable.
Nginx is unbelievably fast by itself, not to mention the optimizations that are completely unnecessary for a static blog. It's not going to be your blocker.
If you're serving up 20MB of JS and inlined images on each page load, yeah, you may want to rethink that. But we don't need to get wild. My homepage is 9.2KB. Longer blog posts (e.g. ) can clock in at 20KB. HN won't take that down.
For a personal site, who the heck even needs a CDN? The only reason I might use one is if I put up a photography website with huge shots, or if there's a bunch of videos as well.
I run several hundred dollars monthly of infrastructure but my websites are nearly all on a simple VM for about 20€/month on Vultr right now.
Web hosting is only expensive when people run badly optimized infrastructure.
I'll consider moving to a VM if/when the ARM board eventually fails, but it's been running for 6 years so far. I have 6TB of storage, which mostly serves as a NAS but includes about 200GB of photos for the website.
There is no deployment process; the web root is mounted by NFS on my desktop. I can share large files with people just with "mv" or "ln -s".
> how many 9s do my personal websites actually need?
My router seems to crash every 3-4 months, and I need to reset it. There's around 15-30 minutes of power failure every year. I don't worry about this.
The upstream bandwidth is about 60Mb/s, which is fine for almost everything.
Deploys? One line of `scp`
scp -r -i ./certs/maddoxxxnginx.pem ./app/* email@example.com:/var/www/maddo.xxx/
(that deploy script just bulk uploads everything, but that's fine for now. The whole site is measured in KB.)
Last I looked, though, you couldn't deploy to S3 without using tools that work specifically with it.
I guess it's really not that big a deal, but I prefer the genericness of "I'm configuring a webserver and pushing my files to it."
That process can be just about fully automated, even including HTTPS setup if you want that, and then you can use with whatever server provider you like.
I'm a fairly aggressive automator, so I forget that doing it by hand is actually an option.
I don't know why anyone cares about vendor lock-in. Either it's trivial to move an AWS Lambda to a Google Cloud Function because you don't have a lot going on, or it's not trivial to move things even between your own servers because they're under huge load and you have a considerable amount of data to migrate under complex conditions.
Moving around is either hard or easy based on things that don't really have anything to do with vendor lock-in.
I recently moved one of my k8s clusters from GCP to AWS; even terminology changes can introduce a lot of awkwardness.
As an aside, I genuinely wonder under which circumstances a CDN will be useful for a static website nowadays.
I have a static website that has been on the HN homepage a few times and got picked up by the Chrome mobile recommendations, and nginx with HTTPS and a slightly tweaked configuration never had a problem handling the traffic, even on the smallest DO droplet.
Edit: Thanks for these replies.
After so many years I still can't really understand how easily people hand over almost complete control over their site to someone else, just because everyone else does. It's like handing over your e-mail account passwords when LinkedIn started. Yes, CloudFlare, Google and others are helping you, but there is a price to pay that might not be immediately visible.
That's the other odd part about this complaint: you're trusting a company like GitLab not to break their terms of service, which is a potential factor to consider but also one where they'd have severe negative outcomes to their business if they went rogue. Since you're already trusting a number of other parties, why is this one so much scarier?
You are giving them everything they'd need to obtain a DV certificate for your domain, though. You can stop them from using it at any time just by changing the DNS records, but you'd need to wait at least two years (825 days for maximum TLS certificate duration) before you could be certain any certificates they had been issued before that point had expired.
The first hit is brutal. I won't name the CDN since I'm not an expert, but it doesn't take long for the cache to go cold (minutes), and once it's cold even the cached hits take 400ms.
Is 400ms really a dramatic reduction in latency?
Netlify has been awesome and it made it stupid easy to combine our www site on Webflow with a hugo static blog in a subfolder (/blog). This might be my favorite web publishing workflow ever.
If you haven't tried Netlify yet, definitely give it a look.
Sometimes it’s nice to understand how all the pieces fit together, instead of using an automated system!
The $0.50 is the monthly cost of the Route 53 hosted zone; the CloudFront and S3 costs typically amount to pennies, but of course it depends on traffic.
docker run --rm -e "JEKYLL_ENV=production" -v "$PWD/src:/srv/jekyll" -it jekyll/jekyll:3.8.5 jekyll build
docker run --rm -it -v "$HOME/.aws:/root/.aws" aws-cli aws s3 sync src/_site s3://www.<mydomain>
docker run --rm -it -v "$HOME/.aws:/root/.aws" aws-cli aws cloudfront create-invalidation --distribution-id <mydistribution> --paths "/*"
I started with a setup similar to your diagram and tweaked it when I realized S3 didn't serve index.html when the URL was just the parent "directory", i.e. example.com/foo/ doesn't resolve to s3://example.com/foo/index.html. To get this working I had to write a bit of JS in a Lambda function and deploy it at the edge of my CloudFront distribution to do some URL rewriting.
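The rewrite itself is tiny. The real thing is a few lines of JS running on CloudFront's viewer-request event; the URI mapping it implements is sketched here in shell just for illustration, and the exact heuristics (e.g. treating a dot as "has a file extension") are my assumption, not the poster's code.

```shell
# URI mapping of the kind a Lambda@Edge function applies for S3 origins:
rewrite_uri() {
  case "$1" in
    */)  printf '%s' "${1}index.html" ;;  # /foo/   -> /foo/index.html
    *.*) printf '%s' "$1" ;;              # /a.css  -> unchanged (has an extension)
    *)   printf '%s' "$1/index.html" ;;   # /foo    -> /foo/index.html
  esac
}

rewrite_uri /foo/   # -> /foo/index.html
```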
Given that's the behavior most people expect, might be worth considering?
I'd definitely like to add more variants of the default stack. At the minimum, I'm sure there are folks that prefer `www` redirects to the apex domain, or removing the `www` subdomain altogether.
I was mostly going for a DIY solution since I wanted to "own" the bits being deployed while remaining as close to the infrastructure as possible. Providing a hosted service somewhat moves away from the DIY spirit; I suppose additional tools/UIs could be offered to simplify setup and deployment and still run everything directly on AWS, but at that point one might be inclined to just move to one of the other hosted solutions for the simplicity.
A what? In the majority of the world, copyright has been automatic for well over a century.
How you get copyright is: you make a work. No need to put anything else on it. IIRC only a handful of countries aren't signatories to the Berne Convention.
In the USA you can register the work in order to get better treatment in court, but a copyright notice hasn't been required for about 30 years - is that what you're referring to?
FWIW the license is very clearly MIT, https://github.com/cloudkj/scar/blob/master/LICENSE.
And regarding the license: they have added the MIT license to the repository.
Running this project on AWS could give a cloud beginner an interesting way to get exposed to many concepts. Now I just have to figure out what static website I want to run on it!
Please do the same for running your own scalable WordPress install!
I will be staying with Netlify.
I’m confident I could figure out how to do something much more complicated. But I want to focus on other things, and it’s nice to not have to think about it.
It’s still a cool project though, since it shows exactly how many problems Netlify solves for us
Also, maybe consider configuring a logs bucket for the CloudFront logs?
Netlify assumes a version control repository that you can pull from, run a build step, and then host static files from. The build tools are open source, the output is static and trivial to download and rehost, and the repository is git meaning one clone is all you need to port to any other service.
Where exactly is the vendor lock-in?
Netlify's playground is easy to use for setting this up, but I'd also like to have this available in a standard format - just as an escape hatch in case I need it.
https://www.htaccessredirect.net is there, but I'm thinking even less configuration if that's possible.
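FWIW, what you click together in Netlify's UI can also live as a plain `_redirects` file committed at the site root, which at least gives you a text record of the rules. It's Netlify-specific syntax, but mechanical to translate to .htaccess or nginx `return` rules later. Domains below are placeholders:

```
# _redirects — force apex to www, plus an ordinary path redirect
http://example.com/*   https://www.example.com/:splat  301!
https://example.com/*  https://www.example.com/:splat  301!
/old-path              /new-path                       301
```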
I didn't run analytics so I can't say how many hits it got, but traffic was probably fairly average for a personal site.
- S3 costs: $0.10/month
- Route53: $0.50/month ($0.50 per hosted zone)
S3 costs could be lower - I have other buckets with stuff that counts towards my cost.
Your abstraction is nice, but the learning curve for such a setup is incredibly high for a newcomer.
I have several simple games hosted on Github Pages using the storage API which is on that list.
1. Setup public repo with Hugo project
2. Add Travis CI integration with GH Pages
3. Use CloudFlare for free SSL + other goodies
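For step 2, a sketch of what the `.travis.yml` might look like, assuming the Travis `pages` deploy provider with a `GITHUB_TOKEN` set in the repo settings (the Hugo version/URL is just an example to pin to whatever your site builds with):

```yaml
language: minimal
install:
  # grab a Hugo release binary (example version, adjust as needed)
  - curl -sL https://github.com/gohugoio/hugo/releases/download/v0.55.6/hugo_0.55.6_Linux-64bit.tar.gz | tar xz hugo
script:
  - ./hugo            # builds the site into ./public
deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GITHUB_TOKEN   # set in Travis repo settings, not in the repo
  local_dir: public
  on:
    branch: master
```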
Why would anyone need this?