I recently wrote a tutorial on setting up a static site with S3 and CloudFront: https://www.davidbaumgold.com/tutorials/deploy-static-site-a... I'm curious whether others would find it useful?
It's very fast because it only pushes diffs. I use a bit of command-line scripting to create a zip archival copy (for historical reasons) and to push the site to AWS just by doing:
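For reference, a diff-based push like the one described usually boils down to a single AWS CLI call; a minimal sketch (bucket name and paths are hypothetical, not necessarily the commenter's exact command):

```shell
# Upload only changed files; delete remote files that no longer exist locally
aws s3 sync ./public_html s3://my-site-bucket --delete
```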
PS: As I pointed out below, I'm the CTO of Netlify.
To be fair, setting up AWS for the first time is a real PITA and I happen to have had it all set up (IAM, buckets, command line, etc) already, so my barrier to entry was low. Had I known of Netlify when I first moved my site to AWS I probably would have gone with Netlify for the convenience and free HTTPS cert.
Not to dump on Netlify, just reacting to those sly little not-really-equivalent promotions.
Actually, the completely free tier gives you a custom domain and HTTPS. It also gives you Continuous Delivery, again for free.
And for open source projects you even get the Pro Tier for free.
(discl.: As written in post below, and in my profile description I'm from Netlify)
(I don't currently use it, but I have in the past, primarily because they'll build off of your own master branch.)
Netlify can do everything you explain there with a single command. Check this video out: https://www.youtube.com/watch?v=IfFenanuRnc&feature=youtu.be
Mom and Pop's Web Hosting Shop
Not sure why such a topic needs more than about one paragraph's worth of commentary. Why does this guy need support for all these fancy features? It's a static site!
Do people simply not pay attention to history? Leave it to the "hackers" to complicate something as simple as static hosting.
As soon as you're doing more than a junk website that you check occasionally, and you care about the actual user experience, you need to start considering things like CDNs, cache invalidation (global consistency matters), and CD integrations. This is why you have a growth of static site hosting services (and hence the article).
Some good references: http://www.staticgen.com/, http://jamstack.org/, https://www.smashingmagazine.com/2015/11/modern-static-websi...
also (talking about 1994 sites): http://www.warnerbros.com/archive/spacejam/movie/jam.htm is alive
You can invalidate cache with fake (ignored) URL parameters (http://foo.html?hello) or HEAD section declarations.
CDNs are no different, your links simply point elsewhere. Presumably some finishing script could capture your CDN'd data and copy everything to the appropriate place.
How does continuous delivery integration even have anything to do with transferring files to a hosting drop? How is it that an FTP drop is insufficient for integration testing?
A static site host is simply a webserver returning the HTML file you requested, nothing more and nothing less. Everything relevant that the server returns is under the control of the developer who made the page. The things you describe are value-adds; or, as I like to call them: crutches.
But it is your own code, which means dev cost & maintenance. Hence there are services for it. Could you set up your own DB? Rack and stack your own servers? Absolutely, but AWS is way easier. It is about letting the developer work on the part that matters, not rote problems that have already been solved.
To specifically address your cache invalidation: (1) you have to manually invalidate _each_ resource, and you have to do that atomically or you run into issues with different versions being served globally. Or (2) you release each of your resources to a versioned destination (e.g. <img src="v2/stupidimage.jpg"> -> <img src="v3/stupidimage.jpg">). Again, yes, it is straightforward (not to say easy), but it is tedious, error-prone, and really just annoying to do. Because you actually want to use something like the SHA-1 of the content (better cache hits). Again, you have an issue around the atomic behaviour of updating the site.
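The content-hash naming scheme mentioned here can be sketched in a few lines of shell (file names and paths are made up for the demo):

```shell
#!/bin/sh
set -e
# demo asset (hypothetical)
mkdir -p assets dist
printf 'body { color: red }\n' > assets/site.css
# Name the released copy after the SHA-1 of its content, so a changed
# file gets a new URL and caches can never serve stale bytes.
hash=$(sha1sum assets/site.css | cut -c1-12)
cp assets/site.css "dist/site.$hash.css"
ls dist
```

Your HTML then references `site.<hash>.css`, and the old version can stay live until every page pointing at it is gone.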
And then you need to make sure that you set your headers right (which I always screw up). This is your ETag and cache-control headers, which force the browser to do a conditional GET request.
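Setting those headers at upload time is one concrete way to handle it; a sketch with the AWS CLI (bucket and paths hypothetical -- S3/CloudFront emit the ETag for you, but cache-control you have to set yourself):

```shell
# Hashed assets never change under the same name, so cache them "forever"
aws s3 cp ./dist s3://my-site-bucket/assets \
    --recursive \
    --cache-control "public, max-age=31536000, immutable"
```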
And continuous delivery is about removing the person from the pipeline of push -> deploy. That means that even if you have a script that does your FTP drop, you have to configure it. Easy enough in something like CircleCI, Travis, or (for the daring) Jenkins. But you still have an issue surrounding atomic actions: FTP uploads _take time_, and if you're constantly serving traffic this can lead to odd problems around "what does the customer see". Those are real issues for sites, usually unreported by the user (they just refresh, but it colors the "feeling" of your site).
The other part is that it raises the barrier of entry to website development. Should you need a neckbeard and a CS degree just to make a site for your mom? Hosting services like GitLab, GitHub Pages, Netlify, S3, and the like try to make it easier.
Yes - you can solve this all yourself; there are a finite number of checkboxes you need to...well...check. But just as you don't really want to rack and stack all your own servers, do you really want to spend your time thinking about that?
On the dev cost and maintenance bit -- again, we're talking static sites. If it needs a database it's not static. If cache invalidation is such a big deal, presumably because you need it a ton for something, then I would ask if whatever you are working on is really static? Or is someone just shoehorning dynamic behavior into prerendered HTML? Is the site just incredibly high-traffic?
My bigger beef is with this proliferation of unnecessary tooling, how little it actually does, and the amount of learning and knowledge required not just for the process the tool manages, but also for the tool itself as well as its problems. You have to practically be a neckbeard with a BS in big-company bullshit software engineering tools to make the right tooling choices -- not so different from the programming neckbeard (except she's closer to the metal--and the process).
As an example, code that I inherited was configured to deploy with Capistrano... which was great, when it worked, but it would fail and all it was really doing was copying files to a new folder, symlinking the "current" folder to this new one, and restarting Apache (by the way-- here's a solution for your atomic copies). Sure, Capistrano abstracted deployment details away, but really, how many were there to begin with? Changing a couple development techniques and reducing deployment to a carefully-written 8-line shell script has eliminated nearly every problem we had related to deployment, reduced the architectural complexity of a part of our operations that we really do not want to care about, and given us something that can be taught to (and reasoned about by) new users in a matter of minutes -- all because we teach and stick close to the actual process of what we're doing. (It is WAAAAY easier to open up the script and say, "so this is where we copy the files over, this is where it decides which files are in the CDN, here's where it looks up that criteria..." than it is to try and guide new technical users through the thicket that is CI documentation)
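The symlink-swap deploy described above can be captured in a handful of lines; a sketch (paths are hypothetical, and GNU `mv -T` is assumed for the atomic switch):

```shell
#!/bin/sh
set -e
# demo build (hypothetical content)
mkdir -p build
echo '<h1>v2</h1>' > build/index.html
# copy the build into a timestamped release folder
release="releases/$(date +%s)"
mkdir -p "$release"
cp -R build/. "$release/"
# Swap the "current" symlink; creating a new link and renaming it over
# the old one makes the switch a single atomic rename(2).
ln -sfn "$release" current.new
mv -T current.new current
echo "now serving $release"
```

The web server only ever sees `current/`, so visitors get either the old release or the new one in full, never a half-uploaded mix.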
I couldn't agree more: too much tooling can abstract away what is usually simple - just serve some HTML. I think that the whole API economy and saas/paas stuff really has to be evaluated carefully. You have business considerations around lock-in, time to integrate vs. time to build your own, etc. I think that they work really well when you're building something simple, but there is a range of site sizes where they are more of a hindrance. The decision to use a service should be about what it gives you, not because it is cool.
Aside: I have totally been that engineer that has made something "clever". I am sure there are other engineers that curse me for what I thought was a great tool b/c I looked at the site for 0.1 seconds (sorry!).
I really wanted to address the talk about static.
Let's take the instance of a blog (like any of the heroku/rails tutorials out there). Yes, you must have a canonical place for the copy to live, be it in a db, flat files on disk, or in your git repo. But you don't need to have the actual request go to the origin for that info and then jam it through some jinja/unicorn/etc template just to render a silly article to the end user. When you write that article, you know what that page is going to look like, so _why dynamically generate it_? This is the way that static can work: generate all the versions of the content and rely on JS to do magic on the frontend (https://www.destroyallsoftware.com/talks/the-birth-and-death...). Removing the whole call back to the origin db for what is essentially static content. This is obviously going to be faster than a DB query + template render + network traffic, as well as more secure. It's just an HTTP GET, which makes for a hard exploit vector.
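The "generate everything at publish time" idea is simple enough to sketch: render each article through the template exactly once, in a build step, instead of per request (the content and template here are made up):

```shell
#!/bin/sh
set -e
# demo content: two "articles" (hypothetical)
mkdir -p content out
echo 'Hello from post one.' > content/one.txt
echo 'Hello from post two.' > content/two.txt
# build step: render every article to a plain HTML file, once
for f in content/*.txt; do
  name=$(basename "$f" .txt)
  {
    echo '<html><body><article>'
    cat "$f"
    echo '</article></body></html>'
  } > "out/$name.html"
done
ls out
```

Everything in `out/` can then be pushed to any dumb file host; no origin server runs at request time.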
Now, does this extend into the arena of apps (React, Angular, the newest fanciest JS framework)? The actual assets are also static, no? They should be served exactly the same as the HTML we have. Then it is up to the JS to query whatever service/API you want and automagically generate some HTML.
The big thing is that services like wordpress/drupal/rails have made it very easy for people to build sites on a classic LAMP stack, but that is kinda flawed in a lot of ways. WordPress's plugin system essentially lets you run other people's code on your server. That is a dangerous game to play, all to do something that doesn't even need a server in the first place. Why risk it when you don't need to? And you'd get some nice improvements if you don't. People shouldn't even need to know what a LAMP stack is to make their business site.
Now is this approach right for every site? Nopezzzz. I don't believe in silver bullets, but there are a lot of sites that fit this mold. And it is a different approach to building your site out.
Either way - sorry to hear about Capistrano. Shell scripts ftw (though I have some that are terrible out there too).
I don't see why. As soon as you use hashed names, you can upload all the content, without removing the old one. Then either replace the html files, or relink the main directory. Sure, if you're using some CDN that doesn't recheck very often, you need to invalidate the pages themselves - but that's regardless of the way of hosting.
FTP isn't nearly as cool today as it was when I published one of my first static sites circa 1997 (and that is truly a "static" site -- it hasn't been updated since 1998). Why do things the easy way?
Everything old is new again, I guess.
Gitlab even automatically builds with any static generator you want and deploys it.
Check this blog posts out:
You're absolutely right :) We did change it recently. Now Netlify Pages (the Free plan) includes a custom domain and HTTPS. It also gives you Continuous Delivery, again completely free of charge.
Lastly, any open source project gets the Pro Plan (normally $49 per month) for free as well :)
I mean, the 3rd link to "static site hosting" on google is amazon. (1st netlify)
There are also multiple issues with the Projects section on the author's homepage, including multiple sites with descriptions of "This is an example description," and one broken link.
EDIT: Looking at the OP's submission history, he has 6 submissions about or related to Netlify in the past 30 days.
As it is, I just run it in my amazon VPS that I have for other things anyway.
I'm sure you meant that.
Edit: we host all our sites - main site, engineering blog, press kit, and so on using Cloudfront in front of S3, and not only is it all-the-way fast (which we get comment after comment on) but it's cheap as all getout. https://tech.flyclops.com/posts/2016-04-27-flyclops-sites-st...
We work with several cloud providers to offer a better experience at the CDN level and manage caches for you. Our CDN also allows proxying end-to-end encrypted connections, so you can use it to host front-end apps and redirect requests to backend servers somewhere else.
We use http://deploybot.com but there are many others.
Edit: As informed, I was confusing cloudfront and cloudflare.
If you have a single HTML page which you edit manually and enjoy uploading to S3 using Transmit, then sure, that's a legitimate workflow, and I used it for years before migrating all of my sites to Netlify. Once your front-end needs its own build process, there's a huge benefit in using a service like Netlify to run your builds for you. This also gets you into the workflow of using source control for your front-end (you're either doing it already or editing HTML files locally), and it's just so convenient when your commits trigger instant builds+deploys - one less thing to worry about. In fact, Netlify's new Deploy Previews and Deploy Contexts, which build+deploy as many of your branches as you'd like, are enabling completely new workflows that genuinely help teams scale their capacity, because they spend less time on the mechanics of everything.
Like some have mentioned, Netlify is like 9 tools built into one service, taking care of you all along the way, and of course there's a free plan which beats GitLab/GitHub Pages every day of the week, because that's what the company is set up to do; it's not just a side feature they're maintaining. Netlify serves small personal blogs and the main sites of billion-dollar companies, and spends its resources further developing tools to make developers' and devops people's lives easier.
So once you commit and Netlify builds, it also does atomic cache invalidations, deploys to a CDN, and offers integrated pre-rendering/form-handling, password protection, snippet injection, and many more features to make life simpler. Can you do all these things manually on your computer or on a VPS? Of course you can. But are we developers lazy, and do we enjoy tooling and services that let us spend time coding as opposed to devops'ing what others have already commoditized? Yep. If you prefer your own Git installation and don't see benefit in using GitHub/GitLab/Bitbucket/etc., then Netlify is likely not for you.
Disclosure: I'm a Netlify investor and avid user.
For instance I recently purchased small VPSs at $11/yr (yes, per year) with the following specs: 1 vCPU, 768MB ram, 15GB disk, 3GB monthly bandwidth, 1Gbps link, 1 IPv4, 30 IPv6 (and "DDoS protection" whatever that means).
I use them for example to host various copies of my website (as a Tor Hidden Service (.onion), as a EepSite (.i2p), on IPFS, and on ZeroNet). I also have one that I only use as MX backup since I self-host my emails.
None of this would be possible even on multi-dollar-per-month static hosting offers, or even on AWS as some suggest here, and yet it costs me less than a dollar per month :).
There are decent providers on there but caveat emptor.
What is so wrong with OpenVZ? (Which is indeed the technology at use.)
Netlify CTO here.
It's great that a combination of those services works for you. Our initial tier is completely free, $0. No costs per traffic nor storage, unlike S3.
Please, feel free to send questions and I'll be happy to answer them.
scp -r ./public_html sharedhosting:
Then again if cloudfront + s3 can indeed come in at under $1 per month then I can see why you'd go down that route. I'd like to see what kind of sites you can host for that cost.
Initial setup might take an hour but further deployments are pretty instant.