[flagged] Hosting My Static Site (dajocarter.com)
52 points by gk1 on Aug 31, 2016 | 71 comments



The author didn't even consider hosting on AWS directly, using S3? It's super cheap -- the biggest expense is buying a domain and hosting DNS. And HTTPS is straightforward as well, thanks to Certificate Manager!

I recently wrote a tutorial for how to get a static site set up with S3 and CloudFront: https://www.davidbaumgold.com/tutorials/deploy-static-site-a... I'm curious if others would find it useful?
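
If it helps as a taste, the core of the S3 side boils down to a couple of AWS CLI commands -- a minimal sketch, assuming a bucket named after your domain and a public-read bucket policy already in place:

    # create the bucket and enable static website hosting
    aws s3 mb s3://example.com
    aws s3 website s3://example.com \
        --index-document index.html --error-document 404.html

CloudFront and Certificate Manager are then layered on top for HTTPS; the tutorial walks through that part.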


I was just about to say the same! I've been VERY happy with my AWS static hosting via cloudfront. It's cheap, fast, and reliable. I can even deploy my site from the command line with this [1]!

It's very fast because it only pushes diffs. I use a bit of command-line scripting to create a zip archival copy (for historical reasons) and to push the site to AWS just by doing:

    $ ./deploy.sh
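
For the curious, the script itself is roughly this shape (paths are specific to my setup, so treat it as a sketch):

    #!/bin/sh
    set -e
    # keep a dated zip archival copy (for historical reasons)
    mkdir -p archive
    zip -qr "archive/site-$(date +%Y-%m-%d).zip" public/
    # push only the diffs to S3 (s3_website reads s3_website.yml)
    s3_website push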

[1] https://github.com/laurilehmijoki/s3_website


Nice guide. I have my personal site on the same AWS S3/CloudFront combo and effectively only pay for the domain name ($12/yr) and the certificate (~$35 for two years?). AWS invoices me monthly for ~$0.05/month in fees, with a credit that erases the charge, so I'm not actually paying for hosting. If you're unfamiliar with AWS it can be daunting, but it's mostly a one-time setup; after that you just upload changed files whenever you update the site. That was a better choice for me than $5/$10/whatever each month for little or no additional benefit. I'd recommend it for anyone with a static site. I think you'd need a lot of traffic and/or updates to the site to incur $10/month in fees on AWS.


I agree with you, $10 in AWS fees sounds like a lot of traffic. That's also why Netlify has a free tier; we leave paid plans for people with more traffic. We even have a redundant DNS service for people who really, really need it. For your needs it's completely free of charge and hassle :)

PS: As I pointed below, I'm the CTO of Netlify.


I was just looking at your pricing. :) All in all Netlify does offer a compelling service for this scope. I'm also interested in the Pro tier being free for OSS, that's fantastic!

To be fair, setting up AWS for the first time is a real PITA and I happen to have had it all set up (IAM, buckets, command line, etc) already, so my barrier to entry was low. Had I known of Netlify when I first moved my site to AWS I probably would have gone with Netlify for the convenience and free HTTPS cert.


Great tutorial. Really well written; I'll check out what else you've written for sure. It is, however, ~20 pages long. And all of it can be done with a _single_ command line in Netlify. Just saying :)


For you folks keeping track at home, Netlify is $108/yr for the lowest tier that lets you use a custom domain, while grandparent's advice is about $7/yr ($6/yr for Route 53 + "negligible" hosting and bandwidth).

Not to dump on Netlify, just reacting to those sly little not-really-equivalent promotions.


That's not correct.

Actually, the completely free tier gives you a custom domain and HTTPS. And it also gives you Continuous Delivery. Again, for free.

And for open source projects you even get the Pro Tier for free.

(discl.: as written in a post below, and in my profile description, I'm from Netlify)


It looks like I was incorrect, and ChristianBach is right. I'm sorry for spreading misinformation. I misread their pricing info.


If you're really a cheapskate and don't wanna pay $6/year for Route 53, Hurricane Electric offers free anycasted DNS hosting [0] for personal use.

(I don't currently use it but I have in the past, primarily because they'll slave off of your own master.)

[0]: https://dns.he.net/


A quick glance indicates your tutorial is the right one for me. I'll follow it and update my personal site this weekend and perhaps provide you feedback?


My personal site uses S3, but I hadn't thought to set up cloudfront in order to enable SSL. This is perfect, thanks!


I wouldn't call using AWS straightforward unless you're a developer. Heck, your own tutorial is about 10 pages/screenfuls long.


that tutorial is pretty awesome, very well written.

Netlify can do everything you explain there by running one single command line. Check this video out: https://www.youtube.com/watch?v=IfFenanuRnc&feature=youtu.be


Literally ANY host that gives you an FTP drop works for static sites.

Bluehost

Dreamhost

Namecheap

Godaddy

Mom and Pop's Web Hosting Shop

etc etc

Not sure why such a topic needs more than about 1 paragraph's worth of commentary. Why does this guy need support for all these fancy features? It's a static site!

Do people simply not pay attention to history? Leave it to the "hackers" to complicate something as simple as static hosting.


So static sites really have come a long way from the 1994 Geocities versions. There are people who statically build sites with thousands of pages. The idea that you don't need the whole backend we've all built before is growing (look at things like the serverless movement).

As soon as you're doing more than a junk website you check occasionally, and you care about the actual user experience, you need to start considering things like CDNs, cache invalidation (global consistency matters), and CD integrations. This is why static site hosting services are growing (and hence the article).

Some good references: http://www.staticgen.com/, http://jamstack.org/, https://www.smashingmagazine.com/2015/11/modern-static-websi...

Also (talking about 1994 sites): http://www.warnerbros.com/archive/spacejam/movie/jam.htm is still alive


Those are ages-old issues that have had solutions before the creation of all this fancy tooling.

You can invalidate cache with fake (ignored) URL parameters (e.g. foo.html?hello) or HEAD section declarations.
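
In HTML that's just a version bump in the query string (hypothetical asset; bump the parameter whenever the file changes):

    <link rel="stylesheet" href="/css/site.css?v=2">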

CDNs are no different, your links simply point elsewhere. Presumably some finishing script could capture your CDN'd data and copy everything to the appropriate place.

How does continuous delivery integration even have anything to do with transferring files to a hosting drop? How is it that an FTP drop is insufficient for integration testing?

A static site host is simply a webserver returning the HTML file you requested, nothing more and nothing less. Everything relevant that the server returns is under the control of the developer who made the page. The things you describe are value-adds; or, as I like to call them: crutches.


Absolutely you can solve this all with your own code. It is totally reasonable.

But it is your own code, which means dev cost & maintenance. Hence there are services for it. Could you set up your own DB? Rack and stack your own servers? Absolutely, but AWS is way easier. It is about letting the developer work on the part that matters, not on rote problems that have already been solved.

To specifically address your cache invalidation: (1) you have to manually invalidate _each_ resource, and you have to do that atomically or you run into issues with different versions globally; (2) you release each of your resources to a versioned destination (e.g. <img src="v2/stupidimage.jpg"> -> <img src="v3/stupidimage.jpg">). Again, yes, it is straightforward (not to say easy), but it is tedious, error-prone, and really just annoying to do, because you actually want to use something like the SHA-1 of the content (better cache hits). Again, you have an issue around the atomic behaviour of updating the site.

And then you need to make sure you set your headers right (which I always screw up): your ETag and Cache-Control headers, which force the browser to do a conditional GET request.
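
To make that concrete, here's a sketch of the hash-and-upload dance with the AWS CLI (bucket and file names are made up):

    # name the asset after its content hash so it can be cached forever
    hash=$(sha1sum css/site.css | cut -c1-8)
    cp css/site.css "build/css/site.$hash.css"
    aws s3 cp "build/css/site.$hash.css" "s3://example.com/css/site.$hash.css" \
        --cache-control "public, max-age=31536000"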

And continuous delivery is about removing the person from the pipeline of push -> deploy. That means that even if you have a script that does your FTP drop, you have to configure it. Easy enough in something like CircleCI, Travis, or (for the daring) Jenkins. But you still have an issue with atomicity: FTP uploads _take time_, and if you're constantly serving traffic this can lead to odd problems around "what does the customer see". Those are real issues for sites, usually unreported by users (they just refresh - but it colors the "feeling" of your site).

The other part about this is that it raises the barrier of entry into website development. Should you need a neckbeard and a CS degree just to make a site for your mom? Hosting services like GitLab, GitHub Pages, Netlify, S3, and the like try to make it easier.

Yes - you can solve this all yourself; there are a finite number of checkboxes you need to... well... check. But just as you don't really want to rack and stack all your own servers, do you really want to spend your time thinking about that?


I'll agree, cache crap was and still is a pain in the ass to deal with.

On the dev cost and maintenance bit -- again, we're talking static sites. If it needs a database it's not static. If cache invalidation is such a big deal, presumably because you need it a ton for something, then I would ask if whatever you are working on is really static? Or is someone just shoehorning dynamic behavior into prerendered HTML? Is the site just incredibly high-traffic?

My bigger beef is with this proliferation of unnecessary tooling, how little it actually does, and the amount of learning and knowledge required not just for the process the tool manages, but also for the tool itself as well as its problems. You have to practically be a neckbeard with a BS in big-company bullshit software engineering tools to make the right tooling choices -- not so different from the programming neckbeard (except she's closer to the metal--and the process).

As an example, code that I inherited was configured to deploy with Capistrano... which was great, when it worked, but it would fail and all it was really doing was copying files to a new folder, symlinking the "current" folder to this new one, and restarting Apache (by the way-- here's a solution for your atomic copies). Sure, Capistrano abstracted deployment details away, but really, how many were there to begin with? Changing a couple development techniques and reducing deployment to a carefully-written 8-line shell script has eliminated nearly every problem we had related to deployment, reduced the architectural complexity of a part of our operations that we really do not want to care about, and given us something that can be taught to (and reasoned about by) new users in a matter of minutes -- all because we teach and stick close to the actual process of what we're doing. (It is WAAAAY easier to open up the script and say, "so this is where we copy the files over, this is where it decides which files are in the CDN, here's where it looks up that criteria..." than it is to try and guide new technical users through the thicket that is CI documentation)
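
For flavor, the script is essentially this shape (paths and the reload command are placeholders for our actual setup):

    #!/bin/sh
    set -e
    release="/var/www/releases/$(date +%Y%m%d%H%M%S)"
    cp -r ./build "$release"                      # copy files to a fresh folder
    ln -s "$release" /var/www/current.tmp
    mv -T /var/www/current.tmp /var/www/current   # rename() is atomic
    sudo apachectl graceful                       # pick up the new docroot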


+1 on screw the absolutely necessary caching. :)

I couldn't agree more that too much tooling can abstract away what is usually simple - just serve some HTML. I think that the whole API economy and saas/paas stuff really has to be evaluated carefully. You have business considerations around lock-in, time to integrate vs. time to build your own, etc. I think they work really well when you're building something simple, but there is a range of site sizes where they are more of a hindrance. The decision to use a service should be about what it gives you, not because it is cool.

Aside: I have totally been that engineer that has made something "clever". I am sure there are other engineers that curse me for what I thought was a great tool b/c I looked at the site for 0.1 seconds (sorry!).

I really wanted to address the talk about static.

Let's take the instance of a blog (like any of the heroku/rails tutorials out there). Yes, you must have a canonical place for the copy to live, be it in a DB, flat files on disk, or your git repo. But you don't need the actual request to go to the origin for that info and then jam it through some jinja/unicorn/etc. template just to render a silly article to the end user. When you write that article, you know what the page is going to look like -- _why dynamically generate it_? This is the way static can work: generate all the versions of the content and rely on JS to do magic on the frontend (https://www.destroyallsoftware.com/talks/the-birth-and-death...). That removes the whole call back to the origin DB for what is essentially static content. It's obviously faster than a DB query + template render + network traffic, as well as more secure: it's just an HTTP GET, a hard exploit vector.

Now, does this extend into the arena of apps (React, Angular, newest fanciest JS framework)? The actual assets are also static, no? They should be served exactly the same as the HTML we have. Then it is up to the JS to query whatever service/API you want and automagically generate some HTML.

The big thing is that services like WordPress/Drupal/Rails have made it very easy for people to build sites on a classic LAMP stack, but that is kinda flawed in a lot of ways. WordPress's plugin system essentially lets other people's code run on your servers. That is a dangerous game to play, all to do something that doesn't even need a server in the first place. Why risk it when you don't need to? And you get some nice improvements if you don't. People shouldn't even need to know what a LAMP stack is to make their business site.

Now is this approach right for every site? Nopezzzz. I don't believe in silver bullets, but there are a lot of sites that fit this mold. And it is a different approach to building your site out.

Either way - sorry to hear about Capistrano. Shell scripts ftw (though I have some that are terrible out there too).


> Because you actually want to use something like the SHA-1 of the content (better cache hits). Again, you have an issue around the atomic behaviour of updating the site.

I don't see why. As soon as you use hashed names, you can upload all the new content without removing the old. Then either replace the HTML files or relink the main directory. Sure, if you're using some CDN that doesn't recheck very often, you need to invalidate the pages themselves - but that's true regardless of how you host.


I do understand the sentiment (my own first thought upon reading the title was "This is something I would've imagined reading 20 years ago!") but apparently it's kind of a fad nowadays to write your blog posts in Markdown, in a versioned repository, and use these tools to publish them to the server.

FTP isn't nearly as cool today as it was when I published one of my first static sites circa 1997 (and that is truly a "static" site -- it hasn't been updated since 1998). Why do things the easy way?

Everything old is new again, I guess.


GitLab Pages does support uploading your own SSL certificate, unlike GitHub: https://about.gitlab.com/2016/04/11/tutorial-securing-your-g...


I've tried GitLab Pages out and love it. I don't understand why I'd pay $9 per month for a static site when there's a myriad of options that are free and great :(

GitLab even automatically builds your site with any static generator you want and deploys it.
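
For anyone curious, a minimal .gitlab-ci.yml pages job looks something like this (assuming Hugo and a runner image that provides it -- adjust for your generator):

    pages:
      script:
        - hugo
      artifacts:
        paths:
          - public
      only:
        - master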


Netlify Pages is our free tier, and it offers a few things that GitLab Pages doesn't. For instance, we build merge requests for you and integrate them with commit statuses. So you get a real development workflow for your websites.

Check these blog posts out: https://www.netlify.com/blog/2016/07/20/introducing-deploy-p... https://www.netlify.com/blog/2016/08/30/introducing-deploy-c...


Did you guys change it recently? I remember it being $9/mo the other day for a custom domain, which is pretty much a prerequisite for everyone. Your current pricing is much more attractive (of course, it's free)!


Hi StavrosK,

You're absolutely right :) We did change it recently. Now Netlify Pages (the free plan) includes a custom domain and HTTPS. It also gives you Continuous Delivery. Again, completely free of charge.

Lastly, any open source project gets the Pro Plan (normally $49 per month) for free as well :)


This whole things feels like an ad.


Also submitted by a "Customer acquisition consultant to B2B tech companies" (info in the profile), and 2 Netlify people are here to respond to questions within 20 minutes of posting. I could be wrong, maybe it's just a complete coincidence, or maybe gk1 did submit a real/"natural" post for Netlify. But it does sound like an ad that misses quite a few well-known hosts almost on purpose.

I mean, the 3rd link for "static site hosting" on Google is Amazon. (1st is Netlify.)


Also suspicious is that the blog only has three short posts, the oldest of which is in June. Two of the posts are on static sites, one of which is the one in the OP.

There are also multiple issues with the Projects section on the author's homepage, including multiple sites with descriptions of "This is an example description," and one broken link.

EDIT: Looking at the OP's submission history, he has 6 submissions about or related to Netlify in the past 30 days.


What's the policy for dealing with abusers? (Imagines poles and fires...)


Previously, when I've noticed flagrant violations (e.g. HN account that has only ever submitted articles at example.com, never commented, participated, etc.), I've e-mailed dang (hn@) and those accounts have been taken care of pretty quickly.


AWS S3 with static website enabled. Put a Cloudfront distribution in front of it. The whole thing won't cost you even $1 USD/month. Infinite scaling. Great tooling.


Let me use something besides awful cloudfront and I would so do this.

As it is, I just run it in my amazon VPS that I have for other things anyway.


You mean amazing, super-scaled CDN Cloudfront with support for free SSL certificates, S3 bucket origins, API-based invalidations with S3 deployments, and so many more features than I can mention here?

I'm sure you meant that.

Edit: we host all our sites - main site, engineering blog, press kit, and so on using Cloudfront in front of S3, and not only is it all-the-way fast (which we get comment after comment on) but it's cheap as all getout. https://tech.flyclops.com/posts/2016-04-27-flyclops-sites-st...


CloudFront is good. We use it for large assets. After managing thousands of sites, we realized that it's actually a hassle to manage cache invalidation yourself.
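
For reference, doing it by hand means a call like this per deploy (the distribution ID is hypothetical):

    aws cloudfront create-invalidation \
        --distribution-id E1EXAMPLE \
        --paths "/index.html" "/css/*"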

We work with several cloud providers to offer a better experience at the CDN level and manage caches for you. Our CDN also allows proxying end-to-end encrypted connections, so you can use it to host front-end apps and redirect requests to backend servers somewhere else.


I put it up on our blog, but many S3 deployment SaaS operations or even self-hosted solutions will invalidate CDN caches automatically based on files deployed to S3.

We use http://deploybot.com but there are many others.


I use s3_website (https://github.com/laurilehmijoki/s3_website) to deploy to S3 and invalidate in CloudFront.


You mean the awful gatekeeper of the Internet that soft-blocks Tor and causes problems with legitimate traffic (Tor or otherwise) across the world? Yeah, that one.

Edit: As informed, I was confusing cloudfront and cloudflare.


You do realise CloudFront (https://aws.amazon.com/cloudfront/) and CloudFlare (https://www.cloudflare.com/) are not the same?


You are confusing Cloudfront and Cloudflare.


For a static site, why would you need something other than CloudFront?


Yeah, something that doesn't often block legitimate traffic would be wonderful.

Edit: As informed, I was confusing cloudfront and cloudflare.


You are probably talking about CloudFlare. CloudFront by default won't block any pageviews.


I'm from Netlify. The point of the platform is that it's like 9 services in one: free SSL that renews automatically, free hosting of custom domains, and a specialized CDN that does instant cache invalidation even while highly cached, plus atomic deploys, continuous delivery, prerendering, form handling, and much more.

Perhaps you are thinking of static sites as merely tiny sites that see little or no updating, don't need much of a workflow, and offer few features. There are solutions that are great for a small personal blog or a prototype. But if you are looking to do commercial sites -- from smaller sites to large enterprise sites with hundreds of contributors, multiple languages, and tens of thousands of HTML pages -- then a viable and performant workflow is needed. This is what Netlify does: it integrates and automates all the services that go into deploying and maintaining a commercial-grade modern static site.


...that sounds like advertising copy to me. Perhaps you would like to disclose whether you are, in fact, a co-founder of Netlify?


He did say that in his first sentence, to be fair.


Is that your elevator pitch?


Surely HN knows how to host a static site without needing advice from the blogosphere...


The term 'static sites' is somewhat misused; it's a spectrum that goes from static pages edited locally that get 1 page-view a month to the front-end code for large corporations that gets millions of page-views a month. All of those fall under the modern usage of the term 'static sites'. As you move up that spectrum your static site suddenly needs to get built using a bunch of tooling, and it's dynamic in the sense that it interacts with many backend services via JS APIs, etc.

If you have a single HTML page which you edit manually and enjoy uploading to S3 using Transmit, then sure, that's a legitimate workflow, and I used it for years before migrating all of my sites to Netlify. Once your front-end needs its own build process there's a huge benefit in utilizing a service like Netlify to run your builds for you. This also gets you into the workflow of using source control for your front-end (you're either doing it already or editing HTML files locally), and it's just so convenient when your commits trigger instant builds+deploys - one less thing to worry about. In fact, Netlify's new Deploy Previews and Deploy Contexts, which build+deploy as many of your branches as you'd like, are enabling completely new workflows that genuinely help teams scale their capacity, because they spend less time on the mechanics of everything.

Like some have mentioned, Netlify is like 9 tools built into one service taking care of you all along the way, and of course there's a free plan which beats GitLab/GitHub Pages every day of the week, because that's what the company is set up to do; it's not just a side feature they are maintaining. Netlify serves small personal blogs and the main sites of billion-dollar companies, and spends its resources further developing tools to make developers' and devops people's lives easier.

So once you commit and Netlify builds, it also does atomic cache invalidations, deploys to a CDN, and offers integrated pre-rendering, form handling, password protection, snippet injection, and many more features to make life simpler. Can you do all these things manually on your computer or on a VPS? Of course you can, but are we developers lazy, and do we enjoy tooling and services that let us spend time coding as opposed to devops'ing what others have already commoditized? Yep. If you prefer your own Git installation and don't see the benefit in using GitHub/GitLab/Bitbucket/etc., then Netlify is likely not for you.

Disclosure: I'm a Netlify investor and avid user.


If you need more than static hosting, there are quite good VPSs that are very cheap. I regularly read a blog called LowEndBox which lists cheap VPSs (you can easily find it using any web search engine; I don't want this comment to be an advertisement, even if the whole point of this thread seems to be exactly that).

For instance I recently purchased small VPSs at $11/yr (yes, per year) with the following specs: 1 vCPU, 768MB ram, 15GB disk, 3GB monthly bandwidth, 1Gbps link, 1 IPv4, 30 IPv6 (and "DDoS protection" whatever that means).

I use them, for example, to host various copies of my website (as a Tor hidden service (.onion), as an eepsite (.i2p), on IPFS, and on ZeroNet). I also have one that I use only as an MX backup, since I self-host my email.

None of this would be possible even on multiple-dollars-per-month static hosting offers, or even on AWS as some suggest here, and yet it costs me less than a dollar per month :).


Good luck, but those VPSs are usually offered by fly-by-night companies, massively oversubscribed, running OpenVZ, and the company probably won't even be around a year from now.

There are decent providers on there but caveat emptor.


That may be a risk, yes. But we'll see. For now the service is great and support is quick to answer tickets :).

What is so wrong with OpenVZ? (Which is indeed the technology at use.)


To be honest, I hadn't heard of any of these static hosting providers. I've used S3/Google Cloud Storage with Cloudflare to get SSL. For low traffic sites this setup is effectively free and very easy to set up.


Hi,

Netlify CTO here.

It's great that a combination of those services works for you. Our initial tier is completely free, $0 -- no costs for traffic or storage, unlike S3.

Please, feel free to send questions and I'll be happy to answer them.


Did you pay to have this piece written? It's suspicious that the author is so in favor of an overengineered solution for something as simple as static site hosting.


I use nearlyfreespeech for most of my hosting.


I moved an eight-year-old PHP wiki I didn't want to pay $10/mo for there, and I've been quite impressed. They're very good and very cheap!


I use it to host my personal site, and I'm currently hosting a Django project on NFS as well. Paying less than $1/mo for both.


Static site? Self host from home.


If you live in the right location, have enough bandwidth, and are sure you can properly isolate the host on your home network - great. Otherwise it's not the best solution.


Do you use dynamic DNS or pay for a static IP? The first seems like more trouble than it's worth, and the second more expensive.


Comcast changes my IP about once per year. I notice within a half day or so and manually update the A and AAAA records for my dot com, publish an update for namecoin domain, and my Tor address doesn't need updating. I've been self-hosting for almost two decades now with no problems.
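
If you want to automate the noticing part, a cron-able sketch (the checkip endpoint and mail address are just examples):

    #!/bin/sh
    # alert me when the home IP changes
    current=$(curl -s https://checkip.amazonaws.com)
    last=$(cat ~/.last_ip 2>/dev/null)
    if [ "$current" != "$last" ]; then
        echo "$current" > ~/.last_ip
        echo "Home IP is now $current; update A/AAAA records" \
            | mail -s "IP changed" you@example.com
    fi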


Yep, the cloud is overrated. I have my static site running on a Raspberry Pi, works great. BTW, Google Domains has great DDNS.


For a static site, can't you use any shared hosting and have 1000 sites for $3/month? Not sure I follow the logic behind the article.


I would have thought so too. Then you can get your simple deploy by adding your provider to ~/.ssh/config and setting an alias for

    scp -r ./public_html sharedhosting:
in your shell rc file... seems much easier than all this! You can even stick `gulp build && ` at the start of your alias if it so pleases you!
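
Concretely, something like this (host details made up):

    # ~/.ssh/config
    Host sharedhosting
        HostName shared.example.com
        User myuser

    # in your shell rc file
    alias deploy='gulp build && scp -r ./public_html sharedhosting:'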

Then again, if CloudFront + S3 can indeed come in at under $1 per month, then I can see why you'd go down that route. I'd like to see what kind of sites you can host for that cost.


Yes, I'll definitely try CloudFront + S3 for my next projects, to see how it could help. The only plus for shared hosting is that I offer free shared-hosting email with any hosting for my clients. Email is getting to be an expensive commodity.


What about Neocities? They provide HTTPS for every site, even without paying. I don't need it because I have my own servers, but I really think what they do is great. For example, each website they host is archived on IPFS :).

https://neocities.org/


https://firebase.google.com/docs/hosting/ also provides a free tier of static assets hosting


+1 on setting up an S3 bucket, a CloudFront distribution, SSL via SNI, Route 53 rules, and a simple "aws s3 sync" command with some cache headers.

Initial setup might take an hour but further deployments are pretty instant.
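
The recurring deploy really is one line -- a sketch, with a hypothetical bucket and a max-age you'd tune to taste:

    aws s3 sync ./public s3://example.com --delete \
        --cache-control "public, max-age=3600"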


s3_website is just about the easiest blog deployment system ever.



