A Starter Kit for Emergency Websites (mxb.dev)
323 points by mxbck 16 days ago | 108 comments



What we did for tacticalvote.co.uk:

1. A static site generator, with markdown as the source input in Github

2. Data from Google Sheets

3. A bash job on a cron that would poll both for changes... if changes exist, re-publish the site or data and purge Cloudflare cache using their API

4. Configure Cloudflare via Page Rule to Cache Everything

Even with a very high change rate and hundreds of thousands of visitors a day and severe traffic spikes... the site was instantaneous to load, simple to maintain and update, and the cache purge stampede never overwhelmed the cheapest Linode serving the static files.

The content editors used Github as the CMS and edited Markdown, or just updated data in Google Sheets. Changes were live within 5 minutes.
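
The cron job was conceptually something like the rough sketch below - not the actual script; the build command, paths, ZONE_ID and CF_API_TOKEN are placeholders, and the Google Sheets polling is left out:

    #!/usr/bin/env bash
    # Rough sketch of a rebuild-and-purge cron job.
    set -euo pipefail
    cd /srv/site

    # Poll the GitHub repo; bail out quietly if nothing changed.
    git fetch origin master
    if [ "$(git rev-parse HEAD)" = "$(git rev-parse origin/master)" ]; then
        exit 0
    fi
    git merge --ff-only origin/master

    # Rebuild the static output (whatever generator you use).
    ./build.sh --out /var/www/html

    # Purge the Cloudflare cache so the "Cache Everything" page rule
    # starts serving the fresh files.
    curl -sS -X POST \
        "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache" \
        -H "Authorization: Bearer ${CF_API_TOKEN}" \
        -H "Content-Type: application/json" \
        --data '{"purge_everything":true}'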


I have something similar set up for a site I run using Jekyll and Google Cloud Build. The site is hosted on a free-tier VM and short builds (e.g. 2 minutes) are free unless you're doing hundreds a day. I use a Docker container to generate the site, which is much faster than installing Jekyll every time. The site isn't big, so a short fix is pushed live within about four minutes.

The builder just rebuilds when someone pushes to master and then scps to the VM. We've been trying out Forestry.io (linked to GitHub) as a management client so that non-technical authors can add content. It works to a point, but there are odd things: Forestry has poor support for media that's not an image, and it doesn't have a concept of folders, so everything gets thrown into "/media", which I hate. Also, because it's using git as the database, it commits every time you save, which of course triggers a build. So if there were a way to add releases in Forestry, that'd be ideal.
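
The build step is roughly the following - just a sketch, not the exact Cloud Build config; the image tag, VM host and paths are placeholders:

    # Build the site with the jekyll/jekyll Docker image so nothing has
    # to be installed on the builder, then copy the output to the VM's
    # web root. "deploy@my-vm" and the paths are placeholders.
    docker run --rm -v "$PWD:/srv/jekyll" jekyll/jekyll:latest jekyll build
    scp -r _site/* deploy@my-vm:/var/www/html/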


That sounds like a really nice approach. Out of interest what does that sort of traffic end up costing on the Cloudflare side - presumably that's where the cost ends up here?


It's a free account on Cloudflare.

Cloudflare do not charge for bandwidth... and so it's free.

This whole setup is $5 per month for the hosting, though we do use Github personal and that is $7.00 per month.


That’s a pretty amazing setup for the traffic and press you were sustaining, nice!

Interesting. Have you made your source available?


People are missing a fundamentally important point. You CANNOT trust freemium services to prioritize your traffic at busy times, because they will always cut back free users, especially if it's interfering with the service they are providing to paying customers. This includes Google Sheets, Netlify, Cloudfront, AWS, EVERYBODY.

To create truly fault-tolerant services you CANNOT assume a freemium service will go out on a limb for you during a critical time.


Most people will run out of money (or the willingness to spend it) long before a freemium host would have given out.

Also, for anyone building a site they aren't directly responsible for, getting payment details in a crisis is effectively impossible.

Lastly, don't be so sure services like Netlify wouldn't help if you asked. They often do.


Seriously. A single web server with no dynamic content will handle a TON of traffic.


It's really the bandwidth that'll cost you :-)


Most web hosts and VPS solutions have unlimited traffic no?


No, most VPS start charging extra if you exceed their limits.


I get between 5 and 20 TB out per month, depending on host. That’s a loooot of 14kb page loads.


You must own your source of truth: if your Sheet is unavailable, you can't republish somewhere else.


Or just create a static HTML file and put the damn thing in an S3 bucket behind a CloudFront proxy.

That site is virtually guaranteed to never go down even with insane amounts of traffic (plus it's edge-optimized, so a user in New Delhi won't be sending requests to your server in Los Angeles).

edit: The whole setup takes less than 2 minutes and can even be automated with 2 AWS CLI commands.


That's pretty close to what Netlify is doing behind the scenes, isn't it? Why would configuring it yourself be better? Netlify has way more experience at that than most people.


I haven't used Netlify because I've been using CloudFront since the day it launched many years ago. They give me free SSL, charge me something like a dollar for it, and have been ultra reliable forever.

I'm quite happy with the rest of the AWS architecture too, e.g. SES for sending mail, Lambda for serverless, etc., so I like to stick with them. The AWS CLI is also quite powerful and I'm able to set up the whole thing from scratch with a single bash script.

I'm sure Netlify must have simplified the process or made it easier and could be another great option.


Netlify likes to go down unless you are paying for the higher tier. They also charge for bandwidth after 100 GB ($20 for another 100 GB, which is basically stealing money compared to what 100 GB is actually worth).

I would not recommend Netlify if you have other options.


I have more than 30 sites on the free tier of Netlify and I've never known it to be down.


This.

It would be nice to fork the project and do something similar with CloudFront. Any static object data can also be fetched from CloudFront as JSON files and periodically updated by cache invalidation or cache expiry dates (i.e. cache for 5 mins).
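
For example, the short cache lifetime can be set when uploading the object - a sketch, with made-up bucket and file names:

    # Upload the data file with a 5-minute cache lifetime so CloudFront
    # and browsers re-fetch it periodically; names are placeholders.
    aws s3 cp data.json s3://my-bucket/data.json \
        --cache-control "max-age=300" \
        --content-type "application/json"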


You lost most municipalities with this. Yes, even this is too complicated.


care to write a little howto?

Thanks!


This isn't written by me, but it looks like a nice step-by-step tutorial for the same thing (the reason it's so long is that he covers everything, including signing up for an Amazon account, but the process is very easy and quick):

https://medium.com/tensult/creating-aws-cloudfront-distribut...

Another benefit of this is that you get a free SSL certificate from Amazon that virtually never expires. The pricing is also on-demand and very low (you only pay for the bandwidth you use, which is pretty cheap too).

The commands to re-deploy your site would be the following (assuming the AWS CLI is installed):

    aws s3 sync ~/your-site/ s3://bucket-name
    aws cloudfront create-invalidation --distribution-id ID --paths "/*"
(sorry these commands are just from the top of my head, I couldn't double check)


I wrote a post on how I did it for my site:

https://ethanaa.com/blog/conversion-to-static-site-with-vuep...

There are follow up posts on CI / CD and search.


The simplest solution is to make a static copy of your dynamic website using wget, and then publish that. I did this for WordPress sites, for example; it works really well and is very reliable. The process can be triggered via a cron script or manually (I wrote a small WordPress plugin for it). No special hardware, infrastructure or cloud services required. Just make sure all resources are reachable via a link (so wget can find them), or manually point it at a list of otherwise unreachable files (e.g. using a sitemap or a .txt file).

The advantage is that you can still use your existing CMS, so your staff won't need to learn a new system, and you also don't need any third-party cloud services.

Actually, if your CMS is properly configured (e.g. correct cache headers) you can also simply put it behind a CDN like Cloudflare, which will handle the caching and scaling for you.


Any chance you could share the script / wget settings you are using for wordpress?


Normally you can just use

> wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.org

This should make a full copy of your website (source: https://www.guyrutenberg.com/2014/05/02/make-offline-mirror-...).

I don't have the source of the Wordpress plugin anymore unfortunately.
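
If you want to run it from cron, a minimal wrapper around that wget command might look like this - an untested sketch with placeholder directories:

    #!/usr/bin/env bash
    # Mirror the CMS into a working directory, then sync the result
    # into the static web root only if wget succeeded.
    set -euo pipefail
    cd /srv/mirror
    wget --mirror --convert-links --adjust-extension \
         --page-requisites --no-parent http://example.org
    rsync -a --delete example.org/ /var/www/static/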


I like the idea of this but the process required to deploy it needs to be more carefully thought out. As it is, this currently assumes that any local agency or whatever contractor it works with will:

* Be familiar and comfortable with npm

* Be familiar and comfortable with netlify

* Be aware of this as a possible option

The bar for these folks is pretty darn low. A lot of organizations end up contracting with individuals or organizations not because they're up to speed on modern web development but because they knew them from somewhere else.

Setting this up as some kind of hosted service would probably be a good next step.


Yep. This is a good tool written by a developer who assumes everyone else uses the same tools he does.


I agree, I'd love to see this as a docker container or Makefile where dependencies are automatically pulled in if not present on the host machine.


I think even Docker might be a big ask. I mean, my experience so far has been that a lot of the people that this is targeted towards are still using platforms like Wordpress, Drupal, and Magento (all php...). Heck, git adoption still isn't 100% in this market; I just recently received sftp credentials from an agency that works in this niche.


Command line would be the only necessity. You can write a script that could download docker & run it. That's what I meant by Makefile too.


I'd say CLI access is an even higher bar. Even in most medium-sized companies, the website people would never be able to get shell access in a reasonable amount of time, if at all, as they might be using web or CMS hosting.

Scale this up to a government organization, and the chance of it happening is basically zero, especially in an emergency. And even if they host their own web servers and manage to get access to them, the chances of them being able to run Docker or really anything besides what they were set up for without unreasonable effort are slim at best.

I'm not saying Netlify is a good solution, but it's one that a single creative tech could figure out and set up in a day and would be almost guaranteed to work well.


Agreed. Think something like sheet2site.com, but even then it requires one to be familiar with spreadsheets.


I'll add two more points:

- Remove all non-essential scripts: ads, analytics, fonts, social, liveperson, disqus, truste, foresee, cookielaw, etc.

- Scale down or omit images
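
For the image point, a one-liner like this covers most of it (requires ImageMagick; the size and quality numbers are just examples):

    # Shrink anything larger than 1200px, strip metadata and recompress;
    # edits the JPEGs in img/ in place.
    mogrify -resize '1200x1200>' -strip -quality 70 img/*.jpg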


Not to be "that guy", as I mostly like modern Web design, but:

- and after the emergency is over, keep it that way


Sometimes I contemplate launching a copycat conglomerate like Rocket, but instead of international copies of American and Chinese ideas it would do fast, minimal bandwidth & js clones with real privacy. The main problem is that network economics are a hell of a moat.

The most bootstrap solution I could think of: a link dump with Google Sheets as the backend, rendered as static HTML with a few extra formatting options. Quick and dirty, but something most people are able to edit without needing to get used to a new GUI or create new accounts.

We then have a Google Form for public submissions that feeds the same sheet.

The document is 'published to the web' as CSV, so there's no need to use the API or register an app.

Example: https://buletin.de/InitiativeDeAjutor/ https://github.com/pax/covid19ro-linkdump
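
Pulling the published CSV is a one-liner, so the whole publish step can be a tiny script - a sketch, where the spreadsheet ID and render.sh are placeholders:

    # Fetch the "published to the web" CSV (no API key needed) and
    # regenerate the static page. PLACEHOLDER_ID comes from the
    # File > Publish to the web dialog; render.sh is hypothetical.
    curl -fsSL \
        "https://docs.google.com/spreadsheets/d/e/PLACEHOLDER_ID/pub?output=csv" \
        -o data.csv
    ./render.sh data.csv > public/index.html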


Thank you for sharing this project! This could work nicely also for people wanting to move their business online. Thanks!

Am I old school if I'm just asking for plain HTML, CSS3 and using SFTP to upload the stuff?

I don't get why it's cool to use NPM, a static site generator and netlify for an emergency website.


If you already have a NodeJS person, it would be far better than having to set up a static-optimized web server from scratch. You can't just replace your big fat Drupal (or worse) site's web root with a bunch of HTML files and have it perform as well as a static-optimized server.

And even if you don't have anyone on-hand, it is, unfortunately, cheaper, easier and faster to learn all this crap and deploy to Netlify, than it is to set up a solid Web server from scratch (or even get good enough "just a folder" web hosting).


What’s wrong with just plain HTML and Nginx or Apache?


Same question. Netlify CMS and Eleventy seem like unnecessary complexity here. Apache or Nginx web servers can be set up with a one-click install on most web hosts.
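
For reference, a static-only nginx config really is tiny - something like this sketch, where the conf path, web root and 5-minute TTL are just examples:

    # Write a minimal static-only server block and reload nginx.
    sudo tee /etc/nginx/conf.d/emergency.conf > /dev/null <<'EOF'
    server {
        listen 80 default_server;
        root /var/www/emergency;
        index index.html;
        expires 5m;
        location / {
            try_files $uri $uri/ =404;
        }
    }
    EOF
    sudo nginx -t && sudo systemctl reload nginx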


Well if you are looking at unnecessary complexity, there are simpler static HTTP servers than nginx and Apache. (-:


Seeking out relatively unknown and unsupported HTTP servers itself becomes unnecessary complexity.


You are making the error of thinking that everything apart from nginx and Apache has to be "sought out". That's simply narrow minded, bespeaking a lack of wide experience, and certainly not at all supported by even looking at Debian's or Arch's package repositories let alone at (say) the FreeBSD ports collection.


Yes, being narrow-minded is sufficient for removing the complexities necessary to host a static website.

It's a non sequitur to say that since I don't endorse using other servers in this instance, I must lack knowledge of them. It's not about me or you; it's about the lowest common denominator of technical person necessary to maintain a static website.

Following your path, the complexity becomes contemplating why one should use lesser-known web servers instead of what one is more likely to be familiar with.


And now you are trying to squirm out of what you said.

You were not merely declining to endorse them; you were claiming that they had to be sought out and that that was unnecessary complexity. It is no more added complexity to install some other Debian/Arch/FreeBSD/Fedora/whatever package than it is to install those operating systems' nginx and Apache packages, and saying otherwise is really looking for any excuse to reinforce an existing narrow-minded prejudice rather than a real evaluation of complexity.


> Following your path, the complexity becomes contemplating why one should use lesser known web servers instead of what one is more likely to be familiar with.

Read what I wrote again. "Seeking out" is not limited to just choosing a different package, it includes the cognitive costs of exploring an option that is hitherto unknown to oneself.

You only think I'm squirming because you are qualifying what "seeking out" means to a narrow view, ironically.

How would one who is unfamiliar with those "simpler HTTP servers" even know the fact without diving into comparing the complexities of Apache and nginx against the other options? Why dive into that complexity if one is already familiar with Apache and nginx, in this use case (I'm sure you'd agree that the average person tasked for this would be more likely to be familiar with those)? This is the point you're missing.


Not fancy enough to write a blog post about.


I've been using exactly the same approach for spinning up local COVID-19 support groups around London; one nice example is at https://holloway.coronacorps.com/. I really like the branch-to-subdomain mapping feature they offer.


Let's pretend we are not web developers for a second. How do I build this website if I don't know what static page means?


I don't think this is the sort of thing a non-web developer should try to tackle on their own. If you are building something for emergency information, there is a lot of risk (loss of life?) if you can't get your message out.

Hire someone who knows how to build a scalable website. This isn't a horribly hard problem, but it's easy to make a mistake.


I don't think a developer should be required. This is the perfect time for the Squarespaces and Wixes of the world to offer minimal templates that do the job. They already have the infrastructure, and have bombarded people with ads for websites.


This is a great question. Usually web developers aren't in charge of these projects, local city and state officials are. We should assume they're not technologically inclined. How can the information presented help them?

I wonder if there's room for a startup here, automated offsite emergency pages for town and city officials to use to quickly publish information.


> progressively enable offline-support w/ Service Worker

I'm a bit confused about this point. If it's a basic static site why would this be needed?


So that you can revisit the information even in the event that you lose internet access.


The browser cache was supposed to handle this. Kind of stupid that you have to implement your own cache in JS now cause the browser can't be trusted.


Interesting, I always thought it would have served the cached page for some reason


Kinda funny that this modern tech needs to be used to read a plain html page offline.


In an emergency context where vital information is likely to change often, this seems a little strange. Perhaps with a little (opt-in?) funding, SMS could be used - surely that would be more available than the internet.


A Service Worker would give access to what could be important information vs no information. Given the nature of an emergency site it probably means a heightened chance of people losing internet availability.

If there is an updated version a Service Worker can check for that and pull it in if there is a connection.

Whilst it's imperfect to have potentially out-of-date information, that only happens if the person has no internet and the information has changed since they accessed it - I think it's worth the trade-off versus people not having any information at all.

For critical things, SMS probably makes more sense, but I'm not sure that is what they are trying to solve here.


SMS is usually displayed to the user as a linear chat history, and may not work well enough for large and complex content.

If vital information changes, it’s incredibly difficult to consolidate (where in the SMS thread was the latest update on topic XY again?)

SMS also can’t use pictures, requires stateful server infrastructure, is not easy to bookmark, is irretrievable when deleted, and can’t be shared as quickly as a URL.


I think offline access is really useful. Using timestamps and letting users know they're offline will let them determine how stale the information they're getting is.


I mean, call me crazy, but if this is for an emergency, wouldn't it make more sense to provide a small archive with the source files in it, with no third party at all?

It requires an unarchiver and a text editor.


Way above the technical skill level of a lot of people.


Provide how? From a webserver? Back to square 1.


Web seeded torrent?


> When it comes to resilience, you just can’t beat static HTML.

We don’t actually need HTML for every case. For even more resilience, we could just push text files with markdown-like formatting characters that people might understand to give the content some hierarchy and emphasis. This would be just content and content alone. Obviously, this wouldn’t be appropriate for all use cases, but if you’re just sharing updates, it could (depending on other factors) be a simpler solution to implement.


Modern browsers are really shitty text viewers, especially mobile browsers. They have unreliable scaling and line wrapping, and are generally piss poor at the task. An HTML document with zero CSS will be displayed far better in browsers. A couple of CSS statements in a style tag can make that basic document easily readable and even good looking.

With HTML you have the ability to link to other sites, documents, or anchors in those documents. This makes navigation much simpler than shittily displayed plain text. You're also less likely to have your document mangled by the browser like with plain text.

Also remember just about anyone with a smartphone knows how to navigate the web (links, back buttons, etc). If you break those UI paradigms by sending them plain text documents you've made it harder for a good percentage of the population to effectively use that data.


https://emergency-site.dev/posts/2020-03-22-example-post/

I would provide much less background information here. "What you need to do" should definitely not be below the fold. You should boil it down to:

1. This is happening.

2. This is what you personally need to do about it.

Then add whatever else you want after that.

I know this is just an example, but it should set a good example.


As a technical writer, if push comes to shove, I would recommend leading with what you need to do first and foremost, and then going into detail about why as needed. You should optimize your content for the audience that is already convinced and just wants to know what they need to do in order to stay safe. You can follow up with the information explaining why they need to do this, for the secondary audience of people who still need convincing.

In other words, if you think about it in terms of the typical inverted pyramid model of journalism (which many here on HN already know about), what you need to do is the most important information, why you need to do it is secondary. That might be debatable to some people but that's how I view it.


IIRC (from taking journalism in high school), journalistic standards are:

Tell the most important info in the title as briefly as possible.

Repeat the most important info with a little more detail in the first paragraph.

Repeat your main point and add more details in additional paragraphs. You should be able to cut out the final paragraphs (or not bother to read them) without losing any actually critical information.

Each additional paragraph should add new information, but not be essential to the main point of the piece.

Answer: who, what, when, where, how and why.


Can someone explain why I would need a static site generator instead of just writing what I want in html in the first place?


For me, maintaining common elements across pages is a reason not to use "just HTML".

It's error-prone to make changes to the header and footer, for example, across all pages if you have lots of them.


Wouldn't the people putting out emergency website likely be local governments? If so, wouldn't there be hurdles to using cloud providers like Netlify?

How do local governments usually host their websites?


I work with some local governments in cloud consulting. They use different clouds the same as anyone else.

We actually see a lot of local governments (cities or metro agencies) doing things in AWS.

I'd say the likelihood in my experience of them grabbing this kind of thing and deploying it is probably pretty low. The ones I work with would leave that up to a contractor.


Great insight, thank you!


I work with a few thousand local government entities, and they rarely have concerns with cloud hosting. They do have concerns with regulatory compliance, which means 100% accessibility on anything they publish. They also need to meet any public records laws in their jurisdiction. This tool seems to be fine for accessibility, and Netlify has a history of deployments so they have an audit trail of the public content... so I don't see anything here that would be problematic.


Not on Netlify.


Interesting idea but the complexity here is still way too high with a lot of dependencies. Realistically couldn't we just host txt files in an emergency?


I'm a lil surprised at the speed!

Also, a cheap little addition would be showing something like "you're viewing this page: online/offline, last refresh: now/two days ago", and possibly either a button or an automatic popup for the "add to home screen/desktop" PWA prompt...


Apparently I missed the PWA button, that's my bad - I initially viewed it in an HN wrapper app.


Very neat. Those minor details in the spec are probably what we easily forget about. I didn't see a mention of a CDN on the front page, but that could be another good must-have to add to the list to improve resiliency - even just a free Cloudflare setup in front.


Well, it specifically mentions Netlify, which is already a CDN, but good idea anyway.


Netlify uses a CDN by default ;-)


What am I missing?

A highly performant web server like nginx.

Static html content.

Done.


A way for government employees who don't know what a command line is to edit it.

Fault tolerance in case something happens to your web server.

A way for people to read it offline when their internet connection goes down (presuming they haven't saved the HTML).


I disagree that this approach is any better than standard static HTML and nginx. In an emergency situation I would want to reduce the room for error and keep everything as simple as possible. A CMS is total overkill for this. There is nothing wrong with having a developer write HTML based on a Word document given to them. If it were a true emergency, that developer resource would be dedicated to doing this. Hell, Word can export documents to HTML anyway, so that is half the battle already won. This article just seems gimmicky to me.


It's adding a load of unnecessary tools and services to address a relatively simple process.

People forget how simple the web can be.


Other than service workers, I don't see how this article handles points one and two. My answer there is: use a CDN.


If it's all client side, S3/CloudFront and you're done. Costs pennies.

You can go super jank using S3 only with simple sites, but realistically you're going to want TLS at some point.
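
The "S3 only" route is about two commands - a sketch, where the bucket name is a placeholder and the bucket's public access settings have to allow public reads (website endpoint is HTTP only):

    # Enable static website hosting on the bucket and upload the files.
    # Served at http://my-bucket.s3-website-<region>.amazonaws.com
    aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
    aws s3 sync ./public s3://my-bucket/ --acl public-read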


CloudFront in front of S3 with a free ACM cert. Done.


Nicely done. Wish this was the starter kit for most websites.


I think this is a great idea, and I applaud making a high-availability, high-load site template available to others.

I've thought a number of times in the past few weeks that if they had just used some static pages on S3 behind CloudFront, or some kind of CDN, much pain could have been averted.

Of course the first thing I did was to benchmark the test site to see how their edge network performs. For reference I'm based in Melbourne, Australia, and have a 100mbps download, 50mbps upload connections:

  $ ab -n 10000 -c 100 https://emergency-site.dev/
  This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
  Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
  Licensed to The Apache Software Foundation, http://www.apache.org/
  
  Benchmarking emergency-site.dev (be patient)
  Completed 1000 requests
  Completed 2000 requests
  Completed 3000 requests
  Completed 4000 requests
  Completed 5000 requests
  Completed 6000 requests
  Completed 7000 requests
  Completed 8000 requests
  Completed 9000 requests
  Completed 10000 requests
  Finished 10000 requests
  
  
  Server Software:        Netlify
  Server Hostname:        emergency-site.dev
  Server Port:            443
  SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128
  TLS Server Name:        emergency-site.dev
  
  Document Path:          /
  Document Length:        4836 bytes
  
  Concurrency Level:      100
  Time taken for tests:   106.534 seconds
  Complete requests:      10000
  Failed requests:        0
  Total transferred:      53220000 bytes
  HTML transferred:       48360000 bytes
  Requests per second:    93.87 [#/sec] (mean)
  Time per request:       1065.345 [ms] (mean)
  Time per request:       10.653 [ms] (mean, across all concurrent requests)
  Transfer rate:          487.85 [Kbytes/sec] received
  
  Connection Times (ms)
                min  mean[+/-sd] median   max
  Connect:      713  808  30.7    803    1828
  Processing:   230  236   4.8    236     443
  Waiting:      230  236   3.9    236     310
  Total:        956 1044  31.7   1039    2067
  
  Percentage of the requests served within a certain time (ms)
    50%   1039
    66%   1047
    75%   1053
    80%   1057
    90%   1070
    95%   1082
    98%   1107
    99%   1168
   100%   2067 (longest request)
I know there's much better ways of testing load/performance. It's just what I had on hand.


> I know there's much better ways of testing load/performance.

such as

https://github.com/giltene/wrk2
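
A typical wrk2 invocation looks something like this (the thread, connection, duration and rate numbers are just examples):

    # 4 threads, 100 connections, for 30 seconds, at a constant rate of
    # 1000 requests/second, printing a full latency distribution at the end.
    wrk -t4 -c100 -d30s -R1000 --latency https://emergency-site.dev/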


Seriously, if you want something in an emergency, go to somewhere like weebly.com and set up a free site.


The one-click deploy to Netlify worked for me. Have used this to spin up a site listing crisis communications advice around Coronavirus: https://coronaviruscomms.netlify.com/


I searched for terms like IPFS and FreeNet in this HN discussion and surprisingly didn't find them. I think distributing to geographically localized p2p networks should be part of any system for emergency websites. An emergency need not necessarily be natural and unintentional.


I love distributed websites, but hardly anyone uses these things. I don't even know of any mobile apps for IPFS, Dat, or FreeNet. They should absolutely be more widespread, but crisis mode website operations need to focus on things the majority of users can access.


A README.md in most GitHub repos is already a statically published page, easy to author and easy to access. Even most PWA features can be added to this setup with just Jekyll configs. Am I missing something in the question?


A radically simple microblog for single-user laypersons publishing on their own property (yes, one needs to buy some kind of hosting, starting at around 2 EUR/month in DE): https://github.com/mro/ShaarliGo#install--update


But does it work with Netscape 3 and IE4?


Awesome, thank you!


Netlify CMS is based on React. In what universe could this be considered appropriate for a basic emergency website?

Edit: I don't really mind if React is only used as the authoring interface. But the consumer view should not require Javascript, and AFAICT Netlify CMS does.


Because the CMS is decoupled from the content, and installing Netlify CMS is very fast. I was able to coach an SMB owner into using it for their website in about 1h tops (after setting everything up myself).


You can view the demo without JavaScript and it mentions static site generation.


Netlify CMS is a headless CMS, you can display your content in plain text if you want. React is only needed in the authoring interface.



