
A Starter Kit for Emergency Websites - mxbck
https://mxb.dev/blog/emergency-website-kit/
======
buro9
What we did for tacticalvote.co.uk:

1\. A static site generator, with markdown as the source input in Github

2\. Data from Google Sheets

3\. A bash job on a cron that would poll both for changes... if changes exist,
re-publish the site or data and purge Cloudflare cache using their API

4\. Configure Cloudflare via Page Rule to Cache Everything

Even with a very high change rate and hundreds of thousands of visitors a day
and severe traffic spikes... the site was instantaneous to load, simple to
maintain and update, and the cache purge stampede never overwhelmed the
cheapest Linode serving the static files.

The content editors used Github as the CMS and edited Markdown, or just
updated data in Google Sheets. Changes were live within 5 minutes.
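
For the curious, step 3 can be sketched in a few lines of shell. This is a hypothetical reconstruction, not the actual script: the paths, zone ID, API token, and the choice of Hugo as the generator are all placeholder assumptions.

```shell
#!/usr/bin/env bash
# Poll the source repo; if upstream changed, rebuild and purge the CDN cache.
# SRC_DIR, OUT_DIR, CF_ZONE and CF_TOKEN are placeholders -- set them for real use.
set -euo pipefail

SRC_DIR="${SRC_DIR:-$HOME/site-src}"   # git checkout of the markdown sources
OUT_DIR="${OUT_DIR:-/var/www/site}"    # where the static files are served from
CF_ZONE="${CF_ZONE:-your-zone-id}"
CF_TOKEN="${CF_TOKEN:-your-api-token}"

# True when origin has commits we haven't built yet.
repo_changed() {
  git -C "$SRC_DIR" fetch --quiet origin
  [ "$(git -C "$SRC_DIR" rev-parse HEAD)" != \
    "$(git -C "$SRC_DIR" rev-parse '@{upstream}')" ]
}

rebuild() {
  git -C "$SRC_DIR" pull --quiet
  hugo --source "$SRC_DIR" --destination "$OUT_DIR"  # or any other generator
}

# Cloudflare's cache-purge endpoint; purge_everything clears the whole zone.
purge_cache() {
  curl -fsS -X POST \
    "https://api.cloudflare.com/client/v4/zones/$CF_ZONE/purge_cache" \
    -H "Authorization: Bearer $CF_TOKEN" \
    -H "Content-Type: application/json" \
    --data '{"purge_everything":true}'
}

# cron entry, e.g. every 5 minutes:
# */5 * * * * publish.sh   (where publish.sh runs: repo_changed && rebuild && purge_cache)
```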

~~~
mcintyre1994
That sounds like a really nice approach. Out of interest what does that sort
of traffic end up costing on the Cloudflare side - presumably that's where the
cost ends up here?

~~~
buro9
It's a free account on Cloudflare.

Cloudflare do not charge for bandwidth... and so it's free.

This whole setup is $5 per month for the hosting, though we do use Github
personal and that is $7.00 per month.

~~~
mcintyre1994
That’s a pretty amazing setup for the traffic and press you were sustaining,
nice!

------
tmpz22
People are missing a fundamentally important point. You CANNOT trust freemium
services to prioritize your traffic at busy times, because they will always cut
back free users, especially if it's interfering with the service they are
providing to paying customers. This includes Google Sheets, Netlify, CloudFront,
AWS, EVERYBODY.

To create truly fault-tolerant services you CANNOT assume a freemium service
will go out on a limb for you during a critical time.

~~~
nkozyra
Seriously. A single web server with no dynamic content will handle a TON of
traffic.

~~~
goldenkey
It's really the bandwidth that'll cost you :-)

~~~
yoz-y
Most web hosts and VPS solutions have unlimited traffic no?

~~~
forgotmyhnacc
No, most VPS start charging extra if you exceed their limits.

~~~
Aeolun
I get between 5 and 20 TB out per month, depending on host. That’s a loooot of
14kb page loads.

------
superasn
Or just create a static HTML file and put the damn thing in a S3 bucket behind
a cloudfront proxy.

That site is virtually guaranteed to never go down even with insane amounts of
traffic (plus it's edge-optimized, so a user in New Delhi won't be sending
requests to your server in Los Angeles).

edit: The whole setup takes less than 2 minutes and can even be automated with
2 AWS CLI commands.
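
A sketch of what those two commands might look like (the bucket name and distribution ID are made-up placeholders):

```shell
# Deploy ./public to S3 and invalidate the CloudFront cache.
# BUCKET and DISTRIBUTION_ID are placeholders.
BUCKET="my-emergency-site"
DISTRIBUTION_ID="E2EXAMPLE"

deploy() {
  # Upload the static files; --delete removes objects no longer present locally.
  aws s3 sync ./public "s3://$BUCKET" --delete
  # Tell CloudFront's edge nodes to fetch fresh copies.
  aws cloudfront create-invalidation \
    --distribution-id "$DISTRIBUTION_ID" --paths "/*"
}
```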

~~~
onion2k
That's pretty close to what Netlify is doing behind the scenes, isn't it? Why
would configuring it yourself be better? Netlify has way more experience at
that than most people.

~~~
dubcanada
Netlify likes to go down unless you are paying for the high tier. Also they
charge for bandwidth after 100GB ($20 for another 100GB, which is basically
stealing money compared to what 100GB is actually worth).

I would not recommend Netlify if you have other options.

~~~
onion2k
I have more than 30 sites on the free tier of Netlify and I've never known it
to be down.

------
ThePhysicist
The simplest solution is to make a static website of your dynamic one using
wget, and then publish that. I did this e.g. for Wordpress sites, works really
well and is very reliable. The process can be triggered via a cron script or
manually (I wrote a small Wordpress plugin for it). No special hardware,
infrastructure or cloud services required. Just make sure all resources are
reachable via a link (so wget can find them) or manually point it at a list of
otherwise unreachable files (e.g. using a sitemap or .txt file).

The advantage is that you can still use your existing CMS, so your staff won't
need to learn a new system, and you also don't need any third-party cloud
services.

Actually, if your CMS is properly configured (e.g. correct cache headers) you
can also simply put it behind a CDN like Cloudflare, which will handle the
caching and scaling for you.

~~~
edsimpson
Any chance you could share the script / wget settings you are using for
wordpress?

~~~
ThePhysicist
Normally you can just use

    wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.org

This should make a full copy of your website (source:
[https://www.guyrutenberg.com/2014/05/02/make-offline-mirror-of-a-site-using-wget/](https://www.guyrutenberg.com/2014/05/02/make-offline-mirror-of-a-site-using-wget/)).

I don't have the source of the Wordpress plugin anymore unfortunately.
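
For the cron side, one possible sketch: mirror into a temp directory first so a failed crawl doesn't clobber the live copy (the domain and paths are illustrative):

```shell
# Mirror the dynamic site into a static copy, then swap it into place.
# example.org and /var/www/static are placeholders.
mirror_site() {
  local tmp
  tmp="$(mktemp -d)"
  # Bail out before touching the live copy if the crawl fails.
  wget --mirror --convert-links --adjust-extension \
       --page-requisites --no-parent \
       --directory-prefix="$tmp" http://example.org || return 1
  rsync -a --delete "$tmp/example.org/" /var/www/static/
  rm -rf "$tmp"
}

# cron entry, e.g. hourly:
# 0 * * * * /usr/local/bin/mirror.sh
```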

------
thaumaturgy
I like the idea of this but the process required to deploy it needs to be more
carefully thought out. As it is, this currently assumes that any local agency
or whatever contractor it works with will:

* Be familiar and comfortable with npm

* Be familiar and comfortable with netlify

* Be aware of this as a possible option

The bar for these folks is pretty darn low. A _lot_ of organizations end up
contracting with individuals or organizations not because they're up to speed
on modern web development but because they knew them from somewhere else.

Setting this up as some kind of hosted service would probably be a good next
step.

~~~
miles_matthias
I agree, I'd love to see this as a docker container or Makefile where
dependencies are automatically pulled in if not present on the host machine.

~~~
thaumaturgy
I think even Docker might be a big ask. I mean, my experience so far has been
that a lot of the people that this is targeted towards are still using
platforms like Wordpress, Drupal, and Magento (all php...). Heck, git adoption
still isn't 100% in this market; I just recently received sftp credentials
from an agency that works in this niche.

~~~
miles_matthias
Command line would be the only necessity. You can write a script that could
download docker & run it. That's what I meant by Makefile too.

~~~
franga2000
I'd say CLI access is an even higher bar. Even in most medium-sized companies,
the website people would never be able to get shell access in a reasonable
amount of time, if at all, as they might be using web or CMS hosting.

Scale this up to a government organization, and the chance of it happening is
basically zero, especially in an emergency. And even if they host their own
web servers and manage to get access to them, the chances of them being able
to run Docker or really anything besides what they were set up for without
unreasonable effort are slim at best.

I'm not saying Netlify is a good solution, but it's one that a single creative
tech could figure out and set up in a day and would be almost guaranteed to
work well.

------
fludlight
I'll add two more points:

\- Remove all non-essential scripts: ads, analytics, fonts, social,
liveperson, disqus, truste, foresee, cookielaw, etc.

\- Scale down or omit images

~~~
franga2000
Not to be "that guy", as I mostly like modern Web design, but:

\- and after the emergency is over, keep it that way

~~~
fludlight
Sometimes I contemplate launching a copycat conglomerate like Rocket, but
instead of international copies of American and Chinese ideas it would do
fast, minimal bandwidth & js clones with real privacy. The main problem is
that network economics are a hell of a moat.

------
pax
As the most bootstrap solution I could think of: a link dump with Google Sheets
as the backend, rendered as static HTML with a few extra formatting options.
Quick & dirty, but something most people are able to edit without needing to
get used to a new GUI or create new accounts.

We then have a Google Form for public submissions that feeds the same sheet.

The document is 'published to the web' as csv, so there's no need to use the
API / register an app.

example:
[https://buletin.de/InitiativeDeAjutor/](https://buletin.de/InitiativeDeAjutor/)
[https://github.com/pax/covid19ro-linkdump](https://github.com/pax/covid19ro-linkdump)
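
A minimal sketch of the render step, assuming a two-column sheet (title, url) published as CSV; the URL is a placeholder and the CSV handling is deliberately naive (no commas, quotes, or markup inside fields):

```shell
# The "publish to the web" CSV link for the sheet (placeholder).
SHEET_CSV_URL="https://docs.google.com/spreadsheets/d/e/EXAMPLE/pub?output=csv"

# Turn "title,url" rows into an HTML list.
render_links() {
  awk -F',' '
    NR == 1 { next }   # skip the header row
    { printf "<li><a href=\"%s\">%s</a></li>\n", $2, $1 }
  '
}

# Usage: curl -fsS "$SHEET_CSV_URL" | render_links > links.html
```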

~~~
dimovich
Thank you for sharing this project! This could work nicely also for people
wanting to move their business online. Multumiri!

------
hilti
Am I old school if I'm just asking for plain HTML, CSS3 and using SFTP to
upload the stuff?

I don't get why it's cool to use NPM, a static site generator and netlify for
an emergency website.

~~~
franga2000
If you already have a NodeJS person, it would be far better than having to set
up a static-optimized web server from scratch. You can't just replace your big
fat Drupal (or worse) site's web root with a bunch of HTML files and have it
perform as well as a static-optimized server.

And even if you don't have anyone on-hand, it is, unfortunately, cheaper,
easier and _faster_ to learn all this crap and deploy to Netlify, than it is
to set up a solid Web server from scratch (or even get good enough "just a
folder" web hosting).

------
lovetocode
What’s wrong with just plain HTML and Nginx or Apache?

~~~
omarchowdhury
Same question. Netlify CMS and Eleventy seem like unnecessary complexity here.
Apache or Nginx web servers can be set up with a one-click install on most web
hosts.

~~~
JdeBP
Well if you are looking at unnecessary complexity, there are simpler static
HTTP servers than nginx and Apache. (-:

~~~
omarchowdhury
Seeking out relatively unknown and unsupported HTTP servers itself becomes
unnecessary complexity.

~~~
JdeBP
You are making the error of thinking that everything apart from nginx and
Apache has to be "sought out". That's simply narrow minded, bespeaking a lack
of wide experience, and certainly not at all supported by even looking at
Debian's or Arch's package repositories let alone at (say) the FreeBSD ports
collection.

~~~
omarchowdhury
Yes, being narrow-minded is sufficient for removing the complexities necessary
to host _a static website_.

It's a non-sequitur to say since I don't endorse using other servers in this
instance, I must lack knowledge of them. It's not about me or you, it's about
the lowest common denominator of technical person necessary to maintain _a
static website_.

Following your path, the complexity becomes contemplating why one should use
lesser known web servers instead of what one is more likely to be familiar
with.

~~~
JdeBP
And now you are trying to squirm out of what you said.

You were _not_ not endorsing them; you were claiming that they had to be
sought out and that that was unnecessary complexity. It is no more added
complexity to install some _other_ Debian/Arch/FreeBSD/Fedora/whatever package
than it is to install those operating systems' nginx and Apache packages, and
saying otherwise is really looking for any excuse to reinforce an existing
narrow-minded prejudice rather than a real evaluation of complexity.

~~~
omarchowdhury
> Following your path, the complexity becomes contemplating why one should use
> lesser known web servers instead of what one is more likely to be familiar
> with.

Read what I wrote again. "Seeking out" is not limited to just choosing a
different package, it includes the cognitive costs of exploring an option that
is hitherto unknown to oneself.

You only think I'm squirming because you are qualifying what "seeking out"
means to a narrow view, ironically.

How would one who is unfamiliar with those "simpler HTTP servers" even know
the fact without diving into comparing the complexities of Apache and nginx
against the other options? Why dive into that complexity if one is already
familiar with Apache and nginx, in this use case (I'm sure you'd agree that
the average person tasked for this would be more likely to be familiar with
those)? This is the point you're missing.

------
3dprintscanner
I've been using exactly the same approach for spinning up local COVID-19
support groups around London, one nice example is at
[https://holloway.coronacorps.com/](https://holloway.coronacorps.com/). Really
like the branch to subdomain mapping feature from them.

------
firefoxd
Let's pretend we are not web developers for a second. How do I build this
website if I don't know what static page means?

~~~
thanksforfish
I don't think this is the sort of thing a non-web developer should try to
tackle on their own. If you are building something for emergency information,
there is a lot of risk (loss of life?) if you can't get your message out.

Hire someone who knows how to build a scalable website. This isn't a horribly
hard problem, but it's easy to make a mistake.

~~~
firefoxd
I don't think a developer should be required. This is the perfect time when
the Squarespaces and Wixes of the world could offer minimal templates that do
the job. They already have the infrastructure, and have bombarded people with
ads for websites.

------
hanniabu
> progressively enable offline-support w/ Service Worker

I'm a bit confused about this point. If it's a basic static site why would
this be needed?

~~~
simon_acca
So that you can revisit the information even in the event that you lose
internet access.

~~~
867-5309
In an emergency context where vital information is likely to change often,
this seems a little strange. Perhaps with a little (opt-in?) funding, SMS could
be used - surely that would be more available than the internet.

~~~
TomAnthony
A Service Worker would give access to what could be important information vs
no information. Given the nature of an emergency site it probably means a
heightened chance of people losing internet availability.

If there is an updated version a Service Worker can check for that and pull it
in if there is a connection.

Whilst imperfect to have potentially out of date information, that only
happens if the person has no internet and the information has changed since
they accessed it - I think it is worth the trade off of people not having any
information.

For critical things, SMS probably makes more sense, but I'm not sure that is
what they are trying to solve here.

------
Meph504
I mean, call me crazy, but if this is for an emergency, wouldn't it make more
sense to provide a small archive with the source files in it - no third party
at all?

It requires only an unarchiver and a text editor.

~~~
tantalor
Provide how? From a webserver? Back to square 1.

~~~
e12e
Web seeded torrent?

------
newscracker
> When it comes to resilience, you just can’t beat static HTML.

We don’t actually need HTML for every case. For even more resilience, we could
just push text files with markdown-like formatting characters that people
might understand to give the content some hierarchy and emphasis. This would
be just content and content alone. Obviously, this wouldn’t be appropriate for
all use cases, but if you’re just sharing updates, it could (depending on
other factors) be a simpler solution to implement.

~~~
giantrobot
Modern browsers are really shitty text viewers, especially mobile browsers.
They have unreliable scaling and line wrapping, and are generally piss-poor at
the task. An HTML document with zero CSS will be displayed far better in
browsers. A couple of CSS statements in a style tag can make that basic
document easily readable and even good-looking.

With HTML you have the ability to link to other sites, documents, or anchors
in those documents. This makes navigation much simpler than shittily displayed
plain text. You're also less likely to have your document mangled by the
browser like with plain text.

Also remember just about anyone with a smartphone knows how to navigate the
web (links, back buttons, etc). If you break those UI paradigms by sending
them plain text documents you've made it _harder_ for a good percentage of the
population to effectively use that data.

------
ddevault
[https://emergency-site.dev/posts/2020-03-22-example-post/](https://emergency-site.dev/posts/2020-03-22-example-post/)

I would provide much less background information here. "What you need to do"
should definitely not be below the fold. You should boil it down to:

1\. This is happening

2\. This is what you personally need to do about it.

Then add whatever else you want after that.

I know this is just an example, but it should set a _good_ example.

~~~
kaycebasques
As a technical writer, if push comes to shove, I would recommend leading off
with what you need to do, first and foremost, and then going into detail about
why as needed. You should optimize your content for the audience that is
already convinced and just wants to know what they need to do in order to stay
safe. You can follow up with the information explaining why they need to do
this for the secondary audience of people who still need convincing.

In other words, if you think about it in terms of the typical inverted pyramid
model of journalism (which many here on HN already know about), what you need
to do is the most important information, why you need to do it is secondary.
That might be debatable to some people but that's how I view it.

~~~
DoreenMichele
IIRC (from taking journalism in high school), journalistic standards are:

Tell the most important info in the title as briefly as possible.

Repeat the most important info with a little more detail in the first
paragraph.

Repeat your main point and add more details in additional paragraphs. You
should be able to cut out the final paragraphs (or not bother to read them)
without losing any actually critical information.

Each additional paragraph should add new information, but not be essential to
the main point of the piece.

Answer: who, what, when, where, how and why.

------
classics2
Can someone explain why I would need a static site generator instead of just
writing what I want in html in the first place?

~~~
nsomaru
For me, maintaining common elements across pages is a reason not to use "just
HTML".

It's error-prone to make changes in the header and footer, for example, across
all pages if you have lots of them.
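
For a site this small, the "generator" solving exactly that problem can be a few lines of shell (partials/ and pages/ are illustrative paths):

```shell
# Wrap each page body in a shared header and footer.
# partials/ and pages/ are illustrative paths.
build_page() {
  local src="$1" out="$2"
  cat partials/header.html "$src" partials/footer.html > "$out"
}

# Build everything:
# for f in pages/*.html; do build_page "$f" "public/$(basename "$f")"; done
```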

------
mac-chaffee
Wouldn't the people putting out an emergency website likely be local
governments? If so, wouldn't there be hurdles to using cloud providers like
Netlify?

How do local governments usually host their websites?

~~~
sl1ck731
I work with some local governments in cloud consulting. They use different
clouds the same as anyone else.

We actually see a lot of local governments (cities or metro agencies) doing
things in AWS.

I'd say the likelihood in my experience of them grabbing this kind of thing
and deploying it is probably pretty low. The ones I work with would leave that
up to a contractor.

~~~
mac-chaffee
Great insight, thank you!

------
jan6
I'm a lil surprised at the speed!

also, a cheap little addition would be showing "you're viewing this page:
online/offline, last refresh: now/two days ago" or such, and possibly either a
button or an automatic popup for the "add to home/desktop" PWA prompt...

~~~
jan6
apparently I missed the PWA button - that's my bad, I initially viewed it in an
HN wrapper app

------
jamil7
Interesting idea but the complexity here is still way too high with a lot of
dependencies. Realistically couldn't we just host txt files in an emergency?

------
daniel_iversen
Very neat. Those minor details in the spec are probably what we easily forget
about. I didn't see a mention of a CDN on the front page, but that could be
another good must-have to add to the list to improve resiliency - even just a
free Cloudflare setup in front.

~~~
kall
Well, it specifically mentions Netlify, which is already a CDN - but good idea
anyway.

------
nkozyra
What am I missing?

A highly performant web server like nginx.

Static html content.

Done.

~~~
t0astbread
A way for government employees who don't know what a command line is to edit
it.

Fault tolerance in case something happens to your web server.

A way for people to read it offline when their internet connection goes down
(presuming they haven't saved the HTML).

~~~
lovetocode
I disagree that this approach is any better than your standard static HTML and
nginx. In an emergency situation I would want to reduce the surface for error
and keep everything as simple as possible. A CMS is total overkill for this.
There is nothing wrong with having a developer write HTML based on a Word
document given to them. If it were a true emergency, that developer resource
would be dedicated to doing this. Hell, Word can export documents to HTML
anyway, so that's half the battle already won. This article just seems gimmicky
to me.

~~~
nkozyra
It's adding a load of unnecessary tools and services to address a relatively
simple process.

People forget how simple the web can be.

------
whatsmyusername
If it's all client side s3/cloudfront and you're done. Costs pennies.

You can go super jank using S3 only with simple sites, but realistically you're
going to want TLS at some point.

~~~
_alex_
Cloudfront in front of S3 with a free ACM cert. done

------
yonilevy
Nicely done. Wish this was the starter kit for most websites.

------
kernelsanderz
I think this is a great idea, and applaud making a high-availability,
high-load site template available to others.

The number of times I have thought in the past few weeks that if they had just
used some static pages on S3 behind CloudFront, or some kind of CDN, much pain
could have been averted.

Of course the first thing I did was benchmark the test site to see how their
edge network performs. For reference, I'm based in Melbourne, Australia, and
have a 100mbps download / 50mbps upload connection:

    
    
      $ ab -n 10000 -c 100 https://emergency-site.dev/
      This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
      Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
      Licensed to The Apache Software Foundation, http://www.apache.org/
      
      Benchmarking emergency-site.dev (be patient)
      Completed 1000 requests
      Completed 2000 requests
      Completed 3000 requests
      Completed 4000 requests
      Completed 5000 requests
      Completed 6000 requests
      Completed 7000 requests
      Completed 8000 requests
      Completed 9000 requests
      Completed 10000 requests
      Finished 10000 requests
      
      
      Server Software:        Netlify
      Server Hostname:        emergency-site.dev
      Server Port:            443
      SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128
      TLS Server Name:        emergency-site.dev
      
      Document Path:          /
      Document Length:        4836 bytes
      
      Concurrency Level:      100
      Time taken for tests:   106.534 seconds
      Complete requests:      10000
      Failed requests:        0
      Total transferred:      53220000 bytes
      HTML transferred:       48360000 bytes
      Requests per second:    93.87 [#/sec] (mean)
      Time per request:       1065.345 [ms] (mean)
      Time per request:       10.653 [ms] (mean, across all concurrent requests)
      Transfer rate:          487.85 [Kbytes/sec] received
      
      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:      713  808  30.7    803    1828
      Processing:   230  236   4.8    236     443
      Waiting:      230  236   3.9    236     310
      Total:        956 1044  31.7   1039    2067
      
      Percentage of the requests served within a certain time (ms)
        50%   1039
        66%   1047
        75%   1053
        80%   1057
        90%   1070
        95%   1082
        98%   1107
        99%   1168
       100%   2067 (longest request)
    

I know there's much better ways of testing load/performance. It's just what I
had on hand.

~~~
sumnulu
> I know there's much better ways of testing load/performance.

such as

[https://github.com/giltene/wrk2](https://github.com/giltene/wrk2)
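
A typical invocation might look like this (the binary builds as `wrk`; the rate, thread count, and duration here are arbitrary):

```shell
# Constant-throughput load test: 2 threads, 100 connections, 30 s,
# pinned at 1000 req/s. wrk2's -R flag avoids coordinated omission,
# so the latency percentiles it reports are trustworthy.
bench() {
  wrk -t2 -c100 -d30s -R1000 --latency "$1"
}

# bench https://emergency-site.dev/
```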

------
Angostura
Seriously, if you want something in an emergency- go to somewhere like
weebly.com and set up a free site.

------
lovelearning
I searched for terms like IPFS and FreeNet in this HN discussion and
surprisingly didn't find them. I think distributing to geographically
localized p2p networks should be part of any system for emergency websites. An
emergency need not necessarily be natural and unintentional.

~~~
jamesgeck0
I love distributed websites, but hardly anyone uses these things. I don't even
know of any mobile apps for IPFS, Dat, or FreeNet. They should absolutely be
more widespread, but crisis mode website operations need to focus on things
the majority of users can access.

------
benrmatthews
The one-click deploy to Netlify worked for me. Have used this to spin up a
site listing crisis communications advice around Coronavirus:
[https://coronaviruscomms.netlify.com/](https://coronaviruscomms.netlify.com/)

------
mamborambo
A README.md in most GitHub repos is already a statically published page, easy
to author and easy to access. Even most PWA features can be added to this
setup with just Jekyll configs. Am I missing something in the question?

------
mro_name
a radically simple microblog for single-user laypersons publishing on their
own property (yes, one needs to buy some kind of hosting, starting at 2
EUR/month in DE): [https://github.com/mro/ShaarliGo#install--update](https://github.com/mro/ShaarliGo#install--update)

------
forgotmypw16
But does it work with Netscape 3 and IE4?

------
z0mbie42
Awesome, thank you!

------
dreamcompiler
Netlify CMS is based on React. In what universe could this be considered
appropriate for a basic emergency website?

Edit: I don't really mind if React is only used as the authoring interface.
But the consumer view should not require Javascript, and AFAICT Netlify CMS
does.

~~~
tpfour
Because the CMS is decoupled from the content, and installing Netlify CMS is
very fast. I was able to coach an SMB owner into using it for their website in
about 1h tops (after setting everything up myself).

