1. A static site generator, with Markdown files in GitHub as the source input
2. Data from Google Sheets
3. A bash job on a cron that would poll both for changes... if changes exist, re-publish the site or data and purge Cloudflare cache using their API
4. Configure Cloudflare via Page Rule to Cache Everything
Even with a very high change rate, hundreds of thousands of visitors a day, and severe traffic spikes... the site loaded instantly, was simple to maintain and update, and the cache-purge stampede never overwhelmed the cheapest Linode serving the static files.
The content editors used GitHub as the CMS and edited Markdown, or just updated data in Google Sheets. Changes were live within 5 minutes.
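A sketch of what the polling cron job described above might look like, assuming a Hugo-style builder; the repo path, zone ID, token, and docroot are all placeholders, and the Cloudflare call is their standard v4 purge endpoint:

```shell
#!/bin/sh
# Placeholder configuration -- none of these names are from the original post.
REPO_DIR="${REPO_DIR:-$HOME/site}"
CF_ZONE="your-cloudflare-zone-id"
CF_TOKEN="your-api-token"

# True when two commit hashes differ, i.e. a rebuild is needed.
heads_differ() { [ "$1" != "$2" ]; }

publish() {
  git -C "$REPO_DIR" pull --ff-only
  (cd "$REPO_DIR" && hugo)                      # or whichever generator
  rsync -a "$REPO_DIR/public/" /var/www/site/
  # Purge the whole Cloudflare cache via the v4 API
  curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$CF_ZONE/purge_cache" \
       -H "Authorization: Bearer $CF_TOKEN" \
       -H "Content-Type: application/json" \
       --data '{"purge_everything":true}'
}

main() {
  local_head=$(git -C "$REPO_DIR" rev-parse HEAD)
  remote_head=$(git -C "$REPO_DIR" ls-remote origin refs/heads/master | cut -f1)
  # (a real version would also poll the Google Sheet for changes)
  if heads_differ "$local_head" "$remote_head"; then
    publish
  fi
}

# main   # uncomment when installing as a cron job, e.g. */5 * * * *
```

Polling the Google Sheet could be done the same way, by hashing the downloaded CSV and comparing against the previous run.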
The builder just rebuilds when someone pushes to master and then scps the output to the VM. We've been trying out Forestry.io (linked to GitHub) as a management client so that non-technical authors can add content. It works up to a point, but there are odd limitations: Forestry has poor support for media that isn't an image, and it has no concept of folders, so everything gets thrown into "/media", which I hate. Also, because it uses Git as the database, it commits every time you save, which of course triggers a build. If there were a way to batch changes into releases in Forestry, that'd be ideal.
Cloudflare do not charge for bandwidth... and so it's free.
This whole setup is $5 per month for the hosting, though we do use Github personal and that is $7.00 per month.
To create truly fault-tolerant services you CANNOT assume a freemium service will go out on a limb for you during a critical time.
Also, for anyone building a site they aren't directly responsible for, getting payment details in a crisis is effectively impossible.
Lastly, don't be so sure services like Netlify wouldn't help if you asked. They often do.
That site is virtually guaranteed to never go down even with insane amounts of traffic (plus it's edge-optimized, so a user in New Delhi won't be sending requests to your server in Los Angeles)
edit: The whole setup takes less than 2 minutes and can even be automated with 2 aws cli commands.
I'm quite happy with the rest of my AWS architecture too, e.g. SES for sending mail, Lambda for serverless, etc., so I like to stick with them. The awscli is also quite powerful, and I'm able to set up the whole thing from scratch with a single bash script.
I'm sure Netlify must have simplified the process or made it easier and could be another great option.
I would not recommend Netlify if you have other options.
It would be nice to fork the project and do something similar with CloudFront. Any static object data can also be fetched from CloudFront as JSON files and periodically updated by cache invalidation or cache expiry dates (i.e. cache for 5 minutes).
Another benefit of this is that you get a free SSL certificate from Amazon that auto-renews, so it virtually never expires. Pricing is also on-demand and very low (you only pay for the bandwidth you use, which is pretty cheap too)
The commands to re-deploy your site would be the following (assuming aws-cli is installed):
aws s3 sync ~/your-site/ s3://bucket-name
aws cloudfront create-invalidation --distribution-id ID --paths "/*"
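The "single bash script" setup mentioned earlier could be sketched roughly like this. The bucket name and paths are hypothetical, and by default the script only prints the commands it would run (set DRY_RUN=0 to execute them for real):

```shell
#!/bin/sh
# Hypothetical one-shot S3 + CloudFront provisioning sketch.
BUCKET="my-emergency-site"

# Print commands by default; run them only when DRY_RUN=0.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

deploy() {
  # Create the bucket and enable static website hosting
  run aws s3 mb "s3://$BUCKET"
  run aws s3 website "s3://$BUCKET" --index-document index.html

  # Upload the site; give the data file a 5-minute cache life so clients
  # re-fetch it without needing an invalidation every time it changes
  run aws s3 sync ./public "s3://$BUCKET" --acl public-read
  run aws s3 cp ./public/data.json "s3://$BUCKET/data.json" \
      --acl public-read --cache-control "max-age=300"

  # Put CloudFront in front for TLS and edge caching
  run aws cloudfront create-distribution \
      --origin-domain-name "$BUCKET.s3.amazonaws.com" \
      --default-root-object index.html
}

deploy
```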
There are follow up posts on CI / CD and search.
The advantage is that you can still use your existing CMS, so your staff won't need to learn a new system, and you also don't need any third-party cloud services.
Actually, if your CMS is properly configured (e.g. correct cache headers) you can also simply put it behind a CDN like Cloudflare, which will handle the caching and scaling for you.
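For instance, a hypothetical nginx front-end for the CMS could set headers like these, letting browsers revalidate quickly while the CDN keeps pages for a few minutes (the values and the PHP-style rewrite are illustrative only):

```nginx
location / {
    # Browsers revalidate after 60s; shared caches (the CDN) after 300s
    add_header Cache-Control "public, max-age=60, s-maxage=300";
    try_files $uri $uri/ /index.php?$args;
}
```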
> wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.org
This should make a full copy of your website (source: https://www.guyrutenberg.com/2014/05/02/make-offline-mirror-...).
I don't have the source of the Wordpress plugin anymore unfortunately.
* Be familiar and comfortable with npm
* Be familiar and comfortable with netlify
* Be aware of this as a possible option
The bar for these folks is pretty darn low. A lot of organizations end up contracting with individuals or organizations not because they're up to speed on modern web development but because they knew them from somewhere else.
Setting this up as some kind of hosted service would probably be a good next step.
Scale this up to a government organization, and the chance of it happening is basically zero, especially in an emergency. And even if they host their own web servers and manage to get access to them, the chances of them being able to run Docker or really anything besides what they were set up for without unreasonable effort are slim at best.
I'm not saying Netlify is a good solution, but it's one that a single creative tech could figure out and set up in a day and would be almost guaranteed to work well.
- Remove all non-essential scripts: ads, analytics, fonts, social, liveperson, disqus, truste, foresee, cookielaw, etc.
- Scale down or omit images
- and after the emergency is over, keep it that way
We then have a Google Form for public submissions that feeds the same sheet.
The document is 'published to the web' as CSV, so there's no need to use the API or register an app.
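Consuming a sheet published that way needs nothing but curl; the sheet ID below is a placeholder, and the converter is a hypothetical helper that only handles simple CSVs (no quoted commas, no embedded newlines):

```shell
# Fetch the published sheet (SHEET_ID is a placeholder):
#   curl -sL "https://docs.google.com/spreadsheets/d/e/SHEET_ID/pub?output=csv" -o data.csv

# Convert a simple CSV into JSON so the static site can consume it as data.
csv_to_json() {
  awk -F, '
    BEGIN   { printf "[" }
    NR == 1 { n = split($0, h); next }          # header row -> field names
    {
      printf "%s{", sep
      for (i = 1; i <= n; i++)
        printf "%s\"%s\":\"%s\"", (i > 1 ? "," : ""), h[i], $i
      printf "}"
      sep = ","
    }
    END     { printf "]\n" }
  ' "$1"
}
```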
I don't get why it's cool to use NPM, a static site generator and netlify for an emergency website.
And even if you don't have anyone on-hand, it is, unfortunately, cheaper, easier and faster to learn all this crap and deploy to Netlify, than it is to set up a solid Web server from scratch (or even get good enough "just a folder" web hosting).
It's a non-sequitur to say since I don't endorse using other servers in this instance, I must lack knowledge of them. It's not about me or you, it's about the lowest common denominator of technical person necessary to maintain a static website.
Following your path, the complexity becomes contemplating why one should use lesser known web servers instead of what one is more likely to be familiar with.
You were not merely not endorsing them; you were claiming that they had to be sought out and that that was unnecessary complexity. It is no more complex to install some other Debian/Arch/FreeBSD/Fedora/whatever package than it is to install those operating systems' nginx and Apache packages, and saying otherwise is really looking for any excuse to reinforce an existing narrow-minded prejudice rather than making a real evaluation of complexity.
Read what I wrote again. "Seeking out" is not limited to just choosing a different package, it includes the cognitive costs of exploring an option that is hitherto unknown to oneself.
You only think I'm squirming because you are confining what "seeking out" means to a narrow view, ironically.
How would one who is unfamiliar with those "simpler HTTP servers" even know the fact without diving into comparing the complexities of Apache and nginx against the other options? Why dive into that complexity if one is already familiar with Apache and nginx, in this use case (I'm sure you'd agree that the average person tasked for this would be more likely to be familiar with those)? This is the point you're missing.
Hire someone who knows how to build a scalable website. This isn't a horribly hard problem, but it's easy to make a mistake.
I wonder if there's room for a startup here, automated offsite emergency pages for town and city officials to use to quickly publish information.
I'm a bit confused about this point. If it's a basic static site why would this be needed?
If there is an updated version a Service Worker can check for that and pull it in if there is a connection.
Whilst it's imperfect to have potentially out-of-date information, that only happens if the person has no internet and the information has changed since they accessed it - I think that's worth the trade-off compared to people having no information at all.
For critical things, SMS probably makes more sense, but I'm not sure that is what they are trying to solve here.
If vital information changes, it’s incredibly difficult to consolidate (where in the SMS thread was the latest update on topic XY again?)
SMS also can’t use pictures, requires stateful server infrastructure, is not easy to bookmark, is irretrievable when deleted, and can’t be shared as quickly as a URL.
requires an unarchiver, and a text editor.
We don’t actually need HTML for every case. For even more resilience, we could just push text files with markdown-like formatting characters that people might understand to give the content some hierarchy and emphasis. This would be just content and content alone. Obviously, this wouldn’t be appropriate for all use cases, but if you’re just sharing updates, it could (depending on other factors) be a simpler solution to implement.
With HTML you have the ability to link to other sites, documents, or anchors in those documents. This makes navigation much simpler than shittily displayed plain text. You're also less likely to have your document mangled by the browser like with plain text.
Also remember just about anyone with a smartphone knows how to navigate the web (links, back buttons, etc). If you break those UI paradigms by sending them plain text documents you've made it harder for a good percentage of the population to effectively use that data.
I would provide much less background information here. "What you need to do" should definitely not be below the fold. You should boil it down to:
1. This is happening
2. This is what you personally need to do about it.
Then add whatever else you want after that.
I know this is just an example, but it should set a good example.
In other words, if you think about it in terms of the typical inverted pyramid model of journalism (which many here on HN already know about), what you need to do is the most important information, why you need to do it is secondary. That might be debatable to some people but that's how I view it.
Tell the most important info in the title as briefly as possible.
Repeat the most important info with a little more detail in the first paragraph.
Repeat your main point and add more details in additional paragraphs. You should be able to cut out the final paragraphs (or not bother to read them) without losing any actually critical information.
Each additional paragraph should add new information, but not be essential to the main point of the piece.
Answer: who, what, when, where, how and why.
It’s error-prone to make changes to the header and footer, for example, across all pages if you have lots of pages.
How do local governments usually host their websites?
We actually see a lot of local governments (cities or metro agencies) doing things in AWS.
I'd say the likelihood in my experience of them grabbing this kind of thing and deploying it is probably pretty low. The ones I work with would leave that up to a contractor.
Also, a cheap little addition would be showing "You're viewing this page: online/offline, last refreshed: now/two days ago" or similar, and possibly either a button or an automatic popup for the "add to home screen" PWA prompt...
A highly performant web server like nginx.
Static html content.
Fault tolerance in case something happens to your web server.
A way for people to read it offline when their internet connection goes down (presuming they haven't saved the HTML).
People forget how simple the web can be.
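The first two items on that list can be as little as this (a hypothetical minimal nginx config; the hostname and paths are placeholders):

```nginx
server {
    listen 80;
    server_name emergency.example.org;
    root /var/www/emergency;
    index index.html;

    # Compress text assets (HTML is gzipped by default when gzip is on)
    gzip on;
    gzip_types text/css application/javascript application/json;

    location / {
        try_files $uri $uri/ =404;
        expires 5m;   # let browsers and CDNs cache briefly
    }
}
```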
You can go super jank using S3 only with simple sites, but realistically you're going to want TLS at some point.
The number of times I have thought in the past few weeks that if they had just used some static pages on S3 behind Cloudfront, or some kind of CDN, that much pain could have been averted.
Of course the first thing I did was to benchmark the test site to see how their edge network performs. For reference, I'm based in Melbourne, Australia, and have a 100 Mbps download / 50 Mbps upload connection:
$ ab -n 10000 -c 100 https://emergency-site.dev/
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking emergency-site.dev (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: Netlify
Server Hostname: emergency-site.dev
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128
TLS Server Name: emergency-site.dev
Document Path: /
Document Length: 4836 bytes
Concurrency Level: 100
Time taken for tests: 106.534 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 53220000 bytes
HTML transferred: 48360000 bytes
Requests per second: 93.87 [#/sec] (mean)
Time per request: 1065.345 [ms] (mean)
Time per request: 10.653 [ms] (mean, across all concurrent requests)
Transfer rate: 487.85 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 713 808 30.7 803 1828
Processing: 230 236 4.8 236 443
Waiting: 230 236 3.9 236 310
Total: 956 1044 31.7 1039 2067
Percentage of the requests served within a certain time (ms)
100% 2067 (longest request)