Update: okay, it generates the simplest possible site with 250 posts in 7 seconds. That's not the worst performance (it's faster than Jekyll), but I wouldn't call it 'incredibly fast'. :\
And why the downvote, by the way? Is it wrong to point out statements that may not hold any value? Especially when they don't?
One could make the argument that Hexo is fast because it's:
- fast to compile: probably faster or equal to industry standard Ruby/Jekyll
- fast to install: npm
- fast to deploy: single command for Heroku / GitHub Pages
- fast to write: Markdown
You're taking the one word "incredibly" and blowing it out of proportion. We're not talking about super computing here, fast is a relative construct.
Jekyll is slow enough that being faster than Jekyll doesn't make something 'incredibly fast'. It would have to be much faster than Jekyll to count as even 'fast'.
For me, fast means it can regenerate a page in the time it takes me to switch from editor to browser. This one cannot (7 seconds for almost nothing).
My idea for a blogging platform (maybe even a service), if I ever get to it, would be a combination of Jekyll and Posterous. In other words, you'd have:
* A Jekyll dir structure on the server
* You write a post via Email
* The email is POSTed to the service via MailGun
* The service writes a new HTML file in the Jekyll dir and runs `jekyll build`
BAM, you have the speed of a static site, the wysiwyg editing features of whatever email client you use, and the ability to post from any computer/phone/tablet (not just one with git configured).
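The email-to-post step described above could be sketched roughly like this. This is a hypothetical illustration, not a real service: the front-matter layout and paths are assumptions, and the `subject`/`body-plain` fields are the ones Mailgun's parsed-message webhook POSTs.

```python
import re
import subprocess
from datetime import date
from pathlib import Path

def write_post(subject: str, body: str, site_dir: str) -> Path:
    """Turn one inbound email (subject + plain-text body) into a Jekyll post file."""
    slug = re.sub(r"[^a-z0-9]+", "-", subject.lower()).strip("-")
    path = Path(site_dir) / "_posts" / f"{date.today():%Y-%m-%d}-{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    front_matter = f'---\nlayout: post\ntitle: "{subject}"\n---\n\n'
    path.write_text(front_matter + body + "\n")
    return path

def rebuild(site_dir: str) -> None:
    # Rebuild the whole site; incremental rebuilds would be the obvious next step.
    subprocess.run(["jekyll", "build"], cwd=site_dir, check=True)
```

The part that receives Mailgun's POST and calls `write_post` + `rebuild` is just a small webhook handler in whatever framework you like.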
GET-driven means that you need GET events to prompt the system to generate new pages and possibly cache them.
This has problems. The main one is that generating pages from scratch is costly compared to serving a flat file.
So we turn to caching.
But here arises a problem. GETs do not change content. So any system that relies on GETs as its source of activity will have inevitable mismatches between the cache and the content. This is where TTLs and LRUs and a whole bunch of other stuff comes into play.
What if, instead, we used a POST-driven architecture? Instead of relying on stochastic noise to determine what to do, we could rely on changing stuff when we actually receive the instruction to change stuff.
So instead of updating the cache on a certain number of GETs, or on a TTL or some other basis, we update the cache when a POST invalidates it. "Easy".
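The POST-driven idea above fits in a toy sketch: reads never decide freshness; only a write touches the cache, and it touches exactly the entry it changed. Names here are illustrative, not from any particular framework.

```python
posts = {}   # canonical content, keyed by slug
cache = {}   # rendered pages, keyed by slug

def render(slug: str) -> str:
    return f"<h1>{posts[slug]}</h1>"   # stand-in for real templating

def handle_get(slug: str) -> str:
    # GETs never invalidate anything; they read whatever is cached,
    # populating lazily on the first hit.
    if slug not in cache:
        cache[slug] = render(slug)
    return cache[slug]

def handle_post(slug: str, title: str) -> None:
    # The write is the single source of invalidation: update the
    # content, then regenerate exactly the affected cache entry.
    posts[slug] = title
    cache[slug] = render(slug)
```

No TTLs, no LRU heuristics: the cache can only disagree with the content if a write path forgets to invalidate, which is a bug you can find, not noise you have to tolerate.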
If you avoid the naive scheme that early versions of Movable Type used (rebuilding a whole site upon each post), the whole thing goes a lot quicker. And you get to keep comments.
In 2010 I did some preliminary research as part of an honours project proposal, but went with a different project. But it's still one of my pet peeves.
I was a huge fan early in my career of using Blogger for both work and play. It was a really flexible generator of markup files - the output didn't have to be HTML, so I'd use it to make RSS/XML that would get piped into XSLT for contact directories, or into a Flash site for art projects, all while keeping an easy admin interface for various people to contribute their content.
I really wanted Blogger to succeed on its own. I took an Adaptive Path workshop with Ev when it was just him and Jason running things and begged him to take my money for a premium account. He said creating a Blogger Pro was a popular request and they were working on it, but then shortly after they got acquired by Google and it wasn't long before it went downhill, especially when it stopped being a static file generator and became a bad Wordpress clone (no plugins, for example).
That's what I love about Jekyll. What's old is new again, and I'm pretty much trying to recreate that great setup from 5-10 years ago for my current projects. Content-centric sites want to be in HTML, not rendered on the fly by app-centric frameworks.
With Jekyll, once it's up on, say, AWS, you don't have to worry about it. Add a service into the mix and you need a server, which means an attack vector to worry about, among other things.
People like Jekyll because it is so simple. Once you start adding services, it's no longer simple.
You still need to update the base system.
Having to worry about updating because of security defects is the same as ... having to worry about updating because of security defects.
Reducing the attack surface still leaves an attack surface, is my point. You can't just "forget about it", your server can still be subverted to unpleasant ends.
No email support at the moment, but it automatically updates Markdown files in your Dropbox.
Apologies for semi-hijacking to mention it, but I take a slightly different approach on my personal blog (custom, messy code): I rsync Markdown documents to my webserver, where they're compiled into HTML and put into an in-memory Redis collection that the server uses to render the blog. That gives you in-memory caching for free and avoids having to generate a whole bunch of static files every time. I use Node on the backend and Angular on the frontend for a single-page website.
Currently the solution regenerates all the HTML each time files are rsynced, via a script run remotely over SSH; ultimately I intend to use an inotify-style approach to import only the files that have actually changed, running both locally and remotely, so publishing an article need only require writing some Markdown and saving it in a particular folder.
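The import step being described could look something like the sketch below (in Python rather than Node, purely for illustration): walk a folder of Markdown files, compile each to HTML, and SET the result into a store keyed by slug. The converter here is a deliberately naive stand-in for a real Markdown library, and the store is anything with a `.set()` method (e.g. a `redis` client).

```python
from pathlib import Path

def naive_markdown(text: str) -> str:
    # Toy converter: real code would call markdown.markdown(text).
    return "\n".join(
        f"<h1>{line[2:]}</h1>" if line.startswith("# ") else f"<p>{line}</p>"
        for line in text.splitlines() if line.strip()
    )

def import_posts(src_dir: str, store) -> int:
    """Compile every .md file under src_dir into the store. Returns the count."""
    n = 0
    for md in sorted(Path(src_dir).glob("*.md")):
        html = naive_markdown(md.read_text())
        store.set(f"post:{md.stem}", html)   # e.g. redis.Redis().set(...)
        n += 1
    return n
```

An inotify-style version would replace the `glob` walk with a watcher that feeds only changed paths into the same compile-and-SET step.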
Though of course all this (currently) requires a server such as a Linode that you can rsync to, with a remote script watching a folder. I mention it to ask whether anybody would be interested in me cleaning it up and open sourcing it?
However, simply serving static files is more limiting - you can only serve what you've generated. With Redis you can serve the content as JSON data and use it dynamically, e.g. for search, or for showing all articles with a given tag or in a given date range.
Additionally, I'm not a big fan of a whole bunch of static files sat in a folder somewhere that needs to be regenerated every time I change something. Personal preference, perhaps :-)
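To make the point concrete: once posts live in memory as structured records rather than flat files, ad-hoc queries like "all articles with a tag, in a date range" are a one-liner. The field names below are made up for illustration.

```python
from datetime import date

posts = [
    {"slug": "a", "tags": {"redis", "node"}, "date": date(2013, 1, 5)},
    {"slug": "b", "tags": {"jekyll"},        "date": date(2013, 2, 1)},
]

def by_tag(tag, start, end):
    # Filter in memory; a static site would need this page pre-generated.
    return [p["slug"] for p in posts
            if tag in p["tags"] and start <= p["date"] <= end]
```

A purely static site has to pre-render one page per tag (and per date bucket) to offer the same navigation.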
Check out my website's repo: https://bitbucket.org/devlinzed/devlinzed.com/src. It has a JSON format for just about every URL, but is still entirely static:
Would you care to reveal a little more about where/how you host your site?
As far as I know, the only thing that I do that most static sites don't is precompile gzip files for HTML pages, and minify pretty much everything (including HTML and images.) PageSpeed, Pingdom and RedBot were very helpful for providing web server optimizations. I would just observe and implement every tweak they mentioned; there's a lot you can do managing your own server that you can't with AWS or GitHub.
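The gzip precompilation step is simple enough to sketch: after a build, write a `.gz` sibling for every HTML file at maximum compression, so a server configured for it (e.g. nginx with `gzip_static on`) can send the precompressed file instead of compressing per request. The function below is an illustration, not the commenter's actual tooling.

```python
import gzip
from pathlib import Path

def precompress(site_dir: str) -> int:
    """Write a .gz sibling for every .html file under site_dir. Returns the count."""
    n = 0
    for f in Path(site_dir).rglob("*.html"):
        gz = f.with_name(f.name + ".gz")
        # mtime=0 keeps the output byte-stable across rebuilds,
        # which is friendlier to rsync and cache validators.
        gz.write_bytes(gzip.compress(f.read_bytes(), compresslevel=9, mtime=0))
        n += 1
    return n
```

The same loop extends naturally to CSS, JS, and SVG; images are usually already compressed and go through a minifier/optimizer instead.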
And make that 100/100. :)
Also love the honest assessment in your repo's description that your setup is "far more neckbeard and far more work" :)
Hexo (and Jekyll, Pelican, etc.) pre-compiles from Markdown to HTML, which then gets served up as a static resource by the web server. With WordPress, on the other hand, the server has to compile the HTML every time a user visits.
For the server, HTML is just another static resource to transmit. You might as well ask to see benchmarks between displaying an image vs WordPress blog.
This can be mitigated with caching, but static resources are easier to cache, and with a dynamic app the caching logic ends up handled by the app rather than by CDNs.