
Hexo – A fast, simple and powerful blog framework - juanfatas
http://zespia.tw/hexo/
======
piranha
Writing "incredibly fast" while providing no clue as to how fast it actually is
isn't exactly the best way to promote an application, IMO.

Update: okay, it generates the simplest site possible, with 250 posts, in 7
seconds. That's not the worst performance (it's faster than Jekyll), but I
wouldn't call it 'incredibly fast'. :\

And why the downvote, by the way? Is it wrong to point out statements that may
not hold any value? Especially when they actually don't?

~~~
wting
I'm guessing you're getting downvotes because it's a trivial reply that
doesn't add to the discussion.

One could make the argument that Hexo is fast because it's:

\- fast to compile: probably as fast as or faster than the industry-standard Ruby/Jekyll

\- fast to install: npm

\- fast to deploy: single command for Heroku / GitHub Pages

\- fast to write: Markdown

You're taking the one word "incredibly" and blowing it out of proportion.
We're not talking about supercomputing here; fast is a relative construct.

~~~
piranha
> \- fast to compile: probably faster or equal to industry standard
> Ruby/Jekyll

Jekyll is slow enough that being faster than Jekyll doesn't make something
'incredibly fast'. It would have to be much faster than Jekyll to count as
even 'fast'.

For me, fast means it can regenerate the page in the time it takes me to
switch from my editor to the browser. This one cannot (7 seconds for almost
nothing).

------
callmeed
This looks like a nice Node port of Jekyll. It's not really how I prefer to blog, though.

My idea for a blogging platform (maybe even a service), if I ever get to it,
would be a combination of _Jekyll and Posterous_. In other words, you'd have:

* A Jekyll dir structure on the server

* You write a post via Email

* The email is POSTed to the service via MailGun

* The service writes a new HTML file in the Jekyll dir and runs `jekyll build`

BAM, you have the speed of a static site, the wysiwyg editing features of
whatever email client you use, and the ability to post from any
computer/phone/tablet (not just one with git configured).
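
The email-to-post step could be sketched roughly like this (a hypothetical helper, assuming the webhook handler has already parsed the subject and body out of the MailGun POST; all names are illustrative):

```javascript
// Hypothetical sketch: turn an already-parsed email (subject + body) into a
// Jekyll post file. After writing the file, the service runs `jekyll build`.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse punctuation/spaces into dashes
    .replace(/^-+|-+$/g, '');     // trim leading/trailing dashes
}

function makeJekyllPost(subject, body, date) {
  const day = date.toISOString().slice(0, 10);  // YYYY-MM-DD, as Jekyll expects
  const filename = `_posts/${day}-${slugify(subject)}.md`;
  const content = [
    '---',
    `title: "${subject}"`,
    `date: ${day}`,
    '---',
    '',
    body,
  ].join('\n');
  return { filename, content };
}
```

The service would then write `content` to `filename` and shell out to `jekyll build` to regenerate the site.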

~~~
don_draper
> _The service writes a new HTML file in the Jekyll dir and runs `jekyll
build`_

With Jekyll, once it's up on, say, AWS, you don't have to worry about it. Add
a service to the mix and you need a server, which means you have an attack
vector to worry about, among other things.

People like Jekyll because it is so simple. Once you start adding services,
it's no longer simple.

~~~
jacques_chester
> _With Jekyll once it is up on say AWS you don't have to worry about it._

You still need to update the base system.

~~~
don_draper
Right. But when a security vulnerability is announced for Apache, MySQL,
Postfix, etc., I don't have to worry about it.

~~~
jacques_chester
We're going to wind up splitting fine hairs, but here goes:

Having to worry about updating because of security defects is the same as ...
having to worry about updating because of security defects.

Reducing the attack surface still leaves an attack surface, is my point. You
can't just "forget about it"; your server can still be subverted to unpleasant
ends.

------
singular
Seems interesting, I've always appreciated the simplicity of a Jekyll-type
approach.

Apologies for semi-hijacking to mention it, but I take a slightly different
approach on my personal blog (custom, messy code): I rsync markdown documents
to my webserver, where they are compiled into HTML and put into an in-memory
redis collection that the server uses to render the blog. That gives you
in-memory caching for free and avoids having a whole bunch of static files
regenerated every time. I use node on the backend and angular on the frontend
to allow for a single-page website.

Currently the solution regenerates all the HTML each time files are rsynced,
via a script run remotely over SSH. Ultimately, though, I intend for it to use
an inotify-style approach, running both locally and remotely, to import only
the files that have actually changed, so publishing an article need only
require writing some markdown and saving it in a particular folder.
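
The core of that pipeline might look roughly like this (a sketch only: a plain Map stands in for redis, and a toy converter stands in for a real markdown library):

```javascript
// Toy markdown converter, for illustration only: handles '# ' headings and
// plain paragraphs. A real setup would use an actual markdown library.
function compileMarkdown(md) {
  return md
    .trim()
    .split('\n\n')
    .map(block =>
      block.startsWith('# ')
        ? `<h1>${block.slice(2)}</h1>`
        : `<p>${block}</p>`
    )
    .join('\n');
}

const store = new Map(); // stands in for the redis in-memory collection

// Called on import (e.g. triggered by rsync/inotify): compile once, store HTML.
function importPost(slug, markdown) {
  store.set(slug, compileMarkdown(markdown));
}

// Called per request: serve straight from memory, no files involved.
function servePost(slug) {
  return store.get(slug) ?? null;
}
```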

Though of course all this (currently) requires a server such as a Linode that
you can rsync to, plus a remote serving script watching a folder. I mention it
to ask: would anybody be interested in me cleaning it up and open sourcing it?

~~~
cnu
If you have compiled it to HTML, wouldn't it be more efficient to serve it up
as static HTML files via nginx (or any web server)? Why store the HTML in
redis and serve it via an application?

~~~
singular
With aggressive disk caching on servers, static files are probably near enough
to in-memory anyhow (and I'm sure you could configure a modern web server to
cache them as well), so yeah, it's probably pretty fast.

However, simply serving static files is more limiting - you can only serve
exactly what you've generated. With redis you can serve the content as JSON
data and use it dynamically, e.g. for search, or for showing all articles with
a given tag, in a given date range, etc.

Additionally, I'm not a big fan of a whole bunch of static files sitting in a
folder somewhere that need to be regenerated every time I change something.
Personal preference, perhaps :-)

~~~
nilved
If you're generating static HTML, why not generate static JSON? Then you can
use it for client-side search, for example.

Check out my website's repo:
<https://bitbucket.org/devlinzed/devlinzed.com/src>. It has a JSON format for
just about every URL, but is still entirely static:

<http://devlinzed.com/2013/may/keeping-all-your-data-safe>

<http://devlinzed.com/2013/may/keeping-all-your-data-safe.json>
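
The idea can be sketched as a generator that emits a JSON twin next to each HTML page (the post shape and naming scheme here are assumptions, not taken from the repo above):

```javascript
// Sketch: for each post, emit both a static HTML page and a static JSON
// representation, so clients can fetch the .json for search, tag listings, etc.
function renderOutputs(post) {
  const html = `<article><h1>${post.title}</h1>\n${post.body}</article>`;
  const json = JSON.stringify({
    title: post.title,
    date: post.date,
    tags: post.tags,
    body: post.body,
  });
  // A generator would write these two files into the output directory.
  return {
    [`${post.slug}.html`]: html,
    [`${post.slug}.json`]: json,
  };
}
```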

~~~
pdog
Your site loads very quickly. (PageSpeed score is 97 out of 100 -- the only
suggestions are for the gravatar.)

Would you care to reveal a little more about where/how you host your site?

~~~
nilved
I host it in DigitalOcean's New York datacenter (on a 512 MB VPS). They're
pretty alright, but any VPS would do the trick. In fact, I only use a VPS
because I can't properly set caching or gzip headers on GitHub pages.

As far as I know, the only things I do that most static sites don't are
precompiling gzip files for the HTML pages and minifying pretty much
everything (including HTML and images). PageSpeed, Pingdom and RedBot were
very helpful for suggesting web server optimizations; I would just observe and
implement every tweak they mentioned. There's a lot you can do managing your
own server that you can't with AWS or GitHub.
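
A minimal sketch of the gzip-precompile step, assuming a Jekyll/Hexo-style `public` output directory (the sample file is created purely for illustration). On nginx this pairs with the `gzip_static on;` directive, so the server sends the `.gz` file directly instead of compressing on every request:

```shell
# Create a sample output directory with one HTML page (illustration only).
mkdir -p public
printf '<h1>hello</h1>\n' > public/index.html

# Precompress every HTML file at maximum level, keeping the originals,
# so public/index.html.gz sits alongside public/index.html.
find public -name '*.html' -exec gzip -9 -f -k {} \;
```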

And make that 100/100. :)

------
C1D
It would be nice to see some benchmarks comparing it with WordPress and other
blogging platforms.

~~~
wting
There is no need for benchmarks, because serving static files will always be
faster than a dynamic web app like WordPress.

Hexo (and Jekyll, Pelican, etc.) pre-compiles Markdown to HTML, which is then
served as a static resource by the web server. With WordPress, on the other
hand, the server has to generate the HTML _every time a user visits_.[0]

For the server, HTML is just another static resource to transmit. You might as
well ask for benchmarks comparing displaying an image with serving a WordPress
blog.

[0] This can be mitigated with caching, but static resources are easier to
cache, and that caching logic is then handled by the app rather than by CDNs.

