Hexo – A fast, simple and powerful blog framework (zespia.tw)
65 points by juanfatas on May 25, 2013 | 22 comments



Writing "incredibly fast" without giving any clue of how fast it actually is isn't exactly the best way to promote an application, IMO.

Update: okay, it generates the simplest possible site with 250 posts in 7 seconds. That's not the worst performance (it's faster than Jekyll), but I wouldn't call it 'incredibly fast'. :\

And why the downvote, by the way? Is it wrong to point out statements that may not hold any value, especially when they don't?


I'm guessing you're getting downvotes because it's a trivial reply that doesn't add to the discussion.

One could make the argument that Hexo is fast because it's:

- fast to compile: probably as fast as or faster than the industry-standard Ruby/Jekyll

- fast to install: npm

- fast to deploy: single command for Heroku / GitHub Pages

- fast to write: Markdown

You're taking the one word "incredibly" and blowing it out of proportion. We're not talking about supercomputing here; fast is a relative construct.


> - fast to compile: probably faster or equal to industry standard Ruby/Jekyll

Jekyll is slow enough that something merely faster than Jekyll doesn't count as 'incredibly fast'. It would need to be much faster than Jekyll to count as even 'fast'.

For me, fast means it can regenerate a page in the time it takes me to switch from editor to browser. This one cannot (7 seconds for almost nothing).


This looks like a nice Node port of Jekyll. Not really how I prefer to blog, though.

My idea for a blogging platform (maybe even a service), if I ever get to it, would be a combination of Jekyll and Posterous. In other words, you'd have:

* A Jekyll dir structure on the server

* You write a post via Email

* The email is POSTed to the service via MailGun

* The service writes a new HTML file in the Jekyll dir and runs `jekyll build`

BAM, you have the speed of a static site, the WYSIWYG editing features of whatever email client you use, and the ability to post from any computer/phone/tablet (not just one with git configured).
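The core of that pipeline is small. Here's a sketch of the write-and-build step, assuming a Mailgun-style webhook has already delivered the email's subject and plain-text body as form fields; the directory layout and function names are made up for illustration, not any particular service's API:

```python
import datetime
import re
import subprocess


def email_to_post(subject, body, date=None):
    """Turn an inbound email into a Jekyll post: (filename, file contents)."""
    date = date or datetime.date.today()
    # Slugify the subject for the Jekyll _posts/YYYY-MM-DD-title.md convention.
    slug = re.sub(r"[^a-z0-9]+", "-", subject.lower()).strip("-")
    filename = f"_posts/{date:%Y-%m-%d}-{slug}.md"
    front_matter = f'---\nlayout: post\ntitle: "{subject}"\n---\n\n'
    return filename, front_matter + body


def publish(subject, body, site_dir="."):
    """Write the post into the Jekyll dir and rebuild the site."""
    filename, contents = email_to_post(subject, body)
    with open(f"{site_dir}/{filename}", "w") as f:
        f.write(contents)
    subprocess.run(["jekyll", "build"], cwd=site_dir, check=True)
```

`publish` would be called from whatever endpoint receives the webhook POST; everything else is static hosting.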


This is an example of what I call a POST-driven architecture; most designs are GET-driven.

GET-driven means that you need GET events to prompt the system to generate new pages and possibly cache them.

This has problems. The main one is that generating pages from scratch is costly compared to serving a flat file.

So we turn to caching.

But here arises a problem. GETs do not change content. So any system that relies on GETs as its source of activity will have inevitable mismatches between the cache and the content. This is where TTLs and LRUs and a whole bunch of other stuff comes into play.

What if, instead, we used a POST-driven architecture? Instead of relying on stochastic noise to determine what to do, we could rely on changing stuff when we actually receive the instruction to change stuff.

So instead of updating the cache on a certain number of GETs, or on a TTL or some other basis, we update the cache when a POST invalidates it. "Easy".
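The contrast fits in a few lines. This is an illustrative sketch (all names invented), showing writes as the only invalidation events:

```python
class PostDrivenCache:
    """Entries are invalidated by writes, not by TTLs or GET-time checks."""

    def __init__(self, render):
        self.render = render  # function: source content -> generated page
        self.content = {}     # source of truth
        self.cache = {}       # generated pages

    def post(self, key, value):
        # The write is the event: update content, drop only the stale entry.
        self.content[key] = value
        self.cache.pop(key, None)

    def get(self, key):
        # GETs never decide freshness; they only fill gaps left by writes.
        if key not in self.cache:
            self.cache[key] = self.render(self.content[key])
        return self.cache[key]
```

Because `post` is the sole invalidator, the cache can never serve a page older than the last write, and no TTL tuning is needed.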

If you avoid the naive scheme that early versions of Movable Type used (rebuilding a whole site upon each post), the whole thing goes a lot quicker. And you get to keep comments.

In 2010 I did some preliminary research as part of an honours project proposal, but went with a different project. But it's still one of my pet peeves.


You just described pre-Google Blogger and everything I loved about it back in the day.

I was a huge fan early in my career of using Blogger for both work and play. It was a really flexible generator of markup files - it didn't have to be HTML, so I'd use it to make RSS/XML that would get piped into XSLT to make contact directories, or into a Flash site for art projects, all while keeping an easy admin interface for various people to contribute their content.

I really wanted Blogger to succeed on its own. I took an Adaptive Path workshop with Ev when it was just him and Jason running things and begged him to take my money for a premium account. He said a Blogger Pro was a popular request and they were working on it, but shortly afterwards they were acquired by Google, and it wasn't long before the product went downhill, especially once it stopped being a static file generator and became a bad WordPress clone (no plugins, for example).

That's what I love about Jekyll. What's old is new again, and I'm pretty much trying to recreate that great setup from 5-10 years ago for my current projects. Content-centric sites want to be in HTML, not rendered on the fly by app-centric frameworks.


>>* The service writes a new HTML file in the Jekyll dir and runs `jekyll build`

With Jekyll, once it's up on, say, AWS, you don't have to worry about it. Add a service to the mix and you need a server, which means an attack vector to worry about, among other things.

People like Jekyll because it is so simple. Once you start adding services, it's no longer simple.


> With Jekyll once it is up on say AWS you don't have to worry about it.

You still need to update the base system.


Right. But when a security vulnerability is announced for Apache, MySQL, Postfix, etc, I don't have to worry about it.


We're going to wind up splitting fine hairs, but here goes:

Having to worry about updating because of security defects is the same as ... having to worry about updating because of security defects.

Reducing the attack surface still leaves an attack surface, is my point. You can't just "forget about it", your server can still be subverted to unpleasant ends.


Give my weekend project a shot: www.snaphost.me

No email support at the moment, but it automatically updates from markdown files in your Dropbox.


Seems interesting, I've always appreciated the simplicity of a Jekyll-type approach.

Apologies for semi-hijacking to mention it, but I take a slightly different approach on my personal blog (custom, messy code): I rsync markdown documents to my web server, where they're compiled to HTML and put into an in-memory Redis collection that the server uses to render the blog. That gives you in-memory caching for free and avoids having to regenerate a whole bunch of static files every time. I use Node on the backend and Angular on the frontend to allow for a single-page website.

Currently the solution regenerates all the HTML each time files are rsynced, via a script run remotely over SSH. Ultimately, though, I intend for it to use an inotify-style approach and import only the files that have actually changed, running both locally and remotely, so publishing an article need only require writing some markdown and saving it in a particular folder.
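A simpler mtime-comparison version of that incremental step might look like this; a sketch only, assuming a flat directory of `.md` sources compiled to same-named `.html` outputs:

```python
import os


def changed_sources(src_dir, out_dir):
    """Yield markdown files newer than their generated HTML counterparts."""
    for name in sorted(os.listdir(src_dir)):
        if not name.endswith(".md"):
            continue
        src = os.path.join(src_dir, name)
        out = os.path.join(out_dir, name[:-3] + ".html")
        # Rebuild if the output is missing or older than its source.
        if not os.path.exists(out) or os.path.getmtime(out) < os.path.getmtime(src):
            yield src
```

An inotify watcher would call this (or feed it events directly) instead of rebuilding everything on each rsync.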

Of course, all this (currently) requires a server such as a Linode that you can rsync to, plus a remote serving script watching a folder. I mention it to ask: would anybody be interested in me cleaning it up and open-sourcing it?


If you've already compiled it to HTML, wouldn't it be more efficient to serve it as static HTML files via nginx (or any web server)? Why store the HTML in Redis and serve it from an application?


With aggressive disk caching, static files are probably near enough to in-memory anyhow (and I'm sure you could configure a modern web server to cache them too), so yeah, it's probably pretty fast.

However, serving static files is more limiting: you're stuck with exactly what you've generated. With Redis you can serve the content as JSON and use it dynamically, e.g. for search, or for showing all articles with a given tag or in a given date range.
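The kind of query this enables is easy to sketch with a plain list of dicts standing in for the Redis collection (the field names here are invented for illustration):

```python
def by_tag(posts, tag):
    """Posts carrying a given tag -- the sort of query a static-file layout
    can't answer without pre-generating a page per tag."""
    return [p for p in posts if tag in p.get("tags", [])]


def in_range(posts, start, end):
    """Posts whose ISO date string falls within [start, end]."""
    return [p for p in posts if start <= p["date"] <= end]
```

The same filters could run client-side over static JSON; keeping the index in memory server-side just avoids shipping the whole index to every visitor.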

Additionally, I'm not a big fan of a whole bunch of static files sitting in a folder somewhere that need to be regenerated every time I change something. Personal preference, perhaps :-)


If you're generating static HTML, why not generate static JSON? Then you can use it for client-side search, for example.

Check out my website's repo: https://bitbucket.org/devlinzed/devlinzed.com/src. It has a JSON format for just about every URL, but is still entirely static:

http://devlinzed.com/2013/may/keeping-all-your-data-safe

http://devlinzed.com/2013/may/keeping-all-your-data-safe.jso...


Your site loads very quickly. (PageSpeed Score is 97 out of 100 -- only suggestions are for the gravatar.)

Would you care to reveal a little more about where/how you host your site?


I host it in DigitalOcean's New York datacenter (on a 512 MB VPS). They're pretty alright, but any VPS would do the trick. In fact, I only use a VPS because I can't properly set caching or gzip headers on GitHub pages.

As far as I know, the only things I do that most static sites don't are precompiling gzip files for the HTML pages and minifying pretty much everything (including HTML and images). PageSpeed, Pingdom and RedBot were very helpful for suggesting web server optimizations; I'd just observe and implement every tweak they mentioned. There's a lot you can do managing your own server that you can't with AWS or GitHub.
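Precompiling the gzip variants takes only a few lines; nginx's `gzip_static on` directive will then serve `page.html.gz` directly when the client accepts gzip, with no per-request compression. A sketch (the function name is mine, not from the site's repo):

```python
import gzip
import os


def precompress(path, level=9):
    """Write path.gz next to the original so the web server can send the
    compressed copy without recompressing on every request."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".gz", "wb") as f:
        f.write(gzip.compress(data, compresslevel=level))
    return os.path.getsize(path + ".gz")
```

Running this over the generated HTML as a post-build step keeps the serving path entirely static.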

And make that 100/100. :)


I love how crazy fast your site is in my slow browser (Chrome on iOS seems to drag when you've got 100+ tabs). I'm starting a blog network as a side project and setting it up with Jekyll for exactly this kind of performance.

Also love the honest assessment in your repo's description that your setup is "far more neckbeard and far more work" :)


Interesting. It's still useful to have the data in Redis so you can easily search it server-side, but of course that could simply be loaded from static JSON if and when Redis is restarted or new data is added.


It would be nice to see some benchmarks comparing it with WordPress and other blogging platforms.


There's no need for benchmarks, because serving static files will always be faster than a dynamic web app like WordPress.

Hexo (and Jekyll, Pelican, etc.) pre-compile Markdown to HTML, which is then served as a static resource by the web server. With WordPress, on the other hand, the server has to generate the HTML every time a user visits.[0]

For the server, HTML is just another static resource to transmit. You might as well ask to see benchmarks between displaying an image vs WordPress blog.

[0] This can be mitigated with caching, but static resources are easier to cache, and with a dynamic app the caching logic has to be handled by the app rather than by CDNs.


[deleted]


You have the power to remove them. There was a popular HN post recently advocating having no comments at all.



