I know there seem to be two rather large groups on HN, one for proliferation of JS and one against, but at the very least if someone from the latter visits your site, please show something better than an entirely blank page.
(I'm speaking of the demo, not the actual site describing it --- although I thought it would be hosted with itself.)
As for the idea of turning static content sites into client-side JS-rendered apps, that gets a strong disapproval from me. Why? It's needless complexity (instead of generating the HTML once and storing it on the server, every single visitor has to regenerate it on their machine), bad for accessibility, and very much against the principle of the Web that information should be easily linkable and retrievable. I can understand using JS to do "app-ish" things that wouldn't be possible with static pages, but this is reinventing the wheel and making it square.
> As for the idea of turning static content sites into client-side JS-rendered apps, that gets a strong disapproval from me. Why? It's needless complexity (instead of generating the HTML once and storing it on the server, every single visitor has to regenerate it on their machine), bad for accessibility, and very much against the principle of the Web that information should be easily linkable and retrievable.
Yep, and the worst offender seems to be Quora, whose main page receives all of its data via JavaScript (which as a side effect makes scrolling annoyingly slow). But it's probably on purpose, since they are preventing copy/pasting so that it's impossible to quote text from their website without linking to them in some way.
Thanks for all the feedback! Yeah, this is definitely next on the list of bugs to fix. I definitely respect your opinion. I was really looking for a way to approach this problem from a different perspective using just client-side technologies. It would have been nearly impossible to accomplish this without the aid of JavaScript, unfortunately (besides just having static HTML of course, but that wouldn't be any fun, would it!)
That's only true if you use 'view source'. If you open up dev tools, all of the elements are rendered in full and live updated as they change.
Client-side rendered apps are quickly becoming everywhere-rendered apps. They can now work isomorphically, on the desktop, and on mobile.
> every single visitor has to regenerate it on their machine
This is a false premise. With server-side view rendering, the view is tightly coupled to the data. The majority of the payload is made up of dynamically generated (and non-ideal for caching) HTML structure, when all the user really needs is the data.
Rendering apps on the client side allows fetching partials and content data piecemeal, which is ideal for caching. Views can be cached at the HTTP layer, directly in the app, or both. This will be especially useful once HTTP/2 (i.e. connection persistence) is in common use.
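As a rough sketch of what I mean (not any particular framework; the endpoint is made up), the app can memoize the piecemeal requests itself while the HTTP layer caches them too:

    // Naive in-app cache for piecemeal content requests (illustrative only).
    var cache = {};

    function getJSON(url) {
      if (cache[url]) {
        return Promise.resolve(cache[url]);   // served from the in-app cache
      }
      return fetch(url)                       // the HTTP layer can cache this too
        .then(function (response) { return response.json(); })
        .then(function (data) {
          cache[url] = data;
          return data;
        });
    }

    // Only the data travels over the wire, never any markup.
    getJSON('/api/posts/42').then(function (post) { console.log(post.title); });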
-----
We're moving past the days of the static web, especially now that it's trivial to pull data from many different sources on the client side.
What really never made sense was rendering the view on both the server side and the client side (often using two view template engines). Fortunately, that practice is becoming less common.
Yes, you can use HTML/CSS rendered views on mobile but they pale in comparison to native UIs.
The latest app frameworks decouple client-side application logic from the view-rendering layer, meaning you can use the same application logic in your web app as on mobile, EXCEPT that the view on mobile will be rendered natively. Not through a hacky, slow, inconsistent embedded web view. Native native.
If the approach is as successful as the framework devs expect, mobile app development (and the app ecosystem as a whole) will go the way of the dinosaur.
Pulling data from many sources on the server requires proxying requests to third parties and providing inline caching of the responses for performance reasons, despite the fact that the source you're proxying to likely already implements its own caching strategies.
Have fun addressing bugs caused by cache invalidations when you're dealing with multiple caching layers.
Not to mention the increased memory/computation footprint incurred from hosting the equivalent of a database view of an entire third party service on your backend.
Or, you could just request the data directly on the client side.
That's what I mean when I say 'reduced complexity.' Unless you consider setting up CORS whitelisting and some AJAX requests to be really difficult. Then, carry on.
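To make that concrete (a minimal sketch; the origin, endpoint, and handling below are made up): the third party whitelists your origin, and the client requests the data directly.

    // The third-party API whitelists your origin in its response headers, e.g.
    //   Access-Control-Allow-Origin: https://yourblog.example
    // Then the client can request the data directly, with no proxy in between.
    fetch('https://api.thirdparty.example/v1/posts?limit=10')
      .then(function (response) { return response.json(); })
      .then(function (posts) { console.log(posts.length + ' posts loaded'); })
      .catch(function (err) { console.error('Request failed:', err); });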
HTML (or more specifically HTML+CSS, and how they interact with and render in the browser) is anything but trivial. JavaScript (the language, not the JIT) is relatively simple in contrast.
Client-side rendering + JSON APIs on the server are the holy grail of openness (especially with an MIT-licensed client).
Building a webpage node by node from within Javascript does not lessen the complexity of HTML and CSS, it simply adds to it. You're doing the exact same things - creating DOM nodes with attributes - but with yet another level (or 5) of indirection.
Excess layers of indirection come when you try to do view generation on both the server and the client.
Many server frameworks currently follow this approach: generate the view scaffold using templates on the server side, then request partials asynchronously on the client and generate additional views client-side using a completely different framework.
Shifting all of the view generation to the client frees up the back-end devs to focus all of their time/effort on the important part: building and scaling the data infrastructure.
Not to mention, shifting the view layer client-side reduces resource requirements on the back-end.
The previous generation of front-end SPA frameworks frankly sucked in terms of performance, leading to a bad user experience. There were two primary reasons why: all data persistence/binding happened directly on the DOM, and all execution was handled on the main thread.
The latest generation greatly improves the experience by using new techniques such as data persistence/diffing via a virtual DOM and handling all non-rendering logic on a background worker thread.
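A toy illustration of the virtual DOM idea (not any framework's actual implementation): diff plain JS objects, and touch the real DOM only where something changed.

    // Toy virtual node: { tag, text }. Real implementations also handle
    // attributes, children, keys, etc.
    function patch(el, oldNode, newNode) {
      if (oldNode.text !== newNode.text) {
        el.textContent = newNode.text;   // the only (slow) DOM write
      }
    }

    var prev = { tag: 'h1', text: 'Hello' };
    var next = { tag: 'h1', text: 'Hello, world' };
    patch(document.querySelector('h1'), prev, next);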
> The majority of the payload will be made up of dynamically generated (and non-ideal for caching) HTML
For a blog or CMS this is probably not the case. The majority of the page is going to be the same for all users. Maybe you have something at the top of the page showing the user is logged in, and admin links to edit it, but that's probably about as dynamic as you will get.
> Rendering apps on the client-side allows fetching partials and content data piecemeal, which is ideal for caching
Yes... but why? 99% of websites aren't going to have enough traffic that this would cause load issues on the server. On the client, the browser probably does a better job of caching pages than you could.
For something more complicated than a blog or CMS, you need to think about security. It's not a trivial task to cache something and securely serve it only to people who have an active session and are authorised to view it.
But this is a static site generator. It says so in the title. For dynamic content, client- and server-side rendering makes sense. But there are still many uses for static sites: blogs, manuals, tutorials, pages like Wikipedia. They could and should be rendered once.
Caching cannot help you with initial page loads, which is probably important when you have a content-focused site.
Rendering the content via JavaScript is slower than just having it in your HTML. It's true that JS execution has become insanely fast; however, initial JS execution is not as fast, because it requires the JIT to kick in. Even if JS execution becomes infinitely fast, you still have to work with the DOM, which is horribly slow, and there are no known ways to make it fast. Google, Yahoo and others have proven with hard numbers that users care about speed, even when we are talking about milliseconds.
I highly recommend doing some research on how browsers and caching actually work before falling into the pit of false belief.
To be fair to him, he did specifically mention isomorphic rendering, in which case the JS runs on both sides and the idea is that the initial load delivers server-generated markup, giving the user content immediately, then doing all the data binding later.
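A minimal sketch of that flow, assuming Express and a view function simple enough to share between server and client (all names here are placeholders, not from any of the frameworks discussed):

    // Rough isomorphic-rendering sketch.
    const express = require('express');
    const app = express();

    // A view function that can run on the server *and* in the browser.
    function renderPost(post) {
      return '<article><h1>' + post.title + '</h1><p>' + post.body + '</p></article>';
    }

    app.get('/posts/:id', function (req, res) {
      const post = { title: 'Hello', body: 'Server-rendered content.' }; // stand-in data
      // The user gets real markup immediately; the client bundle then takes
      // over, attaches event handlers, and handles further data binding.
      res.send('<div id="app">' + renderPost(post) + '</div>' +
               '<script src="/bundle.js"></script>');
    });

    app.listen(3000);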
Also, tricks like that are what users generally recognise as speed - similarly, Twitter's optimistic XHR submission appears to work instantly, then goes off and actually does its thing in the background.
Fwiw I think a fully client-side JS CMS app is a moderately rubbish idea for a variety of reasons, but JS rendering of dynamic content absolutely has merit.
It's a little more nuanced than slamming somebody's 'pit of false belief' because they like client view rendering. :)
> That's only true if you use 'view source'. If you open up dev tools, all of the elements are rendered in full and live updated as they change.
If one attempts to view the demo with client-side JavaScript turned off, one gets a completely blank page. The very point of a static site generator is the word "static", which is generally taken to mean that the actual rendering of the content requires only HTML rendering (no JavaScript (or PHP or ??) code execution, server side or client side).
Even with js turned on, I get a mostly blank page. It chokes on String.prototype.endsWith, which is an ES6 feature not supported by my very-slightly-out-of-date version of Safari.
If you want to render your blog in client-side JS AND use fresh-from-the-oven JS features, the least you can do is include a polyfill.
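Something along these lines (roughly the well-known MDN polyfill) is all it takes:

    // Polyfill String.prototype.endsWith so older browsers don't choke on it.
    if (!String.prototype.endsWith) {
      String.prototype.endsWith = function (searchString, position) {
        var subjectString = this.toString();
        if (position === undefined || position > subjectString.length) {
          position = subjectString.length;
        }
        position -= searchString.length;
        var lastIndex = subjectString.indexOf(searchString, position);
        return lastIndex !== -1 && lastIndex === position;
      };
    }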
Render the client on the server, then push the rendered markup and the client state to the browser where the client takes over again.
This paradigm is fundamentally different from the Angular1-style "render everything in the browser" and "maybe pre-render on the server, then discard that on page load" approach. If anything, it's closer to the traditional "render everything on the server, then apply enhancements in the browser" approach of the pre-SPA days.
Angular2 is following the exact same approach, although the isomorphic bits aren't production-ready yet.
Basically Angular2 renders a completely interactive view that captures any user events that occur while the app is bootstrapping in the background. Once the app is loaded, the events are replayed to make the user experience consistent.
React and Angular2 are fundamentally just different flavors of the same underlying infrastructure. React is more decentralized and composition-based. Angular2 is more centralized, monolithic, OOP, architecture-based.
They have even alluded during presentations that the two core dev teams have been in direct contact to work out some technical details.
From what I've heard, Ember.js uses these same approaches. Kinda makes me question how much of this started with Ember and was later adopted by React and Angular.
Unfortunately Facebook, Twitter, Slack, and others cannot ... Yet?
Maybe that's not directly SEO, but it is indirectly. Good Open Graph data makes links, at least on Facebook, arguably more compelling, which makes them more likely to be clicked and/or shared, which may lead to better SEO.
I don't think any "physical web" scanners do right now, unfortunately, for example https://github.com/dermike/physical-web-scan (or search for mobile apps called "physical web"). This completely banjaxed my attempt to make an all-front-end internet beacon recently.
I don't want to discourage you, but I also don't want to encourage the proliferation of websites (web apps---as an alternative to actual desktop software---excluded) that require JavaScript to function at all.
Sites using JS only cannot be parsed by standard tools---I can't cURL the page, use wget, use a text-mode browser, etc. This fundamentally breaks interoperability, and limits users' freedom to use the tool/browsers they want to use the web. Users who wish to disable JavaScript to browse the web---be it for security, privacy, philosophy[0], or all of these things---are forced to either enable JavaScript or not read your website (I fall into the latter).
I write more JavaScript than any other language. I understand the community, and the rationale. But I know enough to know that I should disable JavaScript when browsing the web (except for select cases, and the software must be Free), and I still use command-line tools aggressively, even for the web. Please do your best to respect those who use the web as it was intended.
I'm pretty sure there's no "intended way" to use the web. The developer of the website dictates this. If he wishes to use JS-only, that is his choice. He may lose you as a user/reader, but he certainly is using the web "as intended".
The choice to block JS seems a bit weird to me. JS, to my eyes, is very different to binary code running in userland. It is a sandboxed language with very clear restrictions to what it may do, it is easily read and verified (view source, pass it through a prettifier, and all that's left to fix is the variable names), and even hackable (we get a JS console in browsers).
What is it that makes JS so dangerous it must be blocked 100% ?
Many browser exploits that break out of the sandbox take advantage of javascript. Disabling it prevents entire classes of exploits. Also, a lot of ad networks use javascript as a tracking tool.
Going from a web of documents to black boxes that put pixels on the screen is a regression, plain and simple. Use it when necessary for the things that cannot be otherwise done, but don't ruin a good thing that works fine just because you can. Thanks.
There is an intended way, though. HTML documents allow for parseable data. We are lucky Google now runs JavaScript, but JavaScript pages are very similar to Flash pages.
Thank you, I really appreciate your feedback! I definitely understand and respect your view on accessibility for those not using JavaScript. I was just trying to approach the problem from a different perspective... seeing if there was a way to accomplish this using just client-side technologies. I'm definitely going to consider looking into fallbacks if they are possible.
It really depends on the use case. My site[0] works in a very similar fashion to CMS.js, but if you have JavaScript disabled it forwards you to the raw markdown file using a simple noscript tag. Sure, it breaks cURL, but it's just a personal website, so who really cares?
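Something like this in the page head does the trick (sketch only; the markdown path is made up):

    <!-- If JS is disabled, send the reader straight to the raw markdown source. -->
    <noscript>
      <meta http-equiv="refresh" content="0; url=/posts/hello-world.md">
    </noscript>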
I get most of my useful information and perspectives from personal websites---be it various well-respected experts in the fields, or people that nobody really knows about, but have great information or perspectives.
In software development especially, our community of hackers is our most valuable asset.
Which is why I use the noscript fallback. I'm just saying it's not worth my time to worry overmuch about supporting everybody's use case on a personal blog full of half-completed thoughts. Nobody is trying to automate anything using my site, and if they are, then I feel bad for them. If JavaScript is on, my site is fine; if not, then you can see the markdown files. I'm not worried about anybody else.
Yeah that was kind of my line of thinking. This is definitely not ready for enterprise level blogs yet...more along the lines of personal sites/small blogs you want to get up and running quickly.
Well, how is this intended to be 'content managed'? Through PRs on GitHub?
I think CMS is a poor choice for a name when there is no interface to 'build' the site's pages. I'd call it something like 'website generator backed by GitHub'.
I guess this technology is promising in the sense that it delivers websites to be read by humans only, not bots. I fail to recognise the practical use of it unless you're building the most unknown blog in the world, since obviously a site on CMS.js is not going to be indexed by Google like your ordinary WordPress blog. But there is something to it. Something important. A blog for humans, readable only by humans; kinda timely in the modern days of senseless content aggregators.
I think this project is cool, but it has all of the problems of dynamic sites with none of the benefits. It's as if you took the worst parts of Jekyll (having to redeploy to update content) and the worst parts of a dynamic single-page app (rendering delay) and combined them. There is no benefit to this sort of infrastructure.
Good work, but why is client-side JS needed for a simple blog? I thought it was going to be like Jekyll but Node instead of Ruby (so no JS client-side).
Both the name and description are completely misleading. They suggest that this app is a web UI for managing content and doing what Jekyll does from CLI.
But this really is a SPA that grabs markdown files from your Jekyll site hosted on GitHub or an Apache web server, converts them to HTML, and renders them client-side with JS.
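Roughly speaking, the core of that flow looks like this (the file path, the 'marked' markdown parser, and the #content element are my assumptions, not CMS.js internals):

    // Fetch a raw markdown file and render it in the browser (sketch only;
    // assumes a markdown parser such as 'marked' is already loaded).
    fetch('/posts/2016-01-01-hello-world.md')
      .then(function (response) { return response.text(); })
      .then(function (markdown) {
        document.getElementById('content').innerHTML = marked(markdown);
      });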
For easy theming, I suggest you take a look at the Classless Project, which will be super easy to integrate in your case and will bring many ready-made themes with it -- and many more to come.
The ursprung[2] micro CMS is integrating Classless with success to this day.
There's no code, it's only a collection of themes based on a "standard" template.
The CoffeeScript code you see is for the bookmarklet used on the playground. It is old code, written in a rush, mostly as a proof of concept. I'm slowly rethinking this playground and theme development stuff, so it is going to be replaced.
Cool. I'm actually building something very similar @ http://evanplaice.com. I use Markdown/JSON for all of the content. Markdown files can easily be embedded in a page using the <ng2-markdown> directive I wrote. The source is @ http://github.com/evanplaice/evanplaice.com.
I'm planning to eventually extract the good bits and adapt them to work with Jekyll files. Front-matter support is the last major roadblock.
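For what it's worth, splitting the front matter off by itself isn't too bad; a rough sketch (my own regex approach, not ng2-markdown code, and real YAML still needs a proper parser):

    // Split Jekyll-style front matter from the markdown body.
    function splitFrontMatter(raw) {
      var match = /^---\n([\s\S]*?)\n---\n?([\s\S]*)$/.exec(raw);
      if (!match) {
        return { frontMatter: '', body: raw };
      }
      return { frontMatter: match[1], body: match[2] };
    }

    var doc = '---\ntitle: Hello\nlayout: post\n---\n# Hello world\n';
    console.log(splitFrontMatter(doc).frontMatter); // "title: Hello\nlayout: post"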
Google shouldn't have any issue indexing AJAX-loaded content. On my site it's the Angular2 router that's really screwing SEO. You can test it out using the 'Fetch as Google' tool.
5 seconds of browser loading. Then a blank screen for about 5 seconds. Then a spinner for about 3 seconds.
No matter how much optimisation you're doing, you're still destroying the progressive loading of HTML pages and the ability of my browser to tell me how the page is loading.
It may be a fun project, but it's not based on sound technical decisions, in my humble opinion.
I see you are getting some negative feedback, haha. I think the main problem at this point isn't due to client-side rendering, but the large prerequisite files. Gzipping your JavaScript would trim a MB or more. Putting the static assets on a CDN (or enabling gzip yourself) would cut the render time by two-thirds.
Semantic UI adds 700KB of CSS, which seems like a lot more than your site should need. Even if this were a statically rendered site, the browser would delay rendering anything until the CSS was downloaded.
Haha. I figured 'working on' would signal that this isn't 'production ready' yet.
Guess I should choke down my excitement and stay in stealth mode until I'm ready next time.
The entire app is serverless and hosted in an S3 bucket.
Gzipping the content should be an easy addition to the build process.
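For example, a minimal Node sketch of that step (the file names are placeholders); S3 can then serve the .gz files with a Content-Encoding: gzip header:

    // Gzip the built assets as part of the build script.
    const fs = require('fs');
    const zlib = require('zlib');

    ['dist/bundle.js', 'dist/styles.css'].forEach(function (file) {
      fs.createReadStream(file)
        .pipe(zlib.createGzip({ level: 9 }))    // maximum compression
        .pipe(fs.createWriteStream(file + '.gz'));
    });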
I'm using approximately 2% of what Semantic-UI is capable of. Fortunately, Semantic-UI provides a set of tools to trim it down to a production-ready bundle.
In addition, Google will likely trim more of the fat from Angular2 in upcoming releases.
The big one I'm concerned about is Rxjs. The size of the lib is massive considering it only provides support for observables and observable extensions.
Trust me, I'm painfully aware of how slow it is right now.
Thanks for the feedback though. I do appreciate it.
Feel free to try it again. The CSS and JS are now concatenated into single files and gzipped. The total size of both is now 366KB.
The time to bootstrap the app is about 1.5 seconds. There's room for improvement but I don't expect any massive gains since I can't isomorphically pre-render without a server back-end.
TiddlyWiki[1], which is a wiki more than a "CMS", has been doing something very close to this for quite some time. I used it for years, but eventually switched back to plain text files for notes.
I use Jekyll for blogging and generating my portfolio site too, but right now I can only update those from my own laptop with my Jekyll install, which is not always great. CMS.js would be something else...
My dream would be a simple PHP based CMS for my Jekyll install though.
This looks interesting. I'm not in love with the name. That said, I love all static site generators. Unfortunately, hosting a static site isn't much fun, so I created http://www.statichosting.co to make it easier. It's in beta and we are getting close to launch, but have a few bugs to work out. Would love to have anyone give feedback on it as it develops.
Yeah, that was kind of my line of thinking. I originally wanted to call it a static site generator but then got a lot of backlash because it isn't really "static", because of the JavaScript.
Do a view-source and take a guess; of course it will affect the SEO.
But hey, no problem, it's JS, so to correct that SEO problem they will probably do a whole render of the page server-side with something like PhantomJS and serve those "special" pages to bots.
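Something along these lines, for the curious (a rough sketch; the URL is a placeholder): render the page headlessly, capture the final HTML, and serve that to crawlers instead of the empty shell.

    // PhantomJS prerendering sketch.
    var page = require('webpage').create();
    page.open('http://example.com/posts/hello-world', function (status) {
      // Give the client-side JS a moment to render before capturing the HTML.
      setTimeout(function () {
        console.log(page.content); // fully rendered HTML after JS has run
        phantom.exit();
      }, 1000);
    });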
View source: that's what archive.org_bot is referencing
e.g. no content whatsoever
If/when this site dies, when you browse the link above, you will see a blank page.
I would even say that on web.archive.org the effect is even more vicious: if you don't know any better, you see the page archived and think it is indeed archived, when in fact it's not.
But don't despair, there are other ways to have the exact same result for your SEO:
Tell me how it is different from rendering the whole page as a Flash SWF file?
My guess is it will be OK for the text, but the links will be out of whack, so bots won't really "crawl" the site. Then again, maybe advanced bots will run the JS?
Also, I know words in h-tags are usually "counted" as more important, so that logic won't work with raw .md.
There is no text; the page is empty. The linking is not the first priority at this point.
A robot, to be able to read the text, would already need to run the JS in the first place.
But the concept is interesting: let's do a CONTENT management system but be completely invisible to any search bots, so our CONTENT never ever gets referenced and made searchable on the Internet.
Yeah, Google does, but what about the thousands of other robots?
Sure, Google is the biggest one and you have to be referenced on it, but there are also other indexes where you want to be referenced, and those maybe don't use robots as advanced as Google's.
So, as I said, for your normal robot crawler visiting the page, the page is empty: no content.
That Google has already solved the problem does not mean that everyone else has.
Why do you think prerender.io exists? To solve that very same problem.
From an SEO point of view, which was the question I was answering, it is ridiculous to serve a page without content to an indexing robot crawler.
Ohhh, I see what's going on here. Since you cloned the demo, you need to use postsFolder: 'demo/posts' and pagesFolder: 'demo/pages', since they are located inside the demo folder. The demo is set up slightly differently than a standard setup since it's in a subfolder.
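In other words, something like this in config.js (only the two folder keys come from above; the surrounding object is just for illustration):

    // config.js -- the object structure here is a guess; only
    // postsFolder/pagesFolder come from the comment above.
    var config = {
      postsFolder: 'demo/posts',   // posts live inside the demo subfolder
      pagesFolder: 'demo/pages'    // pages live inside the demo subfolder
    };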
Also, make sure you are editing config.js on the gh-pages branch, as this is the branch GitHub uses for hosting. If you want to use the code you have on master, just merge master into gh-pages. If you have any more issues, feel free to email me.