I have been working through these ideas since 2007, taking it one step at a time:
2007: I "single-paged" subsections of my application, with different PHP scripts on the backend.
2008: Consolidated the backend into a single API.
2009: Switched the backend to Rails for ORM functionality, and finished upgrading the client interface to a single page application.
2009: Switched the backend to Javascript (Rhino) to enable sharing of model validations and other code (even native object extensions) with the client.
2009: Got my application working completely offline using a local SQL database and replication manager, together with ApplicationCache. At first I used LocalStorage but soon hit storage and performance limits.
2009: Switched from Rhino to NodeJS. Much faster, cleaner APIs. Huge performance gains from V8 and non-blocking IO.
Along the way I built up a framework for managing concatenation, client-side navigation, sessions, views, controllers, email etc. One significant advantage of building client-side only apps that replicate with the server is that they work offline by definition. Managing state on the client opens up incredible opportunities.
Indeed, we originally had this idea back in 2004 (pre-AJAX craze), which led to the creation of NOLOH (http://www.noloh.com), available to the public since 2009 (commercial since 2005, public beta in 2008).
NOLOH allows you to create your website/WebApp in a single tier and then NOLOH will output your application in a "single page". Furthermore, it takes care of bookmarking, back button, and rendering to search engines transparently to the developer. No additional pages necessary. It also handles all the client-server communication allowing your application to be consistently lightweight and on-demand.
It was very interesting reading this, as the beginning matches closely with certain portions of our original business plan. Ahh, nostalgia.
I'm sorry it was difficult for you to find the necessary resources. On the home page there's a link to a series of around 20 YouTube videos, a developer zone with 30 extensive articles, a full and in-sync API reference, and a growing Demo section. You can also sign up for a free hosted sandbox where you can get started right away, without needing to download or host anything yourself.
If you would be so kind, could you please tell us what sort of resources you were looking for, and where you expected to find them? Thanks.
When I clicked on the link, I almost clicked off before seeing what language this was even targeted at.
The first point is "Develop in a single, object-oriented language." That's just buzz.
Line 2: "Stop worrying about HTML, JavaScript, AJAX, and Comet." More buzz. Everybody says I can stop worrying about these things, but I still don't believe you. Is it web based?
Line 3: "Deploy seamlessly across all browsers and operating systems." Great, so it is web based and does exactly what every other web page does. Still not sold.
Line 4: "Create lightweight, on-demand websites and WebApps" More buzz, I don't care yet.
Line 5: "Boost your productivity. Develop faster, with fewer resources." Everybody says this, still not sold, ready to leave the site.
Line 6: "Enjoy many other exciting features." Great. I'm all the way through the bullet list and I still don't have any real idea what this project does.
Finally, as I scan the small print, I see that it is PHP based and is optimized for web apps.
This seems like a good idea, but the site is not great for discovering that.
One of my favorite new project sites is for vows (http://vowsjs.org/). It has one sentence declaring the goal, two sentences with output showing typical usage, and a brief paragraph explaining the purpose. All of it looks good too, and the rest of the documentation is very complete. I'd take a page out of their book if I were you.
Thanks for the feedback. We do have that huge header that says "Build for the Web Faster & Easier!", so we would hope you'd immediately conclude it's web based. For most of the points above, the keywords are actually links to more information about each point.
Line 3 in particular is meant to emphasize that you can deploy across all browsers and operating systems. That is in fact not what every web page does; most web pages normally require tinkering to work across various browser versions.
However, I do see your point. We're in the process of adding a functional code sample to an area of the home page. That said, it's very difficult to strike a balance that appeals both to those who make the software decisions and to hardcore developers. Something like vows is clearly targeted at the hardcore, whereas we're attempting to target a broader range that includes hardcore developers.
It is difficult and definitely something we're trying to improve. We appreciate your honest feedback and will definitely take your advice into consideration for our next update. Thank You.
It's way too much. Complete overload. And, unfortunately, as the parent poster mentioned, despite there being a wall o' text, there's not much actual information.
You should be able to answer this simple question: where do you want my eyes to go first? Right now, there are a few different headlines and buttons competing for my attention. When that happens, I often don't bother figuring out where I should look and give up.
I think you have the attitude that someone is already interested in what you've done. If that's true, then more information is better. You need to come at this from the perspective that most people won't care what you've done. You need to convince them you did something cool. Make sure that after five seconds of being on your site, they know what you think is most important.
It's very interesting. We used to have much more technically minded copy, but that wasn't as effective. Interestingly enough, the points that were criticized are in fact some of the actual core tenets of NOLOH. It's not buzz; those are its most attractive features.
Clearly, what's happening here is that the skeptical reader is dismissing them, and certainly not clicking to get more information when more information is available.
We'll definitely try to strike a fair balance in the updates ahead.
I'm not necessarily arguing for more technical copy. I'm certainly arguing for less copy. That means what copy you have must convey more information.
Imagine you came into a restaurant and asked, "What's today's special?" And the waiter replied, "It's the best meal you'll ever have. You will be sated. You'll experience savory tastes, with a hint of sweet and salt. This meal will solve all of your hunger problems."
"Yeah, but what is it?"
"Oh, it's steak."
Give us the meat, then sell us on it. Don't try to sell it before you tell us what it is.
Even when I was looking around, I could see that there were a lot of resources. I'd like to see a very short app - 50 lines of code - that shows off the main feature of the framework. More importantly, I need this to be one of the links on the top of the page if it's not on the home page.
I've been following a very similar approach, writing a web app for an upcoming launch. The hardest part is keeping everything in your head. When you have a cache on the client, a cache on the backend, several databases, throw in some ajax requests and command line scripts, it becomes very difficult to sanely architect a web app in one page.
That said, it is very possible, but it requires that you bend your design to fit the model.
I believe there is a firm need now (one that will only become more urgent as time goes on) for a solid framework that can deliver single-page, offline-accessible apps. Somehow, I think Node.js would be the perfect platform for it.
I think this makes sense in a couple of situations. First, for stuff that is fairly simple and where not having page loads is highly desirable, such as a music playing app like his example. Second, for stuff that is complex but you have a team of engineers and computer scientists as well as a suite of tools that make dealing with that complexity much more manageable, as in Gmail.
In most other situations though, where your application is not simple, has no requirement that really benefits from avoiding page reloads (e.g. is not playing audio), and you are not Google, the additional complexity this approach carries with it is really not worth it.
It is really not more complex, and done right it is significantly simpler. If you roll your own toolkit it is a pain, but if you utilize a framework like Dojo, it is far easier than Java (JSP, Struts, JSF, et al.) or ASP.NET or even PHP. Without the contortions of pumping everything to the server, getting a response, and trying to reconstruct the context, you get a far less fragmented memory model.
For example, let's take a shopping cart. In the page-post model you would submit the page to the server, create a cart in the session (bad, bad, bad), and then respond to the client with a new UI. The client would select an item, you would form-post that item, and the server would update the cart with the item. Back comes the UI and we do it over again with another item, ad nauseam. Eventually the user selects check out, we form-post, and hit a routine that tallies everything up and spits back another UI. We do this until all the data is collected to complete the transaction.
With the new model, the UI is the sole domain of the client, and we speak to the server in complete representational state once we have the whole picture. Not only that, but data definitions have very rigid walls that define what the data is, making the server-side code far more reusable (more on that in a minute).
So for this example, done the new way, JavaScript creates an order object, then displays the UI for products after making an asynchronous call for the product information. (Given that this is a defined call to the /products URL, we can set a cache expiration in the future, so any subsequent calls have very little cost associated with them.)
So now the server is acting as a data and business logic layer, while the client is providing the work-flow and the screens.
Back to the example: on the client side we have an order object, loaded with products, that we have not had to make round trips to the server to create and update. We can then push this object to the server via a POST to the /orders service.
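To make the flow concrete, here is a minimal sketch of that client-held order model in JavaScript. The /products and /orders endpoint names come from the example above; the function names and data shapes are made up for illustration.

```javascript
// Sketch (not a real API): a client-held order object plus a cached
// /products call, mirroring the flow described above.

function createOrder() {
  return {
    items: [], // { productId, price, qty }
    addItem(product, qty) {
      this.items.push({ productId: product.id, price: product.price, qty });
    },
    total() {
      return this.items.reduce((sum, i) => sum + i.price * i.qty, 0);
    },
    // The server only ever sees the finished order, via one POST to /orders.
    toJSON() {
      return { items: this.items, total: this.total() };
    }
  };
}

// Cache the /products call so repeated lookups cost almost nothing.
function productCache(fetchProducts) {
  let cached = null;
  return async function getProducts() {
    if (!cached) cached = await fetchProducts(); // single round trip
    return cached;
  };
}
```

In a browser, `fetchProducts` would be an asynchronous GET to /products, and checkout would POST `JSON.stringify(order)` to /orders in one shot.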
As you can see, your data and business logic are becoming well-defined and addressable as resources. If you decide to provide a mobile interface, you already have the services available to support it; your data is no longer intertwined with UI workflow.
The benefits of this model are vast but at a high level here are the big ones:
The client side becomes responsible for the workflow, so different UIs can provide workflows optimized for their format. Web, mobile, and voice do not have to share the same workflow in order to reuse existing code rather than start over.
Front-end developers work in pure HTML, CSS, and JavaScript; there is no reliance on back-end technologies for them to perform their tasks.
Back end developers work in pure platforms, a Java developer works in Java, a .NET developer works in C#.
The front end and back end are loosely coupled through service calls, either can be swapped out without ramifications to the opposing side.
Your data and business logic become addressable; a natural byproduct is that you can expose your system to third-party consumers and alternative UIs.
You are working with a non-fragmented object model; one party is responsible for state, and that is the client.
The front end is far more responsive to user input. You have far more opportunity to pre-fetch data based on user patterns and expectations. You have fine-grained control over performance.
UIs are best programmed via an event-based model. It is impossible to achieve this within the old page-post model. (See the Node.js talk on blocking vs. non-blocking; this is a relative of that argument.)
Session management is offloaded to the client, greatly reducing memory and resource requirements on the server side. No longer does the server have to approximate what is happening on the client side.
A byproduct of the client holding session is that any disruption in communication does not mean total failure of the transaction. The client holds state and can therefore submit to the server once it becomes available again, no matter the point in the workflow.
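A rough sketch of what that resilience can look like: the client queues each submission and retries once the connection is back. The `send` function here is a stand-in for whatever transport (e.g. an XHR POST) the app actually uses; nothing about this is a specific library's API.

```javascript
// Because the client holds the full state, a failed submit is just
// queued and retried later, not lost.

function createSubmitQueue(send) {
  const pending = [];
  return {
    submit(payload) {
      pending.push(payload);
      return this.flush();
    },
    // Retry everything still pending; items that fail stay queued
    // until the connection becomes available again.
    async flush() {
      while (pending.length) {
        try {
          await send(pending[0]);
          pending.shift();   // delivered; drop it
        } catch (e) {
          break;             // offline; keep state, try again later
        }
      }
      return pending.length; // how many are still waiting
    }
  };
}
```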
The decoupling of the server side allows the UI teams to develop the front end in a far more agile and rapid fashion. They can stay closer to the stakeholders and rapidly modify the application to meet user needs.
If a top-down approach is used, the entire front end can be prototyped while creating stub service files, allowing the stakeholder to touch and feel the application before back-end development begins. This significantly reduces the cost of getting to the point where end users can try the application and request rework. Further, the stubbed services provide the back-end team with a clean definition of the services required.
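A stub-service layer can be as simple as canned responses behind the same interface the real client will use later. This is only an illustration; the routes and response shapes are hypothetical stand-ins for whatever the stakeholder demo needs.

```javascript
// Stubbed services: canned responses behind the interface the real
// service client will present later.

function createStubServices(stubs) {
  return {
    async get(route) {
      if (!(route in stubs)) throw new Error('no stub for ' + route);
      return stubs[route];
    }
  };
}

// The UI team codes against services.get(...); swapping in real HTTP
// calls later requires no front-end changes.
const services = createStubServices({
  '/products': [{ id: 1, name: 'Widget', price: 9.99 }],
  '/orders': { status: 'accepted' }
});
```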
I could write a book on the pros of this development methodology, but suffice it to say that with JavaScript frameworks and proper architecture, writing new-style web apps is far less cumbersome and a lot less convoluted. I was one of the naysayers until I tried it and actually found that it was easier and produced a superior user experience. The benefits to building apps this way are numerous.
Server-side session is an outgrowth of the "client is dumb" philosophy; as such, the server has to "approximate" the client, and this leads to many counter-intuitive patterns like session itself. By far the worst evil is that you have no guaranteed destructor, because you only have an approximation. So if the client wanders off, you have no way to clean up based on that event other than a brute force timeout. Further, session by nature has no way of self-governance. For example, I cannot wire an object in session to be cleaned up by an observer once an action happens, so a natural byproduct is that you get "junk" in the session: objects whose referencing processes have all terminated, leaving zombies.
Sorry for replying to my own post, but I wanted to touch a little further on this subject. Above, I stated:
So if the client wanders off, you have no way to clean up based on that event other than a brute force timeout.
A common rebuttal to this is "well, just sprinkle in an AJAX call," which in my opinion is the worst decision one can make. Now you are not only supporting a server model but also have the client model sprinkled in, which compounds the complexity of your application significantly, in essence doubling your technology stack. This is the choice a lot of developers make when trying to dabble in RIA, and it is my held belief that this is a fatal mistake. It doubles the required skill set and convolutes the sequence of application communication.
Well, nowadays you can encrypt and sign sessions and store the signed/encrypted data on your client side (or non-encrypted cookies if you want them to be modifiable from the client side). As long as it's more difficult to fake session data than to buy working credit card numbers, you're fine (at least once you've taken care of XSS attacks, which I take to be no less of a problem in a single-page site).
Wow. Did I just point out that cookies have legitimate and valid uses? My self from 10 years ago would run after me with a shovel and yell that cookies are evil. (Incidentally, the opinion of my self from 10 years ago about Javascript would be exactly the same).
The techniques you're describing aren't benefits of your system. They're just modern web programming techniques and certainly don't need to be used in a single page programming model.
For example, the majority of what you're describing as benefits has been known to anyone using MVC for a while, and MVC is available in any of the languages you claim your system is easier than.
MVC was conceived to account for the limitations of a view-oriented development philosophy. MVC is far from modern, as it is not an evented model. The modern platforms are things like Node.js and the RIA toolkits, which eschew MVC in favor of an evented, message-oriented architecture.
For large applications I use Dojo; for quick small apps and web pages I use jQuery. I focus my time on these two because I feel they represent the best in their respective classes. So yes, I use Dojo, CSS, and HTML; that is pretty much it. For the back end I use Java, mainly because there is a wealth of middleware technology available for Java. I use JAX-RS to expose all of my services as RESTful services.
There doesn't need to be any additional complexity; if anything, it can be simpler. I hate to constantly push NOLOH, but we've been doing this since 2005, and it works. It's been used by companies and by sites large and small, with no extra complexity necessary. If anything, it's significantly simpler than the normal multi-tiered web paradigm.
Sure, if you try to do this manually it's complex, but if you use a tool like NOLOH (there are others), it becomes very simple, and even more natural than conventional web development.
It's so frustrating reading these comments as if the tools don't exist today. They do, and they have. Every year or so I'll read another post that re-hashes a small part of something that an existing framework, like NOLOH, does, and it'll be touted as the way forward.
We should be able to use the tools that are available; there's no need to reinvent the wheel every few months, or start over. If the cool kids in SV aren't using a tool, that doesn't mean it doesn't exist. It simply means the cool kids aren't using it, likely because those cool kids like to reinvent the wheel, over and over and over again.
Sorry for the rant. This sort of stuff gets very frustrating.
It doesn't appear that they simply reinvented what you did. Their approach works well without js, meanwhile NOLOH and the four "powered by" sites are mazes of mostly dead links and missing alt text (one site has nothing more than a blurb blaming the visitor for the author's neglect, which I wouldn't showcase).
It's clear you didn't actually click on the sites in the powered-by section, but rather went entirely by ShrinkTheWeb's out-of-date thumbnails. There is not a single dead link, and the image with the text is a reference to a server error; clicking on the site goes to the live site.
There are 3 pages, with 9 live sites. There are many more, but not everybody decides to post their sites to the powered by section. We'll be starting a push to get NOLOH authors to post their sites there in the near future.
Furthermore, if you would've actually taken the time to read through our site, you would see that NOLOH does in fact render content without JS, if the developer chooses, which we're constantly improving. You can read our blog for more information.
It's somewhat shocking that this is what HN has come down to. Writing a reply without actually verifying your comment. I'm starting to think I don't belong here anymore, it's starting to feel like high school all over again.
I don't mean thumbnails. http://www.noloh.com/?poweredby/ shows me four sites and no way to navigate to more. If I follow the link to the last, http://www.diffpaste.com/, I see "Paste", "Diff", and "Latest Pastes" across the top and categories ranging from "PHP (67)" to "C++ (1)" down the right, none of which do anything at all. Likewise, http://www.noloh.com/ has several highlighted phrases and "Read More" divs which look like they were supposed to be links but don't go anywhere. Many other links all lead to http://dev.noloh.com/ rather than whichever page was intended, because the server can't see the path after the # sign. If you thought this stuff all worked without js, I'm sorry but I assure you much of it does not, so the happyworm.com crew seem to be onto something good.
You're basing your assumptions on a broken premise. You're clearly not browsing normally, but rather are crippling your browser in some way, after which you decide to bash whatever you can, without clearly identifying your methods.
As I mentioned earlier it's at the developer's discretion as to whether they want to enable JS degradation or not. Sometimes when an application is sufficiently complex a developer may choose not to, or not have certain actions map to links.
You shouldn't base your assumptions on one implementation, but rather, read what the technology claims to do and then try it so you can actually see, rather than just slash and burn.
It's people like you who really make me wonder whether we should even continue down the standards-based route, or continue to support text-based browsers, as mentioned in our latest blog posts: http://dev.noloh.com/#/blog/ (or http://dev.noloh.com/?/blog/ for you). Not a single client or user has ever asked for such features, but we always get complaints from the die-hards. So we work and implement it, to what effect? Next you'll complain that some app that uses NOLOH doesn't do XYZ. There's nothing we can do about that; we can't force users to upgrade or implement a feature, we can only offer it.
Clearly it doesn't matter what we do, or how compatible we try to be, you won't care, won't listen, and won't actually try it.
Firefox 3.6.8 on Vista Home Premium x64 without js on. That's all.
If I were in the market for a web framework, I would not take it on faith that I could rely upon interoperability features the site claims but does not demonstrate. And I wouldn't write the demo myself unless I had already ruled out your competitors.
If you decide to drop it, I have no doubt you can still find a large potential market of developers either indifferent or ignorant about the ongoing disintegration of the open HTML web. It comes down to what kind of effect you're comfortable with having on the industry.
There we go, without js, which you didn't identify at first. Rather, you just started to list things that seemed like they weren't working.
The best way to determine if something lives up to its claims is to try it. You can't look at sites done in NOLOH and then expect them to have implemented or turned on every feature.
Furthermore, it's amazing that when our competitors 280 North, or SproutCore, or whatever else is "cool" post something, nobody complains that you HAVE to have JS. No degradation options, no text-based browsers. Nobody complains that their sites aren't built in their own tool, or that they have significantly fewer resources than we do.
As soon as we post something there's usually somebody that steers the conversation to a different topic and then criticizes us for one reason or another. In this case you successfully diverted the conversation from "single page" websites into a conversation on js degradation, which in the case of NOLOH is really irrelevant to most users.
Search engines get a version different from both the JS and non-JS versions; thus the non-JS version is only for humans who specifically decide to turn off their JS. Could it be better? Yes. Will it be better? Yes. Can we mandate it? No.
> There we go, without js, which you didn't identify at first.
I thought it was fairly obvious that someone on a technical forum complaining about anchors not leading anywhere and hidden divs appearing on the page is using NoScript, or otherwise has javascript disabled.
>Rather, you just started to list things that seemed like they weren't working.
Well, they weren't, were they?
> As soon as we post something there's usually somebody that steers the conversation to a different topic and then criticizes us for one reason or another.
Mm, kinda like how you complain about your competitors instead of addressing issues with your JavaScript degradation?
If you bring up your product as a solution, it seems reasonable to expect us to describe why it isn't.
You clearly decided to pick and choose what you want to respond to. That's no way to have an adult discussion, as such I won't reply to you further, otherwise you can easily advance any conversation into any direction you like.
Is it not normal to respond to only parts of a comment? That's why we developed quoting methods, right? Because, y'know, we may not have something to say about everything?
We have this fantastic thing called threaded discussion. There is not a direction for a conversation, but many, as is evidenced by the deeply-threaded messages on most technical boards (think Slashdot, not Digg).
It's a fantastic little library that gives you the ability to add what I can only describe as 'rails-style' routes to your client side .js apps. It makes your single page apps much easier to write and has a good event-handling model too.
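The core idea can be sketched in a few lines: route patterns like '#/posts/:id' compiled to regexes with named parameters, dispatched whenever the hash changes. This is illustrative only, not that library's actual API.

```javascript
// Minimal 'rails-style' hash router: patterns map to handler functions,
// with :name segments captured as params.

function createRouter() {
  const routes = [];
  return {
    get(pattern, handler) {
      const names = [];
      // '#/posts/:id' -> /^#\/posts\/([^/]+)$/ plus the param names
      const source = pattern.replace(/:([^/]+)/g, (_, name) => {
        names.push(name);
        return '([^/]+)';
      });
      routes.push({ regex: new RegExp('^' + source + '$'), names, handler });
    },
    // In a browser, call this from window.onhashchange with location.hash.
    dispatch(hash) {
      for (const route of routes) {
        const match = hash.match(route.regex);
        if (match) {
          const params = {};
          route.names.forEach((n, i) => { params[n] = match[i + 1]; });
          return route.handler(params);
        }
      }
      return null; // no route matched
    }
  };
}
```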
I agree that this is probably the future of web apps. I think this approach is becoming more and more common and the trend will continue as tools and libraries and languages improve.
You end up with a more responsive user interface, but my feeling at this point is that there is still a real cost in additional code complexity -- especially for larger apps.
I think server frameworks like GWT and Echo take the wrong tack; they favor the developer to the detriment of the designer. I think the JavaScript toolkits have it right by separating the concern of the UI away from the back end and placing it squarely in the hands of the designer and UX developer. It is a different discipline, and given the historic nature of web development, server toolkits either favored the developer (Java) or the designer (PHP) and made sacrifices to the opposing discipline. Removing the UI from the server altogether provides the best of both worlds for all parties involved. Even if you are a lone-gun freelancer.
I mentioned GWT because the linked blog post discusses some of the first baby steps toward the idea of building a real single-page client-side app, whereas GWT has been doing some real heavy lifting in that space for a long time...I'm a little disappointed that the post received so many upvotes, since it seems like using the URL hash to preserve state on-page should be common knowledge for any web developer.
I agree that GWT is not friendly to UI people who are used to writing their own markup. But I would argue that a good UX person should be concerned with how the user interacts with the application (not necessarily by writing HTML and CSS by hand, but by sketching out the design on paper or Illustrator), and a framework like GWT often makes it simple to build complex UIs that would be difficult/labor-intensive to create and maintain with a traditional web dev stack. A decent developer should be capable of taking mockups from a designer and building out the rounded corners and other pretty bits himself in CSS.
Just to be clear, this is all relative. Let's face it: in the world of creativity and development, there is no right and wrong. I have seen some beautiful applications developed in VB, a technology I despise, and I am continually impressed with what the PHP guys produce, despite the fact that I personally loathe working in PHP. So in that context GWT is not right for me, and I find that it is not right for a lot of other development houses, because they are focused on design-centric concerns as much as development-centric ones. Yes, you can find a master of both worlds, but many times you will find a wonderful designer whose logic escapes him, and we have all seen the horrors of a programmer designing interfaces. It has been my experience that it is easier to find masters of one discipline than of both. So for me and my development efforts, separation of concerns is the right thing. For others, GWT may be the right selection. I just wanted to be clear in my statement: I am not telling anyone what is right for them, I am telling them what is right for me and the developers I work with.
Everyone has their preferred toolsets and frameworks and of course they should use what they feel is most effective. Case in point: my shop uses very little GWT - my coworkers prefer different tools (and often so do I; GWT is overkill IMHO unless you're building something big).
But this is all not really relevant to my point:
I was talking more about how the linked-to blog post was kind of web-development-101 stuff that everyone already knows and somehow, sadly, still received many upvotes/comments. I only brought GWT into the mix because some guy wrote a blog post about listening for URL hash changes and is presenting it as the future of web development, while (using GWT as an example) people with PhDs have written an optimizing Java to Javascript compiler and engineered very good solutions to difficult client-side web development problems that completely trivialize something as basic as hash change history tokens.
Last I checked, you write Java, and it generates the UI and the services. It is a server framework because the same tool is responsible for developing the server side and the client side; it is an evolution of Struts or JSF, but in the end you are developing the UI from a back-end developer's perspective. I know that it generates client-side code and that it uses the familiar JS client model, but it is still designed for the comfort of the back-end developer. Quite honestly, GWT et al. further alienate the designer in favor of the developer.
Question: The trend now is towards client-side applications, but Rails deals primarily with server code. Does Rails need to evolve to keep up?
David Hansson: So Rails has actually been interested in the client side for a long time. When AJAX first got its initial push, when it got its acronym, back in, I think, 2006, Rails was one of the first server-side frameworks that said, "This is going to be huge and we're going to do something about that." So we put a JavaScript library, Prototype, straight into the Rails distribution, and we built a bunch of helpers around it to make it easier to create AJAX applications with Rails. And today it's almost inconceivable that you'd build a new, modern web application that doesn't have some aspect of AJAX in it.
Now, some people go a lot further than just having some aspects of AJAX in it. Some people have their entire application in JavaScript and just use the back end as a data store. I don't find that development experience that pleasurable. I have come to tolerate JavaScript now that there are great libraries and frameworks like Prototype around it to make it a little more pleasurable, but it's still no Ruby. Ruby is still my first love in terms of programming languages. And however much you paint up JavaScript, it's not going to beat that. Which is fine.
So, from the development side of things, I don't enjoy JavaScript programming nearly as much or in the same league as I enjoy Ruby programming. Okay, fine. On the client side of things, like is this better for the user? I think there's something special and appealing to me about the mix, the mix of how the web is discrete pages and you use hyperlinks to jump from place to place and AJAX is sort of sprinkled across to make certain common operations a little faster. I tend not to like very heavy, single-screen-based web applications. They can be fine for some things, but I think the Web has this unique category of applications that fit into that sort of middle ground between one-screen, or mainly one-screen, applications and static web pages. And that's an awesome sweet spot and I think it works incredibly well for a wide array of applications. And I wouldn't want them to be any different. There are certainly some people developing for the web who long for the days of the desktop application and finally see that now AJAX is bringing that back. Well, we've heard that story a lot of times. First it was Java that was going to do this, applets were going to bring back the desktop experience and we could get rid of this nasty HTML. Then it was Flash that would bring this forward. And now AJAX or anything else like that. There have been so many attempts to bring the desktop to the web, and none of them have succeeded in becoming the dominant approach to building web applications, and I think there's a good reason for that, because that's not what users want. That sweet spot in the middle is great and it's actually desirable on its own terms.
For now, I'm pretty happy in that middle ground as well... though it will be interesting to see where this goes next.
It's actually simpler than that nowadays: you simply set up a server with a headless web browser on it and route all old browsers and crawlers to that box. They get the same functionality, but within a page-post model. There are a few architectural constraints, but for the most part it works pretty well.
I always wondered about this: how does Google prevent this type of behavior, like serving specific content to search engines and showing other content to users? Do they check using camouflaged bots?
Actually the article describes specifically how to do this while avoiding SEO (and no-JS) problems.
As with AJAX, the typical philosophy is to design for graceful degradation -- plan the site, then build it to work without JS, then add in the "fancy" stuff.
2007: I "single-paged" subsections of my application, with different PHP scripts on the backend.
2008: Consolidated the backend into a single API.
2009: Switched the backend to Rails for ORM functionality, and finished upgrading the client interface to a single page application.
2009: Switched the backend to Javascript (Rhino) to enable sharing of model validations and other code (even native object extensions) with the client.
2009: Got my application working completely offline using a local SQL database and replication manager, together with ApplicationCache. At first I used LocalStorage but soon hit storage and performance limits.
2009: Switched from Rhino to NodeJS. Much faster, cleaner APIs. Huge performance gains from V8 and non-blocking IO.
2010: Results of the above up at: https://szpil.com
Along the way I built up a framework for managing concatenation, client-side navigation, sessions, views, controllers, email etc. One significant advantage of building client-side only apps that replicate with the server is that they work offline by definition. Managing state on the client opens up incredible opportunities.