Using precisely defined mathematical words in contexts where they only make sense vaguely to a layperson ruins their original usage. Make up a new word. Repurpose a shitty English word. But leave our damn maths words alone.
Old man quarterto shakes his fist at you! Get off my smooth, compact lawn!
According to M-W "isomorphic" has pre-existing definitions from medicine and chemistry that could align better with what OP is trying to say. http://www.merriam-webster.com/dictionary/isomorphic
Good day quarterto! I shall now leave to take a smooth ride in my compact car.
To clarify, if it were a true example of isomorphism, it wouldn't be contingent on a single runtime environment or language.
A better example of isomorphism in computing would be two modules of code that perform the same function and have the same abstract interface but that are written in DIFFERENT languages.
According to http://en.wiktionary.org/wiki/isomorphism
one meaning is similar form. As in - server and client code have similar form.
isomorphic: modules that have the same abstract interface and functionality but that are written in different languages and executed in different runtimes
monomorphic: modules that have the same abstract interface and functionality and that are written in the same language and have the same runtime
heteromorphic: modules that have the same abstract interface and functionality and that are written in the same language but run in different runtimes
homomorphic: modules that have the same abstract interface and functionality but that are written in different languages that run on the same runtime
See, the thing is... languages are compiled to runtimes, right? But they are transpiled to other languages...
The surface area of this discussion is large and seems to have some interesting properties; if you don't mind the math pun, it is truly a complex issue!
As for how to actually name these different things, I don't think I really mind... maybe isomorphic and monomorphic should swap in the definitions above. I've been reading up on Latin and Greek roots to try to give them good names, but I'd love some suggestions!
For example, a function written in Scala that behaves the same as a function written in JRuby, both compiled to the JVM, would be an example of a "homomorphism".
Monomorphic would just be sort of like an identity property.
Also, I have no idea what I'm going on about, so please help me fix the language!
Maybe isomorphic IS the correct term for what is going on in the article, who knows? :)
Imagine a graph theory researcher calling a program which handles graphs a web app. Isn't that confusing?
And there's an (unsolved) conjecture that all NP-complete languages are p-isomorphic: http://en.wikipedia.org/wiki/Berman%E2%80%93Hartmanis_conjec...
That link doesn't say that those definitions were "pre-existing". It says the first use was in 1862, but fails to specify the context in which it was first used.
> (Mathematics) A one-to-one correspondence between the elements of two sets such that the result of an operation on elements of one set corresponds to the result of the analogous operation on their images in the other set.
If there is an isomorphism here, it is between the separate runtimes (client and server), and not the language.
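For what it's worth, here's a tiny worked instance of that quoted definition, using exp as the correspondence between the reals under addition and the positive reals under multiplication:

```javascript
// exp is an isomorphism from (R, +) to (R_{>0}, *): it's a bijection,
// and the image of a sum is the product of the images, so an operation
// in one set corresponds to the analogous operation in the other.
const phi = Math.exp;

const a = 1.5, b = 2.25;
const lhs = phi(a + b);      // operate in the first set, then map
const rhs = phi(a) * phi(b); // map, then operate in the second set
// lhs and rhs agree (up to floating-point error)
```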
I was inspired to use "isomorphic" by a 2011 article by Nodejitsu. It seemed like a fine way to describe this approach. I would gladly use a better word if I were to find one.
A few ideas:
Programmers these days!...
OTOH, this is talking about the same thing running in two different contexts. More appropriate terms might be uniform, homogeneous, ...
Before maths, it was probably mostly geologists that used it. It's also used plenty in evolutionary biology, but this was probably after maths (not sure though).
The fact that it's used in math is incidental. It's used in biology and chemistry as well. It does leave space for ambiguity because of the closeness of math and computer science, but overall it's correct usage.
Polymorphic: many shaped.
Monomorphic: single shaped.
Isomorphic: same shaped.
longingly gazes into the distance
longingly gazes into the dead stars
The cool thing is, other template languages (like Handlebars, Jade, etc.) can compile to this intermediate representation, which then gets rendered on any updates.
If the front-end community could agree on a protocol for how to represent these steps in, say, JSON, then we could be on our way to a world where you could use any rendering engine with any template library, on the client or the server.
That is, if the community could agree on a representation :)
Also, you can't patent this now.
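To make the idea concrete, a compiled template in such a shared JSON representation might look like this (the op names and shape are invented for illustration, not an actual proposal):

```javascript
// Hypothetical JSON intermediate representation for a compiled template:
// a flat list of ops that any template language could compile to, and
// any engine (server- or client-side) could render.
const compiled = [
  { op: 'open',  tag: 'h1' },
  { op: 'text',  value: 'Hello, ' },
  { op: 'bind',  path: 'user.name' },   // dynamic: re-render on update
  { op: 'close', tag: 'h1' }
];

// A toy string renderer for the IR above. A client-side renderer could
// instead build DOM nodes and keep references to the 'bind' ops so it
// can patch just those spots when the data changes.
function render(ops, data) {
  const lookup = (path) =>
    path.split('.').reduce((obj, key) => (obj == null ? obj : obj[key]), data);
  return ops.map((node) => {
    switch (node.op) {
      case 'open':  return '<' + node.tag + '>';
      case 'close': return '</' + node.tag + '>';
      case 'text':  return node.value;
      case 'bind':  return String(lookup(node.path));
      default:      throw new Error('unknown op: ' + node.op);
    }
  }).join('');
}

render(compiled, { user: { name: 'Ada' } }); // "<h1>Hello, Ada</h1>"
```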
The phantomjs idea is a shit idea and should go away. Phantom is fine and good for headless testing (hell, we use it for quite a bit else), but it is seriously not a solution for real load.
Shit idea, lil harsh.
In my experience, including rendering code both server-side and client-side is overkill. Just put all the rendering and templating code client-side, period.
(Obviously SEO is a different case, but there are tools built for that specifically.)
In my experience, server-side rendering has led to a much better UX.
That is more in line with what the real definition of isomorphism describes. :)
Frankly, Node is a way for front-end developers to write server-side code without having to learn too much new stuff.
Some people are side-stepping the issue by saying that the whole presentation layer must be moved to the front-end, but that approach is really incompatible with the web.
If anything, the current state of the front-end is thanks to server-side developers who want to bring their world view to the browser. (Think: MVC -> Backbone / Ruby -> CoffeeScript -- apologies to the authors of those tools)
It seems like CoffeeScript did this rather successfully, and with source maps, the debugging story is getting better too. Any reasons why this might not be a more compelling future than the "JS everywhere" vision?
This is a classic "impedance mismatch" like O/R mapping. At the end of the day, there may be no good, i.e. simple, solution. It is inherently difficult and messy.
GWT is/was a terrible amount of overhead baggage, but had some brilliant event systems and DOM tricks for its day.
And the Java server code compiles to JVM bytecode. Languages often compile to some other form to run, and the fact that the compilation target is different for different parts doesn't change that the programming language is the same.
GWT is very much alive and kicking, but it's primarily used on internal business applications with large and complex code bases. And what overhead baggage are you talking about? GWT is designed to reduce overhead through cross-compiled, browser-specific builds, dead-code removal, and JS optimization.
Vaadin is consistently rated one of the best web app frameworks. Why it's not more widely known is beyond me.
I've been feeling the burn myself: the desire to fully switch to Handlebars templates in my Rails app. The expensive part would be building my API out more than I have, because I rely on associations in Rails view renders. However, I'd gain one template engine to rule both front and back.
Instagram.com does something similar; it's a Django app.
Eh, saying "matured into" implies that the current state is somehow better or more desirable than the previous state, and somewhat implies that this was the expected / desired goal state. I reject all of those implications. I posit that it would be more correct to say that "the Web has been mangled, bent, distorted, twisted and hacked into a fully-featured application platform..."
A Web browser should be good at browsing, trying to make it into an application runtime is a "separation of concerns" violation of the first magnitude. It might work, but let's not pretend there aren't other choices, or forget to continue researching alternative approaches to delivering applications over the Internet.
Look at the first diagram in the article and you'll notice that the author shows 3 environments:
1. Client (obviously the browser is JS)
2. Application Server (the server-side rendering environment = what he says can now change to JS thanks to Rendr)
3. API Server (use whatever you want here - JS, Java, Ruby, Python...)
If you check out his example code you'll see that he actually sets up example 04 (https://github.com/airbnb/rendr/blob/master/examples/04_entr...) with a subapp for a static page.
Also, if you look at the react-moment branch in the isomorphic tutorial repo (https://github.com/spikebrehm/isomorphic-tutorial/blob/react...) you'll see that he's proxying a dummy API server to a route on the application server.
Anyway, the problem currently is that client devices are slow and HTTP requests are expensive/slow. If these were not issues, I guess rendering on the client would be just fine?
If the above is correct, I don't see a good reason for smallish websites/apps/whatever-you-call-them to fall back on server rendering, as mobile devices get faster and faster every year. And we have SPDY on its way to the mainstream.
Do I want to wait 3 seconds to get HTML, or wait 100ms to get bootstrap code and see pieces rendering as data comes in? It probably doesn't matter, since it will take roughly the same 3 seconds to render it on the client side.
Ultimately, the perceived responsiveness of a webapp depends a lot on the particular implementation. If you wait for all the data required for the page to render, yes, it will be slow. If you render pieces of the page as data comes in, the user sees that something is going on, and that is good enough. Why should it matter whether data is converted to presentation on the server or on the client? It is still waiting. And if the client is not happy with it? Buy beefier hardware!
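The "render pieces of the page as data comes in" approach could look roughly like this in JavaScript (all names here are invented for illustration):

```javascript
// Sketch: paint each page fragment as soon as its own data resolves,
// instead of waiting for everything before rendering anything.
function renderFragment(id, html) {
  // In a real client this would be something like:
  //   document.getElementById(id).innerHTML = html
  return '[' + id + '] ' + html;
}

function renderAsDataArrives(fetchers) {
  const painted = []; // collects fragments in arrival order
  const jobs = Object.keys(fetchers).map(function (id) {
    return fetchers[id]().then(function (data) {
      painted.push(renderFragment(id, data));
    });
  });
  return Promise.all(jobs).then(function () {
    return painted; // order reflects arrival, not declaration
  });
}
```

The user sees the fast fragments (header, navigation) immediately while the slow ones (feeds, search results) fill in behind them.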
No server should take more than 100ms-500ms on a sufficiently cached page to process and to begin transferring, in non-edge cases. The only thing that could take that long is a shitty datacenter/vps, and as such, transfer speed/latency limits will encumber all types of requests.
In the end, you still have to transfer data from databases, so ignoring that, client-side template rendering seems like a non-issue to me. Surely it's never the latency issue unless you're serving at ~100+ requests per second, which translates to many millions of requests a day, and I don't believe 99% of websites are doing this.
I feel this tension is at the root of the CommonJS vs AMD discussions that have been taking place recently as well as issues related to npm, bower, and browserify.
Monomorphic code, while being very easily shared among different runtimes, still needs to be aware of the mechanisms, strengths, and weaknesses of those runtimes!
Projects like browserify-cdn, and sites built on top of it like requirebin.org, do a lot to bridge the gap, but they raise some interesting questions about wrappers like UMD.
I feel there is room for a better protocol for sharing code between the different environments.
I'm currently doing some research on the subject and plan on writing a spec and implementing some example interfaces.
BTW, projects like Bower head in the opposite direction and seem painfully unaware of their actual context...
I mean, installing a package manager to then install a package manager should raise some eyebrows, right?
Edit - A couple of unofficial but useful links:
Also, you'll have to re-do the entire rendering on the client, then swap the server-generated DOM with the client-generated DOM. It's probably possible, but is it worth the effort?
I disagree about the complexity. I've built Angular apps with a RESTful backend that couldn't be simpler. I also haven't noticed any performance issues; it's faster than traditional sites for me. SEO is a problem, though.
However, Yahoo already released a solution for shared Express/client state: https://github.com/yahoo/express-state
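The underlying pattern that libraries in this space implement is to serialize server-side state into a script tag so the client boots with the same data the server rendered from. A hand-rolled sketch of that pattern (this is not express-state's actual API):

```javascript
// Serialize server-side state into a <script> tag for the client.
function exposeState(namespace, state) {
  // Escape "</" so user data can't close the script tag early.
  const json = JSON.stringify(state).replace(/<\//g, '<\\/');
  return '<script>window.' + namespace + ' = ' + json + ';</script>';
}

exposeState('APP_STATE', { user: { id: 42, name: 'Ada' } });
// -> '<script>window.APP_STATE = {"user":{"id":42,"name":"Ada"}};</script>'
```

The server injects this into the rendered page; the client code then reads `window.APP_STATE` at boot instead of making a second round trip for the same data.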
That's the old-school way. Sometimes old ways are presented as new, which confuses me.
> then capture the outerHTML of an element as an HTML string to serve
That makes sense. Do the fancy building on the server and then convert it for transport.
That sounds interesting. You answered my question enough that I can do more research. Thanks.
To reiterate a point in the article, a number of client-heavy sites built on Angular and Ember have been rendering their content server-side to present it to crawlers like the Googlebot for SEO purposes.
It's returned in the response as HTML if the USER_AGENT matches Googlebot, just like it would be in a normal non-SPA web app.
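That user-agent check, written as Express-style middleware, could look something like this (all names are invented; real setups have often used the `_escaped_fragment_` convention or a prerendering service instead of raw UA sniffing):

```javascript
// Serve fully server-rendered HTML to known crawlers; let everyone
// else fall through to the normal SPA shell.
const CRAWLERS = /googlebot|bingbot|yandex/i;

function crawlerFallback(renderStaticHtml) {
  return function (req, res, next) {
    const ua = req.headers['user-agent'] || '';
    if (CRAWLERS.test(ua)) {
      res.send(renderStaticHtml(req.url)); // crawler: static HTML
    } else {
      next();                              // human: SPA as usual
    }
  };
}
```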
A mostly-Django user attempting to ask, from outside the Node/server-side JS community, for an explanation of why we're invoking "isomorphic" here:
As far as what's going on, are we just saying that the JS for rendering the page is equally capable of talking to the API and doing the DOM work while running on the server as it is when downloaded, interpreted, JIT'd, etc., before fetching the actual request content? Does this mostly eliminate an extra RTT to the API if the JS/web application server experiences essentially zero RTT due to network/physical proximity to the API server? Does this incidentally make SEO possible while having fat clients and server-side initialization all in one setting and with mostly one code base?
Assuming I'm on track, working backwards towards understanding the implementation as it relates to "isomorphic": if a program's output state is a valid input state from which to re-initialize the program's state (we're talking fat, stateful MVC clients, after all), then all that's needed to re-create the entire execution state of the current program is to copy itself into the output in such a way that it will be run upon receipt (we link JS files in the page). I'm going to take a wild guess that the output state of the client doesn't have the necessary template tags etc. used when a cold page is rendered, server- or client-side, so the server has to serialize some of the state and pump it into the copy of the program to make it as if the client we're sending is the client that rendered the page, keeping the client's internal representation consistent with the state of the page.
If this is true, we're taking a program that cannot be initialized from its own output and modifying it into a program that achieves the same effect. JS is modifying JS, and is therefore said to be "isomorphic"? It still sounds like a glorified way to describe a serialized execution state, one that happens to involve a program in a language outputting a valid program in the same language, but the purpose is to achieve the effect, as far as the developer is concerned, that the output is a valid input, albeit with an intermediate technique to make the abstraction hold water.
If I'm on track, then I have to say it's rather as if we have a fat client, the output of one render, serialized client state (because the DOM is not sufficient to re-create the program state, or else our client JS has to initialize from one format that's programmer-friendly and one that's program-friendly), and necessarily a third leg responsible for injecting the client JS and serialized state as JS into the output.
Standing to see the mauve dusk of my life against the impending rain of spears and arrows, I must ask: can we not simply call the JS necessary to achieve the equivalent state of running everything client-side a "continuation", and call the entire technique "client continuation programming", with it implicitly understood that creating the continuation involves a third piece of program? I'm prepared to die. I just want to know.