And even the largest selling point of the entire stack - develop once, run everywhere - is far less true than many like to admit. In the past, a substantial part of development time has been spent on getting things to run consistently across browsers, and I fear we are far from the end of the road. We now have more or less consistent behavior across browsers for all the basic things, but you still regularly run into unusable web applications because you happen to have an unexpected screen aspect ratio, an unsupported video codec, or missing WebGL support.
Unfortunately I have no good suggestion for how to escape from that situation (quickly), but it seems pretty obvious to me that we have a lot of problems to solve.
But it's more true of the web than of any platform in the history of mankind.
I've noticed a resounding chorus of seasoned developers railing against the web as an application platform for over a decade now. But their complaints always compare an open, standards-based platform against some proprietary platform or GUI toolkit. Of course Microsoft or Apple is going to put together a more cohesive app SDK than the W3C. But neither of those behemoths has ever approached the ubiquity of the web, not by a long shot, because every single platform they target has to be developed by them. The web is available everywhere, and if a new corporate behemoth builds the next great platform, it must include web support, because that is now considered baseline functionality. I'm surprised more old-school developers don't recognize what an amazing accomplishment this is.
Systems on the scale of the web don't—can't—happen by design. Instead of complaining about how terrible it is, we need to look at what incremental improvements are possible. Over time this leads to real evolution, and yeah, it's slower than we'd like, but there's no other way forward without chucking out all the users with the bathwater and becoming irrelevant.
Java apps! *ducks* But seriously, folks, I'm curious what went wrong there. The JVM these days is very fast, runs everywhere, and comes installed by default on most operating systems. I was still a student during the Java heyday, but the only downsides I vaguely recall were poor UI, poor security, and poor speed. The last two have been fixed, and the first one seems very fixable. Furthermore, you don't have to write apps in Java anymore. It just seems like there's a lot of untapped potential there these days.
Inheriting the DOM, especially its graphical elements was a boon to JS apps.
Technically Java can/could do all the same stuff, but in the DOM, input, output, graphics and text all flow together. They're too rigidly separated in Java layouts.
While I'm at it, upgrading the Java runtime was always an annoying process, taking time, confusing my sister, etc.
Luddite: one who protests against modern labor-saving technology.
This is a bare assertion. The same thing can be (has been) said about every programming language, ever.
> the entire stack is more or less broken.
This is another bare assertion.
> HTML and CSS got repurposed from document markup languages to GUI markup languages.
This is a red herring. There is nothing intrinsically wrong with the evolution of markup to encompass more layout capabilities.
> We allowed this awful language to escape the browser and infiltrate our servers in form of Node.js.
They were certainly not the first. The trend started, I believe, with Mozilla's XUL architecture, evolved through things like Laszlo, and is steadily moving towards web components. This trend has been going on for a decade; even in enterprise Java land, things like Struts and JSF got on the application-markup bandwagon. Quite simply, markup has proven to be a good way to lay out interfaces.
> develop once, run everywhere - is far less true than many like to admit.
There are inconsistencies, true, but there is really no other stack that has achieved as much cross-platform compatibility as the web stack.
> In the past a substantial part of development time has been spent on getting the thing to run consistently across browsers but I fear we are far from the end of the road.
This is somewhat of a "slippery slope" fallacy: 'There were problems in the past, so I fear there will be problems in the future, therefore we shouldn't solve the problems and the whole thing is crap...'. It doesn't hold together.
> Now we have more or less consistent behavior across browsers for all the basic things
Which is incredibly powerful and frankly unprecedented.
> but you still regularly run into unusable web applications because you happen to have an unexpected screen aspect ratio, an unsupported video codec or missing WebGL support.
Very rarely. For experimental new apps, yes. But when you look at what projects like Clara.io have accomplished, they have blown away the preconception that the web is not suitable for things like high-end 3D graphics development.
> Unfortunately I have no good suggestion for how to escape from that situation (quickly) but it seems pretty obvious to me that we have a lot of problems to solve.
Let me make a suggestion then. Contribute to solving the remaining problems instead of berating the technology, or if you manage to find a better alternative, write about that instead.
This is obviously my opinion and I assume everybody is able to mentally add IMO somewhere.
If table-based layouts or hundreds of wrapper DIVs are not a sign that the underlying technology is unsuitable for the task, then I don't know what is.
Another red herring. Node.js exists. Nobody is forcing you to use it. It's being used successfully by quite a few folks.
That somebody uses it successfully does not make it a wise decision to use one of the worse languages in a place where you are, unlike in a browser, not forced to. And don't get me wrong: everybody is free to use whatever they like; I just want to point out that there are better options. You can use a screwdriver to drive a nail into a board, and I won't stop you, but I'd suggest at least considering getting a hammer.
> They were certainly not the first. The trend started, I believe, with Mozilla's XUL architecture, evolved through things like Laszlo, and is steadily moving towards web components. This trend has been going on for a decade; even in enterprise Java land things like Struts and JSF got on the application markup bandwagon. Quite simply, markup has proven to be a good way to lay out interfaces.
I completely agree that using a markup language is a good way to build GUIs; the bad idea is using HTML and CSS for that, because they were not intended for this and are not really up to the task.
> This is somewhat of a "slippery slope" fallacy. There were problems in the past, so I fear there will be problems in the future, therefore we shouldn't solve the problems and the whole thing is crap.
No, we should of course fix the problems, but I believe we should at least think about fixing them at the fundamental level instead of continually patching the holes. The obvious problem is, of course, adoption of a better replacement, and that is why I am not sure if and how this could happen.
My point here is that people quite often suggest that portability is a unique feature of the web stack and essentially comes for free, which is not true. A simple program is really easy to write in a cross-platform manner in most languages. Then there is a range of applications where the web stack is exceptionally good for building cross-platform applications with very little overhead. At the upper end it again becomes hard and costs effort to get things working across platforms.
As another commenter pointed out, we know how to build better solutions. What we don't really know is how to get adoption for it.
Also, compare this to the number of people worldwide who are just using JS normally; then we'll have some perspective on how many more people are perfectly fine with JS versus the people who feel the need to make these.
My guess would be that the number of people who like JS and use plain JS in their projects is far, far greater than the number who use compile-to-JS solutions. So in comparison to the former, the latter group could be called 'few'.
> largest selling point of the entire stack - develop once, run everywhere
Rather than being the largest selling point, this is talked about by virtually nobody who actually works with Node.
Easy handling of concurrent I/O is the largest selling point.
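To sketch what that selling point means in practice (using timers to stand in for network or disk calls, since any concrete endpoint here would be hypothetical): a single Node process can have many I/O operations in flight at once, so the total wait is roughly that of the slowest operation rather than the sum of all of them.

```javascript
// Three simulated I/O operations (timers standing in for network calls).
// Promise.all starts all of them before awaiting, so the waits overlap.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function fetchAllConcurrently() {
  const start = Date.now();
  const results = await Promise.all([
    delay(100, 'a'),
    delay(100, 'b'),
    delay(100, 'c'),
  ]);
  // Elapsed time is roughly 100ms, not 300ms, because nothing blocks.
  return { results, elapsed: Date.now() - start };
}
```

The same overlap is achievable with threads elsewhere, of course; the point is that in Node it is the default style rather than something you opt into.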
Everyone knows what the solution is: a sandboxable, architecture-independent, batteries-included bytecode format for the web. The solution is obvious; the problem is organising the detailed work necessary for such a format to emerge.
Assuming this work was completed, widespread adoption is mostly a matter of browser support and time. I suspect the Chrome team would be open to implementing it after it's been designed (there are parallels with the Native Client work), and other browser teams would be unlikely to resist if there are clear performance and workflow benefits (which there undoubtedly would be). Basically it's a "if you build it, they will come" situation.
To understand the scale of the task, consider the 'batteries-included' aspect. The core libraries for the bytecode format should be standardised (i.e. core library API set in stone for each major version of the bytecode). That's a huge amount of work. Then you'd also have the security testing needed (to ensure the bytecode was securely sandboxed). You'd also need to work on ensuring the bytecode was efficiently implemented for all processor architectures. I don't know about you, but I certainly don't feel qualified to do that.
1 URL = 1 resource
Right now, in every web browser that exists, there is still a so-called "address bar" into which you can type exactly one address. And yet, for a system that uses WebSockets, what would make more sense is a field into which you can type or paste a vector of URLs, since the page will end up binding to potentially many URLs. This is a fundamental change that takes us to a new system which has not been thought through with nearly the soundness of the original HTTP.
Even worse is the extent to which the whole online industry is still relying on HTML/XML, which are fundamentally about documents. Just to give one example of how awful this is, as soon as you use HTML or XML, you end up with a hierarchical DOM. This makes sense for documents, but not for software. With software you often want either no DOM at all, or you want multiple DOMs. The old model was:
1 URL = 1 resource = 1 DOM
Perhaps the most extreme example of the brokenness is all the many JSON APIs that now exist. If you do an API call against many of these APIs, you get back multiple JSON documents, and yet, if you look at the HTTP headers, the HTTP protocol is under the misguided impression that it just sent you 1 document. At a minimum, it would be useful to have a protocol that was at least aware of how many documents it was sending you, and had first-class support for counting, sorting, sending, and re-sending each of the documents you are supposed to receive. A protocol designed for software would at least offer as much first-class support for multiple documents as TCP offers for multiple packets. And even that would only be a small step down the road we need to go.
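To make the multiple-documents point concrete, here is a small sketch using newline-delimited JSON (NDJSON), one common convention for this situation: the HTTP layer sees a single opaque body, and it is left entirely to application code to discover that the body actually contains several documents. The example payload is made up.

```javascript
// HTTP delivered this as "one document"; the framing between the three
// JSON documents exists only by application-level convention (newlines).
function splitNdjson(body) {
  return body
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// A body as it might arrive from an NDJSON-style API (hypothetical data).
const body = '{"id":1}\n{"id":2}\n{"id":3}\n';
const docs = splitNdjson(body);
// docs now holds 3 separate documents, but nothing in the protocol
// headers ever said how many there were.
```

Nothing here is protocol-level: the count, the framing, and the recovery from a partial transfer are all the application's problem, which is exactly the gap being described.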
A new stack, designed for software instead of documents, is needed.
JS and Ruby are similar in a way; the syntax is not all too strange to me, so it's easy to run any JS code in my mind, on the fly.
That said, I find JS extremely cumbersome compared to Ruby... But if I could easily add JS functionality to my Sinatra/Rails apps it would be great! :-)
Time is the true currency of this universe, so we optimize for efficiency as best as we can without knowing the future.
The problem being discussed here is more along the line of building the Golden Gate with wood and plaster.
I wonder if Rust or C might run in a VM; I'd guess yes. C compilers can be very small, too.