"Programmers may be surprised to note that the vast majority of users do not ever view the sources of a web page."
I can't imagine anyone being surprised by that. Programmers, on the other hand, look at source all the time. Being able to view the source when you wonder "how did they do that?" is a big factor behind the rapid evolution of the web.
"The source of the desktop applications you use isn’t typically human-readable unless the source is open and you seek it out."
Which, I would argue, is one reason why most innovation is now occurring on the web rather than the desktop.
"Beginners need consistency, and the majority of developers are always going to be beginners, so we have no choice but to help them."
By concealing the way things are done from them and eliminating the terabytes of publicly viewable example code?
Sorry, that doesn't seem very helpful to me.
I agree completely. My first thought after reading this article was that while 99% of users may not view the source of a page, the 1% (web devs) that actually create additional pages find View Source, Firebug, etc. extremely useful. When I'm helping a new web developer out, one of my first steps is to have them use and understand Firebug. Preaching to the choir here, but it cannot be overstated how important access to and readability of source code was to me when I first began.
Compiling and delivering in an unreadable format is possibly the worst solution to this problem. Who ends up benefiting from this? The end user who saves a minuscule amount of time loading the page? Is hampering innovation and progress really worth that amount of time?
Below, the author makes the point that viewing the actual source is unreadable anyway. Very true, but tools like Firebug and the Chrome Developer Tools help combat that problem to an extent by nicely formatting HTML, CSS, JS, etc. And while many large sites (Gmail, etc.) have compact and utterly horrendous source, I would argue that novice developers are not looking there first for help; instead they look at less complex sites that are not minified (because they don't need to increase page-loading speed).
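To make the contrast concrete, here's a hypothetical snippet (not from any real site): the first form is what gets served, the second is what a pretty-printer recovers.

    // As served, minified:
    function a(b){return b.filter(function(c){return c.active}).map(function(c){return c.name})}

    // After DevTools-style pretty-printing (the names still need guessing):
    function a(b) {
        return b.filter(function (c) {
            return c.active;
        }).map(function (c) {
            return c.name;
        });
    }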
I would argue that a lot of what were called "mash-ups" a few years ago came about because people could view the underpinnings. Long before companies started documenting official APIs, developers were pulling apart their code, mixing and matching features from various web app providers. I think there is a culture of openness on the web, and part of that, I believe, is due to the readability of the underpinnings.
That being said, the web has felt like technology soup for some time now; shortly after getting away from CGI, it just felt like we were doing it wrong (at least the apps part). The newer JavaScript/CSS/HTML front ends communicating with a REST back end feel a little more right to me than all the proprietary server-pages mess (ASP, JSP, PHP) that we contorted the document language with to make pages into apps. But it still feels like there should be a better way to deliver apps. I think this is why iOS apps are doing so well: they use a common REST back end with the web, yet make development of the UI a lot less complex. I think the web could learn a lot from Apple's successes on the mobile platform, but there is just so much inertia behind web technologies that a radical departure is just not in the cards.
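To illustrate what I mean by that split, here's a minimal sketch; the /api/posts/42 endpoint and its fields are made up:

    // Front end in plain JS talking to a hypothetical REST back end.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/posts/42');
    xhr.onload = function () {
      var post = JSON.parse(xhr.responseText);  // e.g. {"title": ..., "body": ...}
      document.querySelector('#title').textContent = post.title;
      document.querySelector('#body').textContent = post.body;
    };
    xhr.send();

The UI lives entirely on the client; the server just hands over data.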
He mentions decompiling it into a human-readable form when necessary. Personally, I think that whatever standardized format is chosen should have its own, ubiquitous viewing and editing tools. I think it should be the same format we use for desktop applications and everything else. I'm not holding my breath, though.
The nice thing about text is that it's easy to edit and process, right? If we make this binary format easy to edit and process in addition to being more efficient, what advantage do we get from text?
Being able to view the source is important, but thanks to minification and automatic generation of content, the source isn't designed to be consumed by humans anymore. Viewable source can't be anywhere near the biggest reason for web applications' popularity. By Occam's Razor, I'd bet anything it's just because the web is a ubiquitous, semi-standard platform that's all interconnected (relatively) seamlessly.
The “publicly viewable example code” isn’t helpful at all as-is. Discoverability already sucks. Just sending the source through a pretty-printer would be solving the problem backward, but if you’re going to do that, then there’s still no real reason the source itself has to be transmitted. If we had a single, standard, compiled format, then we’d solve the transmission and parsing problems, and the format could be presented on many different media, of which “decompilation” to HTML would be just one.
I’m not saying it wouldn’t be a lot of work to get it right, but in my opinion the benefits would outweigh the costs. That’s why I wrote the article. But of course you can’t solve a problem just by saying “this is how we ought to do it” without any discussion, so it’s people like you who intone “no, that’s a terrible idea, Jon” that will ultimately prove me wrong or right.
Apparently people are having trouble with this: just because it's binary doesn't mean you can't read it. It just means you can't read it with the tools you already have on your computer. We would have to build the editing tools to read and modify whatever data format comes out of this, but we can do that and it won't be that hard.
I think we should do it anyway. I dream of a composable, computable data format that can represent anything and be sent anywhere. We could build web apps with it. We could make desktop data formats with it. I know it's ambitious, but I don't think it's impossible. I'm not expressing it terribly well because it's late. But I think this is a good idea.
A text format allows various forms of security analysis and on-the-fly modification of HTML/JavaScript in personal firewalls, browser plugins, etc. (for the purpose of blocking malicious scripts, advertisements, Flash, images from other domains with an HTTP 401, and other stuff).
A compiled binary executable blob with fixed address references will be unmodifiable. You either run it or you don't, but you can't cut a part of it off while it passes through the firewall.
So the proposed change is a step backward from the security perspective.
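To see what's lost, consider a toy filter of the kind a plugin or proxy can run over a text page today (illustrative only; real filters parse the HTML rather than regex it):

    // Drop <script> tags that load from untrusted hosts.
    function stripForeignScripts(html, trustedHost) {
      return html.replace(
        /<script[^>]*src="([^"]*)"[^>]*><\/script>/g,
        function (tag, src) {
          return src.indexOf(trustedHost) !== -1 ? tag : '';
        }
      );
    }
    // stripForeignScripts(page, 'example.com') keeps example.com's
    // scripts and removes the rest. There is no equivalent cut for an
    // opaque compiled blob whose internal offsets would all break.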
I like to imagine a flipped scenario, where the content, style, and interaction are even further separated, allowing better inter-app sharing of each aspect. For example, if the content of your blog just came through as JSON, I could very simply quote your post on my blog, with my layout and style. It seems that compiling it all to binary would make sharing harder instead.
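Something like this (the object shape is made up, just to show the idea):

    // Your post arrives as data...
    var post = {
      author: "you",
      title: "Compile the Web?",
      body: "At the very least, we should stop serving HTML..."
    };

    // ...and I render it with my blog's own markup and stylesheet.
    var quote = document.createElement('blockquote');
    quote.className = 'quoted-post';  // styled by *my* CSS, not yours
    quote.textContent = post.title + ' - ' + post.body;
    document.body.appendChild(quote);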
But if all web applications were to serve content as meaningful objects, then the underlying format could be anything, so long as it were standardised. VMs could transmit everything as JSON if they so chose, but they might as well use a binary format because no human need ever see it.
If all applications are communicating using the same abstractions, then it doesn’t even matter what language they’re written in. My site can be written in Haskell and it’ll serve blog entries just as well to Haskell on the Firefox VM on Windows as to Ruby on the Chrome VM on a Linux toaster. And we never need to agree on an interchange format, because that’s the VM’s job.
While the Web model was good enough for a time of slow and expensive Internet connections, in the era of ubiquitous broadband it becomes more and more of a bottleneck.
In my opinion, the future lies in a more equal interconnection of Internet devices, which would not be divided into servers and clients as we have on the Web, but would work more on a peer-to-peer level. BitTorrent is a primitive example of one aspect (storage) of what I have in mind, but it would expand into processing and all other kinds of data exchange. One can easily imagine a networked TV being used as a display device for any other networked appliance, from mobiles to microwaves.
Whoever defines the best standard for that to be built upon will hold the keys to the future. And you may call this SkyNet if you like. ;)
FTA: "At the very least, we should stop serving HTML, CSS, and JavaScript to users. Let’s instead serve things in concise, binary format—compiled documents, compiled stylesheets, and bytecode-compiled JavaScript that runs in a standard-issue stack-based virtual machine."
I'm right there with him in wishing there were a chance in hell web browsers would move to a VM (as language-neutral as possible) for client-side code, but as for those other things, they basically are served in "binary compiled" versions most of the time, in the form of gzipped versions of the documents.
Using a different custom binary format for the same data would save very little over a gzipped HTML/CSS file and would add an immense amount of complexity, so it really isn't worth it, IMO.
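A quick back-of-the-envelope check in Node makes the point (exact numbers will vary with the input, of course):

    var zlib = require('zlib');
    // Repetitive markup, like most real pages:
    var html = '<ul>' + new Array(500).join('<li class="item">hello</li>') + '</ul>';
    var gzipped = zlib.gzipSync(html);
    console.log('plain:', html.length);      // roughly 13,000 bytes
    console.log('gzipped:', gzipped.length); // a tiny fraction of that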
Well, if you say I made one good point, then at least I made one good point. A standard VM is the most obvious thing we need. But I think there is definitely something to the notion of web applications as real applications, which can be queried in a structured way and composed (think pipes) with one another and with desktop applications.
To be clear, while I'm not sold on binary formats for HTML and CSS as they currently exist, I do agree with your general point.
The web (even as of HTML5) is somewhat of a ridiculous answer to the question of "How do we build these things we want to build" when those things are rich UI apps.
Unfortunately the problem is more political than technical. To make any big sweeping changes you need buy-in from at least Apple, Google, Mozilla, and Microsoft, who all have very different goals and borderline antagonistic relationships with each other in all directions.
To me, the real issue is not so much the format.
It's standardization (that "standard-issue stack-based VM"):
- No two JavaScript VMs support the same things.
- No two HTML engines support the same things.
- No two CSS engines support the same things.
So there you go, dear open and standard web: you're not. You're just not standard. It's time for people to realize that.
There are a zillion W3C drafts which every browser attempts to support, and new ones spawn (hi, Google) all the time.
There are a zillion new extensions to solve "HTML5 stuff that we need" (anything is called HTML5 if it's a new feature, easy!), and of course many are incompatible, or implemented by only one browser or another (audio stuff, database stuff, etc.), hence the feature-detection boilerplate sketched after this comment.
And you have pages with "works best in Chrome" or even "you need Chrome to see this". Chrome is the best example because Google is the company coming up with the most new drafts and the most Chrome-only technology. But it could be other companies eventually.
Well, again, guess what: that's not standard. At all.
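The probes below are the standard tricks developers resort to (illustrative, not from the article):

    // Probe for the "database stuff" (here, localStorage)...
    function storageAvailable() {
      try {
        localStorage.setItem('__probe', '1');
        localStorage.removeItem('__probe');
        return true;
      } catch (e) {
        return false;
      }
    }
    // ...and the "audio stuff".
    var audioSupported = !!document.createElement('audio').canPlayType;
    console.log('storage:', storageAvailable(), 'audio:', audioSupported);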
You mean a proprietary platform that offers little to no interoperability, accessibility, composability, or searchability? No, that’s precisely the opposite of what I want.
If it doesn't, we should skip it. Programmers love large-scale infrastructure refactorings, and they always get them wrong, both in the axioms and in the implementation.
For example, I would argue that if a unified web bytecode had been adopted in 2002, its performance today would be inferior to what leading JS engines provide. Optimizing high-level code is actually simpler than optimizing a low-level kludge built on wrong assumptions. They would not have gotten it right in 2002, and I argue they wouldn't today.