My biggest horror with all this new W3C tech is that, in reality, it's really old. Because of Microsoft's stupid mistakes while holding power over the browser world, we've been in a ten-year period of stagnation in which we've gone nowhere.
There are no real laws you can bring against Microsoft, but if this were a real-world situation, we'd be trying them for crimes against humanity. They have effectively halted innovation for an incredible amount of time, and I would go ahead and say that they've slowed down the exceedingly fast development of the last decade.
I am glad that Google is getting into this game. Not only because it will give Microsoft a run for its money, but because we will finally have innovation that before seemed to take years to arrive.
> I would go ahead and say that they've slowed down the exceedingly fast development of the last decade.
As a fellow developer, I'm about to commit a heresy to text:
"Most people don't want fast development."
People may claim they want change or for things to be "done with" in a timely manner, but the reality is that change brings disruption in economic, social, and political terms. Take a look at IPv4, for example. IPv6 just isn't getting traction because the problems it solves have now become "features" to many people. (NAT being one of the big ones, since it's gained the perception of being a security feature.) My job involves working with IE 6, since my organization refuses to update our systems to even use IE 7, much less standards-compliant browsers like Chrome or Firefox.
Simply put, if MS were sent to trial for "crimes against humanity," the jury of "peers" who would be selected would acquit them. Why? Because the court would likely consider "peers" to be end users, businessmen, or the ignorant, not developers or technologists like us.
NAT, IMHO, is a major ache in the market. What used to be purely an annoyance to developers and technologists has now become a serious stumbling block for users.
The internet is no longer the download-from-website model that it was years ago. There is a massive amount of user participation now, which is why upload caps are increasing as services like video chat, peer-to-peer gaming, etc take off.
And now you tell them only one person can initiate a video chat from behind a single router. The user is annoyed. Or you tell them that they can't both play the same game on two computers at once due to a port conflict.
IMHO there's a real need to get NAT out of our system, and there's a very real consumer benefit to doing so. It directly opens doors to services and products that people are already clamoring for.
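The stumbling block described above comes down to how NAT handles connection state. A minimal sketch, with invented names and a deliberately simplified translation table (no real router works exactly like this):

```javascript
// Minimal sketch of a NAT translation table. Outbound connections get
// a port mapping; unsolicited inbound traffic has no entry and is
// dropped -- which is exactly what breaks peer-to-peer calls and
// hosting a game from behind the router.
class Nat {
  constructor(externalIp) {
    this.externalIp = externalIp;
    this.mappings = new Map(); // external port -> { ip, port }
    this.nextPort = 40000;     // arbitrary starting point for this sketch
  }

  // An internal host opens an outbound connection: allocate a mapping.
  outbound(internalIp, internalPort) {
    const extPort = this.nextPort++;
    this.mappings.set(extPort, { ip: internalIp, port: internalPort });
    return { ip: this.externalIp, port: extPort };
  }

  // An inbound packet arrives: deliverable only if a mapping exists.
  inbound(extPort) {
    return this.mappings.get(extPort) || null; // null = silently dropped
  }
}
```

An incoming video-chat invitation is exactly the `inbound` case with no prior mapping: it returns nothing, and the caller's connection attempt just dies.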
I'm going to resist following you down the NAT rathole and focus back on Web standards, where I might be inclined to agree with you.
People expect innovation to happen in "Internet Time" -- but that is a myth, born of hype and bubble thinking. In the real world, it takes on the order of decades for people to even grasp the full potential of the existing HTML/CSS/JS standards and de facto standards.  I just don't think innovation in computing is being particularly held back by these technologies.  They're not the bottleneck.
Consider some popular Web 2.0 sites: Wikipedia, Facebook, Twitter, Flickr, even Google. If you were transported back in time to 1996, you might find it difficult to replicate some aspects of these services (for example, your hardware and software bills might cause you to lose consciousness -- do you have any idea what 1GB of RAM used to cost?!) but the primitive state of the HTML standard would not hold you back. Your site would be butt-ugly (no CSS yet!) and/or employ dozens of horrifying <table> hacks, and you'd be stuck doing lots of full-page loads (no JS yet!) but it would work just fine.
Why weren't all these things built back in the 90s? The reasons are different in each case (Wikipedia and Facebook required a critical mass of web users; Flickr awaited the development and growth of the digital camera market -- the Apple Quicktake, one of the first digicams, was released in 1994; Google arguably did exist back then but it was called "Yahoo" or "Altavista"; Twitter just hadn't been thought up yet!) but none of them had anything to do with poor web standards.
You might argue that Ajax apps like Google Maps or Gmail are being held back by web standards sluggishness, and that's a stronger argument. I do think we'd see more innovative apps if we further lowered the barrier to delivering cross-platform, cross-browser apps over the web -- that's been the lesson so far. But it's not as if the situation is hopeless: you can build pretty good Ajax apps now, you can resort to Flash, or you can leverage the power of web-based education, development, marketing and delivery to ship (brace yourself) old-school desktop apps better than ever before. Nor is it the case that the standards bodies are the bottleneck here: The problem is insufficient market penetration of Webkit and/or Firefox. In other words, I'm not sure that inventing more and newer standards would be more helpful than trying to get more use out of the standards we already have.
 Witness how long it has taken for blogs, online news, and Craigslist to start actually killing newspapers, an event that has been predicted for a very long time.
 Of course, that's easy for me to say -- I'm not a designer or a typographer. Typographers are tearing out what remains of their hair for the lack of a better CSS standard for specifying typefaces.
CSS does have its limitations, but I'm not convinced providing full DOM manipulation is a good idea. I admit it solves the problem and provides greater flexibility, but the downside is that it would quickly be misused, with content and presentation becoming more tightly coupled as people start using convenient CSS DOM manipulation with dynamic pseudo-classes for interactivity, removing many of the accessibility advantages that CSS has. There are also very real unaddressed problems: creating a readable syntax, maintaining CSS code with injections all over the place, and potential browser rendering issues. To me, the cure seems worse than the disease, and the Advanced Layout Module, despite its ugliness, is probably a better solution.
CSS3 has gone a long way towards improving selectors, and I'll be the first to admit calc is undoubtedly both necessary and long overdue. I'm not fussed either way by CSS variables - I can see both sides of the coin.
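What calc buys you is the ability to mix units that only the browser can reconcile at render time. A toy illustration of just the arithmetic (not a CSS parser), resolved against a known container width:

```javascript
// Toy sketch of what CSS3's calc() expresses: mixing a percentage
// with a pixel offset, e.g. calc(100% - 20px). The container width
// is supplied explicitly here; in a browser it comes from layout.
function resolveCalc(percent, pixelOffset, containerWidth) {
  return (percent / 100) * containerWidth + pixelOffset;
}

// calc(100% - 20px) in a 500px container:
resolveCalc(100, -20, 500); // -> 480
```

Before calc, this trivial "full width minus a fixed gutter" case had no clean expression in CSS at all, which is why it is so long overdue.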
Can accessibility really get any worse? Extremely few websites are accessible today. If it weren't helpful for SEO, nobody would bother with it at all.
Can readability get any worse? Nobody really understands floats. This is a layout language in which people regularly boast about how they achieved three-column layouts. Think about that for a second.
And if you look at the code that achieves said three-column layout, it is impossible to tell what it does without comments. That's because CSS is written out as a set of spring-loaded contraptions all interacting with each other, meaningless without the "cascade" of DOM elements just so, and a browser model just so. There's nothing like "three_column_layout(node, node, node)".
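To make the complaint concrete, here is a hypothetical `threeColumnLayout` helper -- the kind of named abstraction the language lacks -- that simply generates the classic float rules. The function name, class names, and clearfix approach are all invented for this sketch:

```javascript
// Hypothetical "three_column_layout(node, node, node)"-style helper.
// It emits the standard float-based rules; widths are percentages
// and must sum to 100.
function threeColumnLayout(left, middle, right) {
  if (left + middle + right !== 100) {
    throw new Error('column widths must sum to 100%');
  }
  return [
    `.col-left   { float: left; width: ${left}%; }`,
    `.col-middle { float: left; width: ${middle}%; }`,
    `.col-right  { float: left; width: ${right}%; }`,
    `.row::after { content: ""; display: table; clear: both; }`,
  ].join('\n');
}
```

The point is not that this helper is good design; it's that the intent ("three columns, 25/50/25") is stated once and legibly, instead of being smeared across floats, widths, and a clearing rule that only make sense together.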
As far as I can tell, your thesis is that because CSS is underpowered, this will help us maintain accessibility. That's totally backwards! There will always be requirements for complex layout and interaction. That part of the job is not going away. So something's got to give. The artistic people go to Flash, the scripting people just do it with tables, and the server-side programmers go for some toolkit that pretends the browser is Java. Very very few people have the sheer bloody-mindedness to do the right thing and learn every last browser quirk and the so-called CSS model.
CSS is DEAD. It is not the basis for a better future.
I'm going to break down the differences between my opinion, the original article's, and hopefully yours.
I admit CSS has its problems, and the core one is this: it's too hard to create good designs, and it's too easy to hit a brick wall. We can all agree on this.
The two biggest, and very real, problems are the unintuitive box model and layout model. Everyone should agree with this.
I'll first deal with the relatively simple box-model problem. The W3C box model was incredibly poorly thought out and makes life incredibly and unnecessarily difficult. It's simply inside-out. The width should refer to the total width of the box, as in IE4 and quirks mode: this makes things wonderfully intuitive, makes things look as you expect, and allows you to mix different units with ease. As it is, width refers to the inside of the box, and the real width is the inside width + border + padding. This is quite obviously nuts. There are two fixes to this problem: a nastier backwards-compatible one, and an actually nice solution. Both are in CSS3. The first fix is to provide a way of calculating values so you can easily mix percentages with pixels; this is in CSS3 as the calc function. The second fix is switching to the traditional sane box model: in CSS3 there is a property to do this - add "box-sizing: border-box". So far, CSS3 gets full marks from me.
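The inside-out complaint is just arithmetic, so it can be sketched directly (function and parameter names invented for illustration):

```javascript
// Sketch of the arithmetic behind the two box models. Under the W3C
// default (content-box), the CSS `width` is only the content area,
// so the rendered box is wider than declared; under border-box
// (IE4/quirks, CSS3's box-sizing) the declared width IS the full box.
function renderedBoxWidth(cssWidth, padding, border, boxSizing) {
  if (boxSizing === 'content-box') {
    return cssWidth + 2 * padding + 2 * border; // wider than declared
  }
  return cssWidth; // border-box: what you declare is what you get
}
```

This is exactly why mixing units hurts under content-box: a `width: 50%` box with pixel padding renders at 50% *plus* some pixels, so two of them side by side overflow their container.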
The second core problem is layout. We currently have the "float model". I put it in quotes because it's not so much a model as people hitting the limitations of CSS2 but wanting many of its advantages, and finding a way around them using floats. Presumably when CSS2 became a recommendation in 1998 the web was a very different place and the limitations of the model weren't considered a big problem; then we were left in standards limbo, which left us for several years without a decent CSS2 implementation and no table-replacing display: table implementation in the major browsers in sight. And up until now the only usable, accessible CSS layout model we had was arguably designed for something else entirely: allowing text to flow around images on normal pages. To say this dirty hack is lacking is obviously an understatement, and that's excluding working around CSS bugs, but I'd still argue it's the best thing we have, and the accessibility advantages and content separation are worth the dirty hacks.
And so we have where we are today and our problem - how do we provide a good, natural, way to make layouts?
And any answer to this leads us to the fundamental subquestion, and finally to where we started this discussion: how separate do we want our content and presentation to be?
This is actually a difficult, and slightly ideological, question. The more you separate content and presentation, the more the complexity of both the spec and the language goes up; flexibility goes up, but inherent accessibility goes down, and so does inherent maintainability. The "completely" solution results in DOM injection; the "not completely" solution results in something nearer to the Advanced Layout Module.
And this is where our fundamental differences clearly lie.
And you, and the article, believe that the advantages of flexibility outweigh my criticisms, and that CSS would be underpowered without them.
And to go back to your own post:
Accessibility can get worse, and CSS3 is looking like it will improve the situation.
Readability can get worse. Floats are a dirty hack, especially when combined with CSS hacks for dated browsers, and as such aren't inherently readable, but adding injection into the mix won't automatically make life better.
And yes, floats are a dirty hack and aren't an indicator of the future of CSS either way.
CSS isn't really inherently underpowered: it depends how you see it. At the moment it needs fixing either way.
And CSS isn't dying at all; it's going to be completely reborn over the coming years. Flash is still dying - Flash was always a dirty hack. CSS's display: table is going to replace table-based layout once IE7 dies in about five years. And CSS is getting better, compatibility is improving, and CSS3 is going to be better than CSS2 whichever route it chooses: DOM injection or not.
Yes it does. It has at least 4 different ways to loop, 4 different types of arguments to functions, 5 or 6 different lets (functions, macros, let*, normal let, ...), 3 or 4 different kinds of variables (with no unifying underlying theme), and at least 3 different flow models (nested lists, tagbodies, loops with returns, etc.). That is exactly the same problem as C++. Instead of finding unifying abstractions that give the same functionality with one idea, they have chosen to add multiple specific quick fixes to the language. They have amended the language with a thousand small changes, each suited to its own little use case, and have ended up with a monster of a standard that takes up 15 megabytes in 2300 files with 110,000 hyperlinks. That's 15 megabytes, mostly of text.
Scheme is a lot better, but it's going the way of CL, slowly but surely.
To a limit. Admittedly, having different argument types is pretty useful, but still: 5 independent parts with their own semantics, some of which may not be usable in certain contexts*? How are auxiliary arguments useful in the general case?
This still does not affect the rest of my claims. Having multiple execution models in the same language is bloated. So are many other things in the CL standard.
Oh! So that's why Google is developing a speedy JS interpreter... they want to be able to see JS-based pages! They can load a page into a browser-with-no-graphics, and give the JS a certain amount of time to run before freezing the page and then reading off the DOM nodes instead of reading HTML. If they don't like the results, they can just use the HTML like they used to.
(Reading off the DOM nodes also gives them a fresh crack at some more semantic analysis that used to be really hard, since it involved emulating a browser... did you use <span class="some_header"> instead of <h1>? They can make a better heuristic guess at that if they are "inside" a browser and thus capable of seeing that the size is twenty points higher than the surrounding text, no matter how you implemented that.)
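The kind of heuristic described here becomes trivial once you can see computed styles from inside an engine. A minimal sketch, where the function name and the 1.5x threshold are assumptions, not anything Google has documented:

```javascript
// Hypothetical "is this really a heading?" heuristic of the kind
// described above: a styled <span> rendered much larger than its
// surrounding text is probably serving as a heading, even though
// it isn't marked up as <h1>. The 1.5x ratio is an arbitrary guess.
function looksLikeHeading(elementFontSize, surroundingFontSize, minRatio = 1.5) {
  return elementFontSize >= surroundingFontSize * minRatio;
}
```

In a real crawler the two font sizes would come from the engine's computed style for the node and its neighbors; here they are plain numbers so the heuristic itself is visible.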
To really pull this off, you need a browser engine that enough people are using that site designers need to account for it, and it needs to be designed with the goal of running just the engine as quickly as possible on the backend, with the ability to cut it off after a certain amount of time.
So... has anybody dug deeply enough into Chrome to have an opinion on the feasibility of running the browser without a UI?
"Should the ability of search engines to index something determine whether or not you do it?"
If your goal is to make money then yes it should if organic search traffic is a big piece of your marketing puzzle. It's not the cart before the horse if your intention is to run a business on the internet.
I've been a big part of a few successful businesses that depended on SEO. Without it they would not have succeeded.
It doesn't replace running all code on both client and server, but for certain use cases it does.
For example, you have a blog post with comments and a form that allows you to post new comments. When a person visits the page, the HTML of all current comments is loaded. The code to generate that HTML is on the server. But say that you want the interface to be very responsive, and when someone posts a new comment, you want to be able to display that comment in the list with others w/o waiting for the response from the server. Ideally the same code that generated the HTML for the original list can be used to generate the HTML for this new comment -- but on the client.
(Of course, to actually store the new comment, you need server side code, which I'm not saying it will eliminate at all.)
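The shared-template idea can be sketched as one rendering function used in both places. Everything here (function names, markup, the minimal escaper) is illustrative; a real implementation needs a proper HTML escaper and template system:

```javascript
// One function generates a comment's HTML. The server maps it over
// stored comments for the initial page; the client calls it with a
// just-submitted comment for an instant optimistic update, before
// the server round-trip completes.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

function renderComment(comment) {
  return `<li class="comment"><b>${escapeHtml(comment.author)}</b>: ` +
         `${escapeHtml(comment.body)}</li>`;
}
```

Without code sharing, the same markup logic gets written twice -- once in the server language, once in JS -- and the two copies inevitably drift apart.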
The problem is that Microsoft stands to lose from improving the ease of browser development. The easier it becomes to develop software using open source tools (no Visual Studio) on an OS-independent platform (the web), the less money they make.
Google's recent project, Native Client, is what I've been waiting for as a C++ developer. It basically allows you to run native code (assembly) in the browser, with a security mechanism so that the code cannot do anything to the client computer unless the user has given the container (sandbox) that runs it permission.
This will allow people to build their own UI frameworks that render much richer graphics and, hopefully, behave the same no matter what browser/OS combination you are running.
Do people really think we'll ever reach a point where different vendors' code bases, built from documentation written by a committee, will behave even remotely the same? Have you read any of the W3C documentation? It's fucking ridiculous. We should be sharing implementations; it's the only way to ensure bit-level identical behavior of a target system (performance characteristics aside).
I really hope Native Client takes off. In combination with Gears for client-side storage, it would allow us to build desktop-quality software with the web's advantages: cross-platform development, zero-install applications (as long as you don't consider caching a form of installing), users always pulling down the latest client, etc.
Yes that's the dogma. Alas, it's completely lost on me. I fail to understand how tables are more "semantic" than nested divs. Tables only have semantics within the context of some formalism that defines their semantic meaning. Outside of such a context they can be anything, so why not a way of laying out information on a page?
I don't see how swaths of divs, some of them used for semantics (according to some metadata standard), some of them used for purely presentational purposes, improve anything.
These priorities make no sense to me. Laying out web pages with divs and CSS is a black art. It's fragile. It breaks in all kinds of nasty ways. It reduces the productivity of UI development to pre-90s levels and leads to entire pages done in Flash.
"I fail to understand how tables are more "semantic" than nested divs. Tables only have semantics within the context of some formalism that defines their semantic meaning."
Tables are not more semantic than nested divs. But nested divs aren't very semantic, either. Tables are used for tabular data. Divs are used for dividing content.
"I don't see how swaths of divs, some of them used for semantics (according to some metadata standard), some of them used for purely presentational purposes, improve anything."
No one is proposing that you make every element a div. In the web standards community, we call this "divitis" and while better than table-based layouts, it still isn't semantic.
"These priorities make no sense to me. Laying out web pages with divs and CSS is a black art. It's fragile. It breaks in all kinds of nasty ways."
Just because you don't understand how to make something work doesn't mean it's a "black art". Plenty of web developers are able to create semantically-meaningful documents styled (cross-browser) with CSS. CSS offers advantages, but you seem to only see (imaginary) disadvantages.
"It reduces the productivity of UI development to pre-90s levels and leads to entire pages done in Flash."
I'm not even sure how to respond to this. It's ridiculous.
Well yes, in theory those are not UI - but how many times have you seen someone create two divs for semantically identical data just so the CSS styles will play nicely?
The problem is that CSS is not powerful enough to do rich layouts, and that forces developers to cripple the semantic goal of CSS/HTML by injecting presentation into what is supposed to be presentation-free markup.
He has a good point: CSS can only define the appearance of existing HTML elements. Sometimes you need visual elements which don't correspond to any semantic HTML element. I think SVG would be a good approach for designing these "unsemantic" visual elements, and you can use CSS with SVG. SVG would also be a good way to extend CSS beyond the built-in properties, e.g. if I want to design a wavy border to apply to some elements.
He also has a good point about the need for calculation in CSS. However, I am really worried about the possible maintenance nightmare of a Turing-complete CSS! It should be designed very carefully. A constraint-based sublanguage for CSS would perhaps make sense.
"If you look at the specification there was only one property in CSS1 that could be used to effect layout - float. position didn’t even exist, and in HTML 2 neither did <div>, the two essential tools without which no modern design could be achieved (without resorting to tables)."
Lost me there. A <div> is just a vanilla block-level element. You can turn anything into a <div>.
"CSS can only be applied to elements that have semantic meaning"
You should see the CSS that developers I've worked with write. Even worse is that DreamWeaver has its "style1", "style2" way of doing things which makes picking apart a stylesheet very tedious.
Given that you can already accomplish what he wants with jQuery, and that his proposed solution is to add imperative programming semantics to CSS, one assumes the reasonable response is simply a compiler that spits out CSS/JS based on whatever random feature he wants CSS to have.
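The "compiler that spits out CSS" response can be sketched as a trivial preprocessor that substitutes variables into a stylesheet string before serving it. The `$name` syntax and function name are invented for this sketch (real preprocessors in this spirit do much more):

```javascript
// Toy CSS preprocessor: replace $variables in a stylesheet with
// supplied values, failing loudly on anything undefined. This gives
// authors "CSS variables" today, without changing CSS itself.
function compileCss(source, vars) {
  return source.replace(/\$(\w+)/g, (match, name) => {
    if (!(name in vars)) throw new Error(`undefined variable: $${name}`);
    return vars[name];
  });
}
```

Run at deploy time (or in a server filter), the browser only ever sees plain CSS, so nothing about the declarative model is sacrificed.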