
I used to be like you. I believed in the proper correctness of markup: proper closing tags, proper nesting. But I've come to see the light. The WWW succeeded and flourished because of its faults and its lazy error checking. Thousands of non-technical people wrote their own HTML. Thankfully it didn't have to be perfect, and it worked.

I still like tidy, clean code, but I don't agonize over its perfection.




I hear that repeated, but I don't find it convincing. A simple grammar would make it easy to find errors and kick them out immediately. Instead, we ended up with shitty ambiguous standards (common in "friendly" text-based protocols) and still have to deal with cross-browser compatibility.

If HTML had had error checking and rejected unspecified/ambiguous syntax, people might simply have left off tags (decided not to bold something or make a list), omitted some images, or the like.

It's hard enough writing a spec - there will be unforeseen combinations resulting in conflicting behaviour. The answer isn't to give up and make the spec loose.


HTML5 isn't loose -- it has a well-defined procedure for handling errors.

Which is worlds better than XML's "every error is a fatal error" approach, since real-world XML is often non-well-formed (and, when validity checking is possible, invalid), and tools ignore that to varying degrees, recovering from errors just like they do with older versions of HTML.
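To make the contrast concrete, here's a rough sketch of the two error models using Python's standard library (my own illustration; the modules are just convenient stand-ins, and html.parser is not a full HTML5 tree builder):

    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    broken = "<p>unclosed paragraph<p>another<b>bold</p>"

    # XML model: the first well-formedness error is fatal, nothing is recovered.
    try:
        ET.fromstring(broken)
    except ET.ParseError as err:
        print("XML parser gave up:", err)

    # Tag-soup model: the parser keeps going and reports every token it saw.
    class Dumper(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("start:", tag)
        def handle_endtag(self, tag):
            print("end:", tag)

    Dumper().feed(broken)  # no exception, despite the unclosed tags

An HTML5 parser goes further than that and defines exactly what tree to build from the bad input, but the difference in failure mode is the same.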

(My favorite example of all time is how an XHTML document's well-formedness status can depend entirely on the HTTP Content-Type header, and at the time none of the major toolchains actually handled that.)


Can you detail this often non-well-formed XML? I've not seen any XML parsers that handle invalid XML. Except for people who wrote their own XML parser and think a simple regex is enough.

Validation is another issue, and I don't think you'll find anyone saying that the myriad XML addons are simple or easy :).

The mixing of HTTP and HTML also seems like a bit of a strange hack to me. And let's not start talking about well-formed HTTP; I'd be surprised to find many real-world clients or servers actually following the inane HTTP spec. Just like mail clients don't always handle comments in email addresses.


Well, the classic example is XML plus its rules about character encoding. Suppose I send you an XHTML document, and I'm a good little XML citizen, so in my XML prolog I declare that I've encoded the document as UTF-8. And let's say I'm actually taking advantage of this -- there are some characters in this document that aren't in ASCII.

So I send it to you over HTTP, and whatever you're using on the other end -- web browser, scraper, whatever -- parses my XML and is happy. Right?

Well, that depends:

* If I sent that document to you over HTTP, with a Content-Type header of "application/xhtml+xml; charset=utf-8", then it's well-formed.

* If I sent it as "text/html; charset=utf-8", then it's well-formed.

* If I sent it as "text/xml; charset=utf-8", then it's well-formed.

* If I sent it as "application/xhtml+xml", then it's well-formed.

* If I sent it as "text/xml", then FATAL ERROR: it's not well-formed.

* If I sent it as "text/html", then FATAL ERROR: it's not well-formed.

Or, at least, that's how it's supposed to work when you take into account the relevant RFCs. This is the example I mentioned in my original comment, and as far back as 2004 the tools weren't paying attention to this:

http://www.xml.com/pub/a/2004/07/21/dive.html

These are the kinds of scary corners you can get into with an "every error is a fatal error" model, where ignorance or apathy or a desire to make things work as expected ends up overriding the spec, making you dependent on what are actually bugs in the system. Except that if the bug ever gets fixed, instead of just having something not quite look right, suddenly everyone who's using your data is spewing fatal errors and wondering why.
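If it helps, here's a rough sketch of the precedence at work, as a hypothetical helper function (simplified from RFC 3023 and the XML spec, not taken from any real toolchain): an explicit charset parameter always wins, the *+xml types fall back to the XML prolog, and bare text/xml and text/html fall back to protocol defaults that silently override the prolog.

    # Sketch only; real libraries implement (or ignore) these rules in
    # their own ways.
    def effective_encoding(content_type, prolog_encoding="utf-8"):
        mime, _, rest = content_type.partition(";")
        mime = mime.strip().lower()
        charset = None
        for param in rest.split(";"):
            name, _, value = param.partition("=")
            if name.strip().lower() == "charset":
                charset = value.strip().lower()
        if charset:
            return charset            # explicit charset parameter wins
        if mime in ("application/xml", "application/xhtml+xml"):
            return prolog_encoding    # *+xml types defer to the XML declaration
        if mime == "text/xml":
            return "us-ascii"         # RFC 3023 default; non-ASCII bytes are fatal
        if mime == "text/html":
            return "iso-8859-1"       # HTTP default; the prolog is ignored
        return prolog_encoding

    for ct in ("application/xhtml+xml; charset=utf-8",
               "text/html; charset=utf-8",
               "text/xml; charset=utf-8",
               "application/xhtml+xml",
               "text/xml",
               "text/html"):
        print(ct.ljust(40), "->", effective_encoding(ct))

Feed those six Content-Type values through it and you get the split in the list above: the prolog's UTF-8 is honored in the first four cases and overridden in the last two.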

Meanwhile, look at things like Evan Goer's "XHTML 100":

http://www.goer.org/Journal/2003/04/the_xhtml_100.html

There he took a sample of 119 sites that claimed to be XHTML and found that only one managed to pass even a small set of simple tests.


HTML has strict implementation requirements and loose authoring requirements. I recall that it is a goal of HTML that a significant percentage of "anyone" can create usable documents with it, but the closest I can come to a citation at the moment is this: http://wiki.whatwg.org/wiki/FAQ#Why_does_this_new_HTML_spec_...


One of the things I really like about HTML5, actually, is that it recognizes that real-world HTML is not perfect... and then specifies exactly how parsers should deal with imperfections.
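For instance, misnested formatting tags have exactly one spec-mandated recovery (the "adoption agency algorithm"). A tiny sketch using html5lib, a third-party Python package that is just one convenient spec-following parser:

    import xml.etree.ElementTree as ET
    import html5lib  # pip install html5lib

    # Every conforming HTML5 parser must build the same tree from this
    # misnested input; the recovery steps are spelled out in the spec.
    tree = html5lib.parse("<b>one<i>two</b>three</i>",
                          namespaceHTMLElements=False)
    print(ET.tostring(tree, encoding="unicode"))
    # The <i> element gets split so the output nests properly, instead of
    # the parser guessing on its own or bailing out with a fatal error.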


It worked because the rendering engines picked up the slack - Gecko, Trident, and WebKit are all orders of magnitude more complex for having to reinterpret broken pages into some nebulous notion of correctness.


Exactly. I weep to think how many CPU cycles have been wasted processing bogus, malformed HTML on the web. :-(



