The W3C is just a weird organization, basically owned by a handful of members who pursue their own agendas, producing reams of "standards" that largely no one uses or even implements. Tim Berners-Lee is a great guy, but as far as I can see it is a failed organization.
Speak for yourself.
RDF is one of those things; JSON-LD is another. Most efforts I have seen so far involve someone trying to convince me that I need it. It wasn't me looking for a solution. There is a big difference.
It's HTML5 that has an XML serialization. That's right: XHTML 2 died, but you can use XHTML5 instead.
Could be wrong, though; my wife often points out that it happens...
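For what it's worth, the XML serialization is real enough that any stock XML parser will take it. A minimal sketch (Python standard library; the document itself is a made-up example):

    import xml.etree.ElementTree as ET

    # A minimal XHTML5 document: HTML5 vocabulary, XML syntax.
    xhtml5 = ('<html xmlns="http://www.w3.org/1999/xhtml">'
              '<head><title>hi</title></head>'
              '<body><p>served as application/xhtml+xml</p></body>'
              '</html>')

    root = ET.fromstring(xhtml5)
    print(root.tag)  # -> {http://www.w3.org/1999/xhtml}html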
I still like tidy, clean code, but I don't agonize over its perfection.
If HTML had had error checking and rejected unspecified/ambiguous syntax, people might simply have left off tags (decided not to bold or make a list), omitted some images, or the like.
It's hard enough writing a spec - there will be unforeseen combinations resulting in conflicting behaviour. The answer isn't to give up and make the spec loose.
Which is worlds better than XML's "every error is a fatal error" approach, since real-world XML is often non-well-formed (and, when validity checking is possible, invalid), and tools ignore that to varying degrees, recovering just as they do with older versions of HTML.
(My favorite example of all time is the ability of XHTML documents to have their well-formedness status depend entirely on the HTTP Content-Type header; at the time, none of the major toolchains actually handled it.)
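To see the two error models side by side, here's a minimal sketch (Python standard library; the broken snippet is a made-up example):

    from html.parser import HTMLParser
    import xml.etree.ElementTree as ET

    broken = "<p>some <b>bold text</p>"  # <b> is never closed

    # An HTML parser just keeps going, recovering the way browsers do:
    class Dumper(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("start:", tag)
        def handle_data(self, data):
            print("data: ", repr(data))

    Dumper().feed(broken)  # fires events, never raises

    # An XML parser treats the same input as a fatal error:
    try:
        ET.fromstring(broken)
    except ET.ParseError as e:
        print("FATAL:", e)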
Validation is another issue, and I don't think you'll find anyone saying that the myriad XML addons are simple or easy :).
The mixing of HTTP and HTML also seems like a bit of a strange hack to me. And let's not start talking about well-formed HTTP; I'd be surprised to find many real-world clients or servers actually following the inane HTTP spec, just as mail clients don't always handle comments in email addresses.
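(For the curious: RFC 5322 really does allow parenthesized comments inside an address; the one below is adapted from the RFC's own appendix of examples. A quick sketch, making no promises about what any given library does with it:)

    from email.utils import parseaddr

    # RFC 5322 permits comments in parentheses within an address.
    addr = "pete(his account)@silly.test(his host)"
    print(parseaddr(addr))
    # Whether a client strips the comments per the spec, mangles them,
    # or rejects the address outright varies wildly in practice.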
So I send it to you over HTTP, and whatever you're using on the other end -- web browser, scraper, whatever -- parses my XML and is happy. Right?
Well, that depends:
* If I sent that document to you over HTTP, with a Content-Type header of "application/xhtml+xml; charset=utf-8", then it's well-formed.
* If I sent it as "text/html; charset=utf-8", then it's well-formed.
* If I sent it as "text/xml; charset=utf-8", then it's well-formed.
* If I sent it as "application/xhtml+xml", then it's well-formed.
* If I sent it as "text/xml", then FATAL ERROR: it's not well-formed.
* If I sent it as "text/html", then FATAL ERROR: it's not well-formed.
Or, at least, that's how it's supposed to work when you take into account the relevant RFCs. This is the example I mentioned in my original comment, and as far back as 2004 the tools weren't paying attention to this:
These are the kinds of scary corners you can get into with an "every error is a fatal error" model: ignorance, apathy, or a desire to make things work as expected ends up overriding the spec, leaving you dependent on what are actually bugs in the system. And if one of those bugs ever gets fixed, instead of something not quite looking right, suddenly everyone who's using your data is spewing fatal errors and wondering why.
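To make the dispatch above concrete, here's a rough sketch of the charset rules as I understand them from RFC 3023 (Python; the function name is mine, and the application/*+xml branch is simplified - a real parser would run XML's own encoding detection rather than assuming UTF-8):

    # A UTF-8 body with a non-ASCII character illustrates the failure mode.
    body = '<?xml version="1.0" encoding="utf-8"?><p>caf\u00e9</p>'.encode("utf-8")

    def effective_charset(content_type: str) -> str:
        mime, _, params = content_type.partition(";")
        charset = None
        for param in params.split(";"):
            key, _, value = param.partition("=")
            if key.strip().lower() == "charset":
                charset = value.strip().lower()
        if charset:
            return charset       # an explicit charset parameter always wins
        if mime.strip().lower().startswith("text/"):
            return "us-ascii"    # text/* default; the XML declaration is IGNORED
        return "utf-8"           # stand-in for XML's own detection (BOM, decl)

    for ct in ("application/xhtml+xml", "text/xml"):
        try:
            body.decode(effective_charset(ct))
            print(ct, "-> well-formed")
        except UnicodeDecodeError:
            print(ct, "-> FATAL ERROR: undecodable bytes")

Note how the very same bytes flip between fine and fatal purely on the header.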
Meanwhile, look at things like Evan Goer's "XHTML 100":
He took a sample of 119 sites that claimed to be XHTML and found that only one managed to pass even a small set of simple tests.
For XHTML, one of the big ideas was that you could use an XML parser and embed custom XML. Since an XML parser can reject invalid input outright, it can be smaller and faster, and having a real XML parser means embedded XML is easy to deal with. All of this falls down, though, when you consider that nearly all XHTML was sent as text/html, so the XML parser never kicked in -- yet authors were still required to produce well-formed files.
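Here's roughly what that promise looked like in practice (a minimal sketch; the inv: inventory vocabulary is invented for the example):

    import xml.etree.ElementTree as ET

    # XHTML with a made-up vocabulary embedded via an XML namespace.
    doc = ('<html xmlns="http://www.w3.org/1999/xhtml" '
           'xmlns:inv="http://example.com/inventory">'
           '<body><p>Stock: <inv:item sku="42">widget</inv:item></p></body>'
           '</html>')

    root = ET.fromstring(doc)
    # The XML parser hands us the embedded element with no extra work:
    item = root.find(".//{http://example.com/inventory}item")
    print(item.get("sku"), item.text)  # -> 42 widget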
I won't say the lack of widespread adoption of XHTML was all Microsoft's fault, but they definitely played a role.
My favourite was a Google tool (can't remember what it was - Google Website Optimizer?) that required you to use some godawful <script> construction that was necessarily broken. And you'd have thought Google would know better.
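I don't know exactly what that tool emitted, but the usual way inline <script> breaks in XHTML is that a bare < or & in the script body is a fatal XML error, hence the notorious CDATA wrapper. A sketch of both (Python, just to show the parse behaviour; the script content is made up):

    import xml.etree.ElementTree as ET

    # A bare "<" or "&" inside an inline script is a fatal error in XHTML...
    bad = '<script type="text/javascript">if (a < b && b.go) b.go();</script>'
    try:
        ET.fromstring(bad)
    except ET.ParseError as e:
        print("FATAL:", e)

    # ...which is why XHTML pages grew the godawful CDATA wrapper:
    good = ('<script type="text/javascript">//<![CDATA[\n'
            'if (a < b && b.go) b.go();\n'
            '//]]></script>')
    ET.fromstring(good)  # parses fine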
(By the way, if sibling nodes really had no specified order in XML - as some tools seem to assume - there'd be no reason why one paragraph should follow another on a web page consistently, and the <ol> would be an oxymoron.)
There is a reason why I pushed the TAG to finish this:
(Clue: TimBL is part of the TAG.)