
I think I was extremely generous with my margins and went to lengths to be selective with my inclusion criteria; I didn't even catalogue everything that met those criteria, and I omitted huge swaths of web standards on the basis that (1) it was more forgiving to W3C and (2) they would be difficult to compare on the same terms. At most you've given a credible suggestion that my count might be off by an order of magnitude, but even if it were, that changes the conclusions very little. I explained all of that and more in my methodology document, and I stand by it. If you want to take the pains to come up with an objective measure yourself and provide a similar level of justification, I'm prepared to defer to your results, but not when all you have is anecdotes from vaguely scanning through my dataset looking for problems to cherry-pick.



No, I've given credible reasons for two orders of magnitude:

1. The majority of the documents you are including are not reasonably considered web standards

2. Of those that are, you are counting each one 5-50 times.

That's two orders of magnitude.

All your analysis has shown is that it's (ironically) difficult to machine-parse the W3C data, and that you parsed it in a way that justifies your preconceptions.
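
To make the double-counting point concrete, here is a rough sketch (in Python) of how a per-version count diverges from a per-spec count. The URLs, the shortname heuristic, and the resulting numbers are illustrative assumptions, not the actual dataset or parser:

    # Rough sketch: counting every dated /TR/ snapshot as a separate row
    # inflates the total relative to counting distinct specs.
    # The entries below are illustrative, not real dataset rows.
    from collections import defaultdict
    from urllib.parse import urlparse

    entries = [
        "https://www.w3.org/TR/2012/WD-css-flexbox-1-20120612/",
        "https://www.w3.org/TR/2016/CR-css-flexbox-1-20160526/",
        "https://www.w3.org/TR/2018/CR-css-flexbox-1-20181119/",
        "https://www.w3.org/TR/2011/WD-html5-20110525/",
        "https://www.w3.org/TR/2014/REC-html5-20141028/",
    ]

    def shortname(url: str) -> str:
        """Drop the maturity prefix (WD/CR/REC) and the date suffix from the TR slug."""
        slug = urlparse(url).path.rstrip("/").split("/")[-1]  # e.g. WD-css-flexbox-1-20120612
        parts = slug.split("-")
        return "-".join(parts[1:-1])                          # e.g. css-flexbox-1

    per_spec = defaultdict(int)
    for url in entries:
        per_spec[shortname(url)] += 1

    print(f"{len(entries)} rows, {len(per_spec)} distinct specs")
    # 5 rows, 2 distinct specs -- counting rows instead of specs inflates the total.

If each spec really appears 5-50 times as dated snapshots, that alone accounts for roughly one of the two orders of magnitude claimed above.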



