Analyzing One Million Robots.txt Files (intoli.com)
157 points by foob on Dec 24, 2017 | 25 comments


The next steps towards standardization began when Google, Yahoo, and Microsoft came together to define and support the sitemap protocol in 2006. Then in 2007, they announced that all three of them would support the Sitemap directive in robots.txt files. And yes, that important piece of internet history from the blog of a formerly $125 billion company now only exists because it was archived by Archive.org.

The Internet Archive (archive.org) is currently running their end-of-year donation drive; if you value the work they do, it's a good time to donate: https://archive.org/donate/

(and on the topic of robots.txt, it sounds like they're moving in the direction of disallowing people from using them indiscriminately to block access to valuable archival materials: https://blog.archive.org/2017/04/17/robots-txt-meant-for-sea... )


Has the IA ever discussed why they retroactively apply robots.txt? I can see the rationale (though I don't necessarily think it is the best idea given the IA's goals) for respecting it at crawl time, but applying it retroactively has always felt unnecessary to me.


It seems pretty obvious: copyright restricts distribution, so they hide pages that the apparent copyright holder apparently doesn't want distributed.


I also wrote up an analysis of the top 1M robots.txt files: http://www.benfrederickson.com/robots-txt-analysis/

I ended up analyzing very different things than this article did, though, so it was still pretty interesting to me.


  “traditionally used for
  vague attempts at humor
  which signal to twenty-something
  white males that this is
  a “cool” place to work.”
WTF with the casual sexism/ageism?


Just to clarify, this was intended as a tongue-in-cheek critique of tech companies that actively project superficial images designed to appeal to specific hiring demographics. I'm sorry if the meaning didn't come across as clearly as I had hoped for, but the statement was meant to be a condemnation of sexism and ageism rather than an endorsement.


I see what you're saying. And to be fair, I disagree that you're being casually racist. What you're doing is casually accusing someone else of being racist.

That being said, I think it's a little hypocritical to insult someone for dropping casually racist content into their technical work when you're doing something very similar.


But your premise, that the goal is to intentionally signal to white people, is sketchy. It's not, but I guess you think it is?


As a twenty-something white male software engineer, I found it humorous and it gave me a laugh!


I think you're overreacting to the author's attempt at tongue-in-cheek humor... After all, the explicit sarcasm, emphasised by quoting 'cool', turns the tables on the 'twenty-something white males' and makes 'them' (the majority) look the fool. Still, that doesn't excuse sexist/ageist remarks, even if they're just pointing out industry stereotypes.


There are so many Indians and Chinese in the industry that I doubt that "twenty-something white males" are the majority.


Exactly


WTF with the casual racism, too, and the fact that no one in this thread mentioned that part?


WTF with you failing to notice the word "white"?


What makes you think he failed to notice the word "white"?


"The web servers might not have cared about the traffic, but it turns out that you can only look up domains so quickly before a DNS server starts to question your intentions!"

s/DNS server/third party open resolver/

IME, querying an authoritative server for the desired name triggers no such limitations.

One does not even need to use DNS to get the IP addresses of those authoritative servers, if the zone file is made available to the public for free, as most are under ICANN rules.
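
For illustration, a minimal sketch with the dnspython package that asks a zone's own authoritative nameserver instead of a shared resolver (it still uses one recursive lookup to find that nameserver):

  # Sketch: query a domain's authoritative nameserver directly rather
  # than hammering a third-party open resolver. Assumes dnspython 2.x.
  import dns.resolver

  def resolve_authoritatively(domain):
      # One recursive lookup to find the zone's nameservers...
      ns_host = str(dns.resolver.resolve(domain, "NS")[0].target)
      ns_ip = str(dns.resolver.resolve(ns_host, "A")[0])
      # ...then send all further queries straight to the authority.
      direct = dns.resolver.Resolver(configure=False)
      direct.nameservers = [ns_ip]
      return [str(r) for r in direct.resolve(domain, "A")]

  print(resolve_authoritatively("example.com"))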

I have thought about building a database of robots.txt many times. IMO, robots.txt has an important role besides thwarting "bots". It can thwart humans as well. It can be used to make entire websites "disappear" from the Internet Archive Wayback Machine.

Perhaps others are making mirrors of the IA.

However, I have thought it could be useful to monitor the robots.txt files of important websites more frequently than the IA does, in order to (if possible) preemptively archive the IA's collections if robots.txt changes are ever detected that would effectively "erase" them from the IA.
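
A rough sketch of the kind of monitor I have in mind, assuming the requests package (the watchlist and the hourly polling interval are made up):

  # Sketch: poll a watchlist of domains' robots.txt files and flag any
  # change, e.g. a new blanket Disallow that could hide a site from the
  # Wayback Machine. The domains and interval are placeholders.
  import hashlib
  import time

  import requests

  WATCHLIST = ["example.com", "example.org"]
  seen = {}

  def check(domain):
      body = requests.get(f"https://{domain}/robots.txt", timeout=10).text
      digest = hashlib.sha256(body.encode()).hexdigest()
      if domain in seen and seen[domain] != digest:
          print(f"robots.txt changed for {domain}")
          if "Disallow: /" in body:
              print(f"  {domain} may now be blocked from archiving")
      seen[domain] = digest

  while True:
      for domain in WATCHLIST:
          check(domain)
      time.sleep(3600)  # poll hourly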

Perhaps the greatest thing about robots.txt is that it is "plain text". This "rule" seems to be ubiquitously honoured. Did the author ever find any html, css, javascript or other surprises in any robots.txt file?


I specifically go after sites for archiving that block the Internet Archive in their robots.txt file.

The Internet Archive is also modifying its policy on retroactive blocking using robots.txt, although I don’t have the blog post link handy at the moment.

If you’d like to mirror certain Internet Archive contents, every item is served as a torrent.
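
If it helps, here's a sketch using the internetarchive Python package (pip install internetarchive); the item identifier below is made up, and items typically expose an <identifier>_archive.torrent file:

  # Sketch: fetch just an item's torrent file with the internetarchive
  # package. The identifier "some-item" is a placeholder.
  from internetarchive import download

  download("some-item", glob_pattern="*_archive.torrent")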


The Wayback Machine data is not available in bulk, only via the web interface.


The history presented in this post was very interesting, but the analysis ended up disappointing. The article ends just after they manage to narrow their sample of robots.txt files to exclude duplicate and derivative files. They don't even present any summary statistics for this filtered sample.


Surprisingly interesting post that goes into the history of robots.txt and details how it is not, in fact, a W3C standard or a legal requirement.


It does seem obsessed with the non-standard Crawl-delay directive. Interestingly, it doesn't mention robots.txt files that have a BOM, which can stop them from working; that's why best practice is to have a comment or a blank line at the top of your robots.txt.


That's a best practice for everyone?


Yep, if your first line is a Disallow and you have a text file with a BOM, it will be ignored.
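
A quick demonstration in Python, assuming a naive line-based parser:

  # A UTF-8 BOM attaches itself to the first line, so a naive parser no
  # longer recognizes the leading directive.
  data = b"\xef\xbb\xbfUser-agent: *\nDisallow: /\n"

  first = data.decode("utf-8").splitlines()[0]
  print(first.startswith("User-agent:"))  # False; line is "\ufeffUser-agent: *"

  # Decoding with utf-8-sig strips the BOM, so parsing works again.
  first = data.decode("utf-8-sig").splitlines()[0]
  print(first.startswith("User-agent:"))  # True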


I was more thinking that most webmasters don't use editors and OSes where a BOM might be emitted.


Honestly, I'm kind of surprised that turnitin's bot listens to robots.txt, or that the 'anti copyright infringement' bots do the same. It seems like it provides a very simple way for a cheating site to thwart their entire 'system'.

But hey, I guess it's one of those cases where the law and basic ethics clash a bit; with certain laws saying 'unauthorised' access to a server is illegal, ignoring robots.txt would leave them under fire for that instead.



