The next steps towards standardization began when Google, Yahoo, and Microsoft came together to define and support the Sitemaps protocol in 2006. Then in 2007, they announced that all three would support the Sitemap directive in robots.txt files. And yes, that important piece of internet history from the blog of a formerly $125 billion company now only exists because it was archived by Archive.org.
Has the IA ever discussed why they retroactively apply robots.txt? I can see the rationale (though I don't necessarily think it is the best idea given the IA's goals) for respecting it at crawl time, but applying it retroactively always felt unnecessary to me.
Just to clarify, this was intended as a tongue-in-cheek critique of tech companies that actively project superficial images designed to appeal to specific hiring demographics. I'm sorry if the meaning didn't come across as clearly as I had hoped, but the statement was meant as a condemnation of sexism and ageism rather than an endorsement.
I see what you're saying. And to be fair, I disagree that you're being casually racist. What you're doing is casually accusing someone else of being racist.
That being said, I think it's a little hypocritical to insult someone for dropping casually racist content into their technical work when you're doing something very similar.
I think you're overreacting to the author's attempt at tongue-in-cheek humor... After all, the explicit sarcasm, emphasised by the scare quotes around 'cool', turns the tables on the 'twenty-something white males' and makes 'them' (the majority) look the fool. Still, that doesn't excuse sexist/ageist remarks, even if they are just pointing out industry stereotypes.
"The web servers might not have cared about the traffic, but it turns out that you can only look up domains so quickly before a DNS server starts to question your intentions!"
s/DNS server/third party open resolver/
IME, querying an authoritative server for the desired name triggers no such limitations.
One does not even need to use DNS to get the IP addresses of those authoritative servers, if the zone file is made available for free to the public, as most are under ICANN rules.
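In case it's useful, here's a minimal sketch of the direct-query approach in Python using the dnspython package (the package choice and the example domain are my own, not the parent's):

    # Query a TLD's authoritative nameserver directly, bypassing open
    # resolvers and their rate limits. Assumes dnspython is installed
    # (pip install dnspython); "example.com" is a placeholder.
    import dns.message
    import dns.query
    import dns.resolver

    # Look up an authoritative nameserver for the .com zone.
    ns_host = str(dns.resolver.resolve("com.", "NS")[0].target)
    ns_ip = str(dns.resolver.resolve(ns_host, "A")[0])

    # Ask that server directly for the delegation of a specific domain.
    query = dns.message.make_query("example.com.", "NS")
    response = dns.query.udp(query, ns_ip, timeout=5.0)
    print(response.authority or response.answer)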
I have thought about building a database of robots.txt many times. IMO, robots.txt has an important role besides thwarting "bots". It can thwart humans as well. It can be used to make entire websites "disappear" from the Internet Archive Wayback Machine.
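For reference, this is the kind of robots.txt entry that historically made a site vanish from the Wayback Machine (ia_archiver is the Internet Archive's crawler token):

    User-agent: ia_archiver
    Disallow: /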
Perhaps others are making mirrors of the IA.
However, I have thought it could be useful to monitor the robots.txt of important websites more frequently than the IA does, in order to (if possible) preemptively archive the IA's collections whenever a robots.txt change is detected that would effectively "erase" them from the IA.
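A rough sketch of that monitoring idea in Python (the watch list, polling interval, and alerting are all placeholders of mine):

    # A naive robots.txt change monitor: poll each watched URL, hash the
    # body, and flag any change so the affected IA collections could be
    # mirrored before the new rules take effect.
    import hashlib
    import time
    import urllib.request

    WATCH = ["https://example.com/robots.txt"]  # hypothetical watch list
    seen = {}

    while True:
        for url in WATCH:
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    digest = hashlib.sha256(resp.read()).hexdigest()
            except OSError as exc:
                print(f"{url}: fetch failed ({exc})")
                continue
            if url in seen and seen[url] != digest:
                print(f"{url}: robots.txt changed; archive now")
            seen[url] = digest
        time.sleep(3600)  # poll hourly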
Perhaps the greatest thing about robots.txt is that it is "plain text". This "rule" seems to be ubiquitously honoured. Did the author ever find any HTML, CSS, JavaScript, or other surprises in any robots.txt file?
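I haven't run a survey myself, but a naive check for such surprises might look like this (the URL and the heuristics are placeholders):

    # Sniff a robots.txt for non-plain-text content: a wrong Content-Type
    # or a body that opens like an HTML document.
    import urllib.request

    with urllib.request.urlopen("https://example.com/robots.txt") as resp:
        content_type = resp.headers.get("Content-Type", "")
        body = resp.read(65536)

    looks_like_html = body.lstrip().lower().startswith((b"<!doctype", b"<html"))
    if "text/plain" not in content_type or looks_like_html:
        print(f"suspicious robots.txt: Content-Type={content_type!r}")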
I specifically go after sites for archiving that block the Internet Archive in their robots.txt file.
The Internet Archive is also modifying its policy on retroactive blocking using robots.txt, although I don’t have the blog post link handy at the moment.
If you’d like to mirror certain Internet Archive contents, every item is served as a torrent.
The history presented in this post was very interesting, but the analysis ended up disappointing. The article ends just after they managed to narrow their sample of robots.txt files to exclude duplicate and derivative files. They don't even present any summary statistics for this filtered sample.
The article does seem obsessed with the non-standard Crawl-delay directive. Interestingly, it doesn't mention those robots.txt files that have a BOM (byte order mark), which can stop them working; that's why best practice is to have a comment or a blank line at the top of your robots.txt.
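For the curious, the BOM is trivial to check for; a sketch (the URL is a placeholder):

    # Check whether a robots.txt begins with a UTF-8 BOM. A parser that
    # doesn't strip it sees b"\xef\xbb\xbfUser-agent" instead of
    # "User-agent" and ignores that directive, which is why a leading
    # comment or blank line makes a safe buffer.
    import urllib.request

    with urllib.request.urlopen("https://example.com/robots.txt") as resp:
        body = resp.read()

    if body.startswith(b"\xef\xbb\xbf"):
        print("UTF-8 BOM present; the first directive may be silently ignored")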
Honestly, I'm kind of surprised that Turnitin's bot listens to robots.txt, or that the 'anti copyright infringement' bots do the same. It seems to provide a very simple way for a cheating site to thwart their entire 'system'.
But hey, I guess it's one of those cases where the law and basic ethics clash a bit: with certain laws making 'unauthorised' access to a server illegal, ignoring robots.txt would leave them under fire for that instead.
The Internet Archive (archive.org) is currently running their end-of-year donation drive, if you value the work they do it's a good time to donate: https://archive.org/donate/
(and on the topic of robots.txt, it sounds like they're moving in the direction of disallowing people from using them indiscriminately to block access to valuable archival materials: https://blog.archive.org/2017/04/17/robots-txt-meant-for-sea... )