In the UK, transient copies made for the purposes of display, caching, &c. have been cleared as non-infringing acts. Google's caches, for example, link back to the original author by way of attribution. But archiving and reproduction without any attribution at all?
Also, whilst Google may have been given a pass in robots.txt (were sub-sites allowed individual robots.txt files? I never had one) to crawl the site, declaring oneself as Googlebot in order to spider and archive the whole site could well show bad faith?
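To illustrate the point: a robots.txt that whitelists Googlebot but bars everyone else means an archiving crawler only gets in by masquerading. This is a hypothetical sketch (the robots.txt content and the "ArchiveBot" agent name are made up for illustration), using Python's standard `urllib.robotparser`:

```python
from urllib import robotparser

# Hypothetical robots.txt of the kind a site might have carried:
# Googlebot is allowed everywhere, all other agents are barred.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

url = "http://www.geocities.com/example/page.html"
print(rp.can_fetch("Googlebot", url))   # True  - Googlebot gets a pass
print(rp.can_fetch("ArchiveBot", url))  # False - everyone else is shut out
```

Under rules like these, an archiver that identifies itself honestly is refused, so the only way to spider the whole site is to present itself as Googlebot, which is exactly the bad-faith scenario above.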
Just wondered if you'd discussed the copyright position, perhaps with Geocities. Maybe there was a disclaimer that effectively released content as PD, though I doubt it.
There's a brief discussion on WebmasterWorld, http://www.webmasterworld.com/foo/3898789-2-30.htm , but, more interestingly, an idea to "rape" Geocities for content for ad-serving sites: see http://ducedo.com/free-content-geocities/ .
Someone on WebmasterWorld wondered whether Google might back-rate based on content, so that highly rated pages that disappear from Geocities (&c.) could be given a boost in the SERPs.
OT: did you use the current username-based addressing too, or are you only linking the old "campus" names? Can't remember mine; it was in RT somewhere, IIRC.