Seems much simpler.
Basically it takes a random word from a dictionary of English words, appends a random extension, and checks whether the site is up; if it is, it returns the domain, otherwise it loops until it finds one.
Found some weird & wonderful stuff through it!
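The loop described above can be sketched in a few lines of Python. Everything here is an assumption for illustration: the tiny word list and TLD list are placeholders (a real tool would load a full dictionary file), and "up" is loosely defined as "answers an HTTP request at all".

```python
import random
import urllib.request

# Placeholder word and extension lists -- a real version would load a
# full dictionary (e.g. /usr/share/dict/words) and a longer TLD list.
WORDS = ["apple", "river", "cloud", "stone"]
TLDS = [".com", ".net", ".org", ".io"]

def random_domain(words=WORDS, tlds=TLDS):
    """Glue a random dictionary word onto a random extension."""
    return random.choice(words) + random.choice(tlds)

def is_up(domain, timeout=3):
    """Crude liveness check: does the site answer an HTTP request?"""
    try:
        urllib.request.urlopen("http://" + domain, timeout=timeout)
        return True
    except Exception:
        return False

def find_live_domain(attempts=100):
    """Loop until a randomly generated domain responds, or give up."""
    for _ in range(attempts):
        domain = random_domain()
        if is_up(domain):
            return domain
    return None
```

Capping the attempts is a small safety net the original description doesn't mention, so the loop can't spin forever if nothing resolves.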
For companies that are trying to sell a product or are trying to present users with content relevant to them, that would seem to call for some kind of content curation.
I think we would be surprised by how many online retailers still base recommendations solely on your buying history (and perhaps the buying history of others who have bought items similar to yours). To improve these recommendation engines, we should improve the underlying algorithms.
To improve said algorithms, I think graph databases like Neo4j have an important role to play, as they are built for exactly this sort of thing. To present better, more relevant content, we'd need two primary things: more information about the user in question (and I'm all for transparency both in collecting this data and in telling the user how it will be used), and better algorithms to leverage that information. Given how much faster a graph database can answer a query traversing multiple levels of depth than a traditional RDBMS can answer the equivalent chain of joins, I'd say graph databases are a vital component in making this kind of thing better for everyone.
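To make the multi-hop idea concrete, here is a plain-Python sketch of a depth-limited traversal over a user-item purchase graph. The toy data and the scoring rule (count how often an item shows up among users reached within N hops) are my own illustrative assumptions, not Neo4j's API; the point is that each extra hop is a cheap frontier expansion here, whereas an RDBMS would need one self-join per hop.

```python
from collections import Counter, defaultdict

# Toy purchase data -- hypothetical, for illustration only.
purchases = {
    "alice": {"book", "lamp"},
    "bob":   {"book", "desk"},
    "carol": {"desk", "chair"},
}

def recommend(user, purchases, depth=2):
    """Walk the user -> item -> user graph `depth` hops out and
    score items owned by the users we reach along the way."""
    # Invert the purchase map: item -> set of users who bought it.
    buyers = defaultdict(set)
    for u, items in purchases.items():
        for item in items:
            buyers[item].add(u)

    frontier = {user}
    seen_users = {user}
    scores = Counter()
    for _ in range(depth):
        # Expand one hop: all unseen users who share an item
        # with someone on the current frontier.
        next_frontier = set()
        for u in frontier:
            for item in purchases[u]:
                next_frontier |= buyers[item] - seen_users
        seen_users |= next_frontier
        frontier = next_frontier
        # Score the newly reached users' items the target doesn't own.
        for u in frontier:
            for item in purchases[u] - purchases[user]:
                scores[item] += 1
    return [item for item, _ in scores.most_common()]

print(recommend("alice", purchases))  # -> ['desk', 'chair']
```

For "alice", hop one reaches "bob" (shared "book") and surfaces "desk"; hop two reaches "carol" (shared "desk") and adds "chair". In Cypher the same idea would be a single variable-length pattern match rather than an explicit loop.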
Making a page that offers tools to find random content on specific sites seems like a fun idea, maybe as a starting point for an even larger project.
Also, randomized jumps into the archives of existing long-lived blogs.