All of your points rely on the assumption that Craigslist "owns" all of the posts submitted. I'm not saying that's right or wrong, but if it is true, wouldn't that extend to Facebook owning all content submitted to its service, Twitter owning all tweets, Flickr owning all hosted photos, and Stack Overflow owning all submitted answers?
And there's no copyright infringement as long as your compilation, based on their publicly available data, is itself unique, which PadMapper's is. There is long-standing precedent for this, dating back to services derived from phone books (see Feist v. Rural).
In my interpretation, the issue is the content of the listings, not the compilation. He could compile the same data from several different sources and present the same result.
What constitutes a "unique compilation" is subject to interpretation in a court of law.
Adding, removing, or modifying items, or changing their arrangement or display, is sufficient to make a dataset distinct from the one Craigslist offers, even if it is largely derived from the Craigslist dataset.
Craigslist could argue a trespass-to-chattels tort or bring a civil suit over its ToS, but there isn't much it can do to protect a dataset.
The reason Facebook makes its content largely available only to logged-in users is to hide behind its ToS and prevent scraping, whether centralized or distributed.
So what you're suggesting is that if a service put "noindex" in its robots directives or meta tags, it would somehow be overstepping the bounds of what it can do with its users' content?
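For context, the signals being discussed are purely advisory hints to crawlers, not access controls. A minimal sketch of both (paths and values illustrative):

```html
<!-- In a page's <head>: asks compliant search engines not to index this page -->
<meta name="robots" content="noindex">
```

```
# robots.txt at the site root: asks compliant crawlers not to fetch these paths
User-agent: *
Disallow: /listings/
```

Note that both mechanisms rely entirely on the crawler's voluntary compliance; neither technically prevents a scraper from fetching or reusing the content, which is why the argument above turns on ToS and ownership rather than on these signals.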