Because we don't understand what's hard, we think you're not really trying, and then we make up evil reasons to explain that.
I believe that if people better understood the difficulties of spam fighting, they would be more sympathetic.
Not necessarily. The rate at which Google refreshes its crawl of a site, and how deep it crawls, depend on how often the site updates and on its PageRank. If a scraper site updates more often and has higher PageRank than the sites it's scraping, Google is more likely to find the content there than at its source. Identifying the scraper's copy as canonical just because it was encountered first would be wrong.
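The failure mode can be sketched as a toy crawl scheduler. Everything here is hypothetical (the site names, the priority formula); it is not Google's actual algorithm, just a minimal illustration of why "first crawled wins" is a bad canonicalization rule:

```python
# Toy sketch (all names and numbers hypothetical): crawl order follows
# update frequency and PageRank, not authorship, so the first site seen
# with a given piece of content may be a scraper, not the original.

def crawl_priority(site):
    # Hypothetical scheduler: sites that update often and have higher
    # PageRank are crawled sooner (more negative = crawled earlier).
    return -(site["updates_per_day"] * site["pagerank"])

sites = [
    {"name": "original-blog.example", "updates_per_day": 1, "pagerank": 3,
     "content": "my essay"},
    {"name": "scraper.example", "updates_per_day": 50, "pagerank": 6,
     "content": "my essay"},  # scraped copy of the same text
]

seen = {}  # content -> site first encountered with it
for site in sorted(sites, key=crawl_priority):
    seen.setdefault(site["content"], site["name"])

# The scraper is crawled first, so "first seen = canonical" picks it.
print(seen["my essay"])  # scraper.example
```

The point of the sketch is that crawl order encodes freshness and link authority, not provenance, so any canonicalization rule built purely on encounter order inherits that bias.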