I don't understand why Google can't figure this out and remove these clones. They can do much harder things. Why can't they downrank sites with duplicated text content or -- better still -- a huge presence of ads?
I suppose it's mostly a social/legal problem. Google could get rid of 90% of those scrapers by basically hardcoding a preference for StackOverflow whenever it has the same content as another site. Same with sites like Wikipedia. But then obviously people will cry foul (the scrapers and SEO people will probably be the loudest to complain). So whatever solution Google ships has to be general enough that people won't call it unfair, and that makes the problem much more difficult.
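The "same content" check itself isn't the hard part. A crude version of that heuristic can be sketched with word shingles and Jaccard similarity -- this is purely illustrative, not how Google actually does it, and the function names and the 0.8 threshold are made up:

```python
# Hypothetical sketch: flag a page as a near-copy of a canonical source
# (e.g. StackOverflow) by comparing word shingles. Threshold is illustrative.

def shingles(text, k=5):
    """Break text into overlapping k-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_near_copy(candidate_text, canonical_text, threshold=0.8):
    """True if the candidate page substantially duplicates the canonical page."""
    return jaccard(shingles(candidate_text), shingles(canonical_text)) >= threshold
```

In practice you'd use something scalable like MinHash or SimHash rather than exact set intersection, but the ranking decision -- detect the duplicate, then prefer the original -- is the socially contentious part, not the detection.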
They have made inroads in the past, but lately copycats have been cropping up in results again (iswwwup.com is one I've been seeing a lot). I imagine a future ranking algo update will fine-tune this further. To be sure, it's going to be an arms race, since the only purpose of these sites is AdSense siphoning.