I don't think it's the automated detection that's a problem. It's the "you cannot object, you have no recourse, you cannot reply, you cannot show your license, and when we decide this has happened enough times, you cannot use our platform" that's the problem.
Yes, an automated system that flags content for human review would be more reasonable. But the review step slows everything down, and you lose much of the automation's advantage: either throughput gets throttled by the reviewers, or you have to spend far more on manpower.