What strikes me as interesting regarding Google's automated approach to community regulation is the effect of focusing on action rather than intent. Making a perfect intent-detector is difficult; it's much easier to make an action-detector and penalize actions consistent with malicious behavior. This works as long as those actions are (a) necessary for malicious behavior and (b) easy to avoid while making good apps.
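To make the distinction concrete, here is a toy sketch of what an action-detector looks like: it never models intent at all, it only scores observable actions against a list of behaviors associated with malice. The action names, weights, and threshold below are invented for illustration and have nothing to do with Google's actual checks.

```python
# Hypothetical action-detector: score an app purely on observable actions
# that tend to accompany malicious behavior, and flag it past a threshold.
# All names and weights here are made up for the sake of the example.

SUSPICIOUS_ACTIONS = {
    "reads_contacts_without_related_feature": 3,
    "sends_sms_to_premium_numbers": 5,
    "uploads_device_id_to_third_party": 2,
}

FLAG_THRESHOLD = 5

def action_score(observed_actions):
    """Sum the weights of any observed actions we treat as suspicious."""
    return sum(SUSPICIOUS_ACTIONS.get(action, 0) for action in observed_actions)

def should_flag(observed_actions):
    """Flag the app based on observable behavior alone, never on intent."""
    return action_score(observed_actions) >= FLAG_THRESHOLD

# A legitimate app can avoid these actions cheaply (condition b), while a
# malicious app that needs them cannot (condition a).
print(should_flag(["uploads_device_id_to_third_party"]))   # False
print(should_flag(["sends_sms_to_premium_numbers",
                   "reads_contacts_without_related_feature"]))  # True
```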
The experiment now is how far developers will adjust their behavior to signal clean intent versus how far Google will refine its algorithm to judge intent accurately. Once Google has a reasonable policy in place, it can sit back and wait for developers to conform, and presumably those that can't conform are stuck precisely because the behavior they would have to change is the behavior that makes their software malicious or undesirable.
Humans already do this every day in conversation - I adjust my language and tone to signal non-malicious intent when making a statement I fear might be perceived as threatening or rude. But these are relatively easy fixes that don't hinder my communication much; I suppose it's still an open question whether Google's criteria unreasonably hinder the development process.
What strikes me as interesting regarding Google's automated approach to community regulation is the effect of focusing on action rather than intent
What's interesting about that? We're all guilty of putting action over intent. Look at A/B testing: the metrics we use have nothing to do with our users' intents. Fuck their intents, I'm optimizing to get as many of them as possible to give me money.
It's the exact same thing, just wearing a different color jacket. This time you just happen to be the subject rather than the administrator.