I also imagine that they have sanity checks for their systems: either re-running classifiers against previously identified spam, or manually spot-checking deleted items.
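For anyone curious what such a check might look like, here is a minimal sketch of the first idea: re-run the current classifier over a corpus of previously confirmed spam and alert if the detection rate drifts downward. Everything here (`classify`, the corpus, the `ALERT_THRESHOLD`) is hypothetical and says nothing about FB's actual pipeline.

```python
from typing import Callable, Iterable

ALERT_THRESHOLD = 0.95  # hypothetical: expect >=95% of known spam to still be caught


def spam_recall(classify: Callable[[str], bool], known_spam: Iterable[str]) -> float:
    """Fraction of previously identified spam the current model still flags."""
    items = list(known_spam)
    if not items:
        raise ValueError("need at least one known-spam item to check")
    caught = sum(1 for item in items if classify(item))
    return caught / len(items)


def run_sanity_check(classify: Callable[[str], bool], known_spam: Iterable[str]) -> None:
    recall = spam_recall(classify, known_spam)
    if recall < ALERT_THRESHOLD:
        # In a real pipeline this would page someone or block a model rollout.
        print(f"ALERT: recall on known spam dropped to {recall:.1%}")
    else:
        print(f"OK: recall on known spam is {recall:.1%}")


if __name__ == "__main__":
    # Toy stand-ins: a trivial keyword classifier and a tiny labeled corpus.
    toy_classifier = lambda text: "free money" in text.lower()
    toy_known_spam = ["FREE MONEY click here", "Claim your free money now", "hello mom"]
    run_sanity_check(toy_classifier, toy_known_spam)
```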
I had hardly posted anything, yet somehow got flagged with no notification; I only found out when I tried to log in one day.
The consensus seems to be that when this happens it's generally because of illegal content, but given that my posts were all stuff for family/friends, I'm not sure what could have triggered it.
A bit annoying, and my linked Instagram account is just fine (same content).
This whole experience has colored my view of our AI future quite negatively. I have visions of "computer says no" cropping up all over the place with no ability to appeal or dispute, and I've turned from a neutral observer of GDPR into a strong supporter, even though I'm not an EU citizen.
The purpose of the piece seems to be showing FB's proactive measures for detecting spam/fakery, and the sheer quantity it manages to process. Key numbers:
- We took down 837 million pieces of spam in Q1 2018 — nearly 100% of which we found and flagged before anyone reported it...
- In Q1, we disabled about 583 million fake accounts — most of which were disabled within minutes of registration.
- Overall, we estimate that around 3 to 4% of the active Facebook accounts on the site during this time period were still fake.
- We took down 21 million pieces of adult nudity and sexual activity in Q1 2018 — 96% of which was found and flagged by our technology before it was reported.
- For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 — 86% of which was identified by our technology before it was reported to Facebook.
They also mention that their hate speech tech "doesn't work that well": of the 2.5M pieces of hate speech removed, only 38% were flagged by FB's tech.