> It wouldn't be a stretch at all to take one of the self driving AI guys and have them evaluate say, an ML classifier used to flag spam/abuse content.
Facebook has a years-old bug in the ML classifier used to flag spam/abuse content in Facebook Groups. Its exact behavior has morphed over time as they've flailed at fixing it, but for several straight months that behavior was "posting to a Group via the Graph API fails with 'unknown error' if the post includes a dollar sign." This, in an API used by tens of thousands of apps and many millions of users, at one of the richest (in both money and programmers) companies on the planet.
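For concreteness, here's a minimal sketch of the kind of call that was failing. The group ID, token, and API version are placeholders, and the second call's failure reflects the behavior described above during the months the bug was in that state, not the API's documented contract:

```python
import requests

GRAPH_URL = "https://graph.facebook.com/v12.0"  # version is illustrative
GROUP_ID = "YOUR_GROUP_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def post_to_group(message: str) -> dict:
    """POST a message to the Group's feed and return the JSON response."""
    resp = requests.post(
        f"{GRAPH_URL}/{GROUP_ID}/feed",
        data={"message": message, "access_token": ACCESS_TOKEN},
    )
    return resp.json()

# An ordinary post goes through fine...
print(post_to_group("Tickets on sale this weekend"))

# ...while the identical post with a dollar sign came back with an
# unhelpful "unknown error" while the classifier bug was in this state.
print(post_to_group("Tickets on sale this weekend for $10"))
```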
ML is often a black box "computer says no" scenario with little meaningful ability to debug.