It may not be right to blame AI, but it's certainly the case that they were presenting it as "the system says there was a shot fired here." The bigger point is that the "system" can't be trusted, and the fact that they need to manually classify things is proof of that. (It's way more complicated than just an audio recording. It's recordings from numerous locations, used to triangulate the original location without being confused by echoes and other things.)
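For a sense of what that triangulation involves, here is a minimal, purely illustrative sketch of time-difference-of-arrival (TDOA) multilateration. The sensor layout, speed of sound, and brute-force grid search are all my own assumptions for the example; this is not ShotSpotter's actual algorithm, and a real system additionally has to contend with echoes, noise, and classifying whether the sound was a gunshot at all.

```python
import math

# Hypothetical sensor positions (meters) on a flat plane.
SENSORS = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]
SPEED_OF_SOUND = 343.0  # m/s at ~20 °C; varies with weather in practice

def arrival_times(source, t0=0.0):
    """Time each sensor would hear a shot fired at `source` at time t0."""
    return [t0 + math.dist(source, s) / SPEED_OF_SOUND for s in SENSORS]

def locate(times, step=5.0):
    """Brute-force grid search: pick the grid point whose predicted
    time differences of arrival best match the observed ones."""
    best, best_err = None, float("inf")
    for xi in range(101):
        for yi in range(101):
            p = (xi * step, yi * step)
            pred = arrival_times(p)
            # Compare differences relative to sensor 0, so the unknown
            # firing time t0 cancels out of the comparison.
            err = sum((pt - pred[0] - (t - times[0])) ** 2
                      for pt, t in zip(pred, times))
            if err < best_err:
                best, best_err = p, err
    return best

true_source = (120.0, 340.0)
print(locate(arrival_times(true_source)))  # recovers (120.0, 340.0)
```

Even this toy version only works because the simulated arrival times are noise-free; add a few milliseconds of jitter or a reflected path and the estimate degrades, which is exactly why the output of such a system needs human review rather than blind trust.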
> It may not be right to blame AI, but it's certainly the case that they were presenting it as "the system says there was a shot fired here."
I think a crucial distinction is to clarify who “they” is: Who was making the claim? From the article, it seems that the prosecutors may have been the ones exaggerating the evidence:
> Prosecutors said ShotSpotter picked up a gunshot sound where Williams was seen on surveillance camera footage in his car, putting it all forward as proof that Williams shot Herring right there and then.
Naturally the defense wanted to see the actual evidence, so that’s what they got down to.
I know everyone wants to vilify AI, but I think the actual bad actor in this article might be the prosecutor trying to misrepresent the evidence.
>I think a crucial distinction is to clarify who “they” is: Who was making the claim? From the article, it seems that the prosecutors may have been the ones exaggerating the evidence:
The prosecutors are the "they". The further removed you are from the people who actually know the 'AI', the more the validity of its evidence rests merely on what others have said.
The people working on the system know it can be faulty. Other software developers have an idea that it could be faulty. Prosecutors are told that it's alright, and they turn around and argue it in a court of law. The further down the chain you go, the fewer people have any idea that the system is faulty. Eventually people will go "the machine says so, therefore it must be so".
>Down the line people will go "the machine says so, therefore it must be so".
That's the real feature of AI: decision laundering. A flawed decision made by someone for their own gain now appears impartial and trustworthy, and comes without accountability.
Sure. But the point is that they misrepresented the evidence by presenting the system as trustworthy. Even without tampering with the evidence, the system isn't trustworthy beyond a reasonable doubt.