
> You might even convince yourself that these questions are “privacy preserving,” since no human police officer would ever rummage through your papers, and law enforcement would only learn the answer if you were (probably) doing something illegal.

Something I've started to see happen, but rarely see mentioned, is the effect automated detection has on the organizations that use it: as detection becomes more automated (previously hand-authored algorithms, now large AI models), there's less money available for individual case workers, and more trust placed at the managerial level in the automated detection. This turns false positives into major frustrations, since it's hard to get in touch with a person to resolve the issue. When dealing with businesses it's frustrating, but as these systems see more use in law enforcement, it could be life-ruining.

For instance - I got flagged for illegal reviews on Amazon years ago and spent months trying to make my case to a human. Every year or so I try to raise the issue again so I can leave reviews, but it gets nowhere. Imagine this happening with a serious criminal issue: given the years-long backlog in some courts, it could ruin someone's life.

More automated detection can work (and honestly, it's inevitable), but it has to acknowledge that false positives will happen and allocate enough people to resolve them. As it stands, these detection systems get built and human case workers immediately get laid off. There's this assumption that detection systems REPLACE humans, when really they should augment and focus human case workers so you can do more with less - the human aspect needs to be included in the budgeting.

But the incentives aren't there, and the people making the decisions aren't the ones working the actual cases, so they aren't confronted with the problem. For them, the question is: why save $1m when you could save $2m? With large AI models making it easier and more effective to build automated detection, I expect this problem to get significantly worse over the next few years.


>Imagine this happening with a serious criminal issue: given the years-long backlog in some courts, it could ruin someone's life.

It can be much scarier.

There was a case in Russia where a scientist was accused of a murder that happened 20 years ago, based on a 70% face recognition match and a false identification by a criminal who named him as an accomplice. [0] He spent 10 months in jail during the "investigation", despite being incredibly lucky to have an alibi -- archival records from the institute where he worked, proving he was on an expedition far from Moscow at the time. He was eventually freed, but I'm afraid the police investigators who used a very weak face recognition match as a way to improve their performance stats are still working in the police.

[0] https://lenta.ru/articles/2024/04/03/scientist/


Grave consequences are not a rarity. Automated decision-making in immigration and housing classifies people with zero recourse or transparency, locking them out of a place to live (and, in the case of Australia, locking them up in offshore detention for years).

I know it's the wrong way to think, but things like this make me glad to have a digital footprint… statistically, there's a good chance I'm liking a TikTok comment or reading an HN thread at the exact time of any given crime.

That's not going to get you off the hook. Anything that could be faked via account sharing is going to be discarded (not to mention that TikTok and similar platforms won't collaborate with you to build an alibi by giving you access to that data - only with the police to build a case).

And there are probably other people in jail, convicted using the same method, who were just unlucky enough not to have a bulletproof alibi?

I don't know, but it seems quite likely, unfortunately. There have been quite a few other cases where police planted fake evidence.

It's not the only problem with technology -- it's claimed that there have been over a hundred cases of false DNA matches not caused by malice or processing errors. [0] In theory, a DNA match must not be treated by courts as 100% accurate, but in practice it is.

On the other hand, there have been cases where human rights advocates or journalists claimed that innocent people had been jailed, but that turned out to be false - like people getting caught on camera committing the same kind of crime again after serving their sentence.

[0] https://www.kommersant.ru/doc/5825384


Same with stenographs. We keep inventing new methods for the police to make up evidence.

I don't understand, could you elaborate on that?

I apologize, I got the terms mixed up; I meant a polygraph.

Yep, I got the impression that courts consider polygraph results only when they implicate the accused. Good thing that by law they cannot force you to be questioned with a polygraph attached.

[flagged]


https://www.washingtonpost.com/local/crime/fbi-overstated-fo...

The notion that this kind of thing couldn't happen in the West is laughable.


Or this: "A former forensic scientist intentionally manipulated DNA evidence during her 29-year career at the Colorado Bureau of Investigation, casting doubt on at least 652 criminal cases she handled, including some of the most high-profile trials, according to investigation findings released by the agency Friday." [0]

[0] https://coloradosun.com/2024/03/08/yvonne-missy-woods-cbi-in...


You chime in to showcase another thing the USA gets wrong, but you overlook the point that russia is in no way comparable to this.

In the article you linked, there is a criminal investigation, an audit, and a re-test of evidence in order "to ensure the accuracy and completeness of its entire catalog of records", because someone manipulated DNA evidence.

In russia, there is no investigation, because they make up evidence against their political enemies all the time. It feels like you have some incentive to miss the elephant in the room.


The notion that the judicial system in democratic countries is in any way comparable to the judicial system in the dictatorship of russia is laughable.

In democratic countries, such errors with forensic evidence make the news because they are so unusual. In dictatorships like russia, nobody expects forensic methods to be valid, because the court verdict does not depend on evidence; it depends on your connections to the dictator.


I'm not interested in figuring out which judicial system is worse - I'm sure russia would win. But you give the western system far too much credit, because the police frequently rely on unscientific methods to convict people, even after those methods are known to be flawed. It's not just the article I linked: polygraphs, drug-sniffing dogs that have been shown not to actually be effective and mostly respond to their handlers. I don't know what things are like in russia, but I certainly don't like people pretending there is no problem just because other places are worse. And I have very little hope that AI won't be the next thing on that list.

>which judicial system is worse, I'm sure russia would win

Having an indefinite moratorium on the death penalty is a big plus, though.


> [...] my conclusion is that you're here to spill russian propaganda. [...]

The case described by the parent is that of someone who was wrongly imprisoned for 10 months on the basis of a bogus application of faulty technology, even though they had a solid alibi. Therefore, the comment does not reflect well on Russia, the Russian state, or the Russian government, like... at all.

If there is a propaganda dimension to this (which I doubt), it is certainly not an attempt to say something nice about the Russian justice system.


It's a subtle form of propaganda. Same category as the "funny russian car crashes" or "awesome chinese acrobat" videos that are on reddit's front page all the time. One might wonder why it's always those two countries, and not others, that get thousands of upvotes.

The comment I criticized falsely implies that there is due process in russia, and that technical faults lead to unfair results for the people who are accused of something.

It is a cherry-picked example; the vast majority of russian court cases are decided without due process, because it is a dictatorship. If you try to get justice because you were harmed by corrupt officials or the tsar, you're out of luck. Lawyers are getting shot in the street as a birthday present for putin. There are lots of examples. And once you're in prison, they'll send you to the front lines to murder Ukrainians.


I'm pretty relaxed about all this, but just a thought: Your comments in this thread seem very eager to talk about Russia instead of the actual topic of the thread, which is privacy and AI.

You wrote those comments in a very repetitive and mission-driven way, which does not inspire confidence in the absence of ulterior motives.


The UK Post Office scandal is bone-chilling.

Update this to a world where every corner of your life is controlled by a platform monopoly that doesn't even provide the most bare-bones customer service, and yeah, this is going to get a lot worse before it gets better.


And that's the early game.

Imagine when AI is monitoring all internet traffic and arresting people for thoughtcrime.

What wasn't feasible before is now well within reach, and the consequences are dire.

Though of course it won't happen overnight. First they will let AI encroach on every available space (backed by enthusiastic techbros). THEN, once it's established, boom. Authoritarian police state dystopia times 1000.

And it's not like they need evidence to bin you. They just need inference. People who share your psychological profile will act, speak, and behave in a similar way to you, so you can be put in the same category. When enough people in that category are tagged as criminals, you will be too.

All because you couldn't be arsed to write some boilerplate
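
A minimal sketch of the guilt-by-association scoring described above - every name, category, and threshold here is made up purely for illustration, not taken from any real system:

    # Hypothetical sketch: flag people solely because their behavioral
    # "category" already contains many flagged people. No evidence about
    # the individual is ever consulted -- which is exactly the problem.
    from collections import defaultdict

    # Toy data: person -> (behavioral profile label, already tagged as criminal?)
    people = {
        "alice": ("night-owl-gamer", False),
        "bob":   ("night-owl-gamer", True),
        "carol": ("night-owl-gamer", True),
        "dave":  ("early-riser",     False),
    }

    FLAG_THRESHOLD = 0.5  # arbitrary cutoff for flagging a whole category

    by_category = defaultdict(list)
    for name, (category, tagged) in people.items():
        by_category[category].append(tagged)

    for name, (category, tagged) in people.items():
        rate = sum(by_category[category]) / len(by_category[category])
        if not tagged and rate >= FLAG_THRESHOLD:
            # "alice" gets flagged with zero individual evidence.
            print(f"{name}: flagged by association (category rate {rate:.0%})")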


It's already arresting the wrong people [0].

[0] https://www.theregister.com/2023/08/08/facial_recognition_de...


We need strong and comprehensive regulation. Some places have enacted partial solutions, but none anywhere near as complete as needed: the EU has the GDPR and some early AI laws, and India has the IT Act, which requires companies to provide direct end-user support.

That's why there are transparency laws that indirectly forbid the use of black-box decision systems like these for anything government-related.

This exact scenario is described in the 1965 short story "Computers Don't Argue".

You can find it at the following link, on the third page of the PDF (labelled as page 84): https://nob.cs.ucdavis.edu/classes/ecs153-2021-02/handouts/c...

It's amazing how 60 years ago somebody anticipated these exact scenarios, yet we didn't take their cautionary tale seriously in the slightest.


Wow, that story is so grim and prescient.

There was a good thread on this phenomenon (called "accountability sinks") [1]

[1] https://news.ycombinator.com/item?id=41891694


It could also be used to eliminate political opponents, minorities, etc. Persecution based on collective guilt derived from digital footprints has never been easier.

Also, AI for accountability laundering: it gives plausible deniability. It's a sociopathic manager's dream.

This. They're digital sniffer dogs, a pretext to lend credibility to vibes-based policing.


