American cops are using AI to draft police reports, and the ACLU isn't happy (theregister.com)
67 points by rntn 7 months ago | hide | past | favorite | 48 comments


The simple solution is for DAs to refuse to prosecute any crimes where the police reports were written with AI and judges to refuse to allow any evidence written or compiled by or with the assistance of AI.


The "simple solution" would be to make the use of AI illegal in the criminal justice system. Judges and prosecutors can't (or at least shouldn't be allowed to) simply refuse to prosecute crimes which have otherwise been legally processed and presented.


How simple is it to detect AI?


It is impossible to definitively identify AI, or to rule out generative AI use, in any given piece of writing. Detectors exist, but they all have high false-positive rates and unknown false-negative rates.

Why? There are at least 5 common generative AI services (which display different behaviors depending on how they're prompted and what has previously been in the context), there are hundreds of thousands of open models, millions of different ways you could set up retrieval-augmented generation, and effectively infinite ways to prompt.

That said, it is quite easy (for professors, for example) to pick up on common linguistic and structural patterns. (ChatGPT uses certain words more commonly, structures arguments in a particular way, and uses and misuses the same metaphorical structures.)
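A crude version of that professorial intuition can be sketched as a word-frequency heuristic. This is purely illustrative (the telltale word list and threshold are invented, not from any real detector), and it is exactly the kind of shallow check that produces the false positives mentioned above:

```python
# Toy stylometric check: flag text that over-uses words commonly
# associated with ChatGPT-style output. The word list and threshold
# are made up for illustration; real detectors are far more complex
# and still suffer high false-positive rates.
import re

TELLTALE_WORDS = {"delve", "tapestry", "multifaceted", "crucial", "nuanced"}
THRESHOLD = 0.01  # flag if more than 1% of tokens are telltale words

def looks_generated(text: str) -> bool:
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return False
    hits = sum(1 for t in tokens if t in TELLTALE_WORDS)
    return hits / len(tokens) > THRESHOLD

print(looks_generated("Let us delve into this multifaceted and nuanced tapestry."))  # True
print(looks_generated("The suspect fled north on Main Street."))                     # False
```

Note that an officer who writes tersely by hand would pass this check, and one who happens to like the word "crucial" might not, which is the false-positive problem in miniature.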


To anyone who thinks this is a bad idea: honestly, try doing a ride-along (where you get to spend the day following a police officer). The amount of paperwork they do is insane. A single crash report can be 4+ pages long, and the vast majority of the questions are incredibly menial.


Okay, so get rid of the menial questions. If you can't get rid of them because they're important, then definitely don't use an ML model to make up answers for them.


This is the most important point in the entire thread.


Agree, looking for the multi upvote button


I have a hard time caring. People's lives and freedoms are at stake. If they can't take the time to do their jobs, perhaps they should find another career.


Then we’re going to need to spend waaaaay more money on policing, if we don’t allow any paperwork shortcuts. Imagine 2 hours per report for a 2 hour incident - we’d need to double the staffing versus a no paperwork force.

Or just stop responding to nonviolent crime, which already happens. We’ve got to be realistic about the tradeoffs.


> Then we’re going to need to spend waaaaay more money on policing

Don't believe it. According to the Bureau of Labor Statistics, cops earn more than software engineers in many states. Half or more of most municipal budgets goes toward police salaries, benefits, and pensions. Police are often among the highest-paid public employees, if not the highest paid, on any government's payroll.

Most cops sit in their cars for hours waiting for something to happen, often accruing overtime in the process. They have plenty of time to play Candy Crush; it's hard to believe they have no time for paperwork.


Well, your first claim, that cops earn more than software engineers in many states, seems completely untrue.

I simply went to the BLS website (which you mentioned) and looked. In all 12 states with the lowest SDE salaries, the SDE mean wage is still higher than the police mean wage, and in all 10 states with the highest police salaries, their mean wage is still less than the SDEs'. I also spot-checked other random states and have not found a single one where police get paid more.

Look for yourself, data from May 2023:

https://www.bls.gov/oes/2021/may/oes151252.htm

https://www.bls.gov/oes/2023/may/oes333051.htm


Did you read the note on the annual mean wages column?

> (2) Annual wages have been calculated by multiplying the hourly mean wage by a "year-round, full-time" hours figure of 2,080 hours; for those occupations where there is not an hourly wage published, the annual wage has been directly calculated from the reported survey data.

It is not accounting for overtime pay at all; it's using the hourly wage multiplied by what they consider to be full-time employment.

From your links, a software developer in Napa, CA makes $63 per hour while a cop makes $59 per hour. It wouldn't take much overtime at $88.50 (time and a half) to surpass the salaried developer's yearly income. Cops also get pensions in many areas, so if we consider total comp, I think it's safe to say there are places where police are making more money than software developers.
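The arithmetic checks out. Using the BLS-style annualization quoted above (hourly mean wage times 2,080 hours) and a standard time-and-a-half overtime rate, the gap closes with well under two hours of overtime per week:

```python
# Back-of-the-envelope check of the Napa, CA figures above.
# Assumes BLS-style annualization (hourly mean wage x 2,080 hours)
# and a standard 1.5x overtime rate; hourly rates are from the
# linked BLS pages, not total-comp figures.
import math

FULL_TIME_HOURS = 2080

dev_hourly = 63.00                 # software developer, Napa, CA
cop_hourly = 59.00                 # police officer, Napa, CA
overtime_rate = cop_hourly * 1.5   # $88.50/hour

dev_annual = dev_hourly * FULL_TIME_HOURS  # $131,040
cop_annual = cop_hourly * FULL_TIME_HOURS  # $122,720

# Overtime hours the officer needs to out-earn the developer
gap = dev_annual - cop_annual              # $8,320
ot_hours = math.ceil(gap / overtime_rate)  # 95 hours/year, < 2 hours/week

print(dev_annual, cop_annual, ot_hours)
```

And that is before pension contributions, which the comparison above notes are common for police and rare for developers.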


Are you in New York, or are cops crushing candy all day instead of actually doing anything useful a nationwide problem?


It's an everywhere problem. It's an amazing grift if you think about it.


If they're too busy to do paperwork, I'd rather they just not do the paperwork than have an AI produce garbage data that will be ingested by another AI down the line and accepted as gospel source material.


Way more money compared to what? Not having AI is the status quo. Perhaps AI will free officers to spend more time responding to nonviolent crime, but that’s not exactly proven.


So like how they did it 3 years ago? And for all time before that?

If the paperwork load is unreasonable there’s an obvious solution. (Reduce the paperwork)

Generative AI will make things up. It'll be inadmissible as evidence once judges catch on.


> Then we’re going to need to spend waaaaay more money on policing, if we don’t allow any paperwork shortcuts. Imagine 2 hours per report for a 2 hour incident - we’d need to double the staffing versus a no paperwork force.

> Or just stop responding to nonviolent crime, which already happens. We’ve got to be realistic about the tradeoffs.

Oh come on, your reasoning is fundamentally flawed. This technology is new, so police aren't going to "stop" doing anything if they don't use it, because its unavailability is the status quo.

I think it's highly likely that your "imagine 2 hours per report for a 2 hour incident" is a number you pulled out of your butt. But you, me, and everyone should want these reports to be accurate.


Your argument is that if we don't let the police use this novel (unreliable) labor-saving tool, which didn't exist until a couple of years ago, then policing (as it has been done for the whole of history) would be too expensive?


Most cities are really just police forces with a few ancillary government services tacked on. In other words, the majority of budgets already go to policing.


I don’t think that’s fair. It’s not about them disliking the work. It’s about me as a taxpayer wanting police officers to have time for policing. I want the most safety benefit for the money we spend. So I’m supportive of them using tools to be more efficient.


That paperwork, being careful that due process is followed and documented, is part of policing.


It's a bad idea because the AI adds no signal, only noise.

If the problem is that there are too many questions, then reduce the number of questions.

Many of the questions probably exist to help with filing reports and searching them later. It may no longer be necessary to have those fields if AI can deduce them from a single free-form text field, for example.

I hope the police apps keep track of exactly what words and inputs came from the officer, and which came from the AI.
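One way such an app could keep that audit trail is to store the report as a list of spans, each tagged with its origin. This is a hypothetical sketch (the `Span` structure and sources are invented here), not a description of how any real product works:

```python
# Hypothetical provenance log for a drafted report: every span of
# text is tagged with who produced it, so the final document can be
# audited span by span.
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    source: str  # "officer" or "model"

report = [
    Span("Responded to a two-vehicle collision at Main and 3rd.", "model"),
    Span("Driver 1 admitted running the red light.", "officer"),
]

def officer_authored(spans):
    """Return only the text the officer actually typed or edited."""
    return " ".join(s.text for s in spans if s.source == "officer")

print(officer_authored(report))  # Driver 1 admitted running the red light.
```

With a log like this, a court could separate the officer's own recollection from model-generated filler.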


In your car crash example, that police report could be a key piece of evidence in a civil case that causes one party to need to pay up rather than the other. The amount they have to pay could be enough to ruin their life.

Of course, that's to say nothing of criminal cases.


It would make more sense to just record everything they do. The reason for all this documentation is for disputes or court cases, but if you have a recording you can produce the documentation only if it's actually required.

Of course, there's the problem that cops don't want to be recorded.


Not just recorded, the information needs to be publicly available. I shouldn't have to request this stuff, it should be transparently available and searchable. If we're going to use my tax dollars to fund the surveillance state at least give me the data.


Strong disagree. Under many circumstances, law enforcement officers are allowed to enter places where people have a reasonable expectation of privacy. Just because the police are allowed in such locations doesn’t mean the general public should get to watch.

Let’s say you fall in the shower and you are badly injured. You call 911, and the police are first to arrive. Officers enter your house and try to keep you calm until an ambulance arrives.

You think police body camera footage of you, naked and injured on your bathroom floor, should be “transparently available and searchable”?


I mean, I think probably yes, and I suspect society would be better off if people agreed with me, but I understand that my view is significantly outside the norm. I will happily agree that for practical reasons we can accept some limits; however, I think we should still err on the side of transparency in these situations.

How do you feel about the weaker claim that footage of all police action in public spaces should be available and searchable?


>How do you feel about the weaker claim that footage of all police action in public spaces should be available and searchable?

I have trouble with that, too.

Being a crime victim should not mean that video footage of yourself in the immediate aftermath of the crime is publicly available and searchable. Imagine that a young girl is walking her family dog in a park, and that she is attacked and raped. The girl calls the police, who respond to the park. I don’t think that footage should be publicly available and searchable.

Also, “footage of all police action in public spaces” would routinely include information like people’s names, addresses, phone numbers, and social security numbers.

I am in favor of transparency, but I don’t think that absolute transparency would be a net benefit to society.


Then the solution is to change the forms to fill out. Filling out forms with meaningless AI filler is not the way to go.

If the task of writing 4+ pages is menial, reform the questions so it's one meaningful page.

THAT is the right way, not LLM blabber.


As the unfortunate target of a maliciously dishonest and editorialized police report which led to years of court appearances, I would at least like the cop to have to work a bit.


Fascinating. Are they still hearsay, then, given that there is no declarant?


Instead of using AI to summarize body cameras, why not just release the body camera footage along with the police report? This is almost never done: usually you have to wait for discovery and the trial process to get access to the body cam footage, while the cops can and do lie and exaggerate all the time in the reports. Then you have to pay for a lawyer just to get at the truth, which easily costs citizens thousands of dollars. Though they don't call it lying; they call it the "officer's recollection." That's how you get things like: everyone smells like weed and/or alcohol, everyone is resisting arrest, everyone is "defiant and uncooperative." Plus, many times they will charge you extra money for the body cam footage.

Trust me, all they really need to do is prevent the body cams from being turned off and/or muted, and present the footage to all parties at the start, and we'd have a revolution in policing. Especially since these police reports may be drafted from incomplete body cam footage, where the officer knowingly covers their camera, turns it off, or mutes it for no good reason. A lot of cops have been caught planting evidence by their own body cams. Same for dashcams. The footage should be released immediately. There is no reason not to, other than to try to muscle normal citizens out of their constitutional rights.


Yeah, I mean, if the police are going to use some software tool to do the reporting, then just release the raw footage and let the courts decide what happened. As it is, police officers have way too much latitude to determine what gets written down.


Who owns the liability if someone is falsely charged?


That's a complicated question. Let's just say we have ways of deciding who's guilty.


Eagerly awaiting the news that no one will be liable for someone's wrongful incarceration based on LLM hallucination.

The business model for LLMs will be that no one is liable. They were for a while, like when the airline chatbot quoted a lowball fare and was forced to accept it. But that will be rolled back as the stakes get higher.


This is a great boon for defense attorneys.

"Your honor, the police department uses AI to generate police reports. Exhibit 1 contains numerous studies showing that hallucinations are common in generative AI. I therefore request that the police report be excluded from evidence in the case against my client."


A YC company, Abel Police, is doing exactly this.


May as well just have the AI write the court judgements too


Might as well replace the jury with AI personas


Wait until you find out that CPS is using AI to do the same thing to much applause and bonuses. Turns out people regardless of career field don't like doing tedious work.

Can't say I'm surprised that software isn't included in the moral panic about AI usage on HN, despite us as a group using it to secure people's private data, handle people's money, process insurance claims, etc. When we use it, we understand that we're still experts evaluating and editing the output, but when other people do it they're lazy, careless, and actively malicious. Do you really have a problem with AI, or do you just not trust the cops regardless of what tools they use?


> When we use it [...] Do you really have a problem with AI

"What do you mean by 'we', Kemo Sabe?" -- I don't use LLMs, and I do indeed have a problem with them.

Just because conflicting opinions can be found within a large group (i.e. HN commentators) does not mean every member of the group is a self-contradicting weirdo. (Fallacy of composition.)


Software can be proven. You can exhaustively test a program for a particular hardware configuration, and logically speaking you can get proofs of validity. Those proofs don't exist now because the "free market" decided there was more value in shipping shoddy software immediately than in proving it out first. Hence why every license provides no guarantee of fitness for purpose and disclaims warranty to the extent practicable by law. But given time, those claims can be proven, and the waivers and warnings won't be necessary. The discrete math will just work out that way.

Now ask the AI producers/users to accept personal liability based on a stochastic probability machine. I imagine you won't see many hands raising to take you up on the offer.


AI users are accepting personal liability right now; this isn't some hypothetical. It's what happens every time someone uses AI to aid them in a task. I take personal liability for any code I write using an AI tool. It's why all these systems are generate-then-verify.


Yeah: There's a huge difference between "confidence in the product itself" versus "confidence that if the product breaks someone other than me will be paying for the damages."


I mean, personally I don’t use our magical robot overlords.



