Can Large Language Models Emulate Judicial Decision-Making? [Paper] (ssrn.com)
1 point by shivamsaran on Feb 4, 2025 | 2 comments


An actor can emulate the style of judicial decision language, sure.

But the cost of a wrong answer (a wrongful conviction) exceeds any threshold for ethical use.

> We try prompt engineering techniques to spur the LLM to act more like human judges, but with no success. “Judge AI” is a formalist judge, not a human judge.

From "Asking 60 LLMs a set of 20 questions" https://news.ycombinator.com/item?id=37451642 :

> From https://news.ycombinator.com/item?id=36038440 :

>> Awesome-legal-nlp links to benchmarks like LexGLUE and FairLex but not yet LegalBench; in re: AI alignment and ethics / regional law

>> A "who hath done it" exercise

>> "For each of these things, tell me whether God, Others, or You did it"

AI should never be judge, jury, and executioner.


I recently wrote an academic paper evaluating whether large language models can emulate judicial decision-making. It covers methodology, limitations, and implications for AI in law. Curious to hear thoughts from the HN community.





