
Incredible how low usage is among lawyers. Does anyone have any intuition on why?


Part of it is selection bias: Claude is much less of a general-audience product than ChatGPT. But any lawyers using LLMs in 2025 deserve to be disbarred:

"A Major Law Firm's ChatGPT Fail" https://davidlat.substack.com/p/morgan-and-morgan-order-to-s...

"Lawyer cites six cases made up by ChatGPT" https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-f...

"AI 'hallucinations' by ChatGPT end up costing B.C. lawyer" https://www.msn.com/en-ca/news/world/ai-hallucinations-creat...

The list goes on and on. Maybe there's a bespoke RAG solution that works...maybe.
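
If such a solution exists, I'd guess its core is verification rather than generation: retrieve candidates from a vetted corpus, draft only from those, and reject any citation that can't be matched back. A toy Python sketch of that last step (the corpus here is hypothetical, standing in for a real database like Westlaw or CourtListener; the fabricated citation is one ChatGPT actually invented in the Avianca case linked above):

    # Hypothetical vetted corpus: citation string -> metadata.
    # A real system would query an authoritative case-law database.
    VETTED_CASES = {
        "Mata v. Avianca, Inc. (S.D.N.Y. 2023)": {"court": "S.D.N.Y."},
    }

    def verify_citations(drafted):
        """Split model-drafted citations into verified vs. suspect."""
        verified, suspect = [], []
        for cite in drafted:
            (verified if cite in VETTED_CASES else suspect).append(cite)
        return verified, suspect

    drafted = [
        "Mata v. Avianca, Inc. (S.D.N.Y. 2023)",
        # One of the citations ChatGPT fabricated in the Avianca case:
        "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)",
    ]
    ok, bad = verify_citations(drafted)
    print("verified:", ok)
    print("flag for human review:", bad)

Even then, "not in the corpus" only catches invented cases, not real cases cited for propositions they don't support, so a human still has to read everything.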


> But any lawyers using LLMs in 2025 deserve to be disbarred

In what year do you think it will be acceptable, and why?

LLMs are tools; I don't see anything wrong with using them in any occupation, as long as the user is aware of their limitations.


No. Some judge wrote to a family member recently, "I am seeing all these great briefs now," followed by a novice discussion of AI use. This is anecdotal (and recent), but it tells me that non-lawyers, with care, are writing their own legal papers across the USA and doing it well. That fits with other anecdotes here in coastal California about ordinary legal uses.


I think they're especially likely to hallucinate when asked to cite sources. A lot of the work my lawyer friend has asked of ChatGPT or Claude requires it to cite things, and my friend says it has simply made up case law that isn't real. So while it's useful as a launching point and can genuinely help find real case law, you still have to go over every single thing it says with a fine-tooth comb. Its productivity impact is therefore much lower than in code, where you can see immediately whether the output works.


My guess is because hallucinations in a legal context can be fatal to a case, possibly even career-ending. There have been some high-profile cases where judges have ripped into lawyers pretty brutally.


Because LLMs make things up, and the lawyer is liable for using that made-up information.


Lawyers are selected for critical thinking skills and they aren't vulnerable to AI hype the way relatively poorly educated computer guys are.


Interesting article on Adam Unikowsky asking Claude to decide a Supreme Court case: https://blog.plover.com/tech/gpt/presidential-emoji.html

"Claude is fully capable of acting as a Supreme Court Justice right now."



