Hacker News

My guess is because it’s the Smithsonian, they’re just not willing to trust an LLM’s transcription enough to put their name on it. I imagine they’re rather conservative. And maybe some AI-skeptic protectionist sentiments from the professional archivists. Seems like it could change with time though.





> My guess is because it’s the Smithsonian, they’re just not willing to trust an LLM’s transcription enough to put their name on it. I imagine they’re rather conservative

I expect that's a common theme at institutions like that, yet I don't think they understand the problem they think they have.

Why not have the LLMs do as much of the work as possible, then have humans review it and put their own names on it? Do you think they'd need to trust and publish the LLM's output wholesale?
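The workflow being suggested (LLM drafts, human signs off) could look something like this minimal sketch. Everything here is hypothetical: the class names, the confidence field, and the threshold are illustrative, not any real transcription API.

```python
# Hypothetical human-in-the-loop pipeline: the LLM produces a draft
# transcription, and drafts it reports low confidence on are queued
# for an archivist to review before anything is published under the
# institution's name. All names and values here are illustrative.

from dataclasses import dataclass, field

@dataclass
class Draft:
    page_id: str
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

@dataclass
class ReviewQueue:
    threshold: float = 0.9
    approved: list = field(default_factory=list)
    needs_review: list = field(default_factory=list)

    def triage(self, draft: Draft) -> None:
        # Even "approved" drafts would still get a human sign-off
        # before publication; low-confidence ones are prioritized.
        if draft.confidence >= self.threshold:
            self.approved.append(draft)
        else:
            self.needs_review.append(draft)

queue = ReviewQueue()
queue.triage(Draft("page-001", "Dear Sir, I write to inform...", 0.97))
queue.triage(Draft("page-002", "[illegible] the 14th of [illegible]", 0.55))
print(len(queue.approved), len(queue.needs_review))  # 1 1
```

The point is that the model's output is never the final word: it only routes work, and a human name goes on the published transcription either way.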

I think too many people saw what a few idiot lawyers did last year and closed the book on LLM usage.


The incident with the lawyers just highlighted the fundamental problem with LLMs and AI in general: they can't be trusted for anything serious. Worse, they give the appearance of being correct, which lulls human "checkers" into complacency. Total dumpster fire.


