I would challenge you to find a picture of text that you think a human can read and OCR cannot. I’m happy to demonstrate. The text shown in this article is trivial.
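If you want to see how low the bar for "demonstrate" is, here's a minimal sketch using the open-source Tesseract engine via pytesseract (the Tesseract binary must be installed separately; the image path is a placeholder):

```python
from PIL import Image
import pytesseract

# Placeholder path: drop in whatever "hard for OCR" sample you have.
image = Image.open("challenge_sample.png")

# Tesseract's default English model; for historical hands you'd
# likely need a specialized or fine-tuned model instead.
print(pytesseract.image_to_string(image))
```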
The archivists themselves say that they run into such texts often enough that this program was needed:
> The agency uses artificial intelligence and a technology known as optical character recognition to extract text from historical documents. But these methods don’t always work, and they aren’t always accurate.
They are absolutely aware of the advances in these tools, so if they say the tools aren't completely there yet, I believe them. One likely reason: the models' training data probably contains much less 1800s-era cursive than modern cursive.
It's likely that with more human-tagged data they could improve on the state of the art for OCR, but it's pretty arrogant to doubt the agency in charge of this sort of thing when they say the tech isn't there yet.
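To make the "more human-tagged data" point concrete, here's a rough sketch of what fine-tuning an off-the-shelf handwriting model on human-transcribed scans could look like, using Microsoft's TrOCR checkpoint via Hugging Face transformers. The file names and transcriptions are placeholders, and a real pipeline would need batching, padding masks, and far more data:

```python
import torch
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# Hypothetical human-tagged pairs: (scan of an 1800s document, transcription).
pairs = [
    ("scan_0001.png", "Dear Sir, I write to acknowledge receipt of yours..."),
    ("scan_0002.png", "...the deed was recorded on the fourth day of June..."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for path, transcription in pairs:
    # Encode the page image and the human transcription as training targets.
    pixel_values = processor(images=Image.open(path).convert("RGB"),
                             return_tensors="pt").pixel_values
    labels = processor.tokenizer(transcription, return_tensors="pt").input_ids
    # The model computes cross-entropy loss against the transcription.
    loss = model(pixel_values=pixel_values, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The point isn't that this loop is hard to write; it's that the human-tagged pairs are the expensive part, which is presumably what the program is for.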
I don't fully accept her conclusion, since she wasn't participating directly in the thread and didn't respond to some of the points raised; even so, there do appear to be at least a few instances of difficult-to-read handwriting where OCR comes in second to skilled human interpretation.
Text that is _intentionally constructed_ to fool computers but not humans (CAPTCHAs, most obviously) is out of scope. But even that kind of text is generally easy for modern OCR anyway.