I see a lot of comments on hallucination risk and the accumulation of non-traceable rotten data. If you are curious to try a better non-LLM-based OCR, give LLMWhisperer a try: https://pg.llmwhisperer.unstract.com/
If you're looking for better accuracy and table layout preservation, give LLMWhisperer and Docling a try! Both keep tables tidy with a Markdown-like structure.
* Loan application form: it picks up checkboxes and handwriting, but it missed a lot of form fields. Not sure why.
* Edsger W. Dijkstra's handwritten notes (from the University of Texas archive): parsing is good.
* Badly (misaligned) scanned bill: parsing is good. Observation: there is a name field, but it produced a similar name instead of the one on the bill. Hallucination?
* Investment fund factsheet: it could parse the bar charts and tables, but it arbitrarily excluded many vital data points from the document.
* Investment fund factsheet, complex tables: bad extraction; it could not handle merged tables, and it again arbitrarily dropped rows and columns.
Anyone curious, try LLMWhisperer[1] for OCR. It doesn't use LLMs, so no hallucination side effects. It also preserves the layout of the input document for more context and clarity.
There's also Docling[2], which is handy for converting tables from PDFs into Markdown. It uses Tesseract/EasyOCR under the hood, though, which can sometimes make the OCR results a bit less accurate.
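If you want to try Docling, here's a minimal sketch of the conversion flow (assuming a recent `docling` release; the file name is just a placeholder):

```python
# Minimal Docling sketch: convert a PDF and export it (tables included) as Markdown.
# Assumes `pip install docling`; "report.pdf" is a placeholder path.
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("report.pdf")  # layout analysis + OCR where needed

# Tables come out as Markdown tables in the exported text.
print(result.document.export_to_markdown())
```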
For those interested, try LLMWhisperer (https://unstract.com/llmwhisperer/) for OCR. It avoids LLMs, which eliminates hallucination issues, and it preserves the input document's layout for better context.
The tool doesn't use any LLMs for processing/parsing the data. It parses the document and converts it into raw text.
The final output (raw text) of the parsing is then fed to LLMs for data extraction.
For example: extracting data from insurance, banking, and invoice documents.
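As a rough sketch of that two-stage pipeline (this is not the vendor's actual client: `ocr_to_raw_text` is a placeholder for whatever layout-preserving OCR you use, and the extraction step uses the OpenAI chat API as one example):

```python
# Two-stage pipeline sketch: layout-preserving OCR first, then LLM extraction.
from openai import OpenAI

def ocr_to_raw_text(pdf_path: str) -> str:
    """Placeholder: run a layout-preserving OCR tool (no LLM involved) on the PDF."""
    raise NotImplementedError("plug in your OCR tool here")

def extract_invoice_fields(pdf_path: str) -> str:
    raw_text = ocr_to_raw_text(pdf_path)  # stage 1: deterministic parsing to raw text
    client = OpenAI()
    # Stage 2: the LLM only ever sees the already-extracted text.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system",
             "content": "Extract invoice_number, total, and due_date from the text as JSON."},
            {"role": "user", "content": raw_text},
        ],
    )
    return resp.choices[0].message.content
```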
> The "best" models just made stuff up to meet the requirements. They lied in three ways:
> The main difficulty of this project lies in correctly identifying page zones; wouldn't it be possible to find the zones properly during the OCR phase itself, instead of rebuilding them afterwards?
LLMWhisperer looks interesting, but the cost is prohibitive for a hobby project. Also, it doesn't really solve my problem.
Google Vision already returns the coordinates of each word (and even of each letter), so it's easy to know where each word was on the page and even, if necessary, to rebuild the page with the words correctly placed. That's fundamentally what I do with the mouseover on the interactive demo, at the paragraph level: https://divers.medusis.net/boislisle/pub
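For reference, this is roughly how those per-word coordinates come back from the standard google-cloud-vision client (the file name is a placeholder):

```python
# Per-word bounding boxes from Google Cloud Vision document text detection.
# Assumes `pip install google-cloud-vision` and configured credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("page.png", "rb") as f:  # placeholder scan
    image = vision.Image(content=f.read())

response = client.document_text_detection(image=image)
for page in response.full_text_annotation.pages:
    for block in page.blocks:
        for paragraph in block.paragraphs:
            for word in paragraph.words:
                text = "".join(symbol.text for symbol in word.symbols)
                box = [(v.x, v.y) for v in word.bounding_box.vertices]
                print(text, box)
```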
But my problem isn't knowing where the words are (Google Vision provides that); it's knowing what belongs to what: what is a footnote, what is main text, etc. This is what the post discusses. Just having text that follows the same layout as the original wouldn't help, because I'm not trying to reproduce the layout or the typesetting; I want to rebuild the content semantically, so as to produce different "flows".
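One crude way to start on the "what belongs to what" problem, given those boxes, is a positional heuristic; a toy sketch (the thresholds are invented and would need tuning per scan, and real pages need smarter features than position alone):

```python
# Toy heuristic: label a text zone as footnote vs. main text from its position
# and type size alone. Thresholds are illustrative, not tuned.
from dataclasses import dataclass

@dataclass
class Zone:
    text: str
    y_top: float   # top of the bounding box, as a fraction of page height
    height: float  # glyph height in points, a proxy for font size

def classify(zone: Zone, footnote_band: float = 0.85, body_size: float = 10.0) -> str:
    # Footnotes tend to sit in the bottom band of the page and use smaller type.
    if zone.y_top > footnote_band and zone.height < body_size:
        return "footnote"
    return "main"

zones = [Zone("Chapter text...", 0.12, 11.5), Zone("1. See t. II, p. 304.", 0.91, 8.0)]
for z in zones:
    print(classify(z), "->", z.text)
```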
That said, it got me thinking... there may be an opportunity to build a cheaper version of LLMWhisperer? ;-)