While we have a PDF internals expert here, I'm itching to ask: Why is mupdf-gl so much faster than everything else? (on vanilla desktop Linux)
Its search speed on big pdfs is dramatically faster than everything else I've tried and I've often wondered why the others can't be as fast as mupdf-gl. Thanks for any insights!
It's funny you ask this - I've spent some time building PDF indexing/search apps on the side over the past few weeks.
I'll give you the rundown. The answer to your specific question is basically "some of them process letter by letter to put the text back in order, and some don't; some build fast trie-style indexes to do searching, and some don't."
All of my machine manuals etc. are in PDF, and too many search apps/OS search indexers don't make it simple to find things in them. I have a really good app on the Mac, but basically nothing on Windows. All I want is a dumb single-window app that can manage PDF collections, search them for words, and display the results for me. Nothing more or less.
So I built one for my non-Mac platforms over the past few weeks. One version in C++ (using Qt), one version in .NET (using MAUI), for fun.
All told, I'm indexing (for this particular example) 2,500 PDFs that have about 150k pages in them.
On the indexing side, Lucene and SQLite FTS both do a fine job, with no issues - both are fast, and indexing/search is not limited by their speed or capability.
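To make the indexing half concrete, here's a minimal sketch of the SQLite FTS5 side; the schema and names are just illustrative, not what either of my apps actually uses.

    // Minimal SQLite FTS5 sketch (hypothetical table/column names).
    // Build against an FTS5-enabled SQLite, e.g.: g++ fts_sketch.cpp -lsqlite3
    #include <sqlite3.h>
    #include <cstdio>
    #include <cstdlib>

    static void check(int rc, sqlite3 *db) {
        if (rc != SQLITE_OK && rc != SQLITE_ROW && rc != SQLITE_DONE) {
            std::fprintf(stderr, "sqlite error: %s\n", sqlite3_errmsg(db));
            std::exit(1);
        }
    }

    int main() {
        sqlite3 *db = nullptr;
        check(sqlite3_open("manuals.db", &db), db);

        // One row per extracted page; 'body' holds the page text.
        check(sqlite3_exec(db,
            "CREATE VIRTUAL TABLE IF NOT EXISTS pages USING fts5(path, page, body);",
            nullptr, nullptr, nullptr), db);

        // Insert extracted text (in reality: thousands of rows per transaction).
        sqlite3_stmt *ins = nullptr;
        check(sqlite3_prepare_v2(db,
            "INSERT INTO pages(path, page, body) VALUES (?1, ?2, ?3);",
            -1, &ins, nullptr), db);
        sqlite3_bind_text(ins, 1, "manuals/lathe.pdf", -1, SQLITE_TRANSIENT);
        sqlite3_bind_int (ins, 2, 42);
        sqlite3_bind_text(ins, 3, "spindle bearing preload adjustment ...", -1, SQLITE_TRANSIENT);
        check(sqlite3_step(ins), db);
        sqlite3_finalize(ins);

        // Full-text query: which pages mention "preload"?
        sqlite3_stmt *q = nullptr;
        check(sqlite3_prepare_v2(db,
            "SELECT path, page FROM pages WHERE pages MATCH ?1;", -1, &q, nullptr), db);
        sqlite3_bind_text(q, 1, "preload", -1, SQLITE_TRANSIENT);
        while (sqlite3_step(q) == SQLITE_ROW)
            std::printf("%s  p.%s\n",
                        (const char *)sqlite3_column_text(q, 0),
                        (const char *)sqlite3_column_text(q, 1));
        sqlite3_finalize(q);
        sqlite3_close(db);
    }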
On the PDF parsing/text extraction side, I have tried literally every library I can find for my ecosystem (about 25), both commercial and not. I did not try libraries that I know share underlying text extraction engines (i.e. there are a million PDFium wrappers).
I parse in parallel (i.e. files are processed in parallel), extract pages in parallel (i.e. every page is processed in parallel), and index the extracted text either in parallel or in batches (Lucene is happy with multiple threads indexing; SQLite would rather have me do it sequentially in batches).
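The pipeline shape is roughly this sketch, with placeholders standing in for the real PDF library and the indexer (a real app would use a bounded thread pool rather than bare std::async):

    // Sketch of "files in parallel, pages in parallel, index in batches".
    // extract_page_text(), count_pages(), and index_batch() are stand-ins.
    #include <future>
    #include <mutex>
    #include <string>
    #include <vector>

    struct PageText { std::string path; int page; std::string text; };

    // --- stand-ins for the PDF library and the indexer ---
    int count_pages(const std::string &) { return 8; }
    std::string extract_page_text(const std::string &path, int page) {
        return path + " page " + std::to_string(page) + " text ...";
    }
    void index_batch(const std::vector<PageText> &batch) {
        static std::mutex writer;                 // SQLite prefers a single writer
        std::lock_guard<std::mutex> lock(writer);
        (void)batch;                              // INSERTs inside one transaction go here
    }

    void index_file(const std::string &path) {
        const int n = count_pages(path);
        std::vector<std::future<PageText>> pages;
        for (int p = 0; p < n; ++p)               // every page in parallel
            pages.push_back(std::async(std::launch::async, [&path, p] {
                return PageText{path, p, extract_page_text(path, p)};
            }));

        std::vector<PageText> batch;
        for (auto &f : pages) {
            batch.push_back(f.get());
            if (batch.size() == 256) { index_batch(batch); batch.clear(); }
        }
        if (!batch.empty()) index_batch(batch);
    }

    int main() {
        std::vector<std::string> files = {"a.pdf", "b.pdf", "c.pdf"};
        std::vector<std::future<void>> jobs;
        for (const auto &f : files)               // files in parallel
            jobs.push_back(std::async(std::launch::async, [&f] { index_file(f); }));
        for (auto &j : jobs) j.wait();
    }

Lucene is fine with index_batch being called from many threads at once; for SQLite the mutex keeps writes single-threaded and batched, which matches what I said above.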
The slowest libraries are 100x slower than the fastest at extracting text. They cluster, too, so I assume some of them share underlying strategies or code despite my attempt to identify these ahead of time. The current Foxit SDK can extract about 1000-2000 pages per second, sometimes faster, while things like PdfPig can only do about 10 pages per second.
PDFium would be as fast as the current Foxit SDK, but it is not thread safe (I assume because it's based on a source drop of Foxit from before they added thread safety), so all calls are serialized. Even so, it can do about 100-200 pages/second.
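"All calls are serialized" just means every PDFium call goes through one lock, roughly like this. The function names are PDFium's public C API as I remember it (fpdfview.h / fpdf_text.h) - treat the exact signatures as assumptions and verify against your headers.

    // Sketch: serializing PDFium behind one mutex, since the library is not thread safe.
    #include <fpdfview.h>
    #include <fpdf_text.h>
    #include <mutex>
    #include <string>
    #include <vector>

    static std::mutex g_pdfium_lock;   // every FPDF_* call goes through this

    // Extract one page's text as UTF-16. Callers can run on many threads,
    // but the work inside PDFium is effectively single threaded.
    // (Reopening the document per page is wasteful; cache it in real code.)
    std::vector<unsigned short> extract_page_utf16(const std::string &path, int page_index) {
        std::lock_guard<std::mutex> lock(g_pdfium_lock);

        FPDF_DOCUMENT doc = FPDF_LoadDocument(path.c_str(), /*password=*/nullptr);
        if (!doc) return {};
        std::vector<unsigned short> out;
        FPDF_PAGE page = FPDF_LoadPage(doc, page_index);
        if (page) {
            FPDF_TEXTPAGE text = FPDFText_LoadPage(page);
            if (text) {
                int n = FPDFText_CountChars(text);
                out.resize(n + 1);                 // +1 for the NUL terminator
                FPDFText_GetText(text, 0, n, out.data());
                FPDFText_ClosePage(text);
            }
            FPDF_ClosePage(page);
        }
        FPDF_CloseDocument(doc);
        return out;
    }

    int main() {
        FPDF_InitLibrary();                        // once per process
        auto utf16 = extract_page_utf16("manual.pdf", 0);
        FPDF_DestroyLibrary();
        return utf16.empty() ? 1 : 0;
    }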
Memory usage also varies wildly and is uncorrelated with speed (i.e. there are fast ones that take tons of memory and slow ones that take tons of memory). For the native ones, memory usage seems more related to fragmentation than to dumb things. There are, of course, some dumb things (one library creates a new C++ class instance for every letter).
From what I can tell digging into the code that's available, it's all about the amount of work they do up front when loading the file, and then how much time they take to put the text back into content order before handing it to me.
The slowest are doing letter by letter.
The fastest are not.
Rendering is similar - some of them are dominated by stupid shit that you notice instantly with a profiler. For example, one of the .NET libraries renders to PNG-encoded bitmaps by default, and between it and Windows, it spends 300ms encoding/decoding the image for display - 10x longer than the rasterization itself. If I switch it to render to BMP instead, it takes 5ms to encode/decode (for dumb reasons, the MAUI APIs require streams to create drawable images). The difference is very noticeable when I browse through search results using the up/down keys.
Anyway, hopefully this helps answer your question and some related ones.
> From what i can tell digging into the code that's available, ..., how much time they take to put the text back in content order... The slowest are doing letter by letter. The fastest are not.
Thank you, that's really helpful.
I hadn't considered content reordering but it makes perfect sense given that the internal character ordering can be anything, as long as the page renders correctly. There's an interesting comp-sci homework project: Given a document represented by an unordered list of tuples [ (pageNum, x, y, char) ], quickly determine whether the doc contains a given search string.
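A deliberately naive solution, just to make the shape concrete: sort the tuples into reading order (page, then y, then x) and do a plain substring search. Real extractors have to cope with columns, rotated text, hyphenation, and fuzzy line grouping, which is where the interesting work is.

    // Naive take on the homework problem: restore reading order, then search.
    #include <algorithm>
    #include <string>
    #include <vector>

    struct Glyph { int page; double x, y; char c; };

    bool contains(std::vector<Glyph> glyphs, const std::string &needle) {
        std::sort(glyphs.begin(), glyphs.end(), [](const Glyph &a, const Glyph &b) {
            if (a.page != b.page) return a.page < b.page;
            // Assumes y grows down the page; PDF user space grows upward, so flip as needed.
            // Real extractors also cluster nearby y values into lines with a tolerance.
            if (a.y != b.y) return a.y < b.y;
            return a.x < b.x;
        });
        std::string text;
        text.reserve(glyphs.size());
        for (const Glyph &g : glyphs) text.push_back(g.c);
        return text.find(needle) != std::string::npos;
    }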
Sometimes I need to search PDFs for a regex and use pdfgrep. That builds on poppler/xpdf, which extracts text >2x slower than mupdf (https://documentation.help/pymupdf/app1.html#part-2-text-ext..., fitz vs xpdf). From this discussion, I'm now writing my own pdfgrep that builds on mupdf.
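The core loop would be roughly the following. The fz_* names are from memory, so check them against your mupdf headers; real code should also wrap the calls in mupdf's fz_try/fz_catch error handling, which I've left out for brevity.

    // Rough sketch of a "pdfgrep on mupdf": extract each page's text via the
    // structured-text API and run a std::regex over it.
    #include <mupdf/fitz.h>
    #include <cstdio>
    #include <regex>
    #include <string>

    int main(int argc, char **argv) {
        if (argc != 3) { std::fprintf(stderr, "usage: %s <pattern> <file.pdf>\n", argv[0]); return 2; }
        std::regex pattern(argv[1]);

        fz_context *ctx = fz_new_context(nullptr, nullptr, FZ_STORE_UNLIMITED);
        fz_register_document_handlers(ctx);          // enable the PDF handler
        fz_document *doc = fz_open_document(ctx, argv[2]);

        int matches = 0;
        int n = fz_count_pages(ctx, doc);
        for (int i = 0; i < n; ++i) {
            fz_page *page = fz_load_page(ctx, doc, i);
            fz_stext_page *stext = fz_new_stext_page_from_page(ctx, page, nullptr);
            fz_buffer *buf = fz_new_buffer_from_stext_page(ctx, stext);
            std::string text = fz_string_from_buffer(ctx, buf);   // UTF-8 page text

            if (std::regex_search(text, pattern)) {
                std::printf("%s: page %d matches\n", argv[2], i + 1);
                ++matches;
            }
            fz_drop_buffer(ctx, buf);
            fz_drop_stext_page(ctx, stext);
            fz_drop_page(ctx, page);
        }
        fz_drop_document(ctx, doc);
        fz_drop_context(ctx);
        return matches ? 0 : 1;
    }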
> Sounds like "I don't know programming, so I will just use AI".
If you were leading Tensorlake, running on early stage VC with only 10 employees (https://pitchbook.com/profiles/company/594250-75), you'd focus all your resources on shipping products quickly, iterating over unseen customer needs that could make the business skyrocket, and making your customers so happy that they tell everyone and buy lots more licenses.
Because you're a stellar tech leader and strategist, you wouldn't waste a penny reinventing low-level plumbing that's available off-the-shelf, either cheaply or as free OSS. You'd be thinking about the inevitable opportunity costs: if I build X then I can't build Y, simply because a tiny startup doesn't have enough resources to build both. You'd quickly conclude that building a homegrown, robust PDF parser would be an open-ended tar pit that keeps the team from focusing on making customers happy and growing the business.
And the rest of us would watch in awe, seeing truly great tech leadership at work, making it all look easy.
I would hire someone who understands PDFs instead of doing the equivalent of printing a digital document and scanning it for "digital record keeping". Stop everything and hire someone who understands the basics of data processing and some PDF.
Let's assume we have a staff of 10 and they're fully allocated to committed features and deadlines, so they can't be shifted elsewhere. You're the CTO and you ask the BOD for another $150k/y (fully burdened) + equity to hire a new developer with PDF skills.
The COB asks you directly: "You can get a battle-tested PDF parser off-the-shelf for little or no cost. We're not in the PDF parser business, and we know that building a robust PDF parser is an open-ended project, because real-world PDFs are so gross inside. Why are you asking for new money to build our own PDF parser? What's your economic argument?"
And the killer question comes next: "Why aren't you spending that $150k/y on building functionality that our customers need?" If you don't give a convincing business justification, you're shoved out the door, because as CTO your job is building technology that satisfies the business objectives.
So if you receive a PDF full of sections containing pre-rasterized text (e.g. adverts, 3D-rendered text with image effects, scanned documents, handwritten errata), what do you do? You can't use OCR, because apparently only PDF-illiterate idiots would try such a thing?
I wouldn't start by rasterizing the rest of the PDF. In the business world, unlike academia, bootleg books, and file sharing, the majority of PDFs are computer generated. I know because I do this for a living.