Tag soup hasn’t been a problem for years. The HTML5 specification goes into far more detail than previous specifications about parsing malformed markup, and browsers follow it. So no matter the quality of the markup, if you throw it at any HTML5 implementation, you will get the same consistent, unambiguous DOM structure.
Yeah, you could just pull the parser out of any open-source browser and voilà: a parser that's not only battle-tested, but probably the one the page was developed against.
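A rough sketch of that idea in Python, using html5lib (which implements the spec's parsing algorithm) rather than literally lifting a browser's parser; the library choice is just an example:

```python
# Sketch: parsing "tag soup" with html5lib, which implements the WHATWG
# HTML parsing algorithm, i.e. the same error-recovery rules browsers use.
# pip install beautifulsoup4 html5lib
from bs4 import BeautifulSoup

tag_soup = "<p>unclosed paragraph <b>bold <i>nested</b> still italic?"

# Any spec-compliant parser produces the same DOM for this malformed input.
soup = BeautifulSoup(tag_soup, "html5lib")
print(soup.body.prettify())
```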
That's why the best strategy is to feed the whole page into an LLM (after removing the HTML tags) and just ask the LLM to give you the date you need in the format you need.
If there is a lot of JavaScript DOM manipulation happening after page load, then just render the page in a webdriver, take a screenshot, OCR it, and feed the result into the LLM and ask it the right questions.
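A rough sketch of the strip-the-tags-and-ask approach; the URL, model name, and prompt are placeholders, and the OpenAI client is just one example of a chat-completion API:

```python
# Sketch of "strip the tags, then ask the model" described above.
# pip install requests beautifulsoup4 openai
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

html = requests.get("https://example.com/some/listing").text
# Drop the markup, keep the visible text.
text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model
    messages=[{
        "role": "user",
        "content": "From the page text below, extract the publication date "
                   "as YYYY-MM-DD. Reply with the date only.\n\n" + text[:20000],
    }],
)
print(resp.choices[0].message.content)
```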
It’s informal language that has formal language mixed in. The informal parts determine how the final document should look. So, a simple formal-to-formal translation won’t meet their needs.
I never really understand this reasoning of "regex is hard to reason about, so we just use an LLM we custom made instead!" I get that it's trendy, but reasoning about LLMs is impossible for many devs; the idea that this makes it more maintainable is pretty hilarious.
Regexes require you to understand what the obscure-looking patterns do, character by character, in a pile of text. Then across different piles of text. Then while juggling different regexes.
For an LLM, you can just tune it to produce the right output using examples. Your brain doesn’t have to understand the tedious things it’s doing.
This also replaces a boring, tedious job with one (working with LLMs) that’s more interesting. Programmers enjoy those opportunities.
In either case you end up with an inscrutable black box into which you pass your HTML... Honestly, I'd prefer the black box that runs more efficiently and is intelligible to at least some people (or most, with the help of a big LLM).
That is true. One can also document the regexes and rules well, with examples to help visualize them.
I think development time will be the real winner for LLMs, since building the right set of regexes takes a long time.
I’m not sure which is faster to iterate on when sites change. The regex approach requires a human to relearn one or more regexes for the sites that broke, and then how they interact with the other sites. The LLM might need to be retrained, might just need to see new examples, or might generalize from its previous training. Experiments on this would be interesting.
Well, even building and commenting the regex is something that LLMs can do pretty well these days. I actually did exactly that, in a different domain: wrote a prompt template that included the current (python-wrapped) regex script and some autogenerated test case results, and a request for a new version of the script. Then passed that to sonnet 3.5 in an unattended loop until all the tests passed. It actually worked.
The secret sauce was knowing what sort of program architecture is suited to that process, and knowing what else should go in the code that would help the LLM get it right.
Which is all to say: use the LLM directly to parse the HTML, or use an LLM to write the regex that parses the HTML. Both work, but the latter is more efficient.
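A skeleton of that kind of unattended loop, with an assumed pytest harness and the Anthropic client; the prompt and file names are placeholders, and only the overall shape matches what I described:

```python
# Skeleton of an unattended "LLM rewrites the regex script until the tests
# pass" loop. The test harness and prompt are illustrative.
# pip install anthropic
import subprocess
from pathlib import Path
import anthropic

SCRIPT = Path("extract.py")          # the python-wrapped regex script
client = anthropic.Anthropic()

def run_tests() -> tuple[bool, str]:
    """Run the test suite and return (passed, combined output)."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

for attempt in range(20):            # cap the loop so it can't run forever
    ok, report = run_tests()
    if ok:
        print(f"all tests green after {attempt} rewrites")
        break
    prompt = (
        "Here is the current extraction script:\n\n"
        f"{SCRIPT.read_text()}\n\n"
        "And the failing test output:\n\n"
        f"{report}\n\n"
        "Return a complete new version of the script that fixes the failures. "
        "Reply with Python code only."
    )
    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    SCRIPT.write_text(msg.content[0].text)   # naive: trusts the reply is bare code
```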
As much as I would love for this to work, I'm not getting great results trying out the 1.5b model in their example notebook on Colab.
It is impressively fast, but testing it on an arxiv.org page (specifically https://arxiv.org/abs/2306.03872) only gives me a short markdown file containing the abstract, the "View PDF" link and the submission history. It completely leaves out the title (!), authors and other links, which are definitely present in the HTML in multiple places!
I'd argue that Arxiv.org is a reasonable example in the age of webapps, so what gives?
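For reference, this is roughly how I ran it; the Hugging Face model ID and the chat-template call are my assumptions based on the model card, so treat the details with a grain of salt:

```python
# Rough sketch of running the 1.5b model with plain transformers.
# pip install transformers torch requests
import requests
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jinaai/reader-lm-1.5b"   # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Raw HTML goes in as the user message; Markdown is expected back.
html = requests.get("https://arxiv.org/abs/2306.03872").text
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": html}],
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```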
What is Google Flash? Do you mean Gemini Flash? If so, the article's point is that general-purpose LLMs do worse than this specialized LLM at Markdown conversion.
In this case it is not, though. As much as I'd like a self-hostable, cheap and lean model for this specific task, instead we have a completely inflexible model that I can't just prompt-tweak to behave better in even not-so-special cases like the one above.
I'm sure there are good examples of specialised LLMs that do work well (like ones trained on specific sciences), but here the model doesn't have enough language comprehension to understand plain English instructions. How do I tweak it without fine-tuning? With a traditional approach to scraping this is trivial, but here it's infeasible for the end user.
Unfortunately I'm not getting any good results for RFC 3339 (https://www.rfc-editor.org/rfc/rfc3339), exactly the kind of page where I think it would be great to convert the text into readable Markdown.
The end result is just like the original site, but without any headings and with a lot of whitespace still remaining (plus some non-working links inserted) :/
That's their existing API (which I also tried, with... less than desirable results). This post is about a new model, `reader-lm`, which isn't in production yet.
In real-world use cases, it seems more appropriate to use advanced models to generate suitable rule trees or regular expressions for processing HTML → Markdown, rather than directly using a smaller model to handle each HTML instance. The reasons for this approach include:
1. The quality of HTML → Markdown conversion results is easier to evaluate.
2. The HTML → Markdown process is essentially a more sophisticated form of copy-and-paste, where AI generates specific symbols (such as ##, *) rather than content.
3. Rule-based systems are significantly more cost-effective and faster than running an LLM, making them applicable to a wider range of scenarios.
These are just my assumptions and judgments. If you have practical experience, I'd welcome your insights.
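To make point 3 concrete, here is a toy sketch of what a rule table applied by a plain, fast program could look like; the rules themselves are made up for the example, and in practice a large model would help generate and maintain them offline:

```python
# Toy rule-based HTML -> Markdown conversion driven by a small rule table.
# pip install beautifulsoup4
from bs4 import BeautifulSoup

RULES = {            # tag -> (prefix, suffix)
    "h1": ("# ", "\n\n"),
    "h2": ("## ", "\n\n"),
    "p":  ("", "\n\n"),
    "li": ("* ", "\n"),
    "strong": ("**", "**"),
    "em": ("*", "*"),
}

def to_markdown(node) -> str:
    if isinstance(node, str):          # text node
        return node
    inner = "".join(to_markdown(child) for child in node.children)
    if node.name in RULES:
        prefix, suffix = RULES[node.name]
        return prefix + inner.strip() + suffix
    return inner                        # unknown tag: keep the text, drop the tag

html = "<h1>Title</h1><p>Some <strong>bold</strong> text.</p><ul><li>one</li><li>two</li></ul>"
print(to_markdown(BeautifulSoup(html, "html.parser")))
```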
An aligned future, for sure. Current commercial LLMs refuse to talk about “keeping secrets” (protection of identity) or pornographic topics (which, in the communities I frequent, made up of individuals who have been oppressed partly because of their sexuality, is an important subject). And uncensored AIs are not really a solution either.
Why is Claude 3.5 Sonnet missing from the benchmark? Even if the real reason is different and completely legitimate, or perhaps purely random, it comes across as "Claude does better than our new model, so we omitted it because we wanted the tallest bars on the chart to be ours". And as soon as the reader thinks that, they may start to question everything else in your work, which is otherwise genuinely awesome!
So the regex version still beats the LLM solution. There's also the risk of hallucinations. I wonder if they tried to make an SLM that would rewrite or update the existing regex solution instead of regenerating the whole content each time? That would mean fewer output tokens, faster inference, and output that can't contain hallucinations. Although I'm not sure small language models are capable of writing regex.
Not sure about the quality of the model's output. But I really appreciate this little mini-paper they produced. It gives a nice concise description of their goals, benchmarks, dataset preparation, model sizes, challenges and conclusion. And the whole thing is about a 5-10 minute read.
Feels surprising that there isn't a modern best-in-class non-LLM alternative for this task. Even in the post, they describe using a hodgepodge of headless Chrome, Readability, and lots of regex to create content-only HTML.
Best I can tell, everyone is doing something similar, differing only in the amount of custom, situation-specific regex being used.
How could it possibly be (a better solution) when there are X different ways to do any single thing in HTML(/CSS/JS)? If you have a website that uses a canvas to showcase the content (think a presentation or something like that), where would you even start? People are still debating whether the semantic web is important; not every page is UTF-8 encoded, etc. IMHO, small LLMs (trained specifically for this) combined with some other (more predictable) techniques are the best solution we are going to get.
Fully agree on the premise: there are X different ways to do anything on the web. But, prior to this, the solution seemed to be: everyone starts from scratch with some ad-hoc regex and plays a game of whack-a-mole to cover the first n of the X different ways to do things.
To the best of my knowledge there isn't anything more modern than Mozilla's Readability, and that's essentially a tool from the early 2010s.
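For what it's worth, here is roughly what that kind of Readability-based pipeline looks like in Python, with readability-lxml and markdownify standing in for the JS Readability and Turndown libraries; the URL and library choices are just examples, and the regex/heuristic patches described in the post would sit between the two steps:

```python
# Rough Python analogue of a Readability -> Markdown pipeline.
# pip install requests readability-lxml markdownify
import requests
from readability import Document
from markdownify import markdownify as md

html = requests.get("https://www.rfc-editor.org/rfc/rfc3339").text

doc = Document(html)                  # Readability-style content extraction
main_html = doc.summary()             # boilerplate (nav, ads, footers) stripped

markdown = md(main_html, heading_style="ATX")   # Turndown-style conversion
print(f"# {doc.title()}\n\n{markdown}")
```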
About their readability-markdown pipeline:
"Some users found it too detailed, while others felt it wasn’t detailed enough. There were also reports that the Readability filter removed the wrong content or that Turndown struggled to convert certain parts of the HTML into markdown. Fortunately, many of these issues were successfully resolved by patching the existing pipeline with new regex patterns or heuristics."
To answer their question about the potential of an SLM doing this: they see 'room for improvement', but as their benchmark shows, it's not up to their classic pipeline.
You echo their research question: "instead of patching it with more heuristics and regex (which becomes increasingly difficult to maintain and isn’t multilingual friendly), can we solve this problem end-to-end with a language model?"
I don't get the usage of "regex/heuristics" either. Why can that task not be completely handled by a classical algorithm?
Is it about the removal of non-content parts?