There are lots of attempts to write new Wikipedia parsers that just do "the useful stuff", like getting the text. They all fail, for the simple reason that some of the text comes from MediaWiki templates.
E.g.
about {{convert|55|km|0|abbr=on}} east of
will turn into
about 55 km (34 mi) east of
and
{{As of|2010|7|5}}
will turn into
As of 5 July 2010
and so on (there are thousands of relevant templates). It's simply not possible to get the full plain text without processing the templates, and the only system that can correctly and completely parse the templates is MediaWiki itself.
Yes, it's a huge system entirely written in PHP, but you can make a simple command-line parser with it pretty easily (though it took me quite a while to figure out how). The key points are to put a MediaWiki bootstrap include at the start of it and then use the Parser class; a rough sketch is below. You get HTML out, but it's simple and well-formed (to get the text, start with the top-level p tags).
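A minimal sketch of what such a script could look like, assuming it lives in the root of a working MediaWiki install with LocalSettings.php configured and the templates already imported. The commandLine.inc bootstrap and the Parser/ParserOptions/Title calls follow the older 1.x-era API; newer releases bootstrap through maintenance/Maintenance.php and get the parser from a service container instead:

    <?php
    // parse.php - read wikitext on stdin, print HTML on stdout.
    require_once __DIR__ . '/maintenance/commandLine.inc'; // loads LocalSettings.php etc.

    $wikitext = file_get_contents( 'php://stdin' );

    $title   = Title::newFromText( 'CommandLineParse' ); // dummy title for parse context
    $options = new ParserOptions();

    $parser = new Parser();
    $output = $parser->parse( $wikitext, $title, $options ); // returns a ParserOutput

    // The HTML fragment is simple and well-formed; the plain text
    // sits in the top-level <p> tags.
    echo $output->getText();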
To get it to process templates, get a Wikipedia dump, extract the templates, and use the mwdumper tool to import them into your local MediaWiki database.
I don't know if this is the best or "right" way to do it, but it's the only way I've found that actually works.
>> To get it to process templates, get a Wikipedia dump, extract the templates, and use the mwdumper tool to import them into your local MediaWiki database.
Could you please explain this more? Specifically, what is meant by "extract" the templates? From what I gather from your message, you are proposing using MediaWiki itself to process the templates and produce something closer to plain text (within the HTML output).
The dumps of Wikipedia come as big bzip2ed XML files containing all articles, templates, etc., each in a "page" tag. The templates have titles starting with "Template:", so they are easy to detect. It's these page tags that need to be copied to a new XML file, along with the header and footer from the original; a rough sketch of that step follows.
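Here is roughly what that extraction could look like, assuming PHP's XMLReader and an already-decompressed dump; the file names and the placeholder root element are illustrative, and a real script should copy the original <mediawiki> header and <siteinfo> block verbatim:

    <?php
    // extract_templates.php - stream a pages-articles dump and copy
    // every <page> whose title starts with "Template:" to a new file.
    $in = new XMLReader();
    $in->open( 'enwiki-pages-articles.xml' ); // bunzip2'd dump

    $out = fopen( 'templates.xml', 'w' );
    fwrite( $out, "<mediawiki>\n" ); // placeholder: copy the real header from the dump

    while ( $in->read() ) {
        if ( $in->nodeType !== XMLReader::ELEMENT || $in->name !== 'page' ) {
            continue;
        }
        $page = $in->readOuterXml(); // the whole <page>...</page> element
        // Cheap check; a stricter version would inspect the <title> node itself.
        if ( strpos( $page, '<title>Template:' ) !== false ) {
            fwrite( $out, $page . "\n" );
        }
    }

    fwrite( $out, "</mediawiki>\n" ); // placeholder: copy the real footer
    fclose( $out );
    $in->close();

The resulting templates.xml can then go through mwdumper, which converts the pages to SQL for loading into the local MediaWiki database (its manual documents an output mode along the lines of --format=sql:1.5 that can be piped straight into mysql).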
>> From what I gather from your message, you are proposing using MediaWiki itself to process the templates and produce something closer to plain text (within the HTML output).
Correct. The MediaWiki parser outputs HTML containing all the text to be displayed, including that generated by templates.
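For instance, the {{convert}} snippet from the top of the thread comes back as something roughly like this (exact markup varies with the MediaWiki version and parser options):

    <p>about 55 km (34 mi) east of</p>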
It's part of MediaWiki and available for each and every Wikipedia subsite, as far as I can tell. We are using this as well, to autocomplete data, and it works really well.
For anyone who might find it useful, I wrote this really simple spidering tool in Go; it comes in handy when you just want a small subgraph of Wikipedia.
That looks really good and neat! I am currently working on a project that uses information from Wikipedia articles and having a parser such as yours would make things a lot easier.
I am currently on vacation for the next 2 weeks, but I'd like to fork your project when I get back. Let me know if there is anything you need help with (bug fixes or new features).