Bringing Parsoid closer to core makes it easier for non-Wikimedia admins to use more of MW's modern core features. The performance gains are a nice bonus, and I suspect they're related to the general improvements in PHP 7.2.
I'd really like to read that. The decision to have this parser as a completely separate component is the main reason why a lot of local MediaWiki installations completely avoided having a visual editor -- which in turn probably cost lots of hours and/or left documentation missing, because WikiCreole ain't exactly a thing of beauty or something that's used in other places (as opposed to Markdown, which is an ugly beast, too, but at least it's the ugly beast you know).
You need a heavy JS frontend for a visual editor anyway, so why not do it client-side?
Having to deploy a separate component, probably in an environment that's not otherwise used at all, is pretty much the worst choice possible. Yes, I'm aware, you readers here probably do all kinds of hip docker/keights setups where yet another node microservice ain't nothing special (and should've been rewritten in Rust, of course), but a wiki is something on a different level of ubiquity.
But to whet your appetite: we used https://github.com/cscott/js2php to generate a "crappy first draft" of the PHP code for our JS source. Not going for correctness, instead trying to match code style and syntax changes so that we could more easily review git diffs from the crappy first draft to the "working" version, and concentrate attention on the important bits, not the boring syntax-change-y parts.
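To make that concrete, here's a toy illustration (invented for this comment, not actual Parsoid code) of the kind of mechanical "crappy first draft" such a tool produces -- same shape as the JS, just enough syntax changed to be PHP, so the diff to the hand-fixed version stays small and reviewable:

    <?php
    // Toy example, not real Parsoid code. The original JS might be:
    //
    //     function trimTokens(tokens) {
    //         return tokens.filter(t => t !== null).map(t => t.trim());
    //     }
    //
    // and the mechanically translated draft keeps that shape:
    function trimTokens( $tokens ) {
        return array_map(
            function ( $t ) { return trim( $t ); },
            array_values( array_filter( $tokens, function ( $t ) { return $t !== null; } ) )
        );
    }

    var_dump( trimTokens( [ ' a ', null, 'b ' ] ) ); // [ 'a', 'b' ]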
The original legacy MediaWiki parser used a big pile of regexps and had all sorts of corner cases caused by the particular order in which the regexps were applied, etc.
We made a deliberate choice to switch from JS-style loose typing to strict typing in the PHP port. Whatever you think the long-term merits are for maintainability, programming-in-the-large, etc., strict types were extremely useful for the porting project itself, since they caught a bunch of non-obvious problems where the types of things were slightly different in PHP and JS. JS used anonymous objects all over the place; we used PHP associative arrays for many of these places, but found it very worthwhile to take the time to create proper typed classes during the translation where possible; it really helped clarify the interfaces and, again, catch a lot of subtle impedance mismatches during the port.
We tried to narrow scope by not converting every loose interface or anonymous object to a type -- we actually converted as many things as possible to proper JS classes in the "pregame" before the port, but the important thing was to get the port done and complete as quickly as possible. We'll be continuing to tighten the type system -- as much for code documentation as anything else -- as we address code debt moving forward.
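To illustrate the kind of thing I mean (hypothetical names, not our actual classes): where the JS passed around an anonymous object, the port can define a small typed class, and PHP's type checks then catch shape mismatches at the boundary instead of deep inside the parser:

    <?php
    declare(strict_types=1);

    // Hypothetical example, not Parsoid's actual code.
    // JS-style loose shape: nothing stops a misspelled key or a wrong type.
    $loose = [ 'offset' => '42', 'tokens' => null ];

    // Typed replacement: the interface is explicit and enforced.
    class SerializeResult {
        /** @var int */
        public $offset;
        /** @var string[] */
        public $tokens;

        public function __construct( int $offset, array $tokens ) {
            $this->offset = $offset;
            $this->tokens = $tokens;
        }
    }

    $typed = new SerializeResult( 42, [ 'a', 'b' ] );
    // new SerializeResult( '42', null ); // TypeError under strict_types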
AMA, although I don't check hacker news frequently so I can't promise to reply.
is a not-too-atypical view of the process after the "initial working port" was done (post Aug 2019). Some nasty bugs fixed (https://github.com/wikimedia/parsoid/commit/34fcb4241aa0f3a0... a GC bug in PHP!), some more subtle bugs (PHP's crazy behavior of '$' at the end of a regexp, unless you use the 'D' flag), etc.
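For anyone curious, the '$' thing is PCRE's default where '$' also matches just before a trailing newline, which JS's '$' does not do; a quick demo:

    <?php
    // PCRE's '$' matches before a trailing newline unless the 'D'
    // (DOLLAR_ENDONLY) modifier is set.
    var_dump( preg_match( '/^foo$/', "foo\n" ) );   // int(1) in PHP
    var_dump( preg_match( '/^foo$/D', "foo\n" ) );  // int(0) with 'D'
    // In JS: /^foo$/.test("foo\n") === false, so a naive port silently
    // changes behavior on strings with trailing newlines.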
If you look through the history earlier in 2019, you'll even see JS commits like https://github.com/wikimedia/parsoid/commit/2853a90ceda7cdfa... which are to the JS code (in production at the time) preparing the way for the PHP port. In that particular case, our tooling was doing offset conversion between JS UTF-16 and PHP UTF-8 as part of the output-testing-and-comparison QA framework we'd built for the port, and it was getting hugely confused by Gallery since Gallery was using "bogus" offsets into the source text. Since fixing the offsets was rather involved (the patchset for this commit in gerrit went through 56 revisions : https://gerrit.wikimedia.org/r/505319 ) the change was first done on the JS side, thoroughly tested, and deployed to production to ensure it had no inadvertent effects, before that now-better JS code was ported to PHP. It would have been a disaster to try to make this change in the PHP version directly during the port.
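The offset mismatch itself is easy to demonstrate: JS offsets count UTF-16 code units, while PHP's plain string functions count UTF-8 bytes, so the "same" offset points at different places as soon as non-ASCII text shows up:

    <?php
    $text = "héllo [[wörld]]";
    // In JS, text.indexOf('[[') === 6 (UTF-16 code units).
    echo strpos( $text, '[[' ), "\n";                 // 7 -- UTF-8 bytes ('é' is 2 bytes)
    echo mb_strpos( $text, '[[', 0, 'UTF-8' ), "\n";  // 6 -- counting characters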
Now they've rewritten it in PHP, which is probably one of the worst languages out there. And why not rewrite in something compiled if speed was the main reason for a rewrite?
For me PHP sits in the middle as a poor language, and still slow compared to any compiled languages. Also, I would want to see some WASM vs PHP benchmarks they did before starting with PHP.
Lots of poor decisions from the wiki team.
After switching jobs and ending up in a PHP-based company, I can say that's not entirely true.
Not really, it does a lot of stuff and solves a lot of problems.
Kinda true but not really: PHP 7.x saw huge improvements, and rumor has it that PHP 8.x will be getting a JIT compiler.
Also, from my own observation, most of PHP's slowness derives from the fact that the usual approach to deploying a PHP web application means using PHP-FPM, which starts a whole new PHP interpreter for each request.
This in turn derives from the fact that PHP was born to create "dynamic websites", as in websites that were mostly static but with the occasional dynamic page.
IMHO some framework (Laravel? Symfony? some new player?) should try to start a single PHP process that handles requests and persists from one request to the next.
Starting a new PHP process is SUPER expensive: there's the whole fork+exec overhead, I/O to load data from disk, parsing and byte-compiling. Every single time. Even with opcache you might skip some of the last steps, but you'll have to re-load the cache on the next execution.
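A rough sketch of what I mean (plain sockets just to illustrate the idea; a real framework would sit on top of something like this): one long-lived process, so the code is loaded and compiled once rather than once per request:

    <?php
    // Minimal persistent worker: parse/compile once, then loop over requests.
    $server = stream_socket_server( 'tcp://127.0.0.1:8080', $errno, $errstr );
    if ( $server === false ) {
        die( "could not listen: $errstr\n" );
    }
    while ( $conn = stream_socket_accept( $server, -1 ) ) {
        fread( $conn, 8192 ); // ignore request details in this sketch
        $body = "handled by a persistent worker\n";
        fwrite( $conn, "HTTP/1.1 200 OK\r\nContent-Length: " . strlen( $body ) . "\r\n\r\n" . $body );
        fclose( $conn );
    }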
And by PHP I mean: if this rewrite was done partly because of speed, why not build the parser in a compiled language? It's just silly that you have to work with PHP, as it's one of the worst languages out there in terms of DX and features.
I assume it's 2020, where clients are "fast enough" to handle a syntax transformation like markdown -> HTML.
2020: mobile clients. Yes, they're where the rest of the web was in 2005. Yes, they're ubiquitous.
> "For me PHP sits in the middle as a poor language, and still slow compared to any compiled languages." No assumptions whatsoever.
It's not an assumption. PHP is slower than a compiled language. Simple and easy. Need speed? Do it in a compiled language. Period.
One of the things that comes to mind is rendering in formats other than HTML.
Wikibooks, for example, lets you render MediaWiki pages as PDF, and that's cool. But to do that you have to parse the page server-side.
I found this a little frightening given Parsoid/JS is handling user input.
For some reason, I did not manage to find it. Neither linked from this article, nor via the MediaWiki page:
Nor via the Phabricator page:
What am I missing?
parsoid source code => https://github.com/wikimedia/parsoid
A nice (unexpected) side effect is that it became much easier for people to extend the parser with their own syntax, leading to an explosion of plugins ( https://www.dokuwiki.org/plugins?plugintype=1#extension__tab... )
I'm no expert on parsing theory, but I have the impression that applying standard approaches to parsing source code -- building syntax trees, attempting to express it with a context-free grammar, etc. -- is the wrong approach for parsing wiki markup, because it's context-sensitive. There's some discussion of the problem here https://www.mediawiki.org/wiki/Markup_spec#Feasibility_study
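A toy example of what I mean (mine, not from the feasibility study): what an apostrophe run in wikitext means depends on the rest of the line, which is awkward to express as a local, context-free rule:

    <?php
    // ''x'' is italics, '''x''' is bold, '''''x''''' is both, and some
    // apostrophes are just literal text; the legacy parser's quote-handling
    // pass decides by scanning the whole line.
    $lines = [
        "''italic''",
        "'''bold'''",
        "'''''bold italic'''''",
        "L'oiseau ''l'exemple'' classique",  // literal apostrophes mixed in
    ];
    foreach ( $lines as $line ) {
        preg_match_all( "/'{2,}/", $line, $m );
        echo $line, ' => ', count( $m[0] ), " apostrophe run(s)\n";
    }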
Another challenge for wiki markup, from a usability perspective: if a user gets part of the syntax of a page "wrong", you need to show them the end result so they can fix the problem, rather than have the entire page "fail" with a syntax error.
From looking at many wiki parsers before rewriting the DokuWiki parser, what _tends_ to happen when people try to apply context-free grammars or build syntax trees is that they reach 80%, then stumble on the remaining 20% of edge cases in how wiki markup is actually used in the wild.
Instead of building an object graph, the DokuWiki parser produces a simple flat array representing the source page ( https://www.dokuwiki.org/devel:parser#token_conversion ), which I'd argue makes it simpler to write code for rendering output (hence lots of plugins), as well as more robust at handling "bad" wiki markup it might encounter in the wild -- less chance of some kind of infinite recursion or similar.
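Roughly what that looks like (simplified and from memory -- see the linked docs for the real instruction format): a flat array of handler names plus arguments, rendered in a single pass with no recursion over an object graph:

    <?php
    // Simplified, illustrative instruction list in the spirit of DokuWiki's.
    $instructions = [
        [ 'header',       [ 'My page', 1 ] ],
        [ 'p_open',       [] ],
        [ 'cdata',        [ 'Some ' ] ],
        [ 'strong_open',  [] ],
        [ 'cdata',        [ 'bold' ] ],
        [ 'strong_close', [] ],
        [ 'cdata',        [ ' text.' ] ],
        [ 'p_close',      [] ],
    ];

    // A renderer is just one method per instruction name; here we only print.
    foreach ( $instructions as [ $call, $args ] ) {
        echo $call, '(', implode( ', ', array_map( 'json_encode', $args ) ), ")\n";
    }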
Ultimately it's a similar discussion to the SAX vs. DOM discussions people used to have around XML parsing ( https://stackoverflow.com/questions/6828703/what-is-the-diff... ). From a glance at the Parsoid source they seem to be taking a DOM-like approach -- I wish them luck with that -- my experience was this will probably lead to a great deal more complexity, especially when it comes to edge cases.
Seems like we've learned many of the same lessons building our parsers. Markup parsers do seem to be a unique thing, not really like parsing either programming languages or natural languages. If we ever meet I'm sure we could happily share a beverage of your choice trading stories.
Good code uses small functions that each do a simple thing, and then you combine those functions; it will look similar in most programming languages.
Why are web engineers snobs? Get the job done and move on.
This is how I would write the function definition:
function html2wikitext($config, $html, $options = [], $data = null)
public function html2wikitext(
PageConfig $pageConfig, string $html, array $options = [],
?SelserData $selserData = null
Not totally sure, but this seems to be the old JS function definition:
_html2wt = Promise.async(function *(obj, env, html, pb)
This is fine in smaller codebases, 'your' code, and code that you can read to a point where you can extrapolate these variables from the implementation, but this simply does not scale beyond a certain code size -- or more importantly, a certain number of contributors.
It can be compensated for with documentation (phpDoc), but that is just as verbose if not more so than adding type information -- although you should probably do both.
Type systems come into place where you are not expected anymore to fully comprehend the code. They are useful when you are just a consumer / user of this function and all you want to do is convert some html to wiki text without having to understand the internals of that particular function (and whatever else goes on beyond it). Types are documentation, prevent shooting yourself in the foot, reduce trial-and-error, and avoid the user having to read and comprehend hundreds - thousands of lines of code.
public function html2wikitext(
array $options = [],
?SelserData $selserData = null
That being said, I think the Python way of formatting type annotations ("variable : type") is more readable than C-style "type variable = ...", especially when the annotation is optional.
Further, documentation: "of course everybody knows what you're supposed (and forbidden) to pass into $data" -- NOT. Even if it's just you writing the code, the you-plus-one-year will have trouble reading it (been there), even when it's supposedly documented. If you have an explicit data structure, this becomes far more evident, even before any documentation (note: not replacing it).
I'm not interested in playing computer in my head any more, juggling internal state that's completely superfluous to me: am I a higher primate? Yes. Are higher primates tool users? Also yes. Should I let machines do the menial tasks for me, leaving me to do the creative ones? A hundred times yes.
(NB: this is not a silver bullet - e.g. won't help against logic errors - but it's a useful guard against going completely off the rails)
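A tiny self-contained demo of that guard-rail point (the names echo the signature quoted above, but the body is made up): with strict types, an argument mix-up dies immediately with a TypeError instead of silently coercing and failing somewhere downstream:

    <?php
    declare(strict_types=1);

    class PageConfig {}

    function html2wikitext( PageConfig $config, string $html, array $options = [] ): string {
        return strip_tags( $html ); // stand-in body for the demo
    }

    echo html2wikitext( new PageConfig(), '<b>hi</b>' ), "\n"; // "hi"
    // html2wikitext( '<b>hi</b>', new PageConfig() );          // TypeError if uncommented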
function html2wikitext($config, $html, $options = [], $data)
You: I like X!
> I see this "strictness over readability" on the rise in many places and I think it is a net negative.
which makes your tastes a bit more... absolute, so to speak. So it makes sense to me that people reply to your negative view with counter-arguments.
For lone wolf coding or rapid prototyping the equation is different.
But yes, the people using it should learn enough before they start contributing, and in a lot of places it's preferred to have people who barely know the basics of what they use (because they are "cheap" and "easily replaceable").
> The two wikitext engines were different in terms of implementation language, fundamental architecture, and modeling of wikitext semantics (how they represented the "meaning" of wikitext). These differences impacted development of new features as well as the conversation around the evolution of wikitext and templating in our projects. While the differences in implementation language and architecture were the most obvious and talked-about issues, this last concern -- platform evolution -- is no less important, and has motivated the careful and deliberate way we have approached integration of the two engines.
Which is I suppose a compelling reason for a rewrite if you're understaffed.
I'd still be interested in writing it in Rust and then writing PHP bindings. There's even the possibility of compiling it to WASM, running it in the browser, and skipping the roundtrip for evaluation.
From the article: "However, by 2015, as VisualEditor and Parsoid matured and became established, maintaining two parallel wikitext engines in perpetuity was untenable"
They didn't write it in PHP for speed, that was merely a side effect. They wrote it in PHP so they could have a single language for the system.
I assume that Wikimedia works on a rather tight budget. Choosing (and unifying on) tech stacks with a larger supply in devs seems to be an economically reasonable choice.
The other side to using PHP was having support in other host providers. Wikipedia is not the only installation of MediaWiki, and there has been consideration in the past for those installing MediaWiki on shared hosts where you don't necessarily have root access to install things like node. Moving forward that's less of a concern because you can containerise MediaWiki (and the other services), but not even Wikimedia runs that in production yet AFAIK.
However, even if they weren't budget constrained (which they aren't), unifying on a single language used by the majority of their devs isn't a bad idea, especially when the effort to port the entire stack to a new language would be unjustifiable.
Server-side JS was a thing 10 years ago, but it didn't offer enough benefits to switch. Same with Python, Java, Ruby -- all existed, but didn't offer enough benefits to switch then, and probably still don't now.
Also, what would be a "larger supply"? C? Java? C#? JS? PHP has a huge supply of developers at all skill levels, which may make it just as easy (or easier) to find the talent they need. And... hey -- they wrote that initial Parsoid in JS and... they've doubled the speed by converging on PHP.
But in general, I agree with you. Regular expressions aren't hard, and there's no excuse for not learning to read and use them.
Plus there are so many cases where people build insane regexes for jobs where they're just the wrong tool, e.g. parsing/extracting or manipulating HTML. It always starts out with "I just need the src from that <img>, what could go wrong?" and ends in despair, because you never just need that src, you never only deal with perfect HTML, and you'd be done already if you had just used some DOM parser.
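For the <img> example, something along these lines (plain DOMDocument; the error suppression is just for the sketch) copes with the messy markup a regex chokes on:

    <?php
    $html = '<p>Hi <img src="a.png" alt="x > y"> and <img
        src = "b.png"></p>';
    $doc = new DOMDocument();
    @$doc->loadHTML( $html );  // tolerate imperfect, real-world HTML
    foreach ( $doc->getElementsByTagName( 'img' ) as $img ) {
        echo $img->getAttribute( 'src' ), "\n";  // a.png, b.png
    }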
Regexes are easy to understand if you write them, but reading them can take lots of time.