
The MoarVM and JVM backend supporting code was, and still is, very similar.

The Parrot backend supporting code was significantly different. So much so that to properly support Rakudo on Parrot would require a rewrite of it.

There are various reasons for that, but the main takeaway was that continuing to support Parrot made everything significantly more difficult.

(Even now, the JavaScript backend code is more similar to the other two than the Parrot backend code ever was.)

---

Parrot made a lot of assumptions about how objects worked.

It simultaneously knew too much about how objects worked, and not enough. It basically assumed that the meta-object protocol would be written in C for every object type.

MoarVM doesn't know how objects work. It gets told how they work every time it runs. (Necessary since Raku can mutate its own object system.)
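To make that concrete, here is a minimal Raku sketch of the kind of runtime mutation the VM has to accommodate (`Point` and `magnitude` are made-up names, not anything from Rakudo's internals):

    class Point { has $.x; has $.y }

    # Reach through the meta-object protocol to add a method at run time,
    # then recompose the class so the new method becomes visible.
    Point.^add_method('magnitude', method () { sqrt(self.x ** 2 + self.y ** 2) });
    Point.^compose;

    say Point.new(x => 3, y => 4).magnitude;   # 5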

---

Rakudo on Parrot was slow. If you happened to use a Unicode character in your source file it got dog-slow.

Worse yet, Unicode was an optional feature of Parrot! WTF!!

That's not a good backend for a language that has excellent Unicode support.

---

Not only did the NQP intermediate code for supporting Parrot need to be rewritten from scratch; a large portion of Parrot also needed significant work.

The main idea of Parrot was that there would be integers, numbers, strings, and "other". It turns out that "other" was really the main category. And again, Parrot was opinionated about what "other" even was.

It was thought that if something like integer bytecode was similar to machine code, it would be easier to optimize.

The problem is that something as simple as calling a function involves taking a Signature object and asking it if it accepts a given Capture object. Since even just adding two values together is a function call, making objects fast is more important than special-casing integers.
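You can poke at both halves of that from Raku itself; a minimal sketch (the exact candidate count varies by Rakudo version):

    # Addition is itself a call to a multi routine:
    say &infix:<+>.candidates.elems;   # many candidates; the number varies by version

    # Binding a call means asking a Signature whether it accepts a Capture:
    my $sig = :(Int $a, Int $b);
    say $sig.ACCEPTS(\(1, 2));         # True
    say $sig.ACCEPTS(\(1, 'two'));     # False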

MoarVM is basically that "other" category, first and foremost.

By that I mean that if you took the special-casing of integers, numbers, and strings out of Parrot, then redesigned the PMC code to support what Rakudo really needed, you would end up with something very similar to MoarVM.

The reason you might take that special-casing out is that making bytecode similar to machine code was completely and utterly pointless, at least as far as Raku is concerned.

It doesn't matter much that integer operations are fast if you only ever run one of them in a row.

The reason that MoarVM isn't extremely slow is that it knows enough about objects to pull them apart and run only the parts that need to run.

---

I think that Parrot as initially designed may have been a really good fit for a Perl 5+, Ruby, Python, or Lua; but it turns out that it really wasn't a good fit for what Perl 6 eventually became.

---

Let's do a hypothetical thought experiment.

Imagine if someone did all of the work to make Parrot work again the day after it was dropped. That includes rewriting the middleware and doing enough to make the PMC part usable. Then it would have continued to work over the years.

I'm fairly certain that it would today be tied for 2nd or 3rd with the JVM or JavaScript backends. (If not 4th.)

If the Unicode support of Parrot got better and faster, I think it would be more likely to be tied for 2nd.

For it to beat, or even match, MoarVM for first, it would end up needing so many changes that I'm not sure it would even resemble what it once was.

---

I would have liked for Parrot to survive and be one of the backends that Rakudo runs on.

But dropping it was the correct move.




> Rakudo on Parrot was slow. If you happened to use a Unicode character in your source file it got dog-slow.

> Worse yet, Unicode was an optional feature of Parrot! WTF!!

This is nonsense, and I say that as the person who spent months profiling Rakudo to figure out why it was slow.

Rakudo's parser was (maybe still is) slow because it can't optimize anything, even the <ws> token.
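For anyone who hasn't written Raku grammars: `rule` implicitly calls `<.ws>` after every atom, so <ws> runs between nearly every pair of tokens in a parse, and an unoptimizable <ws> taxes the whole parse. A minimal sketch (`Greeting`, `hello`, and `name` are made-up names):

    grammar Greeting {
        rule TOP    { <hello> <name> }   # 'rule' inserts an implicit <.ws> call after each atom
        token hello { hi | hello }
        token name  { \w+ }
    }
    say Greeting.parse('hello world');   # ｢hello world｣ with hello and name sub-captures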

Adding NFG (Normal Form Grapheme) could have helped by allowing fixed-width access to normalized codepoints, but IIRC Patrick told the person who was going to implement NFG not to do it.
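(For context: NFG is what now lets MoarVM treat every grapheme as a single fixed-width "character". A quick illustration:)

    my $s = "e\x[0301]";   # 'e' followed by COMBINING ACUTE ACCENT
    say $s.chars;          # 1 -- one grapheme, thanks to NFG
    say $s.codes;          # 2 -- two underlying codepoints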

Seems like a pattern.


You might be right about why Unicode was slow.

In which case I was right.

> So much so that to properly support Rakudo on Parrot would require a rewrite of [the middleware].

Which also means that most of the work you would have done to Parrot would have needed to be reworked again afterwards.

There were `ifdefs` all over the NQP and Rakudo codebases to work around Parrot's differences, which was annoying and error-prone.

The `ifdefs` are now mostly in NQP, and even those tend to be fairly constrained.
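From memory, those conditionals look roughly like this in the NQP sources (a sketch of the marker style, not a verbatim excerpt):

    #?if parrot
    # code that was only emitted for the Parrot backend
    #?endif
    #?if moar
    # code only emitted for MoarVM
    #?endif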

---

> Rakudo's parser was (maybe still is) slow because it can't optimize anything, even the <ws> token.

That is factually incorrect. There are several known optimizations that have not been implemented yet, one of which was even in STD.

Also, since Raku treats regexes as code, optimizations to regular code paths can also apply to regexes. That includes optimizations to method calls, such as calls to the <ws> token.
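"Regexes are code" is literal in Raku: a Regex is a Method, which is ultimately a Code object, so optimizer work on method calls carries over to tokens like <ws>:

    say Regex.^mro;
    # ((Regex) (Method) (Routine) (Block) (Code) (Any) (Mu))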

The main reason there haven't been many is that the people who are competent and confident enough to do that work have been busy with other things: both their day jobs and other optimization or design work.

Really, as far as I know there have been next to no attempts to optimize regexes and grammars specifically since they first reached the current feature set. Certainly not in the several years when I was looking over every commit to Rakudo.



