
That's interesting work, thank you very much. I would be very interested in the performance gain of using the GraalVM JIT compared to the original Squeak (Cog) or e.g. VisualWorks, whose performance is comparable to CPython (https://benchmarksgame-team.pages.debian.net/benchmarksgame/...). I'm aware of another publication which concludes that the performance gain of a Truffle-based solution is about the same as that of an RPython-based one. Did you also have a look at this? And finally: couldn't you improve performance even more by using a more optimized bytecode set (instead of the Blue Book one, as was e.g. done here: http://strongtalk.org/downloads/bctable.pdf)?

We first compared the runtime performance of GraalSqueak with the OpenSmalltalkVM (formerly Cog) and RSqueak (an RPython-based VM) in Figure 4 of our MPLR'19 paper [1]. Since then, we've worked on a couple of performance optimizations. At the moment, I think GraalSqueak is only slower in the DeltaBlue benchmark; in all others, it's (often significantly) faster than the OpenSmalltalkVM. However, please also note its limitations, which we discussed in Section 5.2, especially with regard to Smalltalk's interrupt handler and the partial evaluation performed by the Graal compiler.

So far, I'd say our observations w.r.t. RPython match the ones discussed in "Tracing vs. Partial Evaluation" [2]: language implementers have to do more work in Truffle, but probably get better peak performance in return (not talking about warmup or memory consumption here).

RE "a more optimized bytecode" set: Sista [3] (an extension of the OpenSmalltalkVM) is doing something like that, too. I'd guess performance would be more or less the same using GraalVM as Truffle produces highly specialized code. But, of course, this would need to be benchmarked. The advantage of the Sista approach is that it's managed on the image level, so specialized versions of methods are persisted as part of the image. AFAIK the GraalVM team is working toward persisting compiled code caches, which is kind of similar but on the level of the language implementation framework.
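To illustrate what "highly specialized code" means here, below is a minimal sketch in plain Java (this is not GraalSqueak or Truffle code; the class and method names are made up for illustration). It mimics the core idea behind Truffle's node rewriting: a generic operation observes its operands at runtime and rewrites itself into a faster, type-specialized version, so later calls take the cached fast path.

```java
import java.util.function.BinaryOperator;

// Hypothetical sketch of self-specializing dispatch, in the spirit of
// Truffle node rewriting. A real Truffle node would use the Truffle DSL
// (@Specialization) and be compiled by Graal via partial evaluation.
public class SpecializingAdd {
    // Starts uninitialized; after the first call it caches a handler
    // specialized for the operand types it observed.
    private BinaryOperator<Object> specialized;

    public Object execute(Object left, Object right) {
        if (specialized == null) {
            // "Node rewriting", heavily simplified: pick a specialization once.
            specialized = specializeFor(left, right);
        }
        return specialized.apply(left, right);
    }

    private static BinaryOperator<Object> specializeFor(Object l, Object r) {
        if (l instanceof Integer && r instanceof Integer) {
            // Fast path: plain integer addition, no generic dispatch.
            return (a, b) -> (Integer) a + (Integer) b;
        }
        // Generic fallback, e.g. string concatenation.
        return (a, b) -> a.toString() + b.toString();
    }

    public static void main(String[] args) {
        SpecializingAdd add = new SpecializingAdd();
        System.out.println(add.execute(2, 3));   // specializes to int addition
        System.out.println(add.execute(40, 2));  // reuses the cached fast path
    }
}
```

The Sista approach differs in where such specialized versions live: they are persisted at the image level, whereas in a Truffle-based system the specializations exist inside the language implementation framework and are lost unless the compiled-code cache is persisted separately.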

Hope this answers your questions!

[1] https://fniephaus.com/2019/mplr19-graalsqueak.pdf [2] https://stefan-marr.de/papers/oopsla-marr-ducasse-meta-traci... [3] https://hal.inria.fr/hal-01596321/document

Thank you for your detailed, interesting answer. I look forward to reading the referenced papers. At the moment I am building frontends for LuaJIT and would like to compare their performance with analogous implementations in RPython and Truffle. Maybe I will build a Smalltalk-80 frontend and then compare it to the implementations you mentioned.

> ... VisualWorks the performance of which is comparable to CPython...
I already posted the link in my comment.

No, I posted a different link.

In what respect is it different? What did you intend to show?

You posted a link to normalized box-plot averages (which-programs-are-fastest.html) for 28 different language implementations.

I posted a link to side-by-side measurements (vw-python3.html) of individual Smalltalk and Python programs.

Ok, I see. It's a pity Pharo is not on the diagram.

You should make the measurements you want to see, and publish them in the way you want to see them.

It's not my site, and the proponents of Pharo and Smalltalk should themselves have the greatest interest in seeing their language presented optimally.

You're the one saying what you think should be shown.
