
While I find it good that the article explicitly addresses issues with trace-based compilation (usually this is not the case), a completely fair account would also present the additional memory requirements of the PyPy tool chain. Quite recently, somebody here addressed this point by mentioning that they do not really care about a performance speedup if the memory requirements become outlandish at the same time.

It would also be very informative to know how the automatic memory management techniques differ (i.e., what did the previous implementation do?). Personally, I am also interested in interpreter optimization techniques, so it would be interesting to know whether (and how) the previous VM used, for example, threaded code or something along those lines.
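For anyone unfamiliar with the term: threaded code replaces the per-instruction opcode dispatch (a big switch or dict lookup in the main loop) with a one-time translation of the bytecode into a sequence of handler references, which the loop then calls directly. A minimal, purely illustrative sketch in Python (the opcodes and handlers here are made up for the example; real threaded-code interpreters are typically written in C with computed gotos):

```python
# Illustrative sketch of "indirect threaded" dispatch.
# The bytecode is translated ONCE into a list of (handler, arg) pairs,
# so the main loop calls the next handler directly instead of looking
# up the opcode on every iteration.

def op_push(stack, arg):
    stack.append(arg)

def op_add(stack, arg):
    b = stack.pop()
    a = stack.pop()
    stack.append(a + b)

HANDLERS = {"PUSH": op_push, "ADD": op_add}

def thread(bytecode):
    """Translate (opcode, arg) pairs into (handler, arg) pairs."""
    return [(HANDLERS[op], arg) for op, arg in bytecode]

def run(threaded_code):
    stack = []
    for handler, arg in threaded_code:
        handler(stack, arg)  # direct call; no opcode dispatch per step
    return stack

program = [("PUSH", 2), ("PUSH", 3), ("ADD", None)]
print(run(thread(program)))  # [5]
```

The point is that the dispatch cost (the `HANDLERS[op]` lookup) is paid once at translation time rather than on every executed instruction.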




Yeah: I wish speed.pypy.org also compared memory use. Admittedly, that's more complicated.


It doesn't, for a very simple reason: we lack benchmarks that have a reasonable memory impact. Right now most benchmarks would mostly measure interpreter size, which is slightly boring.


Hm, I agree that benchmarks with real memory impact would be more compelling, but wouldn't it be at least interesting to show the memory impact as it is right now? (i.e., how much more memory does PyPy need?)
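For what it's worth, reporting a benchmark's Python-level memory impact alongside its runtime is straightforward with the stdlib `tracemalloc` module. A hedged sketch (the `workload` function is a made-up stand-in for a benchmark; note that `tracemalloc` only traces allocations of Python objects, so it deliberately excludes the interpreter's own footprint, which is exactly the "measuring interpreter size" issue mentioned above):

```python
# Sketch: measure wall-clock time and peak Python-object allocations
# for a benchmark function. tracemalloc traces Python-level allocations
# only, so the interpreter's baseline memory is not included.
import time
import tracemalloc

def benchmark(fn):
    tracemalloc.start()
    t0 = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - t0
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

def workload():
    # Hypothetical stand-in for a real benchmark.
    data = [list(range(1000)) for _ in range(100)]
    return sum(len(row) for row in data)

elapsed, peak = benchmark(workload)
print(f"time: {elapsed:.4f}s, peak allocations: {peak} bytes")
```

Comparing that peak figure across interpreters would at least quantify "how much more memory" a given workload needs, separately from the fixed interpreter overhead.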



