Hacker News

Technically, these are realistic benchmarks, as they execute your average web application code... or at least what used to remotely resemble one.

Comparing purely interpreted or dynamically typed languages with at best a weak JIT compiler (Python, JavaScript, Ruby, PHP, the BEAM family) to the ones that get compiled in their entirety to machine code (C#, Java, Go, C/C++/Rust, etc.) is not rocket science.

There is a hard ceiling on how fast an interpreter can go: it either has to parse text first (if it's a pure scripting language) and then walk the AST, or it has to interpret bytecode. Either way, that means spending dozens, hundreds, or in the worst case even thousands of CPU cycles on each individual operation.

Consider addition. It could be encoded in bytecode as, say, a single 0x20 byte followed by two numeric literals, each taking 4 bytes (i32). To execute that operation, an interpreter has to fetch the opcode and its arguments, then dispatch through a jump table (assuming it's an optimized interpreter like Python's) to the specific opcode handler, which then loads the two numbers into CPU registers, performs the addition, stores the result, and finally jumps back to fetch the next opcode, dispatch it, and so on and so forth.

Each individual step like this takes at least 10 cycles (or 25, or 100, depending on complexity) even on a fancy new big core, and can also incur hidden latency from how caches and memory prefetching work. By contrast, when the CPU executes machine code directly, a modern core can retire multiple additions per cycle, like 4 or even 6 on the newest cores. That alone means a 20-60x difference (since IPC is never perfect), and this is for the simplest operation, with just two operands and no data dependencies or other complicating conditions.

Once you know this, it becomes easier to reason about the overhead of interpreted languages.




So I take it that unless you're doing something which requires quick prototyping, or need a specific machine learning library, one should always avoid interpreted languages when performance is a concern?

I love C# and I'm really productive in it, but I've worked at so many places that have tried to get performance out of Python.



