Hacker News

2-5x IS a lot. It's the speed difference between the current iPhone 14 and anywhere from an iPhone XS back to an iPhone 6. That's 4-8 years of hardware improvements.

And the parent was talking about numpy code, which is already faster than stock Python; who knows how far back plain Python would send you.




I'm in the camp that a 2-5x performance improvement is not really worth rewriting Python code in C for.


Guess that'll depend on how much you need the performance and how much code it is.

They're comparing numpy (SIMD plus parallelism) with straightforward C code and getting a 2-5x improvement.
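For context on why numpy is the baseline here rather than plain Python, a toy micro-benchmark (not the one from the article; the numbers and workload are made up for illustration) shows the gap between an interpreted loop and one vectorized numpy call:

```python
import time
import numpy as np

N = 1_000_000
xs = list(range(N))
arr = np.arange(N, dtype=np.int64)

# Pure-Python loop: every element bounces through the interpreter.
t0 = time.perf_counter()
total_py = sum(x * x for x in xs)
t_py = time.perf_counter() - t0

# numpy: one vectorized call runs a SIMD-friendly C loop under the hood.
t0 = time.perf_counter()
total_np = int(np.dot(arr, arr))
t_np = time.perf_counter() - t0

assert total_py == total_np
print(f"python loop: {t_py:.4f}s  numpy: {t_np:.4f}s  ({t_py / t_np:.0f}x)")
```

The remaining 2-5x on top of that is what hand-written C (or better cache behavior) can still buy you over numpy for some workloads.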


I highly doubt that numpy can ever be the bottleneck. In a typical Python app there are other things, like I/O, that consume resources and become the bottleneck before you run into numpy's limits and can justify a rewrite in C.


I haven't personally run into IO bottlenecks so I have no idea how you would speed those up in Python.

But there are two schools of thought I've heard from people regarding how to think about these bottlenecks:

1. IO/network is such a bottleneck that it doesn't matter if the rest is not as fast as possible.

2. IO/network is a bottleneck so you have to work extra hard on everything else to make up for it as much as possible.

I tend to fall in the second camp. If you can't work on the data as it's being loaded and have to wait till it's fully loaded, then you need to make sure you process it as quickly as possible to make up for the time you spend waiting.
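Where you *can* work on the data as it arrives, a standard pattern is a producer/consumer pair: a loader thread pushes chunks onto a queue while the main thread processes them, so IO wait overlaps with compute. A minimal sketch (the chunk sizes and the `time.sleep` standing in for IO latency are hypothetical):

```python
import queue
import threading
import time

def load_chunks(q, n_chunks=5):
    """Hypothetical chunked loader; in a real app this reads disk/network."""
    for i in range(n_chunks):
        time.sleep(0.05)  # simulated IO wait per chunk
        q.put(list(range(i * 100, (i + 1) * 100)))
    q.put(None)           # sentinel: no more data

def process(chunk):
    return sum(x * x for x in chunk)

q = queue.Queue(maxsize=2)       # bounded so the loader can't run away
t = threading.Thread(target=load_chunks, args=(q,))
t.start()

# Consume chunks as they arrive instead of waiting for the full load.
totals = []
while (chunk := q.get()) is not None:
    totals.append(process(chunk))
t.join()
print(sum(totals))
```

Threads work here despite the GIL because the loader spends its time blocked on IO, not executing Python bytecode.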


In my typical python apps, it's 0.1-20 seconds of IO and pre-processing, followed by 30 seconds to 10 hours of number crunching, followed by 0.1-20 seconds of post processing and IO.
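With a profile like that, Amdahl's law says speeding up the crunching is nearly the whole story. A back-of-envelope with hypothetical numbers picked from the ranges above:

```python
# Amdahl-style estimate: IO/pre/post is fixed, only compute gets faster.
io = 20.0          # seconds of IO + pre/post processing (upper end above)
compute = 3600.0   # one hour of number crunching
speedup = 3.0      # midpoint of the claimed 2-5x from a C rewrite

before = io + compute
after = io + compute / speedup
print(f"overall speedup: {before / after:.2f}x")  # close to the full 3x
```

The longer the crunching phase relative to IO, the closer the overall speedup gets to the raw compute speedup.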



