Obviously I use do.call all day long because R is my primary weapon, but even as a happy R user I have to admit it: Python with NumPy is faster. I would invite you to show me a single instance where R is faster than NumPy at bog-standard memory access. My example demonstrates exactly this. Can there be anything simpler than a matrix access? And if that's (8x) slower, everything built on this fundamental building block of all computing (accessing memory) will be slow too. It's R's primary weakness, and everybody knows it. Let me make it abundantly clear:

  > xx <- rep(0, 100000000)
  > system.time(xx[] <- 1)
     user  system elapsed 
    4.890   1.080   5.977 

  In [1]: import numpy as np
  In [2]: xx = np.zeros(100000000)                                               
  In [3]: %timeit xx[:] = 1
  1 loops, best of 3: 535 ms per loop

If the very basics, namely changing stuff in memory, are so much slower, then the entire edifice built on top of them will be slower too, no matter how much you mess around with do.call. And to address the issue of (slow, but quickly expandable) Python lists: recall that all of data science in Python is built on NumPy, so the comparison above is fair.
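
If you want to see why raw Python lists don't change the picture, here is a minimal sketch you can run yourself (it uses a smaller N than above so the boxed-float list fits comfortably in RAM; exact timings depend on your machine):

  import timeit
  import numpy as np

  N = 10_000_000  # smaller than the 1e8 above: a list of boxed floats is memory-hungry

  lst = [0.0] * N    # plain Python list: N pointers to boxed float objects
  arr = np.zeros(N)  # NumPy array: one contiguous block of unboxed doubles

  def fill_list():
      # per-element loop: every store goes through the interpreter
      for i in range(N):
          lst[i] = 1.0

  def fill_array():
      # vectorized fill: one C-level pass over contiguous memory
      arr[:] = 1.0

  print("list :", min(timeit.repeat(fill_list, number=1, repeat=3)), "s")
  print("numpy:", min(timeit.repeat(fill_array, number=1, repeat=3)), "s")

The point is the same as above: the vectorized fill touches contiguous memory once, while the list loop pays interpreter and boxing overhead on every element.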


