I think it's especially odd that they take only the time for the actual addition of the arrays into account.
Initializing the arrays already takes far longer than adding them in a list comprehension, so if you're doing one-off additions of lists, converting them to np arrays is most likely not worth it.
Here are a couple of quick results I got with the same data setup as in the article:
List comprehension with zip: 4.748612998810131e-05s
Just building the np.arrays: 0.00013617933000205086s
Building and adding np.arrays: 0.00013746836000791518s
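For context, here's a rough sketch of how one could reproduce that kind of measurement with the timeit module -- the data setup here is an assumption (two plain lists of 1000 random ints, like in the session further down), not necessarily the article's exact setup:

import random
import timeit
import numpy as np

# assumed setup: two plain Python lists of 1000 random ints
r = [random.randrange(100) for _ in range(100000)]
x, y = random.sample(r, 1000), random.sample(r, 1000)

# 1) pure-Python addition with a list comprehension over zip
t_list = timeit.timeit(lambda: [i + j for i, j in zip(x, y)], number=10000)

# 2) just converting the lists to np.arrays, no addition
t_build = timeit.timeit(lambda: (np.array(x), np.array(y)), number=10000)

# 3) converting and then adding the arrays
t_build_add = timeit.timeit(lambda: np.array(x) + np.array(y), number=10000)

print(f"list comprehension:     {t_list / 10000:.3e}s per call")
print(f"building the np.arrays: {t_build / 10000:.3e}s per call")
print(f"building and adding:    {t_build_add / 10000:.3e}s per call")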
Well, given that they didn't even use pythonic constructs, I'm not quite sure what to think of the article:
In [1]: import random
In [2]: r = [random.randrange(100) for _ in range(100000)]
In [3]: x, y = random.sample(r, 1000), random.sample(r, 1000)
In [4]: %timeit z = [x[i] + y[i] for i in range(1000)]
106 µs ± 1.28 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [5]: %timeit z = [i + j for i, j in zip(x, y)]
67.3 µs ± 3.38 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(under Python 3.6.3)
For those who "don't see it": instead of looping over a zip of the iterables, as they should be, they loop over an "index range" and access each element by index -- not best practice, and it also results in a noticeable slowdown.
Edit: And for the curious who might think the two produce different results:
In [6]: z1 = [x[i] + y[i] for i in range(1000)]
In [7]: z2 = [i + j for i, j in zip(x, y)]
In [8]: z1 == z2
Out[8]: True
Final edit: all in all, I'd say this is a low-effort post, aimed at gathering attention and showing "look how good we are that we know how to speed up Python loops using numpy" (/s)...