The phrase "raw numpy" strikes me as funny. I would figure that's about as abstract as you could get while still working with the math (discarding symbolic engines).

 Yes, after implementing a simple neural network in C (with AVX and pthreads), "raw numpy" does sound funny! On the other hand, try implementing a convnet in numpy, especially the backprop, and you might start feeling some of its "rawness" :-)
 As someone who also hand-coded a neural network implementation in C (forward and backprop, as well as an RNN), yeah, "raw numpy" is a joke. Something I've always hated about people who say "why do I have to write a backprop when TF does it for me?" Here's why: you go to a company, and they want you to incorporate machine learning into their C++ engine. Have fun using numpy. You said you knew machine learning, right? Implement backprop for me; you can do that, right?
 I know what you mean. To take partial derivatives with respect to the filter parameters of a correlation (or convolution), it's simpler to go down to the component level. However, it's hard to get back up to the matrix/vector level after doing so (to write the operations in NumPy). I'm developing a model (not exactly a convnet) that uses a correlation step. Because of the above problem and the resulting pure-Python loops, I may have to Cythonize or use the NumPy C API for the gradient evaluation. Do you know of any examples I could check out that implement partial derivatives w.r.t. a correlation (or convolution) in "raw numpy"?
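 For what it's worth, in the 1-D case there is a tidy identity that keeps everything at the NumPy level: the gradient of a "valid" correlation with respect to the filter is itself a correlation of the input with the upstream gradient. A minimal sketch (function names are mine, and this is the simplest 1-D/"valid" setting, not a general convnet layer):

```python
import numpy as np

def correlate_valid(x, w):
    """'Valid' 1-D cross-correlation: y[i] = sum_k x[i+k] * w[k]."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def grad_wrt_filter(x, w, dy):
    """Gradient of a scalar loss w.r.t. w, given upstream gradient dy = dL/dy.
    Componentwise: dL/dw[k] = sum_i dy[i] * x[i+k],
    which is just a correlation of x with dy."""
    return correlate_valid(x, dy)

# Check against central finite differences on the loss L = y . dy.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w = rng.standard_normal(3)
dy = rng.standard_normal(6)  # len(x) - len(w) + 1 = 6 outputs
eps = 1e-6
numeric = np.array([
    (correlate_valid(x, w + eps * np.eye(3)[k]) @ dy
     - correlate_valid(x, w - eps * np.eye(3)[k]) @ dy) / (2 * eps)
    for k in range(3)
])
print(np.allclose(grad_wrt_filter(x, w, dy), numeric))  # True
```

 The same pattern generalizes to 2-D (and to the gradient w.r.t. the input, which becomes a "full" correlation with a flipped filter), so the loops can usually be replaced with stride tricks or im2col-style reshapes rather than dropping to C.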
 The phrase seems appropriate to me. The students are still working with matrices and linear algebra, focusing on the major algorithms. Going into the element-by-element linear algebra algorithms, for example in matrix multiplication, would be more appropriate for a high-performance, numerical computing class. The right level of detail is given when he talks about considering the behavior of individual gradient elements.