
Show HN: Portable matrix multiplication library - webdva
https://github.com/webDva/matrixmul
======
Boxxed
You might consider looking into putting the matrix elements onto the heap with
malloc -- that way you won't limit the matrices to MAX_ELEMENTS and you also
won't waste lots of space for small matrices.

It'll complicate the usage a bit (you'll have to add allocate and free
functions) but it's a good pattern to learn.
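Something like this, roughly (a sketch only; the struct and function names are made up, not from matrixmul):

```c
#include <stdlib.h>

/* Hypothetical heap-backed matrix. Row-major storage; the names here
 * are illustrative, not the library's actual API. */
typedef struct {
    size_t rows, cols;
    double *data;              /* rows * cols elements */
} Matrix;

/* Allocate a rows x cols matrix on the heap; returns NULL on failure. */
Matrix *matrix_alloc(size_t rows, size_t cols) {
    Matrix *m = malloc(sizeof *m);
    if (!m) return NULL;
    m->data = malloc(rows * cols * sizeof *m->data);
    if (!m->data) { free(m); return NULL; }
    m->rows = rows;
    m->cols = cols;
    return m;
}

/* Release a matrix created by matrix_alloc; safe to call with NULL. */
void matrix_free(Matrix *m) {
    if (m) { free(m->data); free(m); }
}
```

Callers then pair every matrix_alloc with a matrix_free, the same way they'd pair malloc with free.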

~~~
webdva
Thank you for the feedback!

However, to dynamically allocate memory would be detrimental to this project
as its purpose is to provide a simple and effective means for performing
matrix operations on embedded systems, that is, microcontrollers. But I'll
clarify that in the documentation thanks to your feedback!

------
programmarchy
There aren’t many tests. Why not just use a BLAS implementation like OpenBLAS
and call gemm?

~~~
webdva
And thank you for the feedback too!

Per your feedback, I should probably plan to add more tests as the project
develops.

Regarding other implementations already existing: one reason this project was
started was for learning and practice. Beyond that, the main benefit I
envision this library providing is being simple and easy to use in projects
involving microcontrollers. I just looked up OpenBLAS, and it doesn't seem
conducive to this project's goals.

------
webdva
Thought I could get some feedback here, that is, anyone is welcome to chime
in.

~~~
ychen306
Seems like performance will suffer without tiling.
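By tiling I mean something like this (an illustrative sketch; the function name and block size are made up):

```c
#include <stddef.h>

/* Blocked/tiled multiply of row-major n x n matrices: C = A * B.
 * Working on BLOCK x BLOCK sub-matrices keeps the operands hot in
 * cache for large n. BLOCK = 32 is an arbitrary illustrative choice. */
#define BLOCK 32

void matmul_tiled(size_t n, const double *A, const double *B, double *C) {
    for (size_t i = 0; i < n * n; i++)
        C[i] = 0.0;
    for (size_t ii = 0; ii < n; ii += BLOCK)
        for (size_t kk = 0; kk < n; kk += BLOCK)
            for (size_t jj = 0; jj < n; jj += BLOCK)
                /* Multiply one BLOCK x BLOCK tile pair, clamped at n. */
                for (size_t i = ii; i < ii + BLOCK && i < n; i++)
                    for (size_t k = kk; k < kk + BLOCK && k < n; k++) {
                        double a = A[i * n + k];
                        for (size_t j = jj; j < jj + BLOCK && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```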

~~~
alfalfasprout
If the target is embedded systems, a naive GEMM is usually faster than tiled
approaches for small matrices (which are probably what's used on these
systems).
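For the tiny matrices typical on a microcontroller, the naive triple loop is often all you need (a sketch; the name is illustrative, not from matrixmul):

```c
#include <stddef.h>

/* Naive row-major multiply: C (m x p) = A (m x n) * B (n x p).
 * No blocking; for small m, n, p everything fits in cache anyway. */
void matmul_naive(size_t m, size_t n, size_t p,
                  const double *A, const double *B, double *C) {
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < p; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += A[i * n + k] * B[k * p + j];
            C[i * p + j] = sum;
        }
}
```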

