
GLMW – WebAssembly Powered Matrix and Vector Library - indescions_2018
https://maierfelix.github.io/glmw/
======
bhouston
Amazing! I'm one of the main contributors to the Three.JS math library and I
am always looking for speed improvements.

Can you help me understand the benchmarks? There is some weirdness in them.
Usually I can make sense of JS benchmarks: the cost turns out to be memory
allocations, deoptimization, function-call overhead, implicit conversions, etc.

Specifically these results:

[https://maierfelix.github.io/glmw/mat4/](https://maierfelix.github.io/glmw/mat4/)

    
    
      mat4.create:
      GLM: 0.3000000142492354ms
      GLMW: 0.2999999560415745ms
    
      mat4.copy:
      GLM: 4.399999976158142ms
      GLMW: 0.5000000237487257ms
    
      mat4.set:
      GLM: 6.999999983236194ms
      GLMW: 6.800000031944364ms
    
      mat4.identity:
      GLM: 2.800000016577542ms
      GLMW: 0.4000000189989805ms
    

Each of the above operations appears to be a series of 16 assignments in
order, and some of them also do an allocation first.
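For context, a gl-matrix-style implementation of two of these operations looks
roughly like the sketch below (illustrative only, not the actual glmw source).
One detail worth noting: create can get away with only four stores, because a
fresh Float32Array is zero-initialized by spec, whereas identity must overwrite
all 16 slots of a reused matrix:

```javascript
// Sketch of gl-matrix-style mat4.create / mat4.identity
// (illustrative, not the actual glmw implementation).

function create() {
  // A new Float32Array is zero-filled by spec, so only the
  // diagonal needs explicit writes: 1 allocation + 4 stores.
  const out = new Float32Array(16);
  out[0] = 1;
  out[5] = 1;
  out[10] = 1;
  out[15] = 1;
  return out;
}

function identity(out) {
  // A reused matrix may hold stale values, so all 16 slots
  // must be written: 0 allocations + 16 stores.
  out[0] = 1;  out[1] = 0;  out[2] = 0;  out[3] = 0;
  out[4] = 0;  out[5] = 1;  out[6] = 0;  out[7] = 0;
  out[8] = 0;  out[9] = 0;  out[10] = 1; out[11] = 0;
  out[12] = 0; out[13] = 0; out[14] = 0; out[15] = 1;
  return out;
}
```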

For these operations, sometimes JS and WA take the same amount of time, and
sometimes WA is way faster. Any idea as to why?

The weirdest result above is that mat4.create is faster than mat4.identity,
even though mat4.create does more work, specifically an allocation. I am
struggling to build a mental model of why that is.

Also why is perspective so slow in WA:

    
    
      mat4.perspective:
      GLM: 4.799999995157123ms
      GLMW: 4.499999980907887ms
    

It is a tiny bit of math, with a tan and then 16 assignments. Is tan that
slow?
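For reference, a gl-matrix-style perspective is roughly the sketch below (the
real glmw code may differ): one call to tan, two divisions, and a handful of
stores, so there is no obvious reason for it to cost ~4.5ms per batch:

```javascript
// Sketch of a gl-matrix-style mat4.perspective
// (illustrative, not the actual glmw implementation).
function perspective(out, fovy, aspect, near, far) {
  const f = 1.0 / Math.tan(fovy / 2); // the only transcendental call
  const nf = 1 / (near - far);
  out.fill(0);
  out[0] = f / aspect;
  out[5] = f;
  out[10] = (far + near) * nf;
  out[11] = -1;
  out[14] = 2 * far * near * nf;
  return out;
}
```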

I sort of think there is something wrong with the benchmarking.

~~~
bhouston
I see that you used a custom benchmarking library. I think that is the problem
and why I cannot make sense of the results. I suggest
[https://benchmarkjs.com/](https://benchmarkjs.com/) It is what we use for
Three.JS benchmarking.
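One common failure mode of hand-rolled JS benchmarks is that the result of the
timed loop is never consumed, so the JIT can dead-code-eliminate the work, and
another is timing cold (unoptimized) code. Benchmark.js handles warm-up,
adaptive iteration counts, and statistical analysis for you. A minimal sketch
of the same discipline in a hand-rolled harness (plain Node; the op being
timed here is just a placeholder):

```javascript
// Minimal microbenchmark harness sketch (plain Node).
// Benchmark.js does all of this more rigorously.
function bench(name, fn, iterations = 1e6) {
  let sink = 0;                    // accumulate results so the JIT
  for (let i = 0; i < 1e4; i++) {  // cannot dead-code-eliminate fn
    sink += fn();                  // warm-up: let the JIT optimize
  }
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    sink += fn();
  }
  const ns = Number(process.hrtime.bigint() - start);
  console.log(`${name}: ${(ns / iterations).toFixed(2)} ns/op`,
              sink !== 0 ? '' : '(sink unused)');
  return ns / iterations;
}

// Placeholder op: write the diagonal of a reused matrix.
const m = new Float32Array(16);
bench('diagonal write', () => { m[0] = 1; m[5] = 1; m[10] = 1; m[15] = 1; return m[0]; });
```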

~~~
beiller
I'm just thinking of ways to patch this into THREE.Matrix4 :) Do you think
that would speed things up significantly? For instance, I have large skeletons
with many bones that can take a while to animate.

~~~
bhouston
Until I see reliable benchmarks and understand why Matrix4 is slow, I do not
know.

I think that the speed difference may be mostly down to avoiding allocations:
in WA the matrices are mostly just views into existing memory rather than new
ArrayBuffers, if I am not mistaken.

I thought everything else should be fast. But I need to see the benchmarks
accurately to know.
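To illustrate the views-vs-allocations point: a WASM-backed library can hand
out Float32Array views over one shared linear memory instead of allocating a
fresh buffer per matrix. A sketch of that pattern (plain JS over a plain
ArrayBuffer standing in for WebAssembly.Memory; glmw's actual layout may
differ):

```javascript
// Sketch: matrices as views into one pre-allocated arena,
// the way a WASM-backed library can avoid per-matrix allocation.
// (Illustrative; glmw's actual memory layout may differ.)
const PAGE = 64 * 1024;
const memory = new ArrayBuffer(PAGE); // stands in for WebAssembly.Memory
let offset = 0;

function allocMat4() {
  // Bump-allocate 16 floats (64 bytes) and return a *view*,
  // not a new backing buffer: no GC pressure per matrix.
  const view = new Float32Array(memory, offset, 16);
  offset += 16 * 4;
  return view;
}

const a = allocMat4();
const b = allocMat4();
// Both views share the same backing store:
console.log(a.buffer === b.buffer); // true
```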

------
johnsonjo
Sorry for piggybacking on this thread; my comment might be too off topic. This
seems interesting, although it’s probably too low level for my current needs
(I don’t have a background in linear algebra or numerical computing). I need
to rewrite some Fortran code into JavaScript for a school project (a neural
network), and the code relies on the BLAS subroutines. I think I’m going to
end up using SciJS [1]. It seems promising, at least feature-wise, but if
anyone has further suggestions or recommendations I’d like to look into them.

[1]: [http://scijs.net/packages/](http://scijs.net/packages/)
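In case it helps while evaluating libraries: the workhorse BLAS operation a
small neural network needs is GEMM (C = A·B), and a naive row-major JS version
is only a few lines. It makes a useful correctness baseline to check a ported
Fortran routine or a library call against (a sketch, not tuned for speed):

```javascript
// Naive row-major GEMM: C[m×n] = A[m×k] * B[k×n].
// A correctness baseline, not a fast implementation.
function gemm(A, B, m, k, n) {
  const C = new Float64Array(m * n);
  for (let i = 0; i < m; i++) {
    for (let p = 0; p < k; p++) {
      const a = A[i * k + p];
      for (let j = 0; j < n; j++) {
        C[i * n + j] += a * B[p * n + j];
      }
    }
  }
  return C;
}

// 2x2 example: [[1,2],[3,4]] * [[5,6],[7,8]] = [[19,22],[43,50]]
const C = gemm([1, 2, 3, 4], [5, 6, 7, 8], 2, 2, 2);
```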

