Additionally, 4 years ago, WebCL was announced and implemented in some browsers, but then it was scrapped in favor of compute shaders in WebGL 2.0. When WebGL 2.0 finally arrived, it did not have compute shaders. Instead, they have been delayed to be implemented as an extension at some indefinite point in the future.
Sorry if this sounds a bit grumpy, but I have been disappointed too often to be enthusiastic about this.
There are also issues like floating point rounding modes in JS and floating point precision in WebGL that have to be addressed. It should not be too hard, but who knows.
Python and R can't use SIMD either (out of the box); in fact, no compiler can use SIMD efficiently, which is why GotoBLAS is written directly in assembly.
Secondly, WebGL 2 has a vast array of new texture types, and Mozilla is implementing the ARB shader extensions. But this has nothing to do with JS per se, because the shader language is not JS (it's more C-like).
This is basically Tensorflow enabled to run in browser and server environments. And it does have GPGPU support via Tensorflow bindings and WebGL. Yes, really.
WebGL without compute shaders is anything but general compute. There have been some attempts at shoehorning matrix multiplication and other functions required for neural networks into WebGL, but the performance is abysmal due to API limitations.
The actual math in propel seems to be based on https://github.com/PAIR-code/deeplearnjs which has a benchmark website https://deeplearnjs.org/demos/benchmarks/ but the results for the GPU implementation are meaningless because they measure CPU execution time instead of GPU execution time. That's like benchmarking a server by measuring how long it takes to send an HTTP request without waiting for the response.
With proper benchmarking I get about 500 milliseconds for a 1024x1024-by-1024x1024 matrix multiplication, or 4 GFLOPS. CUDA can do 500 GFLOPS on my GPU, so the WebGL implementation is 125 times slower.
Something fishy seems to be going on with their CPU benchmark as well, since it is 2500 times slower than Intel MKL on this machine.
Some people are going to make up drama to "sound interesting", but I don't see how that is factual.
The README contains over 8,900 lines with 200+ well-documented standalone code examples comparing JS and R code side by side for every function call.
The results are exactly the same!
Name dropping like web
I am not the R nerd; my friend with the PhD is. I just do the coding around it and help clean up his brainy code (i.e., he can code OK, so he creates the initial R code, and I clean it up so I don't go mad having to work with it). But he explained that even Excel (which I will bet is more accurate than JS) will put out inaccurate results because of number fudging at some level. (Again, beyond my nerd level to grok.)
All of this is too bad, because R syntax is just plain rotten, icky stuff. (But I may be seeing things, with some of the examples looking like awful R syntax... maybe I need sleep.)
Edit: Here's a comment on JS number accuracy, basically what I recalled. (This was a quick search, so take it with a grain of salt.) The issue being how JS decided to "fix" the numbers for me; it felt like the magic semicolons, only not as predictable.
And, anyway, R uses IEEE-754 for 'numeric' too, and will do the same "fudging":
> All R platforms are required to work with values conforming to the IEC 60559 (also known as IEEE 754) standard
There are fair criticisms that JS may not offer the control over rounding modes that R (or the C libs underlying R) provides, or the instructions necessary to minimise error accumulation, but general handwaving about "fudging" of floating point doesn't apply: floating point is unavoidably imprecise, and R and JS even use the same basic format.
Handwaving is all I can do, I am just the tech taking orders.
I just talked with my statistician friend on the phone; here's one issue he just showed me (integer error, not even floating point):
111,111,111 * 111,111,111 = 12,345,678,987,654,321
(I did this by hand to confirm; he made me, the big meany.)
Edit: _All_ numbers in JS are floating point numbers, there are no integers... <hmmm>
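For the curious, the example above can be reproduced in any JS console; this is where the 2^53 safe-integer limit of IEEE 754 doubles bites:

```javascript
// JS numbers are IEEE 754 doubles; integers are exact only up to 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER);                     // 9007199254740991
console.log(111111111 * 111111111);                       // 12345678987654320, off by one
console.log(Number.isSafeInteger(111111111 * 111111111)); // false
```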
Edit 2: (for fun) This is from R:
> bignum <- as.bigz(111111111)
> mul.bigz(bignum, bignum)
Big Integer ('bigz') :
[1] 12345678987654321
Even "the tech" can have some understanding of finite precision integers and floats, and how they behave, given they're building blocks of the trade. It seems somewhat relevant to knowing what changes are valid to code. http://floating-point-gui.de/ is a great place to start if you're interested in learning more. :)
> Edit: _All_ numbers in JS are floating point numbers, there are no integers... <hmmm>
Not having any native integers is unfortunate and annoying, but there are various tricks to ensure numbers are integers (e.g. floats exactly represent integers up to 2^53) plus various tricks to convince the JS engines that things are guaranteed to be integers for optimizations. These are clunky and ugly hacks, but they're possible and even more possible when using JS as a compilation target from a different language (e.g. asm.js in the extreme).
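A sketch of a couple of those tricks (these are coercions that keep values integer-valued, not real integer types):

```javascript
// Bitwise operators truncate their operands to signed 32-bit integers,
// which asm.js-style code uses to signal "this is an int" to the engine.
console.log(2.7 | 0);                   // 2
console.log(Math.imul(100000, 100000)); // 1410065408 (wraps at 32 bits)

// Above 2^53, consecutive integers are no longer representable.
console.log(Number.isSafeInteger(2 ** 53 - 1)); // true
console.log(Number.isSafeInteger(2 ** 53));     // false
console.log(2 ** 53 + 1 === 2 ** 53);           // true
```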
> I guess this is a well known issue
Indeed, it very much is. As you demonstrate, even R doesn't allow doing your example calculation natively; you need to import a library... something that can be done just fine in JS too: for arbitrary-size integers https://github.com/Yaffle/BigInteger and for arbitrary-precision floats https://github.com/charto/bigfloat (among many examples). It's true that these won't approach the performance of GMP, due to lack of access to the specialized instructions it uses, but they offer the same basic functionality.
That said, I agree with the general sentiment that JS is suboptimal for scientific computing, but mostly because of performance (although R's performance falls off very quickly when using anything other than vector operations) rather than inaccurate concerns about precision.
I do have "some understanding". But not enough to process millions of records and notice that one or two output numbers, out of hundreds, aren't correct. That's the PhD's job.
Nor can any of the functions in R's "base" use GMP.
Again, nobody is fudging numbers: not R, Java, C, or JS. Floating point is implemented in hardware, so this argument is moot.
But JS doesn't even _have_ integers...
If you can get over the way assignments are made, I find R's syntax quite nice, especially with the pipe paradigm à la tidyverse that is becoming increasingly common. I personally find that the (from a software engineering perspective) awful code instead comes from many R users having a different background than users of other languages. I guess very few R users write unit tests, for example.
Of course, this is fine in most cases – messy, unengineered code is completely fine if few people will use it.
“According to the ECMAScript standard, there is only one number type: the double-precision 64-bit binary format IEEE 754 value (numbers between -(2^53 - 1) and 2^53 - 1). There is no specific type for integers. In addition to being able to represent floating-point numbers, the number type has three symbolic values: +Infinity, -Infinity, and NaN (not-a-number).”
0.1 + 0.2 === 0.3 // false
Some languages/libraries do offer arbitrary-precision numbers that let you avoid all these pitfalls in exchange for a massive performance hit.
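As an aside, JS engines have since gained native BigInt, which handles the earlier integer example exactly (a sketch; BigInt postdates the libraries mentioned above):

```javascript
console.log(0.1 + 0.2);               // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);       // false

// Native BigInt literals (trailing n) are exact for integers of any size:
console.log(111111111n * 111111111n); // 12345678987654321n
```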
But sometimes he just wants to loop through a list and he goes a bit crazy.
R is fudging numbers?
I don't think so, and neither is JS or Excel, btw.
R shows limited precision to a human; you can control this with options(digits = 22).
The README contains over 8,900 lines with 200+ well-documented standalone examples, with many code samples comparing JS and R side by side for every function call.
You took the effort to throw dirt at JS, but could you not have looked over the "proof" first?
You seem not to know that IEEE floating point is a standard; I suggest you read up on that first.