Oh, wow, author of that library here, thanks for posting this! I originally intended it to be a little fun experiment, but if there's real interest, then I can certainly clean it up and make it usable.
Last night I changed it to stop depending on llc.exe and use the LLVM API instead (I had to compile llvm-3.4... that was not fun); the llc.exe dependency was one of the biggest roadblocks I foresaw for redistribution.
Hey, hope you don't mind that I posted it here. I came across this on The Morning Brew, and thought that a broader audience might appreciate your efforts.
Not sure. It's a quick research project, created by a bored high school student (me). Edit: Although judging by the reaction it's getting, I very well might try to release a stable version and really make it shine.
I haven't looked into Accelerator that much, so I'm not entirely sure how to compare them. Please note, however, that the CudaSharp project was created no more than four days ago, so a direct feature-set comparison might give some skewed results.
A quick skim of that link makes it look like Accelerator is using DirectX for its GPU work. CudaSharp is using CUDA (via LLVM), so there's one major difference. (I plan on eventually porting it to OpenCL as well, but there's not much documentation on an OpenCL LLVM backend; I'm not sure one even exists.)
Right now, not a lot of C# code can actually be compiled to run on the GPU. It's using a custom JIT compiler that I made, and I've only spent about two days working on it so far, so naturally it's nowhere near feature-complete.
Examples of things it doesn't support are exceptions, dynamic memory allocation, and the like. Essentially, you are writing CUDA kernel code, but in C# instead. Whatever CUDA C can't do, you can't do (even if normal C# can); for example, exceptions don't exist in CUDA C, so "throw" (and try/catch) won't ever be supported.
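To give a concrete feel for that restriction, here's a purely hypothetical sketch of the kind of C# that fits the CUDA kernel model (the method name and thread-index parameter are made up for illustration, not CudaSharp's actual API):

```csharp
// Illustrative only: flat, arithmetic-heavy code over arrays is the kind of
// C# that can map onto a CUDA kernel.
static void ScaleKernel(float[] data, float factor, int threadIndex)
{
    // One GPU thread per element: plain array reads/writes and arithmetic
    // translate directly to what CUDA C allows.
    data[threadIndex] = data[threadIndex] * factor;

    // Things that will never work inside a kernel, because CUDA C has no
    // equivalent:
    //   throw new InvalidOperationException();  // no exceptions
    //   var scratch = new float[16];            // no dynamic allocation
}
```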
Actually, on topic to your point: it's being compiled through LLVM, which is a very awesome compiler backend that does a whole bunch of optimizations on the code.
I don't think the C# compiler does much optimization, so the MSIL won't be that optimized (and a literal translation of it wouldn't be either), but you can always do what the CLR does and take advantage of the platform and/or guarantees in the code.
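As a rough illustration (an assumption about typical compiler behavior, not actual CudaSharp output): the C# compiler emits IL that mirrors the source almost literally, and it's LLVM's optimizer that cleans it up.

```csharp
// The C# compiler lowers this essentially literally to IL
// (two multiplications and an add), without folding the arithmetic.
static int FourTimes(int x)
{
    return x * 2 + x * 2;
}

// Roughly the IL you'd get:      What LLVM's optimizer can reduce it to:
//   ldarg.0                        ; x * 2 + x * 2  ==>  x << 2
//   ldc.i4.2
//   mul                            %r = shl i32 %x, 2
//   ldarg.0                        ret i32 %r
//   ldc.i4.2
//   mul
//   add
//   ret
```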
I'd never heard of asm.js before, but a bit of googling makes it look like it works sort of like an LLVM backend.
Theoretically, yes. However, that translation from CIL to LLVM IR isn't the easiest thing in the world. Right now I'm only supporting a subset of the instructions, the ones that roughly correspond to what's available on the GPU, and adding support for the entire C# language is a huge task. There's a reason there's a huge team at MS dedicated to writing the JIT for .NET.
But yes, it would be possible; it would just be a lot of work and probably wouldn't support much of C#.
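For a sense of what the translation involves (generic CIL and LLVM IR here, not CudaSharp's actual output), even a trivial method means bridging a stack machine and SSA form:

```csharp
// The simplest possible case to translate:
static int Add(int a, int b)
{
    return a + b;
}

// Stack-based CIL:        Corresponding SSA-form LLVM IR:
//   ldarg.0                 define i32 @Add(i32 %a, i32 %b) {
//   ldarg.1                   %sum = add i32 %a, %b
//   add                       ret i32 %sum
//   ret                     }
//
// Even here the translator has to model the IL evaluation stack and turn it
// into named SSA values; branches, virtual calls, exception handlers, and
// "newobj" are where it gets genuinely hard, which is why only a GPU-friendly
// subset of instructions is supported.
```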