This is one of my recent experiments: an animated human created using MakeHuman, animated in Blender, and rendered by three.js in less than 200 lines.
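For context, the core of such a setup looks roughly like this (the loader, file name and format here are my assumptions for illustration; the actual experiment may have used different ones):

    // Load a rigged, animated mesh and drive it with an AnimationMixer.
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
    camera.position.set(0, 1.5, 3);
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);

    const clock = new THREE.Clock();
    let mixer;

    new THREE.GLTFLoader().load('human.gltf', (gltf) => {
      scene.add(gltf.scene);
      mixer = new THREE.AnimationMixer(gltf.scene);
      mixer.clipAction(gltf.animations[0]).play();
    });

    (function animate() {
      requestAnimationFrame(animate);
      if (mixer) mixer.update(clock.getDelta());   // advance the skeletal animation
      renderer.render(scene, camera);
    })();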
One thing I don't like about either editor is that they're web-based. While it makes a ton of sense on paper, I hate doing any serious 3D work in a browser window. Something like Microsoft's Language Server Protocol, but for graphical editors, would be amazing: run the project in a browser window while having a bidirectional flow of data between it and a native desktop editor.
Unfortunately, if you want to run something like Three.js inside a native desktop editor, you'd have to embed a web runtime. That really balloons project complexity, so I can see why so many people prefer web-based editors when making web-based projects.
One alternative, at least for 3D applications, is a multi-platform framework that also targets the web. Oryol in particular comes to mind. Hypothetically you could build a native editor around it, with no need to embed a runtime: the editor's viewport would just use native graphics APIs for rendering, and when you like what you see, you compile the same thing for the web. Some edge cases may not make it that easy, but overall it seems like a far superior workflow to dealing with web-based editors or web runtimes embedded in native applications.
Unity 5 and Unreal Engine 4 both have incredible, native desktop editors that support exporting to the web. Unfortunately, they both have massive runtimes that make their web footprints a joke, among other problems.
That said, I agree that calling it neglected is a bit unfair, so I've reworded my original statement to qualify it as "sort of neglected" (far more diplomatic). It also isn't really fair for me to hold it to the same standards as production-grade editors, since I'm pretty sure its primary purpose is simply to give people a sandbox environment to play around in.
You'll be happy to know GitHub rate-limited me around the January 2015 portion of the commit history you linked, since there was so much of it. :)
Does anyone have any real-world Canvas3D stories? I have no experience with it. Potential idiosyncrasies in its implementation relative to the major browsers seem worrisome. Feature lag might also be a concern, especially with bleeding-edge features found behind flags. Not to mention the horror story that is Web Workers.
Granted, there's usually pain whenever you insist on having your cake and eating it too; it's just a matter of where.
I think, if you've ever actually tried to use this feature in these frameworks, you'd know this support is in name only. It's really not a usable solution.
My main beef, though, was that UE4's web rendering path is based on, and artificially limited by, the performance considerations of its mobile path (last I checked, anyway).
We're not too far away from SIMD, Atomics and SharedArrayBuffer, as well as OffscreenCanvas, in the browser (they're all available behind experimental flags today). I can definitely see a newcomer building a web engine from the ground up and beating Unity/UE4 in both performance AND productivity by not carrying their overhead. It's a huge undertaking, but nothing impossible.
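For what it's worth, here's a rough sketch of what the OffscreenCanvas path looks like once it's enabled (file names are placeholders; this is just the general shape of the API, not code from any engine):

    // main.js — hand the canvas off to a worker
    const canvas = document.querySelector('canvas');
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker('render-worker.js');
    worker.postMessage({ canvas: offscreen }, [offscreen]);

    // render-worker.js — the worker owns the GL context and the render loop,
    // so heavy scene work no longer blocks the main thread
    self.onmessage = (e) => {
      const gl = e.data.canvas.getContext('webgl');
      (function frame() {
        gl.clearColor(0, 0, 0, 1);
        gl.clear(gl.COLOR_BUFFER_BIT);
        // ...draw calls go here...
        self.requestAnimationFrame(frame);
      })();
    };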
What I missed most doing WebGL work, however, was an asset pipeline. Open-source engines barely support DXT compression, normalized integers and whatnot. We ended up writing our own (very crude) CLI tool to compress DXT1/5, ETC1 and PVRTC, as well as a KTX parser to load them at runtime. I'll see if I can make them open source - they're still a bit tied to our custom in-house WebGL renderer.
We went with ImageMagick and PVRTexToolCLI driven from a Node.js script. I'd go with GraphicsMagick now that I know about it; ImageMagick doesn't yield good DXT compression quality.
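The wrapper was nothing fancy; the general shape was something like this (flag and format names here are from memory and should be checked against your PVRTexToolCLI version's docs):

    // build-textures.js — compress one source image into a couple of KTX variants
    const { execFileSync } = require('child_process');
    const path = require('path');

    const formats = { etc1: 'ETC1', pvrtc: 'PVRTC1_4' };

    function compress(input, outDir) {
      for (const [suffix, format] of Object.entries(formats)) {
        const out = path.join(outDir, path.basename(input, '.png') + '.' + suffix + '.ktx');
        // -i input, -o output, -f target format, -m generate mipmaps
        execFileSync('PVRTexToolCLI', ['-i', input, '-o', out, '-f', format, '-m']);
      }
    }

    compress('diffuse.png', 'assets/textures');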
We had to write our own polyfill for our implementation, but it beats webvr-polyfill's post-process technique quite easily.
That's how UE4 and Unity3D do barrel distortion as well.
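For the curious, the general idea looks something like this (a simplified sketch, not either engine's actual shader; k1/k2 are lens distortion coefficients):

    // Distort positions radially in NDC space right in the vertex shader,
    // so no separate post-process pass over the framebuffer is needed.
    const distortVS = `
      uniform mat4 uModelViewProjection;
      uniform vec2 uDistortion; // (k1, k2)
      attribute vec3 aPosition;
      void main() {
        vec4 clip = uModelViewProjection * vec4(aPosition, 1.0);
        vec2 ndc = clip.xy / clip.w;              // perspective divide
        float r2 = dot(ndc, ndc);                 // squared distance from screen center
        float s = 1.0 + uDistortion.x * r2 + uDistortion.y * r2 * r2;
        gl_Position = vec4(ndc * s * clip.w, clip.z, clip.w);
      }`;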
Some caches are even implemented with bugs: half the code uses a cache one way and the other half uses it another way. That was fun to dig into.
I tried to optimize Babylon for a few days at work before giving up - my general rule of thumb is that if I'm about to refactor more than 20% of a codebase, it's much, much faster to rewrite it, provided I'm already familiar with the problem space. It took about two weeks to write a proprietary renderer that ran circles around Babylon - though it only supported static meshes.
Babylon was useful for prototyping, but the mobile performance isn't there; even after days of profiling and optimizing every slow path, it was still an order of magnitude slower than an in-house renderer designed for performance from the ground up.
There could be some tradeoffs to those suggestions as well. For example, using unindexed geometry for simple meshes can still be slower if there are many vertex attributes. It's also not uncommon to render tessellated meshes - there's a sweet spot in triangle size for mobile GPUs, at least tile-based ones like PowerVR. With VR barrel distortion applied in the vertex shader during the main pass, you definitely don't want cubes made of only 12 triangles.
Vertex count isn't that important a metric anyway; you can push a few million polygons in a few hundred draw calls to mobile GPUs every frame and still run a smooth 30 FPS. Desktop is an order of magnitude higher (5k draw calls/frame is common). The number of draw calls, the cost of their shaders and how fast the CPU can push them matter much more. There's little difference between 20k and 40k polygon meshes, but there's a huge one between 20 and 40 draw calls. It's creating batches that's costly, not running them.
We also had heuristics to determine an appropriate device pixel ratio without completely disabling the scaling. So for mobile devices with a ratio of 3, instead of tripling the pixel count we'd settle for a ratio in between. Text projected in 3D was just unreadable on iPhone without this, and going all the way to 3x was overkill.
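A minimal sketch of that kind of heuristic (the thresholds here are made up for illustration; ours were tuned per device class):

    // Clamp devicePixelRatio instead of disabling the scaling outright.
    function effectivePixelRatio() {
      const dpr = window.devicePixelRatio || 1;
      if (dpr <= 1) return 1;
      // Settle between 1x and the native ratio, capped at 2x.
      return Math.min(2, 1 + (dpr - 1) * 0.5);
    }

    const canvas = document.querySelector('canvas');
    const ratio = effectivePixelRatio();
    canvas.width = Math.round(canvas.clientWidth * ratio);
    canvas.height = Math.round(canvas.clientHeight * ratio);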
I did call freeze() on materials, but the material/effect caches were trashed quite often and the bind() implementation is very expensive; it does quite a few hash lookups and indirections. A lot of our uniforms had to be updated every frame, so we ended up separating the materials from their parameters and indexing the latter with bitfields. Setting a shader was just looping through a dirty bitfield and doing a minimum of uniform uploads. This also made global parameters quite easy (a binary OR of the material and global parameter bitfields). There were only three arrays of contiguous memory to touch to fully set up a shader (values, locations, descriptors), and they could be reused between materials, so it was very CPU-cache friendly.
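A simplified sketch of the idea (names and types are illustrative, not the actual in-house code):

    // Parameters live in flat parallel arrays; a bitfield marks which ones
    // changed since the last bind, so only those uniforms get re-uploaded.
    function bindParameters(gl, params, globalDirty) {
      // Fold global parameter changes into this material's dirty bits.
      let dirty = (params.dirtyBits | globalDirty) >>> 0;
      for (let i = 0; dirty !== 0; i++, dirty >>>= 1) {
        if ((dirty & 1) === 0) continue;
        const loc = params.locations[i];
        switch (params.types[i]) {
          case 'float': gl.uniform1f(loc, params.values[i]); break;
          case 'vec3':  gl.uniform3fv(loc, params.values[i]); break;
          case 'mat4':  gl.uniformMatrix4fv(loc, false, params.values[i]); break;
        }
      }
      params.dirtyBits = 0;
    }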
Looking at the profiler, most of the lost performance came from the engine, not the scene.
Here's the string concatenation to set the active texture unit. (By the way, the fastest way to do it is gl.TEXTURE0 + channel instead of building a string to look up the proper constant.)
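Concretely, the two patterns look like this (a small illustration, not Babylon's actual code):

    // Slow: builds a new string every call, then does a property lookup.
    gl.activeTexture(gl["TEXTURE" + channel]);

    // Cheap: the texture unit enums are contiguous integers,
    // so just add the channel index to gl.TEXTURE0.
    gl.activeTexture(gl.TEXTURE0 + channel);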
As for the broken cache, I think it was Engine._activeTexturesCache; sometimes it's indexed by texture channel, other times by GL enum values (this makes the cache array explode to ~30k elements and causes cache misses in half the code paths).
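Roughly speaking, the inconsistency looks like this (an illustration of the described behavior, not the actual Babylon source):

    // One path indexes by channel, another by the GL enum value
    // (gl.TEXTURE0 is 0x84C0 = 33984), so the same unit ends up tracked
    // under two different keys and the array balloons to ~34k slots.
    cache[channel] = texture;                // 0, 1, 2, ...
    cache[gl.TEXTURE0 + channel] = texture;  // 33984, 33985, ...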
From what I remember, lots of caches are needlessly trashed many times per frame.
There's also noticeable overhead from all of those "private static <constant> = value;" members with public getters.
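Something along these lines (an illustrative example of the pattern, with a hypothetical constant name):

    // Every access goes through a getter call in hot per-frame code...
    class Engine {
      static get SOME_CONSTANT() { return 0; }
    }
    // ...versus a plain value that reads as a simple constant.
    const SOME_CONSTANT = 0;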
It could have been fixed since then, as well.
Would be cool to render the scenes out! Has anyone (successfully) run Cycles through Emscripten? :)
I don't use web editors for 3D, so I have no idea; just curious.
PlayCanvas is a game engine (in the style of Unity/Unreal).
To put it another way: you build your 3D assets in Clara and import them into PlayCanvas to add interactivity.
GitHub doesn't seem to say?