Hacker News: Three.js editor (threejs.org)
362 points by danboarder on Jan 2, 2017 | 53 comments



Three.js is a lot of fun to play around with, though I tend only to use its web editor to test my content for errors.

It's really simple to use Three.js with only vanilla JavaScript.

This is one of my recent experiments[1]: an animated human created with MakeHuman[2], animated in Blender, and rendered by three.js in under 200 lines.

[1] http://codepen.io/satori99/pen/Xjbvbr/?editors=0010#0

[2] http://www.makehuman.org/


Amazing. I am flabbergasted that these resources exist. Great demo, BTW.


While I love Three.js, its editor has been around for almost as long as the project itself and is sort of neglected by comparison. WebGL Studio[0] has a far more impressive editor, but its underlying engine is custom, so in most cases it isn't anywhere near as useful as Three.js due to the latter's ecosystem.

One thing I don't like about either editor is that they're web-based. While it makes a ton of sense on paper, I hate doing any serious 3D work in a browser window. Something like Microsoft's Language Server Protocol[1] but for graphical editors would be amazing. Run the project in a browser window while having bidirectional flow of data between a native desktop editor and the browser window.

Unfortunately if you want to run something like Three.js inside of a native desktop editor, you'd have to embed a web runtime. That really balloons project complexity, so I can see why so many people prefer web-based editors when making web-based projects.

One alternative, at least for 3D applications, is multi-platform frameworks that also work on the web. Oryol[2] in particular comes to mind. Hypothetically you could build a native editor around it, with no need to embed a runtime. The native editor's viewport would just use native graphics APIs for rendering. Then when you like what you see, just compile the same thing for the web. While some edge cases may not make it that easy, overall it seems to be a far superior workflow to dealing with web-based editors or embedding web runtimes within native applications.

Unity 5 and Unreal Engine 4 both have incredible, native desktop editors that support exporting to the web. Unfortunately, they both have massive runtimes that make their web footprints a joke, among other problems.

[0] https://webglstudio.org/

[1] https://github.com/Microsoft/language-server-protocol

[2] https://github.com/floooh/oryol


What makes you say it's been neglected? I scrolled back through GitHub's history page for the editor/ directory until I got tired (~3 years), and it looks like a steady, active stream of development: https://github.com/mrdoob/three.js/commits/dev/editor/js


Its commit history represents a small fraction of overall commits on the project. While the editor can surely be considered under active development by normal standards, a Mr. Doob project raises the bar on that. It's not uncommon to see him average over a dozen commits a day on a regular basis for the entire three.js repo, in addition to managing and merging a ton of pull requests at the same time.

That said, I agree that calling it neglected is a bit unfair, so I've reworded my original statement to qualify it as "sort of neglected" (far more diplomatic). Moreover, it isn't really fair for me to hold it to the same standards as production-grade editors, since I'm pretty sure its primary purpose is simply to give people a sandbox environment to play around in.

You'll be happy to know GitHub rate-limited me around January 2015 in the commit history you linked, since there was so much of it. :)


Considering that Three.js seems to have several rendering backends, it should be possible to make a desktop-based Three.js editor using Qt (which has its own JavaScript implementation) and a Qt-specific backend.


I believe Qt Canvas3D[0] is what you're suggesting. If I recall, one of the default project templates for it used three.js right out of the box.

Does anyone have any real-world Canvas3D stories? I've no experience with it. Potential idiosyncrasies in its implementation relative to major browsers seem worrisome. Feature lag also might be a concern, especially with bleeding-edge features found behind flags. Not to mention the horror story that is Web Workers.

Granted, there's usually pain whenever you insist on having your cake and eating it too; it's just a matter of where.

[0] https://blog.qt.io/blog/2015/05/27/introducing-qt-canvas3d/


>> ...that support exporting to the web.

I think, if you've ever actually tried to use this feature in these frameworks, you'd know this support is in name only. It's really not a usable solution.


I have actually, and it was terrible. Even floated crazy ideas like packaging the ~50MB engine runtime into a browser extension.

My main beef, though, was that UE4's web rendering path is based on, and artificially limited by, the performance considerations of its mobile path (last I checked, anyway).


If you want scene setup, polygon editing, and animation creation, try https://Clara.io; it exports to the three.js and FBX formats as well. Clara.io is similar to Blender and Maya in terms of its features.


Thanks for that link. Clara.io looks great after a quick look; I'll be using it from now on!


Try clicking one of the examples in the menu at the top, and then hit 'Play'. Love it! It is certainly great for playing around with WebGL, but lots of work is still necessary if it wants to catch up to Unity or CopperCube for creating complex WebGL games or scenes.


That's mostly because of the architecture of Unity3D being both efficient and productive, and UE4 even more so. They're radically different architectures from what three/babylon have. And the fact that they both compile through Emscripten is proof that AAA engines can be built for the web.

We're not too far away from SIMD, Atomics, and SharedArrayBuffer, as well as OffscreenCanvas, in the browser (they're all available behind experimental flags today). I can definitely see a newcomer building a web engine from the ground up and beating Unity/UE4 in performance AND productivity by not having their overhead. It's a huge undertaking, but nothing impossible.

What I missed the most doing WebGL work, however, was asset pipelines. Open-source engines barely support DXT compression, normalized integers, and the like. We ended up writing our own (very crude) CLI tool to compress DXT1/5, ETC1, and PVRTC, as well as a KTX parser to load them at runtime. I'll see if I can open-source them; they're still a bit tied to our custom in-house WebGL renderer.


It's worth checking out PlayCanvas for the asset pipeline. Texture compression is supported directly in the editor. https://blog.playcanvas.com/webgl-texture-compression-made-e...


That's good to know!

We went with ImageMagick and PVRTexToolCLI driven from a node.js script. I'd go with GraphicsMagick now that I know about it; IM doesn't yield good DXT compression quality.



We tried Babylon for a mobile VR experience; needless to say, the performance is mediocre at best. It runs well on PC because it can afford to waste 95% of the computer's power and still run smoothly for small scenes.


PlayCanvas has a renderer that has an optimized path for stereo rendering - achieves great perf on mobile. https://blog.playcanvas.com/webvr-support-in-playcanvas/


We had one as well. We're rendering VR with barrel distortion applied in the main vertex shader rather than as a post-process. This saves an expensive framebuffer switch (some mobile GPUs take as long as half a millisecond to switch render targets) at the cost of a slightly heavier vertex shader. It also requires geometry to be tessellated for the distortion effect to work.
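The distortion itself is just a polynomial scaling of each vertex's position by its squared distance from the lens center. A rough JavaScript sketch of the math (the k1/k2 coefficient values are illustrative guesses, not from any real headset profile; in the approach described here this runs in GLSL inside the main vertex shader):

```javascript
// Polynomial barrel distortion applied per-vertex, on coordinates
// relative to the lens center. Coefficients k1/k2 are illustrative.
function barrelDistort(x, y, k1 = 0.22, k2 = 0.24) {
  const r2 = x * x + y * y;                 // squared distance from center
  const scale = 1 + k1 * r2 + k2 * r2 * r2; // radial scaling factor
  return [x * scale, y * scale];
}
```

Because the scaling is nonlinear, straight edges only curve if there are vertices along them, which is why the geometry needs to be tessellated for the effect to look right.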

We had to write our own polyfill for our implementation, but it beats the post-process technique of webvr-polyfill quite easily.

That's how UE4 and Unity3D do barrel distortion as well.


It does take a little time to learn how to get the best performance out of BJS or any WebGL app. The community is very helpful with that.


I looked through the source code and saw string concatenation to set the active texture unit, among many other inefficiencies. Everything does isReady() checks multiple times per frame (rather than caching the result once it's ready). The material parameters are indexed in the worst possible way. The stats-gathering code is still active in release builds. The list goes on and on :)

Some caches are even implemented with bugs. Half the code uses a cache one way and the other half a different way. That was fun to dig into.

I tried to optimize Babylon for a few days at work before giving up; my general rule of thumb is that if I'm about to refactor more than 20% of a codebase, it's much, much faster to rewrite it when I'm already familiar with the problem space. It took about two weeks to write a proprietary renderer that ran circles around Babylon, though it only supported static meshes.

Babylon was useful for prototyping, but the mobile performance isn't there; even after days of profiling and optimizing every slow path, it was still an order of magnitude slower than an in-house renderer designed for performance from the ground up.



That's to optimize the scene, not the engine itself :)

There could be some tradeoffs to those suggestions as well. For example, using unindexed geometry for simple meshes can still be slower if there are many vertex attributes. It's also not uncommon to render tessellated meshes; there's a sweet spot in triangle size for mobile GPUs, at least tile-based ones like PowerVR. With VR barrel distortion applied in the vertex shader during the main pass, you definitely don't want cubes made out of only 12 triangles.

Vertex count isn't that important a metric anyway; you can push a few million polygons in a few hundred draw calls to mobile GPUs every frame and still run at a smooth 30 FPS. Desktop is an order of magnitude higher (5k draw calls/frame is common). The number of draw calls, the cost of their shaders, and how fast the CPU can push them are much more important. There's little difference between 20k- and 40k-polygon meshes, but there's a huge one between 20 and 40 draw calls. It's creating batches that's costly, not running them.

We also had heuristics to determine an appropriate device pixel ratio without completely disabling the scaling. So for mobile devices with a ratio of 3, instead of tripling the pixel count we'd settle for a ratio in between. Text projected in 3D was just unreadable on an iPhone without this, and going all the way to 3x was overkill.
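A hedged sketch of what such a heuristic might look like (the cap of 2 is an assumed tuning value, not from the comment above): clamp `window.devicePixelRatio` before handing it to the renderer.

```javascript
// Clamp the device pixel ratio: keep low-DPI devices as-is, but cap
// high-DPI devices so a DPR-3 phone renders at 2x instead of 3x.
function effectiveDevicePixelRatio(deviceRatio, maxRatio = 2) {
  if (deviceRatio <= 1) return 1; // never upscale low-DPI displays
  return Math.min(deviceRatio, maxRatio);
}

// With three.js this could be applied as, e.g.:
//   renderer.setPixelRatio(effectiveDevicePixelRatio(window.devicePixelRatio));
```

This keeps projected text readable (unlike forcing DPR 1) without paying for the full 9x pixel count a ratio of 3 implies over a ratio of 1.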

I did call freeze() on materials, but the material/effect caches were trashed quite often and the bind() implementation is very expensive; it does quite a few hash lookups and indirections. A lot of our uniforms had to be updated every frame, so we ended up separating the materials from their parameters and indexing the latter with bitfields. Setting a shader was just a matter of looping through a dirty bitfield and doing a minimum of uniform uploads. This also made global parameters quite easy (a binary OR on material/global parameter bitfields). There were only three arrays of contiguous memory to touch to fully set up a shader (values, locations, descriptors), and they could be reused between materials, so it was very CPU-cache friendly.
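A minimal sketch of that dirty-bitfield scheme (all names here are illustrative, not from the actual in-house renderer): one bit per uniform slot, set on write, drained on bind, cleared with a single assignment.

```javascript
// One bit per uniform slot; flush() uploads only the marked slots.
class UniformBlock {
  constructor(count) {
    this.values = new Float32Array(count); // contiguous value storage
    this.dirty = 0;                        // bitfield: 1 bit per slot
  }
  set(slot, value) {
    this.values[slot] = value;
    this.dirty |= 1 << slot;               // mark slot for upload
  }
  // `upload(slot, value)` stands in for the actual gl.uniform* call.
  flush(upload) {
    let bits = this.dirty;
    for (let slot = 0; bits !== 0; slot++, bits >>>= 1) {
      if (bits & 1) upload(slot, this.values[slot]);
    }
    this.dirty = 0;                        // clear in one assignment
  }
}
```

Merging global parameters into a material is then just `block.dirty |= globalBits` before the flush, which matches the binary-OR trick described above.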

Looking at the profiler most of the lost performance came from the engine, not the scene.


For isReady: you can set material.checkReadyOnlyOnce


What version of BJS are you referring to?


Latest.

Here's the string concatenation used to set the active texture unit. (By the way, the fastest way to do it is "gl.TEXTURE0 + channel" rather than building a string to look up the corresponding constant.)

https://github.com/BabylonJS/Babylon.js/blob/master/src/baby...
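The arithmetic works because the WebGL spec defines the TEXTUREi enums as consecutive integers starting at TEXTURE0 (0x84C0). A small sketch contrasting the two approaches, using the raw constant value so it runs without a GL context:

```javascript
const GL_TEXTURE0 = 0x84c0; // value of gl.TEXTURE0 in the WebGL spec

// Fast path: texture-unit enums are sequential, so plain arithmetic works.
function textureUnit(channel) {
  return GL_TEXTURE0 + channel; // e.g. channel 2 -> gl.TEXTURE2 (0x84c2)
}

// The pattern being replaced builds a property name on every call:
function textureUnitViaString(gl, channel) {
  return gl["TEXTURE" + channel]; // allocates a string, then a lookup
}
```

Both return the same enum; the first just skips the per-call string allocation and property lookup.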

As for the broken cache, I think it was Engine._activeTexturesCache; sometimes it's indexed by texture channel, other times by GL enum values (this makes the cache array explode to 30k elements and causes cache misses in half the code paths).

From what I remember, lots of caches are needlessly trashed many times per frame.

There's also noticeable overhead to all of those "private static <constant> = value;" declarations with public getters.


Just pushed an update to remove the string concat. I found no evidence of a broken cache, as all references to _activeTexturesCache use the texture channel index.


You won't see it in the code. Run it through the debugger; the value of "channel" is sometimes the value of the GL enum rather than the index of the texture unit.

It could've been fixed since as well.


Very cool. Here's a similar idea but instead it uses a node graph editor: http://idflood.github.io/ThreeNodes.js/index_optimized.html#...


Some nice examples you can use with this editor: http://mrdoob.neocities.org


I like that it supports the standard Maya keybindings by default.

Would be cool to render the scenes out! Has anyone (successfully) run Cycles through Emscripten? :)



Wow, TurboScript looks interesting. I suppose the bigger story here is in-browser 3D editors as the test bed for the browser taking over CPU-heavy work in the future...


One thing I am not able to find anywhere on the whole threejs.org website: what is three.js?


The github project page has a bit more info than the threejs.org website.

It is a relatively small JavaScript library for creating and rendering 3D scene graphs in a browser. It enables you to code while thinking of scenes and the objects within them, rather than GPU buffers and what to put in them.

https://github.com/mrdoob/three.js/


It is a JavaScript library to work with WebGL in the browser :)


A-Frame (https://aframe.io), a WebVR framework for three.js, also has an editor that works like a DOM Inspector. You just hit a shortcut on any A-Frame scene on the Web, and it'll inject an Inspector. Also integrated with A-Frame's version of the Unity Asset Store for components.


Unbelievably cool that this all works so nicely in my browser.


If you like this: there are even 3D CAD tools that work in the browser. For example: https://www.onshape.com/


I'm amazed how well it works on my tablet.


Is there an example to play around with?


Yep, hit the Examples menu.


While this is fun to play around with, and definitely helpful when using three.js directly, for a full-blown editor on the web there is nothing that beats PlayCanvas at the moment.


And a handy link for anyone who wants to check out PlayCanvas: https://playcanvas.com


How does that compare with https://clara.io? I thought that was made by one of the core contributors of three.js (bhouston).

I don't use web editors for 3d so I have no idea, just curious.


Clara.io is a modelling, animation, and rendering tool (in the style of Maya/Max/Blender).

PlayCanvas is a game engine (in the style of Unity/Unreal).

To put it another way: you build your 3D assets in Clara and import them into PlayCanvas to add interactivity.


What's the license of this?

GitHub doesn't seem to say.



Yup! MIT ^^


The menus appear behind the page on mobile.


Trying to edit scripts doesn't work for me.


Too many tutorials and examples rely on Three.js and similar frameworks, instead of using vanilla-JS together with WebGL.


I remember taking a quick look at this when I first experimented with THREE.js. I didn't get the hang of it, though, and ended up just using Blender as the main editor for my game (levels, models, and similar assets).




