Global Illumination in WebGL (playcanv.as)
211 points by mrmaxm 15 days ago | 78 comments

I'm surprised I never see playcanvas on HN. It's great technology that should be better known IMO.

I first came across it working on a project where I needed to load and animate a number of complex .fbx character models. Spent loads of time trying to do this with three.js (in 2017) + all manner of converters to different formats. I tried every supported loader and they were all broken in various ways.

Then somehow I found playcanvas and had the models loaded and working flawlessly in a very short period of time. That experience extended to basically everything else I tried with the system—it's well polished.

The editor UI is very nice, and yet remains lightweight in a way where I still feel more like I'm working with something like three.js than e.g. Unity. I personally still prefer three.js for most projects I work on, but it would be a very good option for certain projects and I always keep it in mind. It's also open source: https://github.com/playcanvas

The downside is that on the free tier you get their logo on your app, and you can't hide the source (anyone can fork your project). On the upside, the pricing was something like $15/month last time I checked.

For a very simple example, here's an asteroids game I spent a few days making with it: https://playcanvas.com/project/479850/overview/rocks (you can play or jump into the editor and modify from that link.)

In WebGL frameworks, PlayCanvas is like Windows where Three is Linux. You can do the same things in both, and it's a bit easier to get started in PlayCanvas, but the cost and freedom of Three attract lots of developers.

I agree to some extent, though I think PlayCanvas is more analogous to OS X than it is to Windows—partly because of the priority given to design and high standards for quality and partly because if you need to get lower level you can still do something like unix terminal access as with OS X.

But the analogy also falls apart in certain significant ways, e.g. PlayCanvas being open source and not actually tied to their servers or editor or anything (but don't get me wrong, it's not as easy to add into a standard project structure as `npm i three` either).

> but the cost and freedom of Three attracts lots of developers

There's definitely some truth to that, but I'd bet it's a very small fraction of three.js users who are even aware of PlayCanvas' existence, and that that lack of knowledge probably affects adoption ratios significantly.

Actually, it now _is_ that easy. :) PlayCanvas is now officially published as an NPM package:


Also, with PlayCanvas one has to pay for the extra features, which scares off another set of developers.

The engine and tools features are identical for free and paid users; games of the same complexity can be developed by both. It's mostly direct support and quality-of-life features, related to larger teams and project management, that are behind the paid tier. But like any other business, PlayCanvas has to feed its families too.

I fully agree with you.

Ahhh, your game reminds me of Maelstrom -- even though the game used prerendered sprites, those sprites were rendered from 3D models, and I spent so many hours on it...



Oh man. I spent so many hours playing Maelstrom on my LC 475. Through your wiki link I discovered that Ambrosia open sourced the game - there are even OSX binaries. https://www.libsdl.org/projects/Maelstrom/binary.html

Nice! That looks like fun. I'd really like to do a more complete asteroids remake and put proper time/thought into the design + aesthetic—it's one of my favorite games strictly for gameplay.

I actually did another version 15 years ago, too—incidentally it was my first ever OpenGL app "JAVAsteroids": http://symbolflux.com/statichtml/oldprojects/javasteroids.ht...

Did you know about Asteroids? https://en.wikipedia.org/wiki/Asteroids_(video_game)

Probably the true pioneer of the genre, although I'm not 100% sure.

I've been wanting to do some 3D game experimentation for personal projects, mostly with ThreeJS, but I might just give playcanvas a shot - it looks like it gives you the right amount of tooling. I wanted to use Godot, but I like just being able to edit things in a browser and debug with dev tools and such.


Neat, don't have time to dive into the source right now, but what's the trick? Light probes? Edit: Ah, never mind, it's one click away:

"Illumination technique used here is:

Realtime direct shadowmapping with time-blended lightmaps for global illumination on: walls, ceiling and floor.

For furniture we use spherical harmonics L2 with spatial and time blending for ambient light.

Reflections made with time blended box projected cubemaps for image based lighting on physically based materials.

Post processing done with color grading by LUT (lookup tables) and vignette."
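For readers wondering what "spherical harmonics L2" means in practice: it's a 9-coefficient-per-channel encoding of low-frequency ambient light that can be evaluated cheaply per surface normal. A minimal sketch of that evaluation (illustrative only, not the demo's actual code; `evalSH2` and the `sh` layout are assumptions):

```javascript
// Evaluate an order-2 (L2, 9-coefficient) real spherical-harmonics ambient
// term for a unit surface normal n = [x, y, z].
// `sh` is assumed to be 9 RGB coefficients, e.g. sh[0] = [r, g, b].
function evalSH2(sh, n) {
  const [x, y, z] = n;
  // Standard real SH basis functions up to band 2, constants baked in.
  const basis = [
    0.282095,                       // Y(0,0)
    0.488603 * y,                   // Y(1,-1)
    0.488603 * z,                   // Y(1,0)
    0.488603 * x,                   // Y(1,1)
    1.092548 * x * y,               // Y(2,-2)
    1.092548 * y * z,               // Y(2,-1)
    0.315392 * (3 * z * z - 1),     // Y(2,0)
    1.092548 * x * z,               // Y(2,1)
    0.546274 * (x * x - y * y),     // Y(2,2)
  ];
  const rgb = [0, 0, 0];
  for (let i = 0; i < 9; i++)
    for (let c = 0; c < 3; c++) rgb[c] += sh[i][c] * basis[i];
  return rgb;
}
```

In an engine this would typically run in the shader; the "spatial and time blending" the demo describes would interpolate between sets of `sh` coefficients before evaluation.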

20 years ago this kind of stuff took minutes per frame on state-of-the-art SGI workstations. Now I get > 25 fps in my phone browser. Wow.

Very cool seeing PlayCanvas on HN! I've been using[1] their framework for several years and I have to say it's the best out there. Only babylon.js comes close among 3D game engines for the browser, but PlayCanvas has the Editor[2] which makes a world of difference.

[1] https://redka.games [2] https://developer.playcanvas.com/en/user-manual/designer/

babylon.js has an editor. http://editor.babylonjs.com/

That's true but the editor for babylon.js is very simple. There's a big difference between the two. Apart from the much better UI the Editor for PlayCanvas is well featured with things like uploading and auto-conversion of 3D models and textures, cloud save, light mapping, sprite editor, comprehensive engine configuration, texture compression and much more.

I was just researching babylon and playcanvas for a hobby project I've got in mind - any suggestions on why one should choose one over the other? For info, I'm a very experienced developer, but I've never really done 3D before.

Has anyone done video compositing in WebGL?

I'm very interested in moving video editing tools to the cloud where the heavy rendering can be offloaded and the browser can do lightweight work and scaled down approximations.

As a side effect you'd be able to use such an editor on Linux, which while not a central motivator, is compelling.

I'm working on a web-based movie sequencer with special effects as a side project right now, and the development of this thing is absolutely doomed. There are tons of problems on this path (codec/browser compatibility, browser bugs, slow performance, Firefox unable to seek to a desired time fast enough during render, etc.)

This has been done but holds very few advantages. Nuke and DaVinci Resolve already run on Linux. In either case, local bandwidth and multiple cores are the biggest factors. Transferring images back and forth to a server is a waste of time when it takes longer than the processing time.

Is the explanation text correct that direct shadowmapping is used only on walls, ceiling and floor? The shadows on furniture also look like they would require shadowmapping.

Shadowmapping is used on everything. But GI lightmapping only on walls, ceiling and floor.

The color palette makes it look like a video game.

Is this WebGL 2? It does not render in Safari.

PlayCanvas uses WebGL 2 by default and falls back to WebGL 1 when it's not available. So in Safari, the renderer will use WebGL 1. Just tried the link on a recent MacBook Pro and an iPhone X and XS and it works fine. What is your OS and Safari version?
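The fallback described above can be sketched like this (illustrative only, not PlayCanvas' actual implementation; `createBestContext` is a hypothetical helper):

```javascript
// Try WebGL context names in order of preference. A browser that doesn't
// support (or has disabled) WebGL 2 returns null for 'webgl2', so we fall
// through to WebGL 1 names.
function createBestContext(canvas) {
  const names = ['webgl2', 'webgl', 'experimental-webgl'];
  for (const name of names) {
    const gl = canvas.getContext(name);
    if (gl) return { name, gl };
  }
  return null; // no WebGL support at all
}
```

So a Safari with the experimental (and at the time buggy) WebGL 2.0 flag enabled would get the `'webgl2'` context and then fail to render, whereas with the flag off it would silently drop down to `'webgl'` and work, which matches the behavior described in the sibling comments.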

I had that problem, but I'd turned on the experimental WebGL 2.0 support. Turned it off and it ran fine.

Ah! I had that turned on. Thanks! What is the default for these experiments? I can see me turning that one on, but a lot of others are enabled that don’t make sense to me.

Works on my Safari with WebGL 2 off.

This runs fine in Safari 13.0?

I'm trying to imagine seeing this 10 years ago and my mind gets blown.

Awesome use of PlayCanvas. Nice job!

Crashed my Firefox tab. :(

Adam Weishaupt strikes again!

Amazing. Would be really cool if this got integrated with Three.js, which last I checked doesn't really have a global illumination story.

You can make a demo like this in Three.js already. The difference is how much labour it takes.

None of the existing WebGL engines offers real GI.

Honestly, Unity WebGL GI looks way better and probably is also easier to integrate into your mesh modelling/texturing workflow.

edit: plus unity is basically free to use

PlayCanvas works great in the mobile browser all the way back to iPhone 4S. AFAIK, Unity still doesn't officially support mobile WebGL builds. Try making this in Unity and running it in the browser on a low-end to average Android/iOS device.

Please provide a link to a Unity WebGL GI example. Thanks

edit: PlayCanvas has a free tier.

Plus the engine codebase is open source (MIT): https://github.com/playcanvas/engine :)

Yes it has a free tier: for public projects.

Please try making a similar demo in Unity and publishing it to WebGL with mobile support. Currently Unity is not able to do this.

Yep, I don't know why anyone would bother to write their own engine instead, unless it's out of pure curiosity.

This rubs me the wrong way. If people enjoy writing graphics engines, even if it's not economically rational to do so, then they should do that!

This is an especially pointless complaint when it comes to video games, because, from a purely economic standpoint, few programmers capable of writing production-quality global illumination should be working on games at all as opposed to getting a job doing something else. (This is a hard pill to swallow, but it is nevertheless true.) The market, especially the indie market, is oversaturated, and, as a result, most people who are highly skilled and involved in game development are, to varying degrees, in it for fun as opposed to money. Since game developers are doing it for fun, why not encourage them to do what they enjoy most?

That's a good point for any dev thinking they can write their own engine... since one is either bad at it and shouldn't do it, or good at it and should perhaps just focus on the engine.

If you want to ship a game, don't write your own engine. End of story.

I don't agree with that statement, but even if it were true, it's off topic for this HN submission. The point of this demo isn't "use this instead of Unity if you want to ship a game". It's "look at this neat thing I made". I prefer to encourage that instead of being gratuitously negative.

It is not off topic at all. Don't reinvent the wheel if you want fast, meaningful results. DRY. You should know better. Otherwise it's only useful for curiosity's sake, as stated.

Save yourself the pain and use Unity instead if your goal is to ship something.

What if I want to ship a game engine?

Then you better be a 10x engine developer to be able to beat what is already out there.

This is a common mistake people make. If you make a custom engine you do not need to beat what is already out there since you are certainly not going to use every single feature that the engines out there use. Bespoke engines (that are actually used in games) aren't trying to replace Unreal or Unity, they are only providing what the games they are used for need - anything else is unnecessary. Even in the high end AAA space, most bespoke engines do not provide everything that Unreal does - they focus only on the specifics their development teams need.

This is especially important to keep in mind when it comes to smaller (be it indie or "AA") developers - the developers who write their own engines aren't trying to replace Unreal, these engines only support a tiny tiny fraction of the functionality that Unreal does and that is fine because that functionality is what these developers need. If anything, for a smaller developer that does not have the necessary developer manpower to mold an existing gargantuan engine like Unreal to their needs it can be a better choice to go with their own engine than try to understand and modify Unreal.

Spoken like a true armchair game developer.

Writing your own engine is never your most important problem as an indie developer nowadays. It's not important at all.

If you have lost sight of that, then you've already failed.

I never claimed that writing your own engine is the most important problem an indie developer would have. If there is a "most important problem" that would be getting exposure for your game when gamers are drowned by multimillion-funded advertisements for multimillion-funded games (and more often than not, negative news about those multimillion-funded games - visit a gaming forum like /r/games or similar and you will see a lot of posts, videos and comments of how bad games are nowadays, how developers are screwing everyone, etc while everyone making those ignores all the smaller developers who are not doing that, mainly because they are simply invisible to most people).

My comment was about responses that compare custom engines with middleware engines-as-products like Unreal and Unity, and that imply a custom engine has to provide at least as much as Unreal and Unity, which is certainly not the case.

And my comment wasn't even about indies; this is the case with non-indie studios too - even AAA ones. In fact the previous AAA game I worked on used a custom engine that had zero networking support (outside of some debugging stuff), because the games the studio built were single-player titles and thus didn't need such a feature. This is something that an engine-as-a-product like Unreal or Unity cannot do: they have to care about multiplayer support even if many of the games they'll be used for are single-player only, because some of their customers will also need to create multiplayer games.

That's silly. Why bother shipping your first game? It's never gonna be 10x better than the games that already exist in that genre.

90% of indie developers discover that the hard way.

This is poor advice because plenty of games of all sizes are shipped using custom engines.

It comes about because there are lots of people who say they want to make a game but are far happier eternally twiddling their tech. Moving to Unity/Unreal/PlayCanvas etc. wouldn't dampen that impulse, just push it in different directions. It's a bit of an empty-page problem, where it's much easier to obsess about having the right starting conditions than it is to actually just start. Particularly as these people tend towards being more technically capable without a lot of design experience.

It's both important to pick the right tools for the job and also to not get lost in the weeds with that decision, and that's not just a gamedev thing either (though the design work in gamedev makes it really show). I see it a lot in web with pretty much every piece of tech, and the more inconsequential to the functioning of the final deliverable, the more twiddling goes on. Fortunately those twiddlers do tend to really care about the actual technical quality of the project if you can get them actually working on it.

Game engine implementations have different tradeoffs, and generalized game engines are optimizing for the lowest common denominator. Niche use cases often require custom engines. For example flight simulators need good terrain level of detail streaming, and may need to use special techniques to work around loss of floating point detail when rendering at a global scale. I don't know of any game engine that supports spherical height maps.

On the other hand Kerbal Space program is implemented in Unity, so maybe even off the shelf engines can handle that.

Unity is incredibly versatile. As a single dev, it is highly unlikely that you'll be able to implement a use case that it doesn't support. It is far more likely that you are reinventing the wheel out of ignorance.

Can it do a demo like the one in this topic? Load fast? Work fast? Even on mobile?

Every tool, engine, and framework has its strong and weak sides. That is why we have many options to choose from.

Does it still take 10 minutes to load a "project"?

One good reason is that the Unity -> WebAssembly pipeline isn't well suited for the web. Firstly, Unity is built as a heavyweight monolithic solution, increasing load times; additionally, it's not meant for seamless integration into web pages.

Indeed. But even more conceptually, the web is a very different platform. We don't download Facebook's whole database of friends with their timelines to browse Facebook, so why should we have to download all of a game project's data when we only need the relevant parts? Unity has to provide a much better solution for streaming content in order to compete with existing high-end WebGL projects, like the Polaris vehicle builder.

And its engine is way too monolithic indeed. WASM still feels like a big hack.

Epic recently admitted that the experiments with WebGL and Unreal were fun, but they are not misleading customers into thinking it's a path they are going to pursue. They clearly stated that a WebGL target for AAA engines is not the way, and they are very right. Unity doesn't admit that. They know how big the web platform is, but they have to do something major to really get into the web, as currently they only mislead customers with a promise. Commercially, Unity WebGL is not an option 98% of the time.

Huh? Unity integrates seamlessly into webpages just as well any other WASM target does.

It works, it's just kinda big for a lot of things. If someone's making good use of the engine's extensive features then that's not a problem, but it's a bit too heavy for a lot of simpler (previously flash) games. To be fair though, flash was insanely cumbersome and dreadful to deal with, and people still used it, so I'm sure they'd put up with WASM Unity even for those sorts of cases.

This shows again that the RTX feature on NVidia cards is completely pointless. With some clever hacks you get the same visual fidelity with traditional OpenGL ES and with one order of magnitude better performance.

Only because the scene is static though. If you wanted good reflections of interactive models on a volumetric blob with a shiny material (eg the T1000 from Terminator 2) in real time you're going to need ray-tracing.

It is not static: the light source moves, which is equivalent to the models moving. They explain that they update light probes; they could do the same if the models moved.

I think the limitation here is the number of lights. They use shadowmaps for direct shadows. You can update shadowmaps interactively for a couple of light sources, but if you have dozens of lights, as is often the case with architectural scenes, shadow map updating can become a bottleneck.

We precalculated all probes and lightmaps (25) in advance, thanks to the defined light trajectory. This clever hack tricks the user into believing it's the real deal - that was the desired goal of the demo. Doing true GI in WebGL won't be a commercially viable option yet.
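A rough sketch of what blending between precalculated light states could look like on the CPU side (hypothetical, assuming evenly spaced bakes along the light trajectory; the shader would then `mix()` the two selected lightmap/probe sets by the returned weight):

```javascript
// Given a normalized position t in [0, 1] along the light's trajectory and
// the number of baked states (25 in the demo), pick the two neighboring
// baked states and the linear blend weight between them.
function lightStateBlend(t, stateCount) {
  const x = t * (stateCount - 1);
  const a = Math.min(Math.floor(x), stateCount - 2); // first baked state
  return { a, b: a + 1, weight: x - a };             // mix(texA, texB, weight)
}
```

This is why the approach only works with a fixed trajectory: the bakes must cover every light position the app will ever show.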

The Terminator 2 movie is a prime example of using environment maps, which have been supported by OpenGL since day one. There is zero raytracing used in this movie. This shows again that from a psychovisual viewpoint, 100% accurate reflections are utterly pointless.

Hardware manufacturers should focus on things that really matter which is texture resolution and geometric detail.

T2 was rendering static environments, not interactive ones in real time. And they used something like 600 SGI computers and months of rendering time to do it. To get that level of graphics in an interactive environment you can't fake it. You need ray-tracing or something similar.

As for texture resolution and geometric detail... improvements there are great as well. It's not a "one or the other" choice.

Not that any of this matters with regard to Playcanvas because there's no way to access the RTX's ray-tracing pipeline from WebGL.

I feel like you don't know what you are talking about. Do you know what a cubemap is, how to generate one, and how to use it to render reflections?
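For context on the technique being referenced: a cubemap reflection just samples a (typically prefiltered) environment cube with the view direction reflected about the surface normal - in GLSL, `texture(cube, reflect(viewDir, normal))`. A tiny JS sketch of the reflect math, for illustration only:

```javascript
// r = d - 2*(d·n)*n, with n assumed to be a unit-length surface normal.
// This is the same formula as GLSL's built-in reflect(I, N).
function reflect(d, n) {
  const dot = d[0] * n[0] + d[1] * n[1] + d[2] * n[2];
  return [
    d[0] - 2 * dot * n[0],
    d[1] - 2 * dot * n[1],
    d[2] - 2 * dot * n[2],
  ];
}
```

No ray is traced: the lookup pretends the environment is infinitely far away, which is exactly why it's cheap and why it breaks down for nearby or self-reflecting geometry.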

It wasn't ray-traced. It was done with RenderMan, which at the time didn't support raytracing.


It's only pointless to you. It showcases the card's general compute capability. Just because they use it to raytrace doesn't mean you can't use it for physics/mining/ML etc.

This particular demo is only possible with a fixed light trajectory. Basically, there are 25 light states here, with blending between them. RTX tech is next level - it would allow taking this hacky static approach fully realtime and dynamic.

RTX is extremely useful as a developer using some of these "clever hacks" in a rasterized application; e.g. baking light maps in Substance Painter/Designer is 100x faster than previously.

This is a tiny demo, not something continuously pushing 4HD soft real time at 90 FPS.

Clever hacks are not a replacement for the real deal.
