Learning how to write a 3D engine from scratch in C#, TypeScript or JavaScript (2013) (microsoft.com)
251 points by adamnemecek on Mar 6, 2016 | 36 comments

Doing 3D rendering in software is incredibly satisfying, it's one of those things that can make you feel like a REAL computer programmer (compiler implementation has a similar effect, I believe).

I briefly scanned through these articles and did not see any mention of polygon clipping. A note to the inexperienced: it took me some years (I am dense at times) to realize that one of the primary uses for polygon clipping is to clip polygons against the view frustum, i.e. after they have been transformed by the projection but before they are mapped to the screen. This is useful for drawing only the part of a possibly very large polygon that is visible. As a consequence of clipping, a triangle can become a polygon with five or more sides, so being able to draw/rasterize arbitrary polygons, not just triangles, is desirable. You'll often read about rasterizing arbitrary-sided polygons when studying software 3D, and this is why.
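To make the clipping point concrete, here's a minimal Sutherland–Hodgman step in TypeScript (one of the article's languages). It clips a convex polygon against a single plane; running a triangle through several frustum planes this way is exactly what produces the extra-sided polygons described above. All names here are my own illustrative choices, not from the article:

```typescript
// Sutherland–Hodgman clipping of a convex polygon against one plane.
// The plane is (n, d), with "inside" defined by dot(n, p) + d >= 0.

type Vec3 = { x: number; y: number; z: number };

function dot(n: Vec3, p: Vec3): number {
  return n.x * p.x + n.y * p.y + n.z * p.z;
}

// Linear interpolation between two vertices at parameter t in [0, 1].
function lerp(a: Vec3, b: Vec3, t: number): Vec3 {
  return {
    x: a.x + (b.x - a.x) * t,
    y: a.y + (b.y - a.y) * t,
    z: a.z + (b.z - a.z) * t,
  };
}

function clipAgainstPlane(poly: Vec3[], n: Vec3, d: number): Vec3[] {
  const out: Vec3[] = [];
  for (let i = 0; i < poly.length; i++) {
    const a = poly[i];
    const b = poly[(i + 1) % poly.length];
    const da = dot(n, a) + d; // signed distance of a from the plane
    const db = dot(n, b) + d; // signed distance of b from the plane
    if (da >= 0) out.push(a);          // a is inside: keep it
    if ((da >= 0) !== (db >= 0)) {     // edge crosses the plane:
      out.push(lerp(a, b, da / (da - db))); // add the intersection point
    }
  }
  return out;
}
```

Clipping a triangle with one vertex behind the plane yields a quad (four vertices); each additional frustum plane can add another side, which is why a general polygon rasterizer pays off.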

Also, I always like to link this tour de force by Charles Bloom when software 3D comes up; it's a great read:


In modern graphics programming there are really two layers to writing a 3D engine: low-level and high-level. Funnily enough, low-level rendering code does not mean the same thing today as it did 20 years ago.

I totally agree that knowing how triangle/fragment clipping works at the lowest level is a valuable skill, but for the vast majority of people it's a level of complexity way above their need. From a practical perspective (i.e. you want to make a game, a simulation, or realtime 3D rendering of any kind), you should not be writing a software rasterizer. As an exercise in learning, go for it, but if you want both practical and marketable skills, you need to learn a graphics API like OpenGL, Direct3D or even Metal. These APIs mask a lot of the implementation details, but in return you get a fast, reliable and consistent API.

At the high level, you find ways to use these APIs intelligently to optimize for modern graphics cards. That in itself is a MASSIVE challenge and not to be underestimated. This is where smart use of API calls becomes an excellent engine.

For myself, I started backwards: I learned OpenGL, then Direct3D, then finally built a software rasterizer and raytracer. To me that was a great way to learn, because I first understood the high-level concepts (what textures are, what a mesh is) before learning the low-level ones (what barycentric coordinates are, how Bresenham's line algorithm works, how you do perspective correction).
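For anyone who hasn't met barycentric coordinates yet, a rough sketch of the edge-function math that underlies both triangle fill and perspective-correct interpolation might help. This is a generic illustration in TypeScript, not code from the article; the names are mine:

```typescript
// Edge functions: twice the signed area of triangle (a, b, c),
// positive when the points wind counter-clockwise.

type P2 = { x: number; y: number };

function edge(a: P2, b: P2, c: P2): number {
  return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Barycentric coordinates of p with respect to triangle (a, b, c),
// or null if p lies outside. The three weights sum to 1 and are the
// interpolation factors a rasterizer uses for colors, UVs, depth, etc.
function barycentric(
  a: P2, b: P2, c: P2, p: P2
): [number, number, number] | null {
  const area = edge(a, b, c);
  const w0 = edge(b, c, p) / area; // weight of vertex a
  const w1 = edge(c, a, p) / area; // weight of vertex b
  const w2 = edge(a, b, p) / area; // weight of vertex c
  return w0 >= 0 && w1 >= 0 && w2 >= 0 ? [w0, w1, w2] : null;
}
```

A scanline or tile rasterizer evaluates exactly this test per pixel; perspective correction then amounts to interpolating attributes divided by w and renormalizing.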

Surprisingly, there are still areas where SW renderers are the only way, like in underpowered navigation devices without OpenGL support or running on exotic embedded operating systems.

There's even a third layer -- engines like UE4 or Unity or libraries like SceneKit. There are so many steps to setting up shaders and lights that it's handy to have something that can intelligently handle all the moving parts -- like juggling the limited per-pixel lights with vertex lights, for example.

How can a third party commercial 3D engine be a layer to writing your own 3D engine?

I imagine maybe by not providing all the pieces one might need in certain types of rendering algorithms.

Just a guess.

Excellent post, thank you.

I second this. When I wrote my first frustum cull method it opened up so many other possibilities. After clipping, I would say look into writing a quadtree (2D), followed by an octree (3D). Even though quadtrees are confined to a single plane, they are quite useful even in 3D games.
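For anyone curious what "look into writing a quadtree" amounts to, here is a bare-bones point quadtree sketch in TypeScript. The structure, capacity constant and names are all my own illustrative choices; the payoff is in query(), which skips whole subtrees that don't intersect the search region, the same culling idea as the frustum test:

```typescript
const CAPACITY = 4; // max points per node before it splits (arbitrary)

interface Rect { x: number; y: number; w: number; h: number } // x,y = min corner

function contains(r: Rect, px: number, py: number): boolean {
  return px >= r.x && px < r.x + r.w && py >= r.y && py < r.y + r.h;
}

function intersects(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

class Quadtree {
  points: { x: number; y: number }[] = [];
  children: Quadtree[] | null = null;
  constructor(public bounds: Rect) {}

  insert(x: number, y: number): boolean {
    if (!contains(this.bounds, x, y)) return false;
    if (this.children === null && this.points.length < CAPACITY) {
      this.points.push({ x, y });
      return true;
    }
    if (this.children === null) this.subdivide();
    return this.children!.some(c => c.insert(x, y));
  }

  private subdivide(): void {
    const { x, y, w, h } = this.bounds;
    const hw = w / 2, hh = h / 2;
    this.children = [
      new Quadtree({ x,         y,         w: hw, h: hh }),
      new Quadtree({ x: x + hw, y,         w: hw, h: hh }),
      new Quadtree({ x,         y: y + hh, w: hw, h: hh }),
      new Quadtree({ x: x + hw, y: y + hh, w: hw, h: hh }),
    ];
    // Push existing points down into the new children.
    for (const p of this.points) this.children.some(c => c.insert(p.x, p.y));
    this.points = [];
  }

  query(range: Rect, found: { x: number; y: number }[] = []): { x: number; y: number }[] {
    if (!intersects(this.bounds, range)) return found; // cull the whole subtree
    for (const p of this.points) if (contains(range, p.x, p.y)) found.push(p);
    if (this.children) for (const c of this.children) c.query(range, found);
    return found;
  }
}
```

An octree is the same idea with eight children and a z extent per node.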

I can recommend the Game Programming Gems series. I don't know what number they are up to now; I had the first three, and they were a huge asset in learning all these techniques and more.

Another really awesome talk to watch about some aspects of this is David Braben's talk on making the original Elite in 32K of RAM.

I think this is the link but it doesn't work on mobile: http://www.gdcvault.com/play/1014628/Classic-Game-Postmortem

It's also nice to check out CUDA if you have an Nvidia card. You can run parts of your code very easily on the GPU, gaining a lot of speed.

If I'm a "normal" web programmer, how much of a step up is CUDA programming going to be? I've got some geospatial functions (like point-in-polygon) that might benefit from being run on the GPU.

http://sysweb.cs.toronto.edu/publication_files/0000/0247/icc... is a good example of the kinds of things I'm looking at but I only barely grasp the concepts in the paper itself.
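For reference, the serial logic behind a point-in-polygon test is the classic even-odd ray-casting loop below (a generic sketch in TypeScript, not taken from the paper). On the GPU you would typically map one thread to each query point and run this same loop per thread:

```typescript
// Even-odd rule: cast a horizontal ray from (px, py) to the right and
// count how many polygon edges it crosses; an odd count means "inside".

function pointInPolygon(px: number, py: number, poly: [number, number][]): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const [xi, yi] = poly[i];
    const [xj, yj] = poly[j];
    // Edge (j -> i) straddles the ray's y, and the crossing is to the right of px?
    if ((yi > py) !== (yj > py) &&
        px < ((xj - xi) * (py - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}
```

The per-point independence is exactly what makes this kind of workload a good GPU candidate: no thread ever needs another thread's result.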

I think there's a decent MOOC on Udacity, or at least there was at some point. Linear algebra helps a bit, but you can probably pick up most of what you need to get started.

As far as I know, CUDA is only available as a C binding.

CUDA can be programmed in C, C++, Fortran, Haskell, .NET, Java and any other language with a backend that can emit PTX, which is why researchers have always favoured it over OpenCL.

OpenCL now has SPIR

I know; it was a response to those developers wanting to stay away from OpenCL C code, or who were using translators that would generate OpenCL C from their languages of choice.

Let's see if they are still in time to turn the tide.

I noticed an interesting optical illusion with the rotating cube vertices example: if you interpret it as rotating one way, it’s an ordinary cube—the back face appears smaller because it’s farther away; but if you interpret it as rotating the other way, you see a shape that’s constantly stretching and deforming.

This leads me to wonder what a game would look like if all perspective were so inverted, so that the farther away an object is, the larger it appears.

Reverse perspective rendering: https://vimeo.com/12518619

Can't even imagine what it would look like in VR.

The apparent size of an object changing with distance is not just some random thing that you can change... it's a consequence of geometry. The eye or camera is a point, but the projection plane is a plane, so a ray from the camera through two adjacent points on the plane will keep diverging on the other side of the projection plane, and hit objects that are further and further apart. Another way to think of it is that there's just more stuff far away than close, so all that far away stuff has to look smaller to fit.

If you want to change that, you can't have a camera that is a point. That's how parallel projection works; the camera is basically a plane of the same size as the picture plane, so the rays never diverge. What you're suggesting is basically that the camera would be larger than the projection plane. That would cause the rays to converge in a single point some distance away from the camera. You might then just as well think of that point as the camera, and do what pnp wrote, just reverse the z buffer.
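One crude way to see the difference numerically is to compare an ordinary pinhole projection (divide by depth, so distant points shrink) with a toy "reversed" one (multiply by depth, so they grow). This is only an illustrative sketch of the idea discussed above, with arbitrary constants and made-up names:

```typescript
type Pt = { x: number; y: number; z: number }; // camera looks down +z, z > 0

const FOCAL = 1; // focal length, arbitrary for illustration

// Standard perspective: apparent size falls off as 1/z.
function projectNormal(p: Pt): { x: number; y: number } {
  return { x: (FOCAL * p.x) / p.z, y: (FOCAL * p.y) / p.z };
}

// Toy reverse perspective: apparent size grows with z.
function projectReversed(p: Pt): { x: number; y: number } {
  return { x: FOCAL * p.x * p.z, y: FOCAL * p.y * p.z };
}
```

With the normal projection, a point at z = 2 lands at half the screen offset of the same point at z = 1; with the reversed one it lands at twice the offset, which is the "farther means larger" effect the parent comments describe.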

I haven't seen that. I have seen a camera rotating around a castle spire with the Z-buffer culling backwards. It looked strange, as the castle seemed to be rotating the wrong way. I recall the skybox was rendered properly, adding to the illusion.

Edit: I realize what I saw is a textured version of what you are describing.

That's a renderer, not an engine. An engine is a lot more than just putting some triangles on screen.

This is like learning how to write a ray tracer, or learning to write a compiler, etc. etc.

In other words, it's elucidating to do, but unless you're an industry veteran or subject matter expert, you should almost never do it with the intent of actually using the result.

Can anyone recommend some advanced tutorials for making a 3D engine with OpenGL? I'm looking for something beyond rendering simple polygons with gouraud shading. How do you handle loading large worlds while maintaining a high frame rate? How do you make scenes more realistic with shadows and ambient occlusion?

This is a very, very deep rabbit hole, the kind people make PhDs and careers out of digging into. Actually, it's several holes.

Loading large worlds while maintaining a high frame rate? It's one deep problem for Skyrim-style worlds. Another for shooters. Yet another for Minecraft. Shadows are yet another deep problem, and so is ambient occlusion, and any number of lighting/rendering techniques.

In other words, you're not really in tutorial territory anymore. You're in article/paper territory.

As someone who's dug these holes before: there are tutorials even for the advanced stuff. Usually, once a hacker figures out some paper, he shares it with others in a less academicky way, and then it's much easier to pick up. I still remember Casey Muratori's GJK video from high school: I'd found some paper using math notation for sums and throwing abstract terms around, then I saw this guy's video, it clicked, and I had a collision detection demo running in two days.

The academic approach is necessary for research work, but when you're just implementing stuff, it's already been grokked by other people and you can find their blogs/notes/code online.

What you say is very true. Maybe I should have said "intermediate" tutorials. Even though the problem area is deep and complex, there have been astonishing engines produced by very small teams, such as:

* http://procworld.blogspot.com/ which is a voxel-based rendering engine mostly developed by one person

* http://the-witness.net/ which is a game with a custom engine developed by a small team and features many realistic effects

I have found the book "Game Engine Architecture" a really interesting read, and a seemingly good introduction to the required concepts: http://www.amazon.co.uk/dp/1466560010

This is something I've wondered about for years. It truly feels like a black art. I look forward to reading this.

This is very nice! For once, we should try to write everything that's happening in our pipeline ourselves!

Does anyone know of any accompanying articles, but for raytracing, or other methods of rendering? It'd be nice to have a suite of examples of different methods.

Not too long ago on here, there was a submission for Peter Shirley's Ray Tracing in One Weekend[1], which is a pretty decent quick book on ray tracing. It even looks like he's working on a second iteration, covering some more advanced stuff.

[1] http://in1weekend.blogspot.com/2016/01/ray-tracing-in-one-we...

Nice post. I'm trying to read up on OpenGL ES to prep for a VR world. Having a hard time at it. I'm a back-end dev, so maybe it's not my strong suit.

I've been thinking of doing something similar. What frameworks and such have you considered using? That is, what tools do you think would be helpful?

Unity is high level and really easy to code in. But it is not OpenGL... I have been watching courses on computer graphics to try and figure it out, but it's tough going, as I said.
