I think one of the reasons there continues to be so much excitement about this project is that it clearly does not look like any other game on the market. It's only more exciting now that the renderer is fluid and the perspective is no longer limited, demonstrating that the results are not an impractical hack.
Thanks! There was a time when graphics coding was more creative (people get quite creative with what we have, but we can only take it so far). John Carmack was writing Doom's engine, Ken Silverman the Build engine, etc. Everything looked and felt different. The standardization of rendering hardware and APIs has become a blessing and a curse. On the bright side, it saves us from reinventing the wheel, but on the downside, there is some art in reinventing the wheel.

When I launched Doom for the first time, it was basically Wolfenstein in terms of gameplay. But it was visually beyond anything I had seen prior, and the visuals created an entirely different experience on their own (and I used to be a large proponent of the "graphics don't matter" camp).

It is incredibly punishing (from a business standpoint) to be in the engine business - even if you are one of the big guys - but I feel like somebody has to add a little diversity to the arena. Using these newest techniques is very refreshing to me because it allows for all kinds of tricks that simply are not possible in your traditional modern polygon pipeline.
I agree completely. But as someone very interested in game dev, I am also terrified by the idea of spending 3-4 years on just the tech. I wish there was a good middle ground.
As someone who is writing a real-time 3D engine, let me tell you the experience is incredibly rewarding. My math background is weak, but the practical applications of 3D math that you will learn are invaluable. If you're interested in learning how a pixel is created on the screen from some arbitrary model in a lighting environment, nothing can really replace writing an engine. Give it a shot! http://web.archive.org/web/20150311211412/http://www.arcsynt...
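If that sounds abstract: the very last step of that "model to pixel" journey fits in a few lines. Here's a minimal sketch in Python of a pinhole perspective projection (the function name and defaults are mine, not from the linked tutorial):

```python
import math

def perspective_project(point, fov_deg=60.0, width=640, height=480):
    """Project a camera-space point (camera looks down -z) onto
    pixel coordinates using a simple pinhole model."""
    x, y, z = point
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)   # focal scale from FOV
    aspect = width / height
    # Perspective divide: normalized device coordinates in [-1, 1]
    ndx = (f / aspect) * x / -z
    ndy = f * y / -z
    # Map NDC to pixel coordinates (origin at top-left, y pointing down)
    px = (ndx + 1.0) * 0.5 * width
    py = (1.0 - ndy) * 0.5 * height
    return px, py

# A point straight ahead of the camera lands at the screen centre:
print(perspective_project((0.0, 0.0, -5.0)))  # -> (320.0, 240.0)
```

A real engine wraps this in matrices (model, view, projection) and adds clipping and depth, but the divide-by-z at the heart of it is exactly this.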
Yes - it is relatively difficult to make anything that stands out, and venturing into engines is like making two games (or more) in the span of time allotted for one game.
There are a lot of undiscovered techniques. However, triangle rasterization is currently supported by widespread consumer ASICs with competing, but compatible libraries. It hits a "sweet spot" of performance and results given current hardware. Current predictions are that path tracing will get more common, since the computation cost won't be prohibitive, and path tracing can handle shadows and reflections more easily AND more accurately. SDFs are currently the structure of choice here, but I'm sure we'll see a lot of improvement.
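For anyone new to SDFs: the function returns the distance to the nearest surface, and that value doubles as a safe step size along a ray, which is what sphere tracing exploits. A toy sketch in Python (names and constants are mine, just for illustration):

```python
def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside,
    zero on the surface, positive outside."""
    dx = p[0] - center[0]
    dy = p[1] - center[1]
    dz = p[2] - center[2]
    return (dx * dx + dy * dy + dz * dz) ** 0.5 - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """March along a (normalized) ray. The SDF value at the current
    point is a guaranteed-safe step, so we advance by it until we are
    within eps of a surface or give up."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss

# A ray from the origin down +z hits the unit sphere at z=5 at t = 4:
t = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
```

The appeal for path tracing is that the same query answers "how far to the nearest surface in any direction", which makes shadow and reflection rays no harder than camera rays.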
For an example of the kind of improvement, think about how depth buffers are implemented. They're not just arrays of depth values; they're hierarchical data structures that allow entire tiles of fragments to be tested at once. The same kind of optimizations for 3D SDFs might get hardware support in the future, but who knows?
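To make that concrete, here's a toy version of such a hierarchy in Python: a max-depth mip pyramid over the depth buffer, so one comparison can reject a whole tile. This assumes the common convention that smaller depth means closer; all names are illustrative, not any real API:

```python
def build_hiz(depth):
    """Build a hierarchical-Z pyramid over a square, power-of-two
    depth buffer. Level 0 is the full buffer; each coarser level
    stores the *farthest* (max) depth of every 2x2 block."""
    levels = [depth]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        levels.append([[max(prev[2 * i][2 * j],     prev[2 * i][2 * j + 1],
                            prev[2 * i + 1][2 * j], prev[2 * i + 1][2 * j + 1])
                        for j in range(n)] for i in range(n)])
    return levels

def tile_occluded(levels, level, i, j, z_near):
    """If the nearest depth of incoming geometry is still farther than
    the farthest depth already stored in the tile, every pixel in the
    tile would fail the depth test: reject it with one comparison."""
    return z_near > levels[level][i][j]

depth = [[0.1, 0.2],
         [0.3, 0.4]]
levels = build_hiz(depth)   # levels[1][0][0] == 0.4
```

The analogous structure for SDFs would let a ray skip whole empty regions with one coarse lookup instead of many fine steps.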
As far as polygon stuff goes, your hands are more or less tied - there are only so many tricks you can use. I am finding a lot more flexibility with SDFs, not just in terms of performance tricks but in rendering in ways that are difficult with polygons, like warping the direction of rays to produce different transformations or deformations in objects.
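One common way to get that effect is to warp the sample point (the domain) before evaluating the SDF, rather than the object itself. A minimal Python sketch of a twist around the y axis, using a box SDF for illustration (all names are mine, not from the project being discussed):

```python
import math

def box_sdf(p, half=(1.0, 1.0, 0.25)):
    """Signed distance to an axis-aligned box with the given half-extents."""
    q = [abs(p[i]) - half[i] for i in range(3)]
    outside = math.sqrt(sum(max(v, 0.0) ** 2 for v in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)
    return outside + inside

def twisted(sdf, k):
    """Return a field whose domain is twisted around the y axis by k
    radians per unit of height: instead of bending the object, we
    rotate the sample point before asking the original SDF."""
    def warped(p):
        x, y, z = p
        a = k * y
        c, s = math.cos(a), math.sin(a)
        return sdf((c * x - s * z, y, s * x + c * z))
    return warped

flat  = twisted(box_sdf, 0.0)          # identical to the plain box
twist = twisted(box_sdf, math.pi / 2)  # quarter turn per unit height
```

One caveat: the warped field is no longer a true distance (the twist stretches space), so a raymarcher has to take conservative steps, e.g. by scaling each step down by a bound on the distortion.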
Games are hacks through and through, generally speaking. No technique works well in all respects; there are always tradeoffs. Even if something only works well for isometric views, that doesn't mean it's not good. Just sayin'.