Zdog – Pseudo-3D JavaScript engine for Canvas and SVG (zzz.dog)
686 points by tomek_zemla on May 29, 2019 | 94 comments



Super-crude STL file reader I whipped up in 10 minutes: https://codepen.io/dheera/pen/zQJBrx


Nice stuff, man!


So cool. I looked at a few projects and was turned off by how much code was needed to render each result. However, when I looked closer I realized that most of the code was simply to define the shapes in the illustration, which is nice, because it's not very logic-heavy and means these scenes could be whipped up relatively quickly without confusing the hell out of me. Nice! To the author: you said you wanted to make it for a video game; I would love to hear if that idea is going anywhere, because I could imagine something like this as a game engine being super useful.


Remembering Dogz and digging through the Wikipedia rabbit hole, I just learned that PF Magic [1], the developer of Dogz/Catz, was also responsible for the SNES 3D fighting game Ballz [2], and its programmer later went on to develop the open-ended experimental adventure game Facade [3]. I knew of the existence of all of them but didn't know they were all connected!

[1]: https://en.wikipedia.org/wiki/PF_Magic

[2]: https://en.wikipedia.org/wiki/Ballz

[3]: https://en.wikipedia.org/wiki/Fa%C3%A7ade_(video_game)


Why wasn't the last one named Facadez?


I notice that there are some issues with depth sorting (which I guess would be expected). It'd be complicated, but one way you could do the same effect without that issue is to use signed distance fields in a shader:

https://www.iquilezles.org/www/articles/distfunctions/distfu...

(Demo at the bottom of the page)
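
In case it helps anyone picture it, the SDF idea is just plain math. Here's a minimal sketch of two primitives and iq's smooth union, written in JavaScript for readability (a real shader would express the same functions in GLSL):

  // Signed distance functions: negative inside the shape, positive outside.
  function sdSphere(p, r) {
    return Math.hypot(p.x, p.y, p.z) - r;
  }

  function sdBox(p, b) {
    const qx = Math.abs(p.x) - b.x;
    const qy = Math.abs(p.y) - b.y;
    const qz = Math.abs(p.z) - b.z;
    return Math.hypot(Math.max(qx, 0), Math.max(qy, 0), Math.max(qz, 0)) +
           Math.min(Math.max(qx, Math.max(qy, qz)), 0);
  }

  // Smooth union: blends two shapes where they meet, instead of the
  // hard pop you get from whole-shape depth sorting.
  function smoothUnion(d1, d2, k) {
    const h = Math.min(Math.max(0.5 + 0.5 * (d2 - d1) / k, 0), 1);
    return d2 * (1 - h) + d1 * h - k * h * (1 - h);
  }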


It is addressed in https://zzz.dog/extras#z-fighting as a feature:

> Z-fighting in Zdog is the natural result of its pseudo-3D engine. Zdog’s 3D shapes are actually 2D projections, rendered without any accounting for collisions with other shapes.

> Rather than fighting against this effect, the best course is to accept it. Z-fighting is one of Zdog’s charms. Embrace it.


This is such a glorious project. I wonder if intersecting shapes could be detected and automatically decomposed into a set of non-intersecting shapes that render separately...


Yeah, although personally I don't want to embrace z-fighting :-) It's a subjective opinion, though.


I wonder if some sort of progressive dither or fade is practical to implement and fast enough (and visually useful)? If I wasn't at work right now I would have a look at the code.

I have some experience with Canvas rendering, but, like this, just in straightforward layers. I don't think a dither would be practical; a fade might be, using a mask and setting the alpha, but you'd still have to detect all the overlap.

I'd say the answer to my question is a 'nope'.


Yeah, if possible, I really think replacing the pop with a fade would help a ton.


Yeah, you could do it in a shader, but in SVG not so much.


If you've got a shader engine you could render real 3D. This is a very different approach.


Well, the advantage of this approach is you get vector graphics, which you don't with a standard mesh-based approach to 3D. That's why I was mentioning the signed distance field thing -- you can get the same style without z-fighting, plus you can do some interesting shape blending and stuff, which allows for some additional possibilities.


A signed distance field approach is fundamentally incompatible with this library and its goals.

Zdog is a vector drawing library, it's resolution independent. When you render to SVG, you can later rasterize that SVG at any resolution. This lets you do things like print at large sizes or high DPI.

Ray marching is a raster (pixel based / resolution dependent) way to render.


> Zdog is a vector drawing library, it's resolution independent. When you render to SVG, you can later rasterize that SVG at any resolution. This lets you do things like print at large sizes or high DPI.

Signed distance fields and ray marching are completely resolution independent... The only limit is your canvas size. You can emulate the exact same effects, plus some additional ones since you can blend the shapes.

They aren't incompatible, but the trade-off is in efficiency. Ray marching is more expensive usually, but also more expressive and extensible. If you look at https://www.shadertoy.com/ , a large set of those effects use signed distance fields exactly because by using mathematical functions you can have vector-like infinite precision.

I guess you're right in that you can't, say, put it in a document and print it. But considering that the advantage of this is that it's 3D, if you wanted this for print you'd probably just draw something in inkscape or illustrator anyway.


> Signed distance fields and ray marching are completely resolution independent

That’s not what I meant. I expect you knew that. The output of ray marching is a resolution-dependent raster. The output of Zdog is a vector format image.

Ray marching doesn’t make sense in Zdog, because the output SVG is rasterized later, on demand, by your browser or printer, not by Zdog. You can’t do that with ray-marching, and that’s why these two ways of rendering are incompatible. It’s not just efficiency, putting ray marching in Zdog makes no sense.


That's kind of a weirdly accusatory tone, dude. If the output is the screen, which it probably would be 99.9% of the time since this is a JavaScript library creating a 3D object in a browser, then the distinction is entirely academic. Just set the canvas size to whatever you want. After all, the output of displaying an SVG on a screen is also a resolution-dependent raster, as is the output of plotting ink on a piece of paper.

Also I wasn't like saying this was a useless library, just that if z-fighting was a concern there's an alternative using GPU shaders that can get you a very similar effect. Geez.


I’m sorry, I didn’t mean to accuse you of anything, my intention was to give you the benefit of the doubt that you understand all the issues here; you sound like you know what you’re talking about with SVG & ray marching.

This distinction is very much practical and not academic. SVG exists for a reason. I use (and write) SVG renderers in practice precisely for their resolution-independent, rasterize-on-demand functionality.

> Just set the canvas size to whatever you want.

This is missing the point that an SVG output can rasterize to multiple different resolutions, when it’s viewed, where a ray-marching output is baked at one size.

> If the output is the screen, which it probably would be 99.9% of the time

I would counter that most SVG on the web today is served as files, and a major use case of Zdog would be creating those files, not necessarily including Zdog in the JS and rendering everything from scratch at display time. Rendering to files allows Zdog output to work on all browsers that support SVG, which from the comments appears to be more browsers than Zdog can run on.

I was thinking specifically of the case where I render files in Zdog and save them for display & print later. It’s restrictive to assume that you’re rendering & rasterizing at the same time, one of the benefits of rendering to SVG is being able to separate rendering from rasterizing.

That part of the discussion doesn’t change the fact that Zdog renders to SVG and ray marching renders to pixels, so they aren’t compatible rendering techniques unless you change Zdog to only rasterize its output and disable SVG.


Ray marching isn't baked at all, it's a webgl canvas. It's a real time effect. If you resize it the resolution changes and you re-render. The shapes that are used as a source for the rendering are all mathematical objects that can be combined and used at any scale, just like paths and curves in an SVG. The only time it would be "baked" is if you're exporting it out of the browser.

> This distinction is very much practical and not academic. SVG exists for a reason. I use (and write) SVG renderers in practice precisely for their resolution-independent, rasterize-on-demand functionality.

I wrote an SVG renderer: https://github.com/fathat/glsvg

> I was thinking specifically of the case where I render files in Zdog and save them for display & print later.

If you lose the interactive 3D component, Zdog doesn't really give you anything over just drawing it in a vector art program. Actually it gives you less, because you have less control and you have to type out your shapes instead of just drawing them. And it's not like it's easy to export out of the browser. In Chrome at least, if it renders to a canvas and you right click on the element to save it you get a PNG, and if it renders into an SVG you get the option to save the entire web page.

Anyway, all I was suggesting is that if you want a similar rendering effect (i.e., 3D objects without triangulation) and z-fighting is an issue for you, then you can consider signed distance fields. That's all. If Zdog works for your purposes, use it instead.


> I wrote an SVG renderer

So, clearly you do understand the issues here.

Which is why I’m a bit puzzled by your push-back here. When I said “baked” I was talking about ray-marching spitting out pixels at a certain resolution. Once you render, you can’t change the resolution without re-rendering, so the resolution is “baked” into the raster output. If I save it to a file, my only choice is to save an image file at the resolution that I rendered at. If I want to display that file at a different resolution later, and I don’t have the renderer, my only choice is to resample the image.

> It’s a real time effect

I don’t know what you mean here. Ray marching is a rendering technique, you can choose to do it in real time, or you can choose to take hours. This has no bearing on whether it renders to pixels or vector art. Zdog can also be real-time, or offline batch.

> If you resize it the resolution changes and you re-render

Yes, exactly. You have to re-render a ray-marched image if you change the resolution. That is unlike SVG, which you can re-rasterize without re-rendering.

> The shapes that are used as a source for the rendering are all mathematical objects that can be combined and used at any scale, just like paths and curves in an SVG

I feel like we’re regressing a bit here, we already covered this. Yes, the input SDF & scene certainly are resolution independent. You have to compare that to the input of Zdog, which is not SVG, it’s the 3d shape description.

The output of ray marching is a raster. Zdog’s SVG output is, well, resolution independent SVG.

> If you lose the interactive 3D component, Zdog doesn’t really give you anything over just drawing it in a vector art program.

Your straw man hypothetical eliminated all of Zdog’s functionality there. You can use Zdog with 3d but without interactivity, right? That’s one specific use case I’m interested in. What Zdog buys is a conversion from a 3d scene description to a 2d scene description, that’s the value for me, and the very thing that ray marching does not do.


I think our fundamental disagreement here is that I think the main use for this is interactive graphics, and that's how it seems to be marketed, and you seem to think its main utility is as an art tool for generating SVGs. I personally think it would be a pretty awkward way to generate an SVG compared to the alternatives, since pulling the data out of the browser requires using dev tools or saving the entire page's HTML, and if you don't care about being able to move a camera around then it's probably quicker just to draw what you want in Inkscape. But if you want an interactive 3D thing and you don't care about self-intersecting objects then this is fine.

> I don’t know what you mean here. Ray marching is a rendering technique, you can choose to do it in real time, or you can choose to take hours.

Hours is really misrepresenting things. It's not ray tracing. It takes milliseconds. You can do it hundreds of times per second, with scenes that are vastly more rich and complicated than this can do.

> Yes, exactly. You have to re-render a ray-marched image if you change the resolution. That is unlike SVG, which you can re-rasterize without re-rendering.

Huh? That's a really weird distinction to make. Unless you mean "rendering" in the sense that web-devs often use it for writing tags, rasterizing and rendering are essentially the same thing. You have data and you need to convert it into pixels or blotches of ink.

If you insist on thinking of "rendering" as having a definition of the scene, then the shader you would have which defines the shapes would essentially be the "rendering". In fact it'd look very similar to the javascript, other than being written in glsl.

If I needed a ray marched scene on very fancy glossy printed paper I'd just rasterize it on a really big canvas and print that. It would take about ten milliseconds to generate that image.

> Your straw man hypothetical eliminated all of Zdog’s functionality there.

Not at all. It's perfectly fine if you want pseudo-3D in a browser. But a lot of people don't like depth sorting artifacts (I am among them). Not being able to intersect objects creates a very strict limit on what you can do. That's fine, but if you want something more flexible, I was just mentioning that there's an alternative. I don't know why you're pushing back so hard on my suggestion that, specifically, if you want a 3D scene in a browser that has fewer limitations, you can do that in an alternative way. You don't HAVE to. There are trade-offs. I just suggested the possibility. Signed distance fields aren't as well known as SVGs are, so I thought people would be interested in the concept.

> What Zdog buys is a conversion from a 3d scene description to a 2d scene description, that’s the value for me, and the very thing that ray marching does not do.

I mean, if it provides value for you to create art in that way that's great, do whatever you want. But I'm pretty sure if I polled 99/100 artists that wanted to create a static SVG file, the last thing they'd want to do is write javascript to place each object and then use the browser's dev tools to pull their data out of the SVG tag. (Most of them wouldn't even know how to do that). I mean if you don't want an interactive scene then there's much more flexible ways to create an SVG. (+ You'll notice in most of their examples, they're rendering to a canvas anyway, kind of making my point)

I guess my overall point is that if you're making something for print or you need a content authoring tool, you're probably using an entirely different set of tools, and if you're making something to be consumed in a browser, then whether you use SVG or a webgl canvas, you can achieve the same effect to the person viewing your content.


Yes, I am including both interactive and non-interactive use cases in my argument. You are assuming Zdog is limited and can only be used interactively in practice. I do disagree with your assumption.

> Hours is really misrepresenting things. It's not ray tracing. It takes milliseconds. You can do it hundreds of times per second, with scenes that are vastly more rich and complicated than this can do.

This is still irrelevant, and just not true in general; you're making assumptions and overstating ray-marching's generality. The time depends completely on what you choose to do, in all cases: with ray marching, with ray tracing, and with SVG rendering. Ray marching is usually fast and commonly used in interactive renderers, but plenty of ShaderToy shaders run much slower than hundreds of times per second. And while there are some very rich scenes, ray marching doesn't do meshes, and it isn't all that fast to render a large hand-drawn set of 2d curves (I and others have tried.)

> Huh? [...] rasterizing and rendering are essentially the same thing. You have data and you need to convert it into pixels or blotches of ink.

In the case of Zdog generating SVG, rendering and rasterizing are not the same thing. It is confusing me that you're refusing to acknowledge the utility of vector graphics, when I know you know the difference and the benefits.

Let's try saying it another way. A ray-marching shader turns an SDF into pixels directly in one step. Zdog turns the scene description into SVG. Later, the browser or printer turns the SVG into pixels, not Zdog.

If you don't want to call it rendering vs rasterizing, then we can call it two-pass rendering. Ray marching is one pass, and there's no intermediate representation. Zdog & SVG is two-pass and there's an intermediate 2d resolution independent representation.

Contrary to your claim that this distinction is somehow weird, the one and only singular benefit of the 2d intermediate representation is that you can change the resolution without having to re-generate the 2d description. This is the whole reason why formats like SVG and PDF even exist.
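
To make the two-pass point concrete, here's a minimal sketch using only browser APIs (nothing Zdog-specific; the SVG string is a stand-in for whatever pass one produced). The vector output is generated once and can be rasterized at any number of resolutions afterward:

  // Pass 1 happened elsewhere: some renderer produced an SVG string.
  const svgMarkup =
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">' +
    '<circle cx="50" cy="50" r="40" fill="#636"/></svg>';

  // Pass 2 can run any number of times, at any resolution,
  // without re-running pass 1.
  function rasterize(svg, width, height) {
    const url = URL.createObjectURL(new Blob([svg], { type: 'image/svg+xml' }));
    const img = new Image();
    img.onload = () => {
      const canvas = document.createElement('canvas');
      canvas.width = width;
      canvas.height = height;
      canvas.getContext('2d').drawImage(img, 0, 0, width, height);
      document.body.appendChild(canvas);
      URL.revokeObjectURL(url);
    };
    img.src = url;
  }

  rasterize(svgMarkup, 100, 100);    // screen size
  rasterize(svgMarkup, 4000, 4000);  // print size, same vector source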

> I don't know why you're pushing back so hard on my suggestion that, specifically, if you want a 3d scene in a browser that has fewer limitations you can do that in an alternative way.

It seems like you've made some assumptions and/or misunderstood me, because I don't believe I did any such thing. I stated that ray marching (rendering to pixels) is incompatible with rendering to a vector representation. I still stand by that. To be clear, I'm not saying anything about the validity of using SDFs in a different context. I love ray marching and use it all the time. It's still not compatible with vector rendering, and I don't use ray marching for the same things I would use Zdog for.


> You are assuming Zdog is limited and can only be used interactively in practice.

Well, it is pretty limited, by design, in terms of possible constructions you can make with it. I would not call it a general purpose tool. As far as I can tell it doesn't even have an export function; you'd have to scrape the SVG data out of the tag yourself. (Might be wrong, haven't closely looked at the API, just went through the examples.)

> Zdog & SVG is two-pass and there's an intermediate 2d resolution independent representation.

I realize this, but there are reasons why I think this is of niche utility:

- The constructions I can make are limited; i.e., this isn't a general purpose tool. It's mostly a neat toy. To me, for it to be a general purpose tool it'd need to be able to handle intersecting objects properly, and it fundamentally cannot do this. I like it as a neat toy; that's not a criticism! It's good at being what it is. I just think that you can do more with other techniques, if you want to do more.

- Yes, SVG is a useful format, but there are some downsides to rendering a 3D scene into an SVG: specifically, now you have to parse and interpret that data before you can even display it. That's definitely not computationally trivial. Also, having written things that render vector graphics, I can tell you that rendering vector graphics on 3D hardware is extremely hard to do efficiently. Either you need to tessellate into very-high-polygon meshes, or ignore the GPU entirely and do it in software on the CPU (slow!), or use much more obscure methods. I'm sure the clever people working on Chrome have done this well, but the point still stands that SVGs aren't some magical way to make graphics fast. What they're very good at is being resolution independent. Ironically, one of the more cutting-edge ways to try to rasterize vector graphics faster is: ... by using signed distance fields :-) (Look up how Valve renders fonts, for instance.)

- If you're saying that SVGs are more space efficient in this context than rasterized graphics, maybe, but I'd point out that even with a huge image we're talking about very few colors (in this style). You could store it in an 8-bit PNG with extremely good lossless compression.


Are you suggesting ray marching the object instead of drawing 2D shapes?


Yes, if you don't want z-fighting. If you don't care then this is perfectly fine. But since both approaches can be expressed in a very similar way (primitives that can easily be mathematically modeled), and rendered in the same style, I figured it's an interesting way of achieving the same thing if you want the same style and scalability while allowing for intersecting objects. The trade-off is that ray marching is more expensive.
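
For anyone who hasn't seen it, the core of ray marching is just a short loop that steps along each ray by however far the SDF says is safe. A bare-bones sketch in JavaScript (real versions run per pixel in a fragment shader):

  // March from origin `ro` along unit direction `rd` until the scene
  // SDF reports a surface, or we give up. sceneSDF is any function
  // built from SDF primitives (spheres, boxes, blends).
  function march(ro, rd, sceneSDF) {
    let t = 0;
    for (let i = 0; i < 128; i++) {
      const p = { x: ro.x + rd.x * t, y: ro.y + rd.y * t, z: ro.z + rd.z * t };
      const d = sceneSDF(p);      // distance to the nearest surface
      if (d < 0.001) return t;    // hit: per-pixel visibility, no z-fighting
      t += d;                     // largest step guaranteed not to overshoot
      if (t > 100) break;         // ray escaped the scene
    }
    return -1;                    // miss
  }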


Perhaps if you want to stay truer to the spirit of the system, then you might have Zdog generate faux depth buffers for each of its shapes (using gradients when necessary).

Then you might still take advantage of Chrome’s “optimized” SVG rasterization, but also use the GPU to do depth compositing on the color buffer afterward. As a bonus, depth-fade intersection shaders could work (which are tricky with raymarching).


Is it me, or does that `addTo` API feel very strange? Creating new objects with no assignments, with the side effect of adding them to an illustration.


The name is non-standard, but the concept is absolutely normal for a 2D or 3D rendering API. You can think of the ‘addTo’ property as the item’s parent. In 3D APIs this is normally called an instance or transform node, and here the transform is combined with the shape type. Sometimes those are separate things, and they would each have a parent attribute. Some APIs do the assignment the other way, by adding children to the parent. Either way, my short answer is no, this doesn’t feel strange.


That makes sense, I'm just not used to constructors having side effects, though I can imagine wanting to skip the extra step of adding it to the illustration. But wouldn't something like

illustration.addRect({}) look better?


Don't constructors always have side effects? That's the only reason to have them.

I wouldn't think of this as a side-effect though; think of it as a graph property. addTo is setting up a tree structure, the same way you might set up a linked list. Think of the addTo property as a pointer to the parent, rather than a side-effect.

A doubly-linked list has two pointers per node, one for next node and one for previous. A tree node in a scene graph also has two pointers per node, one for parent and one for child. A parent can have multiple children, and so might arrange the children in an array. A child, however, can have only one parent. This makes setting the parent simpler than setting a child, considering corner cases like duplicate children.

Your suggestion is pointing at the addChild() form rather than addParent(). One of the other comments said that kind of call is available in Zdog, so maybe you can just use that instead.

But, I wouldn't use addRect() I would prefer addChild(). A call like addRect() is binding the type of shape and scene graph setup unnecessarily, so you'd have to provide separate calls for each shape type.
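
For anyone following along, the two styles look like this side by side (the shape options are just for illustration):

  let illo = new Zdog.Illustration({ element: '.zdog-canvas' });

  // Bottom-up: the child names its parent at construction time.
  let dot = new Zdog.Shape({
    addTo: illo,
    stroke: 20,
    color: '#636',
  });

  // Top-down: construct first, attach with addChild() afterward.
  let dot2 = new Zdog.Shape({ stroke: 20, color: '#EA0', translate: { x: 30 } });
  illo.addChild(dot2);

  illo.updateRenderGraph();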


> Don't constructors always have side effects?

The only side effect they should have is creating a new instance, no playing around with some other object's hierarchy.

> think of it as a graph property. addTo is setting up a tree structure

The problem is that the hierarchy here is backwards. You usually add children to the parent, not parent to the children. Having a link back to your parent does help with navigating, but it shouldn't be how you "build" the tree.

When building a tree, do you think of it as building from children up to the parent? I did see that addChild was a thing, and that makes much more sense to me.


> The only side effect they should have is creating a new instance, no playing around with some other object’s hierarchy.

You might be making assumptions. The ‘addTo’ property does not imply the constructor is touching any other object.

You might be getting confused because the name is a verb. Again, addTo isn’t very standard naming, but what it represents is the parent property. You should think of it as a property that can define the graph, not assume that the name implies anything about the implementation. Even if the constructor does do something external to the node, the implementation can change.

> The problem is that the hierarchy here is backwards. You usually add children to the parent... it shouldn’t be how you “build” the tree.

No, it’s bottom-up, not backwards. And bottom-up builders are common in graphics, so I disagree with your assumption about how trees should be built. I think of building trees from child to parent all the time, because, as I mentioned before, it’s a simpler operation: all nodes have one and only one parent, where nodes can have any number of children and complications managing those children.


But the example calls render on `illo`, which is the parent. So at some point the "double link" between illo and the child was created, which is a side-effect, no?


Presumably updateGraph() and even render() can both tidy up linkages and anything else that needs updating. The shape constructor might well have a side-effect though, I don't know.

Hey I hear your point, this API is a little bit different. Maybe since it's early your feedback is fully justified. Just from my perspective it didn't strike me as "very strange", only perhaps a little bit. I don't think you're wrong, it's just a matter of degree... which is subjective anyway, so take this as lukewarm agreement with you. :)


When the objects are tightly coupled, it makes sense to keep this logic internal.

This way it's short and self-documenting, easier to improve without breaking existing stuff, and likely has some implicit stability and performance gains because developers can make fewer mistakes.


There is an alternate option: anchor.addChild(shape)

https://zzz.dog/api#anchor-addchild


Does this library support dynamic rendering? I tried to add extra elements to the illustration after rendering it to the canvas and it doesn't appear to work. I might be using the API incorrectly though - is the idea that you can use this library to produce a static model which can then be transformed (eg. rotated), but can't be extended after the first render?


Did you try to call updateGraph() after adding elements? It seems like it should do the trick.

https://zzz.dog/api#anchor-updategraph


This was my initial attempt using an Illustration https://codepen.io/anon/pen/NVORrZ

This is my 2nd attempt using the docs you linked https://codepen.io/anon/pen/QRZKxy?editors=0010

What do you think - is updateGraph + render supposed to work more than once?


No, the underlying flatgraph (which is used to generate the drawing commands) is cached and only updated on first call. Reset it by calling

  scene.updateFlatGraph();
after adding new shapes, before updating the graph & rendering

(see https://github.com/metafizzy/zdog/blob/master/js/anchor.js#L...)

edit: made an issue https://github.com/metafizzy/zdog/issues/21
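
Putting that together, something like this sketch should work for dynamic scenes (updateFlatGraph is an internal method per the anchor.js source above, so this may change once the issue is resolved):

  let illo = new Zdog.Illustration({ element: '.zdog-canvas' });
  new Zdog.Ellipse({ addTo: illo, diameter: 60, stroke: 10, color: '#636' });
  illo.updateRenderGraph(); // first render caches the flat graph

  // Later: add a shape, then force the cached flat graph to rebuild.
  new Zdog.Rect({ addTo: illo, width: 40, height: 40, stroke: 8, color: '#EA0' });
  illo.updateFlatGraph();   // internal, see anchor.js
  illo.updateRenderGraph();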


Cheers!


Are there similar libraries for charting and plotting data? That would be super useful for visualizing complex datasets in a 3D space.


I found VPython extremely helpful for visualizing an aerodynamic model of one of our parachute systems at work.

https://vpython.org/


Is this something I can use locally or in Jupyter or do I have to use GlowScript?


This is a great question; the whole GlowScript vs. VPython thing is really confusing, and I feel it's presented in a complicated way on their site. It works great in plain old Jupyter and also from plain old Python. If you use it in non-Jupyter Python, it'll launch a web server and browser to play the animation.


Thank you, it really wasn’t clear.


You could just use Zdog for it...

https://observablehq.com/@jrus/munsell-spin


This is amazing. The examples are great and feel almost too easy.

I hope I get the chance to use it.


Same. I kind of wish I had a need for this sort of thing.

Without this sort of library I would have zero chance of building this myself. Amazing.


> I kind of wish I had a need for this sort of thing.

Just Do It. Over some coffee instead of loading HN.

Make a snowman. Three spheres. Progressively smaller. Make two eyes, then get the snowman to rotate. Then add a carrot nose. Maybe two arms as extra credit.

Doesn't take long, and you've dabbled enough to at least think of it if you ever do need it. And best of all, you've replaced idle consumption with creation, which is always a more fulfilling use of time.
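
For anyone taking the advice, the whole snowman is maybe a dozen lines. An untested sketch (in Zdog a point with a fat stroke renders as a ball, and positive y points down):

  let illo = new Zdog.Illustration({ element: '.zdog-canvas', dragRotate: true });

  // Three progressively smaller spheres.
  new Zdog.Shape({ addTo: illo, stroke: 80, color: 'white', translate: { y: 60 } });
  new Zdog.Shape({ addTo: illo, stroke: 60, color: 'white' });
  let head = new Zdog.Shape({ addTo: illo, stroke: 44, color: 'white',
    translate: { y: -46 } });

  // Two eyes and a carrot nose, parented to the head so they rotate with it.
  new Zdog.Shape({ addTo: head, stroke: 6, color: '#333',
    translate: { x: -8, y: -4, z: 20 } });
  new Zdog.Shape({ addTo: head, stroke: 6, color: '#333',
    translate: { x: 8, y: -4, z: 20 } });
  new Zdog.Cone({ addTo: head, diameter: 8, length: 16, stroke: false,
    color: '#E62', translate: { z: 22 } });

  function animate() {
    illo.rotate.y += 0.02; // get the snowman to rotate
    illo.updateRenderGraph();
    requestAnimationFrame(animate);
  }
  animate();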


You are the man. Took the advice. :D


Been following this project on Twitter for a while through the creator (Dave DeSandro). Looks great. Used to use Masonry all the time in projects; glad to see he is still creating cool shit.


N00b question - why is it called pseudo-3D? It looks and feels very much 3D, like a three.js output.


Submitted a clarifying description: https://github.com/metafizzy/zdog-docs/issues/4

> Zdog is a pseudo-3D Vector Art engine. You define your geometry using 3D splines and blobs, which render as 2D SVG shapes for any arbitrary perspective.

> For nerds—this simple translation is made possible by assuming an orthographic projection (where drawing 3D splines becomes trivially reducible to flat 2D splines—not so with perspective projection). More obvious, the occlusion-culling is achieved with a trivial drawing order or by hiding entire 2D primitives when appropriate.


Thanks so much!


> Its geometries exist in 3D space, but are rendered as flat shapes.


So does pretty much every single 3D engine that renders to a flat surface, like a monitor.

I think it is just a misconception stemming from the 90s (notice the influence of this engine) when anything non-polygonal (and sometimes, non-GPU-accelerated, depending on who you asked and how misinformed they were) was considered "pseudo 3D".

But there is nothing pseudo here, if your scene data structures use three dimensions, you are doing 3D.


In conventional 3D graphics each pixel is painted in 3D and a depth test applied. Here, 2D shapes are ordered discretely to create a 3D effect.

Instead of the per-pixel z-fighting you conventionally see in 3D graphics, here we see entire shapes cross each other. This method cannot correctly paint intersecting, or cyclically occluding shapes (without breaking them into smaller components). I think "pseudo" could refer to the lack of generality in the system's ability to render 3D geometry.
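
In code, the distinction is roughly this (a schematic sketch, not Zdog's actual internals; `shape.draw` is a placeholder): ordering is decided once per shape, not once per pixel.

  // Painter's algorithm: sort whole shapes back-to-front, then draw.
  // Each shape gets ONE depth value, so two shapes that cross in z
  // can't be split at the crossing -- one simply pops over the other.
  function renderScene(ctx, shapes) {
    shapes
      .slice()                            // don't mutate the caller's array
      .sort((a, b) => a.z - b.z)          // farthest first
      .forEach(shape => shape.draw(ctx)); // nearer shapes overpaint earlier ones
  }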


A depth buffer is something you find commonly nowadays, especially with GPUs providing it essentially for free, but again it is not what makes 3D graphics: a lot of otherwise fully 3D polygonal games were made without one, nor is it necessary for creating 3D graphics. With other approaches to 3D rendering you do not even need a depth buffer to get correct results for penetrating objects (e.g. classic raytracing).

What is happening here is called "painter's algorithm" and was a very common approach for realtime graphics before depth buffers became hardware accelerated. Actually it is still often used even today for transparencies since those often cannot be rendered in arbitrary order and you need to depth sort them. One common approach to solve these issues - both then and now - is to subdivide/preprocess geometry so that wrong overlaps are not or rarely happening.


Indeed. I avoided the term depth buffer for this reason. A depth test is still necessary on a per-pixel basis for a correct solution to the visibility problem in the general case - even if that's done by raytracing or some other method.

The painter's algorithm is not a general solution because it cannot handle cyclical occlusion: https://en.wikipedia.org/wiki/Painter's_algorithm#/media/Fil... Its limitations constrain the geometry which may be correctly rendered.


My point above is that to call something 3D graphics does not depend on having per-pixel correct visibility and painter's algorithm (and others) is a valid method to render 3D graphics and was for many years. Applications using it either accepted the limitation (many games did that) or tried to overcome it by preprocessing the geometry to create clusters that are unlikely to overlap or by performing subdivision on the projected polygons.


The other thing - this engine does orthographic projection, which (IIRC) preserves relative sizes and parallelism between points without any perspective distortion.

It's a commonly used projection method for when you don't want scaling issues, where the need for measurement accuracy is more important (such as for CAD drawings - which inherited it from the standard technique for 3D model projection from the regular 3-view top/plan/side of pencil/paper drafting).

Most ordinary 3D engines use a true perspective projection, where distance from the view camera (or view plane) is taken into account, which causes objects (and parts of objects) to look smaller as they recede from that plane.
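
The two projections in miniature (my own sketch; `focal` is an assumed camera parameter, not anything from Zdog):

  // Orthographic: just drop z. Sizes don't change with depth, so a 3D
  // spline projects to the same 2D spline from any distance.
  function ortho(p) {
    return { x: p.x, y: p.y };
  }

  // Perspective: divide by depth. Objects shrink as they recede, and
  // the spline-stays-a-spline shortcut no longer holds.
  function perspective(p, focal) {
    const s = focal / (focal + p.z);
    return { x: p.x * s, y: p.y * s };
  }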


Yes, but that again is not what would make something "not 3D" - orthographic projection is common in 3D graphics and even most game engines support it.


If it helps, think of it as an “actual 3D” scene where objects are laid out in three dimensions, but within each bounding box is just a 2D painting of that object. So wherever objects intersect they just act like flat sprites hitting each other. They also have flat (2D) lighting.

It’s very similar to a sprite based 3D engine.

For other “pseudo-3D” engines see Wolfenstein, or even Super Mario which has “actual 3D” parallax but 2D drawing within each layer.

Another example is Heretic (a Doom engine variant) which could fake looking up and down by warping tiles, but distortion increased the higher up you look.

In every case there really is real 3D projection. But it just doesn’t go all the way. It retains some 2D algorithms for some part of the rendering.


Back in the day it was also referred to as "2.5D"

https://en.wikipedia.org/wiki/2.5D

Also, the whole sprite thing was used in a lot of Amiga demos in the late-80s-early 90s and was termed "vector bobs":

http://www.pouet.net/prod.php?which=3583

http://www.pouet.net/prod.php?which=50417


Sadly, 2.5D has basically lost all its meaning. It's sometimes meant to represent sidescrollers that use 3D polygons, or even fully 3D games that just have a certain camera angle, or any number of other things. :(


Heretic is a 3D game, as was Doom, since the world is really 3D: in addition to the 2D vectors making up the walldefs, there is a height component in each sector and entity which adds the third dimension. That some parts of the game (entity-to-entity collision, but not hitscan-to-entity which knows about elevation) ignore the 3rd dimension or that the renderer took advantage of the way the world was represented to speed up wall rendering doesn't really change that the world itself is made up of three dimensions.

Entities being 2D sprites is also irrelevant as they are just billboards and many engines even today use billboards for various reasons (particles, foliage, LOD, etc). They are also placed in 3D space.

Wolfenstein 3D on the other hand, yes, that is a fully 2D game as there is no third dimension outside the implied wall height used during rendering.

I mean, it is really simple; there's no need to justify some old misconceptions about "2.5D" or whatever that persist due to pop culture: if your graphics data has three dimensions, you are doing 3D graphics.


The original PlayStation didn't even have real 3D texture mapping, it used linear transformations rather than perspective transformations.


This is an element of how rasterization is performed and has no bearing on whether the graphics are 3D, though. PS1 hardware having no perspective-correct texture mapping doesn't make the games that used it "not 3D" - and on the flip side, a side-scrolling 2D platformer using a rasterizer capable of perspective-correct texture mapping (like any platformer that uses OpenGL or Direct3D to render its worlds as triangles) doesn't make the game use 3D graphics.
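
For the curious, the difference is in how texture coordinates get interpolated across a triangle. A sketch of both, interpolating a coordinate `u` between two projected vertices:

  // Affine (PS1-style): lerp u directly in screen space.
  // Cheap, but textures "swim" on polygons viewed at an angle.
  function affineU(u0, u1, t) {
    return u0 + (u1 - u0) * t;
  }

  // Perspective-correct: lerp u/z and 1/z, then divide per pixel,
  // which restores the depth-dependent spacing the affine lerp loses.
  function perspectiveCorrectU(u0, z0, u1, z1, t) {
    const uOverZ = u0 / z0 + (u1 / z1 - u0 / z0) * t;
    const invZ = 1 / z0 + (1 / z1 - 1 / z0) * t;
    return uOverZ / invZ;
  }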


React bindings are ready for testing: https://github.com/drcmda/react-zdog :-)


This is really nifty. Bonus points for suggesting tau rather than pi across the whole set of examples. Use cases like this make it entirely obvious pi is Just Bad


Wow. Nice! I can imagine a plethora of things being built over this. Less applicable in real life, but also fun: https://www.youtube.com/watch?v=qoxmyH7GezE https://github.com/s0lly/Pseudo3DEngineInExcel


Nice. I've been playing around with 2D Canvas for audio visualizations and looked into WebGL but was mortified by how complicated it was to render basic 3D shapes and just gave up.


The answer is always to start with Three.js, especially if you have no 3D graphics experience and just want to render basic shapes. Drawing a textured cube is relatively straightforward (https://github.com/mrdoob/three.js/blob/master/examples/webg...).
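
Roughly this much code gets you a spinning cube (a sketch from memory of the standard Three.js hello-world, untextured; check the current docs for exact details):

  import * as THREE from 'three';

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.1, 100);
  camera.position.z = 3;

  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(innerWidth, innerHeight);
  document.body.appendChild(renderer.domElement);

  const cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshNormalMaterial()
  );
  scene.add(cube);

  renderer.setAnimationLoop(() => {
    cube.rotation.x += 0.01;
    cube.rotation.y += 0.01;
    renderer.render(scene, camera);
  });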


I'd love to see the old tensorflow logo built with this.


I was thinking of how cool it would look to be able to rotate all 600+ logos in the CNCF Interactive Landscape into the third dimension.

https://landscape.cncf.io/


Or the cover of Gödel, Escher, Bach


Wow. This is a nice execution of an idea. Very well done.


Very clean and fast. The API also looks good, I like hierarchical translation and scaling. Is there anything preventing animation?


Seems like there shouldn't be, since the examples are interactive (you can rotate and move the shapes in 3D space with the mouse), but perhaps animation is a DIY exercise on top of this library.


Feels light and fast. Thank you for creating and sharing. I’m wondering what are some useful applications?


Wow! This looks fantastic! Kudos / bravo / thank you for creating and sharing this!! :)


Awesome, congratulations to the creator! I started doing something a little similar, to render pseudo 3d shapes to canvas, here is an example[0]. Try clicking on the page and moving the mouse.

[0] http://jsquest.net/chagas/shift.html


But when will we get ray-tracing Electron apps for Windows Home tablets?


I love this. I think "pseudo-2D" might be more accurate.


Does it work on IE11?


Just tried it. No it doesn't.


This is delightful


Pop this on Product Hunt, you'll get your first few thousand users in no time.

> Zdog is directly inspired by Dogz, a virtual pet game by P.F. Magic released in 1995. It used flat 2D circle sprites to render the Dogz’ models, but in a 3D scene. See Dogz playthrough video here. Dogz were fully animated in real time, running, flopping, scratching (on Windows 3.1!). It was remarkable.

The new repeating the old. Love it when someone looks back historically, finds a great idea, and re-implements it for a new use case!


I hadn't thought about Dogz in ages. This video brought back some nostalgic memories.

https://www.youtube.com/watch?v=6lKSn_cHw5k


I remember playing its successor Petz: https://www.youtube.com/watch?v=Qw2tJj5IX5k

Definitely have memories from 90s childhood gaming.


It reminded me of Ecstatica (https://en.wikipedia.org/wiki/Ecstatica), an early 90s game that rendered characters with sorted ellipsoids. It also reminds me of Ballz 3D, which also used sorted circles for rendering.

Some Pico-8 demos also use sorted circles for a 3D effect (https://www.lexaloffle.com/bbs/?tid=4050) and (http://www.pouet.net/prod.php?which=66974)


When I was looking at this and seeing the z-sorting pop I was instantly reminded of Catz, a similar product. Cool to see that was the inspiration



