Why are 2D vector graphics so much harder than 3D? (mecheye.net)
347 points by Jasper_ on May 9, 2019 | 85 comments



>This new printer was based on a small, stack-based Turing-complete language called Forth, and they named it… the Interpress!

Xerox's laser printers were never "based on Forth". JAM and Interpress were only conceptually related to Forth, in that they used postfix semantics.

That's like saying that because Java byte code uses a stack, your Android phone is based on Forth. It's not. Stack based architectures (like the Burroughs B5500 and B6500, developed in 1962, which directly inspired Design System => JAM => Interpress => PostScript) have been around a lot longer than Forth (developed in 1970), and simply using a stack doesn't make a language Forth.

Here's a much more accurate and detailed historical account of the history of Interpress, written by Brian Reid:

https://groups.google.com/forum/#!topic/fa.laser-lovers/H3us...

Gaffney came to E&S from graduate school at the University of Illinois, where he had used the Burroughs B5500 and B6500 computers. Their stack-oriented architectures made a big impression on him. He combined the execution semantics of the Burroughs machines with the evolving Evans and Sutherland imaging models, to produce the Design System. Like all successful software systems, the Design System slowly evolved as it was used, and many people contributed to that evolution.

John Warnock joined Xerox PARC in 1978 to work for Chuck Geschke. There he teamed up with Martin Newell in producing an interpreted graphics system called JAM. "JAM" stands for "John And Martin". JAM had the same postfix execution semantics as Gaffney's Design System, and was based on the Evans and Sutherland imaging model, but augmented the E&S imaging model by providing a much more extensive set of graphics primitives. Like the later versions of the Design System, JAM was "token based" rather than "command line based", which means that the JAM interpreter reads a stream of input tokens and processes each token completely before moving to the next. Newell and Warnock implemented JAM on various Xerox workstations; by 1981 JAM was available at Stanford on the Xerox Alto computers, where I first saw it.


Thank you for the correction. I revised the phrasing slightly.

Jerry Mendelson's Laser Lovers post [0] reads "A FORTH like graphics/printing language was developed by, among others, John Warnock before coming to Xerox/PARC." Mendelson quoting Forth as the first inspiration here made me believe they knew of and used it. But admittedly "based on" is a bit much.

[0] https://tech-insider.org/unix/research/1985/0301.html


Thanks for writing and updating such a great, well-researched article!

Funnily enough, the first sentence of the Wikipedia page about Interpress literally says: "Interpress is a page description language developed at Xerox PARC, based on the Forth programming language [1] and an earlier graphics language called JaM."

https://en.wikipedia.org/wiki/Interpress

The [1] reference is to: Harrington, Steven; Buckley, Robert R. (1988). Interpress, the Source Book. Brady Book. p. 18. ISBN 0-13-475591-X.

Unfortunately, I can't find a copy of it online (wouldn't it be deliciously ironic to find a PDF rendering of the Interpress Source Book? ;), so I can't check if the citation literally says Interpress was based on Forth, but I've asked Brian Reid if he had a copy or knew a better citation. I'll update the Wikipedia page if I get any more information.

I really love Brian's first-hand account of the origins of PostScript that he posted to laser-lovers!

If the Helvetica font can have a movie all about it, I think the story of PostScript deserves a whole miniseries! "showpage: the PostScript Documentary"

https://www.youtube.com/watch?v=wkoX0pEwSCw

Brian Reid is also known for Scribe, the Usenet Cookbook, and is well respected enough in computer security circles that Kevin Mitnick impersonated him on the phone to gain people's trust! ;)

https://en.wikipedia.org/wiki/Brian_Reid_(computer_scientist...

Brian's brother Glenn Reid was also very active in the PostScript world; he worked for Adobe (Illustrator), Apple (iMovie), Fractal Design (Painter, Dabbler, Poser), and NeXT (Interpersonal Computing).

Around 1990, Glenn Reid independently designed and developed a delightfully original "Font Appreciation" app for NeXT called TouchType, which only recently, decades later, somehow found its way into Illustrator. Adobe even CALLED it the "Touch Type Tool", but didn't give him any credit or royalty. The only differences in Adobe's version of TouchType are that there's a space between "Touch" and "Type" (which TouchType made really easy to do), and that it came decades later!

Illustrator tutorial: Using the Touch Type tool | lynda.com: https://www.youtube.com/watch?v=WUkE3XLw_EA

SUMMARY OF BaNG MEETING #4, July 18, 1990: https://ftp.nice.ch/peanuts/GeneralData/Usenet/news/1990/_CS...

TOUCHTYPE Glenn Reid, Independent NeXT Developer

The next talk was given by Glenn Reid, who previously worked at both NeXT and Adobe. He demonstrated the use of his TouchType application, which should prove to be an enormous boon to people with serious typesetting needs.

TouchType is unlike any other text-manipulation program to date. It takes the traditional "draw program" metaphor used by programs like TopDraw and Adobe Illustrator and extends it to encompass selective editing of individual characters of a text object. To TouchType, text objects are not grouped as sequences of characters, but as individually movable letters. For instance, the "a" in "BaNG" can be moved independently of the rest of the word, yet TouchType still remembers that the "a" is associated with the other three letters.

Perhaps the best feature of this program is the ability to do very accurate and precise kerning (the ability to place characters closer together to create a more natural effect). TouchType supports intelligent automatic kerning and very intuitive, manual kerning done with a horizontal slider or by direct character manipulation. It also incorporates useful features such as sliders to change font sizes, character leading, and character widths, and an option which returns characters to a single base line.

TouchType, only six weeks in development, should be available in early August, with a tentative price of $249. BaNG members were given the opportunity to purchase the software for $150.


This is why I come to HN


This is why I usually check out the 2-3 top comments and then read the article.


Same here! I thought I was the only one with this strange habit.


The knowledge base present here means misinformation can be called out quickly and impactfully.


>The only major problem with PostScript was its Turing-completeness — viewing page 86 of a document means first running the script for pages 1-85. And that could be slow.

1) EPS (which the article just mentioned) is part of Adobe's Document Structuring Conventions, which solved the problem of skipping and reordering pages.

https://www.adobe.com/content/dam/acom/en/devnet/actionscrip...

https://www.complang.tuwien.ac.at/ulrich/PS/eps.html

>Every DSC-compliant document is indicated by having the comment %!PS-Adobe 3.0 as the first line. This comment is a flag to indicate that the document is compliant. You should never use this comment unless your document really is DSC compliant. There are many other parts to proper DSC. A document which follows the DSC can be manipulated in many ways. In particular, postprocessors can shuffle the pages, print two or more pages on a side, and so on. The printer drivers from some notable companies do not follow the DSC, and their PostScript documents are, therefore, impossible to work with once they've been generated.

2) Inefficiently skipping pages (or being slow) is NOT the main problem with PostScript's Turing completeness!

https://www.loc.gov/preservation/digital/formats/fdd/fdd0002...

>Security Issues: In early 2017, Microsoft took steps to prevent exploits based on Word's import "filter" for EPS. Based on the assessment that "EPS files are a legacy format that has largely fallen out of favor in today’s ecosystem", Microsoft disabled the import filter by default in its April 2017 security update for Office. According to PPTools post I can no longer insert EPS graphics into PowerPoint, this update applied to Office 2010, 2013, and 2016/365. Import of EPS files into Office 2007 has been disabled since 2015. The stand-alone Office for Mac is not affected.

>A Jan 2017 post from The Register, We don't want to alarm you, but PostScript makes your printer an attack vector, publicizes the vulnerability of networked PostScript printers. The potential for damaging exploits based on communication between computers and PostScript printers has been understood for years, but was not a serious threat before PostScript printers were networked and accessible over the Internet. See also PostScript page from Hacking Printers.

http://hacking-printers.net/wiki/index.php/PostScript


Brian Reid wrote about page independence, comparing Interpress' and PostScript's different approaches. Adobe's later voluntary Document Structuring Conventions actually used PostScript comments to make declarations and delimit different parts of the file -- it wasn't actually a part of the PostScript language, while Interpress defined pages as independent so they couldn't possibly affect each other:

https://groups.google.com/forum/#!topic/fa.laser-lovers/H3us...

>By now you can probably see the fundamental philosophical difference between PostScript and Interpress. Interpress takes the stance that the language system must guarantee certain useful properties, while PostScript takes the stance that the language system must provide the user with the means to achieve those properties if he wants them. With very few exceptions, both languages provide the same facilities, but in Interpress the protection mechanisms are mandatory and in PostScript they are optional. Debates over the relative merits of mandatory and optional protection systems have raged for years not only in the programming language community but also among owners of motorcycle helmets. While the Interpress language mandates a particular organization, the PostScript language provides the tools (structuring conventions and SAVE/RESTORE) to duplicate that organization exactly, with all of the attendant benefits. However, the PostScript user need not employ those tools.


"Without getting too into the math weeds, at a high-level, we call approaches like Bezier curves and Hobby splines implicit curves"

I don't think of Bézier curves as implicit in any way. There's an explicit equation for the curve as a function of a "time" parameter (0 at the beginning of the curve, 1 at the end, increasing monotonically in between).


Yup, the entire point of Bezier splines is that they are very easy to explicitly evaluate, and somewhat intuitive for humans to control.

The gist of a Bezier curve is "Take weighted averages, repeatedly".
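Concretely, "take weighted averages, repeatedly" is De Casteljau's algorithm. A rough Python sketch of the idea (the control points and sample count below are just for illustration):

    def lerp(p, q, t):
        # Weighted average of two 2D points.
        return (p[0] + (q[0] - p[0]) * t, p[1] + (q[1] - p[1]) * t)

    def bezier_point(control_points, t):
        # Evaluate a Bezier curve of any degree at parameter t in [0, 1]
        # by repeatedly averaging adjacent control points (De Casteljau).
        pts = list(control_points)
        while len(pts) > 1:
            pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
        return pts[0]

    # A cubic Bezier, sampled at 11 points along its length.
    cubic = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
    samples = [bezier_point(cubic, i / 10) for i in range(11)]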


The curve isn't necessarily implicit or explicit; multiple forms exist. Think of the standard form of a line versus a parametric form.

For beziers, you might explicitly evaluate them at time t if you're doing animation, or implicitly evaluate them at position x,y if you're rendering them.


Yes. Technically they are parametric curves but you can turn any parametric curve into an implicit one (the reverse not being true), so I chose to leave out that small detail for narrative purposes.


>Technically they are parametric curves

It would make the already excellent article even better if you simply linked to the relevant[1] Wikipedia page.

Similarly, for 3D, you really want parametric surfaces[2], which NURBS are.

For the readers:

The key difference: to plot an implicit curve, you need to solve an equation, which is a whole other can of worms. Solving equations is hard. For high-degree equations, there are no formulas for the solutions!

To plot a parametric curve, you just need to plug numbers into some formulas. That's much easier!

The problem discussed in the article is that even if you have an explicit parametrization, answering questions like "Where does this curve/surface intersect this ray?" involves equation-solving. And that's hard. As would be something like "represent this curve/surface with segments/triangles of uniform size".

TL;DR: implicit means you only have an equation the curve must satisfy, not a formula that produces its points. Bezier and NURBS have easy point-producing formulas; they are parametric. (There's a small sketch of the difference after the links below.)

[1]https://en.wikipedia.org/wiki/Parametric_equation#Explicit_e...

[2]https://en.wikipedia.org/wiki/Parametric_surface
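To make the split concrete, here's a rough Python sketch using a circle rather than a Bezier, only because both of its forms fit on one line (the numbers are arbitrary). Plotting the parametric form is just plugging in t; the implicit form has to be solved to plot, but it trivially answers "which side of the curve is this point on?":

    import math

    # Parametric form: plug in t, get a point on the curve. Easy to plot.
    def circle_parametric(t, r=1.0):
        return (r * math.cos(t), r * math.sin(t))

    outline = [circle_parametric(2 * math.pi * i / 100) for i in range(100)]

    # Implicit form: f(x, y) = 0 exactly on the curve. Plotting it means
    # solving that equation, but inside/outside is just a sign test.
    def circle_implicit(x, y, r=1.0):
        return x * x + y * y - r * r

    inside = circle_implicit(0.3, 0.4) < 0  # True: (0.3, 0.4) is inside the unit circle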


The implicit form is what's used when rasterizing, because we have the opposite problem: we don't care about the value of a specific bezier function at a specific time t, we care about whether the pixel at location x,y is inside the path created by a number of bezier segments (but it's even more complicated, because of winding order). That can't be solved with the parametric form alone.


Indeed, but the terminology used in mathematics is very clear. So perhaps add this snippet to the article?

Another thing possibly of value: where Bezier curves come from - automotive design. The machines that were tracing these curves out needed to solve the forward problem (where should the tool head be at time t?), and the designers needed an easy way to change the shape of the curve. The Bezier splines were (and still are) a great solution to that problem.

For that reason, it's easy to convert Bezier splines into G-Code for either plotter - or, these days - 3D printer.

Also, sampling the parametric form does give you a polygon (or a polygonal mesh), which is easier to work with. But that's another can of worms.


>That can't be solved with the parametric form alone.

That's not strictly true - you could choose a numerical algorithm like Newton-Raphson.

3D B-rep modelers will typically solve for x,y,z on a curve that way, because many of the curves won't have an implicit form of any reasonably low degree. In fact, some of them won't have an implicit form at all.
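As a rough illustration (plain Python, one coordinate of a cubic Bezier; the starting guess, iteration count, and tolerance are arbitrary, and a real B-rep kernel would add bracketing and fallbacks), Newton-Raphson finds the parameter t where x(t) hits a target value without ever needing an implicit form:

    def cubic_coord(p0, p1, p2, p3, t):
        # One coordinate of a cubic Bezier in Bernstein form.
        u = 1.0 - t
        return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

    def cubic_deriv(p0, p1, p2, p3, t):
        # Derivative of that coordinate with respect to t.
        u = 1.0 - t
        return 3 * u**2 * (p1 - p0) + 6 * u * t * (p2 - p1) + 3 * t**2 * (p3 - p2)

    def solve_t_for_x(p0, p1, p2, p3, x0, t=0.5, iters=20, eps=1e-12):
        # Newton-Raphson: find t with x(t) approximately equal to x0.
        for _ in range(iters):
            f = cubic_coord(p0, p1, p2, p3, t) - x0
            df = cubic_deriv(p0, p1, p2, p3, t)
            if abs(df) < eps:
                break
            t -= f / df
        return t

    # Where does x(t) reach 0.25 for control x-coordinates 0, 0.1, 0.9, 1?
    t_hit = solve_t_for_x(0.0, 0.1, 0.9, 1.0, 0.25)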


I understand and appreciate striking a balance between precision and a good story.

But I thought in this case, perhaps this was a big deal. I thought one of the reasons why 2D graphics and 3D graphics are so different is because of ... is it topology? ... For any arbitrarily complex curve, I think I can give you a 1D parameterization for it. At least to some desired accuracy.

What condition is required of a surface so that you can do the same, with a 2D parameterization?


> For any arbitrarily complex curve, I think I can give you a 1D parameterization for it.

If you haven't seen it already, read this paper by Laurent Boue: "Real numbers, data science and chaos: How to fit any dataset with a single parameter"

https://arxiv.org/abs/1904.12320


The Hyperdimension Neptunia games have character graphics that breathe and move. Looking at them, I thought they were vectors, but when I looked into the resource packs I was astonished to find no whole-character CGs, just bitmap eyes, mouths, shoulders, and other parts that the system transforms and moves around with the 3-D scaler.

Even to draw 2-d graphics the 3-d technology works so much better.


I think that's just an artifact of the fact that not nearly as much effort has been expended on 2D graphics as 3D. In fact, I've become convinced through my Pathfinder work that the GPU is just as good as the CPU for 2D vector graphics. It's just that our 2D rendering stacks are way behind.

(As an aside, I think this state of affairs is unfortunate, given how much money there is in mobile apps and browsers and the like. The vast majority of the research goes to 3D, and 3D is flashier and prettier work in a sense, but the amount of time we spend reading text on a screen generally dwarfs the amount of time people spend consuming 3D CG entertainment...)


What 2d functionality do you think is being left on the table? The way I see it, most 3d raster graphic techniques boil down to baking data in a way that's easy to pipeline. It doesn't really have anything to do with 3d vs 2d. Verts and textures are the way they are because it's easy to linearly interpolate them in parallel.

Do you mean vector vs raster?


If the GPU is only "just as good" why bother with GPU rendering at all? Free up the CPU for other work?


Yes, definitely. We did that in SimAirport, basically rendering everything as if it were full 3D and using none of the engine-provided "2D features" (which largely amount to 'more work on the CPU').

Finding fun mechanics that add to CPU load is awfully easy. ;)


It's faster than the CPU if done properly.


On the Playstation 2, which didn't have accelerated perspective correct texture mapping, Parappa the Rapper embraced and ran with the stylized 2D-in-3D look!

https://www.youtube.com/watch?v=rYKbmx9ykWM&t=2m29s


Think you mean the Playstation 1; the second definitely had perspective-correct texture mapping.


Yes, you're right! It looks like they've pimped out the graphics for the later releases of Parappa.

One of my favorite PS1 games whose designers rightfully said "fuck it, we're just going to draw perspective-incorrect textures" as well as "fuck it, we're just going to simulate newtonian-incorrect physics" in the name of excellent gameplay, is Twisted Metal 2!

Check out the deliciously distorted faces of the buildings along the Boulevards of Paris:

https://www.youtube.com/watch?v=d6mB8zlx88g&t=12m38s

Peter Macon, the guy who voiced Axel and Minion in Twisted Metal, now plays Lieutenant Commander Bortus on The Orville!

https://orville.fandom.com/wiki/Peter_Macon


TM2 felt so distinctly anti-simulationy that I still have problems with casual ‘racing’ games in which the physics are truer.

‘Crash Drive 2’ is the closest online game that I've found so far, but its imbalanced upgrade mechanics botch the challenge after a few hours.


Nice to see a nod to Inigo Quilez and the crazy SDF world. I still get a thrill from the fact you can specify an infinite grid of spheres with a single line of SDF code. Seems magical - as does the "almost free" CSG, morphing and chamfering you get from SDF rendering.
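For anyone who hasn't seen the trick, that one line is domain repetition: fold space into a repeating cell with mod, then evaluate a single sphere SDF in the folded space. A loose Python transcription of the usual GLSL idiom (cell size and radius are made up):

    import math

    def sphere_sdf(p, r):
        # Signed distance from point p to a sphere of radius r at the origin.
        return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r

    def repeated_spheres_sdf(p, cell=2.0, r=0.5):
        # Infinite 3D grid of spheres: wrap each coordinate into one cell,
        # re-center it, and measure distance to a single sphere.
        q = tuple((c % cell) - 0.5 * cell for c in p)
        return sphere_sdf(q, r)

    d = repeated_spheres_sdf((10.3, -7.8, 123.4))  # distance to the nearest sphere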


iq was single-handedly responsible for most of my early interest in computing.


But why "of incredible infamy"? Did he do something illegal, bad, or contemptible?


It's a bit tongue in cheek and perhaps I should edit it out before someone misinterprets it. To a lot of people, iq is a mad scientist, in the best way. The rediscovery of SDF and invention of raymarching almost immediately gave every single graphics developer pause. It's such a different way to think about graphics and modelling and art and helped open people's eyes to alternate approaches and representations.


IMO, yes, it shouldn't be there. I've been following the guy for some years and was really surprised to see him called "infamous", so I went and googled for a while to see if he ever did anything "wrong".


I adjusted the wording. Incredibly sorry about that, I definitely don't want to hurt the man's reputation.


> Microsoft, while working on Windows 95, wanted to create its own graphics API for developers, and a primary goal was making it compatible with existing printers so the graphics could be sent to printers as easily as a screen. This API was eventually released as GDI, a core component used by every engineer during Windows’s meteoric rise to popularity in the 90s.

AFAIK, GDI was part of Windows since the very first version released in 1985.


Yes, and its binary serialization format Windows Metafile (WMF) also predates Windows 95. Microsoft tried to push it as an alternative to PostScript for printing, with some success in low-end PC printers with thick drivers.

https://en.wikipedia.org/wiki/Windows_Metafile#Implementatio...


That one slipped by in editing. Fixed, thank you.


Is it just me, or is anyone else confused by the conflation of 2D with vector and 3D with raster graphics in this article?


Bless you people, for revealing this concise and informative gem.


The technical aspects of this article are super interesting, but so are the parts about Adobe's early business model:

> Adobe’s original business model was to sell this font technology to people that make printers, and sell special recreations of fonts, with added hints, to publishers, which is why Adobe, to this day, sells their versions of Times and Futura. Adobe can do this, by the way, because fonts, or, more formally, “typefaces”, are one of five things explicitly excluded by US Copyright Law, since they were originally designated as “too plain or utilitarian to be creative works”. What is sold and copyrighted instead is the digital program that reproduces the font on the screen.

An interesting way to look at this technology from a startup perspective. Out of the new industry of digital printing, they found a business model built on the loophole that typefaces themselves are uncopyrightable, while the font programs that reproduce them can be sold and monetized.


I don’t agree that 2D is harder than 3D. Regardless of how you count, a page of text is much simpler to render than a frame in a modern videogame. Even if you don’t cache TTF glyph textures.

I think it’s just legacy. For many decades before modern GPUs, 2D graphics was done in certain ways. People accumulated experience, wrote books, taught stuff to students in university.

Then modern GPUs happened, but initially only gamers and 3D graphics professionals had them, so 2D programmers couldn’t care less, their users just didn’t have hardware for GPU-based rendering. I think it was only around 2008-10 when non-game developers could finally use GPUs without sacrificing their user base, and even at that time it meant “DX9 GPU” so no geometry or compute shaders.

I think people will adjust after a while. Professors teaching 2D graphics will stop talking about moveto/lineto and start talking about submitting triangles and writing shaders.


> I think people will adjust after a while. Professors teaching 2D graphics will stop talking about moveto/lineto and start talking about submitting triangles and writing shaders.

The 2D graphics system of the future will of course submit triangles and make use of shaders, but it is hard to imagine that we will be directly working with triangles. So far we have taken the subtly broken scenery of raw 3D triangles for granted, but have not tolerated jaggy 2D strokes. To resolve this dichotomy, either 3D should go continuous (e.g. implicit surfaces) or 2D should go discrete as you suggest; the latter seems less likely.


> 2D graphic system in the future

“The future is already here, it's just not evenly distributed” © William Gibson.

It already works this way on modern Windows: WPF, UWP, and Direct2D already use GPUs.

> it is hard to imagine that we will be directly working with triangles

Most people doing 2D graphics on CPU don’t moveto/lineto either, they use higher-level libraries for that. To be good at it, you have to understand, to some extent, a couple of underlying abstraction layers. That’s why universities are teaching these basics. The basics for GPU-rendered 2D graphics are totally different, and until very recently only 3D graphics professionals cared about them.

> either 3D should go continuous (e.g. implicit surfaces)

You don’t necessarily need implicit surfaces. If you have a GPU ≤ 10 years old, you can convert 2D vector stuff to triangles on the fly.

Here’s a simple example that renders a polyline: https://github.com/Const-me/vis_avs_dx/blob/master/avs_dx/Dx... It cuts a few corners e.g. doesn’t handle self-intersecting polylines particularly well if you have transparency, but it’s all fixable, that was an unpaid hobby project.

Often you can do the same with Bezier/Nurbs, see tessellation stages in D3D11 docs.
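For readers who don't want to dig through the repo, here's the basic idea as a rough, generic Python sketch (not the linked code): offset each segment by half the stroke width along its normal and emit two triangles per segment. Like the original, it ignores joins, caps, and self-intersection:

    import math

    def polyline_to_triangles(points, width):
        # Naive polyline stroking: two triangles per segment, offset by the
        # segment normal. No joins, no caps, no self-intersection handling.
        half = width / 2.0
        tris = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dx, dy = x1 - x0, y1 - y0
            length = math.hypot(dx, dy) or 1.0
            nx, ny = -dy / length * half, dx / length * half  # scaled normal
            a, b = (x0 + nx, y0 + ny), (x0 - nx, y0 - ny)
            c, d = (x1 + nx, y1 + ny), (x1 - nx, y1 - ny)
            tris += [(a, b, c), (b, d, c)]
        return tris

    tris = polyline_to_triangles([(0, 0), (10, 0), (10, 10)], width=2.0)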


> It already works this way on modern Windows: WPF, UWP, and Direct2D already use GPUs.

Oh, you are right. Even DirectWrite makes use of GPUs (and not just texture atlas) when you use a correct API.

> [...] That’s why universities are teaching these basics. [...]

I agree if you meant so. It is unfortunate that most university courses are not concerned with fundamentals or only concerned with outdated fundamentals.


> most university courses are not concerned with fundamentals or only concerned with outdated fundamentals.

I don’t think they’re that bad. For things that are relatively stable, like compilers, file systems, OS kernels, or relational databases, universities are doing an OK job.

It’s just that GPUs have been evolving too fast for the last decade. They drive evolution of everything that runs on them. GPUs have become ridiculously powerful (in my desktop PC, the GPU is 25 times faster than the CPU doing math on floating-point numbers), so more and more stuff does run on them. Initially they only ran 3D game engines, then 2D graphics, now also a lot of high-performance computing. When stuff gets ported from CPU to GPU it changes a lot due to the very different architecture.


I wrote a full OpenVG1.x on top of OpenGL|Es1.x (fixed pipeline) engine: https://www.amanithvg.com

"Curves to straight line segments" and "polygon to triangles" was done in CPU, everything else in GPU. It was the year 2005 or so.


That's funny. I was just wondering whether Dreams was based on implicit surfaces, with their ability to scale their models in this good-looking way, and then the article goes and mentions it as an example.

Edit: I really enjoyed the article and its information, but I think I missed the part where it answered the question in the title.


Is it true that plotters were the first graphical devices to be controlled by computers? Before oscilloscopes?

Doesn't really make a difference to the point in the article. Both are "vector" devices. But I'm curious about the history.


The first computer display seems to have been on Whirlwind I, in 1949. That was more of a demo than a useful device.

Computer-controlled plotters apparently date back to 1953, but the online source for that is iffy. The Atlas Missile Guidance Computer had some, which would be around 1957. Vannevar Bush's differential analyzer had plotters, but that was mechanical, not electronic.

Sketchpad, on the MIT TX-2, was the beginning of what we'd recognize today as interactive graphics.

There were many early military systems. The USAF really wanted graphic displays. Preferably big ones. SAGE, the Iconorama, and other triumphs of money over the problem followed.

By the late 1960s, most of the typesetting companies had some kind of phototypesetting machine. Fonts were typically on glass disks and printed onto photographic paper by firing a strobe lamp at the right time. The next stage was using a stroke-writing CRT to draw the characters. Usually the CRTs for these were about one line high but a page wide. Harris-Intertype had that approach working before 1970, long before Metafont.

The font problem in that era was "given this outline, fill it in with minimal stroke movement". Strokes had a width, so you had the usual problem of machine tools - all inside corners are rounded. If the beam is too small, it takes too many strokes to do the fill, and if it's too big, there's too much corner rounding. I knew of a program called Procrustes which did that job for the Harris-Intertype Phototronic around 1970.


Why do you say the scope on the WWI wasn't a useful device? They even had a camera watching the scope to take photos (controlled by the computer) at interesting times.


> Is it true that plotters were the first graphical device to be controlled by computers?

Do you count differential analyzers as computers?

https://en.wikipedia.org/wiki/Differential_analyser

If so, then yes.

IIRC, some of the input devices to such machines were x/y "pointer" tables - imagine a flatbed plotter, where you moved the part where the pen would go, in order to input cartesian coordinates (or a continuous value I suppose, composed of x/y values?)...

Prior to oscilloscopes (but after differential analysers) - the Williams Tube was used as a memory system. It could be considered a form of bitmap display, except you'd have to only use it in a dark room (and no camera flashes, dammit!):

https://en.wikipedia.org/wiki/Williams_tube


Define computer?

Before plotters or oscilloscopes, there were mechanical harmonic analyzers, which could perform Fourier transforms and the reverse operation and print the result on a piece of paper. That's a vector device too, albeit a mechanical one, not digital or even electronic.

https://www.youtube.com/playlist?list=PL0INsTTU1k2UYO9Mck-i5...


That's incredible!


I'd say looms, which would be raster devices.


I'd say a loom deals with vectors, it just only handles thin vectors in two directions 90 degrees rotated from each other.


Hah, good call. It's like a mechanical slow-scan film recorder.


It's probably a close call. AFAIK analog oscilloscopes were used in X-Y mode as a form of vector monitors in about the same time (end of 50s), but plotters are probably a few years older.


Thank you for this extensive historical and technical overview. It's really useful to know where we are and how we got here. It's easy to underestimate how widespread the consequences of these fundamental choices are.


I'd just like to say, I'm generally hard to please on software, but Blend2D is good stuff, those guys are s-m-r-t.


When I create a simple curved object in Blender or OpenSCAD, I see a mesh of triangles. When I create a simple curved object in Fusion 360, I see a smooth object with no apparent triangles. Is Fusion 360 rendering a triangle mesh or is it doing something else?


CAD tools can use special adaptive algorithms to triangulate NURBS curves into meshes for rendering, so there is still a mesh there as far as I know. Beyond this, there is also the property of how the material responds to light. You can have multiple triangles appear smooth because they respond to lighting in a way that does not produce seams. "Smooth" rendering will use interpolated surface normals, and "flat" rendering uses per-face normals.
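As a rough illustration of the flat vs. smooth distinction (plain Python; the mesh layout and function names are made up, not any particular engine's API):

    def face_normal(a, b, c):
        # Flat shading: one normal per triangle, from the cross product of two edges.
        ux, uy, uz = (b[i] - a[i] for i in range(3))
        vx, vy, vz = (c[i] - a[i] for i in range(3))
        nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        length = (nx * nx + ny * ny + nz * nz) ** 0.5 or 1.0
        return (nx / length, ny / length, nz / length)

    def vertex_normals(vertices, triangles):
        # Smooth shading: average the normals of every face touching a vertex,
        # so interpolated lighting hides the facet edges.
        acc = [[0.0, 0.0, 0.0] for _ in vertices]
        for i, j, k in triangles:
            n = face_normal(vertices[i], vertices[j], vertices[k])
            for idx in (i, j, k):
                for axis in range(3):
                    acc[idx][axis] += n[axis]
        result = []
        for n in acc:
            length = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5 or 1.0
            result.append((n[0] / length, n[1] / length, n[2] / length))
        return result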


Fusion 360 (or FreeCAD) renders a triangle mesh on-demand at display time from the underlying model. But that underlying model is based on solids with perfect platonic shapes like arcs, spheres etc. In Blender and OpenSCAD the underlying model is also based on triangle meshes, so they cannot do the same.


Are you sure about OpenSCAD?

I thought it used constructive solid geometry?


Kinda, but I primarily have a user perspective. For instance, even though most of the OpenSCAD language operators are classic CSG ones, there are variables to specify the number of facets on circles etc (which does not make much sense in a pure solids world). This can be controlled globally using the $fa/$fs/$fn variables (assuming a well-behaved model). Changing these variables seems to require the entire geometry to be rebuilt. Rebuilding the model itself is slow, and it gets slower the more facets you have - indicating that this is not just a 'display' feature. I believe this is because the underlying geometry kernel, CGAL, operates on meshes.

Note that OpenSCAD can also export a CSG datastructure of a model. The OpenSCAD module in FreeCAD uses this to generate B-rep based solid models using their OpenCascade kernel.

This raises the point of difference between OpenSCAD and Blender. In OpenSCAD the mesh is mostly a secondary thing, an artifact of how the internals are implemented - the "primary" model the user modifies is the code itself. Whereas in Blender, the mesh is the primary model being modified by the user. Although the distinction is a bit blurred, as Blender has some support for NURBS these days. And one can have non-destructive operators ("modifiers") such as subdivide/array/mirror etc.


In Blender you can simply use smooth shading to achieve this effect. Flat shading simply shades each face while smooth shading interpolates between vertices. Try it on a cylinder and you will see the effect fall over on the sharp edge. That’s why people use the edge split modifier to get around this and keep poly count low.

Of course you can also define surfaces using Bézier curves in Blender.


As far as I can tell, it's rendering a triangle mesh using a shader in order to create the illusion that there are no triangles.

There are algorithms for rendering smooth surfaces, but for a series of reasons they tend not to be used for real-time rendering.


Thank you for writing this up. The computing world needs more of these history articles.


Because people keep turning them into HTML. Leave the vectors alone, people! CSS is terrible.


Revision of history, wrong, and insultingly so. This post is a rewrite of serious graphics history. Read Foley and van Dam; forget this tripe.


Parts of this post were based on the CACM'84 interview with van Dam. I cannot cover every single small development along the way. I had a paragraph on pre-Porter Duff 2D image models like QuickDraw, X11, and yes, Foley/van Dam's GKS but after a lot of hesitation, cut it for brevity. There are too many contributions to list, hopefully I can chronicle more in a future post.

The story I wanted to tell was to cover the large, broad developments in the industry that caused our solidification of the respective image models in both 2D and 3D. Perhaps I omitted a bit too much, and I'm certainly framing it with modern eyes, but I do not believe any of this to be revisionist.


Ah well, it's a complicated story. It did make me muse about the scads of graphics file formats back in the day plus the coexisting ecosystems. Everyone has their own slice of the picture.


This is a very interesting article, but it does have some errors. I'm no expert in the area, but here are some that I spotted.

> The first graphics devices to interact with computers were “plotters”,

The first graphics devices used as such on digital computers were the visible debugging CRTs driven from the Williams tube control signals, and of course "ASCII" art (before ASCII) on printers. George Dyson's book on the Johnniac has a lot of images of one-dimensional cellular automata from the dawn of the von Neumann machine. The TX-0 had a point-plotting CRT and also, I believe, the first pen plotter.

> The only major problem with PostScript was its Turing-completeness — viewing page 86 of a document means first running the script for pages 1-85

As Don pointed out, that isn't the major problem with it being Turing-complete. Aside from the frequent security holes that have occurred, I recall a four-line PostScript .signature in the 1990s that computed an IFS fractal for 60 seconds and then printed it; it would plot more or fewer points according to whether your printer's PostScript was faster or slower, so it might not print out the same way twice even on the same printer. Writing a PostScript document that renders more or fewer pages is just as easy. And the common kinds of manipulations you'd like to do with printable documents are unnecessarily difficult with PostScript because of the Turing-completeness — if you implement them by assuming documents that claim to respect DSC will do so, your tools will break at random on real-world documents whose DSC compliance is imperfect. Consider the problem of concatenating two PostScript documents (pscat), each of which might have redefined "moveto" in its prologue, or defined "M" as a shortcut version of "moveto" with similar but subtly different meanings; or, worse, interspersing their pages, or displaying reduced images of their pages (psnup, psbook, mpage) on the same page. In theory you could hack together some kind of sandbox that makes this stuff work reliably, but in practice the solution is to distill the documents to PDF using GhostScript.

(For an overview of DSC, see the excellent Brian Reid paper Jasper and Don posted links to: https://tech-insider.org/unix/research/1985/0301.html)

Even my favorite PostScript test document (from Network PostScript Printer Installation Illustrated Through Interpretive Dance) prints a rectangle that is quite rarely the same size on different printers; it's even different sizes on the same printer using different paper sizes:

    %!
    clippath stroke showpage

> The only major problem with PostScript was its Turing-completeness… And that could be slow. Adobe caught wind of this user complaint, and decided to create a new document format that didn’t have these restrictions

This is not really true. The original motivation for PostScript Distiller (before PDF stopped being a subset of PostScript) was to make a way to get documents onto computers that were too wimpy to run a full PostScript interpreter, like IBM PCs and Macintoshes. The problem was not, as I understand it, primarily that it was slow, and certainly not that it was Turing-complete (those machines ran lots of Turing-complete languages, after all) but that it required a lot of code and a lot of memory.

> It threw out the programming language — but the graphics technology stayed the same.

This was only true at the very beginning. PDF (and SVG) has evolved quite a bit over the years. Notably, PDF and SVG support alpha-blending and gradients, while PostScript didn't, last I checked.

> Bezier curves do exist in 3D, where they are known as B-splines, and they are computable, but they have the drawback that they limit the ways you can connect your mesh together.

This is almost entirely wrong. B-splines as such are one-dimensional, or two-dimensional if you consider the abscissa a dimension, though you can of course construct parametric curves of any dimension each of whose coordinates is a B-spline in that dimension. They are not, in general, computable (and neither are NURBS surfaces, which I think are what is meant), but all of Bézier curves, B-splines, and NURBS surfaces are easy to compute particular points on, if their parameters are computable. B-spline surfaces in 3-D don't, as far as I know, have anything particular to do with Bézier surfaces, except in the sense that Catmull–Clark is a Bézier-like algorithm that converges to a cubic B-spline surface, and of course B-splines (in the sense of basis splines) and the Bernstein polynomials used by Béziers are just alternative bases spanning the same space of polynomials, so you can freely convert a parametric polynomial surface of a given degree between the two representations. Nor do any of Bézier curves, NURBS surfaces, or B-splines impose constraints on surface topology.

> we call approaches like Bezier curves and Hobby splines implicit curves, because they are specified as a mathematical function which generates the curve

This is almost entirely wrong; Bézier curves and Hobby splines are parametric, not implicit, and "specified as a mathematical function which generates the curve" is not what "implicit curve" means. The truth is close to the opposite: an explicit function is a mathematical function which generates the curve, while an implicit function just tells you which side of it you're on.

> The 3D equivalent of an implicit curve is an implicit surface, made up of basic geometric primitives like spheres, cylinders and boxes.

This is very much not what an implicit surface is. The author has confused CSG ("constructive solid geometry"), which is making things up of basic geometric primitives like spheres, cylinders, and boxes, with implicit surfaces, which are almost, but not quite, completely unlike CSG.

An implicit surface is the solution set of an equation in three unknowns. If you want a sphere, that's easy: (x - 17)² + (y - 22)² + (z - 84)² = 39, for any values of 17, 22, 84, and 39. But a box? That's gonna be a bit rough: (x² - 1)(y² - 1)(z² - 1) = 0 contains a box, but its sides extend to infinity. It's definitely feasible to hack the equation to clip them, but it's not what implicit surfaces are good at, and I think some of the obvious approaches make life a lot harder for solvers.

A much more typical example of an implicit surface, in my book, is Jim Blinn's metaballs: something like 1/((x - 17)² + (y - 22)² + (z - 84)²) + 1/((x - 17)² + (y - 28)² + (z - 84)²) = 39, which gives you a blobby barbell-looking thing. Although you might think that it should give you two spheres, actually no patch of its surface is a patch of the surface of a sphere, because the proximity distorts both spheres into non-sphericity.

Or, consider a paraboloid, the shape of a satellite dish or a Newtonian telescope mirror: x² + y² - z = 0 (though that particular example is easy to solve as an explicit heightfield, so maybe it's not the best example of implicit modeling.) That's gonna be a real pain to model with spheres and cylinders. You're never gonna get there if you're trying to get your telescope mirror to focus a clean image.

> 2D graphics started and maintained forward momentum around these implicit curves, by near necessity in their use in modelling human letterforms and glyphs. The hardware and software to compute these paths in real-time was expensive

This is wrong; although some recent systems like Dan Amelang's Gezira do in fact compute paths from letterforms for every frame, none of METAFONT, GDI, and X11 do this, normally. Instead, all of them rasterize the letterforms early, then use the pre-rasterized letterforms over and over. PostScript is in kind of a weird gray area, but I'm reasonably sure that typical PostScript implementations do that most of the time as well, although you can rotate text and blow it up to huge sizes, which necessarily requires re-rasterization at some point, and of course, ever since Type 2, you have "charpath".

> Surfaces like Catmull-Clark and NURBS allow for arbitrarily connected meshes to empower artists, but this can lead to polynomials greater than the fourth degree, which tend to have no closed-form solution

This is not, as far as I know, wrong, but is misleading; if you are computing quartic parametric surface patches, you are not going to do it by using Ferrari's closed-form solution to quartic equations. You are going to do it by subdividing triangles, just as you do for higher-order equations, because tabulating the (x,y,z) values for a lattice of (u,v) values is pretty simple actually.
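As a rough sketch of how simple that tabulation is (plain Python, a bicubic Bezier patch given a 4x4 grid of 3-D control points; the grid layout and names are illustrative):

    def bezier_curve_3d(p, t):
        # Evaluate a cubic Bezier through four 3-D control points at t.
        u = 1.0 - t
        return tuple(u**3 * p[0][i] + 3 * u**2 * t * p[1][i]
                     + 3 * u * t**2 * p[2][i] + t**3 * p[3][i] for i in range(3))

    def patch_point(grid, u, v):
        # Tensor-product evaluation: each row of the 4x4 grid at u, then the
        # resulting four points at v.
        column = [bezier_curve_3d(row, u) for row in grid]
        return bezier_curve_3d(column, v)

    def tabulate(grid, n=16):
        # (x, y, z) values on an (n+1) x (n+1) lattice of (u, v) parameters.
        return [[patch_point(grid, i / n, j / n) for j in range(n + 1)]
                for i in range(n + 1)]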

> The difficulty, of course, is that only a legitimate genius like Quílez can create Surfer Boy. There’s no existing tooling for signed-distance field geometry, it’s all code

This is not even wrong; it's incoherent. ImplicitCAD is "all code", sure, but does that make it "no existing tooling"? And, not to bag on IQ here, who is legitimately super awesome, but SDF raymarching is not actually that hard to implement. You don't have to be a genius. You probably shouldn't take the word of someone who confuses CSG and implicit modeling about it, either.

Also, it seems a bit amiss to bring up IQ and Shadertoy in the same paragraph, but not mention that IQ wrote Shadertoy! (Though of course it's a team effort now.)

Anyway, I thought it was a really interesting article, and it introduced me to a bunch of research I hadn't seen. So it was worth reading. Just be careful about the errors!


Apples to oranges...

The 3D equivalent of vector graphics would be Boundary Representation [1], which is multiple orders of magnitude harder than 2D vector graphics.

[1] https://en.m.wikipedia.org/wiki/Boundary_representation


Absolutely. One could also come up with a 2D imaging model that does not rely on curves and is friendly to rasterization. My goal is to illuminate why when we say "2D vector graphics" we typically mean apples, and why when we say "3D" we mean oranges. Plenty of other imaging models exist for both 2D and 3D, but for better or worse these are the ones that stuck. Part of my research is investigating alternative imaging models that don't have the same influences, like the boundary representation, and seeing what falls out of them.


>My goal is to illuminate why when we say "2D vector graphics" we typically mean apples, and why when we say "3D" we mean oranges.

This premise in the article was pretty confusing as I do not think this is universal.

When I think of vector graphics I think of fonts (2d) and star wars cgi (3d vector death stars).

When talking about rasterizers, the modern pipeline of rendering 3d meshes to a 2d back buffer comes to mind.

It took me some time to understand the purpose of this article, because I first had to understand that here 2D meant vector and 3D meant raster.


2D and 3D are both vector (as in, a mathematical description of the shape).

Typically when we say "raster graphics", we mean graphics already computed as a set of points, like a bitmap PNG file. But here 2D comes in as a set of curves, and 3D comes in as a set of triangle edges. Both are mathematical descriptions that have to be "rasterized", e.g. transformed from the edge representation to the raster one.


Any thoughts on voxels? Write-ups I've read on the concepts behind voxels have been pretty interesting. The idea of building up objects from grains of sand with material properties (instead of shaders, textures, bump maps, bounding boxes, etc.) is pretty interesting.


Volumetric representations are awesome in many ways since we can filter and evaluate them like pixels in 2D. It doesn't really help the curve solution, and for a lot of reasons we will use something like marching cubes to convert back to a rasterizable mesh, but they help solve a lot of problems.


> Apples to oranges...

That's kind of the article's point. It explains how it's an apples to oranges comparison. If that's obvious to you, you don't need to read it.


Yep. It's almost as if the person posting that hadn't finished reading the article.


If it's obvious to me that the headline's intent is to mislead for clickbait purposes, then I probably ain't gonna bother reading the article, sorry.


Sorry, I didn't realise you were the article's author.

I'm answering in good faith and with good will to you :)

Most of the article doesn't seem relevant to the headline, and as such I found it difficult to understand where the article was leading and why I should be interested to read on.

If the headline was something like "Parallel histories of 2D and 3D vector graphics" then I would have understood what I was committing to read and been more satisfied with the overall experience.

Obviously, the article has found an audience so perhaps I am wrong and you can safely ignore this thought ;-)


Please let me know how my title was clickbait-y, or a better title you might prefer. It wasn't my intention to mislead, and I am open to suggestions for better titles.


I am not sure if I understand the objection. Is that that you don't agree with the use of "2D graphics" and "3D graphics" to mean what they mean in the article?



