
Why are 2D vector graphics so much harder than 3D? - Jasper_
https://blog.mecheye.net/2019/05/why-is-2d-graphics-is-harder-than-3d-graphics/
======
DonHopkins
>This new printer was based on a small, stack-based Turing-complete language
called Forth, and they named it… the Interpress!

Xerox's laser printers were never "based on Forth". JAM and Interpress were
only conceptually related to Forth, in that they used postfix semantics.

That's like saying that because Java byte code uses a stack, your Android
phone is based on Forth. It's not. Stack based architectures (like the
Burroughs B5500 and B6500, developed in 1962, which directly inspired Design
System => JAM => Interpress => PostScript) have been around a lot longer than
Forth (developed in 1970), and simply using a stack doesn't make a language
Forth.

Here's a much more accurate and detailed historical account of the history of
Interpress, written by Brian Reid:

[https://groups.google.com/forum/#!topic/fa.laser-lovers/H3us4h8S3Kk](https://groups.google.com/forum/#!topic/fa.laser-lovers/H3us4h8S3Kk)

Gaffney came to E&S from graduate school at the University of Illinois, where
he had used the Burroughs B5500 and B6500 computers. Their stack-oriented
architectures made a big impression on him. He combined the execution
semantics of the Burroughs machines with the evolving Evans and Sutherland
imaging models, to produce the Design System. Like all successful software
systems, the Design System slowly evolved as it was used, and many people
contributed to that evolution.

John Warnock joined Xerox PARC in 1978 to work for Chuck Geschke. There he
teamed up with Martin Newell in producing an interpreted graphics system
called JAM. "JAM" stands for "John And Martin". JAM had the same postfix
execution semantics as Gaffney's Design System, and was based on the Evans and
Sutherland imaging model, but augmented the E&S imaging model by providing a
much more extensive set of graphics primitives. Like the later versions of the
Design System, JAM was "token based" rather than "command line based", which
means that the JAM interpreter reads a stream of input tokens and processes
each token completely before moving to the next. Newell and Warnock
implemented JAM on various Xerox workstations; by 1981 JAM was available at
Stanford on the Xerox Alto computers, where I first saw it.

~~~
ttul
This is why I come to HN

~~~
yoodenvranx
This is why I usually check out the 2-3 top comments and then read the
article.

~~~
suhlig
Same here! I thought I was the only one with this strange habit.

------
DonHopkins
>The only major problem with PostScript was its Turing-completeness — viewing
page 86 of a document means first running the script for pages 1-85. And that
could be slow.

1) EPS (which the article just mentioned) is part of Adobe's Document
Structuring Conventions, which solved the problem of skipping and reordering
pages.

[https://www.adobe.com/content/dam/acom/en/devnet/actionscrip...](https://www.adobe.com/content/dam/acom/en/devnet/actionscript/articles/5001.DSC_Spec.pdf)

[https://www.complang.tuwien.ac.at/ulrich/PS/eps.html](https://www.complang.tuwien.ac.at/ulrich/PS/eps.html)

>Every DSC-compliant document is indicated by having the comment %!PS-Adobe
3.0 as the first line. This comment is a flag to indicate that the document is
compliant. You should never use this comment unless your document really is
DSC compliant. There are many other parts to proper DSC. A document which
follows the DSC can be manipulated in many ways. In particular, postprocessors
can shuffle the pages, print two or more pages on a side, and so on. The
printer drivers from some notable companies do not follow the DSC, and their
PostScript documents are, therefore, impossible to work with once they've been
generated.

2) Inefficiently skipping pages (or being slow) is NOT the main problem with
PostScript's Turing completeness!

[https://www.loc.gov/preservation/digital/formats/fdd/fdd0002...](https://www.loc.gov/preservation/digital/formats/fdd/fdd000246.shtml)

>Security Issues: In early 2017, Microsoft took steps to prevent exploits
based on Word's import "filter" for EPS. Based on the assessment that "EPS
files are a legacy format that has largely fallen out of favor in today’s
ecosystem", Microsoft disabled the import filter by default in its April 2017
security update for Office. According to the PPTools post "I can no longer
insert EPS graphics into PowerPoint", this update applied to Office 2010, 2013,
and 2016/365. Import of EPS files into Office 2007 has been disabled since 2015.
The stand-alone Office for Mac is not affected.

>A Jan 2017 post from The Register, We don't want to alarm you, but PostScript
makes your printer an attack vector, publicizes the vulnerability of networked
PostScript printers. The potential for damaging exploits based on
communication between computers and PostScript printers has been understood
for years, but was not a serious threat before PostScript printers were
networked and accessible over the Internet. See also PostScript page from
Hacking Printers.

[http://hacking-printers.net/wiki/index.php/PostScript](http://hacking-printers.net/wiki/index.php/PostScript)

~~~
DonHopkins
Brian Reid wrote about page independence, comparing Interpress' and
PostScript's different approaches. Adobe's later voluntary Document
Structuring Conventions actually used PostScript comments to make declarations
and delimit different parts of the file -- it wasn't actually a part of the
PostScript language, while Interpress defined pages as independent so they
couldn't possibly affect each other:

[https://groups.google.com/forum/#!topic/fa.laser-lovers/H3us4h8S3Kk](https://groups.google.com/forum/#!topic/fa.laser-lovers/H3us4h8S3Kk)

>By now you can probably see the fundamental philosophical difference between
PostScript and Interpress. Interpress takes the stance that the language
system must guarantee certain useful properties, while PostScript takes the
stance that the language system must provide the user with the means to
achieve those properties if he wants them. With very few exceptions, both
languages provide the same facilities, but in Interpress the protection
mechanisms are mandatory and in PostScript they are optional. Debates over the
relative merits of mandatory and optional protection systems have raged for
years not only in the programming language community but also among owners of
motorcycle helmets. While the Interpress language mandates a particular
organization, the PostScript language provides the tools (structuring
conventions and SAVE/RESTORE) to duplicate that organization exactly, with all
of the attendant benefits. However, the PostScript user need not employ those
tools.

------
gugagore
"Without getting too into the math weeds, at a high-level, we call approaches
like Bezier curves and Hobby splines implicit curves"

I don't think of Bézier curves as implicit in any way. There's an explicit
equation for the curve, as a parameter of "time" (0 at the beginning of the
curve, 1 at the end, increasing monotonically in between).

~~~
Jasper_
Yes. Technically they are parametric curves but you can turn any parametric
curve into an implicit one (the reverse not being true), so I chose to leave
out that small detail for narrative purposes.

~~~
romwell
>Technically they are parametric curves

It would make the already excellent article even better if you simply linked
to the relevant[1] Wikipedia page.

Similarly, for 3D, you really want parametric surfaces[2], which NURBS are.

For the readers:

The key difference: to plot an _implicit_ curve, you need to _solve an
equation_, which is a whole other can of worms. Solving equations is hard. For
equations of degree five or higher, there is no general formula for the
solutions!

To plot a _parametric_ curve, you just need to plug numbers into some
formulas. That's much easier!

The problem discussed in the article is that even if you have an _explicit
parametrization_ , answering questions like "Where does this curve/surface
intersect this ray?" involves equation-solving. And that's hard. As would be
something like "represent this curve/surface with segments/triangles of
uniform size".

TL;DR: _implicit_ is something for which you don't have a formula. Bezier and
NURBS have easy formulas; they are _parametric_.

[1][https://en.wikipedia.org/wiki/Parametric_equation#Explicit_e...](https://en.wikipedia.org/wiki/Parametric_equation#Explicit_equations)

[2][https://en.wikipedia.org/wiki/Parametric_surface](https://en.wikipedia.org/wiki/Parametric_surface)
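To make this concrete, here's a toy Python sketch (my own names and constants, nothing from the article): the parametric form hands you points directly, while the implicit form only tells you whether a candidate point satisfies the equation.

```python
import math

# Parametric: plug t into formulas, and points fall out directly.
def circle_parametric(t, r=1.0):
    """Point on a circle of radius r at parameter t in [0, 1]."""
    a = 2.0 * math.pi * t
    return (r * math.cos(a), r * math.sin(a))

# Implicit: a function whose zero set IS the curve. To *plot* it you must
# solve f(x, y) = 0, which is the hard direction.
def circle_implicit(x, y, r=1.0):
    return x * x + y * y - r * r

# A cubic Bezier is parametric too: just formulas to plug t into.
def bezier3(p0, p1, p2, p3, t):
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)
```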

~~~
Jasper_
The implicit form is what's used when rasterizing, because we have the
opposite problem: we don't care about the value of a specific bezier function
at a specific time t, we care about whether the pixel at location x,y is
inside the path created by a number of bezier segments (and it's even more
complicated than that, because of winding order). That can't be solved with
the parametric form alone.
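As a rough sketch (Python, with the bezier segments already flattened to line segments; the helper names are mine, this is not any real rasterizer), the per-pixel question looks like this:

```python
def winding_number(px, py, polygon):
    """Signed crossing count of a rightward ray from (px, py) against a
    closed polygon given as a list of (x, y) vertices."""
    wn = 0
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        # "is_left" cross product: which side of the directed edge is the point on?
        side = (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0)
        if y0 <= py < y1 and side > 0:    # upward crossing, point left of edge
            wn += 1
        elif y1 <= py < y0 and side < 0:  # downward crossing, point right of edge
            wn -= 1
    return wn

def inside_nonzero(px, py, polygon):
    """The non-zero winding rule: inside iff the winding number isn't 0."""
    return winding_number(px, py, polygon) != 0
```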

~~~
romwell
Indeed, but the terminology used in mathematics is very clear. So perhaps add
this snippet to the article?

Another thing possibly of value: where Bezier curves come from - automotive
design. The machines that were tracing these curves out needed to solve the
forward problem (where should the tool head be at time _t_?), and the
designers needed an easy way to change the shape of the curve. The Bezier
splines were (and still are) a great solution to that problem.

For that reason, it's easy to convert Bezier splines into G-Code for either
plotter - or, these days - 3D printer.
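A minimal sketch of that conversion in Python (uniform-in-t sampling for simplicity; real toolpath generators use adaptive, chord-error-bounded flattening, and the feed rate here is made up):

```python
def bezier_point(p0, p1, p2, p3, t):
    """Point on a cubic Bezier at parameter t: the "forward problem"."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def bezier_to_gcode(p0, p1, p2, p3, steps=16, feed=1200):
    """Flatten one cubic Bezier into G1 linear moves."""
    lines = ["G0 X%.3f Y%.3f" % p0]  # rapid move to the curve's start
    for i in range(1, steps + 1):
        x, y = bezier_point(p0, p1, p2, p3, i / steps)
        lines.append("G1 X%.3f Y%.3f F%d" % (x, y, feed))
    return lines
```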

Also, sampling the parametric form does give you a polygon (or a polygonal
mesh), which is easier to work with. But that's another can of worms.

------
PaulHoule
The Hyperdimension Neptunia games have character graphics that breathe and
move. Looking at them, I thought they were vectors, but when I looked into
the resource packs I was astonished to find no whole-character CG's, but
instead bitmap eyes and mouths and shoulders and other parts that the system
would transform and move around with the 3-d scaler.

Even to draw 2-d graphics the 3-d technology works so much better.

~~~
pcwalton
I think that's just an artifact of the fact that not nearly as much effort has
been expended on 2D graphics as 3D. In fact, I've become convinced through my
Pathfinder work that the GPU is just as good as the CPU for 2D vector
graphics. It's just that our 2D rendering stacks are way behind.

(As an aside, I think this state of affairs is unfortunate, given how much
money there is in mobile apps and browsers and the like. The vast majority of
the research goes to 3D, and 3D is flashier and prettier work in a sense, but
the amount of time we spend reading text on a screen generally dwarfs the
amount of time people spend consuming 3D CG entertainment...)

~~~
weberc2
If the GPU is only "just as good" why bother with GPU rendering at all? Free
up the CPU for other work?

~~~
adanto6840
Yes, definitely. We did that in SimAirport, basically rendering everything as
if it were full 3D and using none of the engine-provided "2D features" (which
largely amount to 'more work on the CPU').

Finding fun mechanics that add to CPU load is awfully easy. ;)

------
andybak
Nice to see a nod to Inigo Quilez and the crazy SDF world. I still get a
thrill from the fact you can specify an infinite grid of spheres with a single
line of SDF code. Seems magical - as does the "almost free" CSG, morphing and
chamfering you get from SDF rendering.
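For readers who haven't seen the trick: folding space with a modulo turns one sphere's distance function into an infinite grid of them. A Python sketch (shader code does the same with GLSL's mod and length; the constants are mine):

```python
import math

def sphere_grid_sdf(x, y, z, spacing=4.0, radius=1.0):
    """Signed distance to an infinite 3D grid of spheres."""
    # Fold all of space into one cell centered on a sphere, then measure
    # distance to the single sphere in that cell.
    qx = (x % spacing) - spacing / 2
    qy = (y % spacing) - spacing / 2
    qz = (z % spacing) - spacing / 2
    return math.sqrt(qx * qx + qy * qy + qz * qz) - radius
```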

~~~
djmips
But why 'of incredible infamy'? Did he do something illegal, bad or
contemptible?

~~~
Jasper_
It's a bit tongue in cheek and perhaps I should edit it out before someone
misinterprets it. To a lot of people, iq is a mad scientist, in the best way.
The rediscovery of SDF and invention of raymarching almost immediately gave
every single graphics developer pause. It's such a different way to think
about graphics and modelling and art and helped open people's eyes to
alternate approaches and representations.

~~~
slazaro
IMO, yes, it shouldn't be there. I've been following the guy for some years
and was really surprised to see him called "infamous", so I went and googled
for a while to see if he ever did anything "wrong".

~~~
Jasper_
I adjusted the wording. Incredibly sorry about that, I definitely don't want
to hurt the man's reputation.

------
Narishma
> Microsoft, while working on Windows 95, wanted to create its own graphics
> API for developers, and a primary goal was making it compatible with
> existing printers so the graphics could be sent to printers as easily as a
> screen. This API was eventually released as GDI, a core component used by
> every engineer during Windows’s meteoric rise to popularity in the 90s.

AFAIK, GDI was part of Windows since the very first version released in 1985.

~~~
codewiz
Yes, and its binary serialization format Windows Metafile (WMF) also predates
Windows 95. Microsoft tried to push it as an alternative to PostScript for
printing, with some success in low-end PC printers with thick drivers.

[https://en.wikipedia.org/wiki/Windows_Metafile#Implementatio...](https://en.wikipedia.org/wiki/Windows_Metafile#Implementations)

------
layoutIfNeeded
Apples to oranges...

The 3D equivalent of vector graphics would be Boundary Representation [1],
which is multiple orders of magnitude harder than 2D vector graphics.

[1]
[https://en.m.wikipedia.org/wiki/Boundary_representation](https://en.m.wikipedia.org/wiki/Boundary_representation)

~~~
Jasper_
Absolutely. One could also come up with a 2D imaging model that does not rely
on curves and is friendly to rasterization. My goal is to illuminate why when
we say "2D vector graphics" we typically mean apples, and why when we say "3D"
we mean oranges. Plenty of other imaging models exist for both 2D and 3D, but
for better or worse these are the ones that stuck. Part of my research is
investigating alternative imaging models that don't have the same influences,
like the boundary representation, and seeing what falls out of them.

~~~
jayd16
>My goal is to illuminate why when we say "2D vector graphics" we typically
mean apples, and why when we say "3D" we mean oranges.

This premise in the article was pretty confusing as I do not think this is
universal.

When I think of vector graphics I think of fonts (2d) and star wars cgi (3d
vector death stars).

When talking about rasterizers, the modern pipeline of rendering 3d meshes to
a 2d back buffer comes to mind.

It took me some time to understand the purpose of this article, because I
first had to understand that in this article 2d meant vector and 3d meant
raster.

~~~
Jasper_
2D and 3D are both vector (as in, a mathematical description of the shape).

Typically when we say "raster graphics", we mean graphics already computed as
a set of points, like a bitmap PNG file. But here 2D comes in as a set of
curves, and 3D comes in as a set of triangle edges. Both are mathematical
descriptions that have to be "rasterized", i.e. transformed from the edge
representation to the raster one.

------
jayd16
Is it just me or anyone else confused by the conflation of 2d to vector and 3d
to raster graphics in this article?

------
HocusLocus
Bless you people, for revealing this concise and informative gem.

------
awongh
The technical aspects of this article are super interesting, but also the
aspects that talk about adobe's early business model: _Adobe’s original
business model was to sell this font technology to people that make printers,
and sell special recreations of fonts, with added hints, to publishers, which
is why Adobe, to this day, sells their versions of Times and Futura. Adobe can
do this, by the way, because fonts, or, more formally, “typefaces”, are one of
five things explicitly excluded by US Copyright Law, since they were
originally designated as “too plain or utilitarian to be creative works”. What
is sold and copyrighted instead is the digital program that reproduces the
font on the screen._

An interesting way to look at this technology from a startup perspective. In
the new industry of digital printing, they found a business model based on the
loophole that typefaces are uncopyrightable, while the programs reproducing
them could be monetized.

------
Const-me
I don’t agree that 2D is harder than 3D. Regardless of how you count, a page
of text is much simpler to render than a frame of a modern videogame, even if
you don’t cache TTF glyph textures.

I think it’s just legacy. For many decades before modern GPUs, 2D graphics was
done in certain ways. People accumulated experience, wrote books, taught stuff
to students in university.

Then modern GPUs happened, but initially only gamers and 3D graphics
professionals had them, so 2D programmers couldn’t care less, their users just
didn’t have hardware for GPU-based rendering. I think it was only around
2008-10 when non-game developers could finally use GPUs without sacrificing
users base, and even at that time it meant “DX9 GPU” so no geometry or compute
shaders.

I think people will adjust after a while. Professors teaching 2D graphics will
stop talking about moveto/lineto and start talking about submitting triangles
and writing shaders.

~~~
lifthrasiir
> I think people will adjust after a while. Professors teaching 2D graphics
> will stop talking about moveto/lineto and start talking about submitting
> triangles and writing shaders.

The 2D graphics system of the future _will_ of course submit triangles and make
use of shaders, but it is hard to imagine that we will be directly working
with triangles. And so far we have taken the subtly broken scenery of raw 3D
triangles for granted, while never tolerating jaggy 2D strokes. To resolve this
dichotomy, either 3D should go continuous (e.g. implicit surfaces) or 2D should
go discrete as you suggest; the latter seems less likely.

~~~
Const-me
> 2D graphic system in the future

“The future is already here, it's just not evenly distributed” © William
Gibson.

It already works this way on modern Windows: WPF, UWP, and Direct2D already
use GPUs.

> it is hard to imagine that we will be directly working with triangles

Most people doing 2D graphics on CPU don’t moveto/lineto either; they use
higher-level libraries for that. To be good at it, you have to understand, to
some extent, a couple of underlying abstraction layers. That’s why universities
are teaching these basics. The basics for GPU-rendered 2D graphics are totally
different, and until very recently only 3D graphics professionals cared about
them.

> either 3D should go continuous (e.g. implicit surfaces)

You don’t necessarily need implicit surfaces. If you have a GPU ≤ 10 years
old, you can convert 2D vector stuff to triangles on the fly.

Here’s a simple example that renders a polyline: [https://github.com/Const-me/vis_avs_dx/blob/master/avs_dx/DxVisuals/Effects/Render/PolylineGS.hlsl](https://github.com/Const-me/vis_avs_dx/blob/master/avs_dx/DxVisuals/Effects/Render/PolylineGS.hlsl)
It cuts a few corners, e.g. it doesn’t handle self-intersecting polylines
particularly well if you have transparency, but it’s all fixable; that was an
unpaid hobby project.
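The gist of what that geometry shader does, sketched on the CPU (hypothetical Python helper, not the HLSL from the repo): each segment of the polyline is extruded along its normal into a quad, i.e. two triangles.

```python
import math

def polyline_to_triangles(points, width=1.0):
    """Turn an open polyline into a flat triangle list, one quad per segment."""
    tris = []
    h = width / 2.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        if length == 0.0:
            continue  # skip degenerate zero-length segments
        nx, ny = -dy / length * h, dx / length * h  # unit normal * half-width
        a = (x0 + nx, y0 + ny)
        b = (x0 - nx, y0 - ny)
        c = (x1 + nx, y1 + ny)
        d = (x1 - nx, y1 - ny)
        tris.append((a, b, c))
        tris.append((b, d, c))
    return tris
```

(No mitering at the joints, so thick polylines with sharp corners will show gaps; the real shader has to deal with that too.)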

Often you can do the same with Bezier/Nurbs, see tessellation stages in D3D11
docs.

~~~
lifthrasiir
> It already works this way on modern Windows: WPF, UWP, and Direct2D already
> use GPUs.

Oh, you are right. Even DirectWrite makes use of GPUs (and not just texture
atlas) when you use a correct API.

> [...] That’s why universities are teaching these basics. [...]

I agree if you meant so. It is unfortunate that most university courses are
not concerned with fundamentals or only concerned with _outdated_
fundamentals.

~~~
Const-me
> most university courses are not concerned with fundamentals or only
> concerned with outdated fundamentals.

I don’t think they’re _that_ bad. For things that are relatively stable, like
compilers, file systems, OS kernels, or relational databases, universities are
doing an OK job.

It’s just that GPUs have been evolving too fast for the last decade. They
drive the evolution of everything that runs on them. GPUs have become
ridiculously powerful (in my desktop PC, the GPU is 25 times faster than the
CPU at floating-point math), so more and more stuff runs on them. Initially
they only ran 3D game engines, then 2D graphics, and now a lot of high-
performance computing as well. When stuff gets ported from CPU to GPU it
changes a lot due to the very different architecture.

------
cyborgx7
That's funny. I was just wondering whether Dreams was based on implicit
surfaces, given its models' ability to scale in such a good-looking way, and
then the article goes and mentions it as an example.

Edit: I really enjoyed the article and its information, but I think I missed
the part where it answered the question in the title.

------
gugagore
Is it true that plotters were the first graphical device to be controlled by
computers? before oscilloscopes?

Doesn't really make a difference to the point in the article. Both are
"vector" devices. But I'm curious about the history.

~~~
_pmf_
I'd say looms, which would be raster devices.

~~~
WorldMaker
I'd say a loom deals with vectors, it just only handles thin vectors in two
directions 90 degrees rotated from each other.

------
AceJohnny2
Thank you for this extensive historical and technical overview. It's really
useful to know where we are and how we got here. It's easy to underestimate
how widespread the consequences of these fundamental choices are.

------
gct
I'd just like to say, I'm generally hard to please on software, but Blend2D is
good stuff, those guys are s-m-r-t.

------
lodi
Thank you for writing this up. The computing world needs more of these history
articles.

------
hathawsh
When I create a simple curved object in Blender or OpenSCAD, I see a mesh of
triangles. When I create a simple curved object in Fusion 360, I see a smooth
object with no apparent triangles. Is Fusion 360 rendering a triangle mesh or
is it doing something else?

~~~
jononor
Fusion 360 (or FreeCAD) _renders_ a triangle mesh on-demand at display time
from the underlying model. But that underlying model is based on solids with
perfect platonic shapes like arcs, spheres etc. In Blender and OpenSCAD the
underlying model is also based on triangle meshes, so they cannot do the same.

~~~
TomVDB
Are you sure about OpenSCAD?

I thought it used constructive solid geometry?

~~~
jononor
Kinda, but I primarily have a user's perspective. For instance, even though
most of the OpenSCAD language operators are classic CSG ones, there are
variables to specify the number of facets on circles etc. (which does not make
much sense in a pure solids world). This can be controlled globally using the
$fa/$fs/$fn variables (assuming a well-behaved model). Changing these
variables seems to require the entire geometry to be rebuilt. Rebuilding the
model itself is slow, and it gets slower the more facets you have, indicating
that this is not just a 'display' feature. I believe this is because the
underlying geometry kernel, CGAL, operates on meshes.

Note that OpenSCAD can also export a CSG datastructure of a model. FreeCAD's
OpenSCAD module uses this to generate B-rep based solid models using its
OpenCascade kernel.

This raises the point of difference between OpenSCAD and Blender. In OpenSCAD
the mesh is mostly a secondary thing, an artifact of how the internals are
implemented; the "primary" model the user modifies is the code itself.
Whereas in Blender, the mesh is the primary model being modified by the user.
Although the distinction is a bit blurred, as Blender has some support for
NURBS these days, and one can have non-destructive operators ("modifiers")
such as subdivide/array/mirror etc.

------
DiseasedBadger
Because people keep turning them into HTML. Leave the vectors alone, people!
CSS is terrible.

------
kragen
This is a very interesting article, but it does have some errors. I'm no
expert in the area, but here are some that I spotted.

> The first graphics devices to interact with computers were “plotters”,

The first graphics devices used as such on _digital_ computers were the
visible debugging CRTs driven from the Williams tube control signals, and of
course "ASCII" art (before ASCII) on printers. George Dyson's book on the
Johnniac has a lot of images of one-dimensional cellular automata from the
dawn of the von Neumann machine. The TX-0 had a point-plotting CRT and also, I
believe, the first pen plotter.

> The only major problem with PostScript was its Turing-completeness — viewing
> page 86 of a document means first running the script for pages 1-85

As Don pointed out, that isn't the major problem with it being Turing-
complete. Aside from the frequent security holes that have occurred, I recall
a four-line PostScript .signature in the 1990s that computed an IFS fractal
for 60 seconds and then printed it; it would plot more or fewer points
according to whether your printer's PostScript was faster or slower, so it
might not print out the same way twice even on the same printer. Writing a
PostScript document that renders more or fewer pages is just as easy. And the
common kinds of manipulations you'd like to do with printable documents are
unnecessarily difficult with PostScript because of the Turing-completeness —
if you implement them by assuming documents that claim to respect DSC will do
so, your tools will break at random on real-world documents whose DSC
compliance is imperfect. Consider the problem of concatenating two PostScript
documents (pscat), each of which might have redefined "moveto" in its
prologue, or defined "M" as a shortcut version of "moveto" with similar but
subtly different meanings; or, worse, interspersing their pages, or displaying
reduced images of their pages (psnup, psbook, mpage) on the same page. In
theory you could hack together some kind of sandbox that makes this stuff work
reliably, but in practice the solution is to distill the documents to PDF
using GhostScript.

(For an overview of DSC, see the excellent Brian Reid paper Jasper and Don
posted links to: [https://tech-insider.org/unix/research/1985/0301.html](https://tech-insider.org/unix/research/1985/0301.html))

Even my favorite PostScript test document (from Network PostScript Printer
Installation Illustrated Through Interpretive Dance) prints a rectangle that
is quite rarely the same size on different printers; it's even different sizes
on the same printer using different paper sizes:

    
    
    %!
    clippath stroke showpage

> The only major problem with PostScript was its Turing-completeness… And that
> could be slow. Adobe caught wind of this user complaint, and decided to
> create a new document format that didn’t have these restrictions

This is not really true. The original motivation for PostScript Distiller
(before PDF stopped being a subset of PostScript) was to make a way to get
documents onto computers that were too wimpy to run a full PostScript
interpreter, like IBM PCs and Macintoshes. The problem was not, as I
understand it, primarily that it was slow, and certainly not that it was
Turing-complete (those machines ran lots of Turing-complete languages, after
all) but that it required a lot of code and a lot of memory.

> It threw out the programming language — but the graphics technology stayed
> the same.

This was only true at the very beginning. PDF (and SVG) has evolved quite a
bit over the years. Notably, PDF and SVG support alpha-blending and gradients,
while PostScript didn't, last I checked.

> Bezier curves do exist in 3D, where they are known as B-splines, and they
> are computable, but they have the drawback that they limit the ways you can
> connect your mesh together.

This is almost entirely wrong. B-splines as such are one-dimensional, or two-
dimensional if you consider the abscissa a dimension, though you can of course
construct parametric curves of any dimension each of whose coordinates is a
B-spline in that dimension. They are not, in general, computable (and neither
are NURBS surfaces, which I think are what is meant), but all of Bézier
curves, B-splines, and NURBS surfaces are easy to compute particular points
on, if their parameters are computable. B-spline surfaces in 3-D don't, as far
as I know, have anything particular to do with Bézier surfaces, except in the
sense that Catmull–Clark is a Bézier-like algorithm that converges to a cubic
B-spline surface, and of course B-splines (in the sense of basis splines) and
the Bernstein polynomials used by Béziers are just alternative bases spanning
the same space of polynomials, so you can freely convert a parametric
polynomial surface of a given degree between the two representations. Nor do
any of Bézier curves, NURBS surfaces, or B-splines impose constraints on
surface topology.

> we call approaches like Bezier curves and Hobby splines implicit curves,
> because they are specified as a mathematical function which generates the
> curve

This is almost entirely wrong; Bézier curves and Hobby splines are parametric,
not implicit, and "specified as a mathematical function which generates the
curve" is not what "implicit curve" means. The truth is close to the opposite:
an explicit function is a mathematical function which generates the curve,
while an implicit function just tells you which side of it you're on.

> The 3D equivalent of an implicit curve is an implicit surface, made up of
> basic geometric primitives like spheres, cylinders and boxes.

This is very much not what an implicit surface is. The author has confused CSG
("constructive solid geometry"), which is making things up of basic geometric
primitives like spheres, cylinders, and boxes, with implicit surfaces, which
are almost, but not quite, completely unlike CSG.

An implicit surface is the solution set of an equation in three unknowns. If
you want a sphere, that's easy: (x - 17)² + (y - 22)² + (z - 84)² = 39, for
any values of 17, 22, 84, and 39. But a box? That's gonna be a bit rough: (x²
- 1)(y² - 1)(z² - 1) = 0 _contains_ a box, but its sides extend to infinity.
It's definitely feasible to hack the equation to clip them, but it's not what
implicit surfaces are good at, and I think some of the obvious approaches make
life a lot harder for solvers.

A much more typical example of an implicit surface, in my book, is Jim Blinn's
metaballs: something like 1/((x - 17)² + (y - 22)² + (z - 84)²) + 1/((x - 17)²
+ (y - 28)² + (z - 84)²) = 1/39, where each ball contributes a falloff field
and the surface is a level set of the summed field. That gives you a blobby
barbell-looking thing. Although you might think that it should give you two
spheres, actually no patch of its surface is a patch of the surface of a
sphere, because the proximity distorts both spheres into non-sphericity.
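Numerically, such a field is just a sum of per-ball falloffs, and the surface is a level set of that sum. A Python sketch (my own inverse-square falloff and threshold, not Blinn's exact formulation):

```python
def metaball_field(p, centers):
    """Sum of inverse-square falloffs; the surface is {p : field(p) = T}."""
    total = 0.0
    for c in centers:
        d2 = sum((a - b) ** 2 for a, b in zip(p, c))
        # Contribution blows up near a center, fades with squared distance.
        total += 1.0 / d2 if d2 > 0.0 else float("inf")
    return total

# Inside where field(p) > threshold, outside where field(p) < threshold;
# between two nearby balls the contributions add, producing the bulge.
```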

Or, consider a paraboloid, the shape of a satellite dish or a Newtonian
telescope mirror: x² + y² - z = 0 (though that particular example is easy to
solve as an _explicit_ heightfield, so maybe it's not the best example of
_implicit_ modeling.) That's gonna be a real pain to model with spheres and
cylinders. You're never gonna get there if you're trying to get your telescope
mirror to focus a clean image.

> 2D graphics started and maintained forward momentum around these implicit
> curves, by near necessity in their use in modelling human letterforms and
> glyphs. The hardware and software to compute these paths in real-time was
> expensive

This is wrong; although some recent systems like Dan Amelang's Gezira do in
fact compute paths from letterforms for every frame, none of METAFONT, GDI,
and X11 do this, normally. Instead, all of them rasterize the letterforms
early, then use the pre-rasterized letterforms over and over. PostScript is in
kind of a weird gray area, but I'm reasonably sure that typical PostScript
implementations do that most of the time as well, although you _can_ rotate
text and blow it up to huge sizes, which necessarily requires re-rasterization
at some point, and of course, ever since Type 2, you have "charpath".

> Surfaces like Catmull-Clark and NURBS allow for arbitrarily connected meshes
> to empower artists, but this can lead to polynomials greater than the fourth
> degree, which tend to have no closed-form solution

This is not, as far as I know, wrong, but is misleading; if you are computing
quartic parametric surface patches, you are not going to do it by using
Ferrari's closed-form solution to quartic equations. You are going to do it by
subdividing triangles, just as you do for higher-order equations, because
tabulating the (x,y,z) values for a lattice of (u,v) values is pretty simple
actually.
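Tabulating a patch really is that simple; here's a sketch for a bicubic Bézier patch in Python (a stand-in for the quartic case, same idea; the names are mine):

```python
def bezier3_1d(c0, c1, c2, c3, t):
    """Cubic Bernstein combination of four scalar control values."""
    u = 1.0 - t
    return u**3 * c0 + 3 * u**2 * t * c1 + 3 * u * t**2 * c2 + t**3 * c3

def patch_point(ctrl, u, v):
    """Point on a bicubic Bezier patch; ctrl is a 4x4 grid of (x, y, z)."""
    # Evaluate each row's curve at u, then the resulting column of 4 points at v.
    rows = [tuple(bezier3_1d(r[0][i], r[1][i], r[2][i], r[3][i], u)
                  for i in range(3)) for r in ctrl]
    return tuple(bezier3_1d(rows[0][i], rows[1][i], rows[2][i], rows[3][i], v)
                 for i in range(3))

def tabulate(ctrl, n=8):
    """(n+1) x (n+1) lattice of surface points, ready for triangulation."""
    return [[patch_point(ctrl, i / n, j / n) for j in range(n + 1)]
            for i in range(n + 1)]
```

No root-finding anywhere: every lattice point is a straight polynomial evaluation, and adjacent lattice points pair up into triangles.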

> The difficulty, of course, is that only a legitimate genius like Quílez can
> create Surfer Boy. There’s no existing tooling for signed-distance field
> geometry, it’s all code

This is not even wrong; it's incoherent. ImplicitCAD is "all code", sure, but
does that make it "no existing tooling"? And, not to bag on IQ here, who is
legitimately super awesome, but SDF raymarching is not actually that hard to
implement. You don't have to be a genius. You probably shouldn't take the word
of someone who confuses CSG and implicit modeling about it, either.

Also, it seems a bit amiss to bring up IQ and Shadertoy in the same paragraph,
but not mention that IQ wrote Shadertoy! (Though of course it's a team effort
now.)

Anyway, I thought it was a really interesting article, and it introduced me to
a bunch of research I hadn't seen. So it was worth reading. Just be careful
about the errors!

------
bsenftner
Revision of history, wrong, and insultingly so. This post is a rewrite of
serious graphics history. Read Foley/van Dam, forget this tripe.

~~~
Jasper_
Parts of this post were based on the CACM'84 interview with van Dam. I cannot
cover every single small development along the way. I had a paragraph on
pre-Porter-Duff 2D image models like QuickDraw, X11, and yes, Foley/van Dam's
GKS, but after a lot of hesitation I cut it for brevity. There are too many
contributions to list; hopefully I can chronicle more in a future post.

The story I wanted to tell was to cover the large, broad developments in the
industry that caused our solidification of the respective image models in both
2D and 3D. Perhaps I omitted a bit too much, and I'm certainly framing it with
modern eyes, but I do not believe any of this to be revisionist.

